Neppit

/prog/

Programming and DIY

Talk about computer programming and DIY projects






Anonymous 2017-10-13 15:57:42 No. 4

How do you interact with your computer?

I'm interested in how developers work with computers.

As for me, I've been feeling really dissatisfied with my current setup. I feel like bash and friends are a really poor way to interact with a computer. I've been looking into using Scheme as a shell and it feels promising. If it had tab completion for file names I would be using it now.

I also really dislike that Vim is a TUI application. I think that a TUI limits what can be done with an application, although Vim+tmux is quite powerful. I haven't found a really good GUI editor with a CLI yet.

In terms of hardware, I'm pretty interested in split keyboards. The Infinity Ergodox looks solid, so I'm planning to buy one when I can afford it. I think that programmable keyboards are a must now. Being unable to modify my keyboard's firmware has been annoying recently.


Anonymous 2017-10-13 15:57:51 No. 5

I have a trackball mouse, and a while ago I experimented with controlling it using my foot. Surprisingly this worked very well, although things like highlighting text were impossible. I think that if you set two buttons on your keyboard to left/right click it would work quite well.

Ideally I would probably have another mouse on my desk for when I'm relaxing.


Anonymous 2017-10-14 23:35:06 No. 7

I've been interested in how VR/AR might be used for software development. I think that in the short term, VR would be useful when traveling. Wearing something like an Oculus Rift while flying could let you have a "multi-monitor" setup. The downside of this is that you aren't able to see the keyboard and mouse. However, it does block out any distractions around you, which is a real benefit in itself.

I think that as AR becomes more powerful, it will be preferable to VR. Being able to see the keyboard/mouse is useful, and AR is probably better for your eyes as well, since you can easily focus on objects in the distance.

I'm uncertain as to how useful VR/AR will be without a BCI. Full-dive VR will almost certainly be mediocre without a BCI, but I'm uncertain as to how useful a Full-dive environment would be for development.

AR with a BCI probably would not provide much in the way of extra functionality over current solutions, but it would almost certainly provide a superior experience. I imagine that it would be possible to place menus around yourself that can be manipulated by dragging them with your hand. This would be useful in a development environment in much the same way that multiple monitors are useful. I can imagine dev environments that provide a lot of information about running applications in an unobtrusive way. Looking at Lisp environments makes me think of some interesting possibilities, although I find it difficult to articulate them.

I'm curious what the general societal response to a BCI will be. I would expect the response to be negative, but the way that people have embraced smartphones and social media makes me think otherwise. I find it interesting that the response to Google Glass was largely negative. I understand the suspicion around having a camera potentially always on, but I don't see how it differs from everyone carrying a smartphone in their pocket.

I think that a BCI has serious privacy/health implications. Ensuring that such a device is safe seems difficult. Personally, I would not be comfortable with corporations having a direct interface to my brain, but I'm not immediately against a BCI with open firmware. I don't know enough neuroscience to consider possible health problems.

Using the IoT as an example, having a BCI open to the network is terrifying. It might be possible to have such a device be connected to a computer, which could allow network information to arrive on the device in a safe manner with proper sandboxing. Strong capabilities wrt what information can travel to and from the device are a must.


Anonymous 2017-10-15 18:14:52 No. 9

>>4
I wanted to expand a bit on what I said wrt Vim. What I said was part of some bigger thoughts on editors. I feel that modal editing is vastly superior to anything else that I've seen. When writing English it doesn't matter so much as writing is largely linear, but when programming or writing something like LaTeX, you end up jumping around a lot, and you might want to make the same change in many places at once.

I've had the thought that Vi's modal editing might be the wrong way to do it. There's an editor called Kakoune[1] that does modal editing differently. In Kakoune the language is more like object-verb as opposed to Vim's verb-object, i.e. in Kakoune you select the text you want to modify and then dictate what you want to do with it (Vim's `dw` becomes `wd`: select the word, then delete it). I haven't really used Kakoune for anything, but I think it's worth exploring different ways to do modal editing.

People like to talk about how great Emacs is, but I feel like elisp is a mistake. When you look at the things that people do with it, it feels like a really poor language. I think the better solution is to write all of the editor functionality in a language suited for writing large systems, like Rust, and then include a CLI in the application that uses something like Scheme. You can then provide an API between the editor and the CLI language that lets you do some manipulations, while major things would require you to dive into the underlying system.

I feel that this sort of stratification encourages people to jump into the underlying language when they want to add features, but when they want to do something programmatically they can just use the CLI. I think that it would provide the best of both worlds.

One thing that I've never seen in an editor that I would really like is a paint mode. When I'm designing a data structure, I like to draw how it works, or I sometimes want to write things down on scratch paper to help me think about them. Being able to turn on an overlay in my editor that acts like a very basic paint application would be really useful to me.

[1] https://github.com/mawww/kakoune


Anonymous 2017-10-21 23:11:29 No. 11

I've been thinking about mobile devices recently, i.e. smartphones and tablets. I feel like Android and iOS are completely useless for development. The smartphone form factor feels way too small to do any serious work on. I think that 10"-12" tablets are a reasonable size to work with. Onscreen keyboards are absolutely abysmal for typing on, and writing software with them is even more hellish.

Using a normal keyboard with a tablet could work quite well, but I think the operating systems are quite limiting. There's no reason why you couldn't just run Linux or whatever on them. I know that on some tablets it isn't too difficult to install Linux, so the major limiting factor seems to be hardware support. You don't really have ports on tablets unless you use something like USB OTG. Bluetooth also works, but it requires special hardware.

USB-C might make this type of experience better, but I've yet to hear much about this. I've thought that using a tablet as a compute device that you carry around with you and then dock when you start working could be really nice. I really like the idea of the Microsoft Surface Pro, although I think that it was poorly executed.

I've been trying to think about how an operating system might best take advantage of a tablet while supporting use as a full-blown computer, but I haven't had much luck. Most of my thoughts at the moment are about ensuring that normal applications work well with a touchscreen to support the tablet use case.

I'm really interested in use cases for the smartphone form factor. I don't use my smartphone except for making phone calls and occasionally checking email. I have been told that using smartphones as cameras and GPS units is useful, and I think that many people use them as mp3 players. A smartphone as a compute device that can be docked would be really interesting. Maybe pairing this with ideas from Plan9 and Inferno has promise? I think that developers face a lot of annoyance maintaining their dev environment across multiple machines.


Anonymous 2017-10-22 06:58:19 No. 15

I've been looking at using Scheme as a language for interacting with the computer, like how most people use bash. One major problem with this is composability. Scheme requires you to understand how information is going to flow before you start writing, otherwise you end up having to jump back to the beginning of the call.

Bash handles this really well, allowing you to pipe data from one program to another. However, Bash can only pass around plain text, which creates a lot of problems.

In Haskell you have the & operator, which acts very similarly to bash's pipe, e.g. `[3,4,5] & map (+1) => [4,5,6]`. I feel that this piping behaviour is really important for programming; when writing software the lack of it isn't an issue because we write software in a non-linear fashion[1]. This, combined with Haskell's structured data, makes Haskell seem like it could be really nice as a programming language. In the past my thought has been that Haskell's type system and difficulty with IO would get in the way too much. While these guarantees are nice when writing software, they tend to get in the way when programming. Take opening a file: I don't care about error handling. If the file fails to open for any reason, I should just get an error message.

Thinking about it a little bit, Haskell's type inference might negate any barriers it could create. It also may be possible to write an IO wrapper that handles errors in a desired way. I might look into Haskell more for programming. I'm not sure how powerful Haskell's metaprogramming facilities are though, which I'm thinking is important for programming.

The issue with Scheme is that I'm not sure how to achieve this piping functionality without adding a special form for it. Maybe you could wrap your calls in some function? Something like this:
```
(define (chain . exprs)
  ;; exprs is (value f1 f2 ...): apply f1 to value, then recurse on
  ;; the result with the remaining procedures.
  (if (null? (cdr exprs))
      (car exprs)
      (apply chain (cons ((cadr exprs) (car exprs))
                         (cddr exprs)))))

(chain '(3 4 5) (lambda (x) (map 1+ x))) ;; => '(4 5 6)
```
This works pretty well, although it does demonstrate the annoyance caused by the lack of implicit currying. If you wanted to repeatedly map over this list you would have to write a lambda for each step, which quickly grows tedious. This might be a contrived situation though, so in practice this may work really well.

I think that the chain procedure provides a decent solution to my original problem. I feel that when programming you generally know when you are going to want to pipe data through multiple procedures, although you may not know how many chains you want to make. Because of this, having to add the call to chain at the beginning shouldn't be a problem.

I think that some REPLs will assign values to temp variables. What I mean by this is that running `(+ 1 2 3)` at the REPL would print 6, but it would also assign 6 to the variable `$1`. I think that Guile does this although I haven't checked. Although it might not be as good as piping on its own, this additional functionality might also be really important for programming.
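
For illustration, here's roughly what that looks like in Guile (transcript from memory, so treat it as an approximation):

```
scheme@(guile-user)> (+ 1 2 3)
$1 = 6
scheme@(guile-user)> (* $1 2)
$2 = 12
```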


[1] I'm not sure that this is the correct way to express what I mean here. Why is software written differently than we program? Is it better that we write it differently? What advantages and disadvantages are there to each approach?


Anonymous 2017-10-25 06:22:21 No. 17

An interesting thread about desk setups.
https://news.ycombinator.com/item?id=15543617


Anonymous 2017-10-31 07:37:54 No. 20

So I don't think that I really talked about it before, but I feel that Bash's raison d'être no longer exists. Although I haven't been able to find the motivation for the creation of the Thompson Shell[1], I believe it was a way to execute programs. I'm not aware of another way to execute programs on UNIX short of making the syscalls yourself (which would require executing a compiler). I would be interested in hearing if this is incorrect.

Today, virtually all desktop computers allow execution of programs via the desktop environment. The most prominent example is probably the Windows start menu. Today scripting languages like Python, Ruby, etc. are commonplace. Most scripting languages provide facilities for running programs like Bash does, but these scripting languages have great advantages. Most notably, they typically have an import system and the full power of a typical programming language. Have you ever tried adding two numbers in Bash? Between `expr 1 + 2` and `echo $((1 + 2))`, it's not pretty.

I think that language import systems, coupled with FFI, obsolete the need to run programs the way you run them in Bash, things like `cat file | grep TODO`. I don't mean that you should run your browser by importing a library or whatever. And of course you still need to execute an interpreter.

Basically I feel that the Bash model is antiquated, and a more complete model will provide far more power. I do think that a Bash replacement should be capable of running programs; this is certainly useful, in the short term especially. However I don't feel that it should be first class as in Bash. Calling a procedure like `(run "firefox")` should be good enough, as sketched below.
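
As a minimal sketch of that, assuming a fork/exec binding like Guile's `system*` (the `run` wrapper itself is hypothetical):

```
;; Run an external program, blocking until it exits and returning
;; its raw exit status. system* is Guile's fork/exec wrapper; any
;; FFI binding to fork/exec or posix_spawn would work the same way.
(define (run program . args)
  (apply system* program args))

(run "firefox")
```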

[1] The Thompson Shell is the original sh.


Anonymous 2017-11-01 00:35:16 No. 21

I've been focusing on identifying the key features of a command language. I've talked about some of them before, and I plan to expand on the others over the next few days.

# Important
- Postfix function composition
- Run procedures in the background
- Easy FFI
- Dynamic type system
- Easy interaction with filesystem
- Powerful REPL
- Language power

# Less Important
- Speed
- Executing binaries

# Uncertain
- Laziness
- Debugging


Anonymous 2017-11-03 06:05:11 No. 22

andreareina on HN told me about Clojure's `->>` macro, which lets me do something like `(->> '(3 4 5) (map 1+) (reduce +))`. Racket has something similar with `~>`. You can also use an `_` to specify which position the argument should be placed in. This basically removes my last complaint about Scheme compared to other languages.

This has shown me that I really need to take the time to learn Racket's macro system, since a trivial macro solves my biggest complaint.
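
For reference, a minimal `->>` can be written in portable syntax-rules; this sketch ignores the `_` placement and bare procedure names that the real macros support (`1+` and `fold-left` here are Chez/R6RS):

```
;; Thread the value through each form as the last argument:
;; (->> x (f a) (g b)) expands to (g b (f a x)).
(define-syntax ->>
  (syntax-rules ()
    ((_ x) x)
    ((_ x (f args ...) rest ...)
     (->> (f args ... x) rest ...))))

(->> '(3 4 5) (map 1+) (fold-left + 0)) ;; => 12
```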


Anonymous 2017-11-03 12:59:37 No. 24

>>21
>Run procedures in the background

I think this is important for executing long-running jobs, something like compiling software, while being able to work on other things. I think for this to be usable, there must be an interface to view the status of running jobs and to bring them to the forefront. With bash you can run a program in the background with the `&` operator and manage it with `jobs`/`fg`, but I find that interface clumsy enough that I always end up opening a new terminal to run the job instead.

I'm not sure how to run things in the background. One option is probably to run jobs asynchronously. Running jobs as separate threads might work as well. I think that running them as separate processes would not work, as we probably need access to the current environment.

I'm not really sure how running in the background and resuming would work. I suppose you would have some way to view all running jobs and jobs that were running in the background but have completed. Running in the background seems kind of hard. If we make this implicit then we flood the job viewer with short-lived jobs.

Making it explicit as in bash is probably the best option, but there should also be a way to place the currently running job in the background; otherwise you might start compiling something and then decide that you want to work on other things while you wait. (Bash handles this with Ctrl-Z to suspend the foreground job and `bg` to resume it in the background.)

This additional functionality seems difficult to add. I don't think that this could be done with threads alone. Maybe you could pause the execution, spawn a new thread, and then copy the job's data over. Running everything async by default might allow us to implement this. I don't think I know enough about this area to think more about how to do it now.

Conceptually, this is similar to running processes under screen or tmux. A toy sketch of the interface is below.
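
To make the idea concrete (all names invented; assumes SRFI-18 threads and R6RS records):

```
;; bg runs an expression on a new thread and records it in a job
;; table; jobs lists everything started this way; fg "foregrounds"
;; a job by joining its thread and returning its result.
(define-record-type job (fields name thread))

(define job-table '())

(define-syntax bg
  (syntax-rules ()
    ((_ expr)
     (let ((j (make-job 'expr
                        (thread-start! (make-thread (lambda () expr))))))
       (set! job-table (cons j job-table))
       j))))

(define (jobs) (map job-name job-table))
(define (fg j) (thread-join! (job-thread j)))

;; e.g. (define j (bg (compile-everything))) ... work ... (fg j)
```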


Anonymous 2017-11-05 09:32:40 No. 26

In the modern day, a lot of computer work involves interacting with a web browser. Interacting with the browser is unfortunately rather painful.

For example, let's say that I'm on a webpage with a bunch of images, all of which have the class `item`, and I would like to download all of them. What I can do is open up the JS console and call methods on the DOM to select all elements with the `item` class, filter this list down to images, and then take the URL of each image to get a list of URLs (something like `[...document.querySelectorAll("img.item")].map(img => img.src)`).

Unfortunately I now have to copy this list into something like the shell in order to download them, since the browser allows limited interaction with the underlying system.

I heard about headless Firefox recently, and thought that it might allow me to pull info from the browser. It turns out that this is actually possible using the WebDriver API[1]. This API seems to be designed for testing web applications, but I think that it will be possible to use it to interact with the DOM.

My initial thoughts are that I will implement the WebDriver API in my command language. I'm not too sure how it works yet, but maybe it uses JS? If that's the case, then ideally I would make it possible to compile the command language to JS for a unified experience.
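
As a very rough sketch of the shape this might take (the endpoints come from the spec; `http-post-json` is a hypothetical helper that sends a JSON body and returns the parsed response):

```
;; Talk to a local WebDriver server, e.g. geckodriver driving
;; headless Firefox. http-post-json is invented for illustration.
(define base "http://localhost:4444")

(define (new-session)
  (http-post-json (string-append base "/session")
                  '((capabilities . ()))))

(define (find-elements session-id css)
  (http-post-json
   (string-append base "/session/" session-id "/elements")
   `((using . "css selector") (value . ,css))))
```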

Obviously more research is required before I can say any more, but I'm pretty excited about being able to do this. I believe that a unified experience is necessary to have as much power as possible.

[1] https://w3c.github.io/webdriver/webdriver-spec.html#protocol


Anonymous 2017-11-06 19:43:08 No. 27

>>21
>Speed

Speed is less important for a command language because most programs written with it will only be executed once. Things that are executed many times or that require a lot of processing should be written in a language suited for developing software and accessed via FFI.

This doesn't mean that a command language shouldn't be fast. It's just not as important. An optimized interpreter would probably be fast enough, although a JIT compiler would probably be best.

Startup time might be important, but I'm uncertain. With terminals, it is common to spawn a new shell. This workflow might be unnecessary with good job control.


Anonymous 2017-11-15 19:01:05 No. 29

So I've felt like my thoughts are coherent enough that I've been focusing on implementation recently. I was able to get a basic interpreter going yesterday, although it doesn't allow `set!` right now. I was thinking about not adding it, but I think I will need to add the support under the hood to make improvements to the interpreter, and at that point I may as well expose it to improve compliance with the standard. I'm still thinking about lazy evaluation; I probably won't know until I actually test things out. I'm also not sure how that would impact standard compliance, as I haven't had a chance to read the standards.

>>21
>Language power

When I said language power, I was specifically thinking of macros. As an example of this, I'll compare Scheme to JavaScript. They're pretty similar languages: both support first-class functions, are dynamically typed, etc. One of the key differences is that Scheme supports a macro system, while JavaScript does not. An example of why this matters is the pipeline operator. In Scheme, this can be implemented as user code. In JavaScript, you would have to modify the interpreter to support the new operator, which requires going through the standards process (there is currently a proposal for this).

Allowing the user to make complex additions to the language like this is very powerful, and can allow the user to change the language to fit their needs.

I also think that dynamic typing adds to the power of a command language, but I'll expand on that when I talk about why dynamic typing is important.


Anonymous 2018-05-07 18:42:56 No. 44

Something interesting that I noticed recently is that the UNIX Philosophy is not a fully general concept. For those who don't know, the UNIX Philosophy can be boiled down to "Do one thing and do it well". This is really a less general form of DRY - Don't Repeat Yourself. What UNIX has done is restrict the power of DRY by limiting itself to unstructured data.


Anonymous 2018-07-27 07:10:24 No. 46

I've been thinking about the filesystem abstraction recently. About the only thing that the filesystem has going for it is simplicity. The POSIX interface has major issues for developers, which the SQLite developers have written about before. The interface for end users is also deficient; the biggest problem is the inability to search the system. I feel that when the interface was first developed, hard drives were small enough that it was easy to keep track of what files you had. Today, when multi-terabyte hard drives are commonplace, it's no longer easy to remember everything.

If we look at the Internet, no one is expected to remember where things are there. Search engines for the Internet see ubiquitous use, but even these engines are a mere shadow of what they might be. Of course with a search engine like Google it's understandable that they've neutered its power, as useful search requires more computational resources. But even compared to a decade ago Google's search has become less powerful, mainly due to the push for NLP.

For an improved filesystem abstraction, I believe that search must be an integral part. I think that one of the best ways to provide a powerful interface is through the use of metadata. Organizations like the American NSA have shown the world just how useful metadata is, but we haven't really seen it much in general computing. One place where it has seen use is with mp3s. Although the search interfaces I've seen for mp3s aren't very powerful, they still provide a good deal of utility.

My thoughts right now would make the filesystem closer to a database than to traditional filesystems. This lets us do some interesting things. As an example, let's say that I want to store all of the books that I own in the filesystem. All of the books would have metadata associated with them: author, title, ISBN, etc. My physical books might have a field for who they're loaned to, while PDFs might have a content field that contains the PDF itself. This doesn't work so well with a traditional filesystem. A physical book might be pure metadata, so we would need to store it in a file of some sort, but what would we even call the file?

You might say that we could store the metadata in Postgres, and that's true, but then we have two interfaces for the same thing, and it just feels messy.

I'm not sure yet about how some things would work. Having directories form a hierarchy is a very useful abstraction, but it may not be necessary with good metadata. We might have a generic `tags` field that could store the same information. Another thing I'm uncertain about is general files.

As an example, let's think about a programming project. We have various source files, but what do we tag them with? I suppose things like filetype and what project it is a part of. Maybe we could have a `project` metadata piece that has various `file` subcomponents. I guess then we might have a `build` association that stores compilation data.

These are just some rough thoughts on how this might look. One thing to note is that a SQL interface probably won't be possible. I don't think that SQL handles adding new fields very well, especially if only some objects have the field. This system will probably be closer to NoSQL databases. Something to look into might be graph databases; these would let us represent directory hierarchies in a natural way. As for a search interface, it should be possible to provide many interfaces depending on the complexity of the search. A simple one might be a keyword search with AND, OR, NOT, etc. combinators. Being able to use things like fuzzy search and regexes adds a bit more power. On the high end we might have a Prolog interface. SICP talks about Prolog a bit with the example of interacting with a database, and I've been interested in it since.
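
To sketch the simple end of that search interface, here's a toy representation of pure-metadata records with AND/OR/NOT combinators (the record layout and names are invented; `filter`, `for-all`, and `exists` are R6RS):

```
(define books
  '(((title . "SICP") (author . "Abelson") (tags . (lisp textbook)))
    ((title . "TAPL") (author . "Pierce") (tags . (types textbook)))))

(define (has-tag tag)
  (lambda (rec) (memq tag (cdr (assq 'tags rec)))))

(define (all-of . preds)   ;; AND
  (lambda (rec) (for-all (lambda (p) (p rec)) preds)))

(define (any-of . preds)   ;; OR
  (lambda (rec) (exists (lambda (p) (p rec)) preds)))

(define (none-of pred)     ;; NOT
  (lambda (rec) (not (pred rec))))

(filter (all-of (has-tag 'textbook) (none-of (has-tag 'types))) books)
;; => just the SICP record
```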


Anonymous 2018-08-19 03:00:54 No. 47

>>46
I've given more thought to this. I've started reading the book `The Dream Machine`, and it talks about a hypothetical device called the Memex. Something interesting about it is the concept of associations. In the outline of the Memex, they say that when humans remember something it is easy to recall related memories, so the Memex encodes this with the concept of `association`. That is, the Memex allows you to associate a number of related documents together. This happens on the Internet with hyperlinks, and my understanding is that Google's PageRank makes use of this. I think that this is useful for our improved filesystem abstraction. Current filesystems have inodes, which are unique identifiers. SQL databases have unique identifiers as well, although SQL's are per table rather than per system.

In our filesystem we should retain the concept of inodes. In this way, records can have associations, which might just be lists of inodes representing other records.

I've thought a bit more about directories, and I think the concept is important. Databases have a very similar concept in tables. One of the underlying components of both concepts is grouping together similar data, so I might have a `books` table or a `books` directory. Some databases such as bolt[1] allow tables to contain other tables, which is basically what the filesystem does. I am thinking that we want to do the same thing. Keeping them separate in the way that SQL does will make references between them harder. As an example, if I have a `books` table and a `movies` table, I might have records for the Ender's Game book and movie, and I might create an association between them. This is a good example of where global ids are important; otherwise I also need to specify the table.
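
As data, the idea might be as simple as this (representation invented):

```
;; Two records from different groupings, linked by global ids:
'((id . 42) (kind . book)  (title . "Ender's Game") (associations . (43)))
'((id . 43) (kind . movie) (title . "Ender's Game") (associations . (42)))
```

[1] https://github.com/boltdb/bolt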


Anonymous 2018-10-09 00:12:58 No. 49

I've been thinking a bit about how to write a GUI library. Specifically, I've been focusing on how interactions between digital objects work. I've been thinking in the context of an RPG like Skyrim as I think it makes things easier to reason about, but I think any results will be more generally applicable.

Let us think about what happens when a person pushes against a table. This is an interaction between two or more objects (more on this later). The first thing to notice is that a bi-directional transfer of information is occurring. Keeping it simple, my arm is applying a force to the table, and the table needs to know what force is being applied to it. Similarly, the table is applying a force to my arm. Thus there are two pieces of information in this system, two forces, which both objects must be made aware of. Each object can then calculate the outcome of this interaction. Thinking about it now, this seems to create a redundant calculation, so it may be possible for one object to perform the calculation and simply inform the other object of the result. This is a rather simple situation, so I'm hesitant to assert that this holds generally.

The second thing that we should notice is that which object initiates the interaction depends on your point of view, and our system must be independent of point of view. That is to say, it must not matter whether the table initiates the interaction or my arm does; the result must be the same. I think that if it were dependent, it would be difficult to structure the simulation.
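
A small sketch of what point-of-view independence could mean in code (`contact-forces` and `apply-force!` are hypothetical): the forces are computed once, outside either object, so `(interact table arm)` and `(interact arm table)` give the same result as long as `contact-forces` is symmetric.

```
(define (interact a b)
  ;; Compute both forces in one place, then inform both objects;
  ;; neither object "initiates" anything.
  (let-values (((force-on-a force-on-b) (contact-forces a b)))
    (apply-force! a force-on-a)
    (apply-force! b force-on-b)))
```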

Returning to the "two or more objects" comment, some objects are best represented as a collection of objects. A person is a good example of this. I understand that video games often do this with the concept of bones for 3D models. This is important because I believe that it greatly simplifies some interactions. As an example, consider an interaction where a person pushes on their own shoulder. If a person is considered as a single object, how do we model this interaction? I think that it becomes quite difficult. If instead a person is a collection of objects, the shoulder might be part of the torso object, so this is an interaction between the arm object and the torso object.

Another reason that these collections are important is for the associations that they create. As an example, consider the interaction where something hits a person's chest sending them flying. The naive view is to see this as an interaction between a blunt object and a torso object. But if we view it this way, we might send the torso flying, only to leave the arms and legs where they were. With a collection, the torso can know that it needs to inform the rest of the body of what has happened. How exactly this would work in an implementation is uncertain, and probably would be different from how I've described it. Nonetheless, I think that the general collection concept must remain.

This post is getting pretty long, but this is the current state of my thoughts. I looked a little bit at the Actor model, and I plan to look at Smalltalk a bit more. I'll write a followup to this later discussing the similarities and differences between these systems and whether they might fit my perception or not.


Anonymous 2018-10-09 22:20:09 No. 50

>>47
I thought a bit more about filesystems the other day. I analogized the filesystem to a database a little bit, but I want to do so again. First off, one way to think of existing filesystems is a tree data structure. Another way to think of it is as a key-value store, with paths as keys and file contents as values. The filesystem has a convenient way to access groupings of keys in the form of directories.

There are a number of situations that filesystems don't handle well. I already wrote about the situation where you want to store objects that are pure metadata. Another situation that isn't well supported is storing objects in multiple groups. As an example, let's look at my music collection. I have a Music folder which has folders A-Z, and each of these folders has folders for artists. The artist folders contain one or more folders for albums by that artist. This means that finding an album by a specific artist is very fast, so as long as you know all of the artists in your collection this works well. But my music collection has grown, and I can't remember everything in it anymore. What I would like to do is keep my current structure, as it's quite convenient, but also group albums by genre.

Doing this is quite difficult. One option is to have multiple copies of the data, but this has numerous obvious problems like increased disk usage. Filesystems have the concept of symlinks, but in my experience they are difficult to work with. What I propose is to change the filesystem's tree structure to a graph structure. In this way an object might have multiple parent directories.

Now this creates some difficulties, namely: how do we interact with the filesystem now? On normal filesystems, directories contain a parent pointer (the `..` entry). With multiple parents, how does this work? We might have a default parent which can be set and maps to `..`. We might list all parents with a `..` prefix, e.g. `..A` and `..Electronic`. This leads to another question: is this a directed or undirected graph? I think that it must be an undirected graph, but I'm not really certain.
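
Concretely, a node in the graph might carry a list of parents instead of a single implicit one (representation invented):

```
;; An album reachable from both the A/ and Electronic/ groupings.
;; default-parent is what a bare .. would map to.
'((name . "Selected Ambient Works")
  (parents . ("Music/A/Aphex Twin" "Music/Genre/Electronic"))
  (default-parent . "Music/A/Aphex Twin"))
```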

Another question is whether we have a root node. With good tooling we might not need a root node, a bit like how the Internet works. It also might be good to have one for booting an OS. I think it would require more concrete plans to be certain.

While the graph structure poses some problems, I don't think that they are insurmountable. I feel that the benefits a graph structure provides over a tree structure will outweigh any difficulties it may create. Further, I think that with proper tooling these difficulties can be turned into advantages (see the search interface I discuss in >>46).

One thing that is worth pointing out: this filesystem will almost certainly require a new operating system in order to be used. I think that the POSIX interface is too simple to easily support the interactions I've outlined so far. Working with POSIX at all seems difficult; I'm not completely sure how I'd do it, but it may become clear with more effort.


Anonymous 2018-10-13 03:14:35 No. 51

>>49
I thought a little bit more about this. While I said before that the system must be independent of point of view, I'm no longer certain that this is the case. For example, let's look at procedure calls. If I call procedure `foo`, there is a bidirectional transfer of information, i.e. foo receives procedure arguments and the caller receives return values. It's normal to think of this interaction from the point of view of the caller initiating, but does it make sense to think of foo as the initiator? I suppose that we might think of it as foo requesting procedure arguments from the caller. In this way it's a bit like a program waiting for user input. I guess that this view of the interaction makes sense, so procedure calls should fit into our model.


Anonymous 2019-01-14 07:50:03 No. 55

Recently I found that Chez Scheme supports tab-completion of file names in quotes. This was the main thing preventing me from testing my idea of using Scheme as a shell, so I spent some time writing procedures for common interactions and tried using the system.

First off, paths are a pain to work with. Generally you only want part of a path to be displayed; e.g. if I execute `(ls "~")` I do not want the resulting list of paths to be displayed with the "/home/hunter" prefix. However, in order to work with paths independent of the current working directory we need the full path. I ended up just putting up with the unnecessary prefixes, but it is definitely a sub-optimal solution.
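
One possible compromise is to keep absolute paths internally but shorten them for display (a sketch; assumes SRFI-13's `string-prefix?` and a `getenv` binding):

```
(define (display-path p)
  ;; Print "~/..." instead of "/home/hunter/..." while the full
  ;; absolute path stays intact underneath.
  (let ((home (getenv "HOME")))
    (if (and home (string-prefix? home p))
        (string-append "~" (substring p (string-length home)
                                      (string-length p)))
        p)))
```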

Another issue is tab-completion. I thought that tab-completion of procedure names and paths would be sufficient, but I've found that this is not the case. I really wanted more advanced information, like how many arguments a procedure requires and what types they should be. I kept mixing up the order of arguments to procedures; e.g. does string-contains take the pattern first or the search-string?

There are a few other things that I'm on the fence about, but overall I'm no longer certain that Scheme will be sufficient. I think some of my issues could be solved with a good REPL, but some of the solutions feel messy. At any rate, I've lost my faith in contemporary personal computing systems. I've found them to be grossly inadequate for accomplishing tasks; they merely apply a layer of chrome over preexisting methodology.

Rather unfortunately, the Rust programming language is moving in a direction that I don't agree with (focusing on webshit). It makes sense given that a lot of the developers come from a Ruby/JS background, but I find it disappointing. I'm considering making my own language with blackjack and dependent types, but this is quite an undertaking. I'm reading various books on compilers and type theory now, which I hope to finish in the next few months. At that point I'll reevaluate where things are and decide where to go.


Anonymous 2019-01-16 07:32:41 No. 56

"Do you want to use a machine, or do you want the machine to use you?"
I heard this quote recently, and I feel that it describes my design philosophy quite well. I'm not sure that I can explain it well, but I think that web browsers are a good example of "the machine using you". I think they were okay during the Web 1.0 era, where you largely had static documents (were forums part of 1.0? I think those are fine too). There are some issues with easily manipulating the data presented in these documents, but this is partially the fault of the presenter as well, as they might provide an API allowing access to the same information.

Contemporary websites are more often than not presented as applications to the user, even pages as simple as this one. Web applications are inscrutable and really obfuscate information access/manipulation. Unfortunately this will only get worse, as it seems that WebAssembly will encourage developers to render the web page within a WebGL context, making it completely opaque.

This really reflects the entire desktop ecosystem. The desktop ecosystem consists of many applications which cannot reasonably talk to each other. I believe that Plan9 dealt with this a bit with Plumber, although I have not thoroughly investigated it. This is one of the good things about the UNIX toolbox: applications written in its style can easily[1] communicate with each other. One trend for UNIX applications is to create GUI applications which wrap a CLI program. Unfortunately this loses the advantage that CLI applications provide, and from a user perspective it is no different from other desktop applications.

So what do we do? I'm not really sure. I've been thinking about an operating system for fulldive VR recently; some of my ideas for that might be interesting in the meanwhile, so I'll try writing them up later. Anyway, it seems to me that the crux of the problem lies in the separation of data from the ways to manipulate it. Maybe this means borrowing some ideas from object-oriented programming? I'm not sure that I really understand what a digital object is.

One viewpoint that I've been drawn to recently is that "it's all just bytes", but I feel that this view falls short in some ways. As humans we apply structure to data in order to convey relationships between disparate chunks of data. The computer might not care about this, but it is very important for helping the human user make sense of what's going on.

Recently I was reading "On the Existence of Digital Objects" by Yuk Hui which discusses some of these issues. I've only read the first four chapters, which mostly cover things like XML and some background on how Philosophy handles objects and ontologies. Reading it led to some interesting thoughts about objects and ontologies, so I intend to finish it. Anyway, I think that my favored view of "just bytes" isn't useful for thinking about how to better use a computer (although it is quite useful for developing software).

[1] There are of course issues with the lack of structure that I've previously written about here.


Anonymous 2019-02-04 07:52:17 No. 57

So I thought I'd write down my thoughts about operating systems, with a focus on operating systems for fulldive VR. They may also be applicable to AR and contemporary modes of human-computer interaction.

While computers provide many advantages over traditional means of storing information, they lose out on spatial information. We should allow associations between filesystem objects and objects that can be placed in the virtual world; as an example, we might organize files in a bookshelf or a datacube. Another example of this is object metadata. For a physical analogue, consider a bottle of soda. The bottle is an object, but it also has metadata associated with it such as its capacity, nutritional information, etc. In the physical world we print this information on a label which is placed on the object, but in the virtual world there is no need for the information to live on the object; we can instead have a way to display the metadata associated with an object. I think this has interesting psychological effects, as it makes the object itself the only thing to focus on.

I also thought that you might have hybrid applications. Consider a 3D modeling program: you might have a traditional interface but allow the user to project their work in front of them and actually move around it. Or you might do away with the traditional interface and go for a more physical one, modeling objects as though they were clay. I think that there exist VR programs for painting which are analogous to this, although I think this is less useful.

Anyway, just some initial thoughts. As fulldive VR isn't on the radar at the moment, it might be worth experimenting with OS design by providing a video game interface. This would be a crude approximation but it may still be instructive. This also might make OS design easier as the "OS" might just be network based or even just an application on an existing system. I'll continue thinking about this. I'm not sure what programming might look like. DynamicLand may provide some good ideas here.


Anonymous 2019-03-12 19:47:11 No. 58

I thought a bit about operating systems over the last few days, specifically processes. We want to facilitate communication between applications, so I think that we might model processes as objects similar to Smalltalk objects. Processes would contain a list of messages which they accept, along with a standard `list-interface` message which returns the process's interface.
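
A toy sketch of the idea (everything here is invented): a process modeled as a closure that accepts messages, including the standard `list-interface` message.

```
(define (make-counter-process)
  (let ((count 0)
        (interface '(increment read list-interface)))
    (lambda (msg . args)
      (case msg
        ((list-interface) interface)          ;; the standard message
        ((increment) (set! count (+ count 1)) count)
        ((read) count)
        (else (error 'process "unrecognized message" msg))))))

(define p (make-counter-process))
(p 'list-interface) ;; => (increment read list-interface)
(p 'increment)      ;; => 1
```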

As an example, imagine that we have a CAD program. We might want to do things programmatically, like in OpenSCAD. The problem with OpenSCAD is that it restricts the user to a particular domain-specific language. If we design our CAD program with a powerful enough message interface, then we can use any language to programmatically control it without a loss in capability.

This leads to the idea of computer interface guidelines. The concept of human interface guidelines has been around for quite a while; providing a guide for developing software in such a way that humans can best utilize it. Computer interface guidelines would act as a guide for developing software in such a way that other software, and therefore humans, can best utilize it.

Just some opening thoughts. I'll continue to think about this.


Anonymous 2019-04-11 07:56:17 No. 59

>>50
So I realized a month or two ago that this system is pretty similar to how the Internet works, which is quite heartening. I view my proposed filesystem as the next step forward, essentially bringing the good parts of the web to your hard drive while removing the remaining cruft of the "filing cabinet" filesystem.

Even with very little metadata available, Internet search engines work pretty well[1]. I realized that one problem with relying on search is the lack of precision. As more records are added to a system, results for a particular search may change, making programmatic interaction difficult. On the web this is of course solved through domains. If I know the domain of a website I want to access, I can bypass the search engine entirely and navigate to it directly. Similarly, for our filesystem we might add domains: unique names given to particular nodes in the graph, allowing them to be accessed directly.

Domains would be useful for such things as configuration files for programs[2], or even for direct access to programs. We might have a `Music` domain which references all audio files on the system and can be used by a music player program. This does raise a question as to whether the filesystem's graph should be directed or undirected. I think that directed would provide more flexibility. Really, I suppose it depends on how the filesystem is incorporated with the rest of the userspace; if search is a basic feature then directed is probably fine, and this helps with some of the confusion around parent pointers. Interestingly, I wonder whether we might need a garbage collection feature. I suppose it depends on how the system is designed, whether we specify root nodes or not. We might have the GC list unreachable files, which the user then decides whether or not to delete.
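
A domain table might be nothing more than a mapping from unique names to node ids (sketch; names invented):

```
(define domains '((music . 1001) (firefox-config . 2002)))

(define (resolve-domain name)
  ;; Bypass search entirely, the way DNS bypasses a search engine.
  (cond ((assq name domains) => cdr)
        (else (error 'resolve-domain "unknown domain" name))))

(resolve-domain 'music) ;; => 1001
```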

[1] At least they used to. It's debatable how well they work now, but this seems to be due to algorithmic changes rather than technical impossibilities.

[2] If it even makes sense to retain the idea of configuration files. I'll have to think about this a bit. Terry's idea of the User-Developer is quite intriguing and I'd like to examine it more thoroughly.