Neppit

/prog/

Programming and DIY

Talk about computer programming and DIY projects


Anonymous 2017-10-13 15:56:24 No. 1

Compilers

Some resources for implementing compilers:

https://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-1-an-interpreter/


Anonymous 2017-10-13 15:56:34 No. 2

http://www.craftinginterpreters.com/


Anonymous 2017-10-14 14:53:48 No. 6

I've been thinking about compiler optimizations recently. Right now in Rust, you can write const fns, but you're extremely limited in what they can do; e.g. writing a const fibonacci function isn't possible. I don't think that rustc has a way to run arbitrary computations during compilation right now, but I imagine that this is planned. I could see them JITing MIR or emitting LLVM IR and JITing that.
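
To be concrete, this is the kind of function I mean; as far as I can tell the current const fn rules reject both the branch and the recursive call, so rustc would need a real compile-time evaluator to accept it:

```rust
// Hypothetical: rustc rejects this today because const fns can't
// branch or recurse, but this is exactly what compile-time
// evaluation would have to handle.
const fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

// If accepted, this would be computed entirely during compilation.
const FIB_10: u64 = fib(10);
```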

I think this type of thing is pretty straightforward. You examine the function to make sure that it's only calling other const functions and then you know that you have everything you need to do the computation at compile time.

More interesting is doing this for non-const functions. Say we have the same fibonacci function, but this time it isn't const, and we call fib(10) in main. We have a sort of constraint system here: we can determine that fib has everything it needs except n, so we can look at any call to fib and check whether n is provided.

In this case, 10 is a constant expression, so we can simply evaluate the call to fib. In a situation where we read user input and then call fib on it, the constraint for n is not fulfilled, so we can't evaluate the call at compile time.
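
Concretely, the two cases look like this, with fib as an ordinary non-const function:

```rust
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    // Every input is a constant expression, so the compiler could
    // fold this call down to 55.
    let a = fib(10);

    // The argument comes from user input, so the constraint on n is
    // unfulfilled and the call has to happen at runtime.
    let mut line = String::new();
    std::io::stdin().read_line(&mut line).unwrap();
    let n: u64 = line.trim().parse().unwrap();
    let b = fib(n);

    println!("{} {}", a, b);
}
```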

This might run into the problem of a "sufficiently smart compiler" or it might greatly increase compile times, but it seems like an interesting area to look into for optimizations.


Anonymous 2017-10-27 21:15:42 No. 18

I spoke with Alex about compilers yesterday, focusing on how to do parsing. He said that parsing is kind of involved, and that using a parser generator like LALRPOP can make it more complicated than it needs to be if you aren't already familiar with the tool.

A problem I've encountered in the past is that I try to go straight from the raw input to an AST, which makes everything much more complex. Instead, I now tokenize the input first, producing a Vec of tokens that are similar to AST elements but carry a bit more detail. For example, Scheme has lists as a component of the AST, but I don't have a list token; instead, when I encounter a LeftParen token, I know I'm entering a list, so I can hand off to a function that builds a list from the tokens. This alone made things much easier. I was able to write most of a Scheme interpreter in a few hours, although I skipped the step of turning raw input into tokens. I feel that going from the input to tokens will be much more straightforward.
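
The shape I ended up with is roughly this (the Token and Expr types are simplified stand-ins for mine):

```rust
// Simplified stand-ins for my actual token and AST types.
#[derive(Debug, Clone)]
enum Token {
    LeftParen,
    RightParen,
    Symbol(String),
    Number(f64),
}

#[derive(Debug, Clone)]
enum Expr {
    Symbol(String),
    Number(f64),
    List(Vec<Expr>),
}

type Tokens = std::iter::Peekable<std::vec::IntoIter<Token>>;

// Parse one expression. A LeftParen means we're entering a list, so
// we collect expressions until the matching RightParen.
fn parse(tokens: &mut Tokens) -> Option<Expr> {
    match tokens.next()? {
        Token::LeftParen => {
            let mut items = Vec::new();
            loop {
                match tokens.peek() {
                    Some(Token::RightParen) => {
                        tokens.next(); // consume the RightParen
                        return Some(Expr::List(items));
                    }
                    None => return None, // unclosed list; real code errors here
                    _ => items.push(parse(tokens)?),
                }
            }
        }
        Token::RightParen => None, // stray ')'; real code errors here
        Token::Symbol(s) => Some(Expr::Symbol(s)),
        Token::Number(n) => Some(Expr::Number(n)),
    }
}
```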

I showed my interpreter to Jorendorff, who advised me to think about how my AST is using memory. Specifically, I should place large objects like Lists and Procedures on the heap. He suggested doing this with a Pair struct, so my List variant contains Rc<Pair>. I don't think I fully understand how Rc is different from Box, so I need to read the documentation for these types.
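
From what I've read so far, the difference is ownership: Box<T> is a uniquely-owned heap allocation, so cloning a Box'd value copies the whole thing, while Rc<T> is reference-counted shared ownership, so cloning just bumps a count. The latter fits Scheme lists, where many values can share the same tail. Sketching my Value type under that understanding:

```rust
use std::rc::Rc;

// Keeping large payloads behind a pointer keeps Value itself small,
// so cloning a Value stays cheap.
#[derive(Clone)]
enum Value {
    Number(f64),
    Symbol(String),
    List(Rc<Pair>), // heap-allocated; cloning bumps a refcount
    Nil,
}

struct Pair {
    car: Value,
    cdr: Value, // another List here continues the chain
}
```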


Anonymous 2017-11-15 19:08:15 No. 30

Here's a resource dump focused on bytecode interpreters.
http://www.aosabook.org/en/500L/a-python-interpreter-written-in-python.html
https://www.cs.cmu.edu/~fp/courses/15122-s11/lectures/23-c0vm.pdf
https://csl.name/post/python-jit/
https://codewords.recurse.com/issues/seven/dragon-taming-with-tailbiter-a-bytecode-compiler
http://people.inf.ethz.ch/wirth/CompilerConstruction/CompilerConstruction1.pdf
https://github.com/eliben/bobscheme
http://chrisseaton.com/rubytruffle/jokerconf17/
https://markfaction.wordpress.com/2012/07/15/stack-based-vs-register-based-virtual-machine-architecture-and-the-dalvik-vm/


Anonymous 2017-11-26 21:42:05 No. 32

I started work on expanding the parser for my Scheme interpreter this weekend. What an abysmal experience. R7RS has a really simple syntax for identifiers that makes them unambiguous, and then every implementation I tested shits on the standard. For example, `1a` is not a valid identifier according to R7RS, but every implementation I tested allows it. They also allow things like `1.0.`, `1+2`, and `1/-2`, which are uncomfortably close to actual numbers (1.0, 1+2i, -1/2).
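
My guess at what these implementations are doing internally is nothing like the R7RS grammar: try the token as a number, and fall back to identifier on failure. A sketch of that behavior (using Rust's float parser as a crude stand-in for a real Scheme number reader):

```rust
enum Lexeme {
    Number(f64),
    Identifier(String),
}

// Anything that fails to parse as a number becomes an identifier.
fn classify(token: &str) -> Lexeme {
    match token.parse::<f64>() {
        Ok(n) => Lexeme::Number(n),
        // `1a`, `1.0.`, `1+2`, and `1/-2` all fail here, so they
        // fall through to identifiers, standard be damned.
        Err(_) => Lexeme::Identifier(token.to_string()),
    }
}
```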

I wouldn't care too much, but I haven't been able to find a grammar for the "real" identifier syntax anywhere. I was really surprised that Racket doesn't publish a grammar anywhere. I think I'm going to try to find the code for Racket's parser and look at that, but I'm not too optimistic.

Other than that things have been going pretty well. I fixed some bugs and added error handling recently. I'm thinking about how to define primitive procedures better but I haven't come up with anything yet.

I heard about CEK machines the other day, and they supposedly make it easy to add complex things like continuations. Frankly, they sound too good to be true, but it seems to be the work of Felleisen, so it might be real. As I have no idea how to implement continuations, this is very attractive to me. I plan to look into them more, but when I've tried to understand these kinds of machines in the past, I've found them confusing.
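
For my own notes, here's my current understanding of the machine's shape for a bare lambda calculus (all the names and types are mine, and I may well have details wrong). C is the control (the term under evaluation), E the environment, and K the continuation. The key point seems to be that K is ordinary data, which is presumably why continuations fall out almost for free:

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Terms of a bare lambda calculus.
enum Term {
    Var(String),
    Lam(String, Rc<Term>),
    App(Rc<Term>, Rc<Term>),
}

// Values are closures: a lambda plus the environment it closed over.
#[derive(Clone)]
struct Closure {
    param: String,
    body: Rc<Term>,
    env: Env,
}

type Env = HashMap<String, Closure>;

// The K in CEK. For call/cc you'd make this cloneable (Rc the links)
// and hand it to the program as a value.
enum Kont {
    Done,
    // Finished the operator; evaluate the operand next.
    Arg(Rc<Term>, Env, Box<Kont>),
    // Finished the operand; apply the operator we saved earlier.
    Fun(Closure, Box<Kont>),
}

// One half of the machine: dispatch on the control term. A real
// machine would run as a loop over (C, E, K) triples; recursion here
// is just for brevity.
fn eval(term: Rc<Term>, env: Env, kont: Kont) -> Closure {
    match &*term {
        Term::Var(x) => continue_with(kont, env[x].clone()),
        Term::Lam(p, b) => continue_with(
            kont,
            Closure { param: p.clone(), body: b.clone(), env },
        ),
        Term::App(f, a) => {
            let k = Kont::Arg(a.clone(), env.clone(), Box::new(kont));
            eval(f.clone(), env, k)
        }
    }
}

// The other half: dispatch on the continuation.
fn continue_with(kont: Kont, value: Closure) -> Closure {
    match kont {
        Kont::Done => value,
        Kont::Arg(arg, env, k) => eval(arg, env, Kont::Fun(value, k)),
        Kont::Fun(f, k) => {
            let mut env = f.env.clone();
            env.insert(f.param.clone(), value);
            eval(f.body.clone(), env, *k)
        }
    }
}

fn main() {
    // ((lambda (x) x) (lambda (y) y)) evaluates to the y-closure.
    let id_y = Rc::new(Term::Lam("y".into(), Rc::new(Term::Var("y".into()))));
    let id_x = Rc::new(Term::Lam("x".into(), Rc::new(Term::Var("x".into()))));
    let app = Rc::new(Term::App(id_x, id_y));
    let result = eval(app, Env::new(), Kont::Done);
    println!("result binds {}", result.param); // prints "y"
}
```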


Anonymous 2017-12-22 22:23:03 No. 38

I'm working on adding macros to my Scheme interpreter. I haven't been able to find much information on Scheme macro systems, which is really unfortunate. I think I'll end up implementing a Racket-style macro system, but the Racket docs are really annoying: they describe macro expansion in a way that doesn't match what actually happens, so I have to test things myself.

I'm not really looking forward to implementing this; I really just want it done so that I can move on. My top priorities right now are macros, continuations, and a bytecode interpreter. I don't really know anything about any of these topics, so it's going to be a lot of work.


Anonymous 2017-12-24 23:14:02 No. 39

I thought a bit more about how macros might be implemented. I think I've cleared some things up wrt hygiene in my head, but I was forgetting that macro expansion must occur before everything else. I can still see what I need to do for this, but it increases the complexity of the interpreter quite a bit, as I now need to scan for macros, expand them, and then interpret the result. I'm not even sure that this would account for macros which use macros.
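
For the macros-using-macros problem, I think the driver just has to re-expand until nothing changes. A sketch with macros reduced to plain Rust functions (so no hygiene, which is the actually hard part):

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq)]
enum Expr {
    Symbol(String),
    Number(f64),
    List(Vec<Expr>),
}

// A "macro" here is just a function from the argument forms to a new
// form; real macro objects would carry pattern/template data.
type MacroTable = HashMap<String, fn(&[Expr]) -> Expr>;

// Re-expand until a pass changes nothing, so a macro that expands
// into another macro use still gets handled. (A real expander would
// cap the iteration count to catch infinite expansions.)
fn expand(expr: Expr, macros: &MacroTable) -> Expr {
    let mut current = expr;
    loop {
        let next = expand_once(&current, macros);
        if next == current {
            return current;
        }
        current = next;
    }
}

fn expand_once(expr: &Expr, macros: &MacroTable) -> Expr {
    match expr {
        Expr::List(items) => {
            // A list headed by a macro name gets rewritten; everything
            // else is expanded element by element.
            if let Some(Expr::Symbol(head)) = items.first() {
                if let Some(m) = macros.get(head) {
                    return m(&items[1..]);
                }
            }
            Expr::List(items.iter().map(|e| expand_once(e, macros)).collect())
        }
        other => other.clone(),
    }
}
```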

I have been planning to create a bytecode interpreter, so I think at this point I should focus completely on that feature. I feel that with a bytecode compiler, handling macros will be easier, as I can expand them during compilation. It's hard for me to see how this will look, as this is new ground for me, but I'm beginning to see the last few features come together in my mind.


Anonymous 2018-10-22 09:45:23 No. 52

This weekend I wrote a runtime assembler for amd64 and then used it to write the backend for a Scheme->amd64 JIT. It went okay, but there were some problems with the runtime causing segfaults. I'm uncertain whether my approach to compilation is even remotely sane.

Right now I have a `State` struct which contains a symbol table (a map of Symbol->Value), a stack pointer (I thought reusing Rust's stack might be a bad idea, so I allocate an 8MB buffer and maintain a pointer into it), and a head pointer for the GC (it isn't really implemented yet, but I intend to write a mark-and-sweep GC that keeps a list of all allocated objects). This struct is passed as the first argument to all procedures generated by my compiler.
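
Roughly, the struct looks like this (placeholder types; the real ones are richer):

```rust
use std::collections::HashMap;

type Symbol = u64; // interned symbol id (placeholder)
#[derive(Clone, Copy)]
struct Value(u64); // tagged value (placeholder)
struct GcObject; // node in the allocated-object list

// Every procedure my compiler emits receives a pointer to this in
// RDI as its first argument.
#[repr(C)]
struct State {
    symbols: HashMap<Symbol, Value>, // Symbol -> Value table
    stack: *mut u8,                  // cursor into an 8MB buffer I allocate,
                                     // instead of reusing Rust's stack
    gc_head: *mut GcObject,          // head of the allocated-object list
                                     // for the planned mark-and-sweep GC
}

impl State {
    // "Methods" are just procedures whose first argument is the
    // struct, so compiled code can call them like any other function.
    extern "C" fn define(&mut self, sym: Symbol, val: Value) {
        self.symbols.insert(sym, val);
    }

    extern "C" fn lookup(&self, sym: Symbol) -> Value {
        self.symbols[&sym]
    }
}
```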

My compiler also generates code to call methods on this struct (note that methods are just procedures whose first argument is the struct itself), such as `define` and `lookup`. So something like `(define x 5)` generates code to load the symbol `x` into RSI, load the constant 5 into RDX, and load a pointer to `State::define` into RAX. State is always in RDI. We then emit a `call rax` instruction.
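
With a stand-in for my assembler's API (the real method names differ), the sequence for `(define x 5)` comes out to something like:

```rust
// Stand-in emitter; encodings elided, comments show the intent.
struct Assembler {
    code: Vec<u8>, // machine code accumulates here
}

impl Assembler {
    fn mov_rsi_imm64(&mut self, _v: u64) { /* 48 BE <imm64> */ }
    fn mov_rdx_imm64(&mut self, _v: u64) { /* 48 BA <imm64> */ }
    fn mov_rax_imm64(&mut self, _v: u64) { /* 48 B8 <imm64> */ }
    fn call_rax(&mut self) { /* FF D0 */ }
}

// Codegen for `(define x 5)`. RDI already holds the State pointer,
// so only the remaining argument registers get loaded.
fn emit_define(asm: &mut Assembler, sym_x: u64, define_addr: u64) {
    asm.mov_rsi_imm64(sym_x);       // arg 2: the symbol `x`
    asm.mov_rdx_imm64(5);           // arg 3: the constant 5
    asm.mov_rax_imm64(define_addr); // address of State::define
    asm.call_rax();                 // call rax
}
```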

This works just fine, but if we then try to evaluate something like `(define y x)`, we get a segfault. This would first generate code to perform a lookup on `x`, similar to how we generate code for a define, and place the value `x` is bound to in RDX, with all else the same. The segfault occurs in the call to State::define, some problem with inserting into the hashmap. I can't imagine what might be different, as the same code is being emitted; the only difference is that this time we first perform a lookup. I should note that lookups work just fine on their own.

I also had a segfault when calling something like `(cons 1 2)`. This is procedure application, so we perform a lookup on cons and place its value in RAX. State is in RDI, and we place the constant 1 in RSI and 2 in RDX. Then we call the code pointed at by RAX, which for cons is a primitive procedure defined in Rust. I debugged what was happening, and it segfaults when actually creating the Pair struct. The weird thing is that it happens on a line where we're setting a member of the struct to `root & CONST`. The Pair struct is `#[repr(C, packed)]`, and apparently taking references to packed struct fields is UB. Reading the issue about it, though, it seems this is due to the potential for unaligned reads/writes, but they only mention this as a problem on SPARC; I've never heard of it being an issue on amd64.
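
In case it helps anyone else: the UB is specifically in taking a reference to a packed field, and copying the field out by value sidesteps it. A sketch with a made-up Pair layout:

```rust
// Made-up layout; the point is that `root` ends up unaligned.
#[repr(C, packed)]
struct Pair {
    tag: u8,
    root: u64, // at offset 1, so never 8-byte aligned
}

const CONST: u64 = 1;

fn flags(p: &Pair) -> u64 {
    // `&p.root` would be an unaligned reference, which is UB even on
    // amd64: the hardware tolerates unaligned loads, but the compiler
    // is still allowed to assume references are aligned. Copying the
    // field by value (or std::ptr::read_unaligned) is fine.
    let root = p.root;
    root & CONST
}
```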

This got a bit long, but my point is that I don't think my approach is very good. Not sure what else to do though, and I can't really find resources on this type of thing. The only option seems to be reading the source of existing projects like LuaJIT or V8, but those are pretty complex projects.


Anonymous 2018-10-24 11:14:38 No. 53

>>52

Well, I fixed my segfaults. While my compiler generates code that never modifies RDI, I can't make rustc maintain that guarantee. We call code generated by rustc for the runtime and primitives, so whenever we emit a call instruction we need to save and restore RDI. This fixed the `(define y x)` issue, and interestingly it also fixed the cons issue. I'm still not quite sure how it did that, but I won't complain. Now I need to fiddle with if and lambda compilation, as they may be broken.
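
In emitter terms (same made-up API as the sketch in >>52), every call into rustc-compiled code now looks like:

```rust
// Continuing the stand-in emitter from >>52: rustc-compiled code is
// free to clobber RDI, so we preserve it around every call.
impl Assembler {
    fn push_rdi(&mut self) { /* 57 */ }
    fn pop_rdi(&mut self) { /* 5F */ }

    fn call_runtime(&mut self, target: u64) {
        self.push_rdi();            // save the State pointer
        self.mov_rax_imm64(target);
        self.call_rax();            // RDI may be clobbered in here
        self.pop_rdi();             // restore the State pointer
    }
}
```

(A real emitter also has to keep the stack 16-byte aligned across the call; the lone push changes the parity.)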


Anonymous 2018-12-25 08:46:33 No. 54

I've been thinking about writing a language recently. I had an idea for a language with types, procedures, and byte arrays as the fundamental things. I think that with byte arrays it should be possible to do whatever you want; e.g. an i32 is a byte array of length 4.
I think that with a strong type system and some special bits in the compiler (inline asm/IR) it should be possible? I need to read some more type theory to be sure.
I think this would be interesting because it allows high-level constructs to be implemented as macros (things like structs, unions, enums, etc.). I'm really not sure if it will work out, but I might experiment with it.
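
To sketch what I mean in Rust terms (purely illustrative), an i32 would just be a nominal wrapper over four bytes, with its operations defined on top and hopefully inlined away:

```rust
// Illustrative only: what "an i32 is a byte array of length 4" might
// desugar to. In the language itself, a macro would generate this.
#[derive(Clone, Copy, PartialEq, Debug)]
struct I32([u8; 4]);

impl I32 {
    fn from_i32(v: i32) -> I32 {
        I32(v.to_le_bytes())
    }

    // Addition defined purely in terms of the byte representation; a
    // real compiler would reduce this to a single add instruction.
    fn add(self, other: I32) -> I32 {
        let a = i32::from_le_bytes(self.0);
        let b = i32::from_le_bytes(other.0);
        I32(a.wrapping_add(b).to_le_bytes())
    }
}

fn main() {
    let x = I32::from_i32(40).add(I32::from_i32(2));
    assert_eq!(x, I32::from_i32(42));
}
```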