After having success with moving running H2D2 programs between platforms, I wanted to try running H2D2 on the Raspberry Pi. I was pretty confident that it would work, having already compiled H2D2 from source on Debian on another machine. What I wanted to do this time was run some H2D2 code on the Raspberry Pi and then transplant the program to Windows and allow the code to run to completion. So here goes:
...well that seems to work OK. But that was pretty much expected. You can see that I ran the demo code for 500ms this time. But the real trick is to see whether the generated file is binary compatible with other platforms. If I move the output file to my Windows machine, will the H2D2 program continue exactly where it left off? Drum roll please.
Yay! After putting all that effort in, it's nice to know that H2D2 bytecode is Raspberry Pi compatible; I hope it goes a long way towards my efforts at cross-platformness. Of course, this also means that I have a way to fork a running program, since I could go back to the file that I copied off the Raspberry Pi and start again from that exact point as many times as I like.
...umm and I also need to write a different example program, all these mandelbrots are starting to get boring again. Trouble is, it is quite a good piece of test code. Anyway, I'm off to implement number arrays in H2D2 now.
So one of my goals in developing DALIS/H2D2 was to make it possible to run a single instance of a program on multiple platforms. Since the H2D2 virtual machine can run code for a timeslice and then persist the entire state of the running program, it should be possible to put an executing H2D2 program into hibernation, move it to a different platform (i.e. a machine with an entirely different type of OS or processor) and then carry on running the same program. No matter what point the program was frozen at, the code should be able to carry on where it left off.
Well I've now gotten to the point where I can test that theory. The first experiment is to run a program on Windows for a few milliseconds and then complete the execution on Linux. To make this work I needed to make sure that all my data is persisted in sizes that are the same on different platforms, so I use int32_t instead of int, that type of thing. Since I've written my code in a cross-platform way, and since I'm using data sizes that will be the same on different platforms, everything should just work. So here we go, I'm running my mandelbrot program on Windows for 200ms:
...so that outputs a file called 'demo.hby' which is the H2D2 bytecode including its persisted state (all the program instructions, the call and data stacks and the values of all variables). Now I need to move that file to my Linux box and run the code from where it stopped. On the Linux machine I have already compiled the H2D2 virtual machine from source using GCC of course. Here goes:
Awesome! It works! I guess it's not much more than a neat trick at the moment, but I think it's an achievement of sorts. If you had some kind of long running process, it might be handy to be able to wake it, run it on whatever machine was available, and then put it back into hibernation. Okay, you can't start re-writing all your business logic in H2D2 just yet... but it's early days. This is why I always imagined DALIS / H2D2 to be a cloud-based language, where you don't care what type of platform or processor is being used from one moment to the next.
So the next obvious experiment is to do the same thing, but on the Raspberry Pi... maybe I'll do it in reverse, by starting the program on the Raspberry Pi and then finishing it on Windows.
Well, I tried running my H2D2 programming language / virtual machine thingy on the Fez Panda II by means of RLP, but I wasn't successful. Alas, the amount of memory left for running native code is not big enough for it. If I was really brave I could use the board as a native ARM development board I think, but I'd rather do other stuff...
Speaking of which, I've gotten round to hacking up a makefile for H2D2 so I can compile it on Linux with GCC. It was easy really, I've just created the simplest makefile you can imagine. But here is my usual victory dance running on Debian Squeeze emulated in VirtualBox:
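I haven't reproduced my actual makefile here, but "the simplest makefile you can imagine" for a GCC build is along these lines (the source file names are assumed for illustration):

```makefile
CC = gcc
CFLAGS = -O2 -Wall

h2d2: main.c parser.c vm.c
	$(CC) $(CFLAGS) -o $@ $^

clean:
	rm -f h2d2
```

One rule to build the binary, one to clean up; `$@` is the target name and `$^` expands to all the prerequisites.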
Which is awesome, and 440 milliseconds is rather quick - especially because it's running in an emulator. I also tried it on the machine I built from an Intel D410PT motherboard and it managed to do it in 400ms flat. These rather unscientific benchmarks seem to indicate that GCC produces more efficient code on Linux than Pelles C does on Windows.
So I guess the only logical step now is to compile it on the Raspberry Pi. It would be rude not to.
I've been trying to make my latest attempt at writing a programming language (which I'm currently calling H2D2) as portable as possible. So I decided to try and recompile it for AVR microprocessors using WinAVR. It worked fine with just some very minor tweaks (and the odd bug fix). So here's a simple H2D2 program running on a (simulated) AVR microprocessor:
Specifically, it's a simulated ATmega128 running inside VMLAB. The program just writes out a series of ASCII characters in a loop, like this:
repeat (c=c+1 if c<91)
There have also been some more improvements to the syntax, where I'm continuing to draw on DALIS for inspiration. This time round I seem to be making more use of brackets. More things are coming out looking like C functions, which is why we have loop() and repeat(). I'll probably enforce brackets with if() as well.
Currently, the microprocessor is parsing the source code, generating the syntax tree (i.e. compiling to H2D2 bytecode) and then running it in the H2D2 Virtual Machine. It might be better to just have the VM on a microprocessor and somehow get the bytecode onto the device pre-compiled.
But at least this microprocessor example demonstrates that my C code is reasonably portable; hopefully I can keep it like this. In reality, running a Virtual Machine on top of a small microprocessor might not be very practical, but really I'm just trying to make sure that my code can target other devices...
I'm tempted to try running this code on my Fez Panda II by means of RLP, which should allow me to run H2D2 as native ARM code (called from inside the .Net Micro Framework). That might have to be tried out.
So I did some timings with H2D2 (which is written in C) versus my original proof of concept, DALIS, which was written in C#. It isn't a comparison of the speed of C programs compared to .Net ones, because H2D2 is written totally differently - there's no parsing of source code going on when I'm running my H2D2 code, for example. So I'm not trying to prove the .Net framework is slow, I'm just trying to compare DALIS with its offspring. Plus, I've been more careful writing H2D2 because I have already proved the theory; this time I'd like to take my time, so the code is hopefully more efficient.
Having said that, I know that I could further optimise H2D2 if I wanted to. I've already got a few ideas to make loops faster, and there's still some debugging code that I could remove.
But I was shocked when I used my Mandelbrot set drawing program as a benchmark. Running DALIS from the command line, it took 11.8 seconds to draw the ASCII Mandelbrot set. Doing exactly the same thing in H2D2 took just 650 milliseconds. Wow. That's a big difference. It was worth the rewrite.
What I'd like to do now is take a single H2D2 program - one that I have compiled to bytecode - and run it for a timeslice on Windows and then execute the remainder of the program on Linux. I would like to prove that it will work. Even better if I can do it on the Raspberry Pi... because then it will demonstrate different OSes and different processor types. Let's see what happens...
I don't know if there are many practical uses for sharing programs between devices and operating systems whilst they're actually being executed, but it seems like a neat trick anyway.
So... my latest attempt at writing a programming language, which I'm currently calling H2D2, is taking shape. I can build loops, assign numeric variables and evaluate expressions. It is still only a subset of the syntax from my original DALIS language, but I am able to write some simple programs now. You know what's coming don't you...? Here's a little something that I've been working on, do you know what it is yet?
repeat if ((r*r)+(n*n)<4) & h>32
repeat if b<1
repeat if e>-1.2
Yup, it's the famous ASCII mandelbrot. You'll also notice that I've switched to lower case for the keywords this time round. I got tired of feeling like I'm shouting when writing code. This version also uses '&' to mean 'logical and', instead of the actual word 'and'.
This is the syntax tree that's created when I parse that code:
Well, that's progress. It's not bad considering my C is still rusty, but it's coming back to me. I haven't timed anything yet, but it certainly feels faster; on my laptop this program draws the mandelbrot set in about one second, I reckon. Of course, I can keep the syntax tree as a kind of compiled bytecode, meaning that the program can be run without the need for the parser, which would make it faster still.