Joining the Ethereum Foundation

While I’ve been writing lots of articles over at thoughtram and MachineLabs in recent years, I guess few people know I’ve had this private blog for almost a decade now. I admit I didn’t really give it much attention lately, which is why the last blog post (on Rust!) dates back to 2014.

Today is the day that I decided to break with the silence and bring in some fresh air here!

Looking back

I’ve been writing software for more than 20 years in various languages and environments, but looking back it stands out to me that I’ve spent the majority of my professional career doing web development. In fact, I’ve been running a business with my friend Pascal for about four years, which took us on a journey around the world to teach Angular to a large number of people. It’s been a fun ride and I wouldn’t want to miss a single day.

That said, over the years I’ve become more and more tired of this field. I felt like I’d been moving in circles for far too long. I’ve been following the development of Angular since 2010 and been preaching about RxJS (even in this very blog) since 2011, and yet I was still talking about the exact same things again and again to make a living.

On to something new

I found myself striving for something new, something that would grab my attention and spark new excitement. In 2016 I got more and more excited about Machine Learning, started blogging about Keras and eventually co-founded a new company, MachineLabs, Inc., to help foster the growing Machine Learning community. Last year, in 2017, I also began to read more and more about decentralization and cryptocurrencies and got really fascinated by the Ethereum network.

Let me quote the beige paper for a short description of what Ethereum is.

The Ethereum Protocol is a deterministic but practically unbounded state-machine with two basic functions; the first being a globally accessible singleton state, and the second being a virtual machine that applies changes to that state.

To me, Ethereum is one of the most exciting and important ongoing research projects that has huge potential to reshape our world for a better future.
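To make that description a bit more concrete, here is a toy sketch of the idea: a single shared state, plus a deterministic function that applies changes to it. Everything here (the `Transaction` type, the `apply` function, the balance map) is my own made-up simplification for illustration, nothing from the actual protocol:

```rust
use std::collections::HashMap;

// Hypothetical, heavily simplified types -- nothing from the real protocol.
type Address = &'static str;

struct Transaction {
    from: Address,
    to: Address,
    amount: u64,
}

// The "virtual machine" part: a deterministic state transition function.
// Given the same balances and the same transaction, it always produces
// the same new state.
fn apply(balances: &mut HashMap<Address, u64>, tx: &Transaction) -> Result<(), String> {
    let from_balance = *balances.get(tx.from).unwrap_or(&0);
    if from_balance < tx.amount {
        return Err(format!("insufficient funds for {}", tx.from));
    }
    balances.insert(tx.from, from_balance - tx.amount);
    let to_balance = *balances.get(tx.to).unwrap_or(&0);
    balances.insert(tx.to, to_balance + tx.amount);
    Ok(())
}

fn main() {
    // The "globally accessible singleton state", shrunk down to a balance map.
    let mut balances: HashMap<Address, u64> = HashMap::new();
    balances.insert("alice", 100);

    let tx = Transaction { from: "alice", to: "bob", amount: 30 };
    apply(&mut balances, &tx).unwrap();

    println!("{:?}", balances); // alice: 70, bob: 30
}
```

The real protocol is vastly more involved, of course, but the shape (state plus transition function) is exactly what the quote describes.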

Joining the Ethereum Foundation

Today I’m excited to announce that I’ve officially joined the Python development team at the Ethereum Foundation as a Senior Software Engineer. The Ethereum Foundation is a non-profit organization that helps build up the Ethereum ecosystem.

Ethereum development isn’t driven by a single team or even a single project. Instead, development is much closer to the way web standards evolve: multiple teams and projects implement software according to a defined specification. There is a constantly growing number of projects building software in all kinds of different languages on top of this spec.

I started to look around in the space, and a couple of months ago I found out about py-evm, a fresh reimplementation of the Ethereum Virtual Machine (EVM). It caught my attention for several reasons:

  • It’s written in Python, a language I came to enjoy through my Machine Learning work
  • It aims to support cutting-edge Ethereum research
  • It has a super nice and welcoming team working on it

I started contributing for fun, but when I was offered the opportunity to join the Python team at the Ethereum Foundation, I didn’t have to think very long about it.

I’m ridiculously happy to get the chance to join such a talented team on such an exciting, open source research project. At the same time, I’m starting to get a bit of impostor syndrome, because everyone just seems to be way smarter and more experienced than me. I still have a long way to go to catch up with them!

What to expect from me?

I’ll be working alongside the rest of the team to close the remaining issues that keep us from shipping a first working node that can participate in a network.

Besides helping with the core development as well as improving the testing utilities, a main focus of my work will be on improving the documentation.

Great documentation leads to greater adoption and more external contributions, and plays a crucial role in the success of open source projects.

The fact that I am new to the blockchain field myself can be an advantage here, since I can relate very well to the questions people want the documentation to answer.

What about thoughtram and MachineLabs?

If you’ve been following my work, you may be wondering what happens to thoughtram and MachineLabs.

Although I will continue to be involved in the overall management of thoughtram, I will no longer have much to do with the day-to-day business. That said, we have a great team of trainers who still live and breathe Angular and deliver excellent workshops. In fact, Pascal and Dominic just recently added tons of new courseware, and we have many more courseware updates planned for the future.

Many of the day-to-day tasks of running a business have already been taken over by Elvira, Executive Assistant for both thoughtram and MachineLabs, who will continue to work closely with Pascal to steer the business.

To conclude, I am confident that thoughtram has a very bright future and I am grateful to have a strong team behind me supporting my career shift.

MachineLabs, on the other hand, is a different beast, since Machine Learning continues to be a major personal interest of mine next to Ethereum. I’m not gonna lie: obviously, I’ll have less time to dedicate to the project in the future. That said, I will continue my work on MachineLabs, as will everyone else. We have also just recently open-sourced the entire code base and are planning to shift to a decentralized governance model.

In that sense, MachineLabs aligns very well with my vision for an Ethereum-powered decentralized future. In fact, we’ll be at the forefront of the movement, building an open-source service that will be owned and governed by the community that runs it. This is not a sprint, it’s a marathon.

Wrapping up

I couldn’t be more excited about the future. We are heading into interesting times, both technologically and as a society. I’m a big believer in the decentralization movement, and I believe it will keep us busy for the next 20 years or much longer. I’m excited to contribute to it and I encourage everyone to join the movement!


Rust will be the language of the future

Rust will be the language of the future. I bet on it.


What does it even take for a language to become successful?

Well, lots of things, I think, but at the very least it should have a unique selling point: something that makes the language stand out and is attractive to developers.

In 2009 I was all excited about Node.js. I was writing web servers with ASP.NET at the time. Suddenly evented I/O was there to teach us that all our threading was bad for scalability. That, combined with the promise of sharing code between client and server, got all of us excited… Yay, five years have passed since then.

Well, nowadays it seems it’s all about the march towards Go.

So what does Go bring to the table? First of all: it’s a compiled language. For the last thirty years or so we seem to have focused on interpreted languages, or ones that run on a virtual machine. Think Java, Scala, C#, F#, JavaScript, Python, Ruby and so on.

Now the buzz is all about compiling to native code again. Even the .NET team just recently announced that you can now compile your C# code to native code to get the speed of C++ with the joy of C#.

So one of Go’s claims is to be fast just like C++ but without the pain of C++. It also offers a great answer to parallelism with goroutines and channels. In a few words: threads without the costs of threads[1]. Green threads. Yeah, sounds great! As a language geek that got me hooked as well. But not for long. Go is simple. It’s easy to learn. Great. On the other hand, it’s just a little *too* simple for me. No generics, barely any functional programming influence, and it has nil (null). I quickly came to the conclusion that Go is not for me.

So, what’s up with Rust?

Rust is a language by the awesome folks at Mozilla. It’s entirely developed in the open (that alone is such a huge win!) and it makes big claims that have the potential to disrupt our current programming language world.

One of its biggest claims is that you can use Rust for projects that previously would have been written in C or C++. Like kernels, operating systems or browsers. Not even Go in its current form can make such claims, because it’s not low level enough.

Rust, on the other hand, tries to achieve two things at once: being low level and high level at the same time! This is the most exciting part for me! On one hand, Rust is a very modern language. It has generics and traits, it is expression oriented, and it has pattern matching, closures and a lot of other exciting features.
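For a tiny, made-up taste of several of those features at once (in current Rust syntax): an enum destructured via pattern matching, a trait, a generic function on top of it, and a closure. Note that `match` is an expression, so its value is the function’s return value.

```rust
#[derive(Debug)]
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape {
    fn area(&self) -> f64 {
        // `match` is an expression: its result is returned directly.
        match *self {
            Shape::Circle(r) => std::f64::consts::PI * r * r,
            Shape::Rect(w, h) => w * h,
        }
    }
}

// A generic function that works with anything implementing `Area`.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    // `fold` takes a closure accumulating the sum.
    shapes.iter().fold(0.0, |sum, s| sum + s.area())
}

fn main() {
    let shapes = vec![Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    println!("{}", total_area(&shapes));
}
```

None of this would look out of place in a much higher-level language, yet it compiles down to native code.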

On the other hand, it is very low level, too. It doesn’t use garbage collection by default. Instead it defines a couple of rules that the compiler enforces on you *at compile time*, which eliminates the need for garbage collection. That said, you can still apply garbage collection to certain parts of your program if you need to!

But it’s not just about garbage collection. Rust also lets you take fine-grained control over heap vs. stack allocation, which for instance wouldn’t be possible in Go, because Go’s compiler uses escape analysis to figure out whether something should go on the heap. So basically, you can have all the low level control you need, when you need it!
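A deliberately trivial sketch of what that control looks like: you decide whether a value lives on the stack or, via `Box`, on the heap, and the heap allocation is freed deterministically when its owner goes out of scope.

```rust
fn main() {
    // A fixed-size array lives on the stack by default.
    let on_stack = [0u8; 64];

    {
        // Box::new moves the value into an explicit heap allocation.
        let on_heap: Box<[u8; 64]> = Box::new([1u8; 64]);
        println!("heap buffer of {} bytes", on_heap.len());
    } // `on_heap` goes out of scope here and the heap memory is freed
      // immediately -- no garbage collector involved.

    println!("stack buffer of {} bytes", on_stack.len());
}
```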

But does that mean it’s restricted to low level systems programmers? I don’t think so! Speaking for myself, I’m certainly not a low level systems programmer. I wish I had C or C++ skills. Unfortunately, I’ve coded with pampers on all my life. I used BASIC, Visual Basic, PHP, Java, C#, F# and JavaScript, but I never used C or C++[2].

Whatever! I’m currently getting my feet wet with Rust. I started to work on a web application framework for Rust that aims to be simple.

Does this look too low level to you (apart from the terrible syntax highlighting)?

extern crate http;
extern crate nickel;

use std::io::net::ip::Ipv4Addr;
use nickel::{ Nickel, Request, Response };

fn main() {
    let mut server = Nickel::new();

    fn a_handler (_req: &Request, res: &mut Response) {
        res.send("hello world");
    }

    server.get("/bar", a_handler);
    server.listen(Ipv4Addr(127, 0, 0, 1), 6767);
}
This is a simple application. I don’t think it looks too scary for anyone who has ever written a web application in express.js. I’ll talk about it at the next Rust Bay Area meetup.

Leaving the language aside for a moment: the community is great, too!

I’m probably the one with the most stupid Rust questions on StackOverflow, but people invest a lot of time to put together great answers. And as I’m writing this, people on IRC are helping me to get the wording right.

And dear reader, if you are looking at this from a JavaScript or Ruby angle, you might be surprised to find some familiar faces. Yehuda Katz, EmberJS and Ruby on Rails core developer, is currently hard at work on a package manager and several other libraries for Rust.

If you haven’t taken a look at Rust yet, I highly recommend doing so…

[1] Not trying to be accurate.  Go hacker news, kill me.
[2] Not 100%. I tried C++ at the age of 11 but failed miserably.

Some advice for Go debugging with gdb

I’m currently getting my feet wet with Go. I quite like the language, but when it comes to debugging I’m still trying to get warm with what it offers. I’m just not the “printf” kind of guy. I want a real debugger which I can use to step through the program flow to investigate (and manipulate) state.

Go programs can be debugged using the gdb debugger. I’m slowly getting warm with it and I thought I’d share some advice.

1. There is a Sublime plugin for gdb, but my current feeling is that I’d rather use gdb directly on the command line.

2. Create these two aliases for your command line:

alias god='go build -gcflags "-N -l"'
alias godrun='go build -gcflags "-N -l" -o debug && gdb debug'

The god command just compiles your Go app with the useful debug flags (-N disables optimizations, -l disables inlining) and uses the default program name for the output file. The godrun command uses the hardcoded name debug as the output file name and directly feeds it to gdb, so it’s as easy as calling godrun to start debugging. Yay!

3. Add debug to your .gitignore file so you don’t commit this special debug build to your repository.

4. If you want to see the value of, say, a struct behind a pointer the syntax in gdb is p *'&varName' (yep asterisk outside the quotes, then the varName in quotes with an ampersand prepended).

5. Each print in gdb assigns the printed value to a temporary $-prefixed variable (e.g. $5) which can be used from then on (e.g. $5.SomeStructProperty + 1000).

6. Variables can be manipulated with set variable varName=value (e.g. set variable varName=1000 or set $5.SomeStructProperty = 1000).

Those are just random findings and might be totally obvious if you’ve ever worked with gdb before. However, chances are that if you are looking into Go, you are actually NOT coming from the lower level language ecosystem (e.g. C/C++) but rather from higher level ecosystems such as Node.js or .NET.

On, semantic html and the future for web apps

Now that it’s out of private beta and the code is online, I was eager to take a closer look.

I stumbled over this talk by Steve Newcomb the CEO of

It’s *super* interesting to watch this stuff. Go watch it!

It would be very interesting to take some time to play with sometime soon. What’s interesting, though, is that it’s an entirely different paradigm from what AngularJS, Polymer, vue.js or even WebComponents are after.

They basically say that the gaming industry got rendering right (minute 43:35) and the web industry got everything else right, and they are trying to combine the two. They don’t believe in semantically correct HTML for web apps. They say this stuff is for documents and no one should care about it when building web apps. That’s also what Sencha is telling people.

It’s interesting, and a part of me agrees with that. On the other hand, HTML is awesome for designers to work with. People are even starting to build things *outside the browser* (think: GitHub’s Atom editor) with web technology *because* of HTML/CSS.

I remember that working with Sencha was a pain from a designer’s perspective *because* you work very decoupled from the underlying HTML.

It really seems people are strongly divided about the direction the web is taking. Google seems to strongly believe in semantic HTML and in enriching it for web apps. Angular, Polymer, vue.js and WebComponents are all projects strongly driven by Google. From the perspective of a company driven by ad sales this makes perfect sense: semantic markup has great value for Google as a search engine / ad selling company.

On the other hand there are companies like Apple and Sencha which do *not* believe this is the right way forward.

I can’t say that I have a clear opinion about this stuff. Just that it’s an interesting observation.

StackWho is now ninya

A couple of weeks ago I blogged about StackWho – a project to search for users on StackOverflow by their location and skills. Today I’m happy to announce that StackWho is now ninya!

Why the new name?

I felt the name was just too tightly coupled to StackOverflow. I’m planning to increase the scope of the project by combining different sources. I’ll share more about that soon…

With the new name, we knew that it was also time for a really cool logo.


We were lucky to have @oriSomething join us and support the project with his awesome design skills! You should definitely send him hugs on Twitter!

Is this a true open source project?

I have been asked this question a couple of times, so I want to make this 100% clear: there isn’t a single bit of code in this project that is not on GitHub or not covered by the MIT license. You’ll find all the code in our ninya organization on GitHub, and if you ever find a repository that is missing the MIT license, just send a PR and I promise to merge it instantly. Everything about the project will remain open source as long as I am in charge of things. And by the way, you should really star the project here!

That being said, I have bills to pay for this project, and if you are feeling generous, feel free to leave a tip here 🙂

What’s coming?

Yep, lots of things. There’s a new sync API brewing which will make it much easier to sync with more sources. It will also drop the PostgreSQL dependency.

It’s also planned to overhaul the entire design of the website and to create a real ninya blog. Stay tuned.

Tag related sorting just landed in StackWho

StackWho just got a nice update which makes it much more useful! Previously, results were always sorted by the user’s reputation. That’s probably what one would expect if you only search by location.

However, once you start searching by tags you would probably expect that the sorting relates to such tags. Fortunately that’s exactly how things work now 🙂

This makes for some very interesting queries. If you combine multiple tags, the results will be sorted by the cumulative answer score across those tags.

For instance, here are the users from Hannover, Germany sorted by cumulative answer score for the tags javascript, angularjs and git.
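Conceptually, the sorting boils down to something like the following sketch (made-up data and Rust instead of the actual ElasticSearch query, just to show the idea): sum each user’s answer score for the queried tags and sort descending on that sum.

```rust
use std::cmp::Reverse;
use std::collections::HashMap;

// Hypothetical, simplified user record: a name and per-tag answer scores.
struct User {
    name: &'static str,
    tag_scores: HashMap<&'static str, i64>,
}

// Sum the user's answer scores over the queried tags; missing tags count as 0.
fn cumulative_score(user: &User, tags: &[&str]) -> i64 {
    tags.iter()
        .map(|t| *user.tag_scores.get(t).unwrap_or(&0))
        .sum()
}

fn main() {
    let mut users = vec![
        User { name: "alice", tag_scores: [("javascript", 50), ("git", 10)].iter().cloned().collect() },
        User { name: "bob", tag_scores: [("javascript", 20), ("angularjs", 90)].iter().cloned().collect() },
    ];

    let query = ["javascript", "angularjs", "git"];
    // Sort descending by cumulative score.
    users.sort_by_key(|u| Reverse(cumulative_score(u, &query)));

    for u in &users {
        println!("{}: {}", u.name, cumulative_score(u, &query));
    }
}
```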

This is just scratching the surface of what’s planned for the future. I migrated the search to ElasticSearch to expand the feature set. Previously the search was implemented on top of PostgreSQL (used as a NoSQL DB), which was fine for the beginning, but I felt I could move faster by migrating to ElasticSearch.

StackWho is entirely open source and MIT licensed. If you’d like to get involved, head over to the repository and help shape its future. If you don’t want to contribute directly, you can still help by starring the repository on GitHub and sharing this blog post as much as you can 🙂

Watch out for more awesomeness coming soon!

Introducing StackWho

I’m working on this little side project and I thought I’d share some words about it.

So, I wanted to find other StackOverflow users from my city and filter them by skill set. Turns out that’s not possible by default with the search provided by StackOverflow. Oh wait, it is! They have a thing called “Candidate Search”. Unfortunately, a one month subscription to the candidate search costs $1,000.

And of course the API provided by StackOverflow doesn’t make it easy to run such queries either. However, what you can do is just scrape the entire[1] user data of StackOverflow and then build such a search yourself.

Introducing: StackWho.

It’s pretty rudimentary at this point. You can enter comma-separated locations to combine users from multiple cities (or to alias different spellings), and you can enter multiple tags which the users should have as their top answer tags.

The data is continuously synchronized with StackOverflow[1], which means user data should usually be only a couple of days old. The frontend is written with AngularJS and the backend is built with NodeJS, split into a query part and a sync part. It’s hosted on Heroku. Everything is MIT licensed and I’m happy to merge Pull Requests 🙂

Also, any feedback is highly welcome! Now go and check out the top users from San Francisco & Berkeley with strong AngularJS or NodeJS skills: Check it out here!

[1] I currently only sync the top 150k users