Joining the Ethereum Foundation

While I’ve been writing lots of articles over at thoughtram and MachineLabs over the last few years, I guess few people know I’ve had this private blog for almost a decade now. I admit, I didn’t really give it much attention lately, which is why the last blog post (on Rust!) dates back to 2014.

Today is the day I decided to break the silence and bring in some fresh air!

Looking back

I’ve been writing software for more than 20 years in various languages and environments, but looking back it stands out to me that I’ve spent the majority of my professional career doing web development. In fact, I’ve been running a business with my friend Pascal for about 4 years that took us on a journey around the world to teach Angular to a large number of people. It’s been a fun ride and I wouldn’t want to miss a single day.

That said, over the years I’ve become more and more tired of this field. I felt like I’d been moving in circles for far too long. I’ve been following the development of Angular since 2010, been preaching about RxJS (even in this very blog) since 2011, and yet I was still talking about the exact same things again and again to make a living.

On to something new

I found myself striving for something new, something that would grab my attention and spark new excitement. In 2016 I got more and more excited about Machine Learning, started blogging about Keras and eventually co-founded a new company, MachineLabs, Inc, to help foster the growing Machine Learning community. Last year, in 2017, I also began to read more and more about decentralization and cryptocurrencies and got really fascinated by the Ethereum network.

Let me quote the beige paper for a short description of what Ethereum is.

The Ethereum Protocol is a deterministic but practically unbounded state-machine with two basic functions; the first being a globally accessible singleton state, and the second being a virtual machine that applies changes to that state.
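
Just to make that definition a little more tangible, here’s a toy sketch in JavaScript (my own illustration, nothing like the real protocol): the singleton state is reduced to a plain mapping of accounts to balances, and the “virtual machine” is reduced to a single deterministic function that applies a transaction to produce the next state.

```javascript
// Toy illustration of "state machine + transition function" (my own
// simplification, not actual Ethereum semantics): the state maps account
// names to balances; applyTransaction deterministically derives the next state.
function applyTransaction(state, tx) {
  if ((state[tx.from] || 0) < tx.amount) {
    throw new Error('insufficient balance');
  }
  // Derive a new state instead of mutating the old one
  var next = Object.assign({}, state);
  next[tx.from] -= tx.amount;
  next[tx.to] = (next[tx.to] || 0) + tx.amount;
  return next;
}

var genesis = { alice: 10, bob: 0 };
var state1 = applyTransaction(genesis, { from: 'alice', to: 'bob', amount: 3 });
// state1 is { alice: 7, bob: 3 }; genesis is left untouched
```

Every node that applies the same transactions to the same starting state ends up with the same state, which is the whole point of the “deterministic state machine” framing.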

To me, Ethereum is one of the most exciting and important ongoing research projects that has huge potential to reshape our world for a better future.

Joining the Ethereum Foundation

Today I’m excited to announce that I’ve officially joined the Python development team at the Ethereum Foundation as a Senior Software Engineer. The Ethereum Foundation is a non-profit organization that helps build up the Ethereum ecosystem.

Ethereum development isn’t driven by a single team or even a single project. Instead, the development is much closer to the way web standards evolve: multiple teams and projects implement software according to a defined specification. There is a constantly growing number of projects developing software in all kinds of different languages on top of this spec.

I started to look around in the space, and a couple of months ago I found out about py-evm, a fresh reimplementation of the Ethereum Virtual Machine (EVM). It caught my attention for several reasons:

  • It’s written in Python, a language I came to enjoy through my Machine Learning work
  • It aims to support Ethereum research at the forefront
  • It has a super nice and welcoming team working on it

I started contributing for fun, but when I was offered the opportunity to join the Python team at the Ethereum Foundation, I didn’t have to think very long about it.

I’m ridiculously happy to get the chance to join such a talented team on an exciting, open source research project. At the same time, I’m starting to get a bit of impostor syndrome because everyone just seems to be way smarter and more experienced than me. I still have a long way to go to catch up with them!

What to expect from me?

I’ll be working alongside the rest of the team to close the remaining issues that keep us from shipping a first working node that can participate in a network.

Besides helping with the core development as well as improving the testing utilities, a main focus of my work will be on improving the documentation.

Great documentation leads to greater adoption and more external contributions, and in general plays a crucial role in the success of open source projects.

The fact that I am new to the blockchain field myself can be an advantage here, since I can relate very well to the questions people hope the documentation will answer.

What about thoughtram and MachineLabs?

If you’ve been following my work you may be left wondering about what happens to thoughtram and MachineLabs.

Although I will continue to be involved in the overall management of thoughtram, I will no longer have much to do with the day-to-day business. That said, we have a great team of trainers who still live and breathe Angular and deliver excellent workshops. In fact, Pascal and Dominic just recently added tons of new courseware and we have many more courseware updates planned for the future.

Lots of the day-to-day tasks that one needs to take care of when running a business were already taken over by Elvira, Executive Assistant for both thoughtram and MachineLabs, who will continue to work closely with Pascal to steer the business.

To conclude, I am confident that thoughtram has a very bright future and I am grateful to have a strong team behind me supporting my career shift.

MachineLabs, on the other hand, is a different beast, since Machine Learning continues to be a major field of my personal interest next to Ethereum. I’m not gonna lie: in the future I’ll obviously have less time to dedicate to the project. That said, I will continue my work on MachineLabs just like everyone else on the team. Also, we just recently open-sourced the entire code base and are planning to shift to a decentralized governance model.

In that sense, MachineLabs aligns very well with my vision for an Ethereum-powered decentralized future. In fact, we’ll be at the forefront of the movement, building an open-source service that will be owned and governed by the community that runs it. This is not a sprint, it’s a marathon.

Wrapping up

I couldn’t be more excited about the future. We are heading into interesting times, both technologically and as a society. I’m a big believer in the decentralization movement and I believe it will keep us busy for the next 20 years or much longer. I’m excited to contribute to it and I encourage everyone to join the movement!

Rust will be the language of the future

Rust will be the language of the future. I bet on it.

What does it even take for a language to become successful?

Well, lots of things, I think, but at the very least it should have a unique selling point: something that makes the language stand out and is attractive to developers.

In 2009 I was all excited about Node.js. I was writing web servers with ASP.NET at the time. Suddenly evented I/O was there to teach us how all our threading was bad for scalability. That, combined with the promise of sharing code between the client and the server, got all of us excited… Yay, five years have passed since then.

Well, nowadays it seems it’s all about the march towards Go.

So what does Go bring to the table? First of all: it’s a compiled language. For the last thirty years or so we seem to have focused on interpreted languages or ones that run on a virtual machine. Think Java, Scala, C#, F#, JavaScript, Python, Ruby and so on.

Now it’s all the buzz about compiling to native code again. Even the .NET team just recently announced that you can now compile your C# code to native code to get the speed of C++ with the joy of C#.

So one of Go’s claims is to be fast just like C++ but without the pain of C++. It also offers a great answer to parallelism with goroutines and channels. In a few words: threads without the costs of threads[1]. Green threads. Yeah, sounds great! As a language geek, that got me hooked as well. But not for long. Go is simple. It’s easy to learn. Great. On the other hand, it’s just a little *too* simple for me. No generics, barely any functional programming influence, and it has nil (null). I quickly came to the conclusion that Go is not for me.

So, what’s up with Rust?

Rust is a language by the awesome folks at Mozilla. It’s entirely developed in the open (that alone is such a huge win!) and it makes big claims that have the potential to disrupt our current programming language world.

One of its biggest claims is that you can use Rust for projects that previously would have been written in C or C++: kernels, operating systems or browsers. Not even Go in its current form could make such claims because it’s not low level enough.

Rust, on the other hand, tries to achieve two things at once: being low level and being high level at the same time! This is the most exciting part for me! On the one hand, Rust is a very modern language. It has generics, traits, it is expression-oriented, has pattern matching, closures and a lot of other exciting features.

On the other hand, it is very low level, too. It doesn’t use garbage collection by default. It just defines a couple of new rules that the compiler enforces on you *at compile time*, which eliminates the need for a garbage collector. That said, you can still apply garbage collection to certain parts of your program if you need to!

But it’s not just about garbage collection. Rust also lets you take deep control over heap vs. stack allocations, which for instance wouldn’t be possible in Go because Go’s compiler uses escape analysis to figure out whether something should go on the heap. So basically, you can have all the low level control you need, when you need it!

But does that mean it’s restricted to low level systems programmers? I don’t think so! Speaking for myself, I’m certainly not a low level systems programmer. I wish I had C or C++ skills. Unfortunately I coded with pampers on for all my life. I used BASIC, Visual Basic, PHP, Java, C#, F#, JavaScript, but I never used C or C++[2].

Whatever! I’m currently getting my feet wet with Rust. I started to work on a web application framework for Rust that aims to be simple. Nickel.rs.

Does this look too low level to you (apart from the terrible syntax highlighting)?

extern crate http;
extern crate nickel;

use std::io::net::ip::Ipv4Addr;
use nickel::{ Nickel, Request, Response };

fn main() {
    let mut server = Nickel::new();

    fn a_handler (_req: &Request, res: &mut Response) {
        res.send("hello world");
    }

    server.get("/bar", a_handler);
    server.listen(Ipv4Addr(127, 0, 0, 1), 6767);
}

This is a simple nickel.rs application. I think it doesn’t look too scary for anyone who has ever written a web application in express.js. I’ll talk about it at the next Rust Bay Area meetup.

Leaving the language aside for a moment: the community is great, too!

I’m probably the one with the most stupid Rust questions on StackOverflow, but people invest a lot of time to put together great answers. And as I’m writing this, people on IRC are helping me get the wording right.

And dear reader, if you are looking at this from a JavaScript or Ruby angle, you might be surprised to find some familiar faces. Yehuda Katz, EmberJS and Ruby on Rails core developer, is currently hard at work on a package manager and several other libraries for Rust.

If you haven’t taken a look at Rust yet, I highly recommend doing so…

[1] Not trying to be accurate. Go ahead, Hacker News, kill me.
[2] Not 100% true. I tried C++ at the age of 11 but failed miserably.

Some advice for Go debugging with gdb

I’m currently getting my feet wet with Go. I quite like the language, but when it comes to debugging I’m still trying to get warm with what it offers. I’m just not the “printf” kind of guy. I want a real debugger which I can use to step through the program flow to investigate (and manipulate) state.

Go programs can be debugged using the gdb debugger. I’m slowly getting warm with it and I thought I’d share some advice.

1. There is a Sublime plugin for gdb, but my current feeling is that I’d rather use gdb directly on the command line.

2. Create these two aliases for your command line:

alias god='go build -gcflags "-N -l"'
alias godrun='go build -gcflags "-N -l" -o debug && gdb debug'

The god command just compiles your Go app with all useful debug flags and uses the default program name for the output file. The godrun command uses the hardcoded name debug as the output file name and directly feeds it to gdb, so it’s as easy as calling godrun to start debugging. Yay!

3. Put debug in your .gitignore file so you don’t commit this special debug build to your repository.

4. If you want to see the value of, say, a struct behind a pointer, the syntax in gdb is p *'&varName' (yep, asterisk outside the quotes, then the varName in quotes with an ampersand prepended).

5. Each print in gdb assigns the printed value to a temporary $-prefixed variable (e.g. $5) which can be used from then on (e.g. $5.SomeStructProperty + 1000).

6. Variables can be manipulated with set variable varName=value (e.g. set variable varName=1000 or set $5.SomeStructProperty = 1000).

Those are just random findings and might be totally obvious if you have ever worked with gdb before. However, chances are that if you are looking into Go, you are actually NOT coming from the lower level language ecosystem (e.g. C/C++) but rather from higher level ecosystems such as Node.js or .NET.

On famo.us, semantic html and the future for web apps

Now with famo.us being out of private beta and the code being online, I was eager to take a closer look.

I stumbled over this talk by Steve Newcomb, the CEO of famo.us.

It’s *super* interesting to watch this stuff. Go watch it!

I’d love to take some time to play with famo.us sometime soon. What’s interesting is that it’s an entirely different paradigm than what AngularJS, Polymer, vue.js or even WebComponents are after.

They basically say that the gaming industry got rendering right (minute 43:35) and the web industry got everything else right, and they are trying to combine the two. They don’t believe in semantically correct HTML for web apps. They say this stuff is for documents and no one should care about it when building web apps. That’s also what Sencha is telling people.

It’s interesting, and a part of me agrees with that. On the other hand, HTML is awesome for designers to work with. People even start building things *outside the browser* (think: GitHub’s Atom editor) with web technology *because* of HTML/CSS.

I remember that working with Sencha was a pain from a designer’s perspective *because* you work very decoupled from the underlying HTML.

It really seems people are strongly divided about the direction the web is taking. Google seems to strongly believe in semantic HTML and in enriching it for web apps. Angular, Polymer, vue.js and WebComponents are all projects strongly driven by Google. From the perspective of a company driven by ad sales this makes perfect sense: semantic markup has great value for Google as a search engine / ad selling company.

On the other hand there are companies like Apple, Sencha and Famo.us which do *not* believe this is the right way forward.

I can’t say that I have a clear opinion about this stuff. Just that it’s an interesting observation.

StackWho is now ninya.io

A couple of weeks ago I blogged about StackWho – a project to search for users on StackOverflow by their location and skills. Today I’m happy to announce that StackWho is now ninya.io!

Why the new name?

I felt the name was just too tightly coupled to StackOverflow. I’m planning to increase the scope of the project by combining different sources. I’ll share more about that soon…

With the new name, we knew that it was also time for a really cool logo.

We were lucky to have @oriSomething join us and support the project with his awesome design skills! You should definitely send him hugs on twitter!

Is this a true open source project?

I have been asked this question a couple of times, so I want to make this 100% clear: there isn’t a single bit of this project’s code that is not on github or not covered by the MIT license. You’ll find all the code at our ninya organization on github, and if you ever find a repository that is missing the MIT license, just send a PR and I promise to merge it instantly. Everything about the project will remain open source as long as I am in charge of things. And btw, you should really star the project here!

That being said, I have bills to pay for this project, so if you are feeling generous, feel free to leave a tip here 🙂

What’s coming?

Yep, lots of things. There’s a new sync API brewing which will make it much easier to sync with more sources. It will also drop the Postgres dependency.

It’s also planned to overhaul the entire design of the website and to create a real ninya blog. Stay tuned.

Tag related sorting just landed in StackWho

StackWho just got a nice update which makes it much more useful! Previously, results were always sorted by the user’s reputation. That’s probably what you would expect if you only search by location.

However, once you start searching by tags, you would probably expect the sorting to relate to those tags. Fortunately, that’s exactly how things work now 🙂

This makes for some very interesting queries. If you combine multiple tags, the results will be sorted by the cumulated answer score across those tags.
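
As a rough sketch of the idea (hypothetical data shape, not the actual ninya code): sum each user’s answer score across the selected tags and sort descending by that sum.

```javascript
// Hypothetical sketch of cumulated-score sorting (not the real implementation):
// each user carries a map of tag → answer score; we sort by the sum of the
// scores for the selected tags.
function sortByCumulatedScore(users, tags) {
  function score(user) {
    return tags.reduce(function (sum, tag) {
      return sum + (user.tagScores[tag] || 0);
    }, 0);
  }
  // slice() so the input array isn't mutated
  return users.slice().sort(function (a, b) {
    return score(b) - score(a);
  });
}

var users = [
  { name: 'anna', tagScores: { javascript: 50, git: 5 } },
  { name: 'ben',  tagScores: { javascript: 20, angularjs: 80 } }
];
sortByCumulatedScore(users, ['javascript', 'git']);
// → anna (55) before ben (20)
```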

For instance, here are the users from Hannover, Germany sorted by cumulated answer score for the tags javascript, angularjs and git.

This is just scratching the surface of what’s planned for the future. I migrated the search to ElasticSearch to expand the feature set. Previously, the search was implemented on top of PostgreSQL (used as a NoSQL DB), which was fine for the beginning, but I felt I could move faster by migrating to ElasticSearch.

StackWho is entirely open source and MIT licensed. If you’d like to get involved, head over to the repository and help shape its future. If you don’t want to contribute directly, you can still help by starring the repository on github and sharing this blog post as much as you can 🙂

Watch out for more awesomeness coming soon!

Introducing StackWho

I’m working on this little side project and I thought I’d share some words about it.

So, I wanted to find other StackOverflow users from my city and filter them by skill set. Turns out that’s not possible by default with the search provided by StackOverflow. Oh wait, it is! They have a thing called “Candidate Search”. Unfortunately, a one-month subscription for the candidate search costs $1,000.

And of course the API provided by StackOverflow doesn’t make it easy to run such queries either. However, what you can do is scrape the entire[1] user data of StackOverflow and then build such a search yourself.

Introducing: StackWho.

It’s pretty rudimentary at this point. You can enter comma-separated locations to combine users from multiple cities (or to alias different spellings), and you can enter multiple tags which the users should have among their top answer tags.

The data is continuously synchronized with StackOverflow[1], which means user data should usually be only a couple of days old. The frontend is written with AngularJS and the backend is built with NodeJS, split into a query part and a sync part. It’s hosted on Heroku. Everything is MIT licensed and I’m happy to merge pull requests 🙂

Also, any feedback is highly welcome! Now go and check out the top users from San Francisco & Berkeley with strong AngularJS or NodeJS skills: Check it out here!

[1] I currently only sync the top 150k users

A flexible team – five things to get right

On the first of November 2012 I joined CouchCommerce to lead the frontend development. Here are five things that I highly enjoy here and that I think make my life easier and more fun.

1. Use a notebook + monitor approach rather than a PC + monitor approach

At my previous job I used a regular PC at the office with three monitors attached. Not that they forced me to use that; I could have had any system when I started my job. But since nobody worked on a notebook, it was the obvious choice to use a PC as well. It was the fastest machine one could think of: 16 GB RAM, Core i7, fast SSD drives. The machine was super swift and never let me down.

At my new gig here at CouchCommerce everyone used MacBooks with only one monitor attached. Since every web developer seems to work on a MacBook these days, it again was the obvious choice to align with that 🙂 It’s the fastest retina MacBook you can buy. I don’t like to make compromises when it comes to speed. I take my work seriously and the same goes for my tooling.

It was a bit of a hassle at first to get used to working with only two physical screens, but since you can use multiple virtual screens on OSX, I quickly got used to it. I don’t want to go into too much detail about how the switch to OSX went. There are plenty of other good posts on that topic.

The point is: using a notebook/MacBook + monitor is so much more convenient than using a stationary computer + monitor. Previously I used to work on three different machines:

– my office PC
– my PC at my home office
– my notebook

Multiple systems lead to multiple problems:

– I had to maintain three different machines with tooling, updates etc.
– I often forgot to push code, so when I worked from home I had to ask a team mate at the office to boot my computer. Or even worse, I forgot to push code after a day of working from home, so I was lacking the code when I returned to the office

I really enjoy having only one system. I’m sometimes in the middle of a coding session when I decide to close the lid and go home. At home I can directly continue with my work. I also always have the latest stuff with me, be it on the train, at a conference or at the office. And I only reboot once every couple of months or so. That makes my life so much easier.

2. Be flexible.

That goes hand in hand with the previous point. In my six months at CouchCommerce I’ve already had four different workplaces. The team went from 4 people to 10 people within that time. We rearrange desks and switch offices just as it fits our needs. That also relates to everyone working on a MacBook: the setup is just super lightweight. We also regularly try out new software that has the potential to make our life easier.

3. Use Dropbox and Google Docs for file management.

I remember file management was often an issue at my previous job. We used to put documents on a central file server. Files quickly went out of date, and manually versioning with multiple dated file names didn’t make it any easier.

I highly enjoy working together with team mates on one Google Doc. Everything stays in sync. Automatic version control makes sure nobody has to lose their sanity.

4. Use a company social network like Yammer.

Well, Yammer has its own dark corners. It’s not all milk and honey. However, we also tested several other company social networks and didn’t come across a better one so far. In general it really helps to keep everyone up to date. It’s an important cornerstone of our communication. We always know who is where. We share code improvements, interesting articles, fun stuff, who’s off, who’s working from home.

5. Speak English. Prepare to go international.

One thing that always bothered me at my previous employer was that we used too much German. Code comments were made in German. Some variables or methods were named in German. Technical documents were written entirely in German. That makes it much harder to scale the team.

Here at CouchCommerce three of my team mates don’t speak German as their mother tongue. We use about 95% English when communicating through Yammer and at least 60% English when talking directly to each other at the office. Our product was designed English-first, German-second. All our technical documents, the entire code base, everything is written in English. We can easily scale our team and employ people from all around the world.

Those things might or might not work for you. They work for us.

Improving the hannoverjs.de site

A couple of days ago we came together again at the EDELSTALL for our regular hannoverjs meetup. That day, someone on Facebook asked me where he should have looked to figure out the date earlier.

My immediate reaction was: “It’s every two months and it’s the fourth Thursday.”

Well, while I and a couple of others know about the rhythm, and it’s also mentioned on the hannoverjs.de site, we shouldn’t assume that everyone does. It’s a huge fail when we rely on the assumption that everybody knows about it for sure.

So I rolled up my sleeves and automated the appearance of the date on our website so that it always points to the correct next date for our event.
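
The actual code lives in the site’s repository; as a rough sketch of the date logic in JavaScript (assuming, purely for illustration, that meetups fall in every other month relative to some anchor month):

```javascript
// Returns the fourth Thursday of the given month (month is 0-based).
function fourthThursday(year, month) {
  var first = new Date(year, month, 1);
  // Days from the 1st to the first Thursday (getDay(): 0 = Sun, 4 = Thu)
  var offset = (4 - first.getDay() + 7) % 7;
  return new Date(year, month, 1 + offset + 21);
}

// Finds the next meetup date after `from`, assuming the meetup happens
// every two months relative to `anchorMonth` (0-based, e.g. 0 = January).
function nextMeetup(from, anchorMonth) {
  var cursor = new Date(from.getFullYear(), from.getMonth(), 1);
  for (var i = 0; i < 24; i++) {
    var rightMonth = (cursor.getMonth() - anchorMonth) % 2 === 0;
    var candidate = fourthThursday(cursor.getFullYear(), cursor.getMonth());
    if (rightMonth && candidate > from) return candidate;
    cursor.setMonth(cursor.getMonth() + 1);
  }
}

// e.g. nextMeetup(new Date(2013, 0, 25), 0)
// → the fourth Thursday of March 2013
```

Something like nextMeetup(new Date(), 0) then gives the date to render on the site.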

And while I was at it, I rewrote the whole page with AngularJS (it previously used jQuery and Backbone). One of the benefits is that the talk section is much more maintainable now.

1. All talks for one event are written in their own file which follows the pattern MM_YYYY.tpl.html (Example)

2. Old talks can be viewed by just changing the date in the URL. (http://hannoverjs.github.com/hannoverjs.de/#/talks/01/2013)
(but I haven’t finished extracting the complete history from the commit logs yet)

3. There’s a default talk template which is automatically shown when nobody has added a talk for the upcoming meetup yet. So whenever you view the talk section, it should always make sense 🙂

On a side note: if you feel something about the site or the meetup in general could be improved, don’t hesitate. Send PRs. Get in touch. We are a community.

From jQuery Deferred to RxJS ForkJoin

In a recent post I blogged about how to run some code after several asynchronous operations have finished, and how to access each return value of those operations no matter in which order they finish.

To make this happen I used the new jQuery Deferred API. While that is a great way, I would also like to show you that there are other (even more advanced) ways to do it.

One of my heartwarming interests is digging deep into functional reactive programming, and therefore into RxJS.

So let’s see how we can rewrite our example to make use of RxJS!


$(document).ready(function(){

    var example = function (){
        var deferred = new Rx.AsyncSubject();

        setTimeout(function(){
            deferred.OnNext(5);
            deferred.OnCompleted();
        }, 1000); //Will finish first

        return deferred;
    };

    var example2 = function (){
        var deferred = new Rx.AsyncSubject();
        setTimeout(function(){
            deferred.OnNext(10);
            deferred.OnCompleted();

        }, 2000); //Will finish second

        return deferred;
    };

    Rx.Observable
      .ForkJoin(example(), example2())
      .Subscribe(function(args){
            console.log("Example1 (Should be 5): " + args[0]);
            console.log("Example2 (Should be 10): " + args[1]);
          });
    });

As you can see, nothing ground-shaking happened to our code. Things are just named slightly differently.

Did we gain anything? Yes, we did! ForkJoin not only combines two observable streams and waits until both have finished, but in fact returns a new observable stream. Having an observable stream as a first-class object is a major benefit! For example, let’s say we are only interested in the result if the first stream matches a certain condition. We can just filter out the undesired values using the Where operator.

    Rx.Observable
      .ForkJoin(example(), example2())
      .Where(function(x){ return x[0] == 5; })
      .Subscribe(function(args){
            console.log("Example1 (Should be 5): " + args[0]);
            console.log("Example2 (Should be 10): " + args[1]);
          });

And once again, the Where operator returns a new observable stream. This is great in terms of composability. You can easily hand this new observable stream over to another component which will react to a stream of data that exactly matches the conditions the component was intended for.
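
For the curious, the essence of what ForkJoin does can be sketched in a few lines of plain JavaScript (my own toy version with plain callbacks, nothing like the real Rx implementation): run several async functions, collect their final values, and fire once with all results in argument order.

```javascript
// Toy version of the ForkJoin idea (not the Rx implementation): call `done`
// exactly once, after every async function has produced its value, with the
// results collected in argument order regardless of completion order.
function forkJoin(fns, done) {
  var results = new Array(fns.length);
  var pending = fns.length;
  fns.forEach(function (fn, i) {
    fn(function (value) {
      results[i] = value;
      if (--pending === 0) done(results);
    });
  });
}

forkJoin([
  function (cb) { setTimeout(function () { cb(5); }, 100); },  // finishes first
  function (cb) { setTimeout(function () { cb(10); }, 200); }  // finishes second
], function (args) {
  console.log(args); // [5, 10]
});
```

Of course, the real Rx operator gives you back an observable rather than taking a callback, which is exactly what makes it composable.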

Having events as first-class citizens which you can compose and pass on is what makes Rx so great.