The frontend track featured some very different talks, ranging from performance to A/B testing. In that sense it was a bit different from the microservices track, which was more of a ‘one story’ thing.
Scaling A/B testing at Netflix – Alex Liu
Netflix takes A/B testing to the next level: they break the UI up into little pieces and run A/B tests on all of them at once. One user can be in several A/B tests at the same time. They run hundreds of these tests a year, which means they have to deliver 2.5 million unique packages a week (!!!!!!!). This in itself is astounding.
The problem with these packages is conditional dependencies. For example, search is updated and needs some new dependencies alongside some old ones (see slide below).
To build this at scale you don’t want someone picking the correct packages by hand. Instead they wrote a node module that looks at the docstring of a function/file, which can contain a ruleset explaining when it should or should not be included. During the build, the packages that should be part of a given variant are selected. On every request to the node server that serves the assets, the registry is consulted, the rules are applied, and the result is packaged, cached, and served from a CDN for the next person in the same A/B test group.
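The idea of rule-carrying docstrings could look something like the sketch below. Note that the annotation format (`@include-if`) and the function names here are invented for illustration; Netflix’s actual module and rule syntax weren’t shown in detail.

```javascript
// Hypothetical sketch: select modules for a build based on a rule in
// their leading docblock comment. The @include-if annotation format is
// an assumption, not Netflix's actual syntax.
const modules = {
  'search-v2.js': '/* @include-if: abTest=search-redesign */ module.exports = {};',
  'search-v1.js': '/* @include-if: abTest=control */ module.exports = {};',
  'player.js': '/* always included */ module.exports = {};',
};

// Extract an inclusion rule (if any) from a module's docblock.
function ruleOf(source) {
  const m = source.match(/@include-if:\s*(\w+)=([\w-]+)/);
  return m ? { key: m[1], value: m[2] } : null;
}

// Given a user's A/B test assignments, pick the modules for their package.
function selectModules(assignments) {
  return Object.keys(modules).filter((name) => {
    const rule = ruleOf(modules[name]);
    return !rule || assignments[rule.key] === rule.value;
  });
}

console.log(selectModules({ abTest: 'search-redesign' }));
// [ 'search-v2.js', 'player.js' ]
```

A build step (or a request-time resolver backed by a cache) can then bundle exactly the selected files for each test group.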
Fail fast, move faster (to improve that is, your goal should not be to fail).
Fullstack through microservices – Matteo Collina
Matteo talked about using data channels and RPCs (remote procedure calls) between the back end and the frontend, using data streams instead of request/response patterns. Instead of returning a delimited block of data you can return a channel. This makes it possible to receive and unpack, or send and connect, binary data directly to or from the frontend. Anything that is serializable to JSON, Node.js binary streams, or even other channels can be sent through this mechanism.
So to have a framework of these types of microservices they created Graft.
_Update: oh and he used this really cool cheap sensor to do his talk: SensorTag_
Building high quality services at Uber – Jake Verbaten
Raynos started with a funny story: his first-day assignment was to build a proxy service. Oh, and it would be deployed to about 50 machines. Oh, and they could never, ever go down.
Of course when you make something, you have to go over it again to “productionize” it. To show how, he walked through some of the tools they use at Uber to get this going.
The one that stood out the most was potter (which they’re still “open-sourcifying”). It sets up a skeleton (or scaffold) for your project, creates a GitHub repo, registers it at your CI server (Travis, Jenkins, wha-evah), and starts logging to Sentry and Graphite.
The engine used for node.js is the same one used in the Chrome/Chromium project: V8. Therefore problems that arise in node will also occur in the browser, which means that performance-wise you can apply the same principles.
This is the talk that had me looking up lots of things afterwards, so at the very least it was great food for thought.
One of the easiest performance gains you can apply is to be clear about your intentions to the browser/V8 engine. That means you don’t start by declaring an empty list and then fill it up with values of different types. Usually you should stay away from mixing types in one array altogether, but performance will drop severely if you are not upfront with the interpreter.
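The advice above can be shown in a couple of lines. V8 tracks an internal “elements kind” per array, and mixing types forces it to the most generic (slowest) representation:

```javascript
// Good: V8 sees only small integers and can keep a packed integer
// representation internally.
const ints = [1, 2, 3, 4];

// Bad: mixing numbers, strings and objects forces the generic kind,
// which is slower to iterate and operate on.
const mixed = [1, 'two', { three: 3 }];

// Also good practice: fill the array with its final element type right
// away instead of growing an empty one with heterogeneous values.
const doubles = new Array(4).fill(0).map((_, i) => i * 1.5);
console.log(doubles); // [ 0, 1.5, 3, 4.5 ]
```

Once an array has been downgraded to a more generic kind, it never upgrades back, so being upfront from the start is the whole trick.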
He also went into great detail about how you don’t want to keep changing an object’s structure after construction, because at some point V8 will give up and fall back to the unoptimizable, slow HashTable structure. But rather than try to reproduce that here, have a look at this blog post: A tour of v8 object representation
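A quick illustration of the hidden-class idea behind that advice (a sketch of the general V8 behavior, not code from the talk):

```javascript
// Objects constructed the same way share an internal "hidden class",
// so property access compiles down to a fixed offset lookup.
function Point(x, y) {
  // Assign all properties in the constructor, in one fixed order.
  this.x = x;
  this.y = y;
}

const a = new Point(1, 2);
const b = new Point(3, 4);
// a and b share one hidden class here: fast property access.

b.z = 5;    // b transitions to a new hidden class
delete b.x; // deletes are worse: this can push b into slow dictionary
            // (hash table) mode, which V8 cannot optimize

console.log(a.x + a.y, b.z); // 3 5
```

The takeaway is the same as for arrays: decide the shape up front and stick to it.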
David Clements did a workshop on working with streams and data from the frontend to the backend (and vice versa), which covered much of the same ground Matteo squeezed into 20 minutes. Check it out here: Slides