This month's HumanTalks was hosted by Criteo. We had a hard time plugging the laptops into the projector and connecting to the wifi, so we started a bit late. This forced us to cut down the questions from 10 minutes to only 6 or 7.
Customer Solutions Engineer
The first talk was by Nicolas Baissas, my coworker at Algolia. He explained what the job of CSE really means. The job originated mostly in Silicon Valley for SaaS companies, but is now spreading to all sorts of companies.
The goal of a CSE is to make customers successful, by making them happy to use the product for as long as possible. This is especially important for SaaS companies, where revenue is based on monthly payments. You do not want your customers to leave your service, and the best way to keep them is to offer them a better and better service.
There will always be customers leaving, which is what is called churn, but the goal of a CSE is to make sure the company has negative churn. This means that the customers who stay compensate for those who left, because they now use more of the service than before: the extra revenue from customers growing their usage outweighs the revenue lost to cancellations. The CSE must ensure that fewer and fewer customers want to leave, by understanding what they are looking for, while bringing more and more value of the product to the already happy customers.
The only way to do that is to try to be part of their team, showing that you're on their side, and not pushing to sell more and more. CSEs are experts on their product, and they share that expertise with the customers by email, by Skype or, whenever possible, by meeting them directly. This happens before, during and after Algolia's deployment in their system. The CSE ensures that they use the service at its best, in the way that best fits their needs.
Month after month, they take a regular look at all the past implementations of their customers and reach out with advice if they see things that could be improved. Today Algolia has more than 1000 customers but only 4 CSEs, so this approach has trouble scaling. So in parallel they also work on making things easy for all the customers they do not have time to talk to.
They write guides, explaining specific features in detail, with examples. They have already explained the same things hundreds of times on calls, so they know how to explain them clearly; then it's just a matter of writing it down. They also write detailed tech tutorials. They all have a tech background and know how to code, so they can really understand what it takes to implement Algolia in an existing system.
The goal is to automate most of the recurring work. They built a new tab in the UI to analyze the current configuration of a customer and suggest improvements. Those are the exact same improvements they would have suggested during a one-to-one call, but because they have the experience and know how to code, they can simply make it self-service for users directly.
Some of the features are too complex to be correctly grasped just by reading the theory, like geosearch. So they created a demo with real data, using all the best practices and letting users play with it to see how it works. This worked really well: it transformed a theoretical feature into a real-life application, and in turn generated signups to the service.
What Nicolas really stressed is that the role of a CSE is to be as close to the customer as possible, in order to really understand, in a real-life scenario, what they want to do, with the very specifics of their project. But also to be as close as possible to the product itself, as part of the team that builds it, so they know exactly which features are ready and how they work. By doing both you can really bring your deep expertise of the service to the specific issues of the customer, while helping build the service with real-life examples of the issues customers actually have.
A CSE's ultimate goal is to not be needed anymore: the documentation and self-service resources should be enough for most users, and the core developers of the service should be in direct contact with users so they know how people really use their service.
JSweet
The second talk was about JSweet, a Java-to-JavaScript transpiler. A transpiler is like a compiler: it transforms one language into another (or even into the same one). There are Java-to-Java transpilers that can refactor code, and there are already a lot of tools transpiling to JavaScript (e.g. CoffeeScript, Dart and TypeScript).
Of these three, TypeScript seems to be the most popular today. It was originally created by Microsoft, but then Google started using it for Angular. TypeScript mostly adds a typed layer on top of JavaScript, but still lets you use vanilla JavaScript wherever you want.
There were already attempts at a Java-to-JavaScript transpiler in the past, namely GWT, but it was not as promising as announced and carried many inherent limitations. GWT is too much of a black box, and you couldn't use the generated JavaScript with regular JS APIs, so it quickly became outdated and the promise of having all of Java in JavaScript wasn't even fulfilled. Mostly, it was made for developers who did not want to learn JavaScript and just wanted their backend code to work in the frontend as well.
Later on, we saw the emergence of NodeJS, where one of the cool features was that you could use the same language on the backend and the frontend. NodeJS being JavaScript, it meant that you had to ditch your old Java/Ruby/PHP backend and put NodeJS in its place. JSweet follows the same "same language everywhere" logic, but this time the shared language is Java.
TypeScript syntax being really close to Java, it is easy to transpile from one to the other. And since TypeScript transpiles to JavaScript, you can transpile all the way from Java to JavaScript.
This lets you use all your JavaScript libraries (like underscore, moment, etc.) directly in your Java code. And you can also write your frontend code in Java, letting you follow the "one language everywhere" paradigm. Internally your Java code will be transpiled to TypeScript, then to JavaScript. Not all of Java is available in JSweet, though.
I never coded in Java so I am unsure how useful this really is, but it seemed like a nice project if you want to keep the same language from back to front.
Shadow IT
The next talk, by Lucas Girardin, was about what is called Shadow IT. Shadow IT encompasses all the IT devices in a company that fly under the radar of the official IT department. It includes all the employees' cell phones used to check personal emails during the day, the quantified-self devices (FitBit, etc.), the Excel files filled with custom macros, the personal Dropbox accounts, and even the contracts with external freelancers that are not approved by the IT department.
Granted, these kinds of issues only occur in big companies, where there are way too many layers between the real needs of employees and the top hierarchy trying to "rationalize" things. This talk nicely echoed the first talk about CSEs and reminded me why I quit consulting ;).
Anyway, the talk was still interesting. It started by explaining why these kinds of shadow developments appeared in the first place: mainly because the tools the employees were given were not powerful enough to let them do their job in a good environment. And because employees are getting more and more tech-savvy, they found their own ways to bypass the restrictions. They expect the same level of features in their day job as at home or on their smartphone. If their company cannot provide it, they will find other ways.
Unfortunately, these ways are frowned upon by the IT department. Maybe the Excel sheet an employee is creating is really useful, but maybe it is also illegal with regard to personal data storage. Or it will break as soon as the Excel version changes, or be completely lost when the employee leaves and no backup exists.
Then I started getting lost in the talk. Some of the concepts were so alien to what I experience every day that I had trouble understanding what it was really about. In the end he suggested a way to rationalize those various independent parts, by building a Platform that lets users build their own tools, even if they do not know how to code. This platform would get its data from a Referential that is accessible company-wide and holds the only real trustworthy source of data. And finally, the IT department would build Middlewares to help application A communicate with application B.
In the end, the IT department would stop building custom applications for the employees, and simply provide the tools to help them build these applications themselves. Still, it would have to create the middlewares to let all those parts talk to each other.
I cannot help but think that this does not fix the initial issue, but simply gives the IT department the feeling that it is in control again. As soon as the platform tools become too limited for employees to really do what they want (and this will happen really quickly), they will revert to other, more powerful tools, which will still be out of the IT department's reach. I fail to see how this is any different from before: instead of building the applications themselves, the IT department now builds the tools so the employees can build the applications, but it is still needed to make them work together and will still be a bottleneck.
The last talk was presented by Michel Nguyen and was about the Criteo Hero Team.
Michel told us about his team, the Hero team, which stands for Escalation, Release and Ops. He added an H in front of it to make it cooler. The team is in charge of all production releases as well as dealing with incidents.
They realized that whenever they put something in production, something else breaks. So they started coordinating the releases and gathering all the people who understand the infrastructure in the same team, to better understand where things could break.
They now use a 24/7 "follow the sun" schedule where teams in France and the US are always awake to follow potential issues. They have an escalation system with two layers, which lets them deal with minor issues without creating a bottleneck for the major ones. The Hero team is in charge of finding the root causes of issues, and if none can be found quickly enough, they just find a temporary workaround and dig deeper later. Once the issue is found and fixed, they do a postmortem to share with everybody what went wrong, how they fixed it, and ways to prevent it from happening again.
They use Centreon and Nagios as part of their monitoring and, after each production release, check the state of the metrics they follow to make sure nothing extremely abnormal appeared. If too many metrics changed too widely, they can assume something is not working correctly.
The current production environment of Criteo is about 15,000 servers, which weigh as much as 6 Airbuses and would be twice the height of the Empire State Building. They handle about 1200 incidents per year and resolve about 90% of them in escalation. The last 10% are incidents that depend on third parties, or one-shot incidents they never understood.
To be honest, even if the talk was interesting (and Michel is a very good speaker), it felt too much like a vanity metrics contest. I know Michel had to cut his talk from 30mn to 10mn in order to fit in the HumanTalks format, so I'd like to see the full version of it.
Conclusion
I did not feel like I was the target of the talks this time. I already knew everything about the CSE job because I work with them every day, I never coded in Java, I stopped working in companies big enough to have a Shadow IT issue and, as I said, the last one left me hungry for more.
Still, nice talks and nice chats afterwards. Except for the small hiccup with the projector and wifi at the start, the room was perfect and very comfortable and the pizza + sushi buffet was great.
Next month we'll be at Viadeo. Hope to see you there!
Note: I'm actually writing this blog post several months after the event, so my memory might not be the freshest here.
Anyway, we had 4 talks as usual for the HumanTalks, hosted at Deezer this time, with food courtesy of PaloIT.
Apache Zeppelin
First talk was by Saad Ansari, with an introduction to Apache Zeppelin. Their use-case was that they had a huge amount of tech data (mostly logs) and they had no idea what to do with it.
They knew they should analyze it and extract relevant information from it, but they had so much data, in so many forms, that they didn't really know where to start. Sorting it manually, even just to find which data was interesting and which was garbage, was a task too long for them to undertake.
So they simply pushed it all to Zeppelin. It understands the global structure of the data and displays it in tables and/or graphs. It basically expects CSV data as input, then lets you query it with a SQL-like syntax and display visual graphs. The UI even provides drag'n'drop for easier refinement.
I was a bit confused as to who the target of such a tool is. It is definitely not for a BigData expert, because the tool seems too basic. It wouldn't fit someone non-technical either, because it still requires writing SQL queries. It's for the developer in between, to get an overall feeling of the data without being too fine-grained. Nice for a first overview of what to do with the data.
As the speaker put it, it's the Notepad++ of BigData. Just throw garbage CSV and logs in it, and then play with SQL and drag'n'drop to extract some meaning from it.
The Infinite, or why it is smaller than you may think
Next talk by Freddy Fadel was a lot more complex to follow. I actually stopped taking notes to focus on what the speaker was saying and trying to grasp the concepts.
It was about the mathematical definition of infinity and what it implies, and how we can actually count it. I really cannot explain what it was about, but it was still interesting.
At first, I must say, I really wondered what such a talk was doing in a meetup like the HumanTalks, and was expecting a big WTF moment. I was actually pleasantly surprised and enjoyed the talk.
Why is it so hard to write APIs?
Then Alex Estela explained why it is so complex to build APIs, or rather, what the main points are that people fail at.
First of all, REST is an exchange interface. Its sole purpose is to ease the exchange between two parties. The quality of the API will only be as good as the communication between the various teams building it.
REST is just an interface; there is no standard and no specific language or infrastructure pattern to apply. This can be intimidating, and gives you so many possible ways to fail at building it.
Often, people build REST APIs the way they built everything else, thinking in SOAP terms and exposing actions, not resources. Often, they build an API that only exposes the internals of the system, without any wrapping logic. You also often see APIs that are too tailored to the specific needs of one application, or on the other hand APIs that let you do anything but were built with no specific use-case in mind, so you have to reverse-engineer them yourself to get things done.
The technical aspect of building an API is not really an obstacle. It's just basic JSON over HTTP. Even HTTP/2 does not radically change things; it will just need a few adjustments here and there, but nothing too hard. The issue is the lack of standards, which gives too many opportunities to do things badly. You can use specs like Swagger, RAML or Blueprint; they are all good choices with strengths and weaknesses. Pick one, you cannot go wrong.
There is no right way to build an API in terms of methodology. The one and only rule you have to follow is to keep it close to the users. Once again, an API is a means of communication between two parties. You should build it with at least one customer using it. Create prototypes, iterate on them, use real-world data and use-cases, deploy on real infrastructure. And take extra care of the Developer Experience. Write clear documentation, give examples of how to use it, showcase what you can build with it. Use it. Eat your own dog food. Exposing resources is not enough, you also have to consume them.
Make sure all teams that are building the API can easily talk to each other in the real world and collaborate. Collaboration is key here. All sides (producer and consumer) should give frequent feedback, as it comes.
To conclude, building an API is not really different than building any app. You have to learn a new vocabulary, rethink a bit the way you organize your actions and data, and learn to use HTTP, but it's not really hard.
What you absolutely need are users. Real-world users that will consume your API and use it to build things. Create prototypes, stay close to the users, get feedback early and make sure every actor in the project can communicate with the others.
Why do tech people hate sales people?
Last talk of the day was from Benjamin Digne, a coworker of mine at Algolia. He explained in a funny presentation (with highly polished slides⸮) why developers usually hate sales people.
Ben being a sales person himself made the talk all the more interesting. He has always worked in selling stuff, from cheeseburgers to human beings (he used to be a hyperactive recruiter in a previous life).
But he realized that dealing with developers is very different from what he did before. This mostly comes from the fact that the two worlds speak different languages. If you overly stereotype each side, you'll see the extroverted salesman only driven by money and the introverted tech guy who spends his whole day in front of his computer.
Because those two worlds are so different, they do not understand each other. And when you do not understand something, you're afraid of it. This really does not help in building trust between the two sides.
But things are not so bleak; there are ways to create bridges between the two communities. First of all, one has to understand that historically, sales people were the super rich superstars of big companies, while techies were the nobodies locked up in a basement somewhere.
Things have changed, and the Silicon Valley culture is making superheroes out of developers. Still, mentalities did not switch overnight and we are still in a hybrid period where both sides have to understand what is going on.
Still, the two worlds haven't completely merged. Try to picture for a minute where the sales people's offices are located at your current company. And where the R&D is. Are they far apart, or are they working together?
At Algolia, we try to build those bridges. We start by hiring only people with a tech background, no matter the position (sales, marketing, etc.), which makes speaking a common language easier. We also run what we call "Algolia Academies", where the tech team explains how some parts of the software work to non-tech employees. On the other hand, we have "Sales classes" where the sales team explains how they build their arguments and what a typical sales situation looks like. This helps each side better understand the job of the other.
We also have a no-fixed-seats policy. We have one big open space where every employee (including the founders) sits. We have more desks than employees and everyone is free to change desk at any time. Today we have a JavaScript developer sitting between our accountant and one of our recruiters, a sales guy next to an ops person, and another one next to two PHP developers. Mixing teams like this really helps avoid creating invisible walls.
Conclusion
The talks this time were kind of meta (building an API, sales/tech people, the infinity) and not really tech focused, but that's also what makes the HumanTalks so great. We do not only talk about code, but about everything that happens around the life of a developer. Thanks again to Deezer for hosting us and to all the speakers.
I went to the 11th Paris Vim Meetup last week. As usual, it took place at Silex Labs (near Place de Clichy), and we were a small group of about 10 people.
Thomas started by giving a talk about vimscript, then I talked about Syntastic. And finally, we all discussed vim and plugins, and common issues and how to fix them.
Do you really need a plugin?
This talk by Thomas was a way to explain what's involved in the writing of a vim plugin. It covers the vimscript aspect and all its quirks and was a really nice introduction for anyone wanting to jump in. Both Thomas and I learned vimscript from the awesome work of Steve Losh and his Learn Vimscript the Hard Way online book.
Before jumping into making your own plugin, there are a few questions you should ask yourself.
Isn't this feature already part of vim? Vim is such a huge piece of software that it is filled with features that you might not know. Do a quick Google search or skim through the :h to check if this isn't already included.
Isn't there already a plugin for that? Vim has a huge ecosystem of plugins (of varying quality). Check the GitHub mirror of vim-scripts.org for an easy to clone list.
And finally, ask yourself if your plugin is really useful. Don't forget that you can call any commandline tool from Vim, so maybe you do not have to code a whole plugin if an existing tool already does the job. I like to quote this vim koan on this subject:
A Markdown acolyte came to Master Wq to demonstrate his Vim plugin.
"See, master," he said, "I have nearly finished the Vim macros that translate Markdown into HTML. My functions interweave, my parser is a paragon of efficiency, and the results nearly flawless. I daresay I have mastered Vimscript, and my work will validate Vim as a modern editor for the enlightened developer! Have I done rightly?"
Master Wq read the acolyte's code for several minutes without saying anything. Then he opened a Markdown document, and typed:
:%!markdown
HTML filled the buffer instantly. The acolyte began to cry.
Anybody can make a plugin
Once you know you need your plugin, it's time to start, and it's really easy. @bling, the author of vim-bufferline and vim-airline, two popular plugins, didn't know how to write vimscript before starting to write those two. Everybody has to start somewhere, so it is better to write a plugin that you would use yourself; this will give you more motivation to do it.
A vim plugin can add almost any new feature to vim. It can be new motions or text objects, a wrapper around an existing commandline tool, or even some syntax highlighting.
The vimscript language is a bit special. I like to say that if you've ever had to write something in bash and did not like it, you will not like vimscript either. There are initiatives, like underscore.vim, to bring a bit more sanity to it, but it is still hackish anyway.
Vimscript basics
First things first, the variables. You assign variables in vimscript with let a = 'foo'. If you ever want to change the value of a, you'll have to re-assign it, using the let keyword again.
You add branching with if and endif and loops with for i in [] and endfor. Strings are concatenated using the . operator and list elements can be accessed through their index (it even understands ranges and negative indices). You can also use Dictionaries, which are a bit like hashes, where each key is named (but will be forced to a string, no matter what).
You can define functions with the function keyword, but vim will scream if you try to redefine a function that was already defined. To suppress the warning, just use function!, with a final exclamation mark. This is really useful when developing and sourcing the same file over and over again.
Variables in vimscript are scoped, and the scope is defined in the variable name. a:foo accesses the foo argument, while b:foo accesses the buffer variable foo. You also have w: for window and g: for global.
WTF Vimscript
And after all these basics, we start to enter what-the-fuck territory.
If you try to concatenate strings with + instead of . (maybe because you're used to coding in JavaScript), things will kind of work. + will actually force the variables to become integers. In Vimscript, if a string starts with an integer, it will be cast to that integer: 123foo will become 123. If it does not, it will simply be cast to 0: foo will become 0.
This can get tricky really quickly, for example if you want to react to the word under the cursor and do something only if it is an integer. You'll get a lot of false positives that you do not expect.
Another WTF is that the equality operator == actually depends on the user's ignorecase setting. If you :set ignorecase, then "foo" == "FOO" will be true, while it will stay false if the option is not set. Having the default equality operator depend on the user configuration is... fucked up. Fortunately, Vimscript also has the ==# operator, which always matches case whatever the settings, so that's the one you should ALWAYS use.
Directory structure
Most Vim plugin managers (Bundle, Vundle and Pathogen) expect you, as a plugin author, to put your files in specific directories based on what they do. Most of this structure is actually taken from the vim structure itself.
ftdetect holds the code used to assign a specific filetype to files based on their name. ftplugin contains all the specific configuration to apply to a file based on its filetype (so those two usually work together).
For all the vim plugin writers out there, Thomas suggested using scriptease, which provides a lot of debugging tools.
Tips and tricks
Something you often see in plugin code is execute "normal! XXXXX". execute lets you pass the command to execute as a string, which allows you to build that string from variables. The normal! tells vim to execute the following keys as if they were typed in normal mode. The ! at the end of normal is important to bypass the user's mappings. With everything wrapped in an execute, you can even use special characters like \<CR> to act as an Enter press.
I use Syntastic a lot, with various linters. Linters are commandline tools that analyze your code and output possible errors. The most basic ones only check for syntax correctness, but some can even warn you about unused variables, deprecated methods or style violations (like camelCase vs snake_case naming).
I use linters a lot in my workflow, and all the code I push goes through a linter on our Continuous Integration platform (TravisCI). Travis is awesome, but it is asynchronous, meaning I will receive an email a few minutes after my push if the build fails. And this kills my flow.
This is where Syntastic comes into play. Syntastic gives you instant linter feedback while you're in vim. The way I have it configured is to run the specified linters on the file I'm working on whenever I save it. If errors are found, they are displayed on screen, on the lines that contain them, along with a small text message telling me what I did wrong.
It is then just a matter of fixing the issues until they all disappear. Because the feedback loop is so quick, I find it extremely useful when learning new languages. I recently started a project in python, a language I never used before.
The first thing I did was install pylint and configure Syntastic for it. Every time I saved my file, it was like having a personal teacher telling me what I did wrong, warning me about deprecated methods and teaching me the best practices from the get-go.
I really recommend adding a linter to your workflow as soon as possible. A linter is not something you add once you know the language, but something you use to learn the language.
Syntastic has support for more than a hundred languages, so there's a great chance that yours is listed. Even if your language is not in the list, it is really easy to add a Syntastic wrapper for an existing linter. Without knowing much Vimscript myself, I added 4 of them (Recess, Flog, Stylelint, Dockerfile_lint). All you need is a commandline linter that outputs errors in a parsable format (JSON is preferred, but any text output can work).
Conclusion
After those two talks, we all gathered together to discuss vim in a more friendly way, exchanging tips and plugins. Thanks again to Silex Labs for hosting us, this meetup is a nice place to discover vim, whatever your experience with it.
Note: These notes are from Adrien. Thank you very much Adrien!
For December, the HumanTalks were at LeBonCoin. After several days of conferences with dotJS, dotCSS and apiDays, Tim's fingers needed some typing REST, so he delegated the write-up to me.
Polyfills and transpilers, one code for every browser
First talk by Alexandre Barbier (@alexbrbr), explaining the why and how of progressive enhancement in JavaScript.
One of the main tasks of web developers is to ensure compatibility across browsers. And while things are getting easier (less painful) with the death (end of life support) of several versions of IE, the web is still the most hostile environment.
Once the target browsers have been defined, there are two different ways to do it.
The first is using polyfills, which consist in reimplementing an API in pure JavaScript when it is not natively available. First you need to detect if the feature is available; if not, you need to implement it.
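As a reminder of what that looks like in practice, here is a simplified sketch of a polyfill for Array.prototype.includes (the real polyfill handles more edge cases, like NaN and the fromIndex argument):

```js
// Simplified polyfill sketch: only define the method if the browser lacks it.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement) {
    for (var i = 0; i < this.length; i++) {
      if (this[i] === searchElement) {
        return true;
      }
    }
    return false;
  };
}

// Calling code does not care whether the implementation is native or not.
console.log([1, 2, 3].includes(2)); // true
```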
If you want to use the latest features of JavaScript, the ones that browsers have not implemented yet (such as EcmaScript 6/2015), you need to use a transpiler (a source-to-source compiler). More than 270 languages target JavaScript, from CoffeeScript to ClojureScript, along with Dart and TypeScript. One of the most used is Babel, which just hit its 6th version.
Transhumanism, the advent of the singularity
After transpilers, transhumanism, by Pierre Cointe (@pierre_cointe).
The talk presented the history of transhumanism, which grew out of eugenics as a way to willingly evolve the human species, this time through NBIC technologies (nanotech, biotech, information technology, cognitive science).
Pierre presented some of the projects associated with it, such as immortality, genome manipulation and consciousness transfer. Then he presented some of Raymond Kurzweil's predictions, which extrapolate Moore's law to place the singularity around 2030, the singularity being the point in time where a supercomputer would be more powerful than a human brain.
Developing applications for the TV
The next talk was by Mickaël GREGORI (@meekah3ll), who presented his experience developing applications for television.
Not that friendly a place either, with no standards, assorted SDKs, XML everywhere... After presenting the market, he focused on three products: Chromecast from Google, Roku, and Android TV. Most of the applications consist in creating a new dynamic channel.
To conclude, he talked a bit about a standard that may be on its way, the W3C working on a TV API.
How to be more productive with three methods
The fourth and last talk was by Thibault Vigouroux (@teaBough).
He presented three methods he uses every day to be more effective at what he does.
The first one was the Pomodoro technique, which consists in working 25 minutes on a task, fully focused, then taking a 5-minute break to rest and let the brain work in diffuse mode. He told us about Pomodoro Challenge, a flexible application that relies on gamification to get you used to the practice.
Then he presented a way to help choose a task: the Eisenhower matrix. For it, you need to label your tasks according to two criteria: importance and urgency.
Basically, you do what's both important and urgent, you delegate what is urgent but not important, and you decide when to do what is important but not urgent. (Note how I dropped the non-important, non-urgent category.)
To finish, he talked about how to get better at a task with deliberate practice, which he used to switch to the Colemak keyboard layout. Five components are vital for this:
Being focused on improving the result
Immediate Feedback
Easily repeatable exercises
Not really pleasant
Mentally Intense
Conclusion
Very diverse and interesting talks as usual. :) A good meetup to conclude the year!
After dotCSS last Friday, I started my week with dotJS on Monday. Same organizers, but a different theater. A bigger one was needed because dotJS is one full day of conferences for more than 1,000 attendees.
Overall, I liked dotCSS better (more compact, more human-sized, more great talks), but some nice things were said here as well, so I'll try to write it all down.
Christophe Porteneuve
To wake us all up, we started with Christophe Porteneuve, his not-so-French accent and very English expressions, fast delivery and way too bright slides.
At first I was a bit worried the talk was going to be something we had seen so many times before: speaking about callback hell and why it is bad, then introducing promises and why they are good.
Then he moved further into unknown territory, using promises with generators. He started by introducing the yield concept, going all the way to the ES7 await keyword. He said it himself that using promises with generators is a bit hackish and not the way they are meant to be used, but it was an interesting way to mix the concepts.
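To give an idea of the pattern (this is my own minimal sketch, not code from the talk; fetchUser and fetchPosts are hypothetical promise-returning functions), a tiny driver can run a generator that yields promises, which is conceptually what libraries like co do and what async/await later formalizes:

```js
// Run a generator function, resolving each yielded promise before resuming it.
function run(generatorFn) {
  const iterator = generatorFn();
  function step(previousValue) {
    const { value, done } = iterator.next(previousValue);
    if (done) return Promise.resolve(value);
    return Promise.resolve(value).then(step);
  }
  return step();
}

// Reads almost like synchronous code, even though everything is asynchronous.
run(function* () {
  const user = yield fetchUser('tim');      // hypothetical helper
  const posts = yield fetchPosts(user.id);  // hypothetical helper
  console.log(posts);
});
```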
And Christophe makes the same jokes in English as in French :)
By reading the docs, you're instantly part of the 1% best developers in the world.
Mathias Buus
Second talk was by Mathias Buus. There was a slide issue that forced him to redo the start of his talk, then there might have been a microphone issue because it was hard to hear what Mathias was saying. Or maybe he was just not speaking loudly enough.
It took me a while to understand what the topic of the talk was. I managed to figure out it was a project named hyperdrive, but it was only when he explained that it is a BitTorrent-like exchange format, done in JavaScript, that he got me hooked.
Hyperdrive uses something called smart diffing to split each file into chunks of data. It then computes a checksum for each chunk and stores all the checksums in a table, a bit like a dictionary. In the end, to download a complete set of files, you only need to download each chunk listed in the dictionary and combine them to recreate the files. Doing so, you never download the same content twice.
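This is roughly how I understood it; a toy Node.js sketch of the chunk-and-checksum idea (not hyperdrive's actual code, chunk size or hashing scheme):

```js
// Toy content-addressed storage: identical chunks are stored (and downloaded) only once.
const crypto = require('crypto');

const CHUNK_SIZE = 64 * 1024;  // arbitrary size for the example
const store = new Map();       // checksum -> chunk

function addFile(buffer) {
  const checksums = [];
  for (let offset = 0; offset < buffer.length; offset += CHUNK_SIZE) {
    const chunk = buffer.slice(offset, offset + CHUNK_SIZE);
    const checksum = crypto.createHash('sha256').update(chunk).digest('hex');
    store.set(checksum, chunk); // deduplicated: same content, same key
    checksums.push(checksum);
  }
  return checksums;             // the "dictionary" entry describing the file
}

function rebuildFile(checksums) {
  // Downloading is just fetching each listed chunk and concatenating them.
  return Buffer.concat(checksums.map(checksum => store.get(checksum)));
}
```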
There is also another optimisation of creating a recursive hash on top of each set of hashes, in some kind of tree structure, but I will not be able to explain it clearly here.
He ended his talk with a demo of it in action, actually streaming a 200MB video file from one browser to another, just by sharing a hash representation of the tree of hashes.
The talk was a bit slow to start, but the end result was impressive, and I liked that he took the time to explain and demystify the BitTorrent logic (even if I did not remember everything).
Samuel Saccone
The third talk was by Samuel Saccone, who wasn't speaking very loudly either. Samuel told us how you're supposed to debug a JavaScript application that becomes slow or unresponsive when used for a long period.
Such issues usually come from memory leaks. But as he so perfectly pointed out in his talk, it is not enough to know that. One actually has to be able to pinpoint the issue and fix it.
If you search for this issue on Google, you'll surely find some nice articles by Paul Irish or Addy Osmani and maybe even a Chrome dev tools video. Things look so nice and easy when others explain them, but when it's your turn to actually understand the complex Chrome UI and untangle your messy code, it's a whole new story.
I was glad to learn that I wasn't the only one struggling to fix that (and after discussing it with other people, the feeling was shared). Fixing memory leaks is not something we want to do. This should be the browser's or the framework's job. But as Samuel pointed out, framework developers are humans as well, and they do not always understand it either.
Usually, when you have a memory leak, it is because the garbage collector cannot get rid of some elements and keeps them in memory. It cannot delete them because something that is still alive keeps a reference to them. The most common examples are listeners attached to document, or keeping a handy reference to _app in your views (I plead guilty to this, back in my Backbone days).
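A typical shape of the problem, as I understand it (hypothetical code, not an example from the talk):

```js
// The view element is removed from the page, but document still references the
// scroll handler, and the handler closes over the element: neither can be collected.
function createView() {
  const element = document.createElement('div');
  document.body.appendChild(element);

  function onScroll() {
    element.style.opacity = window.scrollY > 100 ? '0.5' : '1';
  }
  document.addEventListener('scroll', onScroll);

  return {
    destroy() {
      element.remove();
      // Bug: document.removeEventListener('scroll', onScroll) is never called,
      // so the detached element leaks on every create/destroy cycle.
    }
  };
}
```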
He then walked us through the process of finding and fixing the bug, with a nice touch of humor. But the really nice piece of information is that we can now write regression tests that check for memory leaks.
By using drool, we have access to the current node count in the browser memory, so it's just a matter of running a set of actions repeatedly and seeing if that number grows or not.
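I have not used drool enough to quote its exact API from memory, so here is only the shape of such a test, with hypothetical helpers (runScenario and getNodeCount) standing in for what drool actually provides:

```js
// Leak regression test sketch: repeat a scenario and assert the node count stays stable.
// runScenario() and getNodeCount() are hypothetical stand-ins for the drool-provided bits.
async function assertNoLeak(runScenario, getNodeCount, iterations = 30) {
  const before = await getNodeCount();
  for (let i = 0; i < iterations; i++) {
    await runScenario(); // e.g. open a view, interact with it, close it
  }
  const after = await getNodeCount();

  if (after > before) {
    throw new Error(`Possible leak: node count grew from ${before} to ${after}`);
  }
}
```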
I do not often have to debug memory leaks, but when I do, I always have to re-watch the same video tutorials and try to make it work with my current app. At least now I know that I'm not the only one finding this hard, and I'll even be able to write tests to prevent this from occurring again.
Rebecca Murphey
The last talk of the first session was by Rebecca Murphey, who had the difficult task of speaking between us and the lunch. Four sessions in the morning might be a bit too much, maybe 3 would have been better.
Anyway, she spoke a bit about HTTP/2 and what it will change. She, too, had some microphone issues and I had a hard time following what she was saying. I was a bit afraid she was going to list everything HTTP/2 changes (something I had already seen several times recently, at ParisWeb and the perfUG for example). But no, instead she focused on asking more down-to-earth and sensible questions.
First: how will the server push data to the client? HTTP/2 lets the server pro-actively push data to the client. For example, if the client asks for an HTML page, the server can reply with CSS, JavaScript and/or images along with the HTML, speculating that the user will ask for them later anyway.
All the needed syntax is available in the protocol to allow that, but how will the server know what to send? Will it be some kind of heuristic guessing based on previous requests? Will it be some configuration in our nginx files, or a Lua module that can process a bit of logic? Nothing is implemented yet to do it in any webserver.
There are a few POC webservers that let you do it, but they exist only so we can test the various syntaxes and see which one is best. Nothing is ready yet; this is a new story we'll have to write.
The second question was: how are we going to debug HTTP/2 requests? The current Chrome and Firefox dev tools do not display the server push in the waterfall. Also, HTTP/2 being a binary protocol, all previous HTTP tools will need an upgrade to work with it.
Next: how are we going to transition from HTTP to HTTP/2? Most of the webperf best practices of HTTP/1 are either useless or counter-productive in HTTP/2. Are we going to have two builds, and redirect to one or the other based on the user's HTTP/2 support?
If we look at how CDNs are currently handling HTTP/2, we see that they are not ready either. At the moment, only Cloudflare implements it, but it does not (yet) provide a way to do server push.
At first, the low voice volume, my hungry belly and the generic explanation of HTTP/2 made me fear a boring talk. In the end, the questions asked were clever and made me think. We wanted HTTP/2. Now we have it. But we still have to build the tools to use it correctly, and the next few years will be spent toying with it, discovering usages, building stuff and letting best practices emerge. Can't wait to get started.
As usual, the food was excellent, and the mountain of cheese is always a nice touch. As usual, the main hall was also too crowded, with everybody crammed between the food tables and the sponsor booths.
After eating, but before getting back to the main speakers, we had the lightning talks session. There were many more of them than at dotCSS, which was nice.
Johannes Fiala showed us swagger-codegen, a tool to generate an SDK for any API that is exposed through Swagger.
Vincent Voyer, my colleague, shared with us the best way to publish ES6 modules today. Any module should be loadable either from a require call or a script tag, and be easily pushed to npm. Browser support is too low to directly push ES6 code, and the JS dev environment is too fragmented to safely push a browserify or webpack module.
The trick is to push an ES5-compatible version to npm and the CDNs, the one built by Babel for example, while still maintaining an ES6 code base for developers.
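If I remember the idea correctly, the setup looks roughly like this (a hedged sketch, not Vincent's exact configuration): keep the ES6 sources in src/, compile them to dist/ with Babel (plus a .babelrc with the es2015 preset) before publishing, and point npm's main field at the compiled output.

```json
{
  "name": "my-module",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "babel src --out-dir dist",
    "prepublish": "npm run build"
  },
  "devDependencies": {
    "babel-cli": "^6.0.0",
    "babel-preset-es2015": "^6.0.0"
  }
}
```

This way, consumers get plain ES5 from a require or a script tag, while the repository itself stays ES6.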
After that, Etienne Margraff did a demo of Vorlon.js, a local webserver to debug any application on any device by redirecting all debug messages to the webserver UI.
Then Maxime Salnikov tried to explain how Angular 2 works. I say tried because I did not get it. Maybe it comes from my aversion to Angular, or the strong Russian accent.
Finally, Nicolas Grenié did a commercial presentation of Amazon Lambda. It was not supposed to be commercial, I guess, just a way to explain how serverless microservices work and why they're a good thing, but as it only talked about Amazon Lambda it felt weird. Oh, and the serverless part only means that the server is on somebody else's infrastructure. Nevertheless, I was convinced by the power of the approach and would like to tinker with it.
Nicolas Bevacqua
After that, we got back to the main speakers. This is also when it started to get really hot in the theater, and it only got worse as we advanced into the evening. Man, it was hot. And as I'm complaining, I might add that the space available for my legs was too small, even on the first floor, and I didn't have much space for my elbows either, which made it quite hard (and sometimes painful) to take notes.
Anyway, back to real business. Nicolas might be better known under the ponyfoo alias. You might know him for his extensive and very useful series of blog posts about ES6.
Actually, if that's how you know him, you pretty much already know everything he said in his talk. Basically he went over all the goodness that makes writing ES6 code so much more enjoyable than ES5: arrow functions, spread/destructuring, default function values, rest parameters, string interpolation, let, const and other shiny new stuff.
I won't detail it all here, but I strongly invite you to read up on the subject, npm install --save-dev babel, and start using it right away.
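Just to give a taste, here are a few of those features packed into a couple of lines (standard ES6, nothing specific to the talk):

```js
const greet = (name = 'world') => `Hello, ${name}!`; // arrow function, default value, template string

const [first, ...rest] = [1, 2, 3, 4];               // destructuring and rest parameters
const { x, y } = { x: 10, y: 20 };                   // object destructuring

let merged = [...rest, x, y];                        // spread
console.log(greet(), first, merged);                 // "Hello, world!" 1 [2, 3, 4, 10, 20]
```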
Andre Medeiros
The next one totally deserves the award for best talk of the day. Andre had really simple, clear and beautiful slides, told us a nice programming story in small increments, and managed to convey complex concepts to the audience.
I was hooked really quickly and it's only at the end of the talk that he told us that he just taught us what reactive programming is, how it works, what problem it solves. Thanks a lot for that, this is the talk I enjoyed the most, and one I will remember for a long time. I even stopped taking notes at the end to keep my focus on the talk.
He walked us through the story of two linked variables, how changing one would affect the other, and the window of inconsistency this could create. He then changed our point of view and represented those changes on a timeline, where we do not need to know the value of each variable at any given time, only the events that lead to changes in the value. He compared it to the difference between our age and our birthday. We do not need to know how old we are at every second of our lives. We just need to know our birthday.
I could try to put into words what I learned from his talk, but this wouldn't do it justice. Instead, I encourage you to wait for the video of his talk, book 18 minutes in your busy schedule and watch it. It's worth it.
All of the concepts he talked about, and much much more, are implemented in RxJS.
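To make the idea a bit more concrete, here is a toy, hand-rolled version of the concept (not RxJS code): the number of clicks is never stored and mutated by hand, it is derived from the timeline of click events, which is exactly the model RxJS gives you, with far more operators.

```js
// A minimal event stream: values are derived from the timeline of events.
function createStream() {
  const listeners = [];
  return {
    subscribe(listener) { listeners.push(listener); },
    emit(event) { listeners.forEach(listener => listener(event)); },
    // scan folds the whole history of events into a current value, like age from a birthday.
    scan(reducer, seed) {
      const derived = createStream();
      let accumulator = seed;
      this.subscribe(event => {
        accumulator = reducer(accumulator, event);
        derived.emit(accumulator);
      });
      return derived;
    }
  };
}

const clicks = createStream();
clicks.scan(total => total + 1, 0).subscribe(total => console.log(`${total} clicks so far`));
document.addEventListener('click', event => clicks.emit(event));
```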
Did I tell you how great that talk was?
Eric Schoffstall
After another break, we continued with Eric Schoffstall. The theater was getting hotter and hotter, and it was getting really uncomfortable and harder to concentrate.
Still, Eric's talk was interesting, which made it easier. Eric is a fun guy (he ran for mayor of SF and did lobbying using Mechanical Turk workers), created gulp, and now tries to make WebRTC work everywhere.
WebRTC is still a cutting-edge technology. It is only really well supported by Chrome, and is hard to implement. There are a large number of quirks to know, and the basic dance to get two nodes to connect is a bit long (although it can be abstracted away in a few lines with the Peer module).
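To give an idea of that dance, here is roughly what the offering side looks like with the raw browser API (a sketch only: signaling is a hypothetical channel, a WebSocket for example, that you have to provide yourself, and the answering peer is left out entirely):

```js
// Offer side of a WebRTC data channel handshake, raw API.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

const channel = pc.createDataChannel('chat');
channel.onopen = () => channel.send('hello');

// Every ICE candidate has to be shipped to the other peer through your own signaling.
pc.onicecandidate = event => {
  if (event.candidate) signaling.send(JSON.stringify({ candidate: event.candidate }));
};

pc.createOffer()
  .then(offer => pc.setLocalDescription(offer))
  .then(() => signaling.send(JSON.stringify({ sdp: pc.localDescription })));

// ...and you still have to receive the remote answer and candidates:
// pc.setRemoteDescription(answer), pc.addIceCandidate(candidate), and so on.
```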
Where things really get complicated is when you need to make it work on Android, iOS and IE. And you need to, because as Eric said:
If you build an app that works only in Chrome, it's not an app, it's a demo.
Recent Android phones ship with Chrome, so it works well. But old versions have a crappy browser installed by default. Fortunately, if you package your app in a Cordova bundle using Crosswalk, you can force the webview to use a recent Chrome.
For iOS, we enter the hackish zone. There is something called iosrtc, which is a WebRTC polyfill written in Swift. It re-implements the methods and protocols in another thread, which makes integrating it with the webview quite challenging. For example, to play a video with it, you have to manually display the video from the other thread with absolute positioning on top of the webview.
This reminds me of the old hacks to do file uploads in HTML by overlaying a transparent SWF file above a file input to allow uploading several files at once. That was so hacky, on so many levels...
For IE, there is a proprietary plugin called Temasys that users need to install manually before being able to use WebRTC. And even once it is installed, you have to deal with the same positioning hacks as for iOS.
In the end, Eric created WebRTC Everywhere, which packs the solutions to all the common issues he found into one neat bundle. I liked his talk because it is always interesting to see the creative ways people find to fix complex issues.
Forbes Lindesay
In the next talk, Forbes, creator of the Jade HTML preprocessor, walked us through the various parts that compose a transpiler (compiler? post-processor? I don't know).
We're used to using these kinds of tools today: CoffeeScript, Babel, SCSS, postCSS and Jade (which actually had to be renamed to Pug because of legal issues...).
All of these tools (as we've already seen in the postCSS talk at dotCSS) are made up of three parts. First a lexer, which parses the source string into a list of tokens. Then a parser, which converts those tokens into a structured tree. And finally a code generator, which outputs it back as a string.
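To make that concrete, here is a toy version of the three stages for additions like 1 + 2 + 3 (nothing to do with Pug's actual code, just the pipeline in miniature):

```js
// 1. Lexer: string -> tokens
function lex(source) {
  return source.match(/\d+|\+/g).map(text =>
    text === '+' ? { type: 'plus' } : { type: 'number', value: Number(text) }
  );
}

// 2. Parser: tokens -> tree
function parse(tokens) {
  let tree = tokens.shift();                                   // first number
  while (tokens.length) {
    tokens.shift();                                            // consume the '+'
    tree = { type: 'add', left: tree, right: tokens.shift() }; // fold into the tree
  }
  return tree;
}

// 3. Code generator: tree -> string (here, a Lisp-flavored output)
function generate(node) {
  if (node.type === 'number') return String(node.value);
  return `(+ ${generate(node.left)} ${generate(node.right)})`;
}

console.log(generate(parse(lex('1 + 2 + 3')))); // "(+ (+ 1 2) 3)"
```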
I already had a general idea of how lexers and parsers work, and I guess people who went through a computer science curriculum had to build one at some point. But it was nice not to assume that everybody knows everything, and to re-explain this kind of stuff.
It might have been a bit too long sometimes, and could have been made shorter because some parts really seemed obvious. Anyway, as he said at the end of the talk, now the audience knows how this works and can contribute to Pug :)
Actually, Pug seems to be to HTML what postCSS is to CSS. You can build your own plugin to add to the Pug pipeline and maybe define your own HTML custom tags or attributes to transpile (compile? post-process? damn.) them into something else.
I still do not know how to build my own lexer/parser after this talk, but I liked the idea of making Pug modular and more high level. It also echoes all the good things that were said about postCSS.
Tim Caswell
Tim (aka @creationix) and his son Jack then did a live-coding session involving colored LEDs, Arduino boards and, maybe, JavaScript, but I'm not sure. By that time, I had moved to the ground level where the air was fresher.
I must say that I did not get what the message of this talk was. Tim explained that there are no programmers in his town, and he wanted to make coding easy and teach it around him, especially to kids. This is all well and a very nice idea... But then I failed to really understand the rest of the talk :/
Tim's son, Jack, managed to live-code something in a weird programming language to make some LEDs blink and a robot move. The language itself looked so low level to me that the real performance seemed to be how the kid managed to remember all the magic numbers to use. Really, having to input stuff like istart 0x70 or write 6 is absolutely not the idea of programming I would like to show to people who don't know anything about it.
Henrik Joreteg
So, after this small WTF talk, we're back to one of the best talks of the day, done by Henrik Joreteg.
He started his talk with something that makes a lot of sense. When you use a mobile phone, you're actually using a physical object, and when you use its touch screen you want it to react immediately. That's how we're used to physical objects reacting: immediately.
But the web of yesterday was designed for the desktop, with desktop hardware and desktop speed. The web of today is, in the best of cases, mobile-first. Still, this is not enough, because the web of tomorrow will be mobile-only. Every day there are more and more users who only use smartphones and have ditched their desktop browser.
Still, we, the web developer community, build our websites on desktop machines. And we test our code on the desktop as well, only testing on mobile emulators later in the cycle and on real devices at the very end of the chain, while we should actually do the opposite. We should all have physical phones next to our workstations and take them in our hands when we code for them. This will make us really feel what we are coding for.
He also quoted a tweet saying:
If you want to write fast code, use a slow computer
Which is absolutely true. I think we're guilty of thinking along the lines of "oh yes, it is a bit slow on mobile, but mobiles are getting more powerful every 6 months so this should be ok soon". But that's not true. Not everybody has the latest iPhone or fast bandwidth, but they still want a nice experience.
Google set some rules for their products, based on the number 6. I did not manage to write them all down, but it was something like:
max 60KB of HTML
max 60KB of CSS
max 60KB of JavaScript
60fps
max .6s to load the page
They managed to achieve them, and I think those are sensible values that we could all try to reach. Forcing ourselves to work under a budget will help us make things better. It might be hard, but it is not impossible.
He then continued by giving some advice on what we should do, and above all on what we should stop doing.
First of all, we should get back to the server-side rendering that we should never have stopped doing. Browsers are really fast at parsing HTML, much faster than they are at parsing and executing JavaScript and then building the DOM. There is no need to go all isomorphic/universal/whatever. Just push the HTML you know is going to be needed to the page. There's even a webpack config that can do that for you.
The second point is to really think about whether you need the whole framework you're using, or even whether you need a framework at all. Do we need a templating system when we have JSX? There is no need to parse and modify DOM elements when we have HTML-to-JavaScript transforms at build time.
Also, React proved that re-rendering the whole UI whenever the app state changes can actually be really lightweight, as long as you use a virtual DOM. If you strip everything down, in the end your whole app can be simplified as newVDom = app(state). You have only one state for your whole app; you process it, and it returns a virtual DOM. If you really need a little more structure for how the data flows, you can add Redux, which is only 2KB.
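In code, the newVDom = app(state) idea boils down to something like this; I sketched it with the standalone virtual-dom package, which is the kind of extraction he mentioned, so this is illustrative rather than code from the talk:

```js
// The whole UI is a pure function of the state, re-rendered through a virtual DOM.
var h = require('virtual-dom/h');
var diff = require('virtual-dom/diff');
var patch = require('virtual-dom/patch');
var createElement = require('virtual-dom/create-element');

function app(state) {
  return h('div', [h('h1', 'Count: ' + state.count)]);
}

var state = { count: 0 };
var tree = app(state);
var rootNode = createElement(tree);
document.body.appendChild(rootNode);

function setState(newState) {
  state = newState;
  var newTree = app(state);
  rootNode = patch(rootNode, diff(tree, newTree)); // only the actual changes hit the DOM
  tree = newTree;
}

setInterval(function () { setState({ count: state.count + 1 }); }, 1000);
```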
React is nice, but the real gold nugget in it is the virtual DOM. You can extract only this part from the React core for only 10% of the total size.
The last trick is to use the main browser thread only for UI rendering (vdom to DOM) and to do all the heavier computation asynchronously in Web Workers on the side. The UI thread only passes actions to the workers, which yield the new vdom back to the UI thread when they are done.
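A skeleton of that split, using nothing but postMessage (a sketch under the assumption that app() and reduce() exist and return plain, serializable objects; real implementations ship serialized vdom or patches):

```js
// worker.js — all the heavy lifting happens off the main thread.
let state = { count: 0 };
onmessage = event => {
  state = reduce(state, event.data); // apply the incoming action to the state
  postMessage(app(state));           // ship the new (serializable) vdom back
};

// main.js — the UI thread only forwards actions and renders what it receives.
const worker = new Worker('worker.js');
worker.onmessage = event => render(event.data); // render(): vdom -> DOM patching
document.addEventListener('click', () => worker.postMessage({ type: 'INCREMENT' }));
```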
You can see this in action on a website named pokedex.com, which apparently also works really well on old mobile devices.
I liked that talk as well, because he did not hesitate to challenge what we take for granted. It's healthy, once in a while, to clean up what we think we know about our tools, remove the dust and change the broken pieces. React introduced some really clever ideas, and even if the framework as a whole works nicely, you can still cherry-pick parts of it and go with the bare minimum.
Brendan Eich
And finally, the last talk of the day was by Brendan Eich, and it was really, really weird. I had to ask people afterwards to explain to me what it was about.
What was missing from this talk was a "previously, on Brendan Eich" recap. I felt like I had to catch up with a story without knowing the context. He talked about TC39 and asm.js without explaining what they are. In no specific order, he also talked about how FirefoxOS and Tizen, which were huge hopes as game changers, failed in the last few years. He explained that it was due to the fact that those platforms did not have any apps, and that people today want apps. But app developers don't want to code apps for platforms that have very few users, which creates a vicious circle.
He then continued, saying that if you build something for the web, you cannot do it in isolation; it has to fit into the existing landscape. He went on joking about Douglas Crockford and his minimalist views.
Then, he started killing zombies with chickens equipped with machine guns. And that crashed. So he started killing pigs with an axe.
To be honest, this talk made absolutely no sense to me, I am completely unable to synthesize the point of the talk.
Conclusion
I think I'm going to say the same thing as last year: dotJS is, for me, less interesting than dotCSS. Proportionally, there were far fewer inspirational talks, the sound wasn't great, and the place was less comfortable and got hotter and hotter throughout the day. On a very personal note, I also realized that my eyes are getting worse: even with my glasses on, the slides were a bit blurry when I sat in the back.
Last year I said "maybe I won't come next year", but I still came. This time I'm saying it again, removing the maybe. This is just no longer adapted to what I'm looking for in a conference. I'll still come to dotCSS and dotScale, though.