HumanTalks January 2016

Note: I'm actually writing this blog post several months after the event, so my memory might not be the freshest here.

Anyway, we had 4 talks as usual for the HumanTalks, hosted at Deezer this time, with food courtesy of PaloIT.

Apache Zeppelin

First talk was by Saad Ansari, with an introduction to Apache Zeppelin. Their use-case was that they had a huge amount of tech data (mostly logs) and they had no idea what to do with it.

They knew they should analyze it and extract relevant information from it, but they had so much data, in so many forms, that they didn't really know where to start. Sorting it manually, even just to find which data was interesting and which was garbage, was a task too long for them to tackle.

So they simply pushed it all to Zeppelin. It understands the global structure of the data and displays it in tables and/or graphs. It basically expects CSV data as input, then lets you use a SQL-like syntax to query it and display visual graphs. The UI even provides a drag'n'drop feature for easier refinement.

I was a bit confused as to who the target of such a tool was. It is definitely not for a BigData expert, because the tool seems too basic. It wouldn't fit someone non-technical either, because it still requires writing SQL queries. It's for the developer in between, to get an overall feeling of the data, without being too fine-grained. Nice to get a first overview of what to do with the data.

As the speaker put it, it's the Notepad++ of BigData. Just throw garbage CSV and logs in it, and then play with SQL and drag'n'drop to extract some meaning from it.

The Infinite, or why it is smaller than you may think

Next talk by Freddy Fadel was a lot more complex to follow. I actually stopped taking notes to focus on what the speaker was saying and trying to grasp the concepts.

It was about the mathematical definition of infinity and what it implies, and how we can actually count it. I really cannot explain what it was about, but it was interesting nonetheless.

At first I must say I really wondered what such a talk was doing in a meetup like the HumanTalks, and was expecting a big WTF moment. I was actually gladly surprised to enjoy the talk.

Why is it so hard to write APIs?

Then Alex Estela explained why it is so complex to build APIs, or rather, what the main points are that people fail at.

First of all, REST is an exchange interface. Its main and sole purpose is to ease the exchange between two parties. The quality of the API will only be as good as the communication between the various teams building it.

REST is just an interface, there is no standard and no specific language or infrastructure pattern to apply. This can be intimidating, and gives so many possible reasons to fail at building it.

Often people build a REST API like they built everything else, thinking in SOAP terms and exposing actions, not resources. Often, they build an API that only exposes the internals of the system, without any wrapping logic. You also often see APIs that are too tailored to the specific needs of one application, or, on the other hand, APIs that let you do anything but were built with no specific use-case in mind, so you have to reverse-engineer them yourself to get things done.

The technical aspect of building an API is not really an obstacle. It's just basic JSON over HTTP. Even HTTP/2 does not radically change things; it will just need a few adjustments here and there, but nothing too hard. The issue is the lack of standards, which gives so many opportunities to do things badly. You can use specs like Swagger, RAML or Blueprint; they are all good choices with strengths and weaknesses. Pick one, you cannot go wrong.

There is no right way to build an API in terms of methodology. The one and only rule you have to follow is to keep it close to the users. Once again, an API is a means of communication between two parties. You should build it with at least one customer using it. Create prototypes, iterate on them, use real-world data and use-cases, deploy on real infrastructure. And take extra care of the Developer Experience: write clear documentation, give examples of how to use the API, showcase what can be built with it. Use it. Eat your own dog food. Exposing resources is not enough; you also have to consume them.

Make sure all teams that are building the API can easily talk to each other in the real world and collaborate. Collaboration is key here. All sides (producer and consumer) should give frequent feedback, as it comes.

To conclude, building an API is not really different than building any app. You have to learn a new vocabulary, rethink a bit the way you organize your actions and data, and learn to use HTTP, but it's not really hard.

What you absolutely need are users. Real-world users that will consume your API and use it to build things. Create prototypes, stay close to the users, get feedback early and make sure every actor of the project can communicate with the others.

Why do tech people hate sales people?

Last talk of the day was from Benjamin Digne, a coworker of mine at Algolia. He explained in a funny presentation (with highly polished slides⸮) why devs usually hate sales people.

Ben being a sales person himself made the talk all the more interesting. He has always worked in selling stuff, from cheeseburgers to human beings (he used to be a hyperactive recruiter in a previous life).

But he realized that dealing with developers is very different from what he did before. This mostly comes from the fact that the two worlds speak different languages. If you overly stereotype each side, you'll see the extrovert salesman only driven by money and the introvert tech guy who spends his whole day in front of his computer.

Because those two worlds are so different, they do not understand each other. And when you do not understand something, you're afraid of it. This really does not help in building trust between the two sides.

But things are not so bleak, there are ways to create bridges between the two communities. First of all, one has to understand that historically sales people were the super rich superstars of the big companies. Techies were the nobodies locked up in a basement somewhere.

Things have changed, and the Silicon Valley culture is making superheroes out of developers. Still, mentalities did not switch overnight, and we are still in a hybrid period where both sides have to understand what is going on.

Still, the two worlds haven't completely merged. Try to picture for a minute where the sales people's offices are located at your current company. And where the R&D sits. Are they far apart, or are they working together?

At Algolia, we try to build those bridges. We started by hiring only people with a tech background, no matter the position (sales, marketing, etc.), which makes speaking a common language easier. We also run what we call "Algolia Academies", where the tech team explains to non-tech employees how some parts of the software work. On the other hand, we have "Sales classes", where the sales team explains how they build their pitches and what a typical sales situation looks like. This helps each side better understand the other's job.

We also have a no-fixed-seats policy. We have one big open space where every employee (founders included) is located. We have more desks than employees, and everyone is given the opportunity to change desks at any time. Today we have a JavaScript developer sitting between our accountant and one of our recruiters, a sales guy next to an op, and another one next to two PHP developers. Mixing teams like this really helps avoid invisible walls.

Conclusion

The talks this time were kind of meta (building an API, sales/tech people, the infinity) and not really tech focused, but that's also what makes the HumanTalks so great. We do not only talk about code, but about everything that happens around the life of a developer. Thanks again to Deezer for hosting us and to all the speakers.

Paris Vim Meetup #11

I went to the 11th Paris Vim Meetup last week. As usual, it took place at Silex Labs (near Place de Clichy), and we were a small group of about 10 people.

Thomas started by giving a talk about vimscript, then I talked about Syntastic. And finally, we all discussed vim and plugins, and common issues and how to fix them.

Do you really need a plugin?

This talk by Thomas was a way to explain what's involved in writing a vim plugin. It covered the vimscript aspect and all its quirks, and was a really nice introduction for anyone wanting to jump in. Both Thomas and I learned vimscript from the awesome work of Steve Losh and his Learn Vimscript the Hard Way online book.

Before jumping into making your own plugin, there are a few questions you should ask yourself.

Isn't this feature already part of vim? Vim is such a huge piece of software that it is filled with features that you might not know. Do a quick Google search or skim through the :h to check if this isn't already included.

Isn't there already a plugin for that? Vim has a huge ecosystem of plugins (of varying quality). Check the GitHub mirror of vim-scripts.org for an easy to clone list.

And finally, ask yourself if your plugin is really useful. Don't forget that you can call any commandline tool from Vim, so maybe you do not have to code a whole plugin if an existing tool already does the job. I like to quote this vim koan on this subject:

A Markdown acolyte came to Master Wq to demonstrate his Vim plugin.

"See, master," he said, "I have nearly finished the Vim macros that translate Markdown into HTML. My functions interweave, my parser is a paragon of efficiency, and the results nearly flawless. I daresay I have mastered Vimscript, and my work will validate Vim as a modern editor for the enlightened developer! Have I done rightly?"

Master Wq read the acolyte's code for several minutes without saying anything. Then he opened a Markdown document, and typed:

:%!markdown

HTML filled the buffer instantly. The acolyte began to cry.

Anybody can make a plugin

Once you know you need your plugin, it's time to start, and it's really easy. @bling, the author of vim-bufferline and vim-airline, two popular plugins, didn't know how to write vimscript before starting those two. Everybody has to start somewhere, and it is better to write a plugin that you would use yourself; this will give you more motivation to see it through.

A vim plugin can add almost any new feature to vim. It can be new motions or text objects, a wrapper around an existing commandline tool, or even some syntax highlighting.

The vimscript language is a bit special. I like to say that if you've ever had to write something in bash and did not like it, you will not like vimscript either. There are initiatives, like underscore.vim, to bring a bit more sanity to it, but it is still hackish anyway.

Vimscript basics

First things first, the variables. You assign variables in vimscript with let a = 'foo'. If you ever want to change the value of a, you'll have to re-assign it, using the let keyword again.

You add branching with if and endif, and loops with for i in [] and endfor. Strings are concatenated using the . operator, and list elements can be accessed through their index (it even understands ranges and negative indices). You can also use Dictionaries, which are a bit like hashes, where each key is named (but keys are always coerced to strings, no matter what).

You can define functions with the function keyword, but vim will scream if you try to redefine a function that was already defined. To suppress the warning, just use function!, with a final exclamation mark. This is really useful when developing and sourcing the same file over and over again.

Variables in vimscript are scoped, and the scope is defined in the variable name. a:foo accesses the foo argument, while b:foo accesses the buffer variable foo. You also have w: for window and g: for global.
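
Put together, those basics look something like this (a short, illustrative snippet):

    let g:greeting = 'Hello'

    function! Greet(name)
      " a: is the scope of function arguments, l: of function-local variables
      let l:message = g:greeting . ', ' . a:name
      return l:message
    endfunction

    for s:name in ['World', 'Vim']
      echo Greet(s:name)
    endfor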

WTF Vimscript

And after all these basics, we start to enter what-the-fuck territory.

If you try to concatenate strings with + instead of . (maybe because you're used to coding in JavaScript), things will kind of work. + will actually force the variables to become integers. In Vimscript, if a string starts with an integer, it will be cast to that integer: 123foo will become 123. If it does not, it will simply be cast to 0: foo will become 0.

This can get tricky really quickly, for example if you want to react to the word under the cursor and do something only if it is an integer. You'll have a lot of false positives that you do not expect.

Another WTF‽ is that the equality operator == is actually dependent on the user's ignorecase setting. If you :set ignorecase, then "foo" == "FOO" will be true, while it will stay false if the option is not set. Having the default equality operator depend on the user configuration is... fucked up. Fortunately, Vimscript also has the ==# operator, which always matches case regardless of the settings, so that's the one you should ALWAYS use.
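
A quick, illustrative session shows both quirks at once:

    echo '123foo' + 1    " 124: the string is cast to the integer 123
    echo 'foo' + 1       " 1:   the string is cast to 0

    set ignorecase
    echo 'foo' == 'FOO'  " 1 (true), because of ignorecase
    echo 'foo' ==# 'FOO' " 0 (false), whatever the user settings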

Directory structure

Most Vim plugin packagers (Bundle, Vundle and Pathogen) expect you, as a plugin author, to put your files in specific directories based on what they do. Most of this structure is actually taken from the vim structure itself.

ftdetect holds the code used to assign a specific filetype to files based on their name. ftplugin contains all the specific configuration to apply to a file based on its filetype (so those two usually work together).

For all the vim plugin writers out there, Thomas suggested using scriptease that provides a lot of debug tools.

Tips and tricks

Something you often see in plugin code is execute "normal! XXXXX". execute lets you pass the command to execute as a string, which allows you to build that string from variables. The normal! part tells vim to execute the following set of keys just like in normal mode. The ! at the end of normal is mandatory to bypass the user mappings. With everything wrapped in an execute, you can even use special chars like <CR> to act as an Enter press.
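
For example, the pieces fit together like this (illustrative):

    " delete the next l:count lines, bypassing any user mapping of dd
    let l:count = 3
    execute "normal! " . l:count . "dd"

    " special keys work inside double quotes: append a line break
    execute "normal! a\<CR>"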

Syntastic

After Thomas' talk, I briefly talked about Syntastic, the syntax checker for vim.

I use Syntastic a lot, with various linters. Linters are commandline tools that analyze your code and output possible errors. The most basic ones only check for syntax correctness, but some can even warn you about unused variables, deprecated methods or style violations (like camelCase vs snake_case naming).

I use linters a lot in my workflow, and every piece of code I push goes through a linter on our Continuous Integration platform (TravisCI). Travis is awesome, but it is asynchronous, meaning I will receive an email a few minutes after my push if the build fails. And this kills my flow.

This is where Syntastic comes into play. Syntastic gives you instant linter feedback while you're in vim. The way I have it configured, it runs the specified linters on the file I'm working on whenever I save it. If errors are found, they are displayed on screen, on the lines that contain them, along with a small text message telling me what I did wrong.
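
The configuration boils down to a few lines of vimrc. The exact options are a matter of taste; this is a minimal sketch using eslint for JavaScript and pylint for python:

    " populate and open the location list with the errors found on save
    let g:syntastic_always_populate_loc_list = 1
    let g:syntastic_auto_loc_list = 1
    let g:syntastic_check_on_wq = 0

    " pick the linters to run for each language
    let g:syntastic_javascript_checkers = ['eslint']
    let g:syntastic_python_checkers = ['pylint']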

It is then just a matter of fixing the issues until they all disappear. Because the feedback loop is so quick, I find it extremely useful when learning new languages. I recently started a project in python, a language I had never used before.

The first thing I did was install pylint and configure Syntastic for it. Every time I saved my file, it was like having a personal teacher telling me what I did wrong, warning me about deprecated methods and teaching me the best practices from the get-go.

I really recommend adding a linter to your workflow as soon as possible. A linter is not something you add once you know the language, but something you use to learn the language.

Syntastic has support for more than a hundred languages, so there's a great chance that yours is listed. Even if your language is not in the list, it is really easy to add a Syntastic wrapper for an existing linter. Without knowing much about Vimscript myself, I added 4 of them (Recess, Flog, Stylelint, Dockerfile_lint). All you need is a commandline linter that outputs the errors in a parsable format (json is preferred, but any text output can work).

Conclusion

After those two talks, we all gathered together to discuss vim in a more friendly way, exchanging tips and plugins. Thanks again to Silex Labs for hosting us, this meetup is a nice place to discover vim, whatever your experience with it.

HumanTalks December 2015

Note: These notes are from Adrien. Thank you very much Adrien!

For December, the HumanTalks were at LeBonCoin. After several days of conferences with dotJS, dotCSS and apiDays, Tim needed to give his typing hands some REST and delegated the write-up to me.

Polyfill and Transpilers, one code for every browser

First talk by Alexandre Barbier (@alexbrbr), who explained the why and how of progressive enhancement in js.

One of the main tasks of web developers is to ensure compatibility across browsers. And even if things are getting easier (less painful) with the death (end of life support) of several IE versions, the web is still the most hostile environment there is.

Once the target browsers have been defined, there are two different ways to do it.

The first is using polyfills, which consist in reimplementing a missing API in pure js. You detect if the feature is available, and if it is not, you implement it yourself.
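
The canonical shape of a polyfill is a feature test followed by a fallback implementation. A minimal sketch (a real polyfill would handle more edge cases):

    // provide Array.prototype.includes only to browsers that lack it
    if (!Array.prototype.includes) {
      Array.prototype.includes = function (needle) {
        return this.indexOf(needle) !== -1;
      };
    }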

If you want to use the latest features of js, the ones that are not implemented yet (such as EcmaScript 6/2015), you need to use a transpiler (a source-to-source compiler). More than 270 languages target js, from CoffeeScript to ClojureScript, along with Dart and TypeScript. One of the most used is Babel, which just hit its 6th version.

Transhumanism, the singularity advent

After transpilers, transhumanism, by Pierre Cointe (@pierre_cointe).

The talk presented the history of transhumanism, which grew out of eugenics as a way to willingly evolve the human species, and introduced the NBIC technologies (nanotech, biotech, information technology, cognitive science).

Pierre presented some of the projects associated with it, such as immortality, genome manipulation and consciousness transfer. Then he presented some of Raymond Kurzweil's predictions, which extrapolate an extended Moore's law to place the singularity around 2030, the singularity being the point in time where a super computer would be more powerful than a human brain.

Developing applications for the TV

The next talk was given by Mickaël GREGORI (@meekah3ll), who presented his experience developing applications for television.

Not that friendly a place either: no standards, proprietary SDKs, XML everywhere... After presenting the market, he focused on three products: Chromecast from Google, Roku, and Android TV. Most of the applications consist in creating a new dynamic TV channel.

To conclude, he talked a bit about a standard that may be on its way, the W3C working on a TV API.

How to be more productive with three methods

The fourth and last talk was given by Thibault Vigouroux (@teaBough).

He presented 3 methods he uses every day to be more effective at what he does.

The first one was the Pomodoro technique, which consists in working 25 minutes on a task, fully focused, then taking a 5-minute break to rest and let the brain work in diffuse mode. He told us about Pomodoro Challenge, a flexible application that relies on gamification to get you used to the practice.

Then he presented a way to help choose a task: the Eisenhower Matrix. For it, you need to label your tasks according to two criteria: importance and urgency.

Basically, you do what's both important and urgent, you delegate what's urgent but not important, and you decide when to do what is important but not urgent. (Note how I deleted the section about the non-important, non-urgent stuff.)

To finish, he talked about getting better at a task with deliberate practice, which he used to switch to the Colemak layout. 5 components are vital for this:

  1. Being focused on improving the result
  2. Immediate Feedback
  3. Easily repeatable exercises
  4. Not really pleasant
  5. Mentally Intense

Conclusion

Very diverse and interesting talks as usual. :) A good meetup to conclude the year!

dotJS 2015

After dotCSS last Friday, I started my week with dotJS on Monday. Same organizers, but a different theater. A bigger one was needed, because dotJS is one full day of conferences for more than 1,000 attendees.

Overall, I liked dotCSS better (more compact, more human-sized, more great talks), but some nice things were said here as well, so I'll try to write them all down here.

Christophe Porteneuve

To wake us all up, we started with Christophe Porteneuve, with his not-so-French accent, very English expressions, fast flow and way too bright slides.

At first I was a bit worried the talk was going to be something we had seen so many times before: speaking about callback hell and why it is bad, then introducing promises and why they are good.

Then he moved further into unknown territory, using promises with generators. He started by introducing the yield concept, going all the way to the ES7 await keyword. He said it himself: using promises with generators is a bit hackish and not the way they are meant to be used, but it was an interesting way to mix the concepts.
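
The core of the trick is a small runner that drives a generator, resuming it whenever a yielded promise resolves. A minimal sketch of the idea (this is roughly what libraries like co do):

    // drive a generator that yields promises
    function run(makeGenerator) {
      var it = makeGenerator();
      return new Promise(function (resolve, reject) {
        function step(value) {
          var result = it.next(value);
          if (result.done) return resolve(result.value);
          Promise.resolve(result.value).then(step, reject);
        }
        step();
      });
    }

    run(function* () {
      var user = yield fetch('/api/user').then(function (r) { return r.json(); });
      console.log(user.name); // reads like synchronous code
    });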

Slides are available here.

And Christophe makes the same jokes in English as in French :)

By reading the docs, you're instantly part of the 1% best developers in the world.

Mathias Buus

Second talk was by Mathias Buus. There was a slide issue that forced him to redo the start of his talk, then there might have been a microphone issue because it was hard to hear what Mathias was saying. Or maybe he was just not speaking loudly enough.

It took me a while to understand what the topic of the talk was. I managed to understand it was a project named hyperdrive, but it was only when he explained that it is a bittorrent-like exchange protocol, written in JavaScript, that he got me hooked.

Hyperdrive uses something called smart diffing to split each file into chunks of data. It then computes a checksum for each chunk and stores all the checksums in a table, a bit like a dictionary. In the end, to download a complete set of files, you only need to download each chunk listed in the dictionary and combine them to recreate the files. Doing so, you never download the same content twice.

There is also another optimisation: building a recursive hash on top of each set of hashes, in some kind of tree structure (essentially a Merkle tree), but I will not be able to explain it clearly here.
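
The chunk-and-checksum part itself is simple to sketch. This is not hyperdrive's actual format, just the general principle in a few lines of node:

    var crypto = require('crypto');

    // split a buffer into fixed-size chunks, indexed by their checksum
    function chunkify(buffer, chunkSize) {
      var chunks = {};   // checksum -> chunk content, deduplicated
      var manifest = []; // ordered checksums, enough to rebuild the file
      for (var i = 0; i < buffer.length; i += chunkSize) {
        var chunk = buffer.slice(i, i + chunkSize);
        var sum = crypto.createHash('sha256').update(chunk).digest('hex');
        chunks[sum] = chunk; // identical chunks are stored only once
        manifest.push(sum);
      }
      return { chunks: chunks, manifest: manifest };
    }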

He ended his talk with a demo of it in action, actually streaming a 200MB video file from one browser to another, just by sharing a hash representation of the tree of hashes.

The talk was a bit slow to start, but the end result was impressive, and I liked that he took the time to explain and demystify the bittorrent logic (even if I did not remember everything).

Samuel Saccone

Third talk was by Samuel Saccone, who wasn't speaking very loudly either. Samuel told us how you're supposed to debug a JavaScript application that becomes slow or unresponsive when used for a long period.

Such issues usually come from memory leaks. But as he so perfectly pointed out in his talk, it is not enough to know that. One actually has to be able to pinpoint the issue and fix it.

If you search for this issue on Google, you'll surely find some nice articles by Paul Irish or Addy Osmani, and maybe even a Chrome dev tools video. Things look so nice and easy when others explain them, but when it's your turn to actually understand the complex Chrome UI and untangle your messy code, it is a whole new story.

I was glad to learn that I wasn't the only one struggling to understand how to fix that (and after discussing with other people, the feeling was shared). Fixing memory leaks is not something we want to do. This should be the browser's or the framework's job. But as Samuel pointed out, framework developers are humans as well, and they do not understand it either.

Usually, when you have a memory leak, it is because the garbage collector cannot get rid of some elements and keeps them in memory. It cannot delete them because they are still referenced by an object that still exists. The most common examples are listeners attached to document, or keeping a handy reference to _app in your views (I plead guilty to this, back in my Backbone days).

He then walked us through the process of finding and fixing the bug, with a nice touch of humor. But the really nice information is that we can now write non-regression tests that check for memory leaks.

By using drool, we have access to the current node count in the browser memory, so it's just a matter of running a set of actions repeatedly and seeing if that number grows or not.
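
The idea behind such a test is tiny. The sketch below is not drool's actual API (check its README for that); openAndCloseDialog stands for whatever hypothetical action you suspect of leaking:

    // the page probably leaks if repeating an action grows the DOM forever
    function countNodes() {
      return document.getElementsByTagName('*').length;
    }

    var initial = countNodes();
    for (var i = 0; i < 50; i++) {
      openAndCloseDialog(); // hypothetical suspect action
    }
    if (countNodes() > initial) {
      throw new Error('node count grew after 50 runs: probable leak');
    }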

I do not often have to debug memory leaks, but when I do, I always have to re-watch the same video tutorials and try to make it work with my current app. At least now I know that I'm not the only one finding this hard, and I'll even be able to write tests to prevent this from occurring again.

Rebecca Murphey

The last talk of the first session was by Rebecca Murphey, who had the difficult task of speaking between us and the lunch. Four sessions in the morning might be a bit too much; maybe 3 would have been better.

Anyway, she spoke a bit about HTTP/2 and what it will change. She, too, had some microphone issues, and I had a hard time following what she was saying. I was a bit afraid she was going to do a list of what HTTP/2 changes (something I had already seen several times recently, at ParisWeb and the perfUG for example). But no, instead she focused on asking more down-to-earth and sensible questions.

The first question is: how will the server push data to the client? HTTP/2 lets the server proactively push data to the client. For example, if the client asks for an HTML page, the server can reply with CSS, JavaScript and/or images along with the HTML, speculating that the user will ask for them later anyway.

All the needed syntax is available in the protocol to allow that, but how will the server know what to send? Will it be some kind of heuristic guessing based on previous requests? Will it be some configuration in our nginx files, or a Lua module that can process a bit of logic? Nothing is implemented yet to do it in any webserver.

There are a few POC webservers that let you do it, but they exist only so we can test the various syntaxes and see which one is best. Nothing is ready yet; this is a new story we'll have to write.

Second question was: how are we going to debug HTTP/2 requests? The current Chrome and Firefox dev tools do not display the server push in the waterfall. Also, HTTP/2 being a binary protocol, all previous HTTP tools will need an upgrade to work with HTTP/2.

Next: how are we going to transition from HTTP to HTTP/2? Most of the webperf best practices of HTTP are either useless or bad practice in HTTP/2. Are we going to have two builds, and redirect to one or the other based on the user's HTTP/2 support?

If we look at how CDNs are currently handling HTTP/2, we see that they are not ready either. At the moment, only Cloudflare implements it, but it does not (yet) provide a way to do server push.

At first, the low voice volume, my hungry belly and the generic explanation of HTTP/2 made me fear a boring talk. In the end, the questions asked were clever and made me think. We wanted HTTP/2. Now we have it. But we still have to build the tools to use it correctly, and the next few years will be spent toying with it, discovering usages, building stuff and letting best practices emerge. Can't wait to get started.

You can find the slides here

Lunch break and lightning talks

As usual, the food was excellent, and the mountain of cheese is always a nice touch. As usual, the main hall is also too crowded. Everybody is crammed between the food tables and the sponsor booths.

After eating, but before getting back to the real speakers, we had the lightning talks session. There were many more of them than at dotCSS, which is nice.

Johannes Fiala showed us swagger-codegen, a tool to generate an SDK for any API that is exposed through Swagger.

Vincent Voyer, my colleague, shared with us the best way to publish ES6 modules today. Any module should be loadable either from a require call or a script tag, and be easily pushed to npm. Browser support is too low to directly publish ES6 code, and the js dev environment is too fragmented to safely push a browserify or webpack module.

The trick is to push to npm and the CDNs an ES5-compatible version, the one built by Babel for example, while still maintaining an ES6 code base for developers.
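
In practice, this can be as small as a compile step run before publishing, with main pointing at the compiled output. A hypothetical package.json sketch (assuming the Babel CLI is installed and configured):

    {
      "main": "lib/index.js",
      "scripts": {
        "build": "babel src --out-dir lib",
        "prepublish": "npm run build"
      }
    }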

After that, Etienne Margraff did a demo of Vorlon.js, a local webserver to debug any application on any device, by redirecting all debug messages to the webserver UI.

Then Maxime Salnikov tried to explain how Angular 2 works. I say tried because I did not get it. Maybe it comes from my aversion to Angular, and the strong Russian accent.

Finally, Nicolas Grenié did a commercial presentation of Amazon Lambda. It was not supposed to be commercial, I guess, just a way to explain how serverless microservices work and why they are a good thing, but as it was only about Amazon Lambda, it felt weird. Oh, and the serverless part only means that the server is on somebody else's infrastructure. Nevertheless, I was convinced by the power of the approach and would like to tinker with it.

Nicolas Bevacqua

After that, we got back to the main speakers. And this is also when it started to get really hot in the theater. It only got worse the more we advanced into the evening; man, it was hot. And while I'm complaining, I might add that the available space for my legs was too small, even on the first floor, and I didn't have much space for my elbows either, which made it quite hard (and sometimes painful) to take notes.

Anyway, back to real business. Nicolas might be better known under the ponyfoo alias. You might know him for his extensive and very useful series of blog posts about ES6.

Actually, if that's how you know him, you pretty much already know everything he said in his talk. Basically, he went over all the goodness that makes writing ES6 code so much more enjoyable than ES5: arrow functions, spread/destructuring, default function values, rest parameters, string interpolation, let, const and other shiny new stuff.

I won't detail it here, but I strongly invite you to read on the subject, npm install --save-dev babel and start using it right away.
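
To give you a taste anyway, here is a contrived snippet packing several of those features together:

    // arrow functions, default parameter values and rest parameters
    const add = (first = 0, ...rest) => rest.reduce((sum, n) => sum + n, first);

    // destructuring and string interpolation
    const { name, version } = { name: 'dotJS', version: 2015 };
    console.log(`${name} ${version}: ${add(1, 2, 3)}`); // "dotJS 2015: 6"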

Andre Medeiros

The next one totally deserves the award for best talk of the day. Andre had really simple, clear and beautiful slides; he told us a nice programming story in small increments and managed to convey complex concepts to the audience.

I was hooked really quickly and it's only at the end of the talk that he told us that he just taught us what reactive programming is, how it works, what problem it solves. Thanks a lot for that, this is the talk I enjoyed the most, and one I will remember for a long time. I even stopped taking notes at the end to keep my focus on the talk.

He walked us through the story of two linked variables, how changing one would affect the other, and the window of inconsistency this could create. He then changed our point of view and represented those changes on a timeline, where we do not need to know the value of each variable at any given time, but only the events that lead to changes in the value. He compared it to the difference between our age and our birthday. We do not need to know how old we are at every second of our life. We just need to know our birthday.

I could try to put into words what I learned from his talk, but this wouldn't do it justice. Instead, I encourage you to wait for the video of his talk, book 18 minutes in your busy schedule and watch it. It's worth it.

All of the concepts he talked about, and much much more, are implemented in RxJS.
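
To give a rough idea of what it looks like in code, here is a tiny RxJS 4-style example, where we never store the current value ourselves; we only describe how it derives from the stream of events:

    // turn a stream of click events into a running counter
    var counts = Rx.Observable.fromEvent(document, 'click')
      .map(function () { return 1; })
      .scan(function (total, one) { return total + one; }, 0);

    counts.subscribe(function (total) {
      console.log('clicked ' + total + ' times');
    });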

Did I tell you how great that talk was?

Eric Schoffstall

After another break, we continued with Eric Schoffstall. The theater was getting hotter and hotter, and it was becoming really uncomfortable and hard to concentrate.

Still, Eric's talk was interesting, which made it easier. Eric is a fun guy (he ran for mayor of SF and did lobbying using mechanical turks), created gulp, and now tries to make WebRTC work everywhere.

WebRTC is still a cutting-edge technology. It is only really well supported by Chrome, and it is hard to implement: there are a large number of quirks to know, and the basic dance to get two nodes to connect is a bit long (but can be abstracted into a few lines with the Peer module).
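
To give an idea of the abstracted version, here is roughly what the dance looks like with a Peer-style module (this sketch follows the simple-peer API; the signaling transport between the two sides is left up to you):

    var Peer = require('simple-peer');

    var peer = new Peer({ initiator: true });

    // ship this blob to the other side over any channel you like
    peer.on('signal', function (data) {
      sendToOtherPeerSomehow(JSON.stringify(data)); // hypothetical transport
    });

    // feed back whatever the other side emitted in its own 'signal' event
    onAnswerFromOtherPeer(function (data) { // hypothetical transport
      peer.signal(JSON.parse(data));
    });

    peer.on('connect', function () {
      peer.send('hello, directly from browser to browser');
    });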

But where things really get complicated is when you need to make it work on Android, iOS and IE. And you need to, because as Eric said:

If you build an app that works only in Chrome, it's not an app, it's a demo.

Recent Android phones ship with Chrome, so it works well there. But old versions have a crappy browser installed by default. Fortunately, if you package your app in a Cordova bundle using Crosswalk, you can force the webview to use the latest Chrome.

For iOS, we enter the hackish zone. There is something called iosrtc, a WebRTC polyfill written in Swift. It re-implements the methods and protocol in another thread, which makes integrating it with the webview quite challenging. For example, to play a video using it, you have to manually display the video from the other thread with absolute positioning on top of the webview.

This reminds me of the old hacks to do file uploads in HTML by overlaying a transparent SWF file above a file upload to allow uploading several files at once. This was so hacky, on so many levels...

For IE, there is a proprietary plugin called Temasys that users need to manually install before being able to use WebRTC. And even once it is installed, you have to deal with the same positioning hacks as for iOS.

In the end, Eric created WebRTC Everywhere, which packs the solutions to all the common issues he found into one neat bundle. I liked his talk because it is always interesting to see the creative ways people find to fix complex issues.

Forbes Lindesay

In the next talk, Forbes, creator of the Jade HTML preprocessor, walked us through the various parts that compose a transpiler (compiler? post-processor? I don't know).

We're used to using these kinds of tools today: CoffeeScript, Babel, SCSS, postCSS and Jade (which actually had to be renamed to Pug because of legal issues...).

All of these tools (as we've already seen in the postCSS talk at dotCSS) are made up of three parts: first a lexer, to split the code string into a list of tokens; then a parser, that converts those tokens into a structured tree; and finally a code generator, that outputs it back as a string.
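
The three stages are easier to see on a toy example. Here is a deliberately tiny pipeline that only understands additions:

    // 1. lexer: source string -> list of tokens
    function lex(src) {
      return src.match(/\d+|\+/g).map(function (text) {
        return { type: /\d/.test(text) ? 'number' : 'plus', text: text };
      });
    }

    // 2. parser: tokens -> tree
    function parse(tokens) {
      var node = { type: 'num', value: tokens.shift().text };
      while (tokens.length) {
        tokens.shift(); // consume the '+'
        node = {
          type: 'add',
          left: node,
          right: { type: 'num', value: tokens.shift().text }
        };
      }
      return node;
    }

    // 3. code generator: tree -> string (fully parenthesized output)
    function generate(node) {
      return node.type === 'num'
        ? node.value
        : '(' + generate(node.left) + ' + ' + generate(node.right) + ')';
    }

    console.log(generate(parse(lex('1 + 2 + 3')))); // "((1 + 2) + 3)"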

I already had a general idea of how lexers and parsers work, and I guess people who went through a computer science school had to build one at some point. But it was nice to not assume that everybody knows everything, and to re-explain this kind of stuff.

It might have been a bit too long sometimes, and could have been made shorter because some parts really seemed obvious. Anyway, as he said at the end of the talk, now the audience knows how this works and can contribute to Pug :)

Actually, Pug seems to be to HTML what postCSS is to CSS. You can build your own plugin to add to the Pug pipeline and maybe define your own HTML custom tags or attributes to transpile (compile? post-process? damn.) them into something else.

I still do not know how to build my own lexer/parser after this talk, but I liked the idea of making Pug modular and more high-level. It also echoes nicely all the good things that have been said about postCSS.

Tim Caswell

Tim (aka @creationix) and his son Jack then did a live-coding session involving colored LEDs, an arduino and, maybe, JavaScript, but I'm not sure. By that time, I had moved to the ground level, where the air was fresher.

I must say that I did not get what the message of this talk was. Tim explained that there are no programmers in his town, and he wanted to make coding easy and teach it around him, especially to kids. This is all well and a very nice idea... But then I failed to really understand the rest of the talk :/

Tim's son, Jack, managed to live-code something in a weird programming language to make some LEDs blink and a robot move. The language itself looked so low-level to me that the real feat seemed to be how the kid managed to remember all the magic numbers to use. Really, having to input stuff like istart 0x70 or write 6 is absolutely not the idea of programming I would like to show to people who don't know anything about it.

Henrik Joreteg

So, after this small WTF talk, we got back to one of the best talks of the day, given by Henrik Joreteg.

He started his talk with something that makes a lot of sense. When you use a mobile phone, you're actually using a physical object, and when you use its touch screen, you want it to react immediately. That's how we're used to physical objects reacting: immediately.

But the web of yesterday was thought for the desktop, with desktop hardware and speed. The web of today is, in the best of cases, mobile-first. Still, this is not enough, because the web of tomorrow will be mobile-only. Every day there are more and more users who use smartphones only and have ditched their desktop browsers.

Still, we, the web developer community, build our websites on desktop machines. And we test our code on the desktop as well, only testing on mobile emulators later in the cycle, and on real devices at the very end of the chain, while we should actually do the opposite. We should all have physical phones next to our workstations and take them in our hands when we code for them. This would make us really feel what we are coding for.

He also quoted a tweet saying:

If you want to write fast code, use a slow computer

Which is absolutely true. I think we're guilty of thinking along the lines of "oh yes, it is a bit slow on mobile, but mobiles are getting more powerful every 6 months, so this should be ok soon". But that's not true. Not everybody has the latest iPhone or fast bandwidth, but they still want a nice experience.

Google set some rules for their products, based on the number 6. I did not manage to write them all down, but it was something like:

  • max 60KB of HTML
  • max 60KB of CSS
  • max 60KB of JavaScript
  • 60fps
  • max .6s to load the page

They managed to achieve it, and I think they are sensible values that we could all try to reach. Forcing us to work under a budget will help us make things better. It might be hard, but not impossible.

He then continued by giving some advice on what we should do, and most of all, what we should stop doing.

First of all, we should get back to the server-side rendering that we should never have stopped doing. Browsers are really fast at parsing HTML, much faster than at parsing and executing js and then building the DOM. There is no need to go all isomorphic/universal/whatever; just push the HTML you know is going to be needed to the page. There's even a webpack config that can do that for you.

The second point is to really think about whether you need the whole framework you're using, or even a framework at all. Do we need a templating system when we have JSX? There is no need to parse and modify DOM elements when we have HTML-to-JavaScript transforms at build time.

Also, React proved that re-rendering the whole UI whenever the app state changes can actually be really lightweight, as long as you use a virtual DOM. If you strip everything down, in the end, your whole app can be simplified as newVDom = app(state). You have only one state for your whole app; you process it, and it returns a virtual DOM. If you really need a little more structure in how the data flows, you can add Redux, which is only 2KB.

React is nice, but the real gold nugget in it is the virtual DOM. You can extract only this part from the React core for only 10% of the total size.
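
The newVDom = app(state) idea fits in a handful of lines with a standalone virtual DOM library. A sketch using the virtual-dom npm package:

    var h = require('virtual-dom/h');
    var diff = require('virtual-dom/diff');
    var patch = require('virtual-dom/patch');
    var createElement = require('virtual-dom/create-element');

    // the whole app: a pure function from state to virtual DOM
    function app(state) {
      return h('div', [h('p', 'Count: ' + state.count)]);
    }

    var state = { count: 0 };
    var tree = app(state);
    var rootNode = createElement(tree);
    document.body.appendChild(rootNode);

    // on every state change, re-render everything and apply only the diff
    setInterval(function () {
      state.count++;
      var newTree = app(state);
      rootNode = patch(rootNode, diff(tree, newTree));
      tree = newTree;
    }, 1000);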

The last trick is to use the main browser thread only for the UI rendering (vdom to DOM) and to run all the heavier computations asynchronously in WebWorkers on the side. The UI only passes actions to the WebWorkers, which yield the new vdom back to the UI thread when they are done.
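
Stripped of any framework, the message-passing skeleton looks like this (a sketch; renderSomehow stands for the hypothetical vdom-to-DOM step on the UI side):

    // ui.js: forward user actions, render whatever the worker sends back
    var worker = new Worker('app-worker.js');
    document.addEventListener('click', function () {
      worker.postMessage({ type: 'INCREMENT' });
    });
    worker.onmessage = function (event) {
      renderSomehow(event.data); // hypothetical vdom-to-DOM rendering
    };

    // app-worker.js: own the state, do the heavy lifting off the UI thread
    var count = 0;
    self.onmessage = function (event) {
      if (event.data.type === 'INCREMENT') count++;
      self.postMessage({ tag: 'p', text: 'Count: ' + count });
    };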

You can see this in action on a website named pokedex.com, which apparently also works really well on old mobile devices.

I liked that talk as well, because he did not hesitate to challenge what we take for granted. It's healthy, once in a while, to clean up what we think we know about our tools, remove the dust and change the broken pieces. React introduced some really clever ideas, and even if the framework as a whole works nicely, you can still cherry-pick parts of it and go with the bare minimum.

Brendan Eich

And finally, the last talk of the day was given by Brendan Eich, and it was really, really weird. I had to ask people afterwards to explain to me what it was about.

What was missing from this talk was a "previously, on Brendan Eich" recap. I felt like I had to catch up with a story without knowing the context. He talked about TC39 and asm.js without explaining what they are. In no specific order, he also talked about how FirefoxOS and Tizen, which were huge hopes of game changers, failed in the last years. He explained that it was due to the fact that those platforms did not have any apps, and people today want apps. But app developers don't want to code apps for platforms that have very few users, which creates a vicious circle.

He then continued, saying that if you build something for the web, you cannot do it in isolation; it has to fit in the existing landscape. He went on joking about Douglas Crockford and his minimalist views.

Then, he started killing zombies with chickens equipped with machine guns. And that crashed. So he started killing pigs with an axe.

To be honest, this talk made absolutely no sense to me, I am completely unable to synthesize the point of the talk.

Conclusion

I think I'm gonna say the same thing as last year: dotJS is, for me, less interesting than dotCSS. Proportionally, there were far fewer inspirational talks, the sound wasn't great, and the place was less comfortable and kept getting hotter along the day. On a very personal note, I also realized that my eyes are getting worse; even with my glasses on, the slides were a bit blurry for me when I sat in the back.

Last year I said "maybe I won't come next year", but I still came. This time I'm saying it again, removing the maybe. This is just no longer adapted to what I'm looking for in a conference. I'll still come to dotCSS and dotScale, though.

dotCSS 2015

For the second year, I went to dotCSS. It took place at the same venue as last year, the Théâtre des Variétés. As usual, we only knew who would be speaking, but had no idea what they were going to talk about.

I met a few former colleagues (most of them had switched jobs in the past year), so it was a nice way to catch up on the news.

You can find wonderful photos of the event on flickr.

Rachel Andrew

The first talk of the day was by Rachel Andrew, who talked about CSS grids. Midway through her talk, I realized that I had already seen it not long ago, at ParisWeb. This time it was 18 minutes long instead of 50, which is a much more suitable format.

I see CSS grids as the logical extension of flexbox, where each element is aware that it is part of layout markup, and thus can react to CSS rules knowing much more context, and without being bound to its wrapping markup.

Long gone will be the days of faux-column layouts and display:table once CSS grid hits mainstream browsers. I personally have not dived into the flexbox world yet, and seeing what the future has in stock, I might hold off a bit longer and go with CSS grids once they are released.

CSS grids are the future of CSS layout; you should start to play with them right now (by feature-flipping them in your browser) so you'll be ready when the day comes.
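
For reference, a grid declaration looks like this (the syntax from the spec at the time, which may still evolve before browsers ship it unflagged):

    .page {
      display: grid;
      grid-template-columns: 200px 1fr;  /* fixed sidebar, fluid main area */
      grid-template-rows: auto 1fr auto; /* header, content, footer */
    }
    .sidebar { grid-column: 1; grid-row: 2; }
    .main    { grid-column: 2; grid-row: 2; }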

Andrey Sitnik

The next talk, by Andrey Sitnik, made me understand what postCSS is, for real. Like everybody, I've used autoprefixer, but I never understood it was just one plugin among hundreds in the postCSS ecosystem.

By itself, postCSS does nothing more than parse a CSS file into a tree representation and output it back as a string. But the fun part is that you can add any number of plugins between the start and the end of the pipeline. This means you can write plugins that take the tree representation of a CSS file as input, transform it as much as they want, and let postCSS output the new string representation of your file.

All the plugins follow the simple rule of doing one thing only (but doing it well), and they can all be piped together, in pure unix philosophy. As said earlier, one of the best-known plugins is autoprefixer, which adds all the required vendor prefixes to your properties, but others emulate features found in SCSS, for example (like nesting of selectors).
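
To make the pipeline concrete, here is what a toy plugin looks like with the postCSS 5-era API (the plugin name and the transformation are made up):

    var postcss = require('postcss');

    // a toy plugin: rewrite every `color: red` declaration to `color: tomato`
    var tomato = postcss.plugin('postcss-tomato', function () {
      return function (root) {
        root.walkDecls('color', function (decl) {
          if (decl.value === 'red') decl.value = 'tomato';
        });
      };
    });

    postcss([tomato])
      .process('a { color: red }')
      .then(function (result) {
        console.log(result.css); // "a { color: tomato }"
      });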

Andrey showed us a bunch of really useful plugins, stressing the fact that postCSS can do what SCSS does (using the precss plugin, for example), but that its goal is not to add syntactic sugar on top of CSS; it is to help improve the modularization, reusability and maintainability of CSS components.

In CSS, everything is global. A rule you write in a file can affect an element in another part of the application. Even reset.css files are global. And the cascade makes it really hard to know where properties are coming from.

With postCSS, you can split your code into modules (like button or header), creating one directory for each of them, where you put all the required html, css, js and images. The use meta-plugin even lets you enable a plugin for one module only, which helps you better handle the dependencies of your project.

In the past years, we've started to use BEM as a way to avoid collisions in our selectors and work around the global nature of CSS. But naming components in BEM can still be complex, and boring. postCSS of course provides @b, @e and @m at-rules to make writing BEM easier, but it also provides a better top-level abstraction that lets you write your nested classes like in the 1996 web, without caring about any conflict, and automatically rewrites them to make them unique, in a BEM sort of way. The example given was using React for the HTML rendering part, and I'm a bit unsure how this would work without React.

postCSS also provides plugins for handling extended media queries: CSS code that reacts not only to the window width, but to a parent element's dimensions or color. This can be useful, for example, to switch the text color of an element from black to white when its container changes its background to something darker.

In the same vein, postCSS has a plugin to apply a local reset to an element, without affecting every other element on the page. As it has knowledge of every parent element in the tree, it can know which properties need to be overwritten to get back to the default values.

I must say postCSS seems really awesome. It seems mature, and built by people who really use CSS every day and know the quirks we are facing. The component approach of the plugins and the modularization they provide are huge benefits for code maintenance.

But to be honest, this is also one new tool to add to the front-end pipeline and should be used only by people that understand the underlying issues the tool is solving, otherwise it only adds complexity.

If I had to remember only one thing from dotCSS, it's postCSS.

You can find the slides here: http://ai.github.io/postcss-isolation/

Daniel Eden

Then, Daniel Eden, Designer at Dropbox, told us more about how they manage to do CSS on a huge scale. Short answer is: they don't.

Daniel

Their codebase is 1,220 CSS files, for about 150,000 LOC, which is about 6% of the whole Dropbox codebase. This seems impressive, but it follows the same tendency as another web giant, Etsy, with its 2,000 files for 200,000 LOC.

How did that happen? Well, because a lot of people touch the CSS. When a new developer or a new team starts a new feature, they handle both the back-end and the front-end side. And most of them do not like writing CSS, so they do it quite badly. They are very good js and python developers who still do not understand the cascading principle of CSS, nor specificity, nor how to abstract CSS concepts. And still, they have to write CSS, and write a lot of it.

Developers are frustrated by the archaic way we have to test CSS: save the file, alt-tab, see if it looks good. As said earlier, everything can be overridden, because everything is global. The box model is counter-intuitive, and the industry best practices are really young, a few years old at most.

And when something doesn't work as expected in CSS, it is easy to write more CSS to fix it. And then to write more CSS to fix the fix that fixed the previous fix. In the end, it is just too easy to write more CSS; that's why the codebase grows so fast.

So Daniel took on the issue and started improving the whole CSS codebase, following a mantra I like a lot:

Move slow and fix things

His solution was not perfect, but it was a good starting point. He started by quantifying everything: knowing the number of lines of code (using cloc, then running CSSStats, which uses Parker underneath) to show in a nice and readable way what they were doing wrong. CSSStats outputs the number of font-sizes defined, the number of unique colors, the most specific selectors, and so on. When faced with such objective values, one can only react and try to make things better. As long as the issue is invisible, "things are working" and "CSS is not really programming", nothing will change.

He put a few rules in place in their process, like running new files through a linter, adding himself as a reviewer every time CSS was changed in some critical files, and writing a style guide. They also moved from SCSS to postCSS and managed to shed almost 80% of their LOC.

What I really enjoyed in this talk was how humble Daniel was. He told us what worked for them and the issues they faced. All it took was one guy who really cared about CSS code quality taking the lead to see things dramatically improve. Start with data, get metrics from your current codebase, explain and teach why it's bad and how to fix it, then provide some tooling to make it easier, and the results will come.

CSS Optimization

After the coffee break, we had the chance to see one (and only one) lightning talk. Only one person applied this year, which is a shame, because it was really interesting and I would have liked to see more of them. I will suggest a talk myself for next year.

The one and only talk was about a project named cssnano, a postCSS plugin for minifying CSS.

It is a set of plugins that try to optimize the CSS output so it takes the lowest possible number of bytes: removing whitespace, and rewriting things like colors, animation names, transforms and default values into shorter equivalents.

In essence, it does the same job as cssmin, but thanks to its modular approach it seems a better alternative in the long run.
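
Being a regular postCSS plugin, it slots into the same pipeline as everything else. A minimal sketch:

    var postcss = require('postcss');
    var cssnano = require('cssnano');

    postcss([cssnano])
      .process('a { color: #ff0000; margin: 10px 10px 10px 10px }')
      .then(function (result) {
        console.log(result.css); // something like "a{color:red;margin:10px}"
      });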

Una Kravets

The next talk was the mind-blowing one. Every dotCSS needs one, and this was the one.

Una Kravets works at IBM, and told us about mostly unknown properties for styling images, close to what we are used to doing in Photoshop. She started by giving an academic list of all the possible options, then gave us some real-life examples.

It is, for example, possible to change an image's contrast to make its grey background appear white, and thus be able to put it on a white page without having to edit the image in Photoshop beforehand.

Then she showed a few interpolation modes to merge two images together, and the various ways pixels can interact (taking the lightest or darkest value, doing a mix of both, only keeping the hue, etc.). At first, this looked a bit useless because I could not see any real-life usage of the technique; then she started giving more real examples and I was won over.

When you start mixing these already powerful filters with other CSS properties like masking and cropping, you can do really smart image composition.
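
The recipe behind most of those effects is a filter chain on a wrapper element plus a tinted overlay blended on top. A rough sketch (the values are my guesses, not one of her actual recipes):

    /* .photo is a wrapper element around the img */
    .photo {
      position: relative;
      filter: contrast(1.1) brightness(1.1) saturate(1.3);
    }
    .photo::after {
      /* a colored layer blended over the image to tint it */
      content: '';
      position: absolute;
      top: 0; right: 0; bottom: 0; left: 0;
      background: rgba(230, 115, 108, 0.2);
      mix-blend-mode: screen;
    }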

She showed us how to create those blurry background images in pure CSS; by applying the effect to all images in a photo gallery, we can give them a unified look. This seems really useful on an e-commerce website, when you are presenting a lot of different products with various sizes, backgrounds and colors. Using such a filter, they will all look like they are part of the same set.

I highly encourage you to have a look at CSSGram and her other projects. Her talk was really inspiring, and the effects she showed are much more than a simple showcase of the power of CSS; they have real-world usages that could greatly improve the UI and UX of the websites that adopt them. That being said, only the best unicorn devsigners will be able to harness them correctly, because it requires both design and CSS skills to understand all the possibilities.

Alan Stearns

After that, we had Alan Stearns, new co-chairman of the W3C CSS Working Group, who told us a bit more about how we, web developers, can help move the web forward.

The guy is a huge typography nerd and was unhappy about how fonts were rendered on the web, so he wrote a blog post about it. He was then contacted by the W3C to help them fix this at a larger scale.

His whole talk could have been summarized in one of his slides:

Write, Talk, Share and File bugs

We should write blog posts about what is hard, boring or impossible to do in CSS. We should write about the workarounds we found. We should talk about it at conferences and, most of all, we should file bugs directly in the browsers' issue trackers. I know it can be intimidating to file a bug that far up the chain, but this is actually the only way to get browsers to improve their rendering engines.

Platform.sh

During the next break, I talked briefly with Ori about his company, platform.sh. Platform is a PaaS that gives you the assurance that your production and staging environments are exactly the same, down to the running bytecode. This gives you full confidence that if it's working in staging, it will work the same in prod. It also lets you run your tests against your production DB. The config is defined in a declarative language close to Ansible (but not Ansible).

The idea is interesting, but moving the whole prod environment to such a new actor on the scene can be scary.

Tom Giannattasio

After the break, Tom Giannattasio told us more about advanced CSS animations.

Tom

The guy comes from the Flash world, where 3D modeling was a real pain, whereas it is really easy to do in CSS. The default tutorial you see everywhere is how to build a 3D cube in CSS. By tweaking it a bit, you can easily create other shapes, like a cylinder, and by applying the right background, you can simulate a barrel (like the ones usually found in video games).
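
For anyone who has never seen the classic cube tutorial, the gist is one rotated face per div (abridged to three faces here):

    .scene { perspective: 600px; }
    .cube {
      position: relative;
      transform-style: preserve-3d;
      animation: spin 8s linear infinite;
    }
    .face  { position: absolute; width: 200px; height: 200px; }
    .front { transform: translateZ(100px); }
    .right { transform: rotateY(90deg) translateZ(100px); }
    .back  { transform: rotateY(180deg) translateZ(100px); }

    @keyframes spin {
      to { transform: rotateY(360deg); }
    }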

Actually, this is not a real 3D object; these are just various div elements rotated, with a mask applied to simulate a perspective effect. It is actually possible to go quite far with 3D in pure CSS (like this guy who built an FPS).

Still, what you can achieve with only one div is quite limited, because any animation timing you attach to an element has a fixed duration. If you want to animate the same element with one timing function for one property and another for a second property, you have to resort to pseudo-elements or wrapper divs. By adding enough wrappers and playing with transform, blend-mode and opacity, you can build interesting demos.

But what use does it actually have in the web world? Well, seduction. Netflix did a parallax effect on their series selection view, for example. It is a pure gadget, but it is so nicely done that the user wants to play with it; it is one of those little touches that make the difference.

As we have already seen in the previous talk with blend-mode, we can do pretty nice and incredible things with CSS today. But should we?

For Tom, most of what the web needs today can be done with simple CSS. More complex usages, like 3D rendering, can be achieved through CSS as well, but the most advanced effects will require WebGL. And that is a whole different world, with a completely different learning curve.

Chris Eppstein

One of the last talks was given by Chris Eppstein, the creator of SCSS. He did not speak about SCSS, but gave a few plausible explanations of why CSS is often seen as not really programming.

Before the declarative version of CSS we know, there was a proposal for another styling language, close to Lisp in its syntax, but it was considered too complex for non-programmers. So yes, from its inception, CSS was designed around the assumption that its users were not programmers.

At first, variables in the language were considered a bad idea by its creator himself, for the exact same reason: too complex a concept for non-programmers. But what is programming anyway? Chris had a nice definition of it: programming is simply the art of splitting a big problem into smaller ones, then moving data from one place to another, possibly with some reformatting in between.

Following that definition, CSS is programming, as we are moving the pixel data from the stylesheet to the screen. The fact that the language is declarative and does not have loops or branching does not change anything.

Maybe this view of the CSS developer not being a real developer comes from the old days, when CSS was much easier? In its first version, CSS had 53 properties, whereas it now boasts 316. Or maybe it is based on the fact that developers think design is easy and does not require much skill?

In any case, even if either of those two ideas were true, systems have evolved and CSS is still part of them. It is tied to the HTML, it is tied to the JavaScript, and website and application codebases are getting bigger and bigger. Being tied like this, CSS complexity increases simply because the complexity of its related parts increases as well.

Chris' point was that CSS is not easy. Maybe it was, back in the day, but it is not anymore. And what we can do is limited by the language in which we express our concepts. In ancient Rome, multiplication was considered an extremely complex scientific achievement, simply because the Roman way of writing numbers was too convoluted. When people switched to Arabic numerals, multiplication became much easier. The language we use restricts what we can achieve.

This limitation of the language was the driving force that made Chris create SCSS. It added variables, functions and loops. People reacted quite badly against it at first, because it was breaking how CSS was designed. Over the years, we learned by experience that these new tools were indeed useful, and we are glad Chris pushed the boundaries of the language.

You can find the slides here

Daniel Glazman

The last talk of the day was by Daniel Glazman, a nice echo of last year, when he gave the opening one. I must confess that I slept through the major part of his talk, so I cannot really tell what it was about. I kind of remember him ranting about CSS, and that I had already seen this talk (or a very similar one) at another event, but I'll have to wait for the video to refresh my memory.

Conclusion

Awesome evening. A great line-up of really inspiring talks. Maybe the intensity decreased a bit at the end of the day, or maybe it was just me getting tired. The tool to remember from this session is definitely postCSS (and CSSStats). I personally really enjoyed the talks about blend modes and 3D, as well as the more meta talks about the CSS dev environment.

Big up to the dotConferences team, keep up the good work and I'll definitely come again next year.