HumanTalks December 2015

Note: These notes are from Adrien. Thank you very much Adrien!

For December, the HumanTalks were at LeBonCoin. After several days of conferences with dotJS, dotCSS and APIdays, Tim needed a rest from typing and delegated the note-taking to me.

Polyfills and Transpilers: one code for every browser

The first talk, by Alexandre Barbier (@alexbrbr), explained the why and how of progressive enhancement in JavaScript.

One of the main tasks of web developers is to ensure compatibility across browsers. And even if things are getting easier (less painful) with the death (end of life support) of several IEs, the web is still the most hostile environment.

Once the target browsers have been defined, there are two different ways to handle compatibility.

The first is using polyfills, which consist in reimplementing some API in pure JavaScript when it is not available. First you detect if the feature is available; if not, you implement it.
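For example, a minimal polyfill sketch (illustrative, not a spec-compliant implementation):

```js
// Feature detection first: only define the method if the browser lacks it.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (needle) {
    // Simplified: the real spec also handles NaN and a fromIndex argument.
    return this.indexOf(needle) !== -1;
  };
}
```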

If you want to use the latest features of JavaScript, the ones that have not been implemented yet (such as EcmaScript 6/2015), you need to use a transpiler (a source-to-source compiler). More than 270 languages target JavaScript, from CoffeeScript to ClojureScript along with Dart and TypeScript. One of the most used is Babel, which just hit its 6th version.
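To give an idea, here is the kind of transformation Babel performs (a sketch; the actual output differs slightly):

```js
// ES6 input
const add = (a, b) => a + b;

// Roughly the ES5 output
var add = function (a, b) {
  return a + b;
};
```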

Transhumanism, the advent of the singularity

After transpilers, transhumanism, by Pierre Cointe (@pierre_cointe).

The talk presented the history of transhumanism, which came from eugenics as a way to willingly evolve the human species, through NBIC technologies (Nanotech, Biotech, Information technology, Cognitive science).

Pierre presented some of the projects associated with it, such as immortality, genome manipulation and consciousness transfer. Then he presented some of Raymond Kurzweil's predictions, which rely on an extended Moore's law to place the singularity around 2030, the singularity being the point in time where a super computer would be more powerful than a human brain.

Developing applications for TV

The next talk was done by Mickaël GREGORI (@meekah3ll), who presented his experience developing applications for television.

Not that friendly a place either, with no standards, proprietary SDKs, XML files... After presenting the market he focused on three products: ChromeCast from Google, Roku, and Android TV. Most of the applications consist in creating a new dynamic TV channel.

To conclude, he talked a bit about a standard that may be on its way, the W3C working on a TV API.

How to be more productive with three methods

The fourth and last talk was given by Thibault Vigouroux (@teaBough).

He presented three methods he uses every day to be more effective at what he does.

The first one was the Pomodoro technique, which consists in working 25 minutes on a task, fully focused, then taking a 5-minute break to rest and let the brain work in diffuse mode. He told us about Pomodoro Challenge, a flexible application that relies on gamification to get you used to the practice.

Then he presented a way to help choose a task: the Eisenhower Matrix. For it, you need to label your tasks according to two criteria: importance and urgency.

Basically, you do what's both important and urgent, you delegate the urgent but not important, and you decide when to do what is important but not urgent. (Note how I deleted the section about non-important, non-urgent stuff.)

To finish, he talked about how to get better at a task with deliberate practice, which he used to switch to the Colemak keyboard layout. Five components are vital for this:

  1. Being focused on improving the result
  2. Immediate Feedback
  3. Easily repeatable exercises
  4. Not really pleasant
  5. Mentally Intense

Conclusion

Very diverse and interesting talks as usual. :) A good meetup to conclude the year!

dotJS 2015

After dotCSS last Friday, I started my week with dotJS on Monday. Same organizers, but a different theater. A bigger one was needed because dotJS is one full day of conferences for more than 1,000 attendees.

Overall, I liked dotCSS better (more compact, more human-sized, more great talks), but there were some nice things said here as well, so I'll try to write them all down here.


Christophe Porteneuve

To wake us all up, we started with Christophe Porteneuve, with his non-French accent but very English expressions, fast flow and way-too-bright slides.

At first I was a bit worried the talk was going to be something we had seen so many times: speaking about callback hell and why it was bad, then introducing promises and why they were good.

Then he moved further into unknown territory, using promises with generators. He started by introducing the yield concept, going all the way to the ES7 await keyword. He said it himself: using promises with generators is a bit hackish and not the way they are meant to be used, but it was an interesting way to mix the concepts.
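A minimal sketch of the pattern (assuming a fetch-like promise API; real libraries such as co add proper error handling):

```js
// Drive a generator with promises: each yielded promise is resolved and its
// value is fed back into the generator, which then reads almost like sync code.
function run(genFn) {
  var gen = genFn();
  function step(value) {
    var result = gen.next(value);
    if (result.done) return Promise.resolve(result.value);
    return Promise.resolve(result.value).then(step);
  }
  return step();
}

run(function* () {
  var response = yield fetch('/api/user'); // yield a promise...
  var user = yield response.json();        // ...get its resolved value back
  console.log(user);
});
```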

Slides are available here.

And Christophe does the same jokes in English as in French :)

By reading the docs, you're instantly part of the 1% best developers in the world.

Mathias Buus

The second talk was by Mathias Buus. There was a slide issue that forced him to redo the start of his talk, then there might have been a microphone issue because it was hard to hear what Mathias was saying. Or maybe he was just not speaking loudly enough.

It took me a while to understand what the topic of the talk was. I managed to understand it was a project named hyperdrive, but it was only when he explained that it is a BitTorrent-like exchange format, done in JavaScript, that he got me hooked.

Hyperdrive uses something called smart diffing to split each file into chunks of data. It then computes a checksum for each chunk, and stores all the checksums in a table, a bit like a dictionary. In the end, to download a complete set of files, you only need to download each chunk listed in the dictionary and combine them to recreate the files. Doing so, you never download the same content twice.
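The deduplication idea, as I understood it, in a few lines (my own sketch, not hyperdrive's actual code):

```js
var crypto = require('crypto');

// Hash each chunk; identical chunks collapse onto a single table entry,
// so a set of files never stores (or downloads) the same content twice.
function buildChunkTable(chunks) {
  var table = new Map();
  chunks.forEach(function (chunk) {
    var sum = crypto.createHash('sha256').update(chunk).digest('hex');
    table.set(sum, chunk);
  });
  return table;
}
```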

There is also another optimization: building a recursive hash on top of each set of hashes, in some kind of tree structure, but I will not be able to explain it clearly here.

He ended his talk with a demo of it in action, actually streaming a 200MB video file from one browser to another, just by sharing a hash representation of the tree of hashes.

The talk was a bit slow to start, but the end result was impressive, and I liked that he took the time to explain and demystify the BitTorrent logic (even if I did not remember everything).

Samuel Saccone

The third talk was by Samuel Saccone, who wasn't speaking really loudly either. Samuel told us about how you're supposed to debug a JavaScript application that becomes slow or unresponsive when used for a long period.


Such issues usually come from memory leaks. But as he so perfectly pointed out in his talk, it is not enough to know that. One actually has to be able to pinpoint the issue and fix it.

If you search for this issue on Google, you'll surely find some nice articles by Paul Irish or Addy Osmani, and maybe even a Chrome dev tools video. Things look so nice and easy when others are explaining them, but when it's your turn to actually understand the complex Chrome UI and untangle your messy code, it's a whole new story.

I was glad to learn that I wasn't the only one struggling to understand how to fix that (and after discussing with other people, the feeling was shared). Fixing memory leaks is not something we want to do. This should be the browser's or framework's job. But as Samuel pointed out, framework developers are humans as well, and they do not understand it either.

Usually, when you have a memory leak, it is because the garbage collector cannot get rid of some elements and keeps them in memory. It cannot delete them because they still hold a reference to an object that still exists. The most common examples are listeners on document, or keeping a handy reference to _app in your views (I plead guilty to this, back in my Backbone days).
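A classic example of the pattern (sketch):

```js
// The view gets "destroyed", but document still references its handler,
// so the garbage collector can never reclaim the view or its data.
function View() {
  this.data = new Array(100000).join('leaky');
  this.onClick = this.onClick.bind(this);
  document.addEventListener('click', this.onClick);
}
View.prototype.onClick = function () { /* ... */ };
View.prototype.destroy = function () {
  // Forget this line and you have a leak:
  document.removeEventListener('click', this.onClick);
};
```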

He then walked us through the process of finding and fixing the bug, with a nice touch of humor. But the really nice information is that we can now write non-regression tests that check for memory leaks.

By using drool, we have access to the current node count in the browser memory, so it's just a matter of running a set of actions repeatedly and seeing if that number grows or not.

I do not often have to debug memory leaks, but when I do, I always have to re-watch the same video tutorials and try to make it work with my current app. At least now I know that I'm not the only one finding this hard, and I'll even be able to write tests to prevent this from occurring again.

Rebecca Murphey

The last talk of the first session was by Rebecca Murphey, who had the difficult task of speaking between us and lunch. Four sessions in the morning might be a bit too much; maybe three would have been better.


Anyway, she spoke a bit about HTTP/2 and what it will change. She, too, had some microphone issues and I had a hard time following what she was saying. I was a bit afraid she was going to do a list of what HTTP/2 changes (something I had already seen several times recently, at ParisWeb and the perfUG for example). But no, instead she focused on asking more down-to-earth and sensible questions.

The first one is: how will the server push data to the client? HTTP/2 lets the server pro-actively push data to the client. For example, if the client asks for an HTML page, the server can reply with CSS, JavaScript and/or images along with the HTML, speculating that the user will ask for them later anyway.

All the needed syntax is available in the protocol to allow that, but how will the server know what to send? Will it be some kind of heuristic guess based on previous requests? Will it be some configuration in our nginx files, or a Lua module that can process a bit of logic? Nothing is implemented yet in any webserver.
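One candidate approach (my assumption, not something standardized at the time) is to let the application hint pushable resources through a Link header, and let a push-aware server act on it:

```js
var http = require('http');

// A push-aware server could read this header and push /app.css along with the HTML.
http.createServer(function (req, res) {
  if (req.url === '/') {
    res.setHeader('Link', '</app.css>; rel=preload; as=style');
    res.end('<html>...</html>');
  }
}).listen(8080);
```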

There are a few POC webservers that let you do it, but they exist only so we can test the various syntaxes and see which one is best. Nothing is ready yet; this is a new story we'll have to write.

The second question was: how are we going to debug HTTP/2 requests? The current Chrome and Firefox dev tools do not display the server push in the waterfall. Also, HTTP/2 being a binary protocol, all previous HTTP tools will need an upgrade to work with it.

Next: how are we going to transition from HTTP to HTTP/2? Most of the webperf best practices in HTTP are either useless or bad practice in HTTP/2. Are we going to have two builds, and redirect to one or the other based on the user's HTTP/2 support?

If we look at how CDNs are currently handling HTTP/2, we see that they are not ready either. At the moment, only Cloudflare implements it, but it does not (yet) provide a way to do server push.

At first, the low voice volume, my hungry belly and the generic explanation of HTTP/2 made me fear a boring talk. In the end, the questions asked were clever and made me think. We wanted HTTP/2. Now we have it. But we still have to build the tools to use it correctly, and the next few years will be spent toying with it, discovering usages, building stuff and letting best practices emerge. Can't wait to get started.

You can find the slides here

Lunch break and lightning talks

As usual, the food was excellent, and the mountain of cheese is always a nice touch. As usual, the main hall is also too crowded. Everybody is crammed between the food tables and the sponsor booths.


After eating, but before starting with the real speakers, we had the lightning talks session. There were many more of them than at dotCSS, which is nice.

Johannes Fiala showed us swagger-codegen, a tool to generate an SDK for any API that is exposed through Swagger.

Vincent Voyer, my colleague, shared with us the best way to publish ES6 modules today. Any module should be loadable either from a require or a script tag, and be easily pushed to npm. Browser support is too low to directly push stuff in ES6, and the js dev environment is too fragmented to safely push a browserify or webpack module.

The trick is to push an ES5-compatible version to npm and the CDNs, the one built by Babel for example, while still maintaining an ES6 code base for developers.
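In package.json terms, the trick looks roughly like this (assumed layout: ES6 sources in src/, Babel output in dist/):

```json
{
  "main": "dist/index.js",
  "scripts": {
    "build": "babel src --out-dir dist",
    "prepublish": "npm run build"
  }
}
```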

After that, Etienne Margraff did a demo of Vorlon.js, a local webserver to debug any application on any device, by redirecting all debug messages to the webserver UI.

Then Maxime Salnikov tried to explain how Angular 2 works. I say tried because I did not get it. Maybe it comes from my aversion to Angular and the strong Russian accent.

Finally, Nicolas Grenié did a commercial presentation of Amazon Lambda. It was not supposed to be commercial I guess, just a way to explain how serverless microservices work and why they're a good thing, but as it was only talking about Amazon Lambda, it felt weird. Oh, and the serverless part only means that the server is on somebody else's infrastructure. Nevertheless, I was convinced by the power of the approach and would like to tinker with it.

Nicolas Bevacqua

After that, we got back to the main speakers. This is also when it started to get really hot in the theater. It only got worse as we advanced into the evening, but man, it was hot. And while I'm complaining, I might add that the available leg space was too small, even on the first floor, and I didn't have much space for my elbows either, which made it quite hard (and sometimes painful) to take notes.


Anyway, back to real business. Nicolas might be better known under the ponyfoo alias. You might know him for his extensive and very useful series of blog posts about ES6.

Actually, if that's how you know him, you pretty much already know everything he said in his talk. Basically he went over all the goodness that makes writing ES6 code so much more enjoyable than ES5: arrow functions, spread/destructuring, default function values, rest parameters, string interpolation, let, const and other shiny new stuff.
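A quick taste of those features (sketch):

```js
const names = ['Ada', 'Grace'];
const [first, ...others] = names;                 // destructuring + rest
const greet = (name = 'world') => `Hi ${name}!`;  // arrow, default value, template string
let count = 0;                                    // block-scoped, reassignable
console.log(greet(first), others, count);
```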

I won't detail it here, but I strongly invite you to read on the subject, npm install --save-dev babel and start using it right away.

Andre Medeiros

The next one totally deserves the award for best talk of the day. Andre had some really simple, clear and beautiful slides; he told us a nice programming story in small increments and managed to convey complex concepts to the audience.

I was hooked really quickly, and it's only at the end of the talk that he told us he had just taught us what reactive programming is, how it works and what problems it solves. Thanks a lot for that; this is the talk I enjoyed the most, and one I will remember for a long time. I even stopped taking notes at the end to keep my focus on the talk.

He walked us through the story of two linked variables, how changing one would affect the other, and the window of inconsistency this could create. He then changed our point of view and represented those changes on a timeline, where we do not need to know the value of each variable at any given time, but only the events that led to changes in the value. He compared it to the difference between our age and our birthday. We do not need to know how old we are at every second of our lives. We just need to know our birthday.

I could try to put into words what I learned from his talk, but this wouldn't do it justice. Instead, I encourage you to wait for the video of his talk, book 18 minutes in your busy schedule and watch it. It's worth it.

All of the concepts he talked about, and much much more, are implemented in RxJS.
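A tiny example of that mindset with RxJS (v4-era API, from memory):

```js
var Rx = require('rx');

// Instead of storing b and keeping it in sync with a, describe b as a
// reaction to the stream of a's change events.
var a$ = Rx.Observable.of(1, 2, 3);
var b$ = a$.map(function (a) { return a * 10; });
b$.subscribe(function (b) { console.log(b); }); // 10, 20, 30
```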

Did I tell you how great that talk was?

Eric Schoffstall

After another break, we continued with Eric Schoffstall. The theater was getting hotter and hotter and it was getting really uncomfortable and harder to concentrate.

Still, Eric's talk was interesting, so it made it easier. Eric is a fun guy (who ran for mayor of SF and did lobbying using Mechanical Turks), created gulp, and now tries to make WebRTC work everywhere.

WebRTC is still a cutting-edge technology. It is only really well supported by Chrome, and is hard to implement. There are a large number of quirks to know. The basic dance to get two nodes to connect is a bit long (but can be abstracted in a few lines through the Peer module).
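The connection dance, abstracted by a module like simple-peer, looks roughly like this (sketch from memory; a real app exchanges the signaling data through a server):

```js
var Peer = require('simple-peer');

var alice = new Peer({ initiator: true });
var bob = new Peer();

// Signaling: normally carried over websockets, here wired directly.
alice.on('signal', function (data) { bob.signal(data); });
bob.on('signal', function (data) { alice.signal(data); });

bob.on('connect', function () { bob.send('hello'); });
alice.on('data', function (data) { console.log(String(data)); });
```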

But where things really get complicated is when you need to make it work on Android, iOS and IE. But you need to, because as Eric said:

If you build an app that works only in Chrome, it's not an app, it's a demo.

Recent Android phones ship with Chrome, so it works well. But old versions have a crappy browser installed by default. Fortunately, if you package your app in a Cordova bundle using Crosswalk, you can force the webview to use the latest Chrome.

For iOS, we enter the hackish zone. There is something called iosrtc, a WebRTC polyfill written in Swift. It re-implements the methods and protocols in another thread, which makes integrating it with the webview quite challenging. For example, to play a video using it, you have to manually display the video from the other thread with absolute positioning on top of the webview.

This reminds me of the old hacks to do file uploads in HTML by overlaying a transparent SWF file above a file upload field to allow uploading several files at once. This was so hacky, on so many levels...

For IE, there is a proprietary plugin called Temasys that users need to manually install before being able to use WebRTC. And even once it is installed, you have to deal with the same positioning hacks as for iOS.

In the end, Eric created WebRTC everywhere, which packs all the solutions to the common issues he found into one neat bundle. I liked his talk because it is always interesting to see the creative ways people find to fix complex issues.

Forbes Lindesay

In the next talk, Forbes, creator of the Jade HTML preprocessor, walked us through the various parts that compose a transpiler (compiler? post-processor? I don't know).


We're used to using these kinds of tools today: CoffeeScript, Babel, SCSS, postCSS and Jade (which actually had to be renamed to Pug because of legal issues...).

All of these tools (as we've already seen in the postCSS talk at dotCSS) are made up of three parts. First a lexer, to parse the code string into a list of tokens. Then a parser, that converts those tokens into a structured tree. And finally a code generator, that outputs it back as a string.
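A toy version of the three stages (illustrative only, nothing like Pug's real internals):

```js
// Lexer: a 'div(span em)' style source string -> flat list of tokens.
function lex(src) {
  return src.match(/[a-z0-9]+|[()]/g) || [];
}

// Parser: tokens -> tree (a tag, optionally followed by children in parens).
function parse(tokens) {
  var node = { tag: tokens.shift(), children: [] };
  if (tokens[0] === '(') {
    tokens.shift();
    while (tokens[0] !== ')') node.children.push(parse(tokens));
    tokens.shift();
  }
  return node;
}

// Generator: tree -> HTML string.
function generate(node) {
  var inner = node.children.map(generate).join('');
  return '<' + node.tag + '>' + inner + '</' + node.tag + '>';
}

console.log(generate(parse(lex('div(span(em) span)'))));
// <div><span><em></em></span><span></span></div>
```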

I already had a general idea of how lexers and parsers work, and I guess people who went through a computer science school had to build one at some point. But it was nice to not assume that everybody knows everything, and to re-explain this kind of stuff.

It might have been a bit too long at times, and could have been made shorter because some parts really seemed obvious. Anyway, as he said at the end of the talk, now the audience knows how this works and can contribute to Pug :)

Actually, Pug seems to be to HTML what postCSS is to CSS. You can build your own plugin to add to the Pug pipeline, and maybe define your own custom HTML tags or attributes to transpile (compile? post-process? damn.) into something else.

I still do not know how to build my own lexer/parser after this talk, but I liked the idea of making Pug modular and more high-level. It also echoes all the good things that have been said about postCSS.

Tim Caswell

Tim (aka @creationix) and his son Jack then did a live-coding session involving colored LEDs, Arduino and, maybe, JavaScript, but I'm not sure. By that time, I had moved to the ground level where the air was fresher.


I must say that I did not get what the message of this talk was. Tim explained that there are no programmers in his town, and that he wanted to make coding easy and teach it around him, especially to kids. This is all well and good and a very nice idea... But then I failed to really understand the rest of the talk :/

Tim's son, Jack, managed to live-code something in a weird programming language to make some LEDs blink and a robot move. The language itself looked so low-level to me that the performance seemed to be more about how the kid managed to remember all the magic numbers. Really, having to input stuff like istart 0x70 or write 6 is absolutely not the idea of programming I would like to show to people who don't know anything about it.

Henrik Joreteg

So, after this small WTF talk, we were back to one of the best talks of the day, by Henrik Joreteg.

He started his talk with something that makes a lot of sense. When you use a mobile phone, you're actually using a physical object, and when you use its touch screen you want it to react immediately. That's how we're used to physical objects reacting: immediately.

But the web of yesterday was thought for the desktop, with desktop hardware and speed. The web of today is, in the best of cases, mobile-first. Still, this is not enough, because the web of tomorrow will be mobile-only. Every day there are more and more users who use smartphones only and have ditched their desktop browser.

Still, we, the web developer community, build our websites on desktop machines. And we test our code on the desktop as well, only testing on mobile emulators later in the cycle, and on real devices at the very end of the chain, while we should actually do the opposite. We should all have physical phones near our workstations and take them in our hands when we code for them. This will make us really feel what we are coding for.

He also quoted a tweet saying:

If you want to write fast code, use a slow computer

Which is absolutely true. I think we're guilty of thinking along the lines of "oh yes, it is a bit slow on mobile, but mobiles are getting more powerful every 6 months so this should be ok soon". But that's not true. Not everybody has the latest iPhone, nor fast bandwidth, but they still want a nice experience.

Google set some rules for their products, based on the number 6. I did not manage to write them all down, but it was something like:

  • max 60KB of HTML
  • max 60KB of CSS
  • max 60KB of JavaScript
  • 60fps
  • max .6s to load the page

They managed to achieve it, and I think they are sensible values that we could all try to reach. Forcing ourselves to work under a budget helps us make things better. It might be hard, but not impossible.

He then continued by giving some advice on what we should do, and most of all, what we should stop doing.

First of all, we should get back to the server-side rendering we should never have stopped doing. Browsers are really fast at parsing HTML, much more than at parsing and executing js and then building the DOM. There is no need to go all isomorphic/universal/whatever. Just push the HTML you know is going to be needed to the page. There's even a webpack config that can do that for you.

The second point is to really think about whether you need the whole framework you're using, or even a framework at all. Do we need a templating system when we have JSX? There's no need to parse and modify DOM elements when we have HTML-to-JavaScript transforms at build time.

Also, React proved that re-rendering the whole UI whenever the app state changes can actually be really lightweight, as long as you use a virtual DOM. If you strip everything down, in the end your whole app can be simplified as newVDom = app(state). You have only one state for your whole app, you process it, and it returns a virtual DOM. If you really need a little more structure on how the data flows, you can add Redux, which is only 2KB.
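Expressed with the standalone virtual-dom module (assumed API), the whole idea fits in a few lines:

```js
var h = require('virtual-dom/h');

// The entire UI is one pure function of the app state.
function app(state) {
  return h('div', [
    h('h1', 'Count: ' + state.count),
    h('button', 'increment'),
  ]);
}

var newVDom = app({ count: 1 }); // re-run on every state change, then diff/patch
```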

React is nice, but the real gold nugget in it is the virtual DOM. You can extract just this part from the React core, for only 10% of the total size.

The last trick is to use the main browser thread for the UI rendering (vdom to DOM) and run all the heavier computation asynchronously in WebWorkers on the side. The UI only passes actions to the WebWorkers, which yield the new vdom back to the UI thread when they are done.
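In postMessage terms, the split could look like this (a sketch; render, reduce and app are hypothetical helpers, not part of any specific library):

```js
// ui.js - main thread: only renders, never computes.
var worker = new Worker('app-worker.js');
worker.onmessage = function (event) { render(event.data); };
document.addEventListener('click', function () {
  worker.postMessage({ type: 'click' });
});

// app-worker.js - the heavy lifting happens off the UI thread.
var state = { count: 0 };
onmessage = function (event) {
  state = reduce(state, event.data); // compute the new state
  postMessage(app(state));           // ship the new (plain-object) vdom back
};
```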

You can see this in action on a website named pokedex.com, which apparently also works really well on old mobile devices.

I liked that talk as well, because he did not hesitate to challenge what we take for granted. It's healthy once in a while to clean up what we think we know about our tools, remove the dust and change the broken pieces. React introduced some really clever ideas, and even if the framework as a whole works nicely, you can still cherry-pick parts of it to go with the bare minimum.

Brendan Eich

And finally, the last talk of the day was by Brendan Eich, and it was really, really weird. I had to ask people afterwards to explain to me what it was about.

What was missing from this talk was a previously, on Brendan Eich. I felt like I had to catch up with a story without knowing the context. He talked about TC39 and asm.js without explaining what they are. In no specific order, he also talked about how FirefoxOS and Tizen, which were huge hopes as game changers, failed in the last years. He explained that it was because those platforms did not have any apps, and people today want apps. But app developers don't want to code apps for platforms that have very few users, which creates a vicious circle.

He then continued, saying that if you build something for the web, you cannot do it in isolation; it has to fit in the existing landscape. He went on joking about Douglas Crockford and his minimalist views.

Then, he started killing zombies with chickens equipped with machine guns. And that crashed. So he started killing pigs with an axe.

To be honest, this talk made absolutely no sense to me, I am completely unable to synthesize the point of the talk.

Conclusion

I think I'm gonna say the same thing as last year. dotJS is, for me, less interesting than dotCSS. Proportionally, there were far fewer inspirational talks, the sound wasn't great, the place is less comfortable and was getting hotter and hotter along the day. On a very personal note, I also realized that my eyes are getting worse; even with my glasses on, the slides were a bit blurry for me when I sat in the back.


Last year I said "maybe I won't come next year", but I still came. This time I'm saying it again, without the maybe. This is just no longer adapted to what I'm looking for in a conference. I'll still come to dotCSS and dotScale, though.

dotCSS 2015

For the second year, I went to dotCSS. It took place at the same venue as last year, the Théâtre des Variétés. As usual, we only know who will be speaking, but have no idea what they are going to talk about.

I met a few former colleagues (most of them had switched jobs in the past year), so it was a nice way to catch up on the news.

You can find wonderful photos of the event on flickr.

Rachel Andrew

The first talk of the day was by Rachel Andrew, who talked about CSS grids. Midway through her talk I realized that I had already seen it not long ago, at ParisWeb. This time it was in 18 minutes instead of 50, which is a much more suitable format.

I see CSS grids as the logical extension of flexbox, where each element is aware that it is part of a layout, and thus can react to CSS rules knowing much more context, without being bound to its wrapping markup.

Long gone will be the days of faux-column layouts and display:table once CSS grids hit mainstream browsers. I personally have not yet dived into the flexbox world, but seeing what the future has in store, I might hold off a bit longer and go with CSS grids once they are released.

CSS grids are the future of CSS layout; you should start to play with them right now (by feature-flipping them in your browser) so you'll be ready when the day comes.

Andrey Sitnik

The next talk, by Andrey Sitnik, made me understand what postCSS was, for real. Like everybody, I've used autoprefixer, but I never understood it was just one plugin among hundreds that sit in the postCSS ecosystem.

By itself, postCSS does nothing more than parse a CSS file into a tree representation and output it back as a string. The fun part is that you can add any number of plugins between the start and the end of the pipeline. This means you can write plugins that take the tree representation of a CSS file as input, transform it as much as they want, and let postCSS output the new string representation of your file.
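Writing a plugin is surprisingly small; here is a sketch against the postCSS 5 era API (from memory):

```js
var postcss = require('postcss');

// Walk the parsed tree, transform declarations, let postCSS stringify it back.
var uppercaseColors = postcss.plugin('uppercase-colors', function () {
  return function (root) {
    root.walkDecls('color', function (decl) {
      decl.value = decl.value.toUpperCase();
    });
  };
});

postcss([uppercaseColors])
  .process('a { color: #abc; }')
  .then(function (result) {
    console.log(result.css); // a { color: #ABC; }
  });
```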

All the plugins follow the simple rule of doing one thing only (but doing it well), and can all be piped together, in pure Unix philosophy. As said earlier, one of the best-known plugins is autoprefixer, which adds all the required vendor prefixes to your properties, but others can emulate features found in SCSS, for example (like nesting of selectors).

Andrey showed us a bunch of really useful plugins, stressing the fact that postCSS can do what SCSS does (using the precss plugin for example), but that its goal is not to add syntactic sugar on top of CSS; it is to help improve the modularization, reusability and maintainability of CSS components.

In CSS, everything is global. A rule you write in a file can affect an element in another part of the application. Even reset.css files are global. And the cascade makes it really hard to know where properties are coming from.

With postCSS, you can split your code into modules (like button or header), creating one directory for each of them, where you put all the required HTML, CSS, js and images. The use meta-plugin lets you define a local plugin for one module only, which lets you better handle the dependencies of your project.

In the past years, we've started to use BEM as a way to avoid collisions in our selectors and work around the global nature of CSS. But naming components in BEM can still be complex, and boring. postCSS of course provides @b, @e and @m methods to make writing BEM easier, but it also provides a better top-level abstraction that lets you write your nested classes like in the 1996 web, without caring about any conflict, and it will automatically rewrite them to make them unique, in a BEM sort of way. The example given was using React for the HTML rendering part, and I'm a bit unsure how this would work without React.

postCSS also provides plugins for handling extended media queries: CSS code that reacts not only to the window width, but to a parent element's dimensions, or color. This can be useful, for example, to change the text color of an element from black to white if its container changes its background to something darker.

In the same vein, postCSS has a plugin to apply a local reset to an element, without affecting every other element on the page. As it has knowledge of every parent element in the tree, it can know which properties need to be overwritten to get back to the default values.

I must say postCSS seems really awesome. It seems mature and built by people who really use CSS every day and know the quirks we are facing. The component approach of the plugins and the modularization it provides are really huge benefits for code maintenance.

But to be honest, this is also one more tool to add to the front-end pipeline, and it should be used only by people who understand the underlying issues the tool is solving; otherwise it only adds complexity.

If I had to remember only one thing from dotCSS, it's postCSS.

You can find the slides here: http://ai.github.io/postcss-isolation/

Daniel Eden

Then, Daniel Eden, designer at Dropbox, told us more about how they manage to do CSS at a huge scale. The short answer is: they don't.


Their codebase is 1220 CSS files, about 150,000 LOC, which is about 6% of the whole Dropbox codebase. This seems impressive, but it is in the same range as another web giant, Etsy, with its 2000 files and 200,000 LOC.

How did that happen? Well, because a lot of people touch the CSS. When a new developer or a new team starts a new feature, they handle both the back-end and the front-end side. And most of them do not like writing CSS, so they do it quite badly. They are very good js and python developers who still do not understand the cascading principle of CSS, nor specificity, nor the way to abstract CSS concepts. And still, they have to write CSS, and write a lot of it.

Developers are frustrated by the archaic way we have to test CSS: save the file, alt-tab, see if it looks good. As said earlier, everything can be overridden because everything is global. The box model is counter-intuitive, and the industry best practices are really young, a few years old at most.

And when something doesn't work as expected in CSS, it is easy to write more CSS to fix it. And then write more CSS to fix the fix that fixed the old fix. In the end, it is too easy to write CSS; that's why the codebase grows so fast.


So Daniel took the issue and started improving the whole CSS codebase, following a mantra I like a lot:

Move slow and fix things

His solution was not perfect, but it was a good starting point. He started by quantifying everything: knowing the number of lines of code (using cloc, then running CSSStats, which uses Parker underneath) to show in a nice and readable way what they were doing wrong. CSSStats outputs the number of font-sizes defined, the number of unique colors, the most specific selectors, and so on. When faced with such objective values, one can only react and try to make things better. As long as the issue is invisible and "things are working" and "CSS is not really programming", nothing will change.

He put a few rules in place in their process, like running new files through a linter, adding himself as a reviewer every time CSS was changed in some critical files, as well as writing a style guide. They also moved from SCSS to postCSS and managed to shed almost 80% of their LOC.

What I really enjoyed in this talk was how humble Daniel was. He told us what worked for them, and the issues they faced. All that was needed was one guy who really cared about CSS code quality to take the lead, for things to dramatically improve. Start with data, get metrics from your current codebase, explain and teach why it's bad and how to fix it, then provide some tooling to make it easier, and the results will come.

CSS Optimization

After the coffee break, we had the chance to see one (and only one) lightning talk. Only one person applied this year, which is a shame because it was really interesting and I would have liked to see more of them. I will suggest a talk myself next year.

The one and only talk was about a project named cssnano, a postCSS plugin for minifying CSS.

This is a set of plugins that try to optimize the CSS output so it takes the lowest possible number of bytes. Things like removing whitespace, renaming colors, animations and transforms, and dropping default values.

In essence, it does the same job as cssmin, but thanks to its modular approach it seems a better alternative in the long run.
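Since it is just another postCSS plugin, using it is a one-liner (sketch):

```js
var postcss = require('postcss');
var cssnano = require('cssnano');

postcss([cssnano]).process('a { color: #ffffff; }').then(function (result) {
  console.log(result.css); // a{color:#fff}
});
```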

Una Kravets

The next talk was the mind-blowing one. Every dotCSS needs one, and this was the one.

Una Kravets works at IBM and told us about mostly unknown properties for applying styling to images, close to what we are used to doing with Photoshop. She started by giving an academic list of all the possible options, then gave us some real-life examples.

It is for example possible to change an image's contrast to make its grey background appear white, and thus be able to put it on a white background page without having to edit the image in Photoshop beforehand.

Then she showed a few interpolation modes to merge two images together, and the various ways each pixel can interact (either by taking the lightest or darkest value, doing a mix of both, only keeping the hue, etc.). At first this looked a bit useless because I could not see any real-life usage of the technique; then she started giving more real examples and I was won over.

When you start mixing these already powerful filters with other CSS properties like masking and cropping, you can do really smart image composition. She recreated most (if not all) of the Instagram effects, and more, using CSS only.

She showed us how to create those blurry background images in pure CSS, and how, by applying them to all images in a photo gallery, we can give them a unifying look. This seems really useful on an e-commerce website where you are presenting a lot of different products, with various sizes, backgrounds and colors. Using such a filter, they will all look like they are part of the same set.

I highly encourage you to have a look at CSSGram and her other projects. Her talk was really inspiring, and the effects she showed are much more than a simple showcase of the power of CSS; they have real-world usages that could greatly improve the UI and UX of the websites that will use them. That being said, only the best unicorn devsigners will be able to harness them correctly, because this requires both design and CSS skills to understand all the possibilities.

Alan Stearns

After that, we had Alan Stearns, new co-chairman of the W3C CSS Working Group, who told us a bit more about how we, web developers, can help move the web forward.


The guy is a huge typography nerd and was unhappy about how fonts were rendered on the web, so he wrote a blog post about it. And he was contacted by the W3C to help them fix this at a larger scale.

His whole talk could have been summarized in one of his slides:

Write, Talk, Share and File bugs

We should write blog posts about what is hard, boring or impossible to do in CSS. We should write about the workarounds we found. We should talk about it at conferences, and most of all, we should file bugs directly in the browsers' issue trackers. I know it can be intimidating to file a bug that far up the chain, but this is actually the only way to get browsers to improve their rendering engines.

Platform.sh

During the next break, I talked briefly with Ori about his company, platform.sh. Platform is a PaaS that gives you the assurance that your production and staging environments are exactly the same, down to the bytecode running. This gives you full confidence that if it's working in staging, it will work the same in prod. It also lets you run your tests against your production DB. The way to define your config is a declarative language close to Ansible (but not Ansible).

The idea is interesting, but moving the whole prod environment to such a new actor on the scene can be scary.

Tom Giannattasio

After the break, Tom Giannattasio told us more about advanced CSS animations.


The guy comes from the Flash world, where doing 3D modeling was a real pain, whereas it is really easy to do in CSS. The default tutorial you see everywhere is how to build a 3D cube in CSS. By tweaking it a bit you can easily create other shapes, like a cylinder, and by applying the right background, you can simulate a barrel (like the ones usually found in video games).


Actually, this is not a real 3D object; these are just various div elements rotating, with a mask applied to simulate a perspective effect. It is actually possible to go quite far with 3D in pure CSS (like this guy who built an FPS).

Still, things are quite limited in what you can achieve with one div only, because any animation you add to an element has a fixed duration. If you want to animate the same element with one timing function for one property and another for another property, you have to resort to pseudo-elements or wrapper divs. By adding enough wrappers and playing with transform, blend-mode and opacity, you can build interesting demos.

But what use does it actually have in the web world? Well, seduction. Netflix did a parallax effect on their series selection view, for example. This is a pure gadget, but it is so nicely done that the user wants to play with it, and it is one of those little touches that make the difference.

As we already saw in the previous talk with blend modes, we can do pretty nice and incredible things with CSS today. But should we?

For Tom, most of what the web needs today can be done with simple CSS. More complex usages, like 3D rendering, can be achieved through CSS as well, but the most advanced effects will require WebGL. And that is a whole different world, with a completely different learning curve.

Chris Eppstein

One of the last talks was by Chris Eppstein, the creator of SCSS. He did not speak about SCSS, but gave a few plausible explanations of why CSS is often seen as not really programming.

Before the declarative version of CSS we know, there was a proposal for another styling language, close to Lisp in its syntax, but it was considered too complex for non-programmers. So yes, from its inception, CSS has been designed around the assumption that its users were not programmers.

At first, variables in the language were considered a bad idea by its creator himself, for the exact same reasons: too complex a concept for non-programmers. But what is programming anyway? Chris had a nice definition of it. Programming is simply the art of splitting a big problem into smaller ones, and then moving data from one place to another, possibly with some reformatting in between.

Following that definition, CSS is programming, as we are moving the pixel data from the stylesheet to the screen. The fact that the language is declarative and does not have loops or branching does not change anything.

Maybe this view of the CSS developer not being a real developer comes from the old days, when CSS was much easier? In its first version, CSS had 53 properties, whereas it now boasts 316. Maybe it is based on the fact that developers think design is easy and does not require much skill?

In any case, even if either of those two ideas were true, systems have evolved and CSS is still part of them. It is tied to the HTML, it is tied to the JavaScript, and website and application codebases are getting bigger and bigger. Being tied like this, CSS complexity increases simply because the complexity of its related parts increases as well.

Chris's point was that CSS is not easy. Maybe it was, back in the day, but it is not anymore. And what we can do is limited by the language in which we express our concepts. In ancient Rome, multiplication was considered an extremely complex scientific achievement, simply because the Roman way of writing numbers was too complex. When people switched to Arabic numerals, multiplication became much easier. The language we use restricts what we can achieve.

This limitation of the language was the driving force that made Chris create SCSS. It added variables, functions and loops. People reacted quite badly against it at first, because it was breaking how CSS was designed. Over the years, we learned by experience that these new tools were indeed useful, and we are glad Chris pushed the boundaries of the language.

You can find the slides here

Daniel Glazman

The last conference of the day was by Daniel Glazman, a nice echo of last year, when he did the opening one. I must confess that I slept during the major part of his talk, so I cannot really tell what it was about. I kind of remember that he was bitching about CSS, and that I had already seen this talk (or a very similar one) at another event, but I'll have to wait for the video to refresh my memory.


Conclusion

An awesome evening. A great line-up of really inspiring talks. Maybe the intensity decreased a bit at the end of the day, or maybe it was just me getting tired. The tool to remember from this session is definitely postCSS (and CSSStats). I personally really enjoyed the talks about blend modes and 3D, as well as the more meta talks about the CSS dev environment.


Big up to the dotConferences team, keep up the good work and I'll definitely come again next year.

Meteor Paris S03E03

The last Meteor meetup took place at Algolia, and I helped organize it. I have never coded in Meteor (yet), but I follow the technology from a distance, and hosting the event was a really interesting experience.

I was actually quite surprised by the way the meetup is organized. There is no defined agenda. People just raise their hands when they want to talk about something. Some have slides, some don't. And once everyone has talked, you can suggest subjects for open-table discussions for the remainder of the meetup. After that, it's pizza buffet and networking time.


Fastosphere

The first talk was by Vianney and François, the organizers. They were not happy with the official repository of Meteor packages, called Atmosphere, and decided to code their own, called Fastosphere. They scraped the data from Atmosphere and pushed it to Algolia, in order to provide a faster and more relevant search.

They both love what we do at Algolia, so we did not even have to do a speech about us; they did it for us, and much better than we could have done. They showed a bit of code on how to push data to Algolia, and then how to search it. They showed the dashboard, and the various analytics metrics you can get from it.

Vianney even said "Algolia is too fast, we had to query their servers directly because if we had gone through our Meteor server, this would have been way too slow".

Other talks

Then other people suggested talks. We talked about Slack bots done in Meteor, to order from PopChef (a French food delivery startup), or to order an Uber cab. A guy also showed a really nice-looking website where you can upload your holiday videos and it will build a 2-minute short video of the best parts.

Open tables

After that, two open tables were created, to talk about Docker and testing. I went to the one about testing, and it seems that testing (end-to-end as well as unit) is not something a lot of people do in the Meteor world. There is no clear and easy way to test stuff, which in my opinion does not smell very good. I did not attend the Docker open table, but there seems to be a nice git repo with all the needed information.

Conclusion

I'm still impressed by the way this meetup is self-organized and how well it works. Even without knowing anything about Meteor, I had a really great time and nice talks with the attendees.

HumanTalks 3 Years

For the third anniversary of the HumanTalks, we did a special event. We were hosted at the Société Générale headquarters, in one of the most beautiful conference rooms I've ever seen. There were also more attendees than usual, and we gave away some goodies, T-shirts and JetBrains licenses at the end.

It was also the first session with Antoine Pezé as our official new team member.

And for me, it was the first time I went to the SG building. I must say I had a bad feeling about the place at first. I feel like La Défense is a really creepy place, with everything I don't like about modern society. Big grey buildings that tower over you, where everything seems to have been built for giants. As a human being, you feel out of place. Big tall towers with thousands of people working in them, all dressed the same. The only sources of light come from the ads and the mall windows. Grey, work, consume.

But then, in all this ocean of sadness, we met Adrien Blind, who is in charge of organizing the meetups at the Société Générale. And I discovered that the SG is actually much more interesting on the inside than on the outside. They are quick to iterate, have DevOps all the way from top to bottom, and really know and apply the Agile and Software Craftsmanship philosophy. We'll see more about that in the last talk.


Front-end testing

The first talk was by William Ong, a former coworker from Octo. He presented the front-end testing ref card they developed: a physical cardboard flyer that lists all the different kinds of tests you can apply to your front-end (from unit testing to performance testing and security testing), with examples, tools and advice.


The content of the refcard is really, really great. It gives a nice overview of what can (and should) be done today, but also advice on the cost of each kind of test and when to implement it or not.

The web front-end landscape has evolved a lot in the past years, and it is getting more and more complex, with more and more logic being moved from the back-end to the front. To keep it sustainable, we now need the same kind of tooling we're used to using on the back-end: testing harnesses.

Unit tests are a must-have. He didn't even develop that subject because it is obvious. No good quality code can endure the stress of time without unit testing. Code that isn't unit tested is not finished.

But then come all the other kinds of tests. When should we use them? Which tools should we use? Are they really needed? All the other tests are harder to put in place, so you should only add them if they help you fight a pain you have already experienced. They are costly to start and costly to maintain, so the benefit must be higher than the cost.

For integration testing (also named end-to-end testing or functional testing), you should only add tests on the critical paths of your users: the ones that generate money (the subscription funnel, for example). This kind of test indirectly tests the whole stack. It will warn you when the core functionality you're testing is broken, but won't really help you diagnose where the issue comes from (database, back-end, front-end, etc.). These tests are also the longest to write and will yield false positives whenever you update the design/markup.

Talking about design, it is also possible to test your design itself. Using PhantomCSS, you can take screenshots of your whole page, or specific parts of it, and check that they did not change since the previous commit. This will help you diagnose changes to the website created by a seemingly unrelated CSS commit. Those tests can be invaluable, but they will also yield false positives when a design change is actually expected. As with the integration tests, limit yourself to the real main parts of your app.

You can also test for common security exploits, like the OWASP Top 10. Some tools can test them against your website and warn you of any vulnerability. Another approach can be to ask for a security audit, and I personally also recommend opening a public bug bounty program, on a platform like HackerOne.

In the end, all tests share one benefit: they give you the freedom to make the code evolve without being afraid of breaking things. They give you complete trust in the code.

It was William's first public talk, and the room was quite impressive, with more than 120 attendees, so I guess he freaked out a bit. There were a few silences where you could feel that William was intimidated, and he looked a lot at his slides for reassurance. In the end, the message was there and it was interesting, and we'll be happy to have him on stage another time if he wants.

How to win at TCG with code

I don't know how many coffees Gary Mialaret had before coming on stage for the next talk, but he seemed really happy to be there and was almost jumping and running while speaking.


Gary told us all about TCGs (Trading Card Games), and how those kinds of games are no longer really about trading (Hearthstone, for example, does not allow trading of cards). The common ground of all those games is that they are duels between two players, where each player creates their deck of cards before the game and must carefully balance the number of monster/spell cards (which can make them win) with the number of resource cards (which are needed to play the monster/spell cards).

Empirically, every hardcore Magic player knows that a balanced deck needs 24 resource cards, but Gary wanted to get to the math behind it, to prove that it is the optimal number.

He introduced us to the hypergeometric distribution, a mathematical function that can calculate the probability of drawing a hand with at least n "good" cards, given the number of cards in a hand, the number of cards in a deck, and the ratio of good/bad cards in the deck.
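The function itself is short; here is a sketch (with an assumed 60-card deck, 24 lands and a 7-card hand as the example):

```js
// Number of ways to choose k items among n.
function choose(n, k) {
  var result = 1;
  for (var i = 1; i <= k; i++) result = result * (n - k + i) / i;
  return result;
}

// P(exactly k good cards in a hand of n, from a deck of N with K good cards).
function hypergeometric(N, K, n, k) {
  return choose(K, k) * choose(N - K, n - k) / choose(N, n);
}

console.log(hypergeometric(60, 24, 7, 3)); // ≈ 0.31
```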

Applying the method to a basic Magic deck, we do not get the 24 cards we talked about earlier. This is because the method does not take into account another Magic mechanism called the mulligan, which lets you discard your whole starting hand and draw a new one instead. To simulate that, he had to resort to a bit more coding. He generated thousands of different hands, discarding them when they did not meet his expectations, and managed to get back to the magical number of 24.
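The simulation approach can be sketched like this (my own crude version; the "keep" rule is an arbitrary assumption, not Gary's actual criterion):

```js
// Fisher-Yates shuffle, then take the top `size` cards as the hand.
function drawHand(deck, size) {
  var cards = deck.slice();
  for (var i = cards.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = cards[i]; cards[i] = cards[j]; cards[j] = tmp;
  }
  return cards.slice(0, size);
}

// Estimate how often a deck with `lands` resource cards gives a keepable hand.
function simulate(lands, trials) {
  var deck = [];
  for (var i = 0; i < 60; i++) deck.push(i < lands);
  var keepable = 0;
  for (var t = 0; t < trials; t++) {
    var landCount = drawHand(deck, 7).filter(Boolean).length;
    if (landCount >= 2 && landCount <= 5) keepable++; // crude mulligan rule
  }
  return keepable / trials;
}

console.log(simulate(24, 100000));
```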

He went even further, simulating basic Magic rules by playing against what is called a goldfish (a player that does not respond to attacks, and basically does nothing). He went on to create adaptive algorithms, applying something similar to genetic selection to deck creation. He starts with simple decks, makes them play against goldfishes, keeps the ones that work best, applies a few slight random changes (a bit more of that card, a bit less of that one), and makes them play again, until he reaches the best possible deck.

His final conclusion was that we should not hesitate to put our developer minds to work on things that are not related to development. The most important thing when we want to solve a problem is to know which questions we're looking to answer. Magic is a very complex game; it is not possible to code every possible rule and generate every possible deck to find the ultimate one. But by focusing on one specific problem, we can learn a lot about the underlying principles, and this, in turn, helps us devise a better deck.

How our brain reacts to instant feedback

The next talk was by Gaetan Gachet, with whom I work at Algolia. He talked about the way our brain reacts to instant feedback.


His presentation explained the theory of Information Foraging. The main idea is that the way we look for information on websites today is similar to what our ancestors did when hunting.

When you hunt, you know a few interesting places where you might find game. But sometimes, no game shows up, so you wait a bit more. And more. And more. Until you decide that it is not worth waiting any longer, and that you might as well just move to the next place you know could be a good hunting spot.

But this second place is far away, and it will take you hours to get there; that's why you wanted to wait here a little longer. But now, you decide that your current spot is not good enough, and you take the chance to move to the next one.

What happened in your brain was actually a simple equation of risk and return. At the start, it seemed more interesting to stay and see if you could find something here, because you knew that this place should have animals to hunt. But the more you wait without results, the more you think that moving to the next place might actually be more interesting. Until you decide that you've wasted enough time, and actually move.

When looking for information online, we act the same way. We go to a first website, because Google told us that it might have the information we are looking for. We search their list of products, trying to find the one we want. We skim pages and pages of results until we realize that what we are looking for is not here, and that maybe we'll have more luck on the next website.

This would not have happened if the first website either let you easily find what you're looking for, or quickly told you that it does not have it. It is just a matter of how much information you get compared to the time you spend.

In that analogy, Google is the whole territory the hunter can access in a day, while each website is a known hunting zone. But because Google is so fast, moving from one hunting zone to the next is really easy. And so you are more easily convinced to try another hunting zone if you did not find what you were looking for in the first minutes or seconds of hunting.

The paradox here is that the faster Google is, the less time people will spend on your site, because they know they can easily go search on another website if yours does not yield relevant results quickly.

That is why it is very important to give quick and relevant results to users' searches when they come to your website: you have competitors, and users will easily jump to a competitor's website if they do not find what they are looking for on yours. Being the first result on a Google search page is not enough.

Culture Craft

The last talk of the day was a nice story about what happened at Société Générale in the past years, by Thomas Pierrain. Some people wanted to "wake up" the organization. They felt like the overall strategy of the company was not adapted to its current culture, and that the current culture was slowly dying because of that.


The speaker actually confessed that he wanted to leave the company at that time. But he stopped short when he realized that his thinking was actually just something along the lines of "it was better before", which is what stupid old men usually say. And he didn't want to become a stupid old man, so he decided to do something about it.

He started organizing BBLs internally, where they would book a room during lunch and one of them would talk about a subject he was passionate about, while the others listened (and ate their lunch). They also organized coding dojos, where everybody in the room works on the same coding problem, one after another, everybody helping the others.

But it did not start that fast. At first, he was alone. So he asked a friend if he would be interested in listening to him speak at a BBL. The friend was up for it, and talked about it to other friends. So they did it. They did not tell management, or book a room. They just took an empty room and did it. They knew that because they were some of the oldest developers in the company, nobody would listen to them. So they just did it.

The first BBL was a success. The room was small, so not everybody could get in. Those who couldn't wanted to come to the next one. It started working on a "first come, first served" principle, which gave the event a nice image.

They also created events codenamed "Dude, you have to see this!". In those events, they take a room, one person plays a YouTube/TED Talk video he liked, everybody watches it, and then everybody discusses it.

And they kept creating things like that. Things like the lunch mob, where they gather for lunch, code on a project and push everything at the end. They posted photos internally, put some on Twitter, and in the end, what started as a single initiative is now a well-known part of how things work at the SG.

His main advice was simply to make it happen. Some people won't like it, so don't invite them and don't try to convince them. Focus on those who are interested, and build something for them and with them.

Conclusion

Overall, a really nice session. Thanks a lot to the Société Générale for hosting us in that wonderful room, and thanks for their very inspiring talk as well.

Unfortunately, even if the room had all the video capture capabilities, we still did not manage to get the videos... It taught us to always record with our own devices :)