For the second year, I went to dotCSS. It took place at the same venue as last year, the Théâtre des Variétés. As usual, we only knew who would be speaking, but had no idea what they were going to talk about.
I met a few former colleagues (most of them had switched jobs in the past year), so it was a nice way to catch up on the news.
You can find wonderful photos of the event on flickr.
Rachel Andrew
The first talk of the day was by Rachel Andrew, who talked about CSS grids. Midway through I realized that I had already seen this talk not long ago, at ParisWeb. This time it was 18 minutes long instead of 50, which is a much more suitable format.
I see CSS grids as the logical extension of flexbox, where each element is aware that it is part of a layout, and can thus react to CSS rules with much more context, without being bound to its wrapping markup.
The days of faux-column layouts and display:table will be long gone once CSS grid hits mainstream browsers. I personally have not yet dived into the flexbox world, but seeing what the future has in store, I might hold off a bit longer and go straight to CSS grids once they are released.
CSS grids are the future of CSS layout, and you should start playing with them right now (by enabling the feature flag in your browser) so you'll be ready when the day comes.
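To give a rough idea of what the syntax looks like, here is a minimal sketch (the class names are made up for illustration):

```css
/* Hypothetical two-column layout: a fixed sidebar and a fluid main area */
.page {
  display: grid;
  grid-template-columns: 200px 1fr;
  grid-gap: 1em;
}
.page .sidebar { grid-column: 1; }
.page .main    { grid-column: 2; }
```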
Andrey Sitnik
The next talk, by Andrey Sitnik, made me understand what postCSS was, for real. Like everybody, I've used autoprefixer, but I never understood that it was just one plugin among hundreds in the postCSS ecosystem.
By itself, postCSS does nothing more than parse a CSS file into a tree representation and output it back as a string. But the fun part is that you can add any number of plugins between the start and the end of the pipeline. This means that you can write plugins that take the tree representation of a CSS file as input, transform it as much as you want, and let postCSS output the new string representation of your file.
All the plugins follow the simple rule of doing one thing only (but doing it well), and can all be piped together, in pure Unix philosophy. As said earlier, one of the best-known plugins is autoprefixer, which adds all the required vendor prefixes to your properties, but others can emulate features found in SCSS, for example (like nesting of selectors).
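A quick before/after sketch of what autoprefixer does (the exact prefixes depend on the browser support you configure):

```css
/* Input */
.box {
  display: flex;
}

/* Possible output, with vendor prefixes added for older browsers */
.box {
  display: -webkit-box;
  display: -ms-flexbox;
  display: flex;
}
```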
Andrey showed us a bunch of really useful plugins, stressing that postCSS can do what SCSS does (using the precss plugin, for example), but that its goal is not to add syntactic sugar on top of CSS; it is to improve the modularization, reusability and maintainability of CSS components.
In CSS, everything is global. A rule you write in one file can affect an element in another part of the application. Even reset.css files are global. And the cascade makes it really hard to know where properties are coming from.
With postCSS, you can split your code into modules (like button or header), creating one directory for each of them, where you put all the required HTML, CSS, JS and images. The use meta-plugin lets you define a plugin locally for one module only, which lets you better handle the dependencies of your project.
In the past years, we've started using BEM as a way to avoid collisions in our selectors and work around the global nature of CSS. But naming components in BEM can still be complex, and boring. postCSS of course provides @b, @e and @m helpers to make writing BEM easier, but it also provides a higher-level abstraction that lets you write your nested classes like in the 1996 web, without caring about any conflict, and automatically rewrites them to make them unique, in a BEM sort of way. The example given was using React for the HTML rendering part, and I'm a bit unsure how this would work without React.
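As a rough illustration of the idea (this is only the before/after concept, not the exact syntax of the plugin shown in the talk):

```css
/* Input: nested classes, written without worrying about collisions */
.article {
  .title {
    font-weight: bold;
  }
}

/* Conceptual output: flattened, unique BEM-style class names */
.article__title {
  font-weight: bold;
}
```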
postCSS also provides plugins for extended media queries: pieces of CSS that can react not only to the window width, but to a parent element's dimensions or color. This can be useful, for example, to switch the text color of an element from black to white when its container changes its background to something darker.
In the same vein, postCSS has a plugin to apply a local reset to an element, without affecting every other element on the page. As it has knowledge of every parent element in the tree, it can know which properties need to be overwritten to get back to the default values.
I must say postCSS seems really awesome. It seems mature and built by people who really use CSS every day and know the quirks we are facing. The component approach to plugins and the modularization it provides are huge benefits for code maintenance.
But to be honest, this is also one new tool to add to the front-end pipeline and should be used only by people that understand the underlying issues the tool is solving, otherwise it only adds complexity.
If I had to remember only one thing from dotCSS, it's postCSS.
Daniel Eden
Then, Daniel Eden, designer at Dropbox, told us more about how they manage to do CSS on a huge scale. Short answer is: they don't.
Their codebase is 1,220 CSS files, about 150,000 LOC, which is about 6% of the whole Dropbox codebase. This seems impressive, but it is in line with another web giant, Etsy, and its 2,000 files and 200,000 LOC.
How did that happen? Well, because a lot of people are touching CSS. When a new developer or a new team starts a new feature, they handle both the back-end and the front-end side. And most of them do not like writing CSS, so they do it quite badly. They are very good JS and Python developers, but they still do not understand the cascade, specificity, or how to abstract CSS concepts. And still, they have to write CSS, and write a lot of it.
Developers are frustrated by the archaic way we have to test CSS: save the file, alt-tab, see if it looks good. As said earlier, everything can be overridden because everything is global. The box model is counter-intuitive and the industry best practices are really young, a few years old at most.
And when something doesn't work as expected in CSS, it is easy to write more CSS to fix it. And then more CSS to fix the fix that fixed the old fix. In the end, it is too easy to write CSS, and that's why the codebase grows so fast.
So Daniel took the issue and started improving the whole CSS codebase, following a mantra I like a lot:
Move slow and fix things
His solution was not perfect, but it was a good starting point. He started by quantifying everything: counting the lines of code (using cloc), then running CSSStats (which uses Parker under the hood) to show in a nice and readable way what they were doing wrong. CSSStats outputs the number of font-sizes defined, the number of unique colors, the most specific selectors, and so on. When faced with such objective values, one can only react and try to make things better. As long as the issue is invisible and "things are working" and "CSS is not really programming", nothing will change.
He put a few rules in place in their process, like running a linter on new files, adding himself as a reviewer every time CSS was changed in some critical files, and writing a style guide. They also moved from SCSS to postCSS and managed to drop almost 80% of their LOC.
What I really enjoyed in this talk was how humble Daniel was. He told us what worked for them, and the issues they faced. All it took was one person who really cared about CSS code quality taking the lead for things to dramatically improve. Start with data, get metrics from your current codebase, explain and teach why it's bad and how to fix it, then provide some tooling to make it easier, and the results will come.
CSS Optimization
After the coffee break, we had the chance to see one (and only one) lightning talk. Only one person applied this year, which is a shame because it was really interesting and I would have liked to see more of them. I will suggest a talk myself for next year.
The one and only talk was about a project named cssnano, a postCSS plugin for minifying CSS.
This is a set of plugins that try to optimize the CSS output so it takes the lowest possible number of bytes: removing whitespace, shortening colors, renaming animations, simplifying transforms, dropping default values, and so on.
In essence, it does the same job as cssmin, but its modular approach makes it look like a better alternative in the long run.
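A quick before/after sketch of the kind of transformation involved (indicative only, not the exact bytes cssnano would output):

```css
/* Input */
.button {
  margin: 10px 10px 10px 10px;
  background-color: #ffffff;
  font-weight: normal;
}

/* Possible minified output */
.button{margin:10px;background-color:#fff;font-weight:400}
```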
Una Kravets
The next talk was the mind-blowing one. Every dotCSS needs one, and this was the one.
Una Kravets works at IBM and told us about mostly unknown properties that let you style images in ways close to what we are used to doing in Photoshop. She started by giving an academic list of all the possible options, then gave us some real-life examples.
It is possible, for example, to change an image's contrast to make its grey background appear white, and thus be able to put it on a white page without having to edit the image in Photoshop beforehand.
Then she showed a few blend modes to merge two images together, and the various ways the pixels can interact (either taking the lightest or darkest value, mixing both, only keeping the hue, etc.). At first this looked a bit useless because I could not see any real-life usage of the technique, but then she started giving more concrete examples and I was won over.
When you start mixing these already powerful filters with other CSS properties like masking and cropping, you can do really smart image composition. She recreated most (if not all) of the Instagram effects, and more, using CSS alone.
She showed us how to create those blurry background images in pure CSS, and how applying them to all the images of a photo gallery gives them a unifying look. This seems really useful on an e-commerce website where you present a lot of different products, with various sizes, backgrounds and colors: with such a filter, they will all look like they are part of the same set.
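A minimal sketch of the kind of rules involved (the values and class names are made up; the point is combining filter and mix-blend-mode):

```css
/* Give every gallery image a consistent, slightly faded look */
.gallery img {
  filter: contrast(1.1) saturate(0.8) sepia(0.2);
}

/* Tint images by blending them into a warm background color */
.gallery figure {
  background-color: #f3e9d8;
}
.gallery figure img {
  mix-blend-mode: multiply;
}
```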
I highly encourage you to have a look at CSSGram and her other projects. Her talk was really inspiring, and the effects she showed are much more than a simple showcase of the power of CSS; they have real-world uses that could greatly improve the UI and UX of the websites that adopt them. That being said, only the best unicorn devsigners will be able to harness them correctly, because it requires both design and CSS skills to understand all the possibilities.
Alan Stearns
After that, we had Alan Stearns, the new co-chair of the W3C CSS Working Group, who told us a bit more about how we, web developers, can help move the web forward.
The guy is a huge typography nerd and was unhappy about how fonts were rendered on the web, so he wrote a blog post about it, and was then contacted by the W3C to help fix this at a larger scale.
His whole talk could have been summarized in one of his slides:
Write, Talk, Share and File bugs
We should write blog posts about what is hard, boring or impossible to do in CSS. We should write about the workarounds we find. We should talk about it at conferences and, most of all, we should file bugs directly in the browsers' issue trackers. I know it can be intimidating to file a bug that far up the chain, but this is actually the only way to get browsers to improve their rendering engines.
Platform.sh
During the next break, I talked briefly with Ori about his company, platform.sh. Platform is a PaaS that gives you the guarantee that your production and staging environments are exactly the same, down to the bytecode running. This gives you full confidence that if it works in staging, it will work the same way in production. It also lets you run your tests against your production DB. The configuration is defined in a declarative language close to Ansible (but not Ansible).
The idea is interesting, but moving the whole production environment to such a new actor on the scene can be scary.
Tom Giannattasio
After the break, Tom Giannattasio told us more about advanced CSS animations.
The guy came from the Flash world, where doing 3D was a real pain, whereas it is really easy to do in CSS. The default tutorial you see everywhere is how to build a 3D cube in CSS. By tweaking it a bit you can easily create other shapes, like a cylinder, and by applying the right background, you can simulate a barrel (like the ones usually found in video games).
Actually, this is not a real 3D object; these are just various div elements rotated, with a mask applied to simulate a perspective effect. It is actually possible to go quite far in pure-CSS 3D (like this guy who built an FPS).
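As a rough sketch of the classic cube technique (class names are illustrative, and only one face is shown):

```css
/* The parent establishes the 3D space */
.scene {
  perspective: 600px;
}
.cube {
  position: relative;
  transform-style: preserve-3d;
  animation: spin 6s linear infinite;
}
/* One face, pushed forward along the Z axis; the five other faces
   get similar rotate/translate transforms */
.cube .face-front {
  position: absolute;
  transform: translateZ(100px);
}
@keyframes spin {
  to { transform: rotateY(360deg); }
}
```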
Still, what you can achieve with a single div is quite limited, because an animation you add to an element has a fixed duration. If you want to animate the same element with one timing function for one property and a different one for another, you have to resort to pseudo-elements or wrapper divs. By adding enough wrappers and playing with transform, blend-mode and opacity, you can build interesting demos.
But what use does it actually have in the web world? Well, seduction. Netflix did a parallax effect on their series selection view, for example. It is a pure gadget, but it is so nicely done that the user wants to play with it, and it is one of those little touches that make the difference.
As we have already seen in the previous talk with blend-mode, we can do pretty nice and incredible things with CSS today. But should we?
For Tom, most of what the web needs today can be done with simple CSS. More complex usage, like 3D rendering, can be achieved through CSS as well, but the most advanced effects will require WebGL. And that is a whole different world, with a completely different learning curve.
Chris Eppstein
One of the last talks was given by Chris Eppstein, the creator of SCSS. He did not speak about SCSS but gave a few plausible explanations of why CSS is often seen as not really programming.
Before the declarative version of CSS we know, there was a proposal for another styling language, close to Lisp in its syntax, but it was considered too complex for non-programmers. So yes, from its inception, CSS was created with the assumption that its users were not programmers.
At first, variables in the language were considered a bad idea by its creator himself, for the exact same reason: too complex a concept for non-programmers. But what is programming anyway? Chris had a nice definition of it: programming is simply the art of splitting a big problem into smaller ones, then moving data from one place to another, possibly with some reformatting in between.
Following that definition, CSS is programming as we are moving the pixels data from the stylesheet to the screen. The fact that the language is declarative and does not have loops or branching does not change anything.
Maybe this view of the CSS developer not being a real developer comes from the old days, when CSS was much simpler? In its first version, CSS had 53 properties, whereas it now boasts 316. Maybe it is based on the fact that developers think design is easy and does not require much skill?
In any case, even if either of those ideas were once true, systems have evolved and CSS is still part of them. It is tied to the HTML, it is tied to the JavaScript, and website and application codebases are getting bigger and bigger. Being tied like this, the complexity of CSS increases simply because the complexity of everything around it increases as well.
Chris's point was that CSS is not easy. Maybe it was, back in the day, but it is not anymore. And what we can do is limited by the language in which we express our concepts. In ancient Rome, multiplication was considered an extremely complex scientific achievement, simply because the Roman way of writing numbers made it so hard. When people switched to Arabic numerals, multiplication became much easier. The language we use restricts what we can achieve.
This limitation of the language was the driving force that made Chris create SCSS. It added variables, functions and loops. People reacted quite badly to it at first, because it broke the way CSS was designed. Over the years, we learned by experience that these new tools were indeed useful, and we are glad Chris pushed the boundaries of the language.
Daniel Glazman
The last talk of the day was by Daniel Glazman, a nice echo to last year, when he gave the opening one. I must confess that I slept through most of his talk so I cannot really tell what it was about. I vaguely remember him ranting about CSS, and that I had already seen this talk (or a very similar one) at another event, but I'll have to wait for the video to refresh my memory.
Conclusion
An awesome evening, with a great line-up of really inspiring talks. Maybe the intensity decreased a bit at the end of the day, or maybe it was just me getting tired. The tool to remember from this session is definitely postCSS (and CSSStats). I personally really enjoyed the talks about blend modes and 3D, as well as the more meta talks about the CSS dev environment.
Big up to the dotConferences team, keep up the good work and I'll definitely come again next year.
The last Meteor meetup took place at Algolia, and I helped organize it. I have never coded in Meteor (yet), but I follow the technology from a distance, and hosting the event was a really interesting experience.
I was actually quite surprised by the way the meetup is organized. There is no defined agenda. People just raise their hands when they want to talk about something. Some have slides, some don't. And once everyone has talked, you can suggest subjects for open-table discussions for the remainder of the meetup. After that it's pizza buffet and networking time.
Fastosphere
The first talk was from Vianney and François, the organizers. They were not happy with the official repository of Meteor packages, called Atmosphere, and decided to build their own, called Fastosphere. They scraped the data from Atmosphere and pushed it to Algolia, in order to provide a faster and more relevant search.
They both love what we do at Algolia, so we did not even have to pitch it ourselves: they did it for us, and much better than we could have done. They showed a bit of code on how to push data to Algolia, and then how to search it. They showed the dashboard and the various analytics metrics you can get from it.
Vianney even said "Algolia is too fast, we had to query their servers directly because if we had gone through our Meteor server, this would have been way too slow".
Other talks
Then other people suggested talks. We heard about Slack bots written in Meteor, to order food from PopChef (a French food delivery startup) or to order an Uber. A guy also showed a really nice-looking website where you can upload your holiday videos and it will build a 2-minute highlight video of the best parts.
Open tables
After that, two open tables were created, about Docker and testing. I went to the one about testing, and it seems that testing (end-to-end as well as unit) is not something a lot of people do in the Meteor world. There is no clear and easy way to test things, which in my opinion does not smell very good. I did not attend the Docker open table, but there seems to be a nice git repo with all the needed information.
Conclusion
I'm still impressed by the way this meetup organizes itself and by how well it works. Even without knowing anything about Meteor, I had a really great time and nice conversations with the attendees.
For the third anniversary of the HumanTalks, we did a special event. We were hosted at the Société Générale headquarters, in one of the most beautiful conference rooms I've ever seen. There were also more attendees than usual, and we gave away some goodies, T-shirts and JetBrains licenses at the end.
It was also the first session with Antoine Pezé as our official new team member.
And for me, the first time I went to the SG building. I must say I had a bad feeling about the place at first. I feel like La Défense is really a creepy place, with everything I don't like about modern society. Big grey buildings that tower over you, and everything seems to have been constructed for giants. As a human being, you feel out of place. Big tall towers with thousands of people working in them, all dressed the same. The only sources of light were from the ads and the mall windows. Grey, work, consume.
But then, in all this ocean of sadness, we met Adrien Blind, who is in charge of organising the meetups at the Société Générale. And I discovered that the SG is actually much more interesting on the inside than on the outside. They are quick to iterate, have DevOps all the way from top to bottom, and really know and apply the Agile and Software Craftsmanship philosophies. We'll see more about that in the last talk.
Front-end testing
The first talk was given by William Ong, a former coworker from Octo. He presented the front-end testing refcard they developed. It's a physical cardboard flyer that lists all the different kinds of tests you can apply to your front-end (from unit testing to performance and security testing), with examples, tools and advice.
The content of the refcard is really great. It gives a nice overview of what can (and should) be done today, but also advice on the cost of each kind of test and when to implement it or not.
The web front-end landscape has evolved a lot in the past years, and it is getting more and more complex as more and more logic moves from the back-end to the front. To keep it sustainable, we now need the same kind of tooling we're used to using on the back-end: testing harnesses.
Unit tests are a must-have. He did not even expand on that subject because it is obvious: no good-quality code can endure the stress of time without unit tests. Code that isn't unit tested is not finished.
Then come all the other kinds of tests. When should we use them? Which tools should we use? Are they really needed? All the other tests are harder to put in place, so you should only add them if they help you fight a pain you have already experienced. They are costly to start and costly to maintain, so the benefit must be higher than the cost.
For integration testing (also named end-to-end testing or functional testing here), you should only add tests on the critical paths of your users, the ones that generate money (the subscription funnel, for example). This kind of test indirectly tests the whole stack. It will warn you when the core functionality you're testing is broken, but it won't really help you diagnose where the issue comes from (database, back-end, front-end, etc.). These tests are also the longest to write and will yield false positives whenever you update the design or markup.
Talking about design, it is also possible to test it. Using PhantomCSS, you can take screenshots of your whole page or of specific parts of it and check that they did not change since the previous commit. This helps you catch changes to the website introduced by a seemingly unrelated CSS commit. Those tests can be invaluable, but they will also yield false positives when a design change is actually expected. As with the integration tests, limit yourself to the truly main parts of your app.
You can also test for common security exploits, like the OWASP Top 10. Some tools can test them on your website and warn you of any vulnerability. Another approach is to ask for a security audit, and I personally also recommend opening a bug bounty program, on HackerOne for example.
In the end, all tests share one benefit: they give you the freedom to make the code evolve without being afraid of breaking things. They give you complete trust in the code.
It was William's first public talk, and the room was quite impressive with more than 120 attendees, so I guess he freaked out a bit. There were a few silences where you could feel that William was intimidated, and he looked at his slides a lot to reassure himself. In the end, the message got through and it was interesting, and we'll be happy to have him on stage again if he wants.
How to win at TCG with code
I don't know how many coffees Gary Mialaret took before coming on stage for the next talk, but he seemed to be really happy to be here and was almost jumping and running while speaking.
Gary told us all about TCGs (Trading Card Games), and how those kinds of games are no longer really about trading (Hearthstone, for example, does not allow trading cards at all). The common ground of all those games is that they are duels between two players, where each player creates their deck of cards before the game and must carefully balance the number of monster/spell cards (which can make them win) with the number of resource cards (which are needed to play the monster/spell cards).
Empirically, every hardcore Magic player knows that a balanced deck needs 24 resource cards, but Gary wanted to dig into the math behind it and prove that it is indeed the optimal number.
He introduced us to the hypergeometric distribution, a mathematical function that gives the probability of drawing a hand with at least n "good" cards, given the number of cards in a hand, the number of cards in the deck, and the ratio of good to bad cards in the deck.
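For reference, here is the standard formula behind it (the notation is mine, not from the talk): with a deck of N cards containing K "good" ones, the probability of drawing exactly k good cards in a hand of n is

```latex
P(X = k) = \frac{\binom{K}{k}\,\binom{N-K}{n-k}}{\binom{N}{n}},
\qquad
P(X \geq m) = \sum_{k=m}^{\min(n,\,K)} P(X = k)
```

and the "at least m good cards" probability is the sum on the right.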
Applying the method to a basic Magic deck does not give the 24 cards we talked about earlier. This is because the method does not take into account another Magic mechanism, the mulligan, which lets you discard your whole starting hand and draw a new one instead. To simulate that, he had to resort to a bit more coding: he generated thousands of different hands, discarding them when they did not meet his expectations, and managed to get back to the magic number of 24.
He went even further, simulating basic Magic rules and playing against what is called a goldfish (a player that does not respond to attacks and basically does nothing). He then built adaptive algorithms, applying something similar to genetic selection to deck creation: he started with simple decks, made them play against goldfishes, kept the ones that worked best, applied a few slight random changes (a bit more of this card, a bit less of that one), and made them play again, until he reached the best possible deck.
His final conclusion is that we should not hesitate to put our developer minds to work on problems that are not related to development. The most important thing when we want to solve a problem is to know which questions we're looking for answers to. Magic is a very complex game; it is not possible to code every possible rule and generate every possible deck to find the ultimate one. But by focusing on one specific problem, we can learn a lot about the underlying principles, and this, in turn, helps us build a better deck.
How our brain reacts to instant
The next talk was given by Gaetan Gachet, with whom I work at Algolia. He talked about the way our brain reacts to instant feedback.
His presentation explained the theory of information foraging. The main idea is that the way we look for information on websites today is similar to the way our ancestors hunted.
When you hunt, you know a few interesting places where you might find game. But sometimes no game shows up, so you wait a bit more. And more. And more. Until you decide that it is not worth waiting any longer, and that you might as well move to the next place you know could be a good hunting spot.
But this second place is far away, and it will take you hours to get there, which is why you wanted to wait here a little longer. Still, at some point you decide that your current spot is not good enough and you take the chance to move to the next one.
What happened in your brain was actually a simple equation of risk and return. At the start it seemed more interesting to stay and see if you could find something here, because you knew the place should have animals to hunt. But the longer you wait without results, the more you think that moving to the next place might actually be more interesting, until you decide that you've wasted enough time and actually move.
When looking for information online, we act the same way. We go to a first website because Google told us it might have the information we are looking for. We search its list of products, trying to find the one we want. We skim pages and pages of results until we realize that what we are looking for is not here, and that maybe we'll have better luck on the next website.
This would not have happened if the first website had either let you easily find what you were looking for, or quickly told you that it did not have it. It is simply a matter of how much information you get compared to the time you spend.
In that analogy, Google is the whole territory the hunter can cover in a day, while each website is a known hunting zone. But because Google is so fast, moving from one hunting zone to the next is really easy. So you are more easily convinced to try another hunting zone if you did not find what you were looking for in the first minutes or seconds of hunting.
The paradox here is that the faster Google is, the less time people will spend on your site, because they know they can easily go search in another website if yours does not yield relevant results quickly.
That is why it is very important to give quick and relevant results when users search on your website: you have competitors, and users will easily jump to a competitor's website if they do not find what they are looking for on yours. Being the first result on a Google search page is not enough.
Culture Craft
The last talk of the day was a nice story, by Thomas Pierrain, about what happened at Société Générale over the past years. Some people wanted to "wake up" the organisation. They felt that the overall strategy of the company was not adapted to its culture, and that this culture was slowly dying because of it.
The speaker actually confessed that he wanted to leave the company at the time. But he stopped short when he realized that his thinking was just something along the lines of "it was better before", which is what stupid old men usually say. And he didn't want to become a stupid old man, so he decided to do something about it.
He started organizing BBLs internally, where they would book a room during lunch and one of them would talk about a subject he was passionate about while the others listened (and ate their lunch). They also organized coding dojos, where everybody in the room works on the same programming problem, one after another, everybody helping the others.
But it did not start that fast. At first he was alone. So he asked a friend if he would be interested in listening to him give a BBL. The friend was OK with it, and talked about it to other friends. So they did it. They did not tell management or book a room; they just took an empty room and did it. They knew that, being some of the oldest developers in the company, nobody would listen to them anyway. So they just did it.
The first BBL was a success. The room was small, so not everybody could come in. Those who couldn't come in wanted to come to the next one. It started to work on a "first come, first served" principle and gave the event a nice image.
They also created events codenamed "Dude, you have to see this!", where they take a room, one person plays a YouTube or TED talk video they liked, everybody watches it, and then everybody discusses it.
And they kept creating things like that. Things like the lunch mob, where they gather for lunch, code on a project and push everything at the end. They posted photos internally, put some on Twitter, and in the end, what started as a single initiative is now a well-known part of how things work at the SG.
His main advice was simply to make it happen. Some people won't like it, so don't invite them and don't try to convince them. Focus on those that are interested and build something for them and with them.
Conclusion
Overall a really nice session. Thanks a lot to the Société Générale for hosting us in that wonderful room, and thanks for their very inspiring talk as well.
Unfortunately, even though the room had all the video capture equipment, we still did not manage to get the videos... It taught us to always record with our own devices :)
This was the third edition of the ParisCSS meetup. I could not attend the first two sessions, so it was a first for me. It took place at Numa, and we were about 70 attendees.
CSS Masks
In the first talk, Vincent De Oliveira told us about CSS masks. He started by telling us that we might all already use CSS masks without knowing it.
border-radius and overflow:hidden are actually masking techniques, even if we do not think of them that way. He showed some nice and clever tricks using a few simple divs, rotation animations and overflow:hidden.
He then told us about clipping, or how to use images with transparent pixels to define which part of an image should be displayed. He had an impressive demo of text visible through a complex shape (here, the Eiffel Tower), using only an image as background and a mask on top of the element.
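A minimal sketch of the kind of rule involved (the URLs are placeholders; the element stays visible only where the mask image is opaque):

```css
.masked-text {
  background: url("background-photo.jpg") center / cover;
  /* WebKit still needed the prefixed property at the time */
  -webkit-mask-image: url("eiffel-tower-mask.png");
  mask-image: url("eiffel-tower-mask.png");
  mask-repeat: no-repeat;
  mask-size: contain;
}
```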
The end of the talk was really impressive, but it was also really hard to understand how things worked without testing the code directly. He showed us how he animated icons and animated cursive script, but to be honest I did not really understand how it worked. I'll have to play with the slides.
Rendering performance
The second talk was done by Jean-Pierre Vincent (alias @theystolemynick). I gave him a #millisecondsmatters Algolia shirt right before the talk, and he wore it while speaking. Algolia everywhere ;)
The talk was dense and covered a broad array of topics, but the overall rhythm was slow and it was sometimes hard to follow. It started with the conventional numbers for this kind of talk (on how speed is directly linked to revenue), but he also spoke about how speed is now good for SEO and how the human brain reacts to instant results.
He covered repaints and reflows in CSS, but focused on CSS animations compared to jQuery animations. Done well, animations can really improve the user's perception of speed. Quickly moving items or fading images when changing pages can distract the user long enough that she does not notice a page is loading.
His point was that jQuery animations are not inherently slow because they use JavaScript, but because that JavaScript asks the DOM engine for position and dimension information. First, to return correct values, the DOM engine must force a reflow of the whole page. Second, it requires a communication bridge between the JavaScript thread and the DOM engine, which means stopping both of them while they communicate.
For every animation that does not depend on user input (e.g. moving an image from one fixed position to another), we can pre-compute the bounds, simply create two CSS classes (one for the start and one for the end), and use CSS transitions to let the GPU handle the rendering.
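A minimal sketch of this two-class approach (class names and values are illustrative):

```css
/* Animate transform and opacity only: both can be composited on the GPU */
.panel {
  transform: translateX(0);
  opacity: 1;
  transition: transform 300ms ease-out, opacity 300ms ease-out;
}

/* Toggling this class (from JavaScript, for example) triggers the animation */
.panel.is-hidden {
  transform: translateX(100%);
  opacity: 0;
}
```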
For "unpredictable animations" (eg. based on the way the user moves her mouse or scroll the page), there is no silver bullet. The main advice is to throttle or debounce the call. There is no need to fire an animation on every scroll event, but only on the first or last one.
As always, there is no perfect solution; you must be aware of the pros and cons of every technique. For example, using the GPU (through CSS animations) allows much faster rendering, but it will also drain the battery more quickly on mobile devices. You can use the translateZ(0) trick to force an animation onto the GPU, but this prevents the browser from doing its own optimisations.
The trickiest part is that some devices have better GPUs than others. Sometimes it is better to let the CPU handle the animation on specific browser/device targets. As it is nearly impossible to target each mobile/browser combination with the best implementation, you either have to go for the lowest common denominator or focus on a few specific devices. Neither of these solutions is optimal.
He gave a quick overview of the various layout techniques in CSS. We started with <table>-based layouts, using the HTML markup to split our pages. Then we used float and display: table, which are still hacks and were not made for layout. Then came display: inline-block, which has the pros of inline (stays on the same line) and block (can have dimensions), but still has its own shortcomings (whitespace in the markup and the parent font-size can affect the rendering).
In the end, doing layout in CSS is not easy and requires deep knowledge of all the quirks and limitations. Frameworks like Bootstrap exist so users do not have to see the complex code needed to build a simple grid layout. Few people really know how things work internally.
Then came flexbox, a set of CSS properties designed for layout purposes. With it, you no longer need Bootstrap for your grid needs.
Still, all these techniques require specific markup to separate your rows from your columns. If you want a different layout at various RWD breakpoints and need to move an element from one row to another, you might have to add the element twice to the markup and show/hide each copy depending on the current viewport.
display: grid is the future of CSS and fixes all that. By adding it to a parent element and then adding grid-row and grid-column attributes to its children, we can simply specify where each cell should go. It also comes with its own set of rules letting you define a custom grid template, default spacing, stacked cells, cells that move across breakpoints, dimensions based on the available space, and even ASCII art to draw your grid.
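A minimal sketch of the "ASCII art" grid template (the area names are made up):

```css
.layout {
  display: grid;
  grid-template-columns: 200px 1fr;
  grid-template-areas:
    "header header"
    "nav    main"
    "footer footer";
}
.layout header { grid-area: header; }
.layout nav    { grid-area: nav; }
.layout main   { grid-area: main; }
.layout footer { grid-area: footer; }
```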
This is brilliant and really powerful, while having a simple and straightforward syntax. Unfortunately, it is only available in the latest Chrome, behind an experimental flag.
Anyway, this future CSS syntax will one day become our present CSS syntax, so we'd better start using it now to get familiar with it and push issues and feature requests.
Conclusion
The overall level of the talks was quite high for such a young meetup, and the large number of attendees shows that Paris needed a CSS meetup. I will speak at the next event, and I encourage you to submit your own talk ideas or proposals.
Note: for this post, I'll try to write in English. I'm now working in an English-speaking company and I'm already writing emails in English to give feedback on the HumanTalks sessions. I'm not used to writing such lengthy posts in English, though. Hopefully I'll get better at it the more I do it.
This HumanTalks session took place at UrbanLinkers, a Parisian recruiting company. The room was packed and we had just enough seats for everybody.
IBM Bluemix
The first talk was by Alain Airbon, who was very nervous. He said so right at the beginning of his talk, but it was quite obvious. He talked about Bluemix, the PaaS developed by IBM. He showed us how the Bluemix dashboard UI works, but it felt a lot like a sales presentation, and we did not learn anything we could not have learned in 10 minutes in the dashboard ourselves.
I did not really understand what Bluemix actually does. He spent very little time explaining what a PaaS is (he compared it to IaaS, explaining the differences, but even for IaaS I would have loved an explanation). I think Amazon is to IaaS what Heroku is to PaaS, but I'm not even sure.
It was clear that Alain was very nervous on stage and wanted the audience to like his talk, but it felt too shallow and too commercial. And developers have a very low tolerance for bullshit. A piece of advice I could give to first-time speakers is to talk about something they really like and are passionate about, so the audience can feel how they feel about it. Personal feedback on how he used Bluemix to deploy one of his own projects would have worked better, I think.
Still, I liked the way he introduced Bluemix: "Bluemix is simply Cloud Foundry, repainted in IBM colors. We might not be better than the alternatives, but at least we follow standards and use an open-source project". This was honest and straight to the point.
I hope this first talk will not discourage him from trying again another time, we're still open to hosting him again and can give some advice on public speaking.
Code review
The next talk, given by Michel, was about code reviews. Michel is convinced that regular code reviews inside a team lead to better, more maintainable and more stable code in production. He quoted numbers as high as a 4-to-1 ROI: for one hour spent on code review, you can save up to four hours of debugging. Almost 65% of bugs can be found before shipping, thanks to the reviews.
Michel then shared three personal stories with us. First, he told us about the time he worked for 10 days on a feature originally estimated at 3 days. In the end he did a lot of refactoring, touching hundreds of classes and thousands of LOC. Once done, he asked a colleague for a review. Thirty minutes later, the colleague came back to him with a few issues spotted, mostly variable naming and a few typos. So they pushed to production.
Two weeks later, a bug popped up. After two days of investigation, it was obvious that the bug came from this new feature, so they had to start debugging live.
Here is what they did wrong. First, they rushed the whole thing. On average, you cannot review more than 300 LOC per hour, so take your time and do it with a fresh mind, not late at night. If the feature involves a lot of changes, it's expected that the review will also take some time. Also, Michel should not have waited 10 days before submitting his work for review; he should have asked for feedback earlier.
It also seems that the reviewer did not know what to look for in the code review. He found typos and naming issues, but he did not really look at the logic and the behavior. No one had told him what to look for.
In the second story, Michel told us about Bob, who came to him one day at the coffee machine. Bob was upset, saying that Martin's code was really ugly and that he was still writing hard-coded SQL queries. Bob said he had spent the last hour fixing Martin's issues.
That was a perfect recipe for failing at code review as well. Bob never spoke to Martin directly, he just complained to Michel; instead, they should have talked together. Then, Bob should never have corrected the issues himself: the author should fix his own errors, otherwise he will never learn. And finally, this rule about hard-coded SQL queries had never been written down anywhere, so you cannot blame people for not following it.
The last story was about Kent and Beckie (pun intended), who were arguing loudly about snake_case vs camelCase, shouting across the open space. In the end, Kent left saying "you're a shitty coder, Beckie!".
This is really the worst thing that could happen. Kent was criticizing the person, not the code. When in doubt, always refer to the principles of egoless programming: your code is not an extension of yourself. It can be judged, improved or deleted; this has nothing to do with you.
Also, it was not the right time to argue about snake_case vs camelCase. Refrain from starting a framework war or any kind of trolling when doing a code review; this is neither the time nor the place. Find a compromise, write it down and follow it.
I really liked this talk. Michel gave real-life examples that we can all relate to. Coding involves much more human contact than we might initially imagine. The code of a project belongs to the community building it, not to the one person who wrote it. Anyone can write code; writing maintainable, understandable, shareable code is harder. The only way to do it properly is to talk to your colleagues, and code together.
Memory techniques
This talk, I gave it myself. I've written a summary of it on my blog.
How to hack your electricity meter
Cédric Finance told us how he hacked his electricity meter at home with a Raspberry Pi. He did not actually hack it; he simply plugged the Raspberry Pi into the public part of the meter and read the data it sent.
He did it because he realized that we never really know how much electricity we're using. When we leave the water tap open, it's obvious how much water we're wasting, and we can almost see money flowing down the sink. For electricity, we never really know until we receive our bill.
Getting back to the technical side of things: the meter sends data as a stream of keys/values. The data is not self-explanatory, but fortunately the spec is available and it can easily be parsed. He realized that it gave him much more accurate information than what was printed on his bill.
As soon as he had the data, he pushed it to Thingspeak, a SaaS service for pushing streamed data and generating graphs. With that, he quickly saw that his water heater was malfunctioning: it was supposed to run only during off-peak hours, but was actually running all the time. Knowing that, he fixed it.
He then dropped Thingspeak to build a custom dashboard to capture data with a higher frequency. He let it run for more than a year and was then able to compare usage from one year to the next.
At first he did it just for fun, but he soon realized that he could better understand his electricity consumption and improve it. He has now gone even further, with humidity and heat sensors in his home, and graphs for all of that.
Conclusion
Once again, a nice series of talks. And once again, my favorites are the personal sharing of experience.
Next month will be a special event as we'll be celebrating the HumanTalks Paris birthday, in a bigger room than usual. Hope to see you there.