MiXiT 2017: The Human Part

Last week I spent two days in Lyon, France, for the MiXiT ([mɪks ɪt]) conference. I was pleasantly surprised by the high quality of both the event itself and the talks, and even more by the care shown by the staff and attendees, the wide breadth of topics, and the conference's civic and activist involvement.

You could feel the conference was for people, for humans, for citizens, before being for developers. The tech skill level was impressive too, but what impressed me most was how intertwined both aspects were.

This blog post is about all the non-tech talks. I will write a tech-oriented wrap-up soon and link it from here.

Group dynamics

The first talk set the tone (pun intended) for the day. When I entered the main auditorium, I discovered a live classical orchestra on stage. The conductor talked about group dynamics and their influence on the rhythm.

Orchestra

He never explicitly stated it, but I could not resist drawing parallels with other kinds of groups, like dev teams or companies.

Each member of a group may play their own unique part, but put together, the melody becomes something different from the sum of the individual lines. Each individual is skilled at what they do, and they all know the overall melody they want to achieve as a group. They still need to be aware of the others and follow the same rhythm. The bigger the band, the harder it is for individuals to keep in sync with everyone.

That's when the conductor comes into play. Each individual in the group now only needs to focus on the conductor and follow their rhythm. The conductor keeps track of what everyone is doing and helps those in need, adapting the tempo so the group acts as a whole.

A good leader lets all members of the band focus on what they each do best (playing their instrument) while reducing the energy they need to spend on the tempo. A mediocre leader will not prevent a skilled band from playing, but will make it harder. Each individual will do their best, and the result will still be enjoyable. A bad leader can bring a group to a halt, where the energy needed to follow the instructions is such that people have none left for what they are skilled at.

An inspiring talk, all in the nuances of "show, don't tell". It got my brain started, which is always a good way to start a day of conferences!

Keynotes

Other talks, especially the keynotes, were centered on issues we face as citizens, not as developers. We had talks about local currency, universal income, ethics, and alternative voting systems.

Lyon has its own local currency, the Gonette (as do more than 40 other cities in France). Local currencies help money keep circulating in the local economy instead of leaking into global speculative markets.

La Gonette

Universal income is the idea of giving every citizen of a country a fixed income, with no conditions attached. The speakers debunked the classic objections: "so even rich people will get it?", "but people will stop working!", "it's too expensive, states can't afford it".

The debunking I liked most came from polls showing that when asked about universal income, most people think that others will stop working. But when asked whether they themselves would stop working, they say no. Everyone thinks others are lazier than they see themselves.

Ethics

Then Guillaume Champeau made me think again with questions about ethics as a developer.

We all know that bugs are bad and we should avoid creating them. How far one should go to make sure one's code is bug-free varies greatly from one individual to another. I might do TDD from day one while others might push to prod without even testing manually. When you're developing your personal website that might be okay, but what happens when you're developing embedded code for self-driving cars or planes? A bug might kill people there. How far should you go in your testing? When can you say you've done enough?

What do you do when your company asks you to develop something illegal? Or if you're sent on a mission for a company you find unethical? Coding something you know will be used to create weapons that will kill people? What about stopping bug fixes and releases on a product you know has security holes that will leak personal data? What about feeding data you know is incomplete and/or biased to a machine-learning algorithm?

What we, as developers, might be lacking is an ethical code, some kind of oath like in other professions. Doctors, lawyers, architects, and accountants have to swear an oath. They are personally responsible if they fail to follow the ethical rules of their professional bodies. We don't have those limits. We can do whatever we want and hide behind "it wasn't me, it's the algorithm" (thanks to the complete misunderstanding of this word by the media and most judges).

Trying to do our best is not enough. Even swearing an oath is not enough. Mistakes will be made, shit will happen, data will leak. Still, thinking about what we don't want to happen is the first step toward finding ways to prevent it. Technology is shaping the world of tomorrow, and we are its makers. We have incredible power in our hands, and not thinking about the damage we could do is irresponsible.

I don't know which rules we should abide by, collectively, and I'm not even sure rules are the solution. But we can start, individually, to think about the limits we should never cross. Let's not wait until it's too late. Better safe than sorry.

Once again, interesting questions, and my brain racing. I've asked myself those questions in the past and came up with my own ethical lines for most of them. But I was happy to discuss them afterwards with more junior developers who told me they had never thought about this before and now have to figure out where they stand.

After the atrocities of WW2, the Universal Declaration of Human Rights was signed. It gave every individual a set of rights they could use to oppose states. It's 2017 today, and tech companies have more power than states. Think about Facebook and Google, which have more registered users than any state has citizens, and much more information about you than any of them.

UDHR

I don't act the same when I'm alone at home or with my girlfriend. I don't talk about the same subjects in the intimacy of my home and in the subway. I don't speak and act the same way when I'm casually chatting with a coworker around the coffee machine and when I'm on stage in front of 250 people. I might sing in the shower in the morning, when I'm alone, but would never dare do it in the street. There are things I can talk about with my friends that I would not dare say if my parents could hear, things that are private and that I can discuss with my family but would not want to share at work, or vice versa.

We act differently based on who is watching us, who can hear what we're saying.

Now, think about everything you have told Google, and ask yourself if you would ever have told the same things to your loved one, your boss, your parents, your friends, or a random person in the street.

Did you search for some illness symptoms you had? Did you search for an address for a place you wanted to go to? Did you search for a political party program? Did you search for a childhood crush?

After Snowden revealed that the NSA had access to this data and used it to track terrorists, people started to change their behavior. They stopped searching for certain content, afraid that they would trigger something on the NSA side and be flagged as terrorists. When Facebook announced that it could automatically detect whether you were interested in some content based on the time you spent on it, people started to consciously limit the time they spent on each article, once again fearing they would trigger something.

Knowing who is watching you changes your behavior. People act differently now that they know that what they do online is not private and can be accessed. People are afraid they will be judged by the content they read, and that it will backfire on them.

Let that sink in for a minute. People stop searching for some topics, stop reading some content, because they know they are watched. Because they know they have no privacy, they're afraid of being judged by what they read or say, so they stop reading or saying anything that is not "the norm". They stop doing in private what they would not do in public. They stop looking for information.

Without access to information, you can only make poor choices. Choice is the cornerstone of democracy; if you have no choice, you have no democracy. Removing privacy is dangerous because it removes the ability to be informed, and hence to make sensible choices. Removing privacy destroys democracy.

That's exactly why, in a democracy, voting is anonymous. If voting was public, people would vote differently.

Fighting for privacy is not because people have some dark secret they want to keep in the shadows, it's actually a fight to keep democracy alive. Privacy is a mark of trust. No society can work without trust.

Majority Judgment

The keynote about the election system was perfectly timed, a few days before the French presidential election.

It started by explaining the limitations of the current voting system, where you do not vote based on who you want elected, but based on how your vote will influence the result. The goal of an election is to aggregate what voters choose, in order to pick the most consensual candidate. The majoritarian system used in France does not correctly measure opinions, as it forces voters to pick only one candidate. And we don't have an opinion on only one candidate; we have opinions on each of them, but those opinions are not recorded.

When it comes to something as important as picking the person who will run the state for 5 years, being asked to summarize all our views in a single name does not seem appropriate.

What is even worse is that results can be completely different if we add or remove candidates. Of course, if you add a candidate that everyone loves, he or she will be elected and the result will be different; I'm talking about adding a candidate that will not win. Just adding one into the mix will change the outcome. Why should adding someone who won't win change the final winner?

A better system would let any number of candidates take part in the election, and the exact number should not change who wins. One way to do that, instead of asking "who do you think is the best of these 10 candidates?", would be to ask "Between A and B, who do you think is best? And between A and C?", asking the question for each pair of candidates. Based on that, we could find the candidate that is most often preferred over the others.

This system is interesting but still has one paradox. It is possible for the results to be such that no one wins. Just like in the Rock-Paper-Scissors game, you can find a configuration where no clear winner exists. This is called the Condorcet paradox.

An even better system would be to ask each voter to evaluate each candidate on a specific question. Something like "For each of the following candidates, how well do you think they would represent French interests?", followed by a grid with one candidate per row and columns for Very Good / Good / Average / Bad / Very Bad.

The question is important, as is the wording of the choices. It is important to note that we're not asking voters to give a rating (like a star rating); we're asking them to answer a question.

It is important because it has an impact on the way we then calculate who wins. Maybe we want the candidate with a majority of Very Good, or the one with the fewest Very Bad, or something different. The suggested way, called "majority judgment", is to pick the median value for each candidate (where 50% of the votes are above and 50% are below). The candidate with the highest median is elected. In case of a tie, we keep only the tied contenders and do a "trim average": we remove the best and worst results of each and recalculate the median until we have a clear winner.
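
Here's a minimal sketch of that counting rule in JavaScript; the grade scale and the tie-breaking loop follow the description above, but the names and data shapes are mine, not an official implementation:

```javascript
// Grades, ordered from worst (0) to best (4)
const GRADES = ['Very Bad', 'Bad', 'Average', 'Good', 'Very Good'];

// Median of an ascending-sorted array of grade indexes
const median = (sorted) => sorted[Math.floor((sorted.length - 1) / 2)];

function majorityJudgmentWinner(ballots) {
  // ballots: { candidateName: [gradeIndex, ...] }
  let contenders = Object.entries(ballots).map(([name, grades]) => ({
    name,
    grades: [...grades].sort((a, b) => a - b),
  }));

  while (contenders.length > 1 && contenders[0].grades.length > 2) {
    // Keep only the candidates with the highest median grade...
    const best = Math.max(...contenders.map((c) => median(c.grades)));
    contenders = contenders.filter((c) => median(c.grades) === best);
    // ...and in case of a tie, trim each one's best and worst ballot
    // and recompute, as described above
    if (contenders.length > 1) {
      contenders = contenders.map((c) => ({
        name: c.name,
        grades: c.grades.slice(1, -1),
      }));
    }
  }
  return contenders[0].name;
}

// Example: A's median is GRADES[3] (Good), B's is GRADES[2] (Average)
majorityJudgmentWinner({
  A: [0, 2, 3, 4, 4],
  B: [1, 2, 2, 3, 4],
}); // => 'A'
```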

I really like that the way we express our choice is not binary and carries many more nuances. Finding the right way to extract one winner from all this data is tricky. If everyone plays fair, it will pick a candidate no one has a strong opinion about: no one really wanted him or her, but no one really rejected him or her either. Is that really what democracy is about? I'm not sure, but once again it raises an interesting question.

But the best part of this voting system, for me, is that we can easily see if candidates are massively rejected. The vote can show that people want none of them, and a new election with new candidates can be held.

Was everything really that awesome at MiXiT?

Yes.

As you can guess from this long blog post, the talks of the day were highly interesting. There were many others I could not attend, but from the short introductions we got I know there were talks about astrophysics, diversity, design, ecology, and remote working.

There was one talk that felt oddly off compared to the others, about a connected green village. The subject could have been interesting, but it was presented in a way that was borderline cliché: TED-talk rhythm, "inspirational video", text-heavy slides. The message was more or less:

We have this awesome idea, and pre-rendered 3D images of what it will look like, with happy smiling people in it.

It will be a smart village, with smart sensors, smart water pumps and smart vegetable gardens. We'll do Big Data with it, and with AI and Machine Learning, we'll make the world a better place!

Oh, and we're hiring because we have no idea how to do anything technical.

Regen Villages

They had everything planned out over several years already, and the speaker came with her own film crew. It felt to me like a marketing stunt to get more footage for themselves, without really caring about sharing anything with the audience. The name of the speaker was never mentioned on the slides, only the name of the man who had the initial idea. It felt really out of place compared to the other talks.

Conclusion

Many talks made me think, made me ask questions about myself, about work, about our world, our society. I have many more questions after the event than I had before, and it's a great feeling. I hadn't felt that way since the very first editions of ParisWeb. Congrats to the whole team, and I'll see you next year.

Hotel de Ville

Electron Meetup April 2017

Tonight was the Electron meetup in Paris. I didn't even know there was a dedicated meetup. I went because my coworker Baptiste was giving a talk and I wanted to support him. I'm glad I went, because I learned a lot.

Auto-update in Electron

Baptiste talked about one of the internal applications we use at Algolia. The app lets Algolia employees search content spread across various other apps. Using a simple search bar, we can search GitHub issues, Asana checklists, HelpScout tickets, Salesforce leads, Confluence pages, etc.

He talked about the auto-update mechanism. Electron apps being desktop apps, people have to install them on their machines. Developers cannot push new content as they would for a website. They need an update mechanism in place, and it has to be automated, because they cannot rely on users manually updating their version.

Baptiste used a system called Nuts, a Node app that you can host on Heroku and that works as a middleware between your GitHub repo (where you push the new builds) and the installed applications.

When the application starts (and every 5 minutes after that), it checks if a new package is available by contacting the Nuts server. Nuts in turn queries the GitHub repo (using auth tokens if the repo is private). If a new version is available, Nuts forwards it to the application, which downloads it in the background. If no version is available, nothing happens. Both cases are handled in Electron through events fired when an update is or isn't available.
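
For reference, here's roughly what that wiring looks like with Electron's built-in autoUpdater; the Nuts URL is a placeholder, and the exact feed path depends on your deployment:

```javascript
const { app, autoUpdater } = require('electron');

// Placeholder URL for a Nuts deployment; adjust to your own setup
const server = 'https://my-nuts-server.example.com';
autoUpdater.setFeedURL(`${server}/update/${process.platform}/${app.getVersion()}`);

app.on('ready', () => {
  // Check on startup, then every 5 minutes
  autoUpdater.checkForUpdates();
  setInterval(() => autoUpdater.checkForUpdates(), 5 * 60 * 1000);
});

// Electron downloads the update in the background and fires events
autoUpdater.on('update-available', () => console.log('Downloading update...'));
autoUpdater.on('update-not-available', () => console.log('Nothing new'));
```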

Now that the technical part is over, you have a lot of UX questions to ask yourself. What do you do with this new version? Do you install it without your user knowing? Do you ask for confirmation first? Do you install it right away or do you wait for the next session?

At Algolia, we decided not to install new versions silently. Most of the users of the app are developers; they want to know when they update and which version they are using. They don't like too much magic. From a pure debugging standpoint, it was also easier for Baptiste, when a bug occurred, to know which version users were upgrading from. Installing updates silently would have hidden all that information. We decided to display a prompt asking users if they wanted to install the update now or later.

The app itself is something you use many times a day, but you rarely spend more than a few seconds in it. You display it, type what you're looking for, find it, click it, and it opens a new tab with your result. Once a result is selected, the app disappears. It means that most of the time, if there was an update, it hadn't finished downloading before you were already doing something else.

Prompting the user with "Do you want to install the new version?" while they were doing something else was too intrusive. We decided that if the app was not focused when the download finished, we would simply not show the prompt and keep the update for next time.

The next time you open the app, if an update is pending, we prompt to install it. Once again, if the user clicks "Later", we just ask again next time.
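
A sketch of that focus-aware logic, using the promise-based dialog API of recent Electron versions (the function name is mine):

```javascript
const { autoUpdater, dialog, BrowserWindow } = require('electron');

autoUpdater.on('update-downloaded', () => {
  const win = BrowserWindow.getFocusedWindow();
  // Only interrupt the user if the app is focused right now;
  // otherwise keep the update for the next launch
  if (win) {
    promptForInstall(win);
  }
});

async function promptForInstall(win) {
  const { response } = await dialog.showMessageBox(win, {
    message: 'A new version has been downloaded. Install it now?',
    buttons: ['Install now', 'Later'],
  });
  if (response === 0) {
    autoUpdater.quitAndInstall();
  }
  // On 'Later', do nothing: we'll prompt again next time
}
```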

Finally, Baptiste also added a manual "Check for updates" button. Electron apps really look like desktop apps, and our expectations as users are not the same for a desktop app as for a website. With apps, we want to feel in control. It's something we installed on our machine, so we should be able to tell it when to update if we want to. The button did little more than trigger a check for updates immediately instead of waiting up to 5 more minutes, but it gave users the feeling of being in control.

To conclude, keep in mind that the technical part is usually the fastest. Making sure the workflow is enjoyable for your users is the hardest part. People have different expectations of desktop apps than of websites. Just because it's the same code for you does not mean it will be the same experience for your users.

GitScout

The second talk was by Michael Lefebvre, about GitScout, a macOS app to handle GitHub issues. I was surprised at first that an Electron app was advertised as a "macOS app", since Electron is supposed to run on any platform.

I understood why they went that way. They went to great lengths to give their app the same kind of UX as a native macOS app. The main example they gave was the popover notifications you can have in a native macOS app.

Those popovers can float partly "outside" of their main window. This is not possible in an Electron app, as an app must live inside a window and cannot draw outside of it. To solve that, they created two windows. The parent one is the main app, and it has a child window, invisible by default. When it's time to display the popover, they position the child window, make it visible, and style it to look like a popover.

The issue with that approach is that the second window then takes the focus, which marks the first window as inactive, and all the OS-level styling of an inactive window kicks in. They had to remove the OS-level handling of the window and redo it all themselves so they could adjust it as needed. In the same vein, they had to deal with clicks on the child window that should be forwarded to the underlying window.
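
I don't have their code, but the trick could look something like this; the BrowserWindow options are real, the rest is a sketch:

```javascript
const { BrowserWindow } = require('electron');

const main = new BrowserWindow({ width: 900, height: 600 });

// A frameless, transparent child window acting as a fake popover
const popover = new BrowserWindow({
  parent: main,
  frame: false,
  transparent: true,
  resizable: false,
  show: false,
});

// Clicks on the transparent parts should reach what's underneath;
// in practice you toggle this as the cursor enters and leaves the
// popover's visible area
popover.setIgnoreMouseEvents(true, { forward: true });

function showPopoverAt(x, y) {
  // Position relative to the parent so the popover can
  // overflow the parent's edges
  const [mainX, mainY] = main.getPosition();
  popover.setPosition(mainX + x, mainY + y);
  popover.show();
}
```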

They did a good job reimplementing the native behavior and handled many edge cases of the popover positioning. It took them about 4-5 days, which is a lot less than learning to code natively would have taken, so I'd say it was worth it... until the next macOS update breaks everything.

Cross-platform apps in Electron

Then Maxence Haltel, from Aircall, gave a presentation about building cross-platform apps in Electron. The main tool to use is electron-builder, which helps you package your build for each platform.

Any platform can be built from any platform (using Wine or Mono), as long as you are not using any native C/C++ APIs. Also note that even if the code you write is supposed to work the same on every platform, you still sometimes have to handle the specificities of each (in which case, process.platform is your friend). Apparently there are also some Electron-specific differences between platforms, so the best way to debug is to have three machines, one for each platform.
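
The typical process.platform pattern, with illustrative file names:

```javascript
// Pick a platform-specific tray icon (file names are illustrative)
const icon =
  process.platform === 'darwin' ? 'trayTemplate.png'
  : process.platform === 'win32' ? 'tray.ico'
  : 'tray.png';
```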

All the build info for electron-builder is taken from your package.json. You can define, for each platform, the specific settings you want to pass to your build. Also note that by default, only the files you explicitly require will be included. If you need any other files, you'll have to add them manually in the config.
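
As an illustration, a minimal electron-builder configuration in package.json could look like this (app name and file globs are made up):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "build": {
    "appId": "com.example.myapp",
    "files": ["dist/**/*", "package.json"],
    "mac": { "category": "public.app-category.developer-tools" },
    "win": { "target": "nsis" },
    "linux": { "target": ["AppImage", "deb"] }
  }
}
```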

To release your app, you can either use Nuts, as Baptiste described in the first talk, or electron-release-server, which you'll have to host yourself but which gives you much more power over auth and release channels (beta, dev, prod, etc.).

Audio in Electron

The last talk was by Matthieu Allegre, about the sound APIs. Sound in Electron is nothing more than what you can do with sound in Chrome. Since the Chrome version is fixed in an Electron app, you know what will be available and what will not. Furthermore, you can pass specific flags to Chrome to enable some features.

The demo listed all the input and output devices of the computer, fired events when a new one was added, and then captured the input stream of one to send it to the output stream of another.
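
In an Electron renderer this boils down to standard Chrome APIs; a rough sketch:

```javascript
// Runs in the renderer: list devices and pipe one input to one output
async function routeFirstInputToFirstOutput() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const input = devices.find((d) => d.kind === 'audioinput');
  const output = devices.find((d) => d.kind === 'audiooutput');

  // Capture the chosen input device...
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: input.deviceId },
  });

  // ...and play it on the chosen output device
  const audio = new Audio();
  audio.srcObject = stream;
  await audio.setSinkId(output.deviceId);
  await audio.play();
}

// Fires whenever a device is plugged in or removed
navigator.mediaDevices.addEventListener('devicechange', () => {
  console.log('Audio devices changed');
});
```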

Aircall uses this (sending the stream through WebRTC) to make calls between two peers, which is pretty clever. Having control over the environment and being able to build sound-oriented apps in Electron is interesting, and I'm sure it opens the road to compelling applications. I'll have to give it a try.

Conclusion

I'm glad I came. I didn't know there was such a big Electron community in Paris (I'd say we were around 60), and having that many interesting talks was worth it.

Paris API March 2017

Yesterday I was at the Paris API meetup at "La Maison du CrowdSourcing", where the KissKissBankBank office is located. The Paris API meetup is organized by Mailjet, and Grégory Betton, one of their developer advocates, was the host.

The meetup historically has two talks per session, keeping sessions short enough that people can still get back to their families without sacrificing networking time.

This time, though, the session was exceptionally short, as both speakers finished their talks earlier than anticipated. The content was still interesting, and we had plenty of time to discuss afterwards, so that's not a bad thing.

API First to the rescue of my startup

The first talk was by Alexandre Estela, from Actility. He explained how to design APIs and how to avoid the common pitfalls. His point was that, when working in a startup, we are often time-constrained. We have tasks to do, urgently, and not enough people to do them. So when the time comes to build an API, we tend to rush it, and we end up with something half broken, hard to maintain, and barely usable. We also tend to rush into the development phase to get something into production, and do not spend enough time thinking about the design.

He gave a list of tools that help you focus on the design of your API and its specification, and that build all the plumbing around it for you. His whole talk was focused on Swagger and the tools of its ecosystem. Following his approach, you always start with the spec of your API, spending your time thinking about the design.
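
To give an idea, a design-first Swagger spec is just a YAML file describing endpoints before any code exists (this example is made up):

```yaml
swagger: "2.0"
info:
  title: Hypothetical Pets API
  version: "1.0"
paths:
  /pets/{id}:
    get:
      summary: Fetch one pet by its id
      parameters:
        - name: id
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: The requested pet
          schema:
            $ref: "#/definitions/Pet"
definitions:
  Pet:
    type: object
    properties:
      id:
        type: integer
      name:
        type: string
```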

Then you use swagger-inflector on top of it. It parses your spec, builds all the plumbing, and creates the required endpoints for you. You follow the specification, and the tool takes care of the rest. It even creates mocks, letting you test your API right away.

No code is finished until it is documented, so you also run swagger-codegen-slate to generate the documentation, following the popular Slate framework (used by Stripe, among others), to expose how your API is supposed to work.

swagger-codegen-bbt lets you do black-box testing. It reuses the examples you defined in your spec and tests variations of them to generate real-life test scenarios.

And to finish, the most well-known: swagger-ui, which generates a full HTML playground exposing your endpoints and letting people play with them. Having interactive demos of an API is, for me, the most important way to discover what it does. When confronted with a new API, most users read the short description, then try to play with an example, and only after that do they read your documentation. So having a live playground where they can make requests is key for the adoption of your API.

His approach was sound: you start with the spec, and then you let the tooling generate the rest around it. The backend code will most often be generated in Java, because that's where Swagger comes from, but I think you can also have it generated in Node or Go (although I'm not sure all the plugins will be compatible).

In the end, it will save a lot of time in the long run, but you'll have the upfront cost of bootstrapping all the tooling, which might not be worth it if you only plan to build one quick-and-dirty API. Having everything automated and being able to build tests, mocks, documentation, and demos is invaluable, but you still need to spend time writing the specs and examples for everything else to work.

Short talk, but to the point.

PhantomBuster

The second talk was about PhantomBuster, by Antoine Gunzburger. PhantomBuster is a crawling API on top of PhantomJS. Its purpose is similar to what Kimono Labs offered. Not all websites have an API, and when you, as a developer, want to get content from them, you have to resort to crawling.

Kimono Labs offered a GUI where you clicked on the elements of the page you were interested in, and it created an API endpoint exposing the selected data in JSON format. It was a way to turn any website into a JSON API for easy consumption.

I'm speaking in the past tense because Kimono Labs shut down at the end of February.

PhantomBuster does something similar, except that instead of providing a GUI for you to click on the elements you need, it lets you write custom JavaScript code to crawl websites and extract content. It comes packaged with many features (like screenshots or captcha solving), but still requires you to write some code.
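
I haven't used PhantomBuster's own helpers, but as a rough idea, this is the kind of plain PhantomJS script the service wraps (URL and selectors are made up):

```javascript
// Plain PhantomJS: load a page and extract structured data from the DOM
var page = require('webpage').create();

page.open('https://example.com/products', function (status) {
  if (status !== 'success') {
    console.error('Failed to load the page');
    phantom.exit(1);
  }

  // Runs inside the page, like code typed in the browser console
  var products = page.evaluate(function () {
    var nodes = document.querySelectorAll('.product');
    return Array.prototype.map.call(nodes, function (el) {
      return {
        name: el.querySelector('.name').textContent,
        price: el.querySelector('.price').textContent,
      };
    });
  });

  console.log(JSON.stringify(products));
  phantom.exit();
});
```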

In the end, I'm not sure I will use the project, as I already have crawling scripts ready and use them often, but I can see how useful it could be for prototyping an API for a proof of concept.

Conclusion

It was my first time at the Paris API meetup, and I liked its mood; I will surely suggest a talk for a future session. Thanks to both speakers for the interesting content.

WriteTheDocs Europe 2016

In mid-September I was in Prague for the WriteTheDocs conference. I went there with four of my colleagues to learn how to improve our documentation. We discovered much more than we initially expected.

Stickers

First impression

From the first talk I realised that I actually did not know much about the community I had joined. I expected it to be composed of developers who also write documentation and enjoy it. But when the first speaker introduced himself as an engineer, and that was apparently worth specifying, I knew I was in for some surprises.

I then discovered that there is such a job as "Technical Writer". After two days of conference I'm still not sure what it means, to be honest. From what I gather, they are people with writing skills who know how to convey information in a clear and concise way. They can translate complex concepts into simpler words so others can understand them with minimal effort. A technical background is not mandatory, but asking questions is paramount. They have to deeply understand a subject to be able to synthesize it.

Documentation is code

Throughout the conference, I saw talks explaining how important documentation is and why it should not be added as an afterthought. Speakers raised issues with the way documentation is done and suggested ways to fix them.

At their core, the issues people had with documentation were the same issues we have with code (quality, bloat, complexity, etc.). The suggested solutions were also the same ones we apply to code (user testing, automated testing, linters, short feedback loops, etc.).

Language as code

Good writing is how you run words through someone else's brain to spark ideas. It's no different from code you execute. If you write bad code, your code will do bad things. This is exactly as true for documentation.

Documentation is as important as code, because it is like code. Language is brain code. Every word will journey through the reader's mind. You must be careful to send only important information, as fast as possible, and avoid overflow.

Syntax is paramount, and ambiguity must be avoided, as it slows processing down. Readers shouldn't have to read a whole sentence before getting its meaning; they should be able to process it as it comes. It's the difference between loading a big file into RAM and reading it line by line.

Docs or it didn't happen

Documentation is as important as code when it comes to features. If undocumented, any feature is outdated as soon as it's shipped. If you're in a Scrum environment, it means documentation should be part of the Definition of Done of any feature.

As a developer, I will always add tests for any new feature. This is how I can prove that the feature is working. Writing documentation is proving that the feature actually exists.

Docs or it didn't happen

If you see a GitHub repository with an empty readme, you'll assume the project is unfinished. If you see a project without documentation, you'll assume it's not usable.

This is even more true if you're documenting an API. User testing with eye-tracking showed it: when confronted with a new API, everybody searches for the documentation first. Then they look for live examples and code samples.

And like tests, the logical next step is the documentation equivalent of TDD: Documentation Driven Development. Start by writing the documentation, and then write the feature. Documenting the user-facing API before writing any code will let the best API emerge by itself.

Write drunk, edit sober

As developers, we spend more time fixing bugs and adding features than coding the initial skeleton. The same happens when writing documentation. Great documentation requires hundreds of tweaks and rewrites, and no one ever gets it right on the first try.

Writing and editing require vastly different states of mind. Write a first draft to dump your ideas. Don't bother with typos or grammatical errors; write down all you want to say, to get a rough word count. Then let it rest, a couple of hours or even days, before editing it.

Keep It Simple, Stupid

Define a shared styleguide, with the voice and tone you want to keep consistent throughout your documentation. Your readers should not feel like they are reading a different author on each page.

Writing documentation is easy; anybody can do it. What is hard is writing something that will be understood and remembered by the reader. The key is brevity and simplicity. Remove words and sentences until you think there is nothing left to remove. Then remove some more. And remember what someone famous once said: if I had more time, I would have written a shorter letter.

People will come to your pages from search engines. They won't read from top to bottom but will jump to any part of the page. They will scan the content, so help them identify what each section is about. Each of your paragraphs should explain exactly one idea, and explain it clearly (in perfect UNIX style).

Tips'n'tricks

A good story ends with a satisfying finish, not in the middle of a cliffhanger. At the end of any page, list what has been learned, show what can be built with this knowledge or add links to the next steps.

Tools can make your life easier. They can even be plugged into a Continuous Integration service. Don't waste time doing what a computer can do better and faster than you. Focus on where you bring value.

Support

Spend time with your users. Immerse yourself into the support team and see the real issues your users are facing. Schedule regular user-testing sessions. They are an invaluable way to know the real issues that need documenting.

Add code samples because that's the first thing developers read. Add video tutorials for beginners and interactive jsfiddles for experienced users. Don't hesitate to add pictures to explain complex concepts.

All good writers are avid readers, so read. It will give you more words to enrich your vocabulary, and thus more ways to express nuance. This is even more true if you're not a native English speaker. Translating books into other languages is also a great way to improve your writing skills.

Conclusion

Even if not exactly what I was expecting, the event was a success. We all learned a lot, met interesting people, and even had the chance to pitch DocSearch. I think I will come again next year.

We might even suggest a talk, because I feel that the way we write documentation at Algolia is on the right track, even if a bit unusual. We write the documentation for the features we develop, and we also do the support for them. This puts us in a virtuous circle of feedback, bug fixing, and documentation improvement.

We like doing support, but we'd rather spend our time adding new features. So improving the documentation and fixing bugs is our way to ensure we spend less time on support, and a good motivation.

Thanks to all the organizers, speakers, and attendees, and I hope to see you next year!

You can find all the videos on YouTube

HumanTalks September 2016

For the September session of the HumanTalks, Sfeir agreed to host us. Everybody was coming back from vacation (including us organizers), but we managed to find 4 speakers and a venue in a short amount of time. Tasty bagels and a nice terrace under the warm September sky are a perfect way to start a new year.

FedEx days

The first talk was about an experiment done at NanoCloud, something they call FedEx days. They did not invent the principle, and you should be able to find articles on the subject online.

The goal of FedEx days is to bring coworkers together in a timeboxed session to work on a specific subject of their choosing. They marked a day in their calendar, and one week before, they asked everybody to suggest ideas for projects that could be built. Projects need to bring some value to the company and need to be something you can actually build (no wild dreams here). Then everybody picks a suggestion and forms a team of about 4 people to work on it. Having no more than 4 people in a team creates a climate of healthy competition.

In their iteration, most people chose a group based on their personal affinities with others, so there was little diversity in the teams. It's something they would like to change for the next event.

What was nice, though, was bringing together in the same team people who had been in the company for a long time and recent hires. It helped the new hires get to know their coworkers and the product better. They plan to make it an "official" part of the onboarding process for future hires.

I'm personally unsure this is something I'd like to do on a regular basis. Or, let me rephrase: I think small teams of friendly coworkers working on small features that bring value to the company should be what every day looks like. But immersing new hires in such a team when they join, so they can get to know their coworkers, gain a deeper understanding of the product, and bring something to the company, seems like a good idea.

ES Next Coverage

The second talk was way more technical, about a code coverage tool. Oleg, from Sfeir, is the developer of esnext-coverage. We saw how testing code is important, but also how coverage of the tested code is an indicator of test quality.

Not everybody does testing, and even fewer people do coverage. Configuring the whole test/coverage stack is time-consuming and often discouraging. Oleg tried to make it as simple as possible with esnext-coverage, so developers couldn't hide behind excuses.

The tool allows exporting the results as JSON or HTML, as well as through any custom formatter. This is an improvement over Istanbul, which apparently has a limited list of built-in formatters. The default HTML formatter creates a single-page application that you can use to browse through the code.

We then saw how all this works in practice, including live-coding of the instrumentation phase, where the initial code is transformed into other code that "counts" each time a line is executed. The whole demo was done in AST Explorer, an online AST (Abstract Syntax Tree) playground.

Basically, every call to a method and every variable assignment is replaced with a call to a method that increments the count associated with the current line.
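
To make that concrete, here is a hand-written before/after; the real output of esnext-coverage is more sophisticated, but the idea is the same:

```javascript
// Original code:
//   function add(a, b) {
//     return a + b;
//   }

// Instrumented version: every statement first bumps a counter
// keyed by its line in the source
const __coverage = {};
function __count(line) {
  __coverage[line] = (__coverage[line] || 0) + 1;
}

function add(a, b) {
  __count(2); // the `return` on line 2 is about to run
  return a + b;
}

add(1, 2);
console.log(__coverage); // => { '2': 1 }
```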

Interesting talk; I always like to see people write code that writes code. I like the meta aspect of it. Also, Oleg is a great speaker, which made the underlying concepts easier to understand.

Taking the hard out of Hardware

Alex Bucknall, from Sigfox, tried to show us how hardware isn't actually that hard, and why it's closer to software than we might think.

It looks complicated because we cannot see it, but neither can we see software, and the concepts of hardware and OOP are often the same: classes, objects, inheritance, all of that can be found in hardware as well.

Alex explained how the basics of hardware are actually simple, and how more complicated pieces are built out of simpler blocks and then packaged under a new name. And even more complex blocks are built out of those previous ones, and so on.

Everything looked so easy when Alex explained it, but it still felt too fuzzy to me. There are concepts and words I don't get (frequency, voltage, etc.), so I had trouble following.

If anything, it nicely introduced what Sigfox is doing, namely hardware blocks that can communicate with each other through familiar HTTP/API/callback mechanisms.

What is a stand-up meeting?

The last talk of the day was about the stand-up meeting ritual, dear to Agile methodologies, presented by Thibaut Cheymol.

A stand-up meeting must answer three specific questions: What did I do yesterday? What will I do today? What issues did I face? All three questions must be answered without losing sight of the current sprint goal.

Done right, the stand-up meeting boosts the motivation of the team, giving everyone a clear picture of what they are going to do today and, more importantly, why.

But we all know that the stand-up meeting can turn into a reporting game where everybody waits for their turn to speak and report in a military fashion what they did the day before.

The advice given by Thibaut against that was to:

  • Actually stand up. No chairs, no leaning against the wall, no computer.
  • 15 minutes tops. More than that and it gets boring.
  • Start on time. Be concise. No discussion, no interruption.
  • Be prepared (~5 minutes the day before).

Other, more general advice was to always talk about what was done, not what is in progress, and to talk about the end goal ("I made sure that people could buy the product") instead of the actions taken to achieve it ("I implemented the API responses").

Not everything goes as planned; we often have issues that need to be dealt with in the moment, pushing back what we had planned to do. In that case, we need to fix issues when they arise, but talk about them at the next stand-up meeting. To avoid having discussions and/or people talking for too long, Thibaut suggested the following three-round approach:

  • Everybody, in turn, tells what the issues were
  • Everybody, in turn, tells what actions they took to fix the issues
  • Everybody, in turn, shares the results of the actions taken the day before

I haven't done a single stand-up meeting in the past year and a half, and I don't miss it at all. I did a lot of them in my previous job, but never perceived them as a valuable tool (maybe because we fell into the classic traps). I'll keep this advice in mind if I ever have to do another stand-up meeting.

Conclusion

Thanks again to Sfeir and all the speakers. Next month we'll be at Prestashop, with 4 new talks. Hope to see you there.