The Internet of Things is spying on you

The Internet of Things (IoT) is a fancy expression that has been around for decades to describe devices inter-connected through a network. It has been a fantasy for several years and is finally taking off. We will have connected electronics everywhere. Anywhere, anytime.

Examples are everywhere: the NEST home automation company has been acquired by Google, Fitbit recently went public and Google now has a full product line for wearables: Android Wear. For sure, the applications for the masses are limited for now (e.g. fitness trackers, watches) but companies are investing a lot to put technology everywhere (your shirt, your pants, anywhere in your home).

When looking at the product descriptions, this all sounds very appealing: keep track of your sleep, discover abnormal heartbeats, monitor your home through a connected camera.

But there are some downsides: by giving away our private data, we are opening the door to mass surveillance by many other parties. Your manager can track you down and know when you left your home. Your insurer can increase your premium based on your activity. You give away your privacy, handing over for free the data that matters only to you. This is not new: car insurers already propose to adapt your insurance policy according to how you drive.

Most of us already gave away our privacy, which is the basis of who we are. Many e-mail accounts are handled by online services (e.g. Gmail, Ymail, etc.) but we forget that we are paying for them with our privacy and are, in the end, the product. (If you are wondering how I manage my e-mail, the short answer is custom hosting and encryption.) Millions of people use social media to report where they go and what they like. If you are skeptical, look at the accounts of the big players (Google, Facebook) and try to guess how they can make so much money with a free product. The downside for us is that by putting everything online, we give away who we are. What is the benefit of meeting people when we already know everything about them?

I am not a naysayer, nor am I saying that “it was better before”. Progress is both exciting and dangerous and, as Uncle Ben used to say, “with great power comes great responsibility”. Technology should be progress and help us improve what we are, who we are. We have to use it carefully and efficiently. Social media is a great platform to organize meetings and keep in touch with folks we have not seen for a while, but it becomes intrusive and a waste of time when we report everything we do with it. Wearable technology follows the same rules: it can be a great way to improve our life but can also be very intrusive. As with every technology (even the most basic one – think about a knife), its impact will depend on how you use it. It can be a great benefit (cutting your food for the knife, tracking potential diseases for wearables) or a total disaster (killing people for the knife, tracking your movements for wearables).

One thing is for sure: the future is exciting and these technologies open new applications and new markets. I am very curious to see how people will use them and how they will grow and integrate with other devices (phone, car, etc.).


The best open-source alternatives to commercial and proprietary software – desktop edition

What is open-source or libre software?

Software is like food: to build it, you need a recipe and tools. Behind the magic that happens when you use your computer, there is a piece of code written in a specific language that is eventually transformed into the machine language your computer executes. When cooking, a recipe gives you the list of ingredients so that you can see and analyze whether the content is appropriate for you. If you want to skip a recipe or replace an ingredient (because of an allergy, for example) or use a better alternative (an organic ingredient, say), this is completely up to you. But to do that, you need something simple: the recipe. If you do not have it, there is no way to know what is inside.

Software is like cooking and the source code is the recipe. If you have the source code, you can rebuild the software or even improve it. You can study it, look at its defects and issues, fix bugs or improve the software. For sure, you need to understand the language, but that is the same issue as receiving a recipe in German when you only speak English.

In the software industry, we mainly distinguish two business models: open-source (also called libre or free – I will not go into the details) and commercial. Open-source software gives you access to the source code while commercial software keeps it secret. In other words, with open-source or libre software, you can analyze whether the software is good for you. With commercial software, you do not know what is inside.

How different is it from commercial software?

As a user, from a functional perspective, there is not much difference. It is the same as going to a restaurant: you are just a consumer – you eat what is on the menu, without knowing exactly how it is made or cooked – the magic happens in the kitchen! But sometimes you would be surprised how dirty the kitchen is, and you might be better off investigating what happens behind the scenes. Same thing with software: investigating what the software really does helps you understand what others do with your data.

As stated previously, you need to have the source code along with the ability to understand it. But exposing the source code to a large community of developers is already a major step forward: you can (at least) rely on a small expert community that will review part of the code (which is not possible with commercial software). Even if you are not a programmer and do not know any programming language, using open-source/libre software is of primary importance. In fact, there is a massive community of developers that reviews source code, fixes issues and improves such software on a regular basis. The main advantages of using open-source software are:

  • security
  • privacy
  • flexibility
  • stability

On the other hand, it can have some issues:

  • lack of support
  • sometimes geared toward experts only

In fact, using open-source or free software is necessary but not sufficient. It is a best-effort approach: it provides some protection and is (for sure) a better solution than commercial software. But it cannot prove or guarantee that it provides all the protections you might expect. A totally bulletproof system is not feasible; the best strategy is to protect yourself as much as possible.

Libre Software Alternatives

Web Browsing

Firefox is the open-source web browser you need. Many of its features are little known, such as Sync (to synchronize your preferences and bookmarks across several devices) or the anti-ad extensions. Firefox has done a fantastic job of rebooting the web and making it more open. Mozilla is also pretty good at innovating and introducing new features (such as WebGL).

The browser is available on almost all platforms (Windows, Linux, Mac OS, Android, iOS, etc.) so you can sync your profile between many devices while supporting a good organization that does its best to protect your privacy.

But … why not Chrome or IE?

Chrome is a product from a company that makes money by selling ads (Google). Do you seriously think their business is to make a product that protects your privacy? Internet Explorer's source code (like Chrome's) is not available, so neither of these products can guarantee it will protect your privacy. As Firefox is mostly as good as the other browsers in terms of performance, stick with the one that is cross-platform and protects your privacy.


E-mail

Thunderbird is Firefox's brother (ah ah ah), also made by Mozilla, for e-mail. It supports many features and can fetch e-mails from POP or IMAP servers. It is also privacy-savvy and can be used with encryption support. If you are looking for a good e-mail client, go for it!

But … why not Gmail?

Gmail is free and easy to use, so why not use it, right? Well, Gmail does not protect your privacy, whether that is to spy on you or to serve you new ads. No matter the reason, I do not want anybody reading my e-mails. Some argue that it does not matter because if you send an e-mail to somebody, that person probably has a Gmail account anyway, so Google can already process your data. To this argument, I would oppose the following reasons:

  1. This argument is like saying there is no point in becoming vegetarian because other people will keep killing animals and producing meat anyway. If you stop using Gmail and encourage others to do the same, spying activities become more difficult.
  2. You can use Gmail as a POP3 account and still use encryption. Sure, the service can still process the metadata (headers) but not the content, which is already a big step forward.

No matter what, keep your own shit, protect your data and your privacy, and avoid Gmail at all costs. Period.

What e-mail provider?

Having a good e-mail client is not sufficient; you also need to protect your data from being processed and analyzed by your e-mail provider. It is well known that traditional service providers analyze your messages, even if only to show you accurate ads. Regardless of the reason, they open your messages to analyze them. There are actually few e-mail providers that are privacy-savvy. While you pay traditional services by giving up your privacy, these must be paid with real money. For about $50 a year, you can have a good e-mail service that will also protect your privacy. Some names? Startmail, Runbox, etc. You can find a list of good services online. Yes, everything comes at a price.

Text Editing

Yes, people still edit text files. It might sound weird but, in fact, text files are probably the most efficient way to take notes easily. Using the Markdown format, they can be more than enough in most cases. Anyway, if you are running Windows, I would recommend Notepad++, a pretty efficient text editor released under the GPL. If you are running Linux, use vim (gasp), but if you are looking for something user-friendly, just use Kate or gedit. And finally, if you are running Mac OS, just change your OS.


Chat

Chat is a difficult choice because what matters is not only the software you are using but mostly the protocol. For example, you can use an open-source program to chat with your friends on MSN/gtalk, but it will still use Google's infrastructure to transport your messages. Yes, you are not running a proprietary piece of software on your machine, but you are still relying on a massive infrastructure that will analyze and process your data.

So, you can use whatever you want, but I would recommend not using any specific chat program and rather sticking to e-mail. If you are really looking to chat with your friends, I guess the most efficient way would be IRC. On the other hand, many folks do not want to use IRC and prefer some crappy web service. As Churchill said: “The best argument against democracy is a five-minute conversation with the average voter”.


Productivity

By “productivity” we mean software used to “produce” something. Using YouTube or Facebook is not being productive. One of the best options is LibreOffice (or its brother OpenOffice). Yes, it is not beautiful, but who cares? It works just fine and offers almost the same interface from one version to the next.

Sure, it does not have all the fancy extensions of Word. But who cares? For 99.999% of users, it does not matter at all. Besides, each version of the Microsoft Office tools has a different layout, so you end up totally lost from one version to the next. In addition, formats between versions are not fully compatible (the layout can differ), so you end up exporting to PDF anyway…

Sure, LibreOffice/OpenOffice might not be as fancy as Word. But it offers a simple interface that works. And that is all we are asking for when we want to be … productive!


Image Editing

The number one software for working on pictures is Photoshop. But honestly, who knows how to seriously use all of its features? It is really complicated and, in addition, really expensive! If you are looking for a cheap (free!) and open-source alternative, just use The Gimp. Simple, efficient – you cannot go wrong with it. It is sufficient for most of us (and may already have more features than you expect), is available for free on several platforms under an open-source license, and is pretty stable. No reason not to use it.

What about the other applications?

This list is just a start, but whenever you look for software, try to find an open-source alternative. Not something that is merely free of charge, but free as in freedom. Check the license (GPL, BSD, etc.) and make sure it is an open-source one. As of today, there are many open-source licenses and a lot of good open-source (or libre) software.

Also, you are probably using Windows or Mac OS, the two main proprietary/non-open-source operating systems on the market (this can be debated for Mac OS). One big step would be to move away from Windows and use a libre alternative (such as Ubuntu). That is more difficult and requires more effort – you will need to relearn the basics of using your computer.


Firefox is not dead, we need it more than ever

A recent post on Slashdot argued that Mozilla has succeeded in its goal with Firefox by supporting choice and innovation on the internet. Before Firefox, there was almost no diversity in the browser world and the only choice was Internet Explorer, which was, from a developer's point of view, a disaster. By bringing innovation to browser-land, Mozilla attracted users so that Internet Explorer became the outsider. Since then, Apple has improved Safari and Google has released its own browser, Chrome. So, one can legitimately wonder: is the war for the open web promoted earlier by Firefox over? Do we still need Firefox?

In fact, we need Firefox more than ever, because the war is not over. Firefox was initially the starting point of a major technical change that ended up promoting web standards. Thanks to the hard work of the Mozilla foundation and its community, Firefox changed the browser landscape and rebooted the browser war. But now, the war is no longer about support for standards or performance (the major browsers perform almost identically – at least for the end user – and some reviews even put Firefox first) but about the protection of user privacy.

From a technical perspective, the major browsers are built on an open-source engine (KHTML/WebKit for Safari, Chromium for Chrome), but the full source code is not disclosed. Only the rendering engine is public, not the complete software and especially not the parts that process your data (think about people who synchronize Chrome with their Google account). For that reason, it is almost impossible to check how each browser manages your data and whether some of it is sent to or used by an undisclosed third party. Recent news shows that many programs have built-in backdoors, as requested by various governmental or commercial authorities.

On the other hand, the full source code of Firefox is available, so technology experts can precisely analyze what the software is doing and prevent any data leakage. This assures the user that the software can be reviewed by security experts and that any potential defects or errors are fixed as soon as possible. Using such a browser on your computer or mobile phone is thus of primary importance.

The war to make the internet free and open is still going on. The first battle consisted of taking back the internet from a technical point of view: promoting standards and making the web inter-operable again across devices and operating systems. This one was a success for Firefox. The next battle is to take back the internet from a legal and freedom perspective: ensure that the tools we use preserve our privacy and fundamental rights, and that each of us can use the web and express our opinions. Considering the current political context, this is a big challenge; let's hope Firefox will be one technical solution to this issue.


You break my Heart (Heartbleed for Dummies)

A great catch

On April 7, 2014, a major security issue was fixed in OpenSSL. For the non-geeks, OpenSSL is a software library (code that anybody can reuse) released under a free-software license that handles secure communication (SSL/TLS). It is free software, as in free speech: the source code is available online and anybody can contribute and add their own corrections/fixes. So, you have the freedom to do whatever you want with it (according to the license terms). But OpenSSL is also free as in free beer: you get a great piece of software without paying anything. However, this nice present comes without any guarantee, and if you are reusing it, it is your duty to check that it satisfies your quality criteria. Who is using OpenSSL? More or less everybody: the software is used by web servers and web browsers and, as we mostly use web-based applications today, you are probably using it.

Can you explain this bug?

One of the best efforts to explain it is the xkcd webcomic. To make it simple: when your computer talks to a server, it sometimes keeps the connection alive and established so that you do not have to re-initialize a new connection every time you want to exchange information. For that purpose, your program (web browser, application, etc.) sends a request to make sure the server is there and asks it to echo back a specific message. The bug is that, when replying, the server sends back the requested message plus additional data from its own memory, whereas it should reply with the requested message only.
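The exchange above can be sketched in a few lines of Python. This is a toy model, not the actual OpenSSL code: the memory contents, names and lengths are invented for illustration.

```python
# Toy model of the Heartbleed over-read: the "server" trusts the length
# claimed by the client instead of checking the payload's real length.

SERVER_MEMORY = b"HAT\x00secret_password=hunter2\x00..."

def buggy_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # BUG: replies with `claimed_len` bytes starting at the payload,
    # spilling adjacent server memory whenever claimed_len > len(payload).
    memory = payload + SERVER_MEMORY
    return memory[:claimed_len]

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # FIX: silently discard requests whose claimed length exceeds
    # the real payload length.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

# An honest request simply gets its message echoed back...
assert buggy_heartbeat(b"bird", 4) == b"bird"
# ...but a lying client receives whatever sits next to it in memory.
assert b"secret_password" in buggy_heartbeat(b"bird", 40)
assert fixed_heartbeat(b"bird", 40) == b""
```

The real bug involved a `memcpy` in C rather than a slice, but the logic error is the same: the reply length came from the attacker, not from the actual data.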

Problem is: this additional piece of information is random and might contain useful and/or critical data. In fact, it can be any data the server has access to (passwords, web content, etc.). Some argued that only non-critical data had been exposed, but a challenge showed that even private keys (the ones you are never supposed to exchange when using cryptographic mechanisms) were affected. For example, there is evidence that by exploiting the bug on the Yahoo mail service, attackers could get other users' passwords.


“Cleaning OpenSSL bugs might take some time” – picture taken from the Martino Sabia gallery

Who introduced this bug? (so that we can bury his body, say he was a communist, go to his house and steal his groceries)

As OpenSSL is free software with contributors all over the world, anybody can modify the code. Thanks to appropriate tools, we can track who modified what part of the code and thus know who introduced the bug. It turns out that the code related to the bug was introduced by a German developer (Robin Seggelmann – see the related git commit). Unfortunately for conspiracy theorists, this nationality does not fit the narrative of spying agencies introducing the bug on purpose. And, according to the person who introduced the bug, it was a mistake made while working on bug fixes and new features. In addition, the change was reviewed by somebody else who also missed the flaw.

Of course, since the public declaration, there have been plenty of rumors about who really introduced the code, whether the original developer was paid to do it, who already exploited the vulnerability, etc. Considering my experience in software engineering and the code reviews I have done so far, such a bug is pretty common in many programs and usually not spotted during reviews. This is why code reviews are necessary but not sufficient, and you still need other methods (static analysis, runtime checking, etc.) for safe and secure coding.

But enough debate; instead of taking part in this discussion, let's stick to the facts.


Why was the bug not fixed earlier?

The bug was introduced in December 2011 and eventually fixed in April 2014. It was there for more than two years. Within this time frame, anybody who knew about the issue may have exploited it to steal data from service providers using the defective version of OpenSSL.

Finding such a bug requires reviewing the code, either manually (a coder reviews the code) or using automated analysis tools or testing. In any case, it requires some effort, which comes at some cost. Problem is: OpenSSL is free software (as in free speech) and contributors might introduce new code that contains security flaws. Which is just normal: by definition, humans make mistakes and, when producing code, they sometimes introduce errors (think about the GnuTLS bug). Of course, when coding errors might have significant impacts, there are reviews. But this time, the review was done manually and the reviewer did not catch the bug. Which (again) is normal: most of the time, when reviewing code, there is no bug, and reviewers are not used to making a deep investigation. Think about a new security clerk who controls people going in and out of a building: he will be very careful during his first days but, after a while, will start to know who is supposed to go in and out and will sometimes make an exception and let you in even without your badge. This is human: you get used to a routine and become less careful. This is probably why car accidents are more likely to occur on roads you are used to taking (for example, when commuting to work).

But let's come back to software analysis: besides manual code review, other techniques can be used to detect such issues: testing, static analysis, etc. It does not seem that OpenSSL had a test procedure that could catch such a bug. And users did not seem to put much effort into testing this piece of software, despite its criticality and importance. Which is (again) human: why would you test something that has been used by many other people for several years and is free? You just assume that other folks will detect any defect and take the free (as in free beer) software as is!

This was probably the biggest mistake here: users of OpenSSL did not understand that free software is free as in free speech, not free beer. In other words, you can take the code, use it and contribute, but you might have additional work/cost if you want to make sure this software is safe and compliant with your own quality standards. Fortunately, some users (such as Google) have engineers who investigate such code (and eventually discover and fix issues), but the late discovery shows that the effort is not sufficient.

Now, the interesting thing is that, as this piece of software is particularly critical, other people pay this cost – not to fix it but to exploit it. They might do the tests and pay for the technology to find the bug, then exploit it until its public discovery. In other words, others might be willing to pay the cost of testing in order to retrieve private data. In this particular case, the Return On Investment (ROI) of detecting and exploiting the bug is definitely worth it: institutions can steal data at (almost) no cost from users all over the world. It does not require high processing capacity (unlike brute-forcing an encryption key) or high bandwidth (as for a Denial of Service attack). You can set up a bunch of Raspberry Pis ($50 each) and try to steal data on a 24/7 schedule.

Also, this does not mean that open-source or free software implies low quality: a recent study by Coverity shows that open-source software has better quality than proprietary products. On the other hand, because the code is publicly available, it is easier to find issues, while proprietary software is more difficult to analyze.

The important questions now are: how many critical bugs like this one are still unfixed, how safe is proprietary software given that analysis is more difficult, and what are the best mitigation techniques?


OpenSSL Code Review – picture under Creative Commons by Sumit Sati


Can the NSA find any pictures of my cat naked using this bug?

Since the Snowden revelations, everybody is nervous about their privacy. We shifted from a behavior where everybody shares everything everywhere to a mode where we are suspicious of everything. Conspiracy theories about potential uses of the bug have spread over the internet: did spying agencies know what was going on? If yes, did they exploit the bug or not?

Some sources report that the NSA was aware of the bug and had been using it for a long time (about two years). The official Twitter feed of the NSA claims the agency was not aware of it. But after all, does it really matter? If you are using online services such as Gmail/Facebook/Twitter, they probably have more than one way to get access to your data.

On the other hand, other spying agencies and/or companies may have used the bug to access private data, including private encryption keys. As usual, rumors have spread about exploitation of the bug before the release of the fix, but no serious evidence has surfaced so far.


Which companies were affected?

It is important to know who is really impacted by the bug. What matters is whether your service provider was affected: the server is what discloses private information, so even if you never used the bug yourself, others might have exploited it to retrieve the information you stored with various service providers.

Knowing exactly who was impacted depends on the version of OpenSSL used by your service provider. Some services were not impacted at all, while others might have leaked part of your data without knowing it.
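A quick first approximation is to reason by version string: the publicly documented vulnerable releases are OpenSSL 1.0.1 through 1.0.1f, with the fix shipped in 1.0.1g (the older 0.9.8 and 1.0.0 branches never contained the heartbeat code). A minimal Python check, keeping in mind that distributions sometimes backport fixes without bumping the version number:

```python
# OpenSSL releases affected by Heartbleed (CVE-2014-0160).
VULNERABLE = {"1.0.1" + suffix for suffix in ["", "a", "b", "c", "d", "e", "f"]}

def is_heartbleed_vulnerable(openssl_version: str) -> bool:
    """Return True if this OpenSSL release shipped the defective heartbeat."""
    return openssl_version in VULNERABLE

assert is_heartbleed_vulnerable("1.0.1e")      # widely deployed, affected
assert not is_heartbleed_vulnerable("1.0.1g")  # first fixed release
assert not is_heartbleed_vulnerable("0.9.8y")  # old branch, no heartbeat code
```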

For the impacted services, a timeline has been established by the Sydney Morning Herald. It shows the relation between the bug's discovery, how it was disclosed and how it was eventually fixed in popular operating systems. It turns out that only a few of them were informed quickly, which can be understood: the more people know about the bug, the more potential attacks you can have. Since almost 70% of web servers use OpenSSL, many sites were affected when the bug went public.

How to avoid such a situation in the future?

As pointed out earlier in this post, this error is likely due to the manual development process: the developer made a mistake (but who never makes one once in a while?) which was not caught by the reviewer (and again, who has never missed something when checking?). But all this development effort was made manually by two people, whereas such issues can be found by other techniques:

  1. Using automated analysis tools. As automated analysis tools are computer programs, they (by definition) do not make human mistakes. Also, these programs can be executed on a daily basis as the code evolves, to discover regressions while improving the code. This could be used to detect new issues in code freshly added by developers. Checks such as code coverage, coding-guideline verification, etc. can be automated. The problem? It requires paying the cost: maintaining an infrastructure to execute the tests, having a team to make sure issues are resolved, etc.
  2. Increasing the work force. Have more people work on the project and review the code. In this case, the code was reviewed by one person; one solution would have been to get more reviewers.
  3. Making independent reviews. Independent code reviews can definitely address this type of issue. As this kind of review is also partly manual, it may not spot all issues, but it is definitely useful and could be done (for example) at each major milestone/release.
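To illustrate point 1, even a very crude automated check can flag the pattern behind this bug. The sketch below is a toy Python “linter” (not a real static-analysis tool) that flags `memcpy` calls whose length variable is never compared against anything earlier in the snippet; the two C fragments loosely mimic the vulnerable and patched code and are simplified for illustration.

```python
import re

def find_unchecked_memcpy(source: str) -> list:
    """Flag memcpy() calls whose length argument is a variable that is
    never involved in a comparison earlier in the snippet."""
    findings = []
    for match in re.finditer(r"memcpy\s*\([^,]+,[^,]+,\s*(\w+)\s*\)", source):
        length_var = match.group(1)
        before = source[:match.start()]
        # Crude heuristic: any comparison on the variable counts as a check.
        if not re.search(rf"\b{length_var}\s*(<=|>=|==|<|>)", before):
            findings.append(length_var)
    return findings

vulnerable = "memcpy(bp, pl, payload);"
patched = "if (payload > record_len) return 0;\nmemcpy(bp, pl, payload);"
assert find_unchecked_memcpy(vulnerable) == ["payload"]
assert find_unchecked_memcpy(patched) == []
```

A real tool reasons about data flow rather than text patterns, but the principle is the same: a machine applies the rule to every commit, tirelessly, where a human reviewer eventually gets sloppy.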

This list is not complete, but these are the usual techniques for finding such issues. Commercial projects use this type of review, so why not free software as well? Someone has to pay the cost for it. And, as most OpenSSL users are also competitors, are they willing to pay for a review that can benefit their competitors? As far as I know, there is no such initiative, and reviews/investigations are uncoordinated and made by each company separately (I might be totally wrong because I am not involved with the biggest users of the software and have no evidence of coordination initiatives), but it seems that a joint initiative would be useful and each one could reap the benefits of it.


Are there other potential bugs like this?

From a statistical point of view, all the software you are currently using to read this article potentially contains a bunch of bugs. Think about what your machine is currently running:

  • an Operating System – the kernel (not the graphical part) is made of several million lines of code. In 2011, Linux was made of more than 15M lines of code (and remember that Linux is the kernel of Android phones). This is just the kernel; we are not even including the graphical part of your system.
  • a web browser – almost 4M lines of code as well (at least for Firefox, probably one of the best browsers)
  • a compiler used to convert source code into executable binaries – GCC (one of the most popular compilers – the one used to compile the Linux kernel, for example) was made of more than 7M lines of code in 2012.

According to different sources, the number of bugs per line of code varies with various factors (such as the language, the experience of the developers, the coding rules, etc.). Even if we consider the lowest estimate of 1 bug per 1,000 lines of code (realistic estimates are more like 10 to 20 bugs per 1,000 lines), it is obvious that the software you are currently executing has some flaws and defects. On top of that, add potential developers who may introduce defects on purpose and you get a good idea of the level of trust you can put in your computer. There is a reason why NIST estimated in 2002 that software errors cost approximately $60B per year: from a statistical perspective, it is obvious that your computer has bugs. The question is: what is their severity and how can they be exploited?
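As a back-of-the-envelope calculation using the line counts listed above (the per-KLOC defect rates are the assumptions from the text, not measurements):

```python
# Rough estimate of latent bugs in a typical desktop software stack.
LINES_OF_CODE = {
    "linux_kernel": 15_000_000,  # circa 2011, kernel only
    "firefox":       4_000_000,
    "gcc":           7_000_000,  # circa 2012
}

def estimated_bugs(loc: int, bugs_per_kloc: float = 1.0) -> int:
    """Latent-bug estimate at a given defect density (bugs per 1,000 lines)."""
    return round(loc / 1000 * bugs_per_kloc)

optimistic = sum(estimated_bugs(loc) for loc in LINES_OF_CODE.values())
realistic = sum(estimated_bugs(loc, 10) for loc in LINES_OF_CODE.values())
# Even the optimistic 1 bug/KLOC rate yields 26,000 latent bugs;
# at 10 bugs/KLOC the same stack carries 260,000.
assert optimistic == 26_000
assert realistic == 260_000
```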


How can I stay safe?

First of all, you can test your service provider's servers. Second, the best protection is common sense: just keep private data … private. Do not put online data you do not want to disclose. Online services are not safe unless you control the underlying software that hosts them – and even that is a necessary but not sufficient condition (see below).

It sounds ridiculous and old-school, but it is just common sense: if you do not want to risk disclosing data, do not share it! Keep your private information at home, back it up on a hard drive and do not send it to Google Drive, Dropbox or other online storage services. Convenience has a price and, as pointed out for a long time, you might pay it with your privacy.

So, what about people hosting their own online services (with their own server running a Linux distribution such as FreedomBox)? They are still vulnerable, despite trying to protect themselves by avoiding common online services. Because such a host is smaller, it is potentially a less interesting target. Of course, once the bug has been disclosed, many bots will automate the attack and try to get data from any host. A common guideline would be to avoid using the latest version of a piece of software and stick to established, well-known versions. But this might not be sufficient: even the current Debian stable (wheezy) was exposed. However, this rule might protect you from future exploits that are discovered soon after they are introduced.

The Take-Away

What the average Joe should do to protect himself from potential new security issues:

  1. Use common sense. DO NOT PUT ONLINE DATA YOU DO NOT WANT TO SHARE. Do not trust online services you do not control.
  2. Do not use the same service for everything. In case a service is hacked, interrupted or experiences issues, you only lose part of your data or access.
  3. Use free software as much as you can. Forget the bullshit trends and stick to this rule. The code of free software is available, so bugs can be discovered and fixed early. Proprietary applications are more difficult to analyze, finding bugs is more complicated and you have no clue whether anybody found them (or whether they were eventually fixed). Excited by the latest trendy browser that shows pictures of kitties while the page is loading? You have no clue what this piece of software contains and actually does! Just use Firefox, an established browser supported by a large community that supports standards.


