Once a year in Europe there is an event that everyone who knows anything about Linux dreams of attending. An event that gathers around itself the largest community that has ever existed on this planet: a community of enthusiasts, hackers, engineers, programmers, admins, corporate bosses - everyone who has a job or a hobby thanks to Linux and open source. We at STC Metrotek are used to sharing knowledge as well as soaking it up, so we could not miss it. Ladies and gentlemen, welcome to Dublin for the triple LinuxCon + CloudOpen + Embedded Linux Conference Europe 2015!

Just looking at the conference program made it clear that these would be a very busy three days. Every day, from 9 am to 6 pm, 12 (!) talks ran in parallel at the huge Convention Center Dublin. Naturally, I wanted to go to almost everything, but our time machine had not yet arrived from AliExpress, so I had to choose carefully where to go. Fortunately there were two of us (me and paulig), so we managed to cover more. And so we got to Dublin and off we went.
The first day
Thousands (literally) of Linux folks of all stripes gathered in the huge exhibition and conference center: from sugary startup hipsters running Docker containers with Node.js applications, and beautiful women standing guard over power management and documentation, to gray-haired Unix hackers who remember the times when Linux was developed via tarballs.
To answer the immediate, subconscious question - yes, Linus Torvalds is here. So are Greg Kroah-Hartman, Dirk Hohndel and a dozen other Linux kernel maintainers, as well as the folks from the Apache Foundation, Docker, Red Hat, IBM, GitHub, Google, Oracle, Intel and other great companies - all the people who make the open source software industry what it is.
On the first day the party was fairly subdued: many (including us) had not yet loosened up, some were held back by the language barrier (it is Europe, after all), and, well, it was only the first day.
Like any conference, it all starts with keynotes - introductory talks from various top managers. At LinuxCon that man is traditionally Jim Zemlin, head of the Linux Foundation.
From the stage of the Convention Center Dublin, he congratulated everyone on two significant dates: the 24th birthday of Linux and the 30th anniversary of the Free Software Foundation.
The Linux Foundation develops many projects besides Linux itself; the total estimated development value of the projects under its auspices is about $5 billion.
Jim also announced the creation of a new project, the Real-Time Linux Collaborative Project, which will take over development of the RT patch for Linux, as well as support for its key developer, Thomas Gleixner. A bit of context: Thomas had recently made a bit of a drama out of the fact that nobody was paying for his considerable efforts, and was threatening to abandon the work.
After that, Sean Gourley spoke on the theme of "Humans vs. machines". Using high-frequency trading as an example, he showed that in the modern world there are areas where people lose out to algorithms and information systems. Machines are faster than humans: while a person takes about 0.7 seconds to make a decision, a machine manages to carry out a whole series of trades in that time. And yet machines make mistakes, and those mistakes are expensive - the most famous example being the story of Knight Capital. Nevertheless, this is the world we live in, a world in which 61% of all Internet traffic is generated by something other than humans.
Next was a talk from IBM in which they advertised themselves. Nothing interesting, but they are the main sponsor, so it had to be endured.
And finally, the guys from the Dronecode project and ETH Zurich talked about drones - not military hardware, but the kind familiar from bloggers' videos. The point of the project is to create open tools for building such devices; right now these are Pixhawk (an open hardware platform) and ROS (the Robot Operating System).
Then it was off to the talks. The first was "Application-Driven Storage", where guys from CNEX Labs talked about the idea of so-called "Open Channel SSDs" and their use in applications. An open channel SSD is an SSD with the entire FTL stripped out and handed over to the developer: the application itself decides how data is laid out, how it is deleted, how garbage collection is done, how blocks are remapped, and so on. This removes unnecessary levels of abstraction and complexity that hinder more than they help. All of this matters when you are writing an application that works intensively with the disk, has strict requirements for it, and knows its own access patterns well - databases, for example, which have fought with operating systems for their entire existence and implement their own caches, I/O schedulers and the like.

The speaker also talked about RocksDB, a key-value database tuned for SSDs and forked from LevelDB, and about the new LightNVM kernel subsystem, which provides a hardware-independent API for pushing information and control up into userspace. LightNVM was to get a separate talk, so the focus here was on RocksDB. In truth, everything is still in its infancy: in terms of performance the new solution currently loses (!) several times over (!!) to the plain POSIX API on ordinary SSDs, due to the lack of a page cache and the use of only a single SSD channel. In the future they promise spectacular speedups, close to full utilization of the SSD; for now, it is what it is. The hardware itself (the open channel SSD) is currently implemented on an FPGA, with ASICs planned later.
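As a reminder of what the "ordinary" baseline in that comparison looks like, here is a minimal sketch of talking to RocksDB through its plain C API (rocksdb/c.h), going through the file system and page cache as usual. This is my own illustration, not code from the talk, and the SSD-specific option tuning is omitted:

```c
/* rocksdb_demo.c - build with something like: cc rocksdb_demo.c -lrocksdb */
#include <stdio.h>
#include <stdlib.h>
#include <rocksdb/c.h>

int main(void)
{
    char *err = NULL;

    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);

    /* Open (or create) a database directory on a normal file system. */
    rocksdb_t *db = rocksdb_open(opts, "/tmp/demo_db", &err);
    if (err) { fprintf(stderr, "open: %s\n", err); return 1; }

    /* One write and one read - everything below this API goes through
     * the kernel block layer and the SSD's own FTL. */
    rocksdb_writeoptions_t *wopts = rocksdb_writeoptions_create();
    rocksdb_put(db, wopts, "key", 3, "value", 5, &err);
    if (err) { fprintf(stderr, "put: %s\n", err); return 1; }

    size_t len = 0;
    rocksdb_readoptions_t *ropts = rocksdb_readoptions_create();
    char *val = rocksdb_get(db, ropts, "key", 3, &len, &err);
    if (val)
        printf("key -> %.*s\n", (int)len, val);

    free(val);
    rocksdb_readoptions_destroy(ropts);
    rocksdb_writeoptions_destroy(wopts);
    rocksdb_options_destroy(opts);
    rocksdb_close(db);
    return 0;
}
```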
Next was the talk "RTFM? Write a better FM!". The bottom line: stop being a jerk and get to know your audience. Fairly obvious stuff, because, as the speaker himself noted, the people who come to listen to such talks are usually already doing fine, while those who really need it never show up.
Then I went to see a speaker I knew in absentia through his code - Alan Tull from Altera, whose code I have studied and extended for the needs of our products. He talked about the FPGA manager framework, which I know well. Or rather, first about FPGAs and SoC FPGAs in general, then about how FPGAs used to be handled, the history of the framework and the state it is in now. There were few technical details about FPGA management itself, and more about DeviceTree overlays. We looked at the latest version and its API - I will have to check what has changed.
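For readers who have not touched the framework: the idea is that a kernel driver asks an abstract "FPGA manager" to program a bitstream, without caring which vendor's reconfiguration hardware sits underneath. Below is a rough sketch of what that looks like with the early in-kernel API as I remember it from around that time; exact function names and signatures have changed between kernel versions, so treat it purely as an illustration:

```c
/* Kernel-side sketch (not userspace): load a bitstream via fpga-mgr.
 * Names follow the early (circa 4.4) API and may differ in your kernel. */
#include <linux/err.h>
#include <linux/of.h>
#include <linux/fpga/fpga-mgr.h>

static int load_bitstream(struct device_node *mgr_node)
{
	struct fpga_manager *mgr;
	int ret;

	/* Find the FPGA manager described by this device tree node. */
	mgr = of_fpga_mgr_get(mgr_node);
	if (IS_ERR(mgr))
		return PTR_ERR(mgr);

	/* Ask it to program an image via the firmware loader;
	 * flags == 0 means full (not partial) reconfiguration. */
	ret = fpga_mgr_firmware_load(mgr, 0, "soc_system.rbf");

	fpga_mgr_put(mgr);
	return ret;
}
```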
Continuing the FPGA theme, there was also a talk with the colorful title "Using FPGAs for driver testing", from which I expected miraculous revelations. In reality it turned out that the U-Boot developers take simple Lattice FPGAs and build firmware for them that does fuzz testing of hardware buses and devices. For example, the FPGA is attached to an I2C or SPI bus, random and deliberately malformed commands are sent over it, and then they watch whether the bus driver falls over or not. They test the bus itself in a similar way, with the FPGA emulating a misbehaving device such as an SD card. To be honest, I expected more.
"Maximum performance and how to avoid its pitfalls" by Christoph Lameter, an active kernel developer, was a fairly general talk that systematized a lot of accumulated experience in performance tuning. In most cases software is already optimized for the common workloads or popular hardware platforms. But sometimes that is not what you need, and if you want to squeeze out the maximum, you have to sacrifice something: money, simplicity, maintainability and/or the performance of other parts. Typically two things get tuned - I/O and the network, as the slowest and most resource-hungry parts. And since modern storage systems essentially mimic the behavior of networks (command transfer, message encapsulation), one can say that "storage today is network communication". If you want to optimize your application, you have to descend toward the hardware: instead of your programming language's API, use buffered I/O from glibc, or go straight to the socket API, or rewrite everything on top of the RDMA API, or move to an FPGA or ASIC. When optimizing memory access you need to keep the cache hierarchy in mind and try to help it. To get the most out of the CPU, use vectorization (if the application allows it), or even offload processing to the GPU. In short, the main message was: if you want maximum performance, tailor your hardware and get rid of the extra layers and the operating system in between - which is exactly what the LightNVM and RocksDB people I mentioned above are doing. We will think about applying this to the B100.
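To make the "peel off the layers" point concrete, here is a small sketch (my own, not from the talk) of the same 4 KiB read done three ways: through stdio's buffered layer, through a raw read(), and with O_DIRECT, which bypasses the page cache entirely but in return demands aligned buffers and sizes - the application takes over the caching job itself:

```c
#define _GNU_SOURCE           /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define BLOCK 4096

int main(void)
{
    char buf[BLOCK];

    /* Layer 1: stdio - convenient, adds its own userspace buffer. */
    FILE *f = fopen("data.bin", "rb");
    if (f) {
        fread(buf, 1, BLOCK, f);
        fclose(f);
    }

    /* Layer 2: raw syscall - no stdio buffer, still uses the page cache. */
    int fd = open("data.bin", O_RDONLY);
    if (fd >= 0) {
        read(fd, buf, BLOCK);
        close(fd);
    }

    /* Layer 3: O_DIRECT - no page cache either; buffer and length must be
     * aligned, and caching becomes the application's problem. */
    void *abuf = NULL;
    if (posix_memalign(&abuf, BLOCK, BLOCK) == 0) {
        int dfd = open("data.bin", O_RDONLY | O_DIRECT);
        if (dfd >= 0) {
            read(dfd, abuf, BLOCK);
            close(dfd);
        }
        free(abuf);
    }
    return 0;
}
```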
The day closed with more keynotes, the most interesting of which was the Kernel Developer Panel, where a group of Linux kernel developers take questions and discuss various topics. Besides the standard "how did you end up living like this", the discussion mostly revolved around how to attract new people and how to help maintainers in their thankless work. The consensus: it is a lot of work, and people should be helped into the community, not least by taking some load off the maintainers.
And we went to look for maintainers in bars

Second day
The second day promised to be more interesting in terms of talks: I was going to hear about LLVM/Clang, Btrfs and snapshots, Buildroot, and profiling.
The morning keynotes of the second day were more interesting than the first. First, Leigh Honeywell from Slack Technologies spoke on "Securing an Open Future". Starting from the well-known Heartbleed, she talked about what can be done to prevent such situations. In short, developers should not forget about security and should do at least something about it: study the tools, the possible attack vectors and so on - that is, try to think like an attacker. Managers should build a healthy culture in which people do not feel blamed for mistakes, because blame leads to real problems being hidden. On top of that, it is worth reading up on good secure-development practices such as the Microsoft SDL.
Next was the Container Panel - an open discussion on the topic of containers. Key points:
- Are containers ready for production? Yes: the container primitives have been in the kernel for about 10 years and are in good shape; Docker simply showed how convenient they can be to use, and that is the way to go.
- Can containers replace the traditional way of distributing applications through rpm or deb packages? On the one hand, a container is a black box into which you can stuff anything, which is not very pleasant to rely on. On the other hand, when we install some complex application that pulls in a pile of dependencies, we do not inspect what is inside those packages either. It all comes down to trust and verification, and verification of container images is what is really needed right now.
The closing keynote talked about IoT - where would we be without it. In short, IoT will change the world, but first it has to solve some important problems:
- Security
- Lack of experts in embedded systems
- Interoperability between things
Then came the talks. I first went to listen to "Boosting Developer Productivity with Clang" by Tilmann Scheller from Samsung.
First of all, Clang is properly pronounced "klang", not whatever you have been calling it. For those already familiar with LLVM and Clang the talk was boring. Tilmann explained that LLVM is a modular framework for building compilers and serves as the infrastructure for many projects, for example:
- Clang - C/C++/Objective-C compiler
- LLDB - debugger
- lld - linker framework
- Polly - polyhedral optimizer
LLVM is now used in WebKit's FTL JIT, Rust, the Android NDK, OpenCL and CUDA implementations, and so on, which indicates more than sufficient maturity.
The main feature of LLVM is its IR, the Intermediate Representation: a RISC-like intermediate bitcode with type information, into which the source code is translated and within which all optimizations are performed. The IR is then lowered to assembly or machine code for the target architecture.
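To get a feel for the IR, here is a trivial C function and, in the comment, roughly what `clang -S -emit-llvm -O1` produces for it (attribute noise trimmed; exact value names vary between LLVM versions):

```c
/* add.c - compile with:  clang -S -emit-llvm -O1 add.c -o add.ll
 *
 * The resulting add.ll contains, roughly, this typed, RISC-like IR:
 *
 *   define i32 @add(i32 %a, i32 %b) {
 *     %sum = add nsw i32 %a, %b
 *     ret i32 %sum
 *   }
 */
int add(int a, int b)
{
    return a + b;
}
```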
Clang itself is a powerful C/C++/Objective-C compiler with rich diagnostics, and several quite interesting tools are built on top of it (a small sanitizer example follows after the list):
- Clang Static Analyzer - static analysis of C code
- clang-format - a code formatter
- clang-modernize - brings C++ code up to newer standards
- clang-tidy - checks for violations of coding conventions
- Sanitizers:
- AddressSanitizer
- ThreadSanitizer
- LeakSanitizer
- MemorySanitizer
- UBSanitizer (undefined behavior)
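As a quick illustration of the sanitizers (my example, not one from the talk): a classic heap buffer overflow that AddressSanitizer catches at runtime, reporting both where the bad store happened and where the buffer was allocated:

```c
/* heap_overflow.c
 * Build and run:  clang -g -fsanitize=address heap_overflow.c && ./a.out
 * ASan aborts with a heap-buffer-overflow report pointing at the store. */
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(10);
    buf[10] = 'x';      /* writes one byte past the end of the allocation */
    free(buf);
    return 0;
}
```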
In terms of performance, the code generated by Clang is on average about 2% slower than gcc's; on the other hand, compilation is much faster and the diagnostics are richer. We then went through the diagnostic capabilities with examples, comparing against gcc, and finished with a recipe for the fastest possible build of a large C++ project (LLVM itself): Clang + gold + PGO (profile-guided optimization) + split DWARF + an optimized TableGen + Ninja. That works out twice as fast as building with gcc. For details, ask the speaker.
Then I went to the intriguingly titled talk "Btrfs and rollback", but was disappointed by the speaker and the dreadful delivery of the material.
So I walked out and into "Project Ara Architecture". From what I caught: the UniPro bus was developed for the modular phone; each module has a small CPU to communicate over UniPro; when a module is inserted, the bus driver gets a notification about the new module, loads a driver and asks Android to fetch the module's software from the cloud.
I also visited a great Buildroot tutorial. Thomas Petazzoni from Free Electrons built a system for the BeagleBone Black with Buildroot right before our eyes, showed how to configure components and how to add your own kernel patches, and we looked at how to create your own package and customize the rootfs. We spent two hours on the demo and Q&A, and now I am very enthusiastic about Buildroot and going to give it a try.
The last talk of the day was "Linux Performance Profiling and Monitoring", which was utterly uninteresting to anyone who has tried to do this even a little: it consisted of a listing of vmstat, sar and top (>_<), with passing mentions of ftrace and perf. If you are really interested, go to brendangregg.com/linuxperf.html instead.

The final keynote was a "fireside chat" with Linus Torvalds and Dirk Hohndel. The two had a pleasant conversation about the state of the kernel and about what Linus plans to do next. The kernel is fine, and Linus does not plan to do anything else.
And then the same Thomas Petazzoni told how his small company of six people manages to make a significant contribution to kernel development, in particular to ARM SoC support. The secret is simple: a small team with no communication overhead, a focus on pushing changes upstream, constant knowledge sharing, and talking to people at conferences.
Third day

In addition to the talks there was a parallel exhibition where the sponsors showed themselves off, some did a bit of recruiting, and some held mini-summits (UEFI and Yocto, for example). Mostly, though, people ate and drank there, as the photo makes clear.
The keynotes of the third day were opened by Martin Fink from HP, who presented the OpenSwitch project - a modern open OS for network devices (switches). We are watching this project with great interest, hoping to use it in our switches. Martin also named one of the main threats to open source: the large number of licenses (around 70), most of them incompatible with each other (see the story of ZFS, DTrace and Linux), trolled Oracle and IBM a little, and left it at that.
Next we learned that the Internet of Things needs an open platform that we actually control (hello, Lenovo), and that it will start with the J-Core CPU project - an open, as in open hardware, processor.
And then there was a dreadful advertising talk from Huawei in the best traditions of Death by PowerPoint, which nobody listened to.
Then back to the talks. I really wanted to hear the one about multi-function devices (MFD), but the speaker started 10 minutes early (why?!), rattled through the introduction and moved on to throwing around muddled chunks of code, so I ran off to the talk about Open Channel SSDs.
Open Channel SSDs are SSD devices that give userspace (and kernel) applications access to internal information - the "geometry" of the SSD:
- The NAND media
- Channels and timings
- The list of bad blocks
- The PPA (physical page address) format
- ECC
All this so that I/O-intensive applications can utilize the SSD to the maximum. You can read more on GitHub.
Tim Bird gave his annual "Status of Embedded Linux" report on the interesting things that happened in Linux for embedded systems over the year and on what is coming. There is no point in listing everything, so I will limit myself to a short list.
What is worth watching next year:
- kdbus
- RT-preempt (now that it is under the auspices of the Linux Foundation, things should pick up)
- Persistent Memory
- SoC mainlining
Then there were two talks about debuggers - "How Debuggers Work" and "Debugging the Linux Kernel with GDB".
How debuggers work was explained by Pawel Moll from ARM. In short: through ptrace. In slightly more detail: the debugger forks, the child calls ptrace(PTRACE_TRACEME, ...) and then execve()s the program being debugged, while all control happens in the parent process (the debugger). Setting a breakpoint at an address means saving the original instruction at that address somewhere inside the debugger and replacing it with an architecture-specific one: on x86 it is int 3, while on ARM there is a specially defined undefined instruction (yes, a defined undefined instruction - pun intended). When execution reaches this special instruction, an exception occurs and a signal (SIGTRAP for int 3, SIGILL for the undefined instruction) is delivered to the parent process, i.e. the debugger, which can then do whatever it wants. To do anything really interesting, the program under study has to come with debug info, which is usually stored in separate sections of the ELF file in DWARF format.
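Here is a minimal sketch of that fork + PTRACE_TRACEME + execve scheme (my illustration, not code from the talk): the child asks to be traced and stops right after execve; the parent is where a real debugger would plant breakpoints with PTRACE_PEEKTEXT/PTRACE_POKETEXT and handle the resulting signals:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    pid_t child = fork();
    if (child == 0) {
        /* Child: ask to be traced, then exec the target program.
         * The kernel stops the child with a signal right after execve. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(1);
    }

    /* Parent (the "debugger"): wait for the post-execve stop. This is
     * where breakpoints would be planted via PTRACE_POKETEXT. */
    int status;
    waitpid(child, &status, 0);
    printf("child %d stopped after execve, resuming\n", child);

    /* Let the child run to completion. */
    ptrace(PTRACE_CONT, child, NULL, NULL);
    waitpid(child, &status, 0);
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```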
Peter Griffin (no, not that one) from Linaro talked about debugging the Linux kernel and suggested four ways to do it:
- gdb remote to kgdb stub in the kernel
- gdb remote to qemu
- gdb remote to gdb-compatible JTAG, for example OpenOCD
- gdb on a kernel dump, via the crash utility
He also spoke about the progress of support for debugging Linux in GDB.
There was also a Linaro talk on the pompous topic "Rethinking the Core OS in 2015". In practice, a sad bearded man recited platitudes along the lines of "let's replace gcc with Clang and glibc with musl!". The result: plenty of problems and no big win. Strange.
With that, the talks were over; to wrap up, a dozen development boards were raffled off and everyone was taken to the Guinness brewery museum. But that is another story.
Conclusion
After reading my notes you might get the impression that most of the talks were weak, but that is not true. The bad talks did spoil the mood, of course, but the good ones that remained were well worth it. In three days we managed to talk to a lot of people, see how our industry lives and where it is heading, and stock up on inspiration for a long time to come - and that is exactly what such events are worth attending for.
And I now have two photos that many of my friends envy.


That's all. Thanks for your attention!