Updated bleeding edge toolchains on toolchains.free-electrons.com

Two months ago, we announced a new service from Free Electrons: free and ready-to-use Linux cross-compilation toolchains, for a large number of architectures and C libraries, available at http://toolchains.free-electrons.com/.

Bleeding edge toolchain updates

All our bleeding edge toolchains have been updated, with the latest version of the toolchain components:

  • gcc 7.2.0, which was released 2 days ago
  • glibc 2.26, which was released 2 weeks ago
  • binutils 2.29
  • gdb 8.0

Those bleeding edge toolchains are now based on Buildroot 2017.08-rc2, which brings a nice improvement: the host tools (gcc, binutils, etc.) are no longer linked statically against gmp, mpfr and other host libraries. They are dynamically linked against them with an appropriate rpath encoded into the gcc and binutils binaries to find those shared libraries regardless of the installation location of the toolchain.
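
This can be verified on any of the downloaded toolchains; a quick sketch (the binary name depends on the cross-compilation prefix, which varies from one toolchain to another):

    # Show the rpath recorded in the cross-gcc binary
    readelf -d bin/arm-linux-gcc | grep -i rpath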

However, due to gdb 8.0 requiring a C++11 compiler on the host machine (at least gcc 4.8), our bleeding edge toolchains are now built in a Debian Jessie system instead of Debian Squeeze, which means that at least glibc 2.14 is needed on the host system to use them.
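
Checking whether a given machine qualifies is a one-liner (the output format varies between distributions):

    # Print the host glibc version; it should be 2.14 or newer
    ldd --version | head -n 1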

The only toolchains for which the tests are not successful are the MIPS64R6 toolchains, due to the Linux kernel not building properly for this architecture with gcc 7.x. This issue has already been reported upstream.

Stable toolchain updates

We haven’t changed the component versions of our stable toolchains, but we made a number of fixes to them:

  • The armv7m and m68k-coldfire toolchains have been rebuilt with a fixed version of elf2flt that makes the toolchain linker directly usable. This fixes building the Linux kernel using those toolchains.
  • The mips32r5 toolchain has been rebuilt with NaN 2008 encoding (instead of legacy NaN), which makes the resulting userspace binaries actually executable by the Linux kernel, since the kernel expects NaN 2008 encoding on mips32r5 by default.
  • Most mips toolchains for musl have been rebuilt, with Buildroot fixes for the creation of the dynamic linker symbolic link. This has no effect on the toolchain itself, but allows the tests under Qemu to run properly and validate the toolchains.

Other improvements

We made a number of small improvements to the toolchains.free-electrons.com site:

  • Each architecture now has a page that lists all available toolchain versions. This makes it easy to find a toolchain that matches your requirements (in terms of gcc version, kernel headers version, etc.). See All aarch64 toolchains for an example.
  • We added a FAQ as well as a news page.

As usual, we welcome feedback about our toolchains, either on our bug tracker or by mail at info@free-electrons.com.


Free Electrons proposes an I3C subsystem for the Linux kernel

MIPI I3C fact sheet, from the MIPI I3C white paper

At the end of 2016, the MIPI consortium finalized the first version of its I3C specification, a new communication bus that aims to replace older busses like I2C and SPI. According to the specification, I3C approaches SPI data rates while requiring fewer pins, and adds interesting mechanisms like in-band interrupts, hotplug capability and automatic discovery of devices connected to the bus. In addition, I3C provides backward compatibility with I2C: I3C and legacy I2C devices can be connected on a common bus controlled by an I3C master.

For more details about I3C, we suggest reading the MIPI I3C Whitepaper, as unfortunately MIPI has not publicly released the specifications for this protocol.

For the last few months, Free Electrons engineer Boris Brezillon has been working with Cadence to develop a Linux kernel subsystem to support this new bus, as well as Cadence’s I3C master controller IP. We have now posted the first version of our patch series to the Linux kernel mailing list for review, and we already received a large number of very useful comments from the kernel community.

Free Electrons is proud to be pioneering the support for this new bus in the Linux kernel, and hopes to see other developers contribute to this subsystem in the near future!


Linux 4.12, Free Electrons contributions

Linus Torvalds released the 4.12 Linux kernel a week ago, in what is the second biggest kernel release ever by number of commits. As usual, LWN had very nice coverage of the major new features and improvements: first part, second part and third part.

LWN has also published statistics about the Linux 4.12 development cycle, showing:

  • Free Electrons as the #14 contributing company by number of commits, with 221 commits, between Broadcom (230 commits) and NXP (212 commits)
  • Free Electrons as the #14 contributing company by number of changed lines, with 16636 lines changed, just two lines less than Mellanox
  • Free Electrons engineer and MTD NAND maintainer Boris Brezillon as the #17 most active contributor by number of lines changed.

Our most important contributions to this kernel release have been:

  • On Atmel AT91 and SAMA5 platforms:
    • Alexandre Belloni has continued to upstream the support for the SAMA5D2 backup mode, which is a very deep suspend-to-RAM state, offering very nice power savings. Alexandre touched the core code in arch/arm/mach-at91 as well as pinctrl and irqchip drivers.
    • Boris Brezillon has converted the Atmel PWM driver to the atomic API of the PWM subsystem, implemented suspend/resume and did a number of fixes in the Atmel display controller driver, and also removed the no longer used AT91 Parallel ATA driver.
    • Quentin Schulz improved the suspend/resume hooks in the atmel-spi driver to support the SAMA5D2 backup mode.
  • On Allwinner platforms:
    • Mylène Josserand has made a number of improvements to the sun8i-codec audio driver that she contributed a few releases ago.
    • Maxime Ripard added devfreq support to dynamically change the frequency of the GPU on the Allwinner A33 SoC.
    • Quentin Schulz added battery charging and ADC support to the X-Powers AXP20x and AXP22x PMICs, found on Allwinner platforms.
    • Quentin Schulz added a new IIO driver to support the ADCs found on numerous Allwinner SoCs.
    • Quentin Schulz added support for the Allwinner A33 built-in thermal sensor, and used it to implement thermal throttling on this platform.
  • On Marvell platforms:
    • Antoine Ténart contributed Device Tree changes to describe the cryptographic engines found in the Marvell Armada 7K and 8K SoCs. For now only the Device Tree description has been merged, the driver itself will arrive in Linux 4.13.
    • Grégory Clement has contributed a pinctrl and GPIO driver for the Marvell Armada 3720 SoC (Cortex-A53 based).
    • Grégory Clement has improved the Device Tree description of the Marvell Armada 3720 and Marvell Armada 7K/8K SoCs and corresponding evaluation boards: SDHCI and RTC are now enabled on Armada 7K/8K, USB2, USB3 and RTC are now enabled on Armada 3720.
    • Thomas Petazzoni made a significant number of changes to the mvpp2 network driver, finally adding support for the PPv2.2 version of this Ethernet controller. This allowed network support to be enabled on the Marvell Armada 7K/8K SoCs.
    • Thomas Petazzoni contributed a number of fixes to the mv_xor_v2 dmaengine driver, used for the XOR engines on the Marvell Armada 7K/8K SoCs.
    • Thomas Petazzoni cleaned up the MSI support in the Marvell pci-mvebu and pcie-aardvark PCI host controller drivers, which allowed the removal of a no-longer-used MSI kernel API.
  • On the ST SPEAr600 platform:
    • Thomas Petazzoni added support for the ADC available on this platform, by adding its Device Tree description and fixing a clock driver bug
    • Thomas did a number of small improvements to the Device Tree description of the SoC and its evaluation board
    • Thomas cleaned up the fsmc_nand driver, which is used for the NAND controller driver on this platform, removing lots of unused code
  • In the MTD NAND subsystem:
    • Boris Brezillon implemented a mechanism to allow vendor-specific initialization and detection steps to be added, on a per-NAND chip basis. As part of this effort, he has split into multiple files the vendor-specific initialization sequences for Macronix, AMD/Spansion, Micron, Toshiba, Hynix and Samsung NANDs. This work will make it easier in the future to exploit the vendor-specific features of different NAND chips.
  • Other contributions:
    • Maxime Ripard added a display panel driver for the ST7789V LCD controller

In addition, several Free Electrons engineers are also maintainers of various kernel subsystems. During this release cycle, they reviewed and merged a number of patches from kernel contributors:

  • Maxime Ripard, as the Allwinner co-maintainer, merged 94 patches
  • Boris Brezillon, as the NAND maintainer and MTD co-maintainer, merged 64 patches
  • Alexandre Belloni, as the RTC maintainer and Atmel co-maintainer, merged 38 patches
  • Grégory Clement, as the Marvell EBU co-maintainer, merged 32 patches

The details of all our contributions for this release:


Free and ready-to-use cross-compilation toolchains

For all embedded Linux developers, cross-compilation toolchains are part of the basic tool set, as they allow building code for a specific CPU architecture and debugging it. Until a few years ago, CodeSourcery was providing a lot of high quality pre-compiled toolchains for a wide range of architectures, but has progressively stopped doing so. Linaro provides some freely available toolchains, but only targeting ARM and AArch64. kernel.org has a set of pre-built toolchains for a wider range of architectures, but they are bare metal toolchains (they cannot build Linux userspace programs) and are updated infrequently.

To fill this gap, Free Electrons is happy to announce its new service to the embedded Linux community: toolchains.free-electrons.com.

Free Electrons toolchains

This web site provides a large number of cross-compilation toolchains, available for a wide range of architectures, in multiple variants. The toolchains are based on the classical combination of gcc, binutils and gdb, plus a C library. We currently provide a total of 138 toolchains, covering many combinations of:

  • Architectures: AArch64 (little and big endian), ARC, ARM (little and big endian, ARMv5, ARMv6, ARMv7), Blackfin, m68k (Coldfire and 68k), Microblaze (little and big endian), MIPS32 and MIPS64 (little and big endian, with various instruction set variants), NIOS2, OpenRISC, PowerPC and PowerPC64, SuperH, Sparc and Sparc64, x86 and x86-64, Xtensa
  • C libraries: GNU C library, uClibc-ng and musl
  • Versions: for each combination, we provide a stable version, which uses slightly older but more proven versions of gcc, binutils and gdb, and a bleeding edge version, with the latest versions of gcc, binutils and gdb.

After being generated, most of the toolchains are tested by building a Linux kernel and a Linux userspace, and booting it under Qemu, which verifies that the toolchain is minimally working. We plan on adding more tests to validate the toolchains, and welcome your feedback on this topic. Of course, not all toolchains are tested this way, because some CPU architectures are not emulated by Qemu.

The toolchains are built with Buildroot, but can be used for any purpose: building a Linux kernel or bootloader, serving as a pre-built toolchain for your favorite embedded Linux build system, etc. The toolchains are available as tarballs, together with licensing information and instructions on how to rebuild the toolchain if needed.
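
As a sketch of typical usage, here is how one of those tarballs can be put to work; the tarball name and cross-compilation prefix below are illustrative and vary from one toolchain to another:

    # Extract a tarball downloaded from toolchains.free-electrons.com
    # (the file name here is illustrative)
    tar xf armv7-eabihf--glibc--stable.tar.bz2
    export PATH=$PWD/armv7-eabihf--glibc--stable/bin:$PATH

    # Cross-compile a standalone program...
    arm-linux-gcc -o hello hello.c

    # ...or use the toolchain to build a Linux kernel
    make -C linux ARCH=arm CROSS_COMPILE=arm-linux- zImage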

We are very much interested in your feedback about those toolchains, so do not hesitate to report bugs or make suggestions in our issue tracker!

This work was done as part of the internship of Florent Jacquet at Free Electrons.


Elixir Cross Referencer: new way to browse kernel sources

Today, we are pleased to announce the initial release of the Elixir Cross-Referencer, or just “Elixir”, for short.

What is Elixir?

Since 2006, we have provided a Linux source code cross-referencing online tool as a service to the community. The engine behind this website was LXR, a Perl project almost as old as the kernel itself. For the first few years, we used the then-current 0.9.5 version of LXR, but in early 2009 and for various reasons, we reverted to the older 0.3.1 version (from 1999!). In a nutshell, it was simpler and it scaled better.

Recently, we had the opportunity to spend some time on it, to correct a few bugs and to improve the service. After studying the Perl source code and trying out various cross-referencing engines (among which LXR 2.2 and OpenGrok), we decided to implement our own source code cross-referencing engine in Python.

Why create a new engine?

Our goal was to extend our existing service (support for multiple projects, responsive design, etc.) while keeping it simple and fast. When we tried other cross-referencing engines, we were dissatisfied with their relatively low performance on a large codebase such as Linux. Although we probably could have tweaked the underlying database engine for better performance, we decided it would be simpler to stick to the strategy used in LXR 0.3: get away from the relational database engine and keep plain lists in simple key-value stores.

Another reason that motivated a complete rewrite was that we wanted to provide an up-to-date reference (including the latest revisions) while keeping it immutable, so that external links to the source code wouldn’t get broken in the future. As a direct consequence, we would need to index many different revisions for each project, with potentially a lot of redundant information between them. That’s when we realized we could leverage the data model of Git to deal with this redundancy in an efficient manner, by indexing Git blobs, which are shared between revisions. In order to make sure queries under this strategy would be fast enough, we wrote a proof-of-concept in Python, and thus Elixir was born.

What service does it provide?

First, we tried to minimize disruption to our users by keeping the user interface close to that of our old cross-referencing service. The main improvements are:

  • We now support multiple projects. For now, we provide cross-references for Linux, Busybox and U-Boot.
  • Every tag in each project’s git repository is now automatically indexed.
  • The design has been modernized and now fits comfortably on smaller screens like tablets.
  • The URL scheme has been simplified and extended with support for multiple projects. An HTTP redirector has been set up for backward compatibility.

Elixir supports multiple projects

Among other smaller improvements, it is now possible to copy and paste code directly without line numbers getting in the way.

How does it work?

Elixir is made of two Python scripts: “update” and “query”. The first looks for new tags and new blobs inside a Git repository, parses them, and appends the new identifier references to a record inside the database. The second uses the database and the Git repository to display annotated source code and identifier references.

The parsing itself is done with Ctags, which provides us with identifier definitions. In order to find the references to these identifiers, Elixir then simply checks each lexical token in the source file against the definition database, and if that word is defined, a new reference is added.

As in LXR 0.3, the database structure is kept very simple so that queries don’t have much work to do at runtime, which speeds them up. In particular, we store the references to a particular identifier as a simple list, which can be loaded and parsed very fast. The main difference from LXR is that our list includes references from every blob in the project, so we first need to restrict it to the blobs that are part of the current version. This is done at runtime, simply by computing the intersection of this list with the list of blobs inside the current version.
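
Conceptually, this runtime filtering is just a set intersection, which can be pictured with a plain shell analogy (the file names are made up for illustration):

    # refs.txt:    blobs (by hash) containing a reference to the identifier
    # version.txt: blobs that are part of the version being browsed
    # comm -12 keeps only the lines common to both sorted inputs (bash syntax)
    comm -12 <(sort refs.txt) <(sort version.txt)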

Finally, we kept the user interface code clearly segregated from the engine itself by making these two modules communicate through a Unix command-line interface. This means that you can run queries directly on the command-line without going through the web interface.

Elixir code example

What’s next?

Our current focus is on improving multi-project support. In particular, each project has its own quirky way of using Git tags, which needs to be handled individually.

At the user-interface level, we are evaluating the possibility of having auto-completion and/or fuzzy search of identifier names. Also, we are looking for a way to provide direct line-level access to references even in the case of very common identifiers.

On the performance front, we would like to cut the indexing time by switching to a new database back-end that provides efficient appending to large records. We could also make source code queries faster by precomputing the references, which would also allow us to eliminate identifier “bleeding” between versions (the case where an identifier shows up as “defined in 0 files” because it is only defined in another version).

If you think of other ways we could improve our service, don’t hesitate to drop us a feature request or a patch!

Bonus: why call it “Elixir”?

On the spur of the moment, it seemed like a nice pun on the name “LXR”. But in retrospect, we wish to apologize to the Elixir language team and the community at large for unnecessary namespace pollution.


Beyond boot testing: custom tests with LAVA

Since April 2016, we have had our own automated testing infrastructure to validate the Linux kernel on a large number of hardware platforms. We use this infrastructure to contribute to the KernelCI project, which tests the Linux kernel every day. However, the tests done by KernelCI are really basic: mostly booting a basic Linux system and checking that it reaches a shell prompt.

However, LAVA, the software component at the core of this testing infrastructure, can do a lot more than just basic tests.

The need for custom tests

With some of our engineers being Linux maintainers, and given all the platforms we need to maintain for our customers, being able to automatically test specific features beyond a simple boot test was a very interesting goal.

In addition, manually testing a kernel change on a large number of hardware platforms can be really tedious. Being able to quickly send test jobs that will use an image you built on your machine can be a great advantage when you have some new code in development that affects more than one board.

We identified two main use cases for custom tests:

  • Automatic tests to detect regressions, as KernelCI does, but with more advanced tests, including platform-specific tests.
  • Manual tests executed by engineers to validate that the changes they are developing do not break existing features, on all platforms.

Overall architecture

Several tools are needed to run custom tests:

  • The LAVA instance, which controls the hardware platforms to be tested. See our previous blog posts on our testing hardware infrastructure and software architecture
  • An appropriate root filesystem, which contains the various userspace programs needed to execute the tests (benchmarking tools, validation tools, etc.)
  • A test suite, which contains various scripts executing the tests
  • A custom test tool that glues together the different components

The custom test tool knows all the hardware platforms available and which tests and kernel configurations apply to which hardware platforms. It identifies the appropriate kernel image, Device Tree, root filesystem image and test suite and submits a job to LAVA for execution. LAVA will download the necessary artifacts and run the job on the appropriate device.

Building custom rootfs

When it comes to testing specific drivers, dedicated testing, validation or benchmarking tools are sometimes needed. For example, bonnie++ can be used for storage device testing, while iperf is nice for network testing. As the default root filesystem used by KernelCI is really minimalist, we need to build our own, one for each architecture we want to test.

Buildroot is a simple yet efficient tool to generate root filesystems; it is also used by KernelCI to build their minimalist root filesystems. We chose to use it and wrote custom configuration files to match our needs.

We ended up with custom rootfs built for ARMv4, ARMv5, ARMv7 and ARMv8, which for now embed Bonnie++, iperf, ping (not the Busybox implementation) and other small tools that aren’t included in the default Buildroot configuration.

Our Buildroot fork that includes our custom configurations is available as the buildroot-ci Github project (branch ci).

The custom test tool

The custom test tool binds the different elements of the overall architecture together.

One of the main features of the tool is to send jobs. Jobs are text files used by LAVA to know what to do with which device. As they are described in LAVA as YAML files (in version 2 of the API), it is easy to generate them from templates based on a single model. Some information is quite static, such as the device tree name for a given board or the rootfs version to use, but other details change for every job, such as the kernel to use or which test to run.

We made the tool able to fetch the latest kernel images from KernelCI, in order to quickly send jobs without having to compile a custom kernel image. If the need is to test a custom image built locally, the tool is also able to send files to the LAVA server through SSH, to provide a custom kernel image.

The entry point of the tool is ctt.py, which allows creating new jobs, with many options to define the various aspects of the job (kernel, Device Tree, root filesystem, test, etc.).

This tool is written in Python, and lives in the custom_tests_tool Github project.

The test suite

The test suite is a set of shell scripts that each return 0 or 1 depending on the test result; a minimal sketch is shown after the list below. This test suite is included inside the root filesystem by LAVA as part of a preparation step for each job.

We currently have a small set of tests:

  • boot test, which simply returns 0. Such a test will be successful as soon as the boot succeeds.
  • mmc test, to test MMC storage devices
  • sata test, to test SATA storage devices
  • crypto test, to do some minimal testing of cryptographic engines
  • usb test, to test USB functionality using mass storage devices
  • simple network test, that just validates network connectivity using ping
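
Here is that minimal sketch of such a test script; the target address is hypothetical (the real scripts live in the test_suite project mentioned below):

    #!/bin/sh
    # Minimal network test sketch: exit 0 if the gateway answers to ping,
    # exit 1 otherwise
    ping -c 4 192.168.1.1 > /dev/null 2>&1 || exit 1
    exit 0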

All those tests only require the target hardware platform itself. However, for more elaborate network tests, we needed two devices interacting with each other: the target hardware platform and a reference PC platform. For this, we use the LAVA MultiNode API. It allows a test to span multiple devices, which we use to perform multiple iperf sessions to benchmark the bandwidth. This test therefore has one part running on the target device (network-board) and one part running on the reference PC platform (network-laptop).

Our current test suite is available as the test_suite Github project. It is obviously limited to just a few tests for now; we hope to extend the tests in the near future.

First use case: daily tests

As previously stated, it’s important for us to know about regressions introduced in the upstream kernel. Therefore, we have set up a simple daily cron job that:

  • Sends custom jobs to all boards to validate the latest mainline Linux kernel and latest linux-next
  • Aggregates results from the past 24 hours and sends emails to subscribed addresses
  • Updates a dashboard that displays results in a very simple page

A nice dashboard showing the tests of the Beaglebone Black and the Nitrogen6x.

Second use case: manual tests

The custom test tool ctt.py has a simple command line interface. It’s easy for someone to set it up and send custom jobs. For example:

ctt.py -b beaglebone-black -m network

will start the network test on the BeagleBone Black, using the latest mainline Linux kernel built by KernelCI. On the other hand:

ctt.py -b armada-7040-db armada-8040-db -t mmc --kernel arch/arm64/boot/Image --dtb-folder arch/arm64/boot/dts/

will run the mmc test on the Marvell Armada 7040 and Armada 8040 development boards, using the locally built kernel image and Device Tree.

The result of the job is sent over e-mail when the test has completed.


Thanks to this custom test tool, we now have an infrastructure that leverages our existing lab and LAVA instance to execute more advanced tests. Our goal is now to increase the coverage by adding more tests and running them on more devices. Of course, we welcome feedback and contributions!


Recent improvements in Buildroot QA

Over the last few releases, a significant number of improvements to QA-related tooling have been made in the Buildroot project. As an embedded Linux build system, Buildroot has a growing number of packages, and maintaining all of those packages is a challenge. Therefore, improving the infrastructure around Buildroot to make sure that packages are in good shape is very important. Below we provide a summary of the different improvements that have been made.


check-package script

Very much like the Linux kernel has a checkpatch.pl script to help contributors validate their patches, Buildroot now has a check-package script that validates the coding style of Buildroot packages and checks them for common errors. Contributors are encouraged to use it to avoid the common mistakes typically spotted during the review process.

check-package is capable of checking a package’s .mk file, its Config.in and its .hash file, as well as the patches that apply to the package.
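
Usage is similar to checkpatch.pl; a sketch, assuming a hypothetical libfoo package (the script’s location within the tree may differ between releases):

    # From the top-level Buildroot directory, check one package's files
    ./utils/check-package package/libfoo/*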

Contributed by Ricardo Martincoski, and written in Python, this tool will first appear in the next stable release 2017.05, to be published at the end of the month.

Buildroot’s page of package stats has been updated with a new column Warnings that lists the number of check-package issues to be fixed on each package. Not all packages have been fixed yet!

test-pkg script

Besides coding style issues, another problem that the Buildroot community faces when accepting contributions of new packages or package updates, is that those contributions have rarely been tested on a large number of toolchain/architecture configurations. To help contributors in this testing, a test-pkg tool has been added.

Provided with a Buildroot configuration snippet that enables the package to be tested, the test-pkg tool iterates over a number of toolchain/architecture configurations and makes sure the package builds fine in all of them. The set of configurations being tested is the one used by the Buildroot autobuilder infrastructure, which allows us to make sure that a package will not fail to build as soon as it is added to the tree. Contributors are therefore encouraged to use this tool for their package-related contributions.
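
A hedged example of the workflow, again with a hypothetical libfoo package (option names may differ between releases; check the tool’s help):

    # Config snippet enabling the package to be tested
    cat > libfoo.config <<EOF
    BR2_PACKAGE_LIBFOO=y
    EOF

    # Build-test it against the reference toolchain configurations
    ./utils/test-pkg -c libfoo.config -p libfoo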

A very primitive version of this tool was originally contributed by Thomas Petazzoni, but it’s finally Yann E. Morin who took over, cleaned up the code, extended it and made it more generic, and contributed the final tool.

Runtime testing infrastructure

The Buildroot autobuilder infrastructure has been running for several years, and tests random configurations of Buildroot packages to make sure they build properly. This infrastructure allows the Buildroot developers to make sure that all combinations of packages build properly on all architectures, and has been a very useful tool to help increase Buildroot quality. However, this infrastructure does not perform any sort of runtime testing.

To address this, a new runtime testing infrastructure has recently been contributed to Buildroot by Thomas Petazzoni. Contrary to the autobuilder infrastructure, which tests random configurations, this runtime testing infrastructure tests a well-defined set of configurations and uses Qemu to make sure that they work properly. Located in support/testing, this test infrastructure currently has only 25 test cases, but we plan to extend it over time with more and more tests.

For example, the ISO9660 test makes sure that Buildroot is capable of building an ISO9660 image that boots properly under Qemu. The test suite validates that it works in different configurations: using either Grub, Grub2 or Syslinux as a bootloader, and using a root filesystem entirely contained in an initramfs or inside the ISO9660 filesystem itself.
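
Individual tests can be run by hand with the runner that lives alongside them; a hedged sketch (option names and test identifiers may differ between releases):

    # Run only the ISO9660 test cases, with dedicated download and output dirs
    ./support/testing/run-tests -d dl -o output_iso9660 tests.fs.test_iso9660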

Besides doing run-time tests of packages, this infrastructure will also allow us to test various core Buildroot functionalities. We also plan to have the tests executed on a regular basis on a CI infrastructure such as Gitlab CI.

DEVELOPERS file and e-mail notification

Back in the 2016.11 release, we added a top-level file called DEVELOPERS, which plays more or less the same role as the Linux kernel’s MAINTAINERS file: associate parts of Buildroot, especially packages, with developers interested in this area. Since Buildroot doesn’t have a concept of per-package maintainers, we decided to simply call the file DEVELOPERS.

Thanks to the DEVELOPERS file, we are able to:

  • Provide the get-developers tool, which parses patches and returns the list of e-mail addresses to which the patches should be sent, very much like the Linux kernel get_maintainer.pl tool (see the sketch after this list). This allows developers who have contributed a package to be notified when a patch is proposed for the same package, and to get the chance to review/test it.
  • Notify the developers of a package about build failures caused by this package in the Buildroot autobuilder infrastructure. So far, all build results of this infrastructure were simply sent every day to the mailing list, which made it impractical for individual developers to notice which build failures they should look at. Thanks to the DEVELOPERS file, we now send a daily e-mail individually to the developers whose packages are affected by build failures.
  • Notify the developers supporting a given CPU architecture about build failures caused on the autobuilders for this architecture, in a manner similar to what is done for packages.
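
As mentioned in the list above, a hedged sketch of the get-developers workflow; the patch file name is illustrative, and the script’s location in the tree may differ between releases:

    # Generate a patch, then ask get-developers who should receive it
    git format-patch -1 -o outgoing/
    ./utils/get-developers outgoing/0001-libfoo-bump-to-1.2.patch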

Thanks to this, we have seen developers who were not regularly following the Buildroot mailing list once again contribute fixes for build failures caused by their packages, increasing the overall Buildroot quality. The DEVELOPERS file and the get-developers script, as well as the related improvements to the autobuilder infrastructure, were contributed by Thomas Petazzoni.

Build testing of defconfigs on Gitlab CI

Buildroot contains a number of example configurations called defconfigs for various hardware platforms, which allow building a minimal embedded Linux system known to work on those platforms. At the time of this writing, Buildroot has 146 defconfigs for platforms ranging from popular development boards (Raspberry Pi, BeagleBone, etc.) to evaluation boards from SoC vendors and Qemu machine emulation.

In order to make sure all those defconfigs build properly, we used to have a job running on Travis CI, but we started to face limitations, especially on the maximum allowed build duration. Therefore, Arnout Vandecappelle migrated this to Gitlab CI, and things have been running smoothly since then.


Introducing lavabo, board remote control software

In two previous blog posts, we presented the hardware and software architecture of the automated testing platform we have created to test the Linux kernel on a large number of embedded platforms.

The primary use case for this infrastructure was to participate in the KernelCI.org testing effort, which tests the Linux kernel every day on many hardware platforms.

However, since our embedded boards are now fully controlled by LAVA, we wondered if we could use our lab not only for KernelCI.org, but also to provide remote control of our boards to Free Electrons engineers, so that they can access development boards from anywhere. lavabo was born from this idea; its goal is to allow full remote control of the boards, as is done in LAVA: interface with the serial port, control the power supply, and provide files to the board using TFTP.

The advantages of being able to access the boards remotely are obvious: it allows engineers working from home to work on their hardware platforms, avoids moving the boards out of the lab and back into the lab each time an engineer wants to run a test, etc.

User’s perspective

From a user’s point of view, lavabo is used through the eponymous lavabo command, which allows you to:

  • List the boards and their status
    $ lavabo list
  • Reserve a board for lavabo usage, so that it is no longer used for CI jobs
    $ lavabo reserve am335x-boneblack_01
  • Upload a kernel image and Device Tree blob so that it can be accessed by the board through TFTP
    $ lavabo upload zImage am335x-boneblack.dtb
  • Connect to the serial port of the board
    $ lavabo serial am335x-boneblack_01
  • Reset the power of the board
    $ lavabo reset am335x-boneblack_01
  • Power off the board
    $ lavabo power-off am335x-boneblack_01
  • Release the board, so that it can once again be used for CI jobs
    $ lavabo release am335x-boneblack_01

Overall architecture and implementation

The following diagram summarizes the overall architecture of lavabo (components in green) and how it connects with existing components of the LAVA architecture.

lavabo reuses LAVA tools and configuration files

A client-server software

lavabo follows the classical client-server model: the lavabo client is installed on the machines of users, while the lavabo server is hosted on the same machine as LAVA. The server-side of lavabo is responsible for calling the right tools directly on the server machine and making the right calls to LAVA’s API. It controls the boards and interacts with the LAVA instance to reserve and release a board.

On the server machine, a specific Unix user is configured, through its .ssh/authorized_keys file, to automatically spawn the lavabo server program when someone connects. The lavabo client and server interact directly through their stdin/stdout, by exchanging JSON dictionaries. This interaction model was inspired by the Attic backup program. As a result, the lavabo server is not a background process that runs permanently like traditional daemons.

Handling serial connection

Exchanging JSON over SSH works fine to allow the lavabo client to provide instructions to the lavabo server, but it doesn’t work well to provide access to the serial ports of the boards. However, ser2net is already used by LAVA and provides a local telnet port for each serial port. lavabo simply uses SSH port-forwarding to redirect those telnet ports to local ports on the user’s machine.
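
Under the hood this is a plain SSH port forward; a sketch with a made-up host name and port number:

    # Forward a board's ser2net telnet port (20000 here is hypothetical)
    # to the same port on the local machine, then connect to it
    ssh -N -L 20000:localhost:20000 lavabo@lab.example.com &
    telnet localhost 20000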

Different ways to connect to the serial

Interaction with LAVA

To use a board outside of LAVA, we have to interact with LAVA to tell it the board cannot be used anymore. We therefore worked with the LAVA developers to add API endpoints for putting boards online (release) and offline (reserve), and an endpoint to get the current status of a board (busy, idle or offline).

These additions to the LAVA API are used by the lavabo server to reserve and release boards, so that there is no conflict between CI-related jobs (such as the ones submitted by KernelCI.org) and the direct use of boards for remote development.

Interaction with the boards

Now that we know how the client and the server interact, and how the server communicates with LAVA, we need a way to know which boards are in the lab, on which port the serial connection of a board is exposed, and which commands control the board’s power supply. All this configuration has already been given to LAVA, so the lavabo server simply reads the LAVA configuration files.

The last requirement is to provide files to the board, such as kernel images, Device Tree blobs, etc. Indeed, from a network point of view, the boards are located in a different subnet, not routed directly to the users’ machines. LAVA already has a directory accessible through TFTP from the boards, which is one of the mechanisms used to serve files to boards. Therefore, the easiest and most obvious way is to send files from the client to the server and move them to this directory, which we implemented using SFTP.

User authentication

Since the serial port cannot be shared among several sessions, it is essential to guarantee that a board can only be used by one engineer at a time. In order to identify users, we have one SSH key per user in the .ssh/authorized_keys file on the server, each associated with a call to the lavabo-server program with a different username.
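
Concretely, this means one forced command per key in the server’s .ssh/authorized_keys; a sketch, with an illustrative user name and a shortened key:

    command="lavabo-server alice" ssh-rsa AAAAB3NzaC1yc2E... alice@example.com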

This allows us to identify who is reserving/releasing the boards, and to make sure that serial port access and requests to power off or reset a board are done by the user who reserved it.

For TFTP, the lavabo upload command automatically uploads files into a per-user sub-directory of the TFTP server. Therefore, when a file called zImage is uploaded, the board will access it over TFTP by downloading user/zImage.

Availability and installation

As you could guess from our love for FOSS, lavabo is released under the GNU GPLv2 license in a GitHub repository. Extensive documentation is available if you’re interested in installing lavabo. Of course, patches are welcome!


Eight channels audio on i.MX7 with PCM3168

Free Electrons engineer Alexandre Belloni recently worked on a custom carrier board for a Colibri iMX7 system-on-module from Toradex. This system-on-module obviously uses the i.MX7 ARM processor from Freescale/NXP.

While the module includes an SGTL5000 codec, one of the requirements for that project was to handle up to eight audio channels. The SGTL5000 uses I²S and handles only two channels.


I2S timing diagram from the SGTL5000 datasheet

Thankfully, the i.MX7 has multiple audio interfaces, and one is fully available on the SODIMM connector of the Colibri iMX7. A TI PCM3168 was chosen for the carrier board and is connected to the second Synchronous Audio Interface (SAI2) of the i.MX7. This codec can handle up to 8 output channels and 6 input channels. It can accept multiple formats as its input, but TDM requires the smallest number of signals (4 signals: bit clock, word clock, data input and data output).

TDM timing diagram from the PCM3168 datasheet

The current Linux long term support version is 4.9, and it was chosen for this project. It has support for both the i.MX7 SAI (sound/soc/fsl/fsl_sai.c) and the PCM3168 (sound/soc/codecs/pcm3168a.c). That’s two of the three components that are needed, the last one being the driver linking both by describing the topology of the “sound card”. In order to keep the custom code to a minimum, there is an existing generic driver called simple-card (sound/soc/generic/simple-card.c). It is always worth trying to use it unless something really specific prevents that. Using it was as simple as writing the following DT node:

        board_sound {
                compatible = "simple-audio-card";
                simple-audio-card,name = "imx7-pcm3168";
                simple-audio-card,widgets =
                        "Speaker", "Channel1out",
                        "Speaker", "Channel2out",
                        "Speaker", "Channel3out",
                        "Speaker", "Channel4out",
                        "Microphone", "Channel1in",
                        "Microphone", "Channel2in",
                        "Microphone", "Channel3in",
                        "Microphone", "Channel4in";
                simple-audio-card,routing =
                        "Channel1out", "AOUT1L",
                        "Channel2out", "AOUT1R",
                        "Channel3out", "AOUT2L",
                        "Channel4out", "AOUT2R",
                        "Channel1in", "AIN1L",
                        "Channel2in", "AIN1R",
                        "Channel3in", "AIN2L",
                        "Channel4in", "AIN2R";

                simple-audio-card,dai-link@0 {
                        format = "left_j";
                        bitclock-master = <&pcm3168_dac>;
                        frame-master = <&pcm3168_dac>;

                        cpu {
                                sound-dai = <&sai2>;
                                dai-tdm-slot-num = <8>;
                                dai-tdm-slot-width = <32>;
                        };

                        pcm3168_dac: codec {
                                sound-dai = <&pcm3168 0>;
                                clocks = <&codec_osc>;
                        };
                };

                simple-audio-card,dai-link@2 {
                        format = "left_j";
                        bitclock-master = <&pcm3168_adc>;
                        frame-master = <&pcm3168_adc>;

                        cpu {
                                sound-dai = <&sai2>;
                                dai-tdm-slot-num = <8>;
                                dai-tdm-slot-width = <32>;
                        };

                        pcm3168_adc: codec {
                                sound-dai = <&pcm3168 1>;
                                clocks = <&codec_osc>;
                        };
                };
        };

There are multiple things of interest:

  • Only 4 input channels and 4 output channels are routed because the carrier board only had that wired.
  • There are two DAI links because the pcm3168 driver exposes inputs and outputs separately
  • As per the PCM3168 datasheet:
    • left justified mode is used
    • dai-tdm-slot-num is set to 8 even though only 4 are actually used
    • dai-tdm-slot-width is set to 32 because the codec takes 24-bit samples but requires 32 clocks per sample (this is solved later in userspace)
    • The codec is master, which is usually best regarding clock accuracy, especially since the various SoMs on the market almost never expose the audio clock on the carrier board interface. Here, a crystal was used to clock the PCM3168.

The PCM3168 codec is added under the ecspi3 node as that is where it is connected:

&ecspi3 {
        pcm3168: codec@0 {
                compatible = "ti,pcm3168a";
                reg = <0>;
                spi-max-frequency = <1000000>;
                clocks = <&codec_osc>;
                clock-names = "scki";
                #sound-dai-cells = <1>;
                VDD1-supply = <&reg_module_3v3>;
                VDD2-supply = <&reg_module_3v3>;
                VCCAD1-supply = <&reg_board_5v0>;
                VCCAD2-supply = <&reg_board_5v0>;
                VCCDA1-supply = <&reg_board_5v0>;
                VCCDA2-supply = <&reg_board_5v0>;
        };
};

#sound-dai-cells is what allows selecting between the input and output interfaces.

On top of that, multiple issues had to be fixed:

Finally, an ALSA configuration file (/usr/share/alsa/cards/imx7-pcm3168.conf) was written to ensure that samples sent to the card are in the proper format, S32_LE. 24-bit samples will simply have zeroes in the least significant byte. For 32-bit samples, the codec will properly ignore the least significant byte. This file also describes that the first subdevice is the playback (output) device and the second subdevice is the capture (input) device.

imx7-pcm3168.pcm.default {
	@args [ CARD ]
	@args.CARD {
		type string
	}
	type asym
	playback.pcm {
		type plug
		slave {
			pcm {
				type hw
				card $CARD
				device 0
			}
			format S32_LE
			rate 48000
			channels 4
		}
	}
	capture.pcm {
		type plug
		slave {
			pcm {
				type hw
				card $CARD
				device 1
			}
			format S32_LE
			rate 48000
			channels 4
		}
	}
}

On top of that, the dmix and dsnoop ALSA plugins can be used to separate channels.
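
With this configuration in place, standard ALSA utilities can exercise both directions; a hedged example, assuming the card is registered as card 0 (check with aplay -l):

    # Playback on subdevice 0, capture on subdevice 1, using the raw hw
    # device and the parameters from the configuration above
    aplay   -D hw:0,0 -f S32_LE -r 48000 -c 4 test.wav
    arecord -D hw:0,1 -f S32_LE -r 48000 -c 4 -d 10 capture.wav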

To conclude, this shows that it is possible to easily leverage existing code to integrate an audio codec into a design, by simply writing a device tree snippet and maybe an ALSA configuration file if necessary.


Feedback from the Netdev 2.1 conference

At Free Electrons, we regularly work on networking topics as part of our Linux kernel contributions, and thus we decided to attend our very first Netdev conference this year in Montreal. With the recent evolution of the network subsystem and its drivers’ capabilities, the conference was a very good opportunity to stay up-to-date, thanks to lots of interesting sessions.

Eric Dumazet presenting “Busypolling next generation”

The speakers and the Netdev committee did an impressive job by offering such a great schedule and the recorded talks are already available on the Netdev Youtube channel. We particularly liked a few of those talks.

Distributed Switch Architecture – slides – video

Andrew Lunn, Vivien Didelot and Florian Fainelli presented DSA, the Distributed Switch Architecture, giving an overview of what DSA is and then presenting its design. They completed their talk by discussing the future of this subsystem.

DSA in one slide

The goal of the DSA subsystem is to support Ethernet switches connected to the CPU through an Ethernet controller. The distributed part comes from the possibility of having multiple switches connected together through dedicated ports. DSA was introduced nearly 10 years ago but was mostly quiet, and only recently came back to life thanks to contributions made by the authors of this talk, its maintainers.

The main idea of DSA is to reuse the available internal representations and tools to describe and configure the switches. Ports are represented as Linux network interfaces to allow the userspace to configure them using common tools, the Linux bridging concept is used for interface bridging and the Linux bonding concept for port trunks. A switch handled by DSA is not seen as a special device with its own control interface but rather as a hardware accelerator for specific networking capabilities.

DSA has its own data plane, where the switch ports are slave interfaces and the Ethernet controller connected to the SoC is the master one. Tagging protocols are used to direct the frames to a specific port when coming from the SoC, as well as when received by the switch. For example, the RX path has an extra check after netif_receive_skb() so that if DSA is used, the frame can be tagged and reinjected into the network stack RX flow.

Finally, they talked about the relationship between DSA and Switchdev, and cross-chip configuration for interconnected switches. They also exposed the upcoming changes in DSA as well as long term goals.

Memory bottlenecks – slides

As part of the network performance workshop, Jesper Dangaard Brouer presented memory bottlenecks in the allocators caused by specific network workloads, and how to deal with them. The SLAB/SLUB baseline performance is found to be too slow, particularly when using XDP. One way for a driver to solve this issue is to implement a custom page recycling mechanism, and that’s what all high-speed drivers do. He then showed some data explaining why this mechanism is needed when targeting the 10G network budget.

Jesper is working on a generic solution called page pool and sent a first RFC at the end of 2016. As mentioned in the cover letter, it’s still not ready for inclusion and was only sent for early reviews. He also gave a small overview of his implementation.

DDOS countermeasures with XDP – slides #1 – slides #2 – video #1 – video #2

These two talks were given by Gilberto Bertin from Cloudflare and Martin Lau from Facebook. While they were not talking about device driver implementation or improvements in the network stack directly related to what we do at Free Electrons, it was nice to see how XDP is used in production.

XDP, the eXpress Data Path, provides a programmable data path at the lowest point of the network stack, by processing RX packets directly out of the drivers’ RX ring queues. It’s quite new and is an answer to the many userspace-based solutions such as DPDK. Gilberto and Martin showed excellent results, confirming the usefulness of XDP.

From a driver point of view, some changes are required to support it: RX hooks must be added, some API changes are needed, and the driver’s memory model often needs to be updated. So far, as of v4.10, only a few drivers support XDP.

XDP MythBusters – slides – video

David S. Miller, the maintainer of the Linux networking stack and drivers, did an interesting keynote about XDP and eBPF. The eXpress Data Path clearly was the hot topic of this Netdev 2.1 conference with lots of talks related to the concept and David did a good overview of what XDP is, its purposes, advantages and limitations. He also quickly covered eBPF, the extended Berkeley Packet Filters, which is used in XDP to filter packets.

This presentation was a comprehensive introduction to the concepts introduced by XDP and its different use cases.


Netdev 2.1 was an excellent experience for us. The conference was well organized, the single-track format allowed us to see every session on the schedule, and meeting attendees and speakers was easy. The content was highly technical and an excellent opportunity to stay up-to-date with the latest changes in the kernel’s networking subsystem. The conference hosted talks both about in-kernel topics and about their use in userspace, which we think is a very good approach: not focusing only on the kernel side, but also being aware of the users’ needs and their use cases.
