
Interview with Matthieu Herrb: Testing X.Org Server


This year, Xorg, the free implementation of the X Window System, turns 30 years old. Despite the existence and development of alternatives, Xorg remains alive and well.

On the occasion of the anniversary, I asked a few questions of a person who has been working on the development of this project for 23 (!) years: Matthieu Herrb. In addition to his participation in the X.Org project, he is also the originator of Xenocara, the OpenBSD project's own distribution of Xorg.


X.Org is a large and complex project. What does the development process look like?

The project is not as big as others, such as Firefox, Chrome, or even GNOME and KDE.

The development process differs slightly between the most actively developed components (the X server itself, some libraries, and drivers) and obsolete components, such as libXt and all the applications based on that library.

There is also constant interaction with two other groups: the developers of Mesa and of the DRM modules in the Linux kernel.

Ideas are discussed at developer meetings (held once a year, in Europe or North America; the next will take place this September in Bordeaux, France) or on the xorg-devel mailing list.

Over the past few years, we have adopted a development model very similar to that of the Linux kernel: patches (generated with git format-patch) are sent to the mailing list for review, they are discussed, and once agreement is reached the patch is committed by the maintainer.

There is one maintainer for the X server (currently Keith Packard). For the other components, getting commits in is simpler: as a rule, it is enough for the author of a patch to submit it once for the review to succeed.

And, to be complete: at present, the development of the popular drivers is almost entirely in the hands of companies (Intel, AMD, VMware), so engineers from these companies make most of the changes.

How many developers are involved in the development process?

If you count the Mesa developers, the developers of the graphics stack in the Linux kernel, and the X server developers, it is about 50-60 people who commit to one of the repositories on an ongoing basis.

What does the testing process look like? Do you test regularly (running tests for each commit) or irregularly?

We have several tools for continuous, automated testing, but they are not as effective as we would like.

What tools, tests, and test frameworks do you use? I found many tests (for example, XTS, rendercheck, glean, piglit, etc.) in the repositories (http://cgit.freedesktop.org/), but many of them look outdated. Do developers create tests on a regular basis for new features and bug fixes?

In addition to all these existing test suites, which are usually very cumbersome to use on a regular basis, Peter Hutterer has developed a relatively new integration test suite for the X server, which can be run automatically from the X server's build system (via 'make test') and on our Tinderbox server. The build.sh script used by many developers also runs these tests by default.

But given the huge range of supported systems (even though their number has steadily decreased since the switch from XFree86 to X.Org), only a small fraction of them receive actual regular testing.

Most tests are done by people who integrate X.Org into other systems and distributions.

This is my case, among others. I maintain X.Org on OpenBSD (and helped with NetBSD before that), so I test configurations that are not covered by the main X server developers and often find bugs that slip through the testing process, either because they are platform-specific (for example, OpenBSD is one of the few systems that still run on some exotic architectures, such as VAX, m88k, or even sparc32), or simply because our malloc() implementation is able to catch bugs that elude the other tools used on Linux.

What types of testing are used (performance testing, functional, compatibility testing, stability testing, unit testing, etc.)?

The new test framework for the X server mainly uses unit testing and functional testing to make sure that the X server components work as expected, regardless of the driver.

Speaking of tests, do you measure code coverage?

No. Since the same person most often writes both the code and the tests, he has some idea of the coverage of that code, but there is no formal tool for measuring it.

How often do you test: from time to time or on a regular basis?

The Tinderbox platform was supposed to run tests as often as possible, but most other tests are run manually from time to time.

How are new features tested?

New features, in X? You're kidding, right? But seriously, a number of new features have been added, mainly in the Mesa (OpenGL) code and the input drivers. Either tests for new features are added at the same time as the code itself, or, in the case of OpenGL, external conformance test suites are used.

Do you use continuous integration in the development process?

Yes, I have already mentioned Tinderbox several times, although it is far from perfect.

What tool do you use to work with defects? Who is responsible for working with bugs?

We have Bugzilla, supplemented by Patchwork, a patch-tracking system that keeps track of submitted patches so that none of them gets forgotten or left unhandled.

Sometimes security issues are found in X.Org (http://www.x.org/wiki/Development/Security/). Do you audit the code regularly?

Yes and no. :) As far as I know, X.Org does not have a dedicated person auditing the code on a regular basis. But some distributions (for example, Oracle/Solaris, in the person of Alan Coopersmith) regularly run security-oriented tools and contribute fixes to the project. Sometimes, when a new class of vulnerability appears (such as format strings or integer overflows about 10 years ago), we do a huge sweep of the existing code to try to fix every instance of it.
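To make the format-string class concrete, here is a minimal, hypothetical sketch (not actual X.Org code; the function names are invented for illustration): the unsafe pattern passes externally supplied text as the format argument, while the fix passes it purely as data.

```c
#include <stdio.h>

/* Hypothetical illustration of the format-string bug class;
 * this is not actual X.Org code. */

/* Unsafe: if msg comes from a client and contains "%s", "%d", or
 * "%n", snprintf interprets those as format directives. */
void log_unsafe(char *out, size_t n, const char *msg) {
    snprintf(out, n, msg);          /* format-string vulnerability */
}

/* Fixed: msg is treated purely as data; the format string is a
 * constant, so "%" sequences in msg are copied literally. */
void log_fixed(char *out, size_t n, const char *msg) {
    snprintf(out, n, "%s", msg);
}
```

The sweeps mentioned above consisted largely of converting the first pattern into the second wherever the message could be influenced from outside.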

We also get outside help from independent security researchers who look for interesting vulnerabilities, and since the X server still runs as root on many systems, this is still worthwhile.

Last year Ilja van Sprundel reported a very large number of vulnerabilities in the X client libraries and in the X server itself, mainly due to the lack of proper validation of messages in the X protocol.
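A typical instance of the missing validation is sizing a buffer from length fields in a client request. This is a hypothetical sketch (the function is invented for illustration, not taken from X.Org): without the overflow check, the multiplication can wrap around and yield a buffer far smaller than the data later copied into it.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch, not actual X.Org code: allocate a buffer for
 * 'count' items of 'item_size' bytes, where 'count' comes from an
 * untrusted client request. Without the check, count * item_size can
 * wrap around, producing an undersized buffer and a later overflow. */
void *alloc_request_buffer(size_t count, size_t item_size)
{
    if (item_size != 0 && count > SIZE_MAX / item_size)
        return NULL;        /* multiplication would overflow: reject */
    return malloc(count * item_size);
}
```

The caller must then treat a NULL return as a malformed request rather than as an out-of-memory condition to handle silently.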

Do you use static code analysis?

The answer is similar to the previous one. Tinderbox does not run any static analyzers other than gcc with the -Wall option and a few additional flags. But some developers (including Alan from Oracle) have access to powerful static analyzers and run them from time to time.

Coverity runs a program offering static analysis to free software projects. X.Org takes part in this program, and it has helped us find a number of problems.

X.Org supports a range of operating systems of varying popularity: Linux, FreeBSD, NetBSD, OpenBSD, Solaris, Microsoft Windows. How do you ensure stability across all of these?


As I explained above, this is handled by volunteers (or, in some cases, paid workers) from the various projects. Most developers focus on Linux, which has become the main development platform over the past 10 years. For my own part, I would add that I am a little sorry the developers do not get more involved in supporting the other systems. In my experience, developing for more than one platform teaches you a great deal, and from a code-security point of view diversity matters (even if it increases development costs).

Who is responsible for the release of new versions? What are the criteria for release?

There is a maintainer for the X server who is responsible for making releases. We currently work on a 6-month development cycle, producing a new release every 6 months. The previous release gets a -stable maintainer and is maintained for roughly another 12 months.

In addition to the X server releases, we still produce "katamari" releases: a full, consistent set of libraries and utilities to go with the X server, issued once a year or more. (The current katamari release is 7.7, based on X server 1.14.) But the need for katamari releases is often questioned, since distribution vendors, as a rule, maintain their own equivalents (with varying numbers of patches on top of upstream), independently of the official X.Org ones.

The times when the XFree86 project provided binary builds for most supported systems (from SVR4 to Linux, including NetBSD, OS/2, and a few others) are definitely over.

Tell us about the most interesting bug in your practice. :)

Working with code that was designed and implemented at a time when code security did not matter much is not without interest. The X server was originally a very permissive system (remember "xhost +"?). People did not worry about buffer overflows or other ways of exploiting coding errors. Features like the MIT-SHM extension were broken from the start. (SHM has since been fixed with a new API based on file-descriptor passing.)

But the most interesting problem, from my point of view, is described in Loïc Duflot's paper at CanSecWest 2006, where he showed that even with the privilege separation I added in OpenBSD, it remains possible to "simply" inject code and take control of the OS kernel, because the X server has direct access to the hardware.

This is something that had always been known (I even talked about it in my talk at RMLL in 2003), but the lack of a proof of concept (PoC) allowed many developers to ignore the problem.

Thank you for the answers, and I wish you fewer bugs in your code!

Thank you.

In conclusion, I want to add that yes, X.Org is far from ideal when it comes to testing. We are trying to do better, but this is not the most attractive area for contributors, so things move slowly: most developers prefer to work on more appealing things.

Source: https://habr.com/ru/post/234291/
