
Greetings, Habr users. In this article we will talk about the internal structure of the LLVM compiler. You can read about LLVM in general here or at
llvm.org. As is known, LLVM (roughly speaking) consists of three parts: the bytecode, the compilation strategy, and the environment, aka the LLVM infrastructure. I will cover the latter.
Contents:
- Building LLVM
- Integration with Eclipse
- Environment Architecture
- LLVM API
- Optimizing Hello, World!
The article covers LLVM 2.7+, running under GNU/Linux (Ubuntu 10.04). The official documentation was used. The material is based on a year and a half of experience studying LLVM at ISP RAS.
Building LLVM
LLVM sources can be obtained from the official project page, either as an archive or from the repository. Building LLVM is fairly simple, but it has its peculiarities: first LLVM itself is built, then the front-end (in our case, llvm-gcc), without which it will be impossible to compile C/C++ programs. It is assumed that all the necessary libraries are present in the system; otherwise configure will complain at you anyway. So, starting in the root of the source tree, we build LLVM:
mkdir build && cd build
# Configuration
../configure --enable-jit --enable-optimized --enable-shared --prefix=/prefix --enable-targets=host-only
# --enable-shared builds the LLVM core as one large shared library
# instead of /prefix, specify the desired installation path
# --enable-targets=host-only means supporting only the host architecture
# for extra speed you can add --disable-assertions; assertions are everywhere in the code, incl. in loops
# Build
make -j2 CXXFLAGS=-O3
# -jN is rather symbolic: most of the time everything will still be built in one thread,
# nothing can be done about it
# CXXFLAGS=-O3 enables additional GCC optimizations.
# You can also add -flto to enable link-time optimizations with GCC 4.5+
# Installation
make install
Now we collect llvm-gcc:
# Get the code
svn co http://llvm.org/svn/llvm-project/llvm-gcc-4.2/trunk llvm-gcc
cd llvm-gcc && mkdir build && cd build
# Configuration
../configure --enable-llvm=/path/to/llvm/build --prefix=/prefix/of/llvm --program-prefix=llvm- --enable-languages=c,c++
# /path/to/llvm/build is the path to the same build folder created in the previous step.
# /prefix/of/llvm - for convenience the prefix matches the LLVM prefix
# --program-prefix=llvm- is needed to get the usual binary names like llvm-gcc or llvm-g++
# Build
make -j2 CFLAGS=-O3
# Installation
make install
If everything went well, then, for example, this check will pass:
/prefix/of/llvm/llvm-gcc -v
...
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build)
Integration with Eclipse
For convenience, I will describe how to build and study the LLVM source code in Eclipse (with C/C++ support, of course). Create a new C++ project
/File - New - C++ Project/, set the name "llvm"
/Project name/, uncheck the default location
/Use default location/ and set the path to the LLVM source tree, set the project type to Makefile
/Project type: Makefile project - Empty Project/. Then click "Finish"
/Finish/. While Eclipse parses the files and builds the navigation index, go to the project properties
/Project - Properties/. In the builder settings
/Builder Settings/, uncheck the default build command
/Use default build command/ and enter "make -j2 CXXFLAGS=-O3". In the build directory
/Build directory/, append "/build" so that it becomes "${workspace_loc:/llvm}/build". Go to the behavior tab /Behavior/ and, to the right of the build target
/Build (Incremental build)/, erase "all". The last step is optional and serves mainly to match the manual build in the console. Done! You can now freely jump between classes, inspect #defines and press the build button
/Build/. To avoid possible problems (it varies) you can remove the tests from the tree.
Environment Architecture
LLVM consists of several executables that use libLLVM2.x.so (the core).
opt is the bytecode tool: it can apply any available machine-independent optimizations in any order, perform various kinds of analysis and add profiling. There are optimization levels O0, O1, O2 and O3.
llc is the code generator from bytecode to the assembly of the target machine. It performs machine-specific optimizations. It also has optimization levels O0, O1, O2 and O3.
llvm-ld is the bytecode linker, i.e. a tool that combines several bytecode files into one large BC file. At the same time, the link-time optimizations the creators are so proud of are applied.
lli is a bytecode interpreter with JIT compilation capability.
llvm-dis is a converter from bytecode (bc) to the equivalent textual representation (ll).
llvm-as is a converter from the textual representation (ll) to bytecode (bc).
llvm-ar packs multiple files into one, a kind of analogue of ar for bytecode.
llvmc is the LLVM driver, which calls in turn the language parser (i.e. the front-end: llvm-gcc or clang), opt, llc (with the specified back-end for the target machine), and the assembler and linker of the target machine. The general scheme is shown in the figure.

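To make the division of labor between these tools concrete, here is a sketch of driving the pipeline by hand instead of through llvmc. The file names are illustrative, and the flags are those of the LLVM 2.7-era tools described above:

```shell
# front-end: C source to LLVM bytecode (hello.c is a hypothetical input)
llvm-gcc -emit-llvm -c hello.c -o hello.bc
# machine-independent optimizations over the bytecode
opt -O2 hello.bc -o hello.opt.bc
# peek at the result in textual form (.ll)
llvm-dis hello.opt.bc -o hello.opt.ll
# code generation: bytecode to native assembly
llc -O2 hello.opt.bc -o hello.s
# assemble and link with the system toolchain
gcc hello.s -o hello
# alternatively, run the bytecode directly under the interpreter/JIT
lli hello.opt.bc
```

Each step consumes and produces ordinary files, which is what makes it easy to insert your own passes or inspection points between stages.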
It is worth noting that all well-known architectures (including x86, x86-64, ARM) are supported as back-ends, and even such exotic ones as the C language. The latter means that you can convert bytecode straight back into C source code... but you are unlikely to like the result: no variable names (or rather, different names assigned in the bytecode), goto-based loops, a changed order of operations, and much more besides. How else could it be; miracles do not happen. By the way, bytecode could potentially also be turned into Java or .NET virtual machine code.
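For instance, round-tripping bytecode back to C looks like this (file names are illustrative; this assumes the CBackend target was built in, which a --enable-targets=host-only build from the previous section would not include):

```shell
# compile to bytecode, then regenerate C source from it
llvm-gcc -emit-llvm -c hello.c -o hello.bc
llc -march=c hello.bc -o hello.cbe.c
```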
The implementation of the "hardware" back-ends is interesting. At an early stage of the LLVM build, a utility called
tblgen appears. It converts descriptions of instruction sets, written in a special descriptive language, into C++ code, which is then #included into the classes that implement code generation. Thus it is very easy to change or add something in an instruction set, and support is unified. Unlike GCC, you do not have to strain much for cross-compilation: when calling llc, just specify -march=armv7, where any supported architecture can stand in place of armv7. I cannot fail to mention here the fundamental difference between LLVM and, say, Java: LLVM bytecode generally depends both on the platform on which it was produced and on the target machine for which it is compiled. The first is explained by linking against OS-dependent libraries, the second by assembler inserts and code ported (via conditional compilation) to various architectures.
The meaning of some source folders:
- include - header files
- tools - implementation of the tools listed above
- runtime - implementation of profile collection
- utils - implementation of LLVM helper utilities, incl. tblgen
- examples - examples of using LLVM
- lib - the main part
- lib/Analysis - analysis passes (on passes, see below)
- lib/CodeGen - implementation of code generation, machine-dependent passes
- lib/Transforms - passes executed over bytecode
In principle, the meanings are obvious from the names, and this obviousness is not deceptive.
LLVM API
The LLVM core is a very powerful system. All routine things are hidden from ordinary developers, letting them focus on improving the quality of the compiler. Detailed documentation and plenty of materials allow you to get up to speed quickly. The names of classes and methods, the inheritance scheme - everything is thought out to the smallest detail. I will talk a little about the API related to optimizations. To avoid getting lost, it is advisable to know what terms such as basic block and control flow graph (aka CFG) mean. Information from Wikipedia is enough: see
basic block and
control flow graph.
It is customary to talk about passes, not about optimizations. A pass is the implementation of a particular optimization or a part of it. LLVM has a pass manager (PassManager), which works on the FIFO principle. First, descriptions of passes are added, then the manager executes them one by one. Moreover, if a pass needs some CFG analysis data, the manager will first run that analysis if its data is out of date. Passes operate within a function (inheriting from FunctionPass) or a basic block (BasicBlockPass). In the first case the algorithm's input is a function, in the second a basic block. A function can be iterated in the standard way over its basic blocks, and a basic block, correspondingly, over its instructions. You can rearrange everything as you like and add new elements.
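As a sketch of the iteration idiom just described, here is a hypothetical helper that counts a function's instructions by walking its basic blocks (LLVM 2.7-era headers and iterator API; newer releases moved these headers under llvm/IR/):

```cpp
#include "llvm/Function.h"
#include "llvm/BasicBlock.h"

using namespace llvm;

// Count the instructions of a function via the standard double iteration:
// function -> basic blocks -> instructions
static unsigned countInstructions(Function &F) {
  unsigned N = 0;
  for (Function::iterator BB = F.begin(), FE = F.end(); BB != FE; ++BB)
    for (BasicBlock::iterator I = BB->begin(), BE = BB->end(); I != BE; ++I)
      ++N;  // every IR instruction of the function passes through here
  return N;
}
```

The same nested-loop shape works for rewriting as well: the iterators remain valid under careful insertion, so transformation passes are written in exactly this style.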
opt supports plug-ins - passes as shared libraries, so to speak. During the call the library is loaded and taken into account. Thanks to this, to extend the compiler's capabilities you do not have to fork the code and then edit it; the necessary header files are enough. As is known, this appeared in GCC only quite recently.
As they say, it is better to see once than to hear a hundred times. Let's move on to practice.
Optimizing Hello, World!
Let us apply this knowledge to write our own genuine machine-independent optimization. All it will do is collect statistics on the average number of instructions per basic block, i.e. not actually optimize anything. We will build it as an opt plug-in so as not to edit the LLVM code. Create a plugins folder in the root of the LLVM source tree, and in it a BasicBlockStats.cpp file with the following contents:
#define DEBUG_TYPE "basic-block-stats"
#include "llvm/Pass.h"
#include "llvm/Function.h"
#include "llvm/Instructions.h"
#include "llvm/BasicBlock.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Support/CFG.h"

using namespace llvm;

// Declare the statistics being collected.
// Each of them can only be an unsigned integer.
// Total number of basic blocks
STATISTIC(NumberOfBasicBlocks, "Number of basic blocks in the module");
// Total number of instructions
STATISTIC(NumberOfInstructions, "Number of instructions in the module");
// Average number of instructions per basic block
STATISTIC(AverageInstructionsInBasicBlock, "Average number of instructions in a basic block");

namespace
{
  // The structure (class) of the pass
  // In the standard terminology optimizations are called passes, because
  // the action on the module is performed in one full pass over its contents
  struct BasicBlockStats : public FunctionPass
  {
    // Pass identification number
    static char ID;
    BasicBlockStats() : FunctionPass(ID) {}
    // Override the method that tells the pass manager what this pass uses and changes
    virtual void getAnalysisUsage(AnalysisUsage &AU) const
    {
      // We use nothing and change nothing
      AU.setPreservesAll();
    }
    // Override the method that carries the meaning of the pass - operations on a function of the module
    virtual bool runOnFunction(Function &F);
    ~BasicBlockStats()
    {
      // Compute the desired value when the destructor is called
      AverageInstructionsInBasicBlock = NumberOfInstructions / NumberOfBasicBlocks;
    }
  };

  // We do not need a special identifier value
  char BasicBlockStats::ID = 0;
  // Register the pass with the pass manager
  RegisterPass<BasicBlockStats> X("basic-block-stats", "Basic Block Statistics", true);
} // end of unnamed namespace

// Implement the main logic
bool BasicBlockStats::runOnFunction(Function &F)
{
  // Walk over each basic block of the function
  for (Function::iterator I = F.begin(), E = F.end(); I != E; ++I)
  {
    NumberOfInstructions += I->size();
  }
  // Add the number of basic blocks
  NumberOfBasicBlocks += F.size();
  return false;
}
I tried to comment the code as thoroughly as possible so that it is clear. Now let's build our plug-in:
g++ -c -fPIC BasicBlockStats.cpp -o BasicBlockStats.o -I../include -I../build/include -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS
gcc -shared -s BasicBlockStats.o -o BasicBlockStats.so
Time to check whether it works. I suggest testing it in combat conditions on the SQLite code.
Download sqlite-amalgamation-A_B_C_D.zip and unpack it somewhere. Further on, I assume that the paths to the installed LLVM and llvm-gcc executables, as well as to BasicBlockStats.so, are registered in $PATH.
# Compile the SQLite library into bytecode
llvm-gcc -emit-llvm -c sqlite3.c -o sqlite3.bc
# Run our optimization
# -stats prints all collected statistics
opt sqlite3.bc -load BasicBlockStats.so -basic-block-stats -stats -o sqlite3.bc
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===

8 basic-block-stats - Average number of instructions in a basic block
18871 basic-block-stats - Number of basic blocks in the module
153836 basic-block-stats - Number of instructions in the module
So, the average number of instructions per basic block in the SQLite code is 8.
Perhaps the ease of writing your own pass will inspire the reader to do research in this area, and on that note I will finish. I will be glad to answer your questions.