
Confessions of a metaprogrammer. We program code at compile time and use C++ templates for problems that have nothing to do with generic code.



Templates can be called the most important distinguishing feature and the main advantage of the C++ language. The ability to write an algorithm once for many types, without copy-pasting code and with strict type checking, is only one aspect of using templates. The code of a template specialization is generated at compile time, which means the behavior of the generated types and functions can itself be controlled. How can one resist the opportunity to program the classes being compiled?

Metaprogramming is becoming as integral a part of writing C++ code as the standard library itself, part of which is designed specifically for use at compile time. Today we will use template metaprogramming to build a library for safe casting of C++ scalar types!

Breaking the template


In fact, metaprogramming comes down not so much to uniform behavior regardless of type as to deliberately breaking that very uniformity. Suppose we have a template class and a template function:
    template <class T> class Some;
    template <class T> T func(T const& value);

As a rule, such classes and functions are defined right away with a body common to all types. But nothing prevents us from providing an explicit template specialization for one of the types, giving that type unique function behavior or a special flavor of the class:

    template <>
    class Some<int>
    {
    public:
        explicit Some(int value)
            : m_twice(value * 2)
        {
        }

        int get_value() const { return m_twice / 2; }

    private:
        int m_twice;
    };

    template <>
    double func(double const& value)
    {
        return std::sqrt(value);
    }

Meanwhile, the general implementation can be written quite differently from the behavior specified in the specializations:

    template <class T>
    class Some
    {
    public:
        explicit Some(T const& value)
            : m_value(value)
        {
        }

        T const& get_value() const { return m_value; }

    private:
        T m_value;
    };

    template <class T>
    T func(T const& value)
    {
        return value * value;
    }

Now, when the template is used, the `Some<int>` and `func<double>` specializations exhibit special behavior: it differs greatly from the general behavior of the template, even though the external API barely changes. When instantiated, `Some<int>` stores the doubled value and returns the original one by halving the `m_twice` field in `get_value()`. The generic `Some<T>`, where T is any type other than int, simply stores the passed value and hands out a constant reference to the `m_value` field on every `get_value()` call.

The `func<double>` specialization computes the square root of its argument, while any other instantiation of the `func` template computes the square of the passed value.

Why is this needed? As a rule, to create a logical fork inside a template algorithm, for example:

    template <class T>
    T create()
    {
        Some<T> some(T());
        return func(some.get_value());
    }

The algorithm inside `create` behaves differently for the types int and double: different building blocks of the algorithm are swapped out for each type. Despite the deliberately illogical specialization code, we have a simple and clear example of controlling a template.
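
To make the difference tangible, here is a minimal run-time sketch of the pieces involved (it assumes the `Some` and `func` definitions above; the numeric values are illustrative assumptions, not from the article):

    #include <cmath>
    #include <iostream>

    // ... Some and func defined exactly as above ...

    int main()
    {
        Some<int> si(21);       // Some<int> specialization: stores 42 internally
        Some<double> sd(3.0);   // generic Some<T>: stores 3.0 as-is

        std::cout << si.get_value() << "\n";        // 21 (m_twice / 2)
        std::cout << func(sd.get_value()) << "\n";  // func<double>: sqrt(3.0), about 1.732
        std::cout << func(5) << "\n";               // generic func: 5 * 5 = 25
        return 0;
    }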

Breaking a non-existent template


Let's make our example a bit more fun: remove the general implementations of `Some` and `func`, keeping only the already written specializations `Some<int>` and `func<double>` and, of course, leaving the forward declarations untouched.

What happens to the `create` template in this case? It simply stops compiling for any type. After all, `create<int>` has no implementation of `func<int>`, and `create<double>` has no `Some<double>`. The very first attempt to call `create` for any type will end in a compilation error.

To get the `create` function working again, we need to specialize `Some` and `func` for at least one type each. For example, we can implement `Some<double>` and `func<int>` like so:

    template <>
    int func(int const& value)
    {
        return value;
    }

    template <>
    class Some<double>
    {
    public:
        explicit Some(double value)
            : m_square(value * value)
        {
        }

        double get_value() const { return m_square; }

    private:
        double m_square;
    };

By adding these two specializations we not only brought `create<int>` and `create<double>` back to life; it also turns out that the algorithm returns the same values for both types. But the behavior behind those values is different!

INFO


In C++, different types behave differently, and a template algorithm is not always efficient for every type. Often, by adding a template specialization, we get not only a performance gain but also clearer behavior of the program as a whole.

So help us, std::


Every year more and more metaprogramming tools are added to the standard library. As a rule, everything new is well-tested old code, borrowed from the Boost.MPL library and made official. We need `#include <type_traits>` more and more often, code increasingly relies on compile-time forks like `std::enable_if`, and more and more often we need to know at compile time whether a template argument is an integer type (`std::is_integral`) or, say, compare two types inside a template with `std::is_same` in order to control the behavior of template specializations.

These helper structures are built so that only the specialization corresponding to a true expression compiles, while no specialization exists for the false case at all.

To make this clearer, let's take a closer look at `std::enable_if`. This template depends on the truth of its first argument (the second one is optional), and an expression like `std::enable_if<expr>::type` compiles only for true expressions. This is achieved quite simply, by specializing on the value true:

    template <bool predicate_value, class result_type = void>
    struct enable_if;

    template <class result_type>
    struct enable_if<true, result_type>
    {
        typedef result_type type;
    };

When the predicate is false, the compiler simply cannot form `std::enable_if<P, T>::type`, and this can be exploited, for example, to restrict which partial specializations of a template structure or class are viable.
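
As a minimal illustration (the function name is made up for this example), `std::enable_if` can also gate an ordinary function template so that it exists only for integral arguments:

    #include <type_traits>

    // only_for_integers() compiles only when T is an integral type;
    // for any other T, enable_if<...>::type does not exist and this
    // candidate is removed from overload resolution (SFINAE).
    template <class T>
    typename std::enable_if<std::is_integral<T>::value, T>::type
    only_for_integers(T value)
    {
        return value + 1;
    }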

And here the whole zoo of predicate structures from the same `<type_traits>` comes to our aid: `std::is_signed<T>::value` is true if the type T carries a sign (very handy for cutting off unsigned-integer behavior), `std::is_floating_point<T>::value` is true for the floating-point types float and double, and `std::is_same<T1, T2>::value` is true if the types T1 and T2 are identical. There are many predicate structures ready to help us, and if something is missing from `std::` or `boost::`, you can easily write your own.
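
For example, a hand-rolled predicate for "integral and wider than int" might look like this (a sketch; the name `is_wide_integer` is invented for illustration):

    #include <type_traits>

    // Custom compile-time predicate: true for integral types wider than int.
    template <class T>
    struct is_wide_integer
    {
        static const bool value =
            std::is_integral<T>::value && (sizeof(T) > sizeof(int));
    };

    // On typical platforms where long long is 8 bytes and int is 4:
    static_assert(is_wide_integer<long long>::value, "long long is a wide integer");
    static_assert(!is_wide_integer<short>::value, "short is not");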

Well, the introduction is over; let's move on to practice.

How are predicates arranged?


A predicate is an ordinary template structure with a partial specialization. For `std::is_same`, for example, it looks roughly like this:

    template <class T1, class T2>
    struct is_same;

    template <class T>
    struct is_same<T, T>
    {
        static const bool value = true;
    };

    template <class T1, class T2>
    struct is_same
    {
        static const bool value = false;
    };

For matching argument types, the C++ compiler picks the more specific partial specialization of `std::is_same`, the one with value = true, and for mismatched types it falls back into the general template implementation with value = false. The compiler always tries to find the specialization that matches the argument types most strictly, and only when it finds none does it go to the general implementation of the template.
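
This is easy to verify at compile time (using the standard `std::is_same`; the sketch above behaves the same way):

    #include <type_traits>

    static_assert(std::is_same<int, int>::value,
                  "matching types hit the partial specialization");
    static_assert(!std::is_same<int, long>::value,
                  "mismatched types fall into the general template");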

Entry by template is strictly prohibited


To start programming the program code and indulge in all kinds of metaprogramming, let's try to create a frightening function that returns different results for matching and non-matching template argument types. The mechanism of partial specialization of a helper structure will assist us here: since functions cannot be partially specialized, inside the function we will simply call into the corresponding specialization of a structure, for which partial specializations can be defined:

    template <class result_type, class value_type>
    struct type_cast;

    template <class result_type, class value_type>
    bool try_safe_cast(result_type& result, value_type const& value)
    {
        return type_cast<result_type, value_type>::try_cast(result, value);
    }

    template <class same_type>
    struct type_cast<same_type, same_type>
    {
        static bool try_cast(same_type& result, same_type const& value)
        {
            result = value;
            return true;
        }
    };

Obviously, we have created the skeleton of a safe cast function. The function takes the types of its arguments and forwards the call to the static `try_cast` method of the corresponding specialization of the `type_cast` structure. So far we have implemented only the trivial case, where the type of the value matches the type of the result and no conversion is actually needed. The result variable is simply assigned the incoming value, and true, the sign of a successful conversion, is always returned.
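
A quick usage sketch of this trivial case (the variable names are made up; it assumes the declarations above):

    void trivial_cast_example()
    {
        int source = 42;
        int destination = 0;

        // Same argument types: the type_cast<same_type, same_type>
        // specialization is selected and the value is simply copied.
        bool ok = try_safe_cast(destination, source); // ok == true, destination == 42
        (void)ok;
    }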

For mismatched types we currently get a compilation error with a long, cryptic message. To soften this a little, we can add a general implementation of the template with a `static_assert` in the body of the `try_cast` method, which makes the error message human-readable:

    template <class result_type, class value_type>
    struct type_cast
    {
        static bool try_cast(result_type&, value_type const&)
        {
            // The condition must depend on the template parameters so that the
            // assert fires only when this generic template is instantiated.
            static_assert(sizeof(result_type) == 0,
                          "no safe cast is defined for this pair of types");
            return false;
        }
    };

Thus, every time someone tries to cast a type with `try_safe_cast` for which there is no corresponding specialization of the `type_cast` structure, a readable compilation error will be produced from the general template.

The groundwork is done, it's time to start metaprogramming!

Metaprogram me this!


First we need to adjust the declaration of the helper structure `type_cast`. We will need an additional `meta_type` parameter for a compile-time logical fork that does not affect the passed arguments or the implicit deduction of their types. The structure template declaration now looks a bit more complicated:

 template <class result_type, class value_type, class meta_type = void> struct type_cast; 

As you can see, the new template parameter is optional and does not interfere with the already existing specializations or the general implementation of the template. However, this little nuance allows us to control whether a specialization compiles at all, by passing `std::enable_if<predicate>::type` as the third argument. Specializations whose third template argument fails to compile are discarded, which is exactly what we need to manage the casting logic for different groups of types.

Indeed, it is clear that integers are cast to one another differently depending on whether both types are signed, which type is wider, and whether the passed value exceeds the range of values allowed for `result_type`.
So, if both types are signed integers and the result type is wider than the type of the input value, you can safely assign the input value to the result; the same holds for unsigned types. Let's describe this behavior with a dedicated partial specialization of the `type_cast` template:

    template <class result_type, class value_type>
    struct type_cast<result_type, value_type, typename std::enable_if<...>::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            result = value;
            return true;
        }
    };

Now we need to figure out what condition to put in place of the ellipsis in the `std::enable_if` parameter.

Let's describe the compile-time condition piece by piece:

 typename std::enable_if< 

First, this specialization must not overlap with the already existing one, where the result type and the input value type coincide:

 !std::is_same<result_type, value_type>::value && 

Secondly, we only consider the case where both template arguments are integer types:

 std::is_integral<result_type>::value && std::is_integral<value_type>::value && 

Thirdly, both types must be either signed or unsigned at the same time (the parentheses are required: expressions inside a template argument list are parsed differently than ordinary run-time expressions!):

 (std::is_signed<result_type>::value == std::is_signed<value_type>::value) && 

Fourthly, the integer result type must be wider than the type of the passed value (parentheses are required again, otherwise the `>` would close the template argument list!):

 (sizeof(result_type) > sizeof(value_type)) 

Finally, we close the `std::enable_if` declaration:

 ::type 

As a result, the type inside `std::enable_if` is generated only when all four conditions hold. In all other cases, for any other combination of types, this partial specialization is simply never created.

We end up with a fierce expression inside `std::enable_if` that carves out exactly the case we specified. This single template saves us from duplicating the casting code for every pair of widening integral types.
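
Put together, the widening same-signedness specialization looks roughly like this (a sketch assembled from the fragments above, not copied verbatim from the article):

    #include <type_traits>

    // Widening cast between integers of the same signedness: always safe.
    template <class result_type, class value_type>
    struct type_cast<result_type, value_type,
        typename std::enable_if<
            !std::is_same<result_type, value_type>::value &&
            std::is_integral<result_type>::value &&
            std::is_integral<value_type>::value &&
            (std::is_signed<result_type>::value == std::is_signed<value_type>::value) &&
            (sizeof(result_type) > sizeof(value_type))
        >::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            result = value;
            return true;
        }
    };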

To reinforce the material, let's describe a slightly more complicated case: casting an unsigned integer to an unsigned integer type of smaller width. Here, knowledge of the binary representation of integers and the standard class `std::numeric_limits` comes to the rescue:

    template <typename result_type, typename value_type>
    struct type_cast<result_type, value_type, typename std::enable_if<...>::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            if (value != (value & std::numeric_limits<result_type>::max()))
            {
                return false;
            }
            result = result_type(value);
            return true;
        }
    };

The condition in the if is quite simple: the maximum value of `result_type` is implicitly promoted to the wider type `value_type` and acts as a mask for `value`. If `value` uses any bits outside the range of `result_type`, the inequality holds and we return false.
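
With concrete types, say uint8_t and uint16_t (an illustrative assumption; it relies on the specialization above with its condition filled in as described below), the check works out like this:

    #include <cstdint>

    void narrowing_example()
    {
        uint8_t narrow = 0;

        // 300 = 0x012C; the mask is max(uint8_t) = 0xFF, and 300 & 0xFF = 44 != 300,
        // so the cast is rejected instead of silently truncating.
        bool rejected = try_safe_cast(narrow, uint16_t(300)); // returns false

        // 200 fits into uint8_t: 200 & 0xFF == 200, so the cast succeeds.
        bool accepted = try_safe_cast(narrow, uint16_t(200)); // returns true, narrow == 200

        (void)rejected;
        (void)accepted;
    }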

Now let's walk through the compile-time condition:

 typename std::enable_if< 

The first two conditions stay the same: both types are integers, but different from each other:

    !std::is_same<result_type, value_type>::value &&
    std::is_integral<result_type>::value &&
    std::is_integral<value_type>::value &&

Both types are unsigned integers:

 std::is_unsigned<result_type>::value && std::is_unsigned<value_type>::value && 

The result type is narrower than the input type (parentheses are required!):

 (sizeof(result_type) < sizeof(value_type)) 

All the conditions are listed, so we close the specialization's condition:

 ::type 
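
Assembled from these fragments, the unsigned narrowing specialization looks roughly like this (again a sketch, not verbatim from the article):

    #include <limits>
    #include <type_traits>

    // Narrowing cast between unsigned integers: succeeds only when the value
    // actually fits into the result type.
    template <class result_type, class value_type>
    struct type_cast<result_type, value_type,
        typename std::enable_if<
            !std::is_same<result_type, value_type>::value &&
            std::is_integral<result_type>::value &&
            std::is_integral<value_type>::value &&
            std::is_unsigned<result_type>::value &&
            std::is_unsigned<value_type>::value &&
            (sizeof(result_type) < sizeof(value_type))
        >::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            if (value != (value & std::numeric_limits<result_type>::max()))
                return false;
            result = result_type(value);
            return true;
        }
    };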

For signed integers where the result type is narrower, the condition is similar, only with two `std::is_signed` checks inside `std::enable_if`, but the out-of-range check itself is somewhat different:

    static bool try_cast(result_type& result, value_type const& value)
    {
        if (value != (value & (std::numeric_limits<result_type>::max()
                             | std::numeric_limits<value_type>::min())))
        {
            return false;
        }
        result = result_type(value);
        return true;
    }

Again, recall the binary representation of signed integers: here the mask consists of the sign bit of the input type and the value bits of the result type, excluding its sign bit. Accordingly, the minimum value of `value_type`, in which only the sign bit is set, combined bitwise with the maximum value of `result_type`, in which all bits except the sign bit are set, gives us the desired mask of acceptable values.

As homework, work out the following cases (a sketch for the first one follows after this list):

  1. Casting signed to unsigned using the already written specializations and the `std::make_unsigned` modifier (see the sketch below).
  2. Casting unsigned to signed of greater width using the already written specializations and the `std::make_signed` modifier.
  3. Slightly harder: casting unsigned to signed of smaller or equal width using a range check and the `std::make_signed` modifier.
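
One possible sketch for the first item (my own assumption about how it could be solved, not the author's reference solution): a negative value can never fit into an unsigned type, and a non-negative one can be forwarded, as its unsigned counterpart, to the specializations we already have.

    #include <type_traits>

    // Signed value_type -> unsigned result_type.
    template <class result_type, class value_type>
    struct type_cast<result_type, value_type,
        typename std::enable_if<
            std::is_integral<result_type>::value &&
            std::is_integral<value_type>::value &&
            std::is_unsigned<result_type>::value &&
            std::is_signed<value_type>::value
        >::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            if (value < 0)
                return false; // a negative value never fits into an unsigned type

            typedef typename std::make_unsigned<value_type>::type unsigned_value_type;
            // Delegate to the unsigned/unsigned specializations written above.
            return try_safe_cast(result, unsigned_value_type(value));
        }
    };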

It is also not difficult to write similar specializations for conversions involving `std::is_floating_point` types, as well as conversions to and from the `bool` type. For complete satisfaction, you can add casting from and to string types and package the result as a much-needed library of safe casting of C++ types.
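
As one more illustration of the floating-point direction (my own sketch, not from the article), a widening floating-point cast such as float to double is always safe and fits the same scheme:

    #include <type_traits>

    // Widening floating-point cast (e.g. float -> double): always representable.
    template <class result_type, class value_type>
    struct type_cast<result_type, value_type,
        typename std::enable_if<
            !std::is_same<result_type, value_type>::value &&
            std::is_floating_point<result_type>::value &&
            std::is_floating_point<value_type>::value &&
            (sizeof(result_type) > sizeof(value_type))
        >::type>
    {
        static bool try_cast(result_type& result, value_type const& value)
        {
            result = value;
            return true;
        }
    };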

Unconventional thinking


For every use of a template there may be an exception, and now you are ready to meet it and handle it properly. A special meta-type in a helper structure's template is not always needed, but if the time comes to evaluate predicates at compile time, there is nothing to be afraid of. All you need is to roll up your sleeves and carefully build a template construct with a compile-time predicate.

But be careful: abusing templates leads to no good! Treat templates simply as a generalization of code for different types with similar behavior; templates should appear deliberately, where there is a real risk of duplicating the same code for different types.

Also remember that understanding the logic of a template predicate without the author of the code at hand requires being at least a bold optimist. So take care of your colleagues' sanity: lay out template predicates neatly, beautifully and readably, and don't hesitate to comment on almost every condition in a predicate.

Use templates carefully and only where necessary, and your colleagues will thank you. And don't be afraid to break the template when an exception to the rule comes along. Rules without exceptions are, rather, exceptions to the rules.


First published in Hacker Magazine # 193.
Author: Vladimir Qualab Kerimov, Lead C ++ Developer, Parallels


Source: https://habr.com/ru/post/257899/

