
A plugin system as an exercise in C++11

Somehow it has turned out that many of the systems I have worked on either already had their own component model, or one eventually had to appear, because it became clear that decomposition was needed and keeping the whole system in one module was getting harder and harder.

Does it make sense to write something like this yourself, or should you take a ready-made solution? This post does not answer that question, and there will be no philosophizing on the topic of "Why is this needed?"

I already made an attempt at something similar in C++03. Back then I developed a component/plugin model that lives within a single process. Solving this kind of problem is simply interesting to me. By the beginning of this (2013) year, gcc 4.7.2 supported everything I was interested in at the time, and so I came to C++11: at work in one direction, at home in another. To play around with C++11 I decided to rewrite the material from the old article using the new language features, to do, in a sense, an exercise in C++. But for some reason I could not finish the article for more than six months, and it sat untouched in drafts. Now I have dug it out and shaken off the naphthalene. What came out of this can be read below.


About the decision to use C++11


We waited and waited, and finally the update to C++ arrived: the new language standard, C++11. It brought many interesting and useful features, but whether it is worth using yet is still a controversial question, since not all compilers support it, or support it only partially.

Introduction and some philosophy


Here I will talk a little about the principles this implementation of the plugin system / component model is built on and why one approach was chosen over another. A bit of water will be poured: my own reflections. If such philosophizing does not interest you, skip ahead to the implementation.

Interfaces and IDs

The described design is based on the components of the system interacting via interfaces. An interface in this context is a C++ structure containing only pure virtual methods. The interface is the logical unit around which everything is built.

One of the important questions is what to use as an identifier for an interface, an implementation, a module and other entities. In the previous article a C string was used as the identifier, since it offers stronger uniqueness: you can, for example, use a UUID generated by some tool and turned into a string. Alternatively, you can use a numeric identifier. Its uniqueness is weaker, but there are advantages, at the very least higher performance: comparing strings is obviously more laborious than comparing numbers. As the value of the numeric identifier you can take, for example, the CRC32 of a string. Suppose there is an IBase interface in the Common namespace; then the CRC32 of the string "Common.IBase" can serve as its identifier. Yes, if interface identifiers suddenly collide somewhere (this is not a UUID, after all), you are in for many hours of "happy" debugging and a good lesson in the expressive power of the Russian language. But unless you have ambitions for your model to be used worldwide in global systems, the probability of such an outcome is minimal. A couple of companies I know of built their own crafts in the style of MS COM using numeric identifiers, never hit the problem described above, and I have heard no rumors of anyone else hitting it either. So this implementation will use a numeric identifier. Besides performance, there is another positive point: with a numeric identifier you can do a lot of interesting things at compile time, since you cannot use strings as template parameters, but a number is easy. And this is where the first C++11 feature comes in: constexpr, with which hash values can be computed at compile time.
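As a rough illustration of the idea (this is not the library code; TinyHash is just a stand-in constexpr hash, the real Crc32 is shown later), a string-derived identifier becomes a genuine compile-time constant and can parameterize a template directly:

#include <cstdint>

// Stand-in compile-time hash (recursive constexpr function, valid in C++11).
constexpr std::uint32_t TinyHash(char const *s, std::uint32_t h = 5381)
{
  return *s ? TinyHash(s + 1, h * 33 ^ static_cast<std::uint32_t>(*s)) : h;
}

template <std::uint32_t Id>
struct Tagged
{
  static constexpr std::uint32_t GetId() { return Id; }
};

// The identifier computed from the string is usable as a template parameter.
using MyThing = Tagged<TinyHash("Common.IBase")>;
static_assert(MyThing::GetId() == TinyHash("Common.IBase"), "same id at compile time");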

Cross-platform and language support

The described model will be cross-platform. "Cross-something" is one of the interesting points in any design. For a C++ developer, cross-platform support is one of the more familiar tasks, but cross-compiler support comes up less often, and what is easily handled by one compiler may not be supported by another. One such example, before decltype appeared, was the attempt to obtain the type of an expression at compile time; a good illustration is BOOST_TYPEOF. If you look under the hood of BOOST_TYPEOF you will find a considerable pile of sticks and crutches, since such things could not be implemented in pure C++03 and were mostly solved through advanced features of specific compilers. C++11 also expanded the standard library, which made it possible to drop my own wrappers around threads, synchronization objects, etc. The developers of the standard deserve a special thank-you for the type-support library: it removed the need to write such code in many cases and, most importantly, gave us things like std::is_pod and others that were impossible to implement in standard C++03 without compiler extensions.

Whether to use third-party libraries

There was a desire to minimize the use of third-party libraries and, if possible, reduce it to zero. This will be development in pure C++. When implementing the final components anything may be used, depending on the task, any libraries at all, but the model itself, as presented here, will stay pure in the sense of third-party libraries.

I have a certain attitude towards third-party libraries: do not pull a library into a project if its functionality is not going to be seriously used by the client code. You should not drag Qt into a project just because somebody likes using QString and QList. Yes, I have seen projects in which libraries and frameworks were dragged in by the ears just to use some small and unimportant part of them, simply because of the habits of individual developers. In general, one cannot forbid the use of libraries such as boost, Qt, Poco and others, but they should be applied where appropriate and brought into the project only when there is a real need for them. You should not breed a zoo; keep a couple of exotic animals in the project and no more :) Otherwise you end up with a project containing 5-7 (or even more) kinds of strings, 2-3 of which are home-grown bicycles while the rest came from other libraries, plus a pile of converters from one implementation to another. As a result, instead of doing useful work, the program may well spend a significant amount of time converting between different implementations of the same entities.

Boss ...

Somehow I am used to putting code into namespaces. Boss (Base Objects for Service Solutions) was chosen as the name of the namespace and of the whole model. The origins of the name are described in the previous article on this topic. In the comments to that article it was noted that "Boss" can be off-putting in code, because it reminds people of management and the stereotypes associated with it. There was never any intention of alluding to a certain "boss with a cudgel" (© Nasha Russia). But if it triggers negative associations for someone, why not look at it from a different angle? There is a wonderful book by Ken Blanchard, "Leading at a Higher Level", describing high-performing organizations and servant leaders whose goal is to do the maximum for employees so that they can give their best to the work, rather than simply standing behind them with a stick. That is, a leader is a helper in organizing effective work. Boss is best perceived as a manager in a highly effective organization who helps employees achieve maximum productivity by providing everything they need for it. Within this component model it is exactly that: help in organizing a thin layer for simpler interaction of entities in the system, not a monstrous framework you have to fight with, where most of the work goes into serving the framework rather than the business logic.

Minimalism in the interface

One of the criteria that plays an important role for me when looking at any library is how quickly you can start working with it, with more and more advanced options for tuning it to the task appearing as needed. That is, the library should not force its user to perform a very long ritual before anything starts working, but at the same time, as the need arises, it should offer more and more options for configuring it and adapting it to harder problems. In other words: initially, here is your big "Go" button which, when pressed, performs a certain sequence of actions by a specific pattern; and if necessary, here is the control panel with a pile of knobs and switches. This idea was one of the key points of the proposed model: hide as much as possible from the user at the initial stages. Inside the library the code can be arbitrarily complex, but all that complexity has to buy maximum ease of use of the library itself.

Multiple inheritance of implementations

A lot of holy wars have been and are being fought on the Internet about multiple inheritance of implementations. I believe that multiple inheritance is one of the strengths of C++. Yes, it sometimes causes problems, but without it it is also not always easy to get by. No C++ tool is meant to be used just because it exists; but when the need arises, the tool is there.

When people start singing the advantages of languages with multiple inheritance of interfaces only, I like to ask about the following problem. Suppose there are two interfaces and an implementation of each. These interfaces and implementations have been used in the project for quite some time. Yes, fat interfaces are a design problem, but let's say each of these interfaces has more than a dozen methods, and their implementations accordingly implement all of that. Now a component is needed with the functionality of these two entities plus the implementation of a third interface. With multiple inheritance of implementations everything is solved simply: a class derives from the new interface and from the two existing implementations, and only the methods of the new third interface have to be implemented. With multiple inheritance of interfaces only, there is no such simple solution.
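A minimal sketch of the situation just described (all names are made up for illustration):

#include <iostream>

struct IReader { virtual ~IReader() {} virtual void Read() = 0; /* ...a dozen more methods... */ };
struct IWriter { virtual ~IWriter() {} virtual void Write() = 0; /* ... */ };
struct IFlusher { virtual ~IFlusher() {} virtual void Flush() = 0; };

// Existing, long-lived implementations.
class Reader : public IReader { public: virtual void Read() { std::cout << "read\n"; } };
class Writer : public IWriter { public: virtual void Write() { std::cout << "write\n"; } };

// With multiple inheritance of implementations the new component reuses both
// existing implementations wholesale and only adds the third interface.
class ReadWriteFlush : public Reader, public Writer, public IFlusher
{
public:
  virtual void Flush() { std::cout << "flush\n"; }
};

int main()
{
  ReadWriteFlush Obj;
  Obj.Read();
  Obj.Write();
  Obj.Flush();
}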

Here, of course, one could start a lengthy discussion about system design, but real practice is not as idealistic as theoretical code design.
Once at an interview I asked a candidate (far from a youngster) what he knew about multiple inheritance. The answer was roughly: "Yes, I know there is multiple inheritance and, it seems, there is also virtual multiple inheritance, but that is bad. I never use it. And I can't say anything more about it."

If you want to build new entities by assembling them from ready-made building blocks, multiple inheritance is one of the most useful mechanisms. And component models are exactly a space for building something new out of pieces of something that already exists.

Implementation


Core

As already noted, everything is built around interfaces: C++ structures with pure virtual methods and a small admixture (the interface identifier).
The base interface from which all interfaces in this implementation must inherit:
namespace Boss
{
  struct IBase
  {
    BOSS_DECLARE_IFACEID("Boss.IBase")

    virtual ~IBase() {}

    BOSS_DECLARE_IBASE_METHODS()
  };
}

Hmm, a virtual destructor and a couple of macros... Many will exclaim: "Macros are bad!" Yes, it is bad when they are abundant and applied everywhere. In small quantities and only where needed they are useful, like a poison in pharmacology that kills or heals depending on the dosage.
BOSS_DECLARE_IFACEID
#define BOSS_DECLARE_IFACEID(ifaceid_) \
  static constexpr Boss::InterfaceId const GetInterfaceId() \
  { \
    return Boss::Crc32(ifaceid_); \
  }
adds a static method with which you can obtain the interface identifier. Since a static method does not affect the data layout of the structure in any way, the interface can safely be passed between modules built even with different compilers, and constexpr allows the resulting value to be used to parameterize templates.

The interface identifier is passed to the macro as a string parameter. Strings somehow look nicer in code than dry numbers, and you also need some data from which to generate the numeric identifier. CRC32 of the string was chosen. And here is the strength of the new standard: you can compute CRC32 and other things from strings at compile time! Such a trick, of course, does not work with strings created dynamically at run time, but for this problem that is not needed.

To implement the CRC32 calculation you need a table of data, which is easy to find on the Internet. With its help CRC32 can be computed as follows:
namespace Boss
{
  namespace Private
  {
    template <typename T>
    struct Crc32TableWrap
    {
      static constexpr uint32_t const Table[256] =
      {
        0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL,
        0x076dc419L, 0x706af48fL, 0xe963a535L, 0x9e6495a3L,
        0x0edb8832L, 0x79dcb8a4L,
        // ... etc
      };
    };

    typedef Crc32TableWrap<EmptyType> Crc32Table;

    template <int const I>
    inline constexpr std::uint32_t Crc32Impl(char const *str)
    {
      return (Crc32Impl<I - 1>(str) >> 8) ^
             Crc32Table::Table[(Crc32Impl<I - 1>(str) ^ str[I]) & 0x000000FF];
    }

    template <>
    inline constexpr std::uint32_t Crc32Impl<-1>(char const *)
    {
      return 0xFFFFFFFF;
    }
  }

  template <std::size_t N>
  inline constexpr unsigned Crc32(char const (&str)[N])
  {
    return (Private::Crc32Impl<sizeof(str) - 2>(str) ^ 0xFFFFFFFF);
  }
}

Why is the table wrapped in a structure, and a template at that? To get rid of a cpp file with the data definition, i.e. to keep everything in a header only, without the charms of static data in included files.

CRC32 is computed, the identifier is generated. Now let's see what lies under the second macro:
BOSS_DECLARE_IBASE_METHODS
#define BOSS_DECLARE_IBASE_METHODS() \
  virtual Boss::UInt BOSS_CALL AddRef() = 0; \
  virtual Boss::UInt BOSS_CALL Release() = 0; \
  virtual Boss::RetCode BOSS_CALL QueryInterface(Boss::InterfaceId ifaceId, Boss::Ptr *iface) = 0;
But! Surely it was possible to simply put the three methods in the structure? Why a macro? (And why not finish with a question about relatives in India...) Since we are not giving up multiple inheritance, and it is in fact very welcome in this model, this macro will be used in several more places to calm the compiler's anxiety about which inheritance branch each of the methods declared under the macro should be taken from.
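A minimal illustration of the ambiguity the re-declaration removes (plain standard C++, not library code):

struct A { virtual ~A() {} virtual int AddRef() = 0; };
struct B { virtual ~B() {} virtual int AddRef() = 0; };

// Without this re-declaration, c->AddRef() below would not compile:
// "request for member 'AddRef' is ambiguous".
struct C : A, B { virtual int AddRef() = 0; };

// A single implementation overrides A::, B:: and C::AddRef at once.
struct Impl : C { virtual int AddRef() { return 1; } };

int main()
{
  Impl obj;
  C *c = &obj;
  return c->AddRef();  // unambiguous, dispatches to Impl::AddRef
}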

Object lifetime is managed through reference counting. The IBase interface therefore contains methods for working with the reference counter and a method for requesting interfaces from an object.

An example of a user interface definition:
struct IFace
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace")

  virtual void BOSS_CALL Mtd() = 0;
};
Almost everything is clear: the interface, the declaration of its methods, the definition of the identifier. But why not simply inherit from IBase?

A second example of user interfaces, so that the further explanation is clearer:
struct IFace1
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")
  virtual void BOSS_CALL Mtd1() = 0;
};

struct IFace2
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace2")
  virtual void BOSS_CALL Mtd2() = 0;
};

struct IFace3
  : Boss::Inherit<IFace1, IFace2>
{
  BOSS_DECLARE_IFACEID("IFace3")
  virtual void BOSS_CALL Mtd3() = 0;
};
Does everything fall into place now? No? It is simple: in the presence of multiple inheritance, even of interfaces only, QueryInterface must be able to "walk" the hierarchy in its search for the requested interface. The case is a bit esoteric, but I have stumbled on it from time to time. Suppose you have a pointer to IFace3; obviously, all methods of its base classes can be called right here on the spot. But if you pass it to another, more generic function that always requests IFace1 or IFace2 from some interface (not necessarily one with this exact inheritance structure), that function no longer relies on the C++ mechanisms but on the implemented QueryInterface, whose implementation has to traverse this hierarchy. This is where a small admixture comes in: Boss::Inherit, which has the following implementation:
namespace Boss
{
  template <typename ... T>
  struct Inherit
    : public T ...
  {
    virtual ~Inherit() {}
    typedef std::tuple<T ... > BaseInterfaces;
    BOSS_DECLARE_IBASE_METHODS()
  };
}
This admixture simply inherits from the passed list of base interfaces, "calms" the compiler about the ambiguity of choosing the right method (via BOSS_DECLARE_IBASE_METHODS) and "stashes" the list of inherited interfaces for later. Here the new standard gives us variadic templates. Hurray, we finally got them! Previously this was solved with bulky type lists in the Alexandrescu style. And the new "pluses" throw in another bonus in the form of std::tuple, freeing you from writing yet another bicycle of your own.

How, from what and why to define user interfaces has been covered, but they still need to be implemented somewhere and somehow. First, a small example of implementing an interface:
struct IFace1
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")
  virtual void Mtd1() = 0;
};

class Face_1
  : public Boss::CoClass<Boss::Crc32("Face_1"), IFace1>
{
public:
  virtual void Mtd1()
  {
    // TODO:
  }
};

And a bigger example
struct IFace1 : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")
  virtual void Mtd1() = 0;
};

struct IFace2 : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace2")
  virtual void Mtd2() = 0;
};

struct IFace3 : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace3")
  virtual void Mtd3() = 0;
};

class Face1 : public Boss::CoClass<Boss::Crc32("Face1"), IFace1>
{
public:
  virtual void Mtd1() { /* TODO: */ }
};

class Face2 : public Boss::CoClass<Boss::Crc32("Face2"), IFace2>
{
public:
  virtual void Mtd2() { /* TODO: */ }
};

class Face123 : public Boss::CoClass<Boss::Crc32("Face123"), Face1, Face2, IFace3>
{
public:
  virtual void Mtd3() { /* TODO: */ }
};

struct IFace4 : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace4")
  virtual void Mtd4() = 0;
};

struct IFace5 : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace5")
  virtual void Mtd5() = 0;
};

struct IFace6 : Boss::Inherit<IFace4, IFace5>
{
  BOSS_DECLARE_IFACEID("IFace6")
  virtual void Mtd6() = 0;
};

class Face123456 : public Boss::CoClass<Boss::Crc32("Face123456"), Face123, IFace6>
{
public:
  virtual void Mtd4() { /* TODO: */ }
  virtual void Mtd5() { /* TODO: */ }
  virtual void Mtd6() { /* TODO: */ }
};
with all the "nastiness" that the implementation will have to untangle. Clearly, even more chaos could be piled up; the example above just demonstrates the possibilities of building implementations out of "blocks".

It is hard not to notice that every implementation inherits from CoClass. CoClass has a very simple implementation:
namespace Boss
{
  template <ClassId ClsId, typename ... T>
  class CoClass
    : public virtual Private::CoClassAdditive
    , public T ...
  {
  public:
    typedef std::tuple<T ... > BaseEntities;

    CoClass()
      : Constructed(false)
    {
    }

    // IBase
    BOSS_DECLARE_IBASE_METHODS()

  private:
    template <typename Y>
    friend void Private::SetConstructedFlag(Y *, bool);
    template <typename Y>
    friend bool Private::GetConstructedFlag(Y *);

    bool Constructed;
  };
}
This class, like the Inherit structure, inherits from the passed list of entities, "stashing" that list, and also inherits from a small
admixture / marker (Private::CoClassAdditive)
namespace Boss
{
  namespace Private
  {
    struct CoClassAdditive
    {
      virtual ~CoClassAdditive() {}
    };
  }
}
(which will be used to classify entities: interface or implementation). It also relieves the compiler of its indecision (by re-declaring the methods via BOSS_DECLARE_IBASE_METHODS) and contains a flag indicating that the object has been fully constructed (Constructed).

There are interfaces, there are their implementations, but there is still no implementation of IBase itself. The implementation of this interface is probably one of the most complex parts.

Creating an object from the big example above looks something like this:
 auto Obj = Boss::Base<Face123456>::Create(); 

Boss::Base is the class implementing Boss::IBase. To perform certain operations, the implementation has to traverse the class hierarchy. For the example above, the simplified hierarchy looks like this:

Let's briefly postpone traversing the class hierarchy in search of what we need and quickly go through the simpler methods.

Reference counting is done via the AddRef method (increments the reference count) and Release (decrements it and, on reaching zero, deletes the object with delete this). Since objects are assumed to be usable in a multi-threaded environment, the counter is handled through std::atomic, which allows it to be incremented and decremented safely across threads. Yes, C++ has finally acknowledged the existence of threads, and support for threads and synchronization primitives has appeared.

The Create method has the following implementation:
template <typename ... Args>
static RefObjPtr<T> Create(Args const & ... args)
{
  Private::ModuleCounter::ScopedLock Lock;
  RefObjPtr<T> NewInst(new Base<T>(args ...));
  Private::FinalizeConstruct<T>::Construct(NewInst.Get());
  return std::move(NewInst);
}
Variadic templates make it possible to write a single object-construction method and forward the required parameters to the constructor. Previously this was impossible, and if an object needed some initial settings, you had to create it first and then call some (class-specific) Init-style method, passing the needed parameters to it.
ModuleCounter
namespace Boss
{
  namespace Private
  {
    struct ModuleRefCounterTypeStub
    {
    };

    template <typename T>
    class ModuleRefCounter
    {
    public:
      static void AddRef()
      {
        Counter.fetch_add(1, std::memory_order_relaxed);
      }
      static void Release()
      {
        Counter.fetch_sub(1, std::memory_order_relaxed);
      }
      static UInt GetCounter()
      {
        return Counter;
      }

    private:
      static std::atomic<UInt> Counter;

    public:
      class ScopedLock
      {
      public:
        ScopedLock(ScopedLock const &) = delete;
        ScopedLock(ScopedLock &&) = delete;
        ScopedLock operator = (ScopedLock const &) = delete;
        ScopedLock operator = (ScopedLock &&) = delete;

        ScopedLock()
        {
          ModuleRefCounter<T>::AddRef();
        }
        ~ScopedLock()
        {
          ModuleRefCounter<T>::Release();
        }
      };
    };

    template <typename T>
    std::atomic<UInt> ModuleRefCounter<T>::Counter(0);

    typedef ModuleRefCounter<ModuleRefCounterTypeStub> ModuleCounter;
  }
}
manages the module's reference count. There are two reference counters: the counter on the object itself, and the counter of all references in the module. The module counter is needed to understand when there are still "live" objects in a module and when there are none, so that the module can be unloaded.

To abandon static libraries and implement the singleton pattern (per module) for the ModuleRefCounter entity in header files only, the trick with templates and static members comes in handy (more details in the previous article). Briefly: if you create a class template with a static field and instantiate it with some type, the instance of that static field will be the only one for the whole module. The result is a small trick for writing singletons entirely in headers, without an implementation somewhere in a cpp file ("singletons in includes"), roughly as in the sketch below.
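A minimal sketch of the trick, independent of the library code:

#include <iostream>

// Header-only "singleton": the static member of a class template is defined
// right in the header, yet the linker folds all instantiations into a single
// object per module (one per dynamic library, as the next paragraph notes).
template <typename Tag>
struct HeaderCounter
{
  static int Value;
};

template <typename Tag>
int HeaderCounter<Tag>::Value = 0;

struct MyTag {};
typedef HeaderCounter<MyTag> Counter;

int main()
{
  ++Counter::Value;
  ++Counter::Value;
  std::cout << Counter::Value << std::endl;  // prints 2
}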
And this beautiful solution hides a rake, a child-sized rake: the handle is half as long, it hits more precisely and hurts more... The solution works fine with .dll, but with .so I ran into a problem: a template with static fields, instantiated with the same type, turned out to be one and the same across all the .so files with components of this model within the process! Why this happens I understood a little later, but I had to abandon the beautiful solution in favor of a simpler one based on anonymous namespaces and a header that is included in each module no more than once (for those interested: boss/include/plugin/module.h).

C++ is considered by many to be a language that makes it easy to "shoot yourself in the foot". As a rule, it is most often blamed for the paired resource acquisition/release operations, memory in particular. But if you use smart pointers, that is one headache less. RefObjPtr is exactly such a smart pointer: it calls AddRef and Release to control the lifetime of an object, and when it is used, AddRef and Release should never appear in user code.
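For orientation, a minimal sketch of what an intrusive reference-counting pointer of this kind might look like (this is not the library's RefObjPtr, just an illustration of the idea, including the move constructor mentioned below):

// Stripped-down intrusive smart pointer: AddRef on acquire, Release on drop.
// Purely illustrative; the real RefObjPtr has more (Get, GetPPtr, QI helpers, ...).
template <typename T>
class RefPtr
{
public:
  RefPtr() : Obj(nullptr) {}
  explicit RefPtr(T *obj) : Obj(obj) { if (Obj) Obj->AddRef(); }
  RefPtr(RefPtr const &other) : Obj(other.Obj) { if (Obj) Obj->AddRef(); }
  RefPtr(RefPtr &&other) : Obj(other.Obj) { other.Obj = nullptr; }  // no extra AddRef/Release
  ~RefPtr() { if (Obj) Obj->Release(); }

  RefPtr & operator = (RefPtr other)  // copy-and-swap covers both copy and move assignment
  {
    T *Tmp = Obj;
    Obj = other.Obj;
    other.Obj = Tmp;
    return *this;
  }

  T * operator -> () const { return Obj; }
  T * Get() const { return Obj; }

private:
  T *Obj;
};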

Such a goodie of the new standard as r-value references allows writing more optimal entities; for example, the same RefObjPtr can be returned from a function without an extra AddRef/Release in copy constructors (return std::move(NewInst)).

Create also contains a call to some FinalizeConstruct. What is it and why? Suppose you have a hierarchy roughly as complex as the one shown in the figure above, and in one of the interface implementations you need to call something that is defined in a class lower in the hierarchy. You could use virtual functions, but, to put it simply, inside a constructor the virtual function table is not yet that of the final class, and in the destructor it no longer is. All calls to virtual functions there behave like calls to ordinary member functions of the current class, so from a constructor you cannot call a function overridden one level below. FinalizeConstruct is made for this case and is called after the object has already been fully created. It turns out that logic similar to the constructor-calling logic has to be implemented by hand, i.e. traverse the whole hierarchy and call FinalizeConstruct for each class in the same order in which constructors are called.
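A tiny illustration of the pitfall itself (standard C++ behavior, not library code):

#include <iostream>

struct Component
{
  Component()
  {
    // Inside the constructor the dynamic type is still Component,
    // so this resolves to Component::Init, not Plugin::Init.
    Init();
  }
  virtual ~Component() {}
  virtual void Init() { std::cout << "Component::Init\n"; }
};

struct Plugin : Component
{
  virtual void Init() { std::cout << "Plugin::Init\n"; }
};

int main()
{
  Plugin P;   // prints "Component::Init"
  P.Init();   // prints "Plugin::Init": after construction, dispatch works as expected
}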

The class developer is not required to define FinalizeConstruct in their class. When traversing the class hierarchy, the FinalizeConstruct logic implemented in the model determines, with the help of good old SFINAE, whether the class has a FinalizeConstruct method and, if it does, calls it. The main rule: the user's implementation of FinalizeConstruct must not be virtual! Otherwise there will be confusion when assembling entities from ready-made blocks.
The presence of FinalizeConstruct in a class is detected by the following code:
template <typename T>
class HasFinalizeConstruct
{
private:
  typedef char (&No)[1];
  typedef char (&Yes)[10];

  template <typename U, void (U::*)() = &U::FinalizeConstruct>
  struct CheckMtd
  {
    typedef Yes Type;
  };

  template <typename U>
  static typename CheckMtd<U>::Type Check(U const *);
  static No Check(...);

public:
  enum { Has = sizeof(Check(static_cast<T const *>(0))) == sizeof(Yes) };
};

The whole logic of calling FinalizeConstruct
 namespace Boss { namespace Private { template <bool HasFinalizeConstruct> struct CallFinalizeConstruct { template <typename ObjType> static void Call(ObjType *obj) { obj->FinalizeConstruct(); SetConstructedFlag(obj, true); } }; template <> struct CallFinalizeConstruct<false> { template <typename ObjType> static void Call(ObjType *obj) { SetConstructedFlag(obj, true); } }; template < typename T, bool IsCoClass = std::is_base_of<CoClassAdditive, T>::value > struct FinalizeConstruct { template <typename ObjType> static void Construct(ObjType *) { } }; template <typename T, std::size_t I> struct FinalizeConstructIter { template <typename ObjType> static void Construct(ObjType *obj) { typedef typename std::tuple_element<I, T>::type CurType; FinalizeConstructIter<T, I - 1>::Construct(obj); FinalizeConstruct<CurType>::Construct(static_cast<CurType *>(obj)); } }; template <typename T> struct FinalizeConstructIter<T, -1> { template <typename ObjType> static void Construct(ObjType *) { } }; template <typename T> struct FinalizeConstruct<T, true> { template <typename ObjType> static void Construct(ObjType *obj) { typedef typename T::BaseEntities BaseEntities; enum { BaseEntityCount = std::tuple_size<BaseEntities>::value - 1 }; FinalizeConstructIter<BaseEntities, BaseEntityCount>::Construct(obj); CallFinalizeConstruct<HasFinalizeConstruct<T>::Has>::Call(obj); } }; } } 
is built on partial specializations of templates and on walking the hierarchy through the stashed tuples of base-class types. The standard library now has facilities for working with types, so to determine whether a class is an implementation class you can use std::is_base_of instead of writing your own; likewise, std::tuple can be used instead of type lists in the Alexandrescu style.

The analogue of constructors is ready, but what about an analogue of destructors? We cannot do without one. The model implements hierarchy-traversal logic for destruction as well: the implementation class is searched, via the same SFINAE, for a BeforeRelease method and, if there is one, it is called. The BeforeRelease logic is similar to that of FinalizeConstruct, only the hierarchy is traversed in reverse order.

Now it is possible to finish constructing an object after it has been fully created and to release something before the object is destroyed. In a constructor you can report a problem by throwing an exception, and the same behavior is implemented in this model: any FinalizeConstruct in the hierarchy may throw, in which case the rest of the FinalizeConstruct chain is not called, while BeforeRelease is called for those objects in the hierarchy whose FinalizeConstruct has already completed successfully. This gives a complete analogy with C++ constructors and destructors. BeforeRelease is called from the implementation of the Release method, and during the hierarchy traversal it is called only for those objects whose FinalizeConstruct call succeeded; success is tracked by the Constructed flag located in CoClass (remember it?). It is also worth noting that if a class does not need both of these methods, only one of them may be present, or neither, if they are not needed at all.
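A small sketch of how a user component might use this pair (IWorker here is a hypothetical interface and the bodies are illustrative, not library code):

// Hypothetical component showing the intended usage pattern:
// FinalizeConstruct runs after the whole object is built (virtual dispatch works),
// BeforeRelease runs just before destruction. Neither method is virtual.
class Worker
  : public Boss::CoClass<Boss::Crc32("Worker"), IWorker>
{
public:
  virtual void BOSS_CALL DoWork()
  {
    // ...
  }

  void FinalizeConstruct()   // not virtual!
  {
    // Acquire resources that need the fully constructed object.
    // Throwing here aborts construction; BeforeRelease of already
    // finalized bases will still be called.
  }

  void BeforeRelease()       // not virtual!
  {
    // Release what FinalizeConstruct acquired.
  }
};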

It remains to implement the logic of
QueryInterface
 namespace Boss { namespace Private { template <typename T, bool IsImpl> struct QueryInterface; template <typename T, std::size_t I> struct QueryInterfacesListIter { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { typedef typename std::tuple_element<I, T>::type CurType; if (ifaceId == InterfaceTraits<CurType>::Id) { *iface = static_cast<CurType *>(obj); return Status::Ok; } return QueryInterfacesListIter<T, I - 1>::Query(obj, ifaceId, iface) == Status::Ok ? Status::Ok : QueryInterface<CurType, false>::Query(obj, ifaceId, iface); } }; template <typename T> struct QueryInterfacesListIter<T, -1> { template <typename ObjType> static RetCode Query(ObjType *, InterfaceId, Ptr *) { return Status::InterfaceNotFound; } }; template <typename T> struct QueryFromInterfacesList { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { typedef typename T::BaseInterfaces BaseInterfaces; enum { BaseInterfaceCount = std::tuple_size<BaseInterfaces>::value - 1 }; return QueryInterfacesListIter<BaseInterfaces, BaseInterfaceCount>::Query(obj, ifaceId, iface); } }; template <> struct QueryFromInterfacesList<IBase> { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { if (ifaceId == InterfaceTraits<IBase>::Id) { *iface = static_cast<IBase *>(obj); return Status::Ok; } return Status::InterfaceNotFound; } }; template < typename T, bool IsCoClass = std::is_base_of<CoClassAdditive, T>::value > struct QueryInterface { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { if (ifaceId == InterfaceTraits<T>::Id) { *iface = static_cast<T *>(obj); return Status::Ok; } return QueryFromInterfacesList<T>::Query(static_cast<T *>(obj), ifaceId, iface); } }; template <typename T, std::size_t I> struct QueryInterfaceIter { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { typedef typename std::tuple_element<I, T>::type CurType; return QueryInterface<CurType>::Query(static_cast<ObjType *>(obj), ifaceId, iface) == Status::Ok ? Status::Ok : QueryInterfaceIter<T, I - 1>::Query(obj, ifaceId, iface); } }; template <typename T> struct QueryInterfaceIter<T, -1> { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { return Status::InterfaceNotFound; } }; template <typename T> struct QueryInterface<T, true> { template <typename ObjType> static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface) { typedef typename T::BaseEntities BaseEntities; enum { BaseEntityCount = std::tuple_size<BaseEntities>::value - 1 }; return QueryInterfaceIter<BaseEntities, BaseEntityCount>::Query(static_cast<T *>(obj), ifaceId, iface); } }; } } 
which, by and large, is not very different from the hierarchy traversal described above. When an implementation class is encountered in the tree during the traversal, its stashed list of base entities is taken and the recursion continues until the right interface is found. There is one addition: since interfaces may themselves be multiply inherited from other interfaces, when an interface is encountered during the search it is traversed in the same way as an implementation class, only using its stashed list of base interfaces.
Implementing Boss::IBase
namespace Boss
{
  template <typename T>
  class Base final
    : public T
  {
  public:
    Base(Base const &) = delete;
    Base const & operator = (Base const &) = delete;
    Base(Base &&) = delete;
    Base const & operator = (Base &&) = delete;

    template <typename ... Args>
    static RefObjPtr<T> Create(Args const & ... args)
    {
      Private::ModuleCounter::ScopedLock Lock;
      RefObjPtr<T> NewInst(new Base<T>(args ...));
      Private::FinalizeConstruct<T>::Construct(NewInst.Get());
      return std::move(NewInst);
    }

  private:
    std::atomic<UInt> Counter;

    template <typename ... Args>
    Base(Args const & ... args)
      : T(args ...)
      , Counter(0)
    {
      Private::ModuleCounter::AddRef();
    }
    virtual ~Base()
    {
      Private::ModuleCounter::Release();
    }

    // IBase
    virtual UInt BOSS_CALL AddRef()
    {
      return Counter.fetch_add(1, std::memory_order_relaxed) + 1;
    }
    virtual UInt BOSS_CALL Release()
    {
      UInt CurValue = Counter.fetch_sub(1, std::memory_order_relaxed);
      if (CurValue == 1)
      {
        Private::BeforeRelease<T>::Release(static_cast<T *>(this));
        std::atomic_thread_fence(std::memory_order_acquire);
        delete this;
      }
      return CurValue - 1;
    }
    virtual RetCode BOSS_CALL QueryInterface(InterfaceId ifaceId, Ptr *iface)
    {
      RetCode Ret = Private::QueryInterface<T>::Query(static_cast<T *>(this), ifaceId, iface);
      if (Ret == Status::Ok)
        AddRef();
      return Ret;
    }
  };
}
Here another C++11 feature is used: final, which rules out inheriting from this implementation in user hierarchies and other auxiliary classes. And what is not needed, copying and moving objects of this type, is explicitly marked as deleted right in the class interface.

The kernel is ready! All the most difficult and interesting parts have been described. From here on everything is much simpler and smoother, with no puzzles.

Plugins

This part covers the organization of plugins. In the current context, plugins should be understood as dynamic libraries (.so/.dll) hosting interface implementation classes (components) plus a small set of functions for accessing objects of those implementation classes.

This part of the article is, in my opinion, the simplest, since there is no "template programming" or other mockery of the compiler; it is just the creation of a certain set of interfaces and implementations for organizing the plugin system.

For components to "live" in their dwellings (plugins) within one state, called the process, not much is needed: a service registry, a class factory and a loader.


The service registry is the place where all information about services is stored: the service identifier, the identifiers of the implementation classes it provides and the path to its module (see IServiceInfo / ILocalServiceInfo below).

Based on this information, the class factory can load the necessary plugin and create interface implementation objects.

The role of the loader is to load the service registry, load the class factory and configure it to work with the registry. After that, all requests to create objects go only to the factory, and the user gets a layer of abstraction: they need not care which module their object lives in or how it is created. When requesting a new object, the user operates only with the identifiers of the implementation classes.

The service registry provides an interface with just one method, which is enough for the class factory to obtain the information it needs:
namespace Boss
{
  struct IServiceRegistry
    : public Inherit<IBase>
  {
    BOSS_DECLARE_IFACEID("Boss.IServiceRegistry")

    virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const = 0;
  };
}

But the service registry implementation class itself can provide several interfaces. What was all this for? To make components composable.
Implementation class of the service registry
namespace Boss
{
  class ServiceRegistry
    : public CoClass
        <
          Service::Id::ServiceRegistry,
          IServiceRegistry,
          IServiceRegistryCtrl,
          ISerializable
        >
  {
  public:
    ServiceRegistry();
    virtual ~ServiceRegistry();

  private:
    // IServiceRegistry
    virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const;

    // IServiceRegistryCtrl
    virtual RetCode BOSS_CALL AddService(IServiceInfo *service);
    virtual RetCode BOSS_CALL DelService(ServiceId serviceId);

    // ISerializable
    virtual RetCode BOSS_CALL Load(IIStream *stream);
    virtual RetCode BOSS_CALL Save(IOStream *stream);

    // ...
  };
}
i.e. the implementation provides an interface for manipulating the registry (IServiceRegistryCtrl) and one for loading and saving it (ISerializable).
Class factory implementation
namespace Boss
{
  class ClassFactory
    : public CoClass
        <
          Service::Id::ClassFactory,
          IClassFactory,
          IClassFactoryCtrl
        >
  {
  public:
    // IClassFactory
    virtual RetCode BOSS_CALL CreateObject(ClassId clsId, IBase **inst);

    // IClassFactoryCtrl
    virtual RetCode BOSS_CALL SetRegistry(IServiceRegistry *registry);

    // ...
  };
}
It also provides several interfaces: the main one (IClassFactory), which all clients will use to create objects, and an auxiliary one (IClassFactoryCtrl), which the loader uses to point the factory at the registry.

The loader code is quite simple, but unfortunately C++11 still barely acknowledges the platform (OS). Threads were recognized, but the existence of such things as dynamic libraries was not. So module loading uses OS-dependent code, hidden deep inside, of course. It would be natural to recall pImpl here, but since the policy was to abandon static libraries, it is done a little differently: an implementation for each OS in its own header file, plus a facade header that decides what to include based on __linux__ and _WIN32.
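Roughly, the OS-dependent part boils down to something like this (a sketch under my own naming, not the library's actual holder class):

// Sketch of an OS-specific dynamic library holder chosen via the preprocessor.
// Names here are illustrative, not the ones used in the Boss sources.
#include <stdexcept>
#include <string>

#if defined(__linux__)
  #include <dlfcn.h>

  class SharedLib
  {
  public:
    explicit SharedLib(std::string const &path)
      : Handle(dlopen(path.c_str(), RTLD_NOW | RTLD_GLOBAL))
    {
      if (!Handle)
        throw std::runtime_error("Failed to load module " + path);
    }
    ~SharedLib() { dlclose(Handle); }
    void * GetSymbol(char const *name) const { return dlsym(Handle, name); }
  private:
    void *Handle;
  };
#elif defined(_WIN32)
  #include <windows.h>

  class SharedLib
  {
  public:
    explicit SharedLib(std::string const &path)
      : Handle(LoadLibraryA(path.c_str()))
    {
      if (!Handle)
        throw std::runtime_error("Failed to load module " + path);
    }
    ~SharedLib() { FreeLibrary(Handle); }
    void * GetSymbol(char const *name) const
    {
      return reinterpret_cast<void *>(GetProcAddress(Handle, name));
    }
  private:
    HMODULE Handle;
  };
#endif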

A small example of using services within the model, with plugins living in the same process:
#include <iostream>

#include "plugin/loader.h"
#include "plugin/module.h"

int main()
{
  try
  {
    Boss::Loader Ldr("Registry.xml", "./libservice_registry.so", "./libclass_factory.so");
    Boss::RefObjQIPtr<Boss::IBase> Inst;
    Inst = Ldr.CreateObject<Boss::IBase>(Boss::Crc32("MyClass"));
  }
  catch (std::exception const &e)
  {
    std::cerr << e.what() << std::endl;
  }
  return 0;
}

As noted at the beginning of the section, everything here is very simple; it just took writing a certain amount of auxiliary code.

Examples

The best example is a real task, not an artificially constructed pile-up that demonstrates this or that feature to the maximum.

Above, when describing the kernel, I gave a rather large example that tried to show as much as possible of the flexibility of assembling entities from ready-made implementations and adding a new interface. But that example, although it shows the capabilities of the model, is contrived and not very friendly. So as examples let's consider the implementation of the components the plugin part itself needs, namely the service registry and the class factory. Although they are part of the plugin model, they are plugins just like the ones a user can develop for their own needs.

Once again, the implementation class for the service registry.
Service Registry Implementation
namespace Boss
{
  class ServiceRegistry
    : public CoClass
        <
          Service::Id::ServiceRegistry,
          IServiceRegistry,
          IServiceRegistryCtrl,
          ISerializable
        >
  {
  public:
    ServiceRegistry();
    virtual ~ServiceRegistry();

  private:
    // IServiceRegistry
    virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const;

    // IServiceRegistryCtrl
    virtual RetCode BOSS_CALL AddService(IServiceInfo *service);
    virtual RetCode BOSS_CALL DelService(ServiceId serviceId);

    // ISerializable
    virtual RetCode BOSS_CALL Load(IIStream *stream);
    virtual RetCode BOSS_CALL Save(IOStream *stream);

    // ...
  };
}

Now I'll try to describe what is happening here...
To create a class implementing one or more interfaces, you derive it from the CoClass template. CoClass takes as parameters the implementation class identifier (which can then be used to create the object through the class factory) and a list of inherited interfaces or ready-made interface implementations. Looking at the service registry implementation class, you can see the identifier (Service::Id::ServiceRegistry) and the interfaces implemented by this class: IServiceRegistry, the registry interface that the class factory will use; IServiceRegistryCtrl, the registry management interface; and ISerializable, since the registry has to be saved somewhere and loaded from somewhere, which this interface makes possible. That is all there is to creating a component; it only remains to implement its methods.

The component is ready. It remains to publish it, i.e. to make it accessible from outside the module it lives in.
For this the BOSS_DECLARE_MODULE_ENTRY_POINT macro is used:
 #include "service_registry.h" #include "plugin/module.h" namespace { typedef std::tuple < Boss::ServiceRegistry > ExportedCoClasses; } BOSS_DECLARE_MODULE_ENTRY_POINT("ServiceRegistry", ExportedCoClasses) 
The macro is passed a string, from which a CRC32 is computed to serve as the module identifier, and the list of implementation classes exported by the module. After that the component and its module (a module may contain several components) are ready and can be used once registered in the registry (exception: the service registry and the class factory themselves may remain unregistered for normal use of the model).
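Conceptually (this is not the actual macro expansion, just an assumption about what such an entry point has to provide), the module ends up exporting a handful of C functions along these lines:

// Purely hypothetical sketch: names and signatures are made up, the real
// BOSS_DECLARE_MODULE_ENTRY_POINT expansion differs. The idea is that a plugin
// must expose a way to create an object of an exported class by its ClassId
// and a way to check whether live objects still keep the module from unloading.
extern "C"
{
  Boss::RetCode CreateObjectFromModule(Boss::ClassId clsId, Boss::IBase **inst);
  Boss::UInt GetModuleRefCount();
}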

Another similar example: the class factory implementation, already shown above.
Class factory
namespace Boss
{
  class ClassFactory
    : public CoClass
        <
          Service::Id::ClassFactory,
          IClassFactory,
          IClassFactoryCtrl
        >
  {
  public:
    // IClassFactory
    virtual RetCode BOSS_CALL CreateObject(ClassId clsId, IBase **inst);

    // IClassFactoryCtrl
    virtual RetCode BOSS_CALL SetRegistry(IServiceRegistry *registry);

    // ...
  };
}

A completely similar example: again inheritance from CoClass, an identifier and the list of implemented interfaces. The class factory lives in a separate module and accordingly has its own
entry point
 #include "class_factory.h" #include "plugin/module.h" namespace { typedef std::tuple < Boss::ClassFactory > ExportedCoClasses; } BOSS_DECLARE_MODULE_ENTRY_POINT("ClassFactory", ExportedCoClasses) 
similar to the entry point of the service registry.

These were simple component implementations where each component inherited only a list of interfaces and implemented their methods, and that was all; there was no inheritance of ready-made implementations. If you look again at the service registry interface, you will see that it works with IServiceInfo, through which all the information is passed. IServiceInfo can only convey general information about a service, but there is also specific information. Initially I wanted plugins that live not only in dynamic libraries but are also scattered across processes, in their own executable modules. Hence the different kinds of information: for plugins in dynamic libraries only the path to the library is added, while plugins in separate executable modules need a lot of extra information about Proxy/Stubs, transport, etc. (unfortunately, I did not finish this part and cut out the rudiments so as not to clutter the code with unfinished pieces). Now an example in which components inherit not only from interfaces but also from implementations.
Implementing Service Information
 #ifndef __BOSS_PLUGIN_SERVICE_INFO_H__ #define __BOSS_PLUGIN_SERVICE_INFO_H__ #include "../core/base.h" #include "../core/error_codes.h" #include "../core/ref_obj_ptr.h" #include "../common/enum.h" #include "../common/entity_id.h" #include "../common/string.h" #include "iservice_info.h" #include <string> namespace Boss { namespace Private { template <typename T, bool = !!std::is_base_of<IServiceInfo, T>::value> class ServiceInfo; template <typename T> class ServiceInfo<T, true> : public CoClass<Crc32("Boss.ServiceInfo"), T> { public: // … void SetServiceId(ServiceId srvId) { // ... } void AddCoClassId(ClassId clsId) { // ... } void AddCoClassIds(RefObjPtr<IEnum> coClassIds) { // ... } private: // … // IServiceInfo virtual RetCode BOSS_CALL GetServiceId(ServiceId *serviceId) const { // ... } virtual RetCode BOSS_CALL GetClassIds(IEnum **ids) const { // ... } }; } class LocalServiceInfo : public CoClass<Crc32("Boss.LocalServiceInfo"), Private::ServiceInfo<ILocalServiceInfo>> { public: void SetModulePath(std::string const &path) { // ... } void SetModulePath(RefObjPtr<IString> path) { // ... } private: // ... // ILocalServiceInfo virtual RetCode BOSS_CALL GetModulePath(IString **path) const { // ... } }; class RemoteServiceInfo : public CoClass<Crc32("Boss.RemoteServiceInfo"), Private::ServiceInfo<IRemoteServiceInfo>> { public: void SetProps(RefObjPtr<IPropertyBag> props) { // ... } private: // ... // IRemoteServiceInfo virtual RetCode BOSS_CALL GetProperties(IPropertyBag **props) const { // ... } }; } #endif // !__BOSS_PLUGIN_SERVICE_INFO_H__ 
The ServiceInfo implementation may seem a bit convoluted. Why a template here as well? This is a subtlety of the data structure implementation that occurred to me, not something the component model / plugin system requires. To clarify the reason a little, here is the interface:
Service Information Interface
#ifndef __BOSS_PLUGIN_ISERVICE_INFO_H__
#define __BOSS_PLUGIN_ISERVICE_INFO_H__

#include "../core/ibase.h"
#include "../common/ienum.h"
#include "../common/istring.h"
#include "../common/iproperty_bag.h"

namespace Boss
{
  struct IServiceInfo
    : public Inherit<IBase>
  {
    BOSS_DECLARE_IFACEID("Boss.IServiceInfo")

    virtual RetCode BOSS_CALL GetServiceId(ServiceId *serviceId) const = 0;
    virtual RetCode BOSS_CALL GetClassIds(IEnum **ids) const = 0;
  };

  struct ILocalServiceInfo
    : public Inherit<IServiceInfo>
  {
    BOSS_DECLARE_IFACEID("Boss.ILocalServiceInfo")

    virtual RetCode BOSS_CALL GetModulePath(IString **path) const = 0;
  };

  struct IRemoteServiceInfo
    : public Inherit<IServiceInfo>
  {
    BOSS_DECLARE_IFACEID("Boss.IRemoteServiceInfo")

    virtual RetCode BOSS_CALL GetProperties(IPropertyBag **props) const = 0;
  };
}

#endif  // !__BOSS_PLUGIN_ISERVICE_INFO_H__
A somewhat more digestible example of inheriting both interfaces and implementations was given in the kernel description, with the fancy Face123456 class and no templates at all :)

How components implement everything has been covered; it is simple. How to query interfaces, and how to obtain one from another, can be seen in the example of the loader, which loads the service registry, obtains the necessary interfaces from it, configures the registry itself, then loads the class factory and points it at the registry. After that, of course, all the client's work goes through the class factory and the client no longer deals with modules at all, which is exactly what all this abstraction was started for.
Loader
 #ifndef __BOSS_PLUGIN_LOADER_H__ #define __BOSS_PLUGIN_LOADER_H__ #include "iservice_registry.h" #include "iclass_factory.h" #include "iclass_factory_ctrl.h" #include "module_holder.h" #include "service_ids.h" #include "core/exceptions.h" #include "common/file_stream.h" #include "common/iserializable.h" #include <string> namespace Boss { BOSS_DECLARE_RUNTIME_EXCEPTION(Loader) class Loader final { public: Loader(Loader const &) = delete; Loader& operator = (Loader const &) = delete; Loader(std::string const &registryFilePath, std::string const &srvRegModulePath, std::string const &clsFactoryModulePath) : SrvRegistry([&] () { auto SrvRegModule(ModuleHolder(std::move(DllHolder(srvRegModulePath)))); auto SrvReg = SrvRegModule.CreateObject<IServiceRegistry>(Service::Id::ServiceRegistry); RefObjQIPtr<ISerializable> Serializable(SrvReg); if (!Serializable.Get()) throw LoaderException("Failed to get ISerializable interface from Registry object."); if (Serializable->Load(Base<IFileStream>::Create(registryFilePath).Get()) != Status::Ok) throw LoaderException("Failed to load Registry."); return std::move(std::make_pair(std::move(SrvRegModule), std::move(SrvReg))); } ()) , ClsFactory([&] () { auto ClassFactoryModule(ModuleHolder(std::move(DllHolder(clsFactoryModulePath)))); auto NewClsFactory = ClassFactoryModule.CreateObject<IClassFactory>(Service::Id::ClassFactory); RefObjQIPtr<IClassFactoryCtrl> Ctrl(NewClsFactory); if (!Ctrl.Get()) throw LoaderException("Failed to get ICalssFactoryCtrl interface from ClassFactory object."); if (Ctrl->SetRegistry(SrvRegistry.second.Get()) != Status::Ok) throw LoaderException("Failed to set Registry into ClassFactory."); return std::move(std::make_pair(std::move(ClassFactoryModule), std::move(NewClsFactory))); } ()) { } template <typename T> RefObjPtr<T> CreateObject(ClassId clsId) { RefObjPtr<IBase> NewInst; if (ClsFactory.second->CreateObject(clsId, NewInst.GetPPtr()) != Status::Ok) throw LoaderException("Failed to create object."); RefObjQIPtr<T> Ret(NewInst); if (!Ret.Get()) throw LoaderException("Interface not found."); return Ret; } ~Loader() { ClsFactory.second.Release(); SrvRegistry.second.Release(); } private: std::pair<ModuleHolder, RefObjPtr<IServiceRegistry>> SrvRegistry; std::pair<ModuleHolder, RefObjPtr<IClassFactory>> ClsFactory; }; } #endif // !__BOSS_PLUGIN_LOADER_H__ 

Besides the examples given here, the examples from the article with the previous C++03 implementation are still relevant. The only difference is how identifiers are handled: in the new model there is no separate macro you have to add to an implementation class and might forget about. If you forget the identifier in the new model, the compiler will remind you, since it is now a template parameter.

Conclusion

There was a certain big idea, but it was realized only by about two thirds.

Somehow it turns out that building the skeleton, the frame of the system, is the most interesting part for me, while adding the muscle and fat (developing all sorts of utilities and supporting pieces) can feel like a chore, even though it gets done very quickly thanks to good knowledge of the system. Because of this you can end up with a very complete (in places excessively complete) core; spherical horses in a vacuum have always attracted me. There is a small amount of muscle (the main components of the plugin system: the service registry and the class factory) so that the model can at least somehow exist. But this implementation turned out entirely fat-free: there is nothing auxiliary in it at all. The skeleton of the system has been assembled, some muscle built up, and a kick administered so that it somehow moves from its place and becomes this article's material.

A project must either be released or terminated as early as possible, before it has eaten all the resources and quietly faded from attention. Because of this, and because the article's material turned out too big and perhaps difficult in places, and because I could not give it more attention for over six months, the part with plugins scattered across processes is still missing. Soon C++14 may appear, and then material written for C++11 may already be out of date. The unrealized part may well come out as a separate post... It would be based on the material of the article "Proxy/Stubs do-it-yourself", which I wanted to rework for C++11, adding interface marshaling and putting all the transport under it (implementing one of the IPC mechanisms).

Unfortunately and fortunately at the same time, the reader never takes away from a work the entire intent the author put into it. In the source code there are scattered seeds for the future, such as RemoteServiceInfo and others, which may well be missed when going through the material.

The source code is available on GitHub. It comes with a minimal build script and can serve as a source of examples and ideas for your own projects.

Thank you all for your attention!

Source: https://habr.com/ru/post/164699/

