
Universal adapter

Foreword


This article is the author's own translation of his English-language article God Adapter. You can also watch the video of the talk from the C++ Russia conference.


1 Abstract


This article presents a special adapter that allows you to wrap any object into another object, adding the required functionality. An adapted object keeps the same interface as the original, so it is completely transparent in use. The general concept is introduced step by step, using simple but powerful and interesting examples.


2 Introduction


WARNING. Almost all of the techniques described in this article involve dirty hacks and abnormal use of the C++ language. So if you are not tolerant of such perversions, please do not read this article.


The term universal adapter comes from its ability to universally add the required behavior to any object.



3 Task setting


A long time ago, I introduced the concept of a smart mutex to simplify access to shared data. The idea was simple: bind the mutex to the data and automatically call lock and unlock every time you access the data. The code looks like this:


    struct Data
    {
        int get() const { return val_; }
        void set(int v) { val_ = v; }

    private:
        int val_ = 0;
    };

    // the mutex is bound to the data
    SmartMutex<Data> d;

    // lock, set the value, unlock
    d->set(4);

    // lock, read the value, unlock
    std::cout << d->get() << std::endl;
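The article does not show how SmartMutex is implemented. A minimal sketch of one plausible implementation (the proxy class and member names here are my assumptions, not the author's code) is a temporary proxy returned from operator-> that holds the lock for as long as the proxy lives, i.e. until the end of the full expression:

```cpp
#include <mutex>

// Hypothetical sketch of SmartMutex: operator-> returns a temporary
// proxy that locks on construction and unlocks on destruction.
template<typename T>
class SmartMutex
{
    struct Proxy
    {
        Proxy(T& t, std::mutex& m) : t_(t), lock_(m) {}
        T* operator->() { return &t_; }

    private:
        T& t_;
        std::unique_lock<std::mutex> lock_;
    };

public:
    // d->set(4) expands to d.operator->().operator->()->set(4):
    // the proxy (and the lock) lives until the end of the statement
    Proxy operator->() { return Proxy(data_, mutex_); }

private:
    T data_;
    std::mutex mutex_;
};

struct Data
{
    int get() const { return val_; }
    void set(int v) { val_ = v; }
private:
    int val_ = 0;
};
```

This reproduces exactly the behavior discussed below: the lock is released only when the temporary proxy is destroyed, at the end of the whole expression.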

But there are several problems with this approach.


3.1 Blocking time


The lock is held for the duration of the current expression. Consider the following line:


 std::cout << d->get() << std::endl; 

Unlocking happens only after the entire expression completes, including the output to std::cout . This is an unnecessary waste of time that noticeably increases how long other threads wait to take the lock.


3.2 Possibility of deadlock


As a consequence of the first problem, there is a possibility of deadlock, caused by the implicit locking mechanism and the long time the lock is held while the current expression executes. Consider the following code snippet:


    int sum(const SmartMutex<Data>& x, const SmartMutex<Data>& y)
    {
        return x->get() + y->get();
    }

It is not at all obvious that this function potentially contains a deadlock. The reason is that the ->get() calls may acquire the two locks in either order, so two concurrent calls sum(a, b) and sum(b, a) can each take one lock and then wait forever for the other.
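For contrast — this is not the article's solution — the classic remedy for such ABBA orderings is to acquire both mutexes atomically, for example with C++17's std::scoped_lock. A sketch with explicit mutexes instead of SmartMutex:

```cpp
#include <mutex>

// data with an explicit mutex, for illustration only
struct Counter
{
    int value = 0;
    std::mutex m;
};

// Deadlock-prone variant (what SmartMutex effectively does implicitly):
// locks are taken one at a time, so sum(a, b) and sum(b, a) running
// concurrently can each hold one mutex and wait for the other.
//
// int sum(Counter& x, Counter& y)
// {
//     std::lock_guard<std::mutex> lx(x.m);
//     std::lock_guard<std::mutex> ly(y.m);
//     return x.value + y.value;
// }

// Safe variant: std::scoped_lock acquires both mutexes with a
// deadlock-avoidance algorithm, regardless of argument order.
int sum(Counter& x, Counter& y)
{
    std::scoped_lock lock(x.m, y.m);
    return x.value + y.value;
}
```

With implicit per-expression locks there is no place to apply such an ordering discipline, which is exactly the problem being described.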


Thus, we would like both to avoid holding the lock longer than necessary and to prevent the deadlocks described above.


4 Solution


The idea is quite simple: we need to move the proxy functionality inside the call itself. And to simplify interaction with our object, let's replace -> with . .


Simply put, we need to convert the Data object to another object:


    using Lock = std::unique_lock<std::mutex>;

    struct DataLocked
    {
        int get() const
        {
            Lock _{mutex_};
            return data_.get();
        }

        void set(int v)
        {
            Lock _{mutex_};
            data_.set(v);
        }

    private:
        mutable std::mutex mutex_;
        Data data_;
    };

In this case, we control the operations of getting and releasing the mutex inside the methods themselves. This prevents the problems mentioned earlier.


But writing this by hand is inconvenient, because the whole point of a smart mutex is to avoid extra code. The preferred way is to take the best of both approaches: less code and fewer problems at the same time. Therefore we need to generalize this solution and extend it to wider usage scenarios.


4.1 Generalized adapter


We need to somehow adapt our old Data implementation, which has no mutex, into an implementation containing a mutex, similar to the DataLocked class. To do this, let's wrap the method call so that its behavior can be transformed later:


    template<typename T_base>
    struct DataAdapter : T_base
    {
        // for now, wrap only the set method
        void set(int v)
        {
            T_base::call([v](Data& data) {
                data.set(v);
            });
        }
    };

Here we postpone the call to data.set(v) and pass it to T_base::call(lambda) . A possible implementation of T_base could be:


    struct MutexBase
    {
    protected:
        template<typename F>
        void call(F f)
        {
            Lock _{mutex_};
            f(data_);
        }

    private:
        Data data_;
        std::mutex mutex_;
    };

As you can see, we have split the monolithic DataLocked class into two classes: DataAdapter<T_base> , and MutexBase as one possible base class for the created adapter. Yet the actual behavior stays the same: we hold the mutex during the call to Data::set(v) .
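The two pieces compose like this. As presented so far the adapter wraps only set; in this sketch I also add a get wrapper by analogy (not shown in the article at this step) and change call's return type from void to auto, so the stored value can be read back and the result observed:

```cpp
#include <mutex>

struct Data
{
    int get() const { return val_; }
    void set(int v) { val_ = v; }
private:
    int val_ = 0;
};

using Lock = std::unique_lock<std::mutex>;

struct MutexBase
{
protected:
    // auto instead of void, so wrapped methods may return values
    template<typename F>
    auto call(F f)
    {
        Lock _{mutex_};
        return f(data_);
    }

private:
    Data data_;
    std::mutex mutex_;
};

template<typename T_base>
struct DataAdapter : T_base
{
    void set(int v)
    {
        T_base::call([v](Data& data) { data.set(v); });
    }

    // added by analogy with set, so the value can be read back
    int get()
    {
        return T_base::call([](Data& data) { return data.get(); });
    }
};

using DataLocked = DataAdapter<MutexBase>;
```

Each call locks the mutex only for the duration of the wrapped method, which is exactly the behavior the hand-written DataLocked class had.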


4.2 More generalization


Let's generalize our implementation further. Our MutexBase works only with Data . Let's improve that:


    template<typename T_base, typename T_locker>
    struct BaseLocker : T_base
    {
    protected:
        template<typename F>
        auto call(F f)
        {
            using Lock = std::lock_guard<T_locker>;
            Lock _{lock_};
            return f(static_cast<T_base&>(*this));
        }

    private:
        T_locker lock_;
    };

Here are a few generalizations:


  1. I do not use a specific implementation of the mutex. You can use either std::mutex or any object that implements BasicLockable .
  2. T_base is an instance of an object with the same interface. This could be Data or even an already adapted Data object, such as, for example, DataLocked .

Thus, we can define:


 using DataLocked = DataAdapter<BaseLocker<Data, std::mutex>>; 

4.3 Need more generalization


When it comes to generalization, it is impossible to stop. Sometimes we would also like to transform the input parameters. For this, I will change the adapter:


    template<typename T_base>
    struct DataAdapter : T_base
    {
        void set(int v)
        {
            T_base::call([](Data& data, int v) {
                data.set(v);
            }, v);
        }
    };

And the BaseLocker implementation is converted to:


    template<typename T_base, typename T_locker>
    struct BaseLocker : T_base
    {
    protected:
        template<typename F, typename... V>
        auto call(F f, V&&... v)
        {
            using Lock = std::lock_guard<T_locker>;
            Lock _{lock_};
            return f(static_cast<T_base&>(*this), std::forward<V>(v)...);
        }

    private:
        T_locker lock_;
    };

4.4 Universal Adapter


Finally, let's reduce the amount of boilerplate needed to write an adapter. Here templates reach their limit, and advanced macros with iteration come into play:


    #define DECL_FN_ADAPTER(D_name) \
        template<typename... V> \
        auto D_name(V&&... v) \
        { \
            return T_base::call([](auto& t, auto&&... x) { \
                return t.D_name(std::forward<decltype(x)>(x)...); \
            }, std::forward<V>(v)...); \
        }

DECL_FN_ADAPTER allows us to wrap any method named D_name . Now we only need to iterate over all the methods of the object and wrap each of them:


    #define DECL_FN_ADAPTER_ITERATION(D_r, D_data, D_elem) \
        DECL_FN_ADAPTER(D_elem)

    #define DECL_ADAPTER(D_type, ...) \
        template<typename T_base> \
        struct Adapter<D_type, T_base> : T_base \
        { \
            BOOST_PP_LIST_FOR_EACH(DECL_FN_ADAPTER_ITERATION, , \
                BOOST_PP_TUPLE_TO_LIST((__VA_ARGS__))) \
        };

Now we can adapt our Data using only one line:


    DECL_ADAPTER(Data, get, set)

    // convenient aliases for creating locked adapters
    template<typename T, typename T_locker = std::mutex, typename T_base = T>
    using AdaptedLocked = Adapter<T, BaseLocker<T_base, T_locker>>;

    using DataLocked = AdaptedLocked<Data>;

And that's it!
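To check the whole pipeline without pulling in Boost.Preprocessor, the specialization that DECL_ADAPTER generates can be written out by hand using DECL_FN_ADAPTER directly — the struct below is what DECL_ADAPTER(Data, get, set) would expand to:

```cpp
#include <mutex>
#include <utility>

struct Data
{
    int get() const { return val_; }
    void set(int v) { val_ = v; }
private:
    int val_ = 0;
};

// primary template, specialized per adapted type
template<typename T, typename T_base>
struct Adapter;

#define DECL_FN_ADAPTER(D_name) \
    template<typename... V> \
    auto D_name(V&&... v) \
    { \
        return T_base::call([](auto& t, auto&&... x) { \
            return t.D_name(std::forward<decltype(x)>(x)...); \
        }, std::forward<V>(v)...); \
    }

// hand-written expansion of DECL_ADAPTER(Data, get, set)
template<typename T_base>
struct Adapter<Data, T_base> : T_base
{
    DECL_FN_ADAPTER(get)
    DECL_FN_ADAPTER(set)
};

template<typename T_base, typename T_locker>
struct BaseLocker : T_base
{
protected:
    template<typename F, typename... V>
    auto call(F f, V&&... v)
    {
        std::lock_guard<T_locker> _{lock_};
        return f(static_cast<T_base&>(*this), std::forward<V>(v)...);
    }

private:
    T_locker lock_;
};

template<typename T, typename T_locker = std::mutex, typename T_base = T>
using AdaptedLocked = Adapter<T, BaseLocker<T_base, T_locker>>;

using DataLocked = AdaptedLocked<Data>;
```

Every call to DataLocked::set or DataLocked::get now goes through BaseLocker::call, taking and releasing the mutex around exactly one method invocation.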


5 Examples


We looked at a mutex-based adapter. Consider other interesting adapters.


5.1 Adapter for reference counting


Sometimes, for one reason or another, we need to use shared_ptr for our objects. It would be better to hide this behavior from the user: instead of operator-> I would like to use operator. — well, or at least a plain . . The implementation is very simple:


    template<typename T>
    struct BaseShared
    {
    protected:
        template<typename F, typename... V>
        auto call(F f, V&&... v)
        {
            return f(*shared_, std::forward<V>(v)...);
        }

    private:
        // the pointer must be initialized, otherwise the first call
        // would dereference a null shared_ptr
        std::shared_ptr<T> shared_ = std::make_shared<T>();
    };

    // adapter alias on top of BaseShared
    template<typename T, typename T_base = T>
    using AdaptedShared = Adapter<T, BaseShared<T_base>>;

Application:


    using DataRefCounted = AdaptedShared<Data>;

    DataRefCounted data;
    data.set(2);

5.2 Adapter combination


Sometimes it is a great idea to share data between threads. The general scheme is to combine shared_ptr with a mutex: shared_ptr solves the problems with the object's lifetime, while the mutex is used to prevent race conditions.


Since each adapted object has the same interface as the original one, we can simply combine several adapters:


    template<typename T, typename T_locker = std::mutex, typename T_base = T>
    using AdaptedSharedLocked = AdaptedShared<T, AdaptedLocked<T, T_locker, T_base>>;

With such use:


    using DataRefCountedWithMutex = AdaptedSharedLocked<Data>;

    DataRefCountedWithMutex data;

    // the value is read under the mutex, while shared_ptr
    // keeps the object alive for the duration of the call
    int v = data.get();

5.3 Asynchronous example: from callbacks to the future


Let's step into the future. For example, we have the following interface:


    struct AsyncCb
    {
        void async(std::function<void(int)> cb);
    };

But we would like to use the asynchronous interface of the future:


    struct AsyncFuture
    {
        Future<int> async();
    };

Where Future has the following interface:


    template<typename T>
    struct Future
    {
        struct Promise
        {
            Future future();
            void put(const T& v);
        };

        void then(std::function<void(const T&)>);
    };

Matching adapter:


    template<typename T_base, typename T_future>
    struct BaseCallback2Future : T_base
    {
    protected:
        template<typename F, typename... V>
        auto call(F f, V&&... v)
        {
            typename T_future::Promise promise;
            f(static_cast<T_base&>(*this), std::forward<V>(v)...,
                [promise](auto&& val) mutable {
                    promise.put(std::move(val));
                });
            return promise.future();
        }
    };

Application:


    DECL_ADAPTER(AsyncCb, async)

    // alias in the same style as AdaptedShared and AdaptedLocked
    template<typename T, typename T_future, typename T_base = T>
    using AdaptedCallback = Adapter<T, BaseCallback2Future<T_base, T_future>>;

    using AsyncFuture = AdaptedCallback<AsyncCb, Future<int>>;

    AsyncFuture af;
    af.async().then([](int v) {
        // handle the received value
    });
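A runnable sketch of the whole transformation. The Future here is a deliberately minimal single-threaded toy (callback stored in shared state — an assumption, since the article leaves Future's internals out), AsyncCb::async completes synchronously just to make the result observable, and the adapter specialization is written by hand instead of via DECL_ADAPTER:

```cpp
#include <functional>
#include <memory>
#include <optional>
#include <utility>

// minimal single-threaded Future/Promise, for illustration only
template<typename T>
struct Future
{
    struct State
    {
        std::optional<T> value;
        std::function<void(const T&)> cb;
    };

    struct Promise
    {
        Promise() : state_(std::make_shared<State>()) {}

        Future future() { return Future{state_}; }

        void put(const T& v)
        {
            state_->value = v;
            if (state_->cb)
                state_->cb(*state_->value);
        }

    private:
        std::shared_ptr<State> state_;
    };

    void then(std::function<void(const T&)> f)
    {
        if (state_->value)
            f(*state_->value);          // already completed: fire immediately
        else
            state_->cb = std::move(f);  // otherwise remember the callback
    }

    std::shared_ptr<State> state_;
};

// callback-based interface; completes synchronously in this sketch
struct AsyncCb
{
    void async(std::function<void(int)> cb) { cb(42); }
};

template<typename T_base, typename T_future>
struct BaseCallback2Future : T_base
{
protected:
    template<typename F, typename... V>
    auto call(F f, V&&... v)
    {
        typename T_future::Promise promise;
        f(static_cast<T_base&>(*this), std::forward<V>(v)...,
            [promise](auto&& val) mutable { promise.put(std::move(val)); });
        return promise.future();
    }
};

template<typename T, typename T_base>
struct Adapter;

// hand-written expansion of DECL_ADAPTER(AsyncCb, async)
template<typename T_base>
struct Adapter<AsyncCb, T_base> : T_base
{
    template<typename... V>
    auto async(V&&... v)
    {
        return T_base::call([](auto& t, auto&&... x) {
            return t.async(std::forward<decltype(x)>(x)...);
        }, std::forward<V>(v)...);
    }
};

template<typename T, typename T_future, typename T_base = T>
using AdaptedCallback = Adapter<T, BaseCallback2Future<T_base, T_future>>;

using AsyncFuture = AdaptedCallback<AsyncCb, Future<int>>;
```

The adapter appends a promise-feeding callback to the argument list of the wrapped call and returns the matching future, turning a callback-style method into a future-style one without touching AsyncCb itself.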

5.4 Asynchronous example: from the future to the callback


Since this direction leads us into the past, let it be a homework exercise.


5.5 Lazy adapter


Developers are lazy. So let's adapt any object for compatibility with developers.


In this context, laziness means creating an object on demand. Consider the following example:


    struct Obj
    {
        Obj();
        void action();
    };

    Obj obj;        // output: Obj::Obj
    obj.action();   // output: Obj::action
    obj.action();   // output: Obj::action

    AdaptedLazy<Obj> obj;   // no output!
    obj.action();           // output: Obj::Obj and Obj::action
    obj.action();           // output: Obj::action

That is, the idea is to delay the creation of the object until the last possible moment. If the user decides to use the object, we must create it and call the appropriate method. The base class implementation can look like this:


    template<typename T>
    struct BaseLazy
    {
        template<typename... V>
        BaseLazy(V&&... v)
        {
            // remember the constructor arguments instead of constructing now
            state_ = [v...]() mutable {
                return T{std::move(v)...};
            };
        }

    protected:
        using Creator = std::function<T()>;

        template<typename F, typename... V>
        auto call(F f, V&&... v)
        {
            auto* t = std::get_if<T>(&state_);
            if (t == nullptr)
            {
                // the object has not been created yet: create it now
                state_ = std::get<Creator>(state_)();
                t = std::get_if<T>(&state_);
            }
            return f(*t, std::forward<V>(v)...);
        }

    private:
        // the variant holds either the "how to create" function
        // or the already constructed object itself
        std::variant<Creator, T> state_;
    };

    template<typename T, typename T_base = T>
    using AdaptedLazy = Adapter<T, BaseLazy<T_base>>;

And now we can create a heavy lazy object and initialize it only when necessary. At the same time, it is completely transparent to the user.
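The lazy base class can be checked with a hand-written adapter (again avoiding Boost.Preprocessor); the static construction counter is added here purely to observe the laziness and is not part of the article's code:

```cpp
#include <functional>
#include <utility>
#include <variant>

struct Obj
{
    static int constructed;   // counts constructor calls, to observe laziness

    Obj() { ++constructed; }
    int action() { return 42; }
};

int Obj::constructed = 0;

template<typename T>
struct BaseLazy
{
    template<typename... V>
    BaseLazy(V&&... v)
    {
        // remember the constructor arguments instead of constructing now
        state_ = [v...]() mutable { return T{std::move(v)...}; };
    }

protected:
    using Creator = std::function<T()>;

    template<typename F, typename... V>
    auto call(F f, V&&... v)
    {
        auto* t = std::get_if<T>(&state_);
        if (t == nullptr)
        {
            // first use: actually construct the object
            state_ = std::get<Creator>(state_)();
            t = std::get_if<T>(&state_);
        }
        return f(*t, std::forward<V>(v)...);
    }

private:
    std::variant<Creator, T> state_;   // either "how to create" or the object
};

// hand-written expansion of DECL_ADAPTER(Obj, action)
template<typename T_base>
struct ObjAdapter : T_base
{
    template<typename... V>
    auto action(V&&... v)
    {
        return T_base::call([](auto& t, auto&&... x) {
            return t.action(std::forward<decltype(x)>(x)...);
        }, std::forward<V>(v)...);
    }
};

using LazyObj = ObjAdapter<BaseLazy<Obj>>;
```

Declaring a LazyObj constructs nothing; the underlying Obj is built exactly once, on the first call that goes through BaseLazy::call.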


6 Overhead


Let's look at the performance of the adapter. After all, we create lambdas and pass them into other objects, so it would be extremely interesting to know the overhead introduced by such adapters.


To do this, consider a simple example: we wrap the object's calls without changing anything, i.e. we create an identity adapter, and try to measure the overhead in this case. Instead of measuring performance directly, let's just look at the assembly code generated by different compilers.


First, let's create a simplified version of our adapter that works only with a method named on :


    #include <utility>

    template<typename T, typename T_base>
    struct Adapter : T_base
    {
        template<typename... V>
        auto on(V&&... v)
        {
            return T_base::call([](auto& t, auto&&... x) {
                return t.on(std::forward<decltype(x)>(x)...);
            }, std::forward<V>(v)...);
        }
    };

BaseValue is our identity base class, which forwards calls directly to a value of type T :


    template<typename T>
    struct BaseValue
    {
    protected:
        template<typename F, typename... V>
        auto call(F f, V&&... v)
        {
            return f(t, std::forward<V>(v)...);
        }

    private:
        T t;
    };

And here is our test class:


    struct X
    {
        int on(int v) { return v + 1; }
    };

    // direct call, without the adapter
    int f1(int v)
    {
        X x;
        return x.on(v);
    }

    // the same call through the adapter
    int f2(int v)
    {
        Adapter<X, BaseValue<X>> x;
        return x.on(v);
    }

Below are the results obtained with the godbolt online compiler:


GCC 4.9.2


    f1(int):
        leal    1(%rdi), %eax
        ret
    f2(int):
        leal    1(%rdi), %eax
        ret

Clang 3.5.1


    f1(int):                  # @f1(int)
        leal    1(%rdi), %eax
        retq
    f2(int):                  # @f2(int)
        leal    1(%rdi), %eax
        retq

As you can see, there is no difference between f1 and f2 , which means the compilers are able to optimize away and completely eliminate the overhead of creating and passing the lambda object.


7 Conclusion


This article has presented an adapter that converts an object into another object with additional functionality, leaving the interface unchanged and adding no conversion or call overhead. The adapter base classes are universal transformers that can be applied to any object; they are used to improve and further extend the adapter's functionality. Different combinations of base classes make it easy to create rather complex objects without additional effort.


This powerful and entertaining technique will be used and expanded in subsequent articles.


Useful links


[1] github.com/gridem/GodAdapter
[2] bitbucket.org/gridem/godadapter
[3] Blog: God Adapter
[4] C++ Russia talk: Universal Adapter
[5] C++ Russia video: Universal Adapter
[6] Habrahabr: Useful multithreading idioms of C++
[7] godbolt online compiler



Source: https://habr.com/ru/post/340314/

