
A bunch of ways to reuse code in Rust

I found this article by Alexis Beingessner to be the most understandable description of Rust's type system and of what can be done with it. I hope this translation will be useful to someone. Don't be put off by the fact that obvious things come first: by the end you can easily drown. The article is huge and will most likely be split into chapters. Translated fairly freely; the author's style is preserved. - translator's note

(article written about Rust 1.7 stable)

There is a lot of stuff in the Rust type system. As far as I can tell, practically all of that complexity is there to let you express a program in the most general form possible. And people keep asking for more! I have always had trouble keeping the trickiest parts straight, so this post is mostly a reminder to myself. That said, I also like being useful to others, so it includes things I am unlikely to forget but which some readers may not know.
This article will not give an exhaustive description of the syntax or of every detail of the features it covers. It explains why things work the way they do, since those are the parts I always forget. If you found this article while trying to learn Rust from scratch, you should definitely start with the Book. Along the way I will also clarify a few theoretical aspects of what is going on.
Most likely this article is full of mistakes, and it does not claim to be an official guide. It is just a collection of what I dug up over a week while looking for a new job.

Brief description of code reuse principles



The desire to use a piece of code more than once has existed since the earliest times, when the very first computers produced their first useful bit of output. I honestly have no idea what code reuse looked like back then. Cheat sheets? A stack of punched cards? No idea. What interests me is how it is done now.

The best-known form of code reuse is, of course, the function. Functions are familiar to everyone. However, depending on the language you write in and what you need to do, plain functions may not be enough. You may need something that goes by the modern names of "metaprogramming" (code that writes code) or "polymorphism" (code that can be applied to different types of data).

Technically these are completely different principles, but they often have to be used together. Modern languages implement them in many guises: macros, templates, generics, inheritance, function pointers, interfaces, overloading, unions, and so on. Yet all of that is just semantic variety on top of three basic strategies: monomorphism, virtualization, and enumeration.

Monomorphism


Monomorphism is essentially the practice of copy-pasting a piece of code, with minor changes in each new copy. Its main benefit is the ability to tailor each copy perfectly to its use site, without dragging the compiler through complex indirection. Its main drawback is the flip side of the same coin: in the worst case we end up with a rather fat binary, because many nearly identical pieces of code are physically copied to every place they are used. On top of the bigger binary and longer compile times, this puts a monstrous load on the CPU's instruction cache. Strictly speaking, no code is actually being reused at all!

The semantic limitation of monomorphism is that it cannot (directly) handle several different types of data at the same time. For example, say I want to build a job queue that accepts various tasks and executes them in the order received. As long as all the tasks are identical, monomorphism solves this easily. The trouble starts when the tasks differ: it is no longer clear how to express that with monomorphism alone. Hence the name: monomorphism is abstraction over code that does exactly one thing.

Common examples of monomorphism: C++ templates, C macros, go generate, C# generics. Most of them work at compile time, except C# generics, which are monomorphized at run time (everything produced at compile time is just a template). Monomorphization is also hugely popular as an optimization, in the form of inlining, both in ahead-of-time and in JIT compilers.
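
To make the idea concrete, here is a minimal sketch of my own (not from the original article) of monomorphism in Rust itself: one generic function in the source, two specialized copies in the compiled program.

fn largest<T: PartialOrd>(a: T, b: T) -> T {
    // One source-level function...
    if a > b { a } else { b }
}

fn main() {
    // ...but the compiler stamps out two specialized copies here,
    // largest::<u32> and largest::<f64>, exactly as if we had
    // copy-pasted the function by hand.
    println!("{}", largest(1u32, 2u32));
    println!("{}", largest(1.5f64, 0.5f64));
}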

Virtualization


Virtualization is the exact opposite of monomorphism, and every developer arrives at it after enough copy-paste: bolt some variability onto the program. Both data and executable code can be virtualized, after which all the user of the virtual interface sees is a pointer to something.

Virtualization lets the same code work with types of different sizes and layouts in exactly the same way. Virtualizing a function lets it have alternative behavior without copy-paste. The job-queue example that monomorphism breaks its teeth on is solved perfectly well by virtualization: any task that needs to be performed is just a pointer to a function that can be looked up and called. Need per-task data as well? No problem, add another pointer to the data and load it along with the function.

The main drawback of virtualization is that it usually hurts performance: the variability of the code leads to frequent heap allocation, chasing pointers (the cache is not amused), and figuring out at run time what exactly we are dealing with at the moment.
However, virtualization can sometimes be more efficient than monomorphization! Every time a function is called statically, the compiler is able to inline it, but it does not always do so because, as already mentioned, inlining bloats and trashes the binary. For the same reason, forcibly virtualizing rarely used functions can be beneficial. For example, you do not want exception handlers running all the time, so it is better to virtualize them, keeping the instruction cache clean for the error-free execution path.

Common examples of virtualization include function pointers and void pointers in C, callbacks, inheritance, Java generics, and JavaScript prototypes. Note that in many of these examples there is no real difference between virtualizing data and virtualizing executable code. For example, if I have a pointer to an Animal, either a Cat or a Dog may be behind it, and when I ask this Animal to speak(), how does it know whether to say "Woof" or "Meow"?

The usual way to implement virtualization is for every object of every type in the hierarchy to secretly store a pointer to the various pieces of implementation that may be needed while the program runs, collectively called a "vtable". A vtable usually holds a bundle of function pointers (including the speak() from the example above), but it can also hold the object's size, alignment, and concrete type.
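
As a minimal illustration (my own sketch, written in the article's Rust-1.7-era syntax; modern Rust would spell the element type Box<dyn Animal>), the Animal example looks like this with trait objects:

trait Animal {
    fn speak(&self);
}

struct Cat;
struct Dog;

impl Animal for Cat {
    fn speak(&self) { println!("Meow"); }
}
impl Animal for Dog {
    fn speak(&self) { println!("Woof"); }
}

fn main() {
    // Each Box<Animal> carries a hidden pointer to the vtable of the
    // concrete type behind it, so the right speak() is found at run time.
    let animals: Vec<Box<Animal>> = vec![Box::new(Cat), Box::new(Dog)];
    for animal in &animals {
        animal.speak(); // "Meow", then "Woof"
    }
}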

Enumerations


Enums are a trade-off between virtualization and monomorphism. At run time, monomorphic code can be only one thing, no options; virtualized code can be anything at all. Enumerated code can be any one of a fixed list of options. Usually an enum works by carrying an integer "tag" that identifies which option from the list is currently in play.

For example, our job queue, implemented with an enum, might define three kinds of tasks: Create, Modify, and Delete. To use, say, Create, you just send the queue the data for a creation, marked with the tag that corresponds to Create. The queue sees the tag, understands from it what is being asked and what lies in the data, and runs the corresponding code.
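
A minimal sketch of such a queue in Rust (my own illustration; the task names and fields are made up):

enum Job {
    Create { name: String, contents: String },
    Modify { name: String, contents: String },
    Delete { name: String },
}

fn run(job: Job) {
    // The tag tells one single piece of code which variant it holds.
    match job {
        Job::Create { name, contents } => println!("create {}: {}", name, contents),
        Job::Modify { name, contents } => println!("modify {}: {}", name, contents),
        Job::Delete { name }           => println!("delete {}", name),
    }
}

fn main() {
    let queue = vec![
        Job::Create { name: "a.txt".to_string(), contents: "hello".to_string() },
        Job::Delete { name: "a.txt".to_string() },
    ];
    for job in queue {
        run(job);
    }
}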

Like virtualization, enumeration can handle different types of data with the same piece of code, which no longer has to be copied. Like monomorphism, there is no indirection through pointers: only the tag varies. Enums are also much easier to optimize.

It should be noted, though, that even when the variability goes unused, an enumerated type grows substantially, because every value has to reserve space for the largest variant in the enumeration. Delete needs only a name, but Create asks for a name, type, author, contents, and so on; even if the queue happens to be used mostly for deletions, it will demand memory as though it were constantly creating.
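
This is easy to see directly (a small sketch of mine, with made-up variants; the exact number depends on the target):

use std::mem::size_of;

enum Job {
    Delete(u8),        // the small variant...
    Create([u8; 64]),  // ...still pays for the big one
}

fn main() {
    // Every Job value reserves space for the largest variant plus the tag,
    // even if the queue consists almost entirely of Deletes.
    println!("{}", size_of::<Job>()); // at least 65 here
}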

And of course you have to know the full range of possibilities in advance; that is the main limitation of enums. Both monomorphism and virtualization can be extended at any time if needed, which cannot be said of an enumeration: a template can be instantiated with a new type, a class can be inherited from, but an enum is burned into the code for good. Better not to try to cheat and extend one: you are more likely to break the code of everyone who already uses it.

So this strategy is somewhat underappreciated. Many languages have it in the form of enum, but its usefulness is severely limited by the inability to associate data with each option of the enumeration. C lets you pair an enum with a union of several types, but deciding which of those types the data actually is falls on the user of the enumeration. Many functional languages have tagged unions, a marriage of enums and C unions, which let you glue arbitrary data onto the different options of an enumeration.

What about Rust?


So much for what other languages can do; what can we do in ours? In Rust, everything rests on three pillars:

Macros


These are simple: pure code reuse. In Rust, macros operate on the abstract syntax tree (AST): you feed a macro a piece of syntax tree and get another tree back. Macros have no type information, nothing like "hmm, this token looks like the name of something" (in fact, they do not have much of anything - translator's note).

Macros are usually used for one of two reasons: to extend the language itself, or to stamp out copies of existing code. The former is used openly in the Rust standard library (println!, thread_local!, vec!, try!, and so on):

/// Creates a `Vec` containing the arguments.
///
/// `vec!` allows `Vec`s to be defined with the same syntax as array expressions.
/// There are two forms of this macro:
///
/// - Create a `Vec` containing a given list of elements:
///
/// ```
/// let v = vec![1, 2, 3];
/// assert_eq!(v[0], 1);
/// assert_eq!(v[1], 2);
/// assert_eq!(v[2], 3);
/// ```
///
/// - Create a `Vec` from a given element and size:
///
/// ```
/// let v = vec![1; 3];
/// assert_eq!(v, [1, 1, 1]);
/// ```
///
/// Note that unlike array expressions this syntax supports any element
/// that implements `Clone`, and the number of elements does not have to
/// be a constant.
///
/// `clone()` will be used to duplicate the expression, so be careful with
/// types that have a nonstandard `Clone` implementation. For example,
/// `vec![Rc::new(1); 5]` creates five references to the same boxed integer,
/// not five independently boxed integers.
#[cfg(not(test))]
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
macro_rules! vec {
    ($elem:expr; $n:expr) => (
        $crate::vec::from_elem($elem, $n)
    );
    ($($x:expr),*) => (
        <[_]>::into_vec($crate::boxed::Box::new([$($x),*]))
    );
    ($($x:expr,)*) => (vec![$($x),*])
}

and the latter is used internally to implement piles of nearly identical interfaces:

// Stamps out the boring widening conversions
// between integer types.
// Without a macro this would be a pile of
// nearly identical impl blocks.
macro_rules! impl_from {
    ($Small: ty, $Large: ty) => {
        impl From<$Small> for $Large {
            fn from(small: $Small) -> $Large {
                small as $Large
            }
        }
    }
}

// Unsigned -> unsigned
impl_from! { u8, u16 }
impl_from! { u8, u32 }
impl_from! { u8, u64 }
// ... and so on for every other widening conversion ...

As I see it, macros are the worst way to reuse code. Hygiene is supposed to help (variable names used inside a macro do not leak out of it), but people get too carried away with them (using unsafe inside macros produces strange side effects - care to guess which ones?). At heart, the macro processor is a glorified regular expression engine (if you squint past the fact that parsing expr and tt is not trivial at all), and nobody likes reading regexes!

More importantly, IMHO, macros here are essentially metaprogramming with dynamic typing. The compiler does not check that the body of a macro matches its signature; it expands the macro, gets something out, and only then runs the type check, which leads to the classic problem of dynamic languages: errors show up late. So we can get Rust's analogue of the immortal "undefined is not a function":

macro_rules! make_struct {
    (name: ident) => {
        struct name {
            field: u32,
        }
    }
}

make_struct! { Foo }

<anon>:10:16: 10:19 error: no rules expected the token `Foo`
<anon>:10 make_struct! { Foo }
                         ^~~
playpen: application terminated with error code 101

What is the mistake here? Of course, I forgot the $: the macro treats name not as a variable but as a literal, and always emits
 struct name { field: u32 } 

(To be honest, as reasons to be cool towards macros go, this one is rather weak - translator's note.)
Furthermore, if an ordinary error occurs inside macro-generated code, the logs turn into indigestible porridge:
use std::fs::File;

fn main() {
    let x = try!(File::open("Hello"));
}

<std macros>:5:8: 6:42 error: mismatched types:
 expected `()`,
    found `core::result::Result<_, _>`
(expected (),
    found enum `core::result::Result`) [E0308]
<std macros>:5 return $ crate:: result:: Result:: Err (
<std macros>:6 $ crate:: convert:: From:: from ( err ) ) } } )
<anon>:4:13: 4:38 note: in this expansion of try! (defined in <std macros>)
<std macros>:5:8: 6:42 help: see the detailed explanation for E0308

Well... on the plus side, as in other dynamically typed languages, there is much more flexibility in what you can express. In short, macros are beautiful in the places where their use is justified; they are just... fragile, let's say.

Worth mentioning: syntax extensions and code generation


Of course, macros have limits. They cannot execute arbitrary code at compile time. That is good for safety and for reproducible builds, but sometimes it gets in the way. In Rust this can be worked around in two ways: syntax extensions (also known as procedural macros) and code generation (build.rs) (the unstable branch of the language also has compiler plugins - translator's note). Both give you a green light to run anything in order to generate anything.

Syntax extensions look like macros or annotations, but they can ask the compiler to perform arbitrary actions in order to (ideally) transform the syntax tree. A build.rs file is understood by the Cargo package manager as something to be built and run every time the package is built. Obviously, it is allowed to poke around the project however it pleases. The expectation is that it is used for code generation that macros cannot reach.

I could add a couple of examples here, but I am not really into these features and am rather indifferent to them. Code generation, fine, whatever. Besides, I have been writing this article for days now and am thoroughly fed up (author's caps removed - translator's note).
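
For the curious, here is a minimal sketch of the build.rs route anyway (not from the original article; it follows the standard Cargo pattern of generating a file into OUT_DIR, which the crate then pulls in with include!(concat!(env!("OUT_DIR"), "/hello.rs"))):

// build.rs
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
    // Cargo runs this before compiling the crate and sets OUT_DIR for us.
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("hello.rs");
    let mut f = File::create(&dest).unwrap();
    // Generate some Rust source for the crate to include.
    f.write_all(b"pub fn generated_hello() -> &'static str { \"hello from build.rs\" }")
        .unwrap();
}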

Enumerations


These are exactly the tagged unions described above.
You most often meet them as Option and Result, which express the successful or unsuccessful outcome of something. That is, literally an enumeration with the options "it worked" and "it broke".

You can write your own enumerations. Say you need networking code that works with ipv4 and ipv6. You definitely do not need support for a hypothetical ipv8, and even if you did, who knows what the code should even do with it. So we write an enumeration for exactly what exists:

enum IpAddress {
    V4(Ipv4Address),
    V6(Ipv6Address),
}

fn connect(addr: IpAddress) {
    // Branch on which kind of address we were actually handed
    match addr {
        IpAddress::V4(ip) => connect_v4(ip),
        IpAddress::V6(ip) => connect_v6(ip),
    }
}

That's it. From then on you work with the general IpAddress type, and whoever needs to know the exact variant inside can get at it with a match, as shown above.

Traits


Up to this point everything was simple; now it gets more complicated and more interesting.
In short, traits in Rust are meant to describe everything else. Monomorphization, virtualization, reflection, operator overloading, type conversions, copy semantics, thread safety, higher-order functions, iterators for loops: this whole colourful panopticon works through traits. And new language features will, going forward, most likely be expressed through traits as well.

In general, traits are interfaces. No, seriously.

struct MyType {
    data: u32,
}

// An interface, i.e. a trait
trait MyTrait {
    fn foo(&self) -> u32;
}

// Implement the interface for the type
impl MyTrait for MyType {
    fn foo(&self) -> u32 {
        self.data
    }
}

fn main() {
    let mine = MyType { data: 0 };
    println!("{}", mine.foo());
}

Very often, working with traits is no different from working with interfaces in Java or C#, but sometimes the line gets crossed. Traits were designed to be architecturally more flexible. In C# and Java, only the owner of MyType may implement MyTrait for MyType. In Rust, the owner of MyTrait may do so as well. This lets the authors of libraries that define traits also provide implementations of them for, say, types from the standard library.
Of course, letting such a feature run loose is fraught with trouble: you never know who will turn up to implement what, and where. So this happiness is reined in by making implementations visible only to code that has the corresponding trait in scope. This, by the way, is where all the I/O headaches come from when you forget to import Read and Write explicitly.
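
A tiny sketch of that last point (mine, not from the article): the call below compiles only while the Read import is present.

use std::fs::File;
use std::io::Read; // remove this line and read_to_string() is "not found",
                   // even though File implements Read either way

fn main() {
    let mut contents = String::new();
    File::open("Cargo.toml").unwrap()
        .read_to_string(&mut contents).unwrap();
    println!("read {} bytes", contents.len());
}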

Related: coherence


Those familiar with Haskell will see a lot in common between traits and type classes. They also have every right to ask a perfectly obvious and reasonable question: what happens if we implement the same trait for the same type several times in different places? This is the coherence question. In a coherent world there is only one implementation per trait-type pair. To achieve coherence, Rust imposes more restrictions than Haskell does: you must own either the trait or the type it is implemented for, and there must be no circular dependencies.

 impl Trait for Type 

That sounds nice, simple and clear, but it is a little bit of a lie, since you can also write something like:

 impl Trait for Box<MyType> 

even if you have no idea where Trait and MyType physically live. Handling such manipulations correctly is the hard part of coherence. It is governed by the so-called "orphan rules", which demand that across the entire dependency graph only one crate may contain the implementation of a given trait for a given combination of types (more on combinations below - translator's note). As a result, two different libraries with conflicting implementations simply will not compile when imported together. This is occasionally annoying to the point of making Niko Matsakis want to swear out loud (Niko Matsakis is one of Rust's core developers).

Funnily enough, coherence violations inside the Rust standard library itself (which is glued together from several non-overlapping parts) are quite common, so some traits, implementations and types pop up in rather unexpected places. Funnier still, shuffling the types around did not always help, which is how the #[fundamental] crutch was born: an attribute that orders the compiler to look the other way about the incoherence.

Generics


(I realize the strictly correct term would be "generalized types", but that is, first, long and, second, less clear, since everyone uses the word "generics" anyway - translator's note.)

So how do you use traits to reuse code? Here Rust gives us a choice! We can monomorphize, or we can virtualize. In the overwhelming majority of cases the standard library chooses monomorphization, as does most of the code I have seen. This is probably because monomorphization is generally more efficient as well as strictly more general. Nevertheless, a monomorphic interface can always be virtualized after the fact, which I will show later.

Monomorphic interfaces are expressed in Rust with generics:

// A concrete type; nothing generic about it.
struct Concrete {
    data: u32,
}

// A generic type. The `<..>` is where the generic arguments go.
// Unlike `Concrete`, `Generic` is not a complete type by itself;
// it is more like a type constructor that still has to be fed a `u32`
// or some other type.
struct Generic<T> {
    data: T,
}

// A plain impl for the concrete type
impl Concrete {
    fn new(data: u32) -> Concrete {
        Concrete { data: data }
    }

    fn is_big(&self) -> bool {
        self.data > 120
    }
}

// An impl for one particular Generic.
// Note that we name a concrete type parameter here;
// these methods exist only when T is exactly u32
// and are not available for any other Generic.
// (roughly, a specialization of the type - translator's note)
impl Generic<u32> {
    fn is_big(&self) -> bool {
        self.data > 120
    }
}

// An impl for every choice of T.
// The `<T>` on "impl" introduces T for the whole block,
// so these methods exist for any Generic<T> whatsoever.
impl<T> Generic<T> {
    fn new(data: T) -> Generic<T> {
        Generic { data: data }
    }

    fn get(&self) -> &T {
        &self.data
    }
}

// A concrete trait.
trait Clone {
    fn clone(&self) -> Self;
}

// A generic trait.
// Being generic means it can be implemented several times
// for the same type, once per choice of the parameter.
trait Equal<T> {
    fn equal(&self, other: &T) -> bool;
}

// The concrete trait for the concrete type
impl Clone for Concrete {
    fn clone(&self) -> Self {
        Concrete { data: self.data }
    }
}

// The generic trait for the concrete type
impl Equal<Concrete> for Concrete {
    fn equal(&self, other: &Concrete) -> bool {
        self.data == other.data
    }
}

// We do not own u32, but we do own these traits, so this is allowed!
impl Clone for u32 {
    fn clone(&self) -> Self {
        *self
    }
}

impl Equal<u32> for u32 {
    fn equal(&self, other: &u32) -> bool {
        *self == *other
    }
}

// A second implementation of the same trait for the same type!
impl Equal<i32> for u32 {
    fn equal(&self, other: &i32) -> bool {
        if *other < 0 {
            false
        } else {
            *self == *other as u32
        }
    }
}

// The generic trait, implemented generically, for the concrete type
impl<T: Equal<u32>> Equal<T> for Concrete {
    fn equal(&self, other: &T) -> bool {
        other.equal(&self.data)
    }
}

// The concrete trait, implemented generically, for the generic type.
// Note the extra requirement: this only works when `T`
// itself implements `Clone`! That is a *trait bound*.
// (a constraint on the type parameter - translator's note)
impl<T: Clone> Clone for Generic<T> {
    fn clone(&self) -> Self {
        Generic { data: self.data.clone() }
    }
}

// The generic trait, implemented generically, for the generic type.
// The second type parameter here is U.
impl<T: Equal<U>, U> Equal<Generic<U>> for Generic<T> {
    fn equal(&self, other: &Generic<U>) -> bool {
        self.data.equal(&other.data)
    }
}

// Generic functions, with trait bounds of their own.
impl Concrete {
    fn my_equal<T: Equal<u32>>(&self, other: &T) -> bool {
        other.equal(&self.data)
    }
}

impl<T> Generic<T> {
    // Note the subtlety: we call `equal` backwards here,
    // asking whether `other` equals `self`.
    // (`x == y` is not necessarily the same as `y == x`.) Why not simply
    // require `T: Equal<U>` instead? Written this way, the constraint
    // lands on `U`, while `T` itself stays completely unconstrained!
    // This turns out to matter.
    fn my_equal<U: Equal<T>>(&self, other: &Generic<U>) -> bool {
        other.data.equal(&self.data)
    }
}

Phew.
As you can see, as soon as we need to define interfaces and their implementations, we have a rich choice of ways to generalize them. And under the hood, as I said, the compiler monomorphizes all of it. At least before the first optimization pass, we effectively go from the former to the latter:

// Before
struct Generic<T> { data: T }

impl<T> Generic<T> {
    fn new(data: T) -> Generic<T> {
        Generic { data: data }
    }
}

fn main() {
    let thing1 = Generic::new(0u32);
    let thing2 = Generic::new(0i32);
}

// After
struct Generic_u32 { data: u32 }

impl Generic_u32 {
    fn new(data: u32) -> Generic_u32 {
        Generic_u32 { data: data }
    }
}

struct Generic_i32 { data: i32 }

impl Generic_i32 {
    fn new(data: i32) -> Generic_i32 {
        Generic_i32 { data: data }
    }
}

fn main() {
    let thing1 = Generic_u32::new(0u32);
    let thing2 = Generic_i32::new(0i32);
}

You may be surprised (or not), but some important functions get inlined all over the place. For example, brson found more than 1,700 copies of Option::map in the Servo codebase. To be fair, virtualizing all of those calls would completely kill runtime performance.

Also important: type inference and the turbofish operator


(I could not come up with a better translation for "turbofish"; suggestions welcome - translator's note.)
Generics in Rust are inferred automatically. When the type is pinned down somewhere, everything works like clockwork. When it is not, the fireworks begin:

// Vec::new() does not know what element type it will hold,
// and at this point it does not need to. For now `x` is just
// a `Vec` of something-or-other.
let mut x = Vec::new();

// Pushing a `u8` into `x` pins down the `T` in `Vec<T>`
x.push(0u8);
x.push(10);
x.push(20);

// `collect` needs a target type. It works with anything that
// implements `FromIterator`, be it Vec or VecDeque.
// Nothing on the right-hand side tells `collect` what to build
// (unlike `Vec::new()` above, nothing downstream constrains it),
// so we have to annotate the variable with the type we want.
let y: Vec<u8> = x.clone().into_iter().collect();

// Alternatively, the type can be supplied inline
// with the "turbofish" operator `::<>`!
let y = x.clone().into_iter().collect::<Vec<u8>>();


Trait objects


So how do we virtualize? How do we erase the information about the concrete type and become just a faceless "something"? In Rust this is done with trait objects. You simply say that a given value is an instance of a trait, and the compiler does the rest. Of course, we also have to abstract over the size of the value, so we additionally hide it behind a pointer such as &, &mut, Box, Rc or Arc:

trait Print {
    fn print(&self);
}

impl Print for i32 {
    fn print(&self) { println!("{}", self); }
}

impl Print for i64 {
    fn print(&self) { println!("{}", self); }
}

fn main() {
    // Plain static dispatch
    let x = 0i32;
    let y = 10i64;
    x.print(); // 0
    y.print(); // 10

    // Box<Print> is a trait object: it can hold any value whose type
    // implements Print. Note that Box<Print> is not `Box<T: Print>`;
    // it is its own concrete type, just `Box<Print>`.
    // Putting `data` behind `Box<Print>` erases which concrete type
    // each element actually is!
    // Even though i32 and i64 are different types with different sizes,
    // the array below stores both of them uniformly.
    let data: [Box<Print>; 2] = [Box::new(20i32), Box::new(30i64)];

    // Virtual dispatch through the trait objects.
    for val in &data {
        val.print(); // 20, 30
    }
}

Note that the requirement to hide the concrete type behind a pointer has more consequences than might appear at first glance. Here, for example, is our old friend:

trait Clone {
    fn clone(&self) -> Self;
}

The trait declares a function that returns an instance of its own type by value.

fn main() {
    let x: &Clone = ...; // pretend something is here
    let y = x.clone();   // and clone it... into what, exactly?
}

But how much space should be reserved on the stack for y? What type does it even have?
The answer is that we do not know at compile time. Which means that a Clone trait object is essentially meaningless. More precisely: a trait cannot be turned into a trait object if it refers to its own type by value (rather than behind a pointer - translator's note).

Trait objects are implemented in Rust in a somewhat unexpected way. Recall that virtual function tables are the usual mechanism for this sort of thing. There are at least two annoyances with the classic approach.
The first is that everything lives behind the extra pointer whether it needs to or not: once a type is declared virtual, every instance of that type has to store the vtable pointer.

The second is that fetching the functions you need out of a vtable is not as trivial as it sounds. That is because interfaces are, in general, a special case of multiple inheritance (which C++ supports in its full glory). As an example, here is a little menagerie:

trait Animal { }   // an animal
trait Feline { }   // a cat-like animal
trait Pet { }      // a pet

// An animal, feline, and a pet
struct Cat { }
// An animal and a pet
struct Dog { }
// An animal and feline, but certainly no pet
struct Tiger { }

How do we lay out the function pointers for mixed types such as Animal + Pet or Animal + Feline? Animal + Pet covers Cat and Dog. Let's try stacking their vtables like this:

    Cat vtable           Dog vtable           Tiger vtable
 +----------------+   +----------------+   +----------------+
 |   type stuff   |   |   type stuff   |   |   type stuff   |
 +----------------+   +----------------+   +----------------+
 |  Animal stuff  |   |  Animal stuff  |   |  Animal stuff  |
 +----------------+   +----------------+   +----------------+
 |   Pet stuff    |   |   Pet stuff    |   |  Feline stuff  |
 +----------------+   +----------------+   +----------------+
 |  Feline stuff  |
 +----------------+


But now Cat and Tiger disagree about where the Feline part lives. Fine, swap Feline and Pet in Cat's table:

    Cat vtable           Dog vtable           Tiger vtable
 +----------------+   +----------------+   +----------------+
 |   type stuff   |   |   type stuff   |   |   type stuff   |
 +----------------+   +----------------+   +----------------+
 |  Animal stuff  |   |  Animal stuff  |   |  Animal stuff  |
 +----------------+   +----------------+   +----------------+
 |  Feline stuff  |   |   Pet stuff    |   |  Feline stuff  |
 +----------------+   +----------------+   +----------------+
 |   Pet stuff    |
 +----------------+


Ugh, now Cat and Dog disagree about where the Pet part lives. Okay, pad the layout so every trait gets its own fixed slot:

    Cat vtable           Dog vtable           Tiger vtable
 +----------------+   +----------------+   +----------------+
 |   type stuff   |   |   type stuff   |   |   type stuff   |
 +----------------+   +----------------+   +----------------+
 |  Animal stuff  |   |  Animal stuff  |   |  Animal stuff  |
 +----------------+   +----------------+   +----------------+
 |  Feline stuff  |   |    (empty)     |   |  Feline stuff  |
 +----------------+   +----------------+   +----------------+
 |   Pet stuff    |   |   Pet stuff    |
 +----------------+   +----------------+


Good. Except it does not scale. Every interface now needs its own globally unique offset, so that any vtable could in principle hold any interface, which also means every table has to reserve room for every interface in existence. Yes, the unused space at the end of a table can be trimmed, but that is cold comfort: far too much memory is wasted. On top of that, we cannot know the offsets of interfaces imported from dynamic libraries. That is why most languages only lay out their vtables at run time.

None of this applies to Rust, however. Rust does not store vtable pointers inside the values themselves. Trait objects in Rust are so-called fat pointers: an &Pet is not one pointer but two, one to the data and one to the vtable. And the vtable of a trait object is not tied to the concrete type alone; there is a separate one for each (type, trait) combination.

  Cat's Pet vtable      Dog's Pet vtable
 +----------------+   +----------------+
 |   type stuff   |   |   type stuff   |
 +----------------+   +----------------+
 |   Pet stuff    |   |   Pet stuff    |
 +----------------+   +----------------+
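
The fat-pointer layout is easy to observe directly (a small sketch of mine, in the article's pre-dyn syntax; the numbers assume a 64-bit target):

use std::mem::size_of;

trait Pet {
    fn name(&self) -> String;
}

fn main() {
    // A plain reference is one pointer; a trait-object reference is two:
    // one to the data, one to the vtable of that (type, trait) pair.
    println!("{}", size_of::<&u32>()); // 8 on a 64-bit target
    println!("{}", size_of::<&Pet>()); // 16 on a 64-bit target
}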


The upside is twofold: a type that is never used as a trait object never pays for a vtable pointer, and the layout of every table is known statically, with no global offset scheme required.

The downside is that the pointers themselves are now twice as big, which in some situations is undesirable, although in practice it is rarely a problem.

These vtables are generated statically, one for every combination the program actually uses, at the point where the conversion to a trait object happens. This is possible because we can determine the concrete type of every value statically at that point, and every cast to a trait object along with it. Replacing that bit of monomorphization with virtualization would mean a serious drop in performance (which is also why casting between trait objects is so limited in the language - translator's note).

A warning: grounds for nitpicking!

Fat pointers could, in principle, be generalized further into obese pointers. As a fat pointer, a value of type Animal + Feline points to one combined vtable; but there is no reason we could not split that table in two, one per trait, and give the value two vtable pointers accordingly. In theory this would reduce the monomorphization of vtables, at the cost of ever fatter pointers. The idea comes up regularly, but nobody takes it very seriously.

Finally, recall the earlier claim that a user can take a monomorphic interface and virtualize it. This is possible thanks to "implementing a trait for a trait" (impl Trait for Trait), or more precisely, a trait object implementing its own trait (IMHO the coolest feature of the type system - translator's note). As a result, the following code is valid:

// The same Print as before...
trait Print {
    fn print(&self);
}

impl Print for i32 {
    fn print(&self) { println!("{}", self); }
}

impl Print for i64 {
    fn print(&self) { println!("{}", self); }
}

// ?Sized means T may be unsized (and therefore usable only behind
// a pointer such as &, Box, or Rc).
// Sized (the default) means the size is known at compile time.
// Trait objects and slices [T] are "unsized types".
// Writing `T: ?Sized` gives us the most general function possible:
// it accepts both ordinary values and trait objects,
// since Sized is no longer required.
fn print_it_twice<T: ?Sized + Print>(to_print: &T) {
    to_print.print();
    to_print.print();
}

fn main() {
    // Static dispatch, as before: two monomorphized copies.
    print_it_twice(&0i32);  // 0, 0
    print_it_twice(&10i64); // 10, 10

    // A heterogeneous pile of i32::Print and i64::Print.
    let data: [Box<Print>; 2] = [Box::new(20i32), Box::new(30i64)];

    for val in &data {
        // Virtual dispatch: a single monomorphization for `Print`.
        // Converting &Box<Print> into &Print by hand is a bit clunky,
        // hence the double dereference...
        print_it_twice(&**val); // 20, 20, 30, 30
    }
}

Cool. Not exactly perfect, but cool. Unfortunately, there is no impl Trait for Box<Trait>; I suspect it would interact badly with impl<T: Trait> Trait for Box<T>, but I have not dug into it seriously. Perhaps it is enough that we can relax T: Sized?

Associated Types


What are the consequences of declaring something generic over some type, and what are we trying to express by doing so? Here the expression and the consequence are one and the same: we want to specify how to work with whatever type gets passed in. In effect, the type is an input parameter. struct Foo<T> says that only Foo and T together form a complete type; Foo by itself is incomplete. If you have a taste for fancy terminology, Foo is a type constructor: a function that takes a type as an argument and returns a type as a result. In other words, a higher-kinded type.

Next: trait Eat<T>, the generic trait Eat. What does it tell us about itself? At the very least, that it can be implemented more than once for the same type. And also that every implementation must have some third-party type T in mind; without it, Eat is incomplete. Hence you cannot say that something simply "implements Eat"; it can only implement Eat<T>, where T, in turn, is chosen by whoever writes the implementation.

Okay, so what do we do with that? Iterators make a good example:

trait Iterator<T> {
    fn next(&mut self) -> Option<T>;
}

/// A stack that hands out its elements as it is iterated
struct StackIter<T> {
    data: Vec<T>,
}

/// Counts over the range [min, max)
struct RangeIter {
    min: u32,
    max: u32,
}

impl<T> Iterator<T> for StackIter<T> {
    fn next(&mut self) -> Option<T> {
        self.data.pop()
    }
}

impl Iterator<u32> for RangeIter {
    fn next(&mut self) -> Option<u32> {
        if self.min >= self.max {
            None
        } else {
            let res = Some(self.min);
            self.min += 1;
            res
        }
    }
}

So far so good. We can write both generic and specific implementations of the interface. But then something curious turns out: any real type only ever wants to implement Iterator once. StackIter<Cat> implements only Iterator<Cat>; it has no need to implement Iterator<Dog>. In fact, it must not be allowed to implement anything else, otherwise the user would be left wondering which of the implemented types Iterator::next() is going to return!

It turns out we are not at all thrilled that the T that is an input to Iterator is the same input T of StackIter. Yet there is no way around it, because we, as the authors of the trait, cannot hard-code the types that Iterator::next() yields; that information has to be supplied by the type that implements the iterator!
At this not-too-joyful moment it is time to meet associated types.

Associated types let us say that an implementation of a trait must also specify additional types associated with that particular implementation. That is, a trait can require specific types in the same way it requires specific functions. Here is Iterator converted accordingly:

trait Iterator {
    // Every implementation must also pick a type
    // to associate with itself
    type Item;

    fn next(&mut self) -> Option<Self::Item>;
}

/// A stack that hands out its elements as it is iterated
struct StackIter<T> {
    data: Vec<T>,
}

/// Counts over the range [min, max)
struct RangeIter {
    min: u32,
    max: u32,
}

impl<T> Iterator for StackIter<T> {
    // The associated type can come from the
    // implementor's own type parameter...
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        self.data.pop()
    }
}

impl Iterator for RangeIter {
    // ...or simply be hard-coded
    type Item = u32;

    fn next(&mut self) -> Option<Self::Item> {
        if self.min >= self.max {
            None
        } else {
            let res = Some(self.min);
            self.min += 1;
            res
        }
    }
}

And now Iterator cannot be implemented several times for the same type. Associated types may themselves be generic, but they cannot be left dangling, defined separately from all the other types, as here:

impl<T> Iterator for RangeIter {
    type Item = T;
    fn next(&mut self) -> Option<Self::Item> {
        unimplemented!()
    }
}

<anon>:3:6: 3:7 error: the type parameter `T` is not constrained by the
impl trait, self type, or predicates [E0207]
<anon>:3 impl<T> Iterator for RangeIter {
              ^

So associated types have every right to be called "output types".
Fine, we have restricted trait implementations with associated types; what else does this give us? Can we now express something that was out of reach before?
We can!
Here is a state machine (our slightly tweaked iterator):

trait StateMachine {
    type NextState: StateMachine;
    fn step(self) -> Option<Self::NextState>;
}

So: this is a type whose instances can be asked to step(), and the result of a step is its transformation into an instance of some other state-machine type. Try expressing that with a plain generic...:

trait StateMachine<NextState: StateMachine<_____>> {
    fn step(self) -> Option<NextState>;
}

...and you happily get infinite recursion of types. Since a generic type parameter is an input, it has to be supplied by the user of the trait; and here that type would itself have to be the trait. You can, however, get by without associated types: we still have virtualization!

trait StateMachine {
    // Box magic! self is taken by value as Box<Self>.
    // Something like Rc<Self> would be handy too,
    // but for now only *Box* gets this treatment.
    fn step(self: Box<Self>) -> Option<Box<StateMachine>>;
}

Here we have drawn our old state machine again, but without associated types. We consume the original value (self is not borrowed, so after step() completes it can no longer be used - translator's note), and we get back something that implements StateMachine. But for this to work we have to confine all our state machines to the heap, and we lose the information about which concrete machine we have after the very first call to step(). With associated types, neither of these happens.
One more thing: trait objects do not get along with associated types, for the same reason they do not get along with Self by value: the concrete implementing type is unknown. The way to make them friends is to pin down all the specific types. Box<Iterator> gets you yelled at; Box<Iterator<Item = u32>> goes through just fine.
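
For example (a minimal sketch of mine, again in pre-dyn syntax), this compiles and runs fine:

fn boxed_counter() -> Box<Iterator<Item = u32>> {
    // The associated type is pinned down, so the trait object is legal.
    Box::new(0u32..5)
}

fn main() {
    let mut total = 0;
    for x in boxed_counter() {
        total += x;
    }
    println!("{}", total); // 10
}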

Where clauses


Our old example:

impl<T> Generic<T> {
    // Note the subtlety: we call `equal` backwards here,
    // asking whether `other` equals `self`.
    // (`x == y` is not necessarily the same as `y == x`.) Why not simply
    // require `T: Equal<U>` instead? Written this way, the constraint
    // lands on `U`, while `T` itself stays completely unconstrained!
    // This turns out to matter.
    fn my_equal<U: Equal<T>>(&self, other: &Generic<U>) -> bool {
        other.data.equal(&self.data)
    }
}

Now that we can associate types, how do we put bounds on them?

// A first attempt: with the inline bound syntax the only way
// to constrain the associated type is to introduce a redundant
// type parameter T whose sole job is to give Item a name!
fn min<I: Iterator<Item = T>, T: Ord>(mut iter: I) -> Option<I::Item> {
    if let Some(first) = iter.next() {
        let mut min = first;
        for x in iter {
            if x < min {
                min = x;
            }
        }
        Some(min)
    } else {
        None
    }
}

The solution is the where clause.
impl<T> Generic<T> {
    fn my_equal<U>(&self, other: &Generic<U>) -> bool
        where T: Equal<U>
    {
        self.data.equal(&other.data)
    }
}

fn min<I>(mut iter: I) -> Option<I::Item>
    where I: Iterator,
          I::Item: Ord,
{
    if let Some(first) = iter.next() {
        let mut min = first;
        for x in iter {
            if x < min {
                min = x;
            }
        }
        Some(min)
    } else {
        None
    }
}

Where clauses are strictly more general than the inline bound syntax. With where, the left-hand side of a bound can be an arbitrary type, not just one of the type parameters being introduced: an associated type, a reference, any type expression at all, which the inline form simply cannot say.

Besides enabling exotic bounds (things like impl Send for MyReference<T> where &T: Send - translator's note), where clauses also interact with trait objects in a useful way. Remember I said that a trait which refers to Self by value cannot be made into a trait object? In short, a where clause can patch that up:

trait Print {
    fn print(&self);

    // `where Self: Sized` means this method cannot be called
    // on a trait object. In exchange, Print as a whole
    // remains usable as a trait object!
    fn copy(&self) -> Self where Self: Sized;
}

impl Print for u32 {
    fn print(&self) { println!("{}", self); }
    fn copy(&self) -> Self { *self }
}

fn main() {
    let x: Box<Print> = Box::new(0u32);
    x.print();
}


Higher-rank trait bounds


If your roof has not flown off by now, you are simply lucky, because the next section leaves it no chance. What follows is a frankly murky swamp that only confirmed fanatics of gnarly type systems can enjoy wading through.

Now we will write higher-order functions, which, in turn, are "functions that work with functions." A beautiful and well-known example is map () :

let x: Option<u32> = Some(0);
let y: Option<bool> = x.map(|v| v > 5);

Closures in Rust do not have nameable types. Instead of saying "this argument is a closure of such-and-such type", we say "this argument implements one of the function traits!". There are three of them: Fn, FnMut and FnOnce. They differ in how they get at the thing being called: by reference, by mutable reference, or by value.

The Fn traits come with special syntax: Fn(A, B) -> C means a callable that takes an A and a B and returns a C. It is deliberately shaped like a function signature. Under the hood, Fn(A, B) -> C is merely sugar for Fn<(A, B), Output = C> (the desugared form is unstable as of 1.7). Input types are generic parameters, the output is an associated type; everything is exactly as we just described!
Accordingly, the closure from the map() example implements FnOnce(u32) -> bool. Not scary at all. So far.
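
A quick sketch of the three traits in action (my own example; which trait a closure gets depends on what it does with its captures):

fn call_fn<F: Fn() -> u32>(f: F) -> u32 { f() }
fn call_fn_mut<F: FnMut() -> u32>(mut f: F) -> u32 { f() }
fn call_fn_once<F: FnOnce() -> String>(f: F) -> String { f() }

fn main() {
    let x = 10;
    let s = String::from("hello");
    let mut count = 0;

    println!("{}", call_fn(|| x + 1));                      // only reads its capture: Fn
    println!("{}", call_fn_mut(|| { count += 1; count }));  // mutates its capture: FnMut
    println!("{}", call_fn_once(move || s));                // consumes its capture: FnOnce
}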

fn get_first(input: &(u32, i32)) -> &u32 {
    &input.0
}

fn main() {
    let a = (0, 1);
    let b = (2, 3);
    let x = Some(&a);
    let y = Some(&b);
    println!("{}", x.map(get_first).unwrap());
    println!("{}", y.map(get_first).unwrap());
}

So which function trait does get_first implement here? Looks like Fn(&(u32, i32)) -> &u32, right? Hell no.

trait MyFn<Input> {
    type Output;
}

// A stand-in for a function; we only care about
// modelling its signature, not about calling it.
struct Thunk;

impl MyFn<&(u32, i32)> for Thunk {
    type Output = &u32;
}

<anon>:9:11: 9:22 error: missing lifetime specifier [E0106]
<anon>:9 impl MyFn<&(u32, i32)> for Thunk {
                   ^~~~~~~~~~~
<anon>:9:11: 9:22 help: see the detailed explanation for E0106
<anon>:10:19: 10:23 error: missing lifetime specifier [E0106]
<anon>:10     type Output = &u32;
                            ^~~~
<anon>:10:19: 10:23 help: see the detailed explanation for E0106
error: aborting due to 2 previous errors

Missing lifetime specifiers. Indeed: every reference has a lifetime; lifetime elision merely lets us leave it out 99% of the time (lifetimes deserve an article of their own, so I will not dwell on them here - translator's note).

Written out in full, get_first looks like this:

fn get_first<'a>(input: &'a (u32, i32)) -> &'a u32 {
    &input.0
}

Note that the lifetime is itself a generic parameter of the function. So the stand-in really ought to be written like this:

trait MyFn<Input> {
    type Output;
}

struct Thunk;

impl<'a> MyFn<&'a (u32, i32)> for Thunk {
    type Output = &'a u32;
}

This compiles. And now let me tell you the opposite: the trait we need really is Fn(&(u32, i32)) -> &u32. I lied to you just above. How and why, I will show with a filter for an iterator:

/// A filtering wrapper around an iterator
struct Filter<I, F> {
    iter: I,
    pred: F,
}

/// A convenience constructor
fn filter<I, F>(iter: I, pred: F) -> Filter<I, F> {
    Filter { iter: iter, pred: pred }
}

impl<I, F> Iterator for Filter<I, F>
    where I: Iterator,
          F: Fn(&I::Item) -> bool, // Look! A lifetime with no name! How?
{
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        while let Some(val) = self.iter.next() {
            if (self.pred)(&val) {
                return Some(val);
            }
        }
        None
    }
}

fn main() {
    let x = vec![1, 2, 3, 4, 5];
    for v in filter(x.into_iter(), |v: &i32| *v % 2 == 0) {
        println!("{}", v); // 2, 4
    }
}

Strange things are afoot. We need pred() to work with the lifetime of &val. Alas, we cannot name that lifetime explicitly, even by hanging a where clause on next() (and the Iterator trait would not let us anyway). The lifetime appears and disappears inside the function body; we can neither capture it nor name it. And yet we need pred() to work with that nameless thing. Attacking head-on, we would have to require pred() to work with all lifetimes at once. And suddenly it turns out that

 F: Fn(&I::Item) -> bool 

it's sugar for
 for<'a> F: Fn(&'a I::Item) -> bool 

That is, for<'a> reads almost literally: "for all 'a"!

This kind of constraint is called a higher-rank trait bound (HRTB). You will not need to write them unless you have already been sucked into some swamp of elaborately typed structures. Usually HRTBs show up when working with the function traits, and even there they are covered in syntactic sugar, i.e. they are mostly invisible to the user. For now, higher-rank bounds only work over lifetimes.
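
Here is what one looks like written out by hand (a small sketch of mine): the bound promises that the closure accepts a reference with any lifetime we care to hand it, including one that only exists inside the function body.

fn apply_to_local<F>(f: F) -> usize
    where for<'a> F: Fn(&'a str) -> usize, // sugar-free form of F: Fn(&str) -> usize
{
    let s = String::from("local value");
    f(&s) // the lifetime of &s exists only inside this body
}

fn main() {
    println!("{}", apply_to_local(|s| s.len())); // 11
}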

Higher-kinded types


As we saw, Vec is essentially a function over types, Vec(T) -> Vec<T>. We can be generic over the result of applying it, as in impl<T> Trait for Vec<T> or fn make_vec<T>() -> Vec<T>. What we cannot do is be generic over the type constructor itself, and that is exactly what higher-kinded types are about. Let's see why we might want that.

A classic example is reference-counted pointers. Rust ships two of them, Rc and Arc: Rc is cheaper, while Arc is thread-safe. Suppose that from the point of view of our data structure's implementation the two are completely interchangeable; for the users of the structure, however, it matters a great deal which reference-counted pointer is used.

Naturally, we would like our structure to be generic over Rc versus Arc. Ideally, we would write something like this:

// Does not compile today. Alas!

/// A node of a reference-counted linked list.
/// RefCount is either Rc or Arc, whichever you prefer.
struct Node<RefCount: RcLike, T> {
    elem: T,
    next: Option<RefCount<Node<RefCount, T>>>,
}

But no, that does not work! Rc and Arc are not complete types, they are type constructors. We can only ever talk about a finished type such as Rc<SomeType>, say Rc<Node<T>>, never about Rc or Arc by themselves, so there is no direct way to be generic over them. Before attacking that, let's look at another problem of the same shape:

/// An iterator whose `next` hands out a reference
/// into the iterator itself.
trait RefIterator {
    type Item;
    fn next(&mut self) -> &mut Self::Item;
}

which, with the elided lifetime written out in full, is really:
trait RefIterator {
    type Item;
    fn next<'a>(&'a mut self) -> &'a mut Self::Item;
}

This works as long as Self::Item does not itself need to mention that lifetime. What we would really like to write, though, is this:

trait RefIterator {
    type Item;
    fn next<'a>(&'a mut self) -> Self::Item<'a>;
}

That looks nicer and more general, because now one could set Self::Item = &'a mut T and in theory be satisfied. Except that we have quietly turned Item into a type constructor, and there is no way to be generic over those!
Well, unless you ask the compiler very nicely. But you did not hear that from me; please don't blow our cover.
The key insight is that a trait has input and output types, which makes a trait a function over types. Look:

trait TypeToType<Input> {
    type Output;
}

A type constructor as a trait! Let's use it to implement the fully general by-reference iterator, RefIterator:

use std::marker::PhantomData;
use std::mem;
use std::cmp;

// A type that RefIter will hand out by value
struct MyType<'a> {
    slice: &'a mut [u8],
    index: usize,
}

// The type-level function we need: lifetime in, type out
trait LifetimeToType<'a> {
    type Output;
}

// Dummy marker structs standing in for the type constructors
// we want to talk about

/// &'* T
struct Ref_<T>(PhantomData<T>);
/// &'* mut T
struct RefMut_<T>(PhantomData<T>);
/// MyType<*>
struct MyType_;

// Apply a lifetime to each marker to get the real type back
impl<'a, T: 'a> LifetimeToType<'a> for Ref_<T> {
    type Output = &'a T;
}
impl<'a, T: 'a> LifetimeToType<'a> for RefMut_<T> {
    type Output = &'a mut T;
}
impl<'a> LifetimeToType<'a> for MyType_ {
    type Output = MyType<'a>;
}

// The trait we actually wanted to write!
// `<Self::TypeCtor as LifetimeToType<'a>>::Output`
// applies the lifetime 'a to TypeCtor.
//
// (In general, <X as Trait>::AssociatedItem is "fully qualified syntax",
// used when the projection would otherwise be ambiguous.)
//
// Note: we would prefer to demand an HRTB like
// `for<'a> Self::TypeCtor: LifetimeToType<'a>`,
// but then `&'a T` would have to be valid for *every* `'a`,
// which effectively forces `T: 'static`!
// So the `where` clause lives on `next` itself instead.
trait RefIterator {
    type TypeCtor;
    fn next<'a>(&'a mut self)
        -> Option<<Self::TypeCtor as LifetimeToType<'a>>::Output>
        where Self::TypeCtor: LifetimeToType<'a>;
}

// Iterators to implement!
struct Iter<'a, T: 'a> {
    slice: &'a [T],
}

struct IterMut<'a, T: 'a> {
    slice: &'a mut [T],
}

struct MyIter<'a> {
    slice: &'a mut [u8],
}

// FIXME: https://github.com/rust-lang/rust/issues/31580
// The compiler needs some hand-holding to accept these projections,
// so we route the conversions through trivial helper functions
// (read them as annotated identity functions).
fn _hack_project_ref<'a, T>(v: &'a T) -> <Ref_<T> as LifetimeToType<'a>>::Output { v }
fn _hack_project_ref_mut<'a, T>(v: &'a mut T) -> <RefMut_<T> as LifetimeToType<'a>>::Output { v }
fn _hack_project_my_type<'a>(v: MyType<'a>) -> <MyType_ as LifetimeToType<'a>>::Output { v }

// The implementations (brace yourself for the signatures)
impl<'x, T> RefIterator for Iter<'x, T> {
    type TypeCtor = Ref_<T>;
    fn next<'a>(&'a mut self)
        -> Option<<Self::TypeCtor as LifetimeToType<'a>>::Output>
        where Self::TypeCtor: LifetimeToType<'a>
    {
        if self.slice.is_empty() {
            None
        } else {
            let (l, r) = self.slice.split_at(1);
            self.slice = r;
            Some(_hack_project_ref(&l[0]))
        }
    }
}

impl<'x, T> RefIterator for IterMut<'x, T> {
    type TypeCtor = RefMut_<T>;
    fn next<'a>(&'a mut self)
        -> Option<<Self::TypeCtor as LifetimeToType<'a>>::Output>
        where Self::TypeCtor: LifetimeToType<'a>
    {
        if self.slice.is_empty() {
            None
        } else {
            let (l, r) = mem::replace(&mut self.slice, &mut []).split_at_mut(1);
            self.slice = r;
            Some(_hack_project_ref_mut(&mut l[0]))
        }
    }
}

impl<'x> RefIterator for MyIter<'x> {
    type TypeCtor = MyType_;
    fn next<'a>(&'a mut self)
        -> Option<<Self::TypeCtor as LifetimeToType<'a>>::Output>
        where Self::TypeCtor: LifetimeToType<'a>
    {
        if self.slice.is_empty() {
            None
        } else {
            let split = cmp::min(self.slice.len(), 5);
            let (l, r) = mem::replace(&mut self.slice, &mut []).split_at_mut(split);
            self.slice = r;
            let my_type = MyType { slice: l, index: split / 2 };
            Some(_hack_project_my_type(my_type))
        }
    }
}

// It works!
fn main() {
    let mut data: [u8; 12] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12];
    {
        let mut iter = Iter { slice: &data };
        while let Some(v) = iter.next() {
            println!("{:?}", v);
        }
    }
    {
        let mut iter = IterMut { slice: &mut data };
        while let Some(v) = iter.next() {
            println!("{:?}", v);
        }
    }
    {
        let mut iter = MyIter { slice: &mut data };
        while let Some(v) = iter.next() {
            println!("{:?} {}", v.slice, v.index);
        }
    }
}

I am not sure I could decipher everything that is going on above, and I am not sure it is necessary: it is just the usual sequence of moves described earlier. More to the point, will it help us deal with Rc/Arc?
It will!

use std::rc::Rc;
use std::sync::Arc;
use std::ops::Deref;

// A type-level function T -> Output
trait RcLike<T> {
    type Output;
    fn new(data: T) -> Self::Output;
}

// Marker types standing in for the Rc and Arc constructors
struct Rc_;
struct Arc_;

impl<T> RcLike<T> for Rc_ {
    type Output = Rc<T>;
    fn new(data: T) -> Self::Output {
        Rc::new(data)
    }
}

impl<T> RcLike<T> for Arc_ {
    type Output = Arc<T>;
    fn new(data: T) -> Self::Output {
        Arc::new(data)
    }
}

struct Node<Ref, T>
    // This `where` clause is needed just to be able to name the Output type
    where Ref: RcLike<Node<Ref, T>>,
{
    elem: T,
    // Reads as: Option<Rc<Node<Rc_, T>>> when Ref = Rc_
    next: Option<<Ref as RcLike<Node<Ref, T>>>::Output>,
}

struct List<Ref, T>
    where Ref: RcLike<Node<Ref, T>>,
{
    head: Option<<Ref as RcLike<Node<Ref, T>>>::Output>,
}

impl<Ref, T, RefNode> List<Ref, T>
    where Ref: RcLike<Node<Ref, T>, Output = RefNode>,
          RefNode: Deref<Target = Node<Ref, T>>,
          RefNode: Clone,
{
    fn new() -> Self {
        List { head: None }
    }

    fn push(&self, elem: T) -> Self {
        List {
            head: Some(Ref::new(Node {
                elem: elem,
                next: self.head.clone(),
            }))
        }
    }

    fn tail(&self) -> Self {
        List {
            head: self.head.as_ref().and_then(|head| head.next.clone())
        }
    }
}

fn main() {
    // A persistent list (we only ever push onto and pop off the head)
    let list: List<Rc_, u32> = List::new().push(0).push(1).push(2).tail();
    println!("{}", list.head.unwrap().elem); // 1

    let list: List<Arc_, u32> = List::new().push(10).push(11).push(12).tail();
    println!("{}", list.head.unwrap().elem); // 11
}

We almost made it, but the where Ref: RcLike<Node<Ref, T>> condition spoils things. It is a hole in our abstraction: on the one hand, users of the structure should not have to know about Node at all; on the other hand, we are forced to mention it right there. What we would like to say is that Ref is RcLike for anything, which technically reads as where for<T> Ref: RcLike<T>. That would let us hide the incidental details of how Ref gets used.

Alas, as noted earlier, higher-rank bounds currently work only over lifetimes, not over types. Maybe someday!
(There is an RFC on the subject, but things have not moved beyond that so far - translator's note.)

So, although Rust has no true higher-kinded types, with enough associated-type gymnastics you can usually emulate the piece of them you actually need. Ugly, but it works.

Generativity

Have you ever thought about the fact that two values of the same type are interchangeable? If we had two Widgets and quietly swapped them, nobody would ever notice. Most of the time this is exactly what we want. But what if we wanted to forbid instances of the same type from being interchanged?

Take arrays. When we iterate over an array, we usually do it to get at its elements. Fine, but that forces us to provide separate iterators for the different kinds of access to the elements: Iter, IterMut and IntoIter. Wouldn't it be more convenient if the iterator simply told us where to look, and we decided for ourselves how to touch the element: who is the boss here, after all?

In theory this is possible if the array hands out an iterator over its own indices: 0, 1, 2, ..., len - 1. Walk the indices, done, everybody is happy. Except that the reliability of iterators suffers. Normally an iterator just promises that each element will be visited exactly once, and in return it is guaranteed never to blow up.

Index-based access breaks all of that. Indices can be tampered with; the array itself can be changed, invalidating the indices; and you can accidentally apply the indices of one array to another. All of this is fixable, with varying amounts of effort. Against forged indices: a wrapper type that hides the raw value from the user. Against invalidation: tying the indices to the lifetime of their array works beautifully.

But what about applying them to the wrong array? I have two arrays; the types of their indices are identical and therefore de facto interchangeable, and that is exactly what we do not want. The trick that gets us the (un)desired behaviour is called generativity. The idea behind generativity is that different instances of the same type can carry different associated types: the value of the associated type depends on the particular instance, so to speak.

I am tired, terribly tired, and we are almost at the finish line anyway, so I will just paste in a demonstration of the above that I wrote some time ago. Everything is in the comments; read them.

// The goal: hand out indices that are trusted by construction, so that
// indexing can skip its bounds checks. Each array gets its own "brand",
// and only indices carrying the same brand can be used with it.
// (A full version would also wrap Vec, support ranges, and so on.)
//
// The "different associated types for different instances" trick is done
// with an invariant lifetime: every call to `indices` conjures up a fresh
// brand lifetime 'id that the closure cannot unify with anyone else's
// (how could it? it never even gets to name it). The only values carrying
// that brand are the ones handed to the closure, so indices cannot leak
// out and cannot be mixed between arrays. Another workable design is to
// validate indices explicitly (let idx = arr.validate(idx)).
//
// The closure-as-a-scope approach is mostly interesting as a demonstration:
// it is awkward in practice (moving values in and out, try!, and so on).
// Still, inside the closure the usual aliasing rules apply, so the API
// could in principle grow the rest of Vec's interface -- push, pop, and
// friends. That is left as an exercise.
//
// This demo is based on a trick gereeter came up with while building the
// first safe version of BTreeMap. Haskell's ST monad works along the same
// lines.
//
// For simplicity only &[u32] is handled here; the same approach works for
// generic &[T] and &mut [T].
fn main() {
    use indexing::indices;

    let arr1: &[u32] = &[1, 2, 3, 4, 5];
    let arr2: &[u32] = &[10, 20, 30];

    // Works (note that the indices never mix between the arrays)
    indices(arr1, |arr1, it1| {
        indices(arr2, move |arr2, it2| {
            for (i, j) in it1.zip(it2) {
                println!("{} {}", arr1.get(i), arr2.get(j));
                // These do not compile -- wrong brand:
                // println!("{} ", arr2.get(i));
                // println!("{} ", arr1.get(j));
            }
        });
    });

    // The indices themselves cannot escape the closure
    let _a = indices(arr1, |arr, mut it| {
        let a = it.next().unwrap();
        let b = it.next_back().unwrap();
        println!("{} {}", arr.get(a), arr.get(b));
        // a // returning the branded index does not compile
    });

    // The elements, on the other hand, may leave; only indices are branded
    let (x, y) = indices(arr1, |arr, mut it| {
        let a = it.next().unwrap();
        let b = it.next_back().unwrap();
        (arr.get(a), arr.get(b))
    });
    println!("{} {}", x, y);

    // Wait, how did references get out of the closure!?
    // (answer: only the indices carry the brand; the references
    // simply borrow from arr1 as usual)
}

mod indexing {
    use std::marker::PhantomData;
    use std::ops::Deref;
    use std::iter::DoubleEndedIterator;

    // Cell<T> is invariant in T, so Cell<&'id _> makes `id` invariant.
    // This stops the compiler from shrinking or growing 'id to unify
    // two different "brands".
    type Id<'id> = PhantomData<::std::cell::Cell<&'id mut ()>>;

    pub struct Indexer<'id, Array> {
        _id: Id<'id>,
        arr: Array,
    }

    pub struct Indices<'id> {
        _id: Id<'id>,
        min: usize,
        max: usize,
    }

    #[derive(Copy, Clone)]
    pub struct Index<'id> {
        _id: Id<'id>,
        idx: usize,
    }

    impl<'id, 'a> Indexer<'id, &'a [u32]> {
        pub fn get(&self, idx: Index<'id>) -> &'a u32 {
            unsafe { self.arr.get_unchecked(idx.idx) }
        }
    }

    impl<'id> Iterator for Indices<'id> {
        type Item = Index<'id>;
        fn next(&mut self) -> Option<Self::Item> {
            if self.min != self.max {
                self.min += 1;
                Some(Index { _id: PhantomData, idx: self.min - 1 })
            } else {
                None
            }
        }
    }

    impl<'id> DoubleEndedIterator for Indices<'id> {
        fn next_back(&mut self) -> Option<Self::Item> {
            if self.min != self.max {
                self.max -= 1;
                Some(Index { _id: PhantomData, idx: self.max })
            } else {
                None
            }
        }
    }

    pub fn indices<Array, F, Out>(arr: Array, f: F) -> Out
        where F: for<'id> FnOnce(Indexer<'id, Array>, Indices<'id>) -> Out,
              Array: Deref<Target = [u32]>,
    {
        // This is where the magic happens. We bind the indexer and the
        // indices to a fresh, unnameable lifetime (fresh because F has
        // to work for *all* 'id, so it cannot assume any particular one).
        // The only values with that brand are the ones created right
        // here, so `get` never needs a bounds check.
        //
        // Note that the brand lifetime has nothing to do with how long
        // the array actually lives; it is not 'static and it is not the
        // lifetime of *this* call either. It is purely a compile-time
        // token tying the Index values to their Indexer.
        let len = arr.len();
        let indexer = Indexer { _id: PhantomData, arr: arr };
        let indices = Indices { _id: PhantomData, min: 0, max: len };
        f(indexer, indices)
    }
}


Scary, isn't it? And yet note that the only unsafe code in the whole demo is the single get_unchecked call; everything around it is ordinary safe Rust. That is exactly how unsafe is meant to be used: a tiny, carefully checked core wrapped in a safe interface.

And that, by the way, is a real practical use of HRTBs in today's Rust.

That is all. Thanks for reading.

Source: https://habr.com/ru/post/307616/

