Recently, the ideas of functional programming have been making their way to the masses. For me, as a 1C programmer, the most interesting part is raising the level of abstraction when working with tabular data. It is one thing to hand-code loops with many variables that change from iteration to iteration, so that a month later you have to do a "debugging with your eyes" (or even fire up the debugger) to understand how those loops work. It is far more elegant to apply ready-made algorithms to the table as a whole and get the expected result.
Year after year, coding more or less similar loops, I grew eager to change something for the better in this dull process. At first I was inspired by the generic algorithms of the C++ STL. Then, for general development, I studied Haskell - a language that really changes the way you think.
About two years ago I started writing a library of universal functions that I used in my daily work. Practice convinced me that the approach works and brings tangible benefits. More recently I discovered LINQ, which the .NET platform uses for uniform work with collections, generation of SQL queries, and other useful things. I feel a good-natured envy of C# developers, who have such a wonderful tool at hand!
Having studied the library of standard query operators that makes up the core of LINQ, I decided to write a similar library for 1C:Enterprise 8.
The drafts I have accumulated so far are going in the trash, because they are not systematic and universal enough.
The main obstacle to creating a function-oriented library is that the built-in language has no support for passing functions as parameters.
Typical algorithms for working with collections take functions as arguments. For example, a filtering algorithm takes a condition function that accepts a collection item and returns a Boolean. Transforming table fields requires an even more complex function. In functional programming, a function that accepts another function as a parameter is called a higher-order function, and the functions passed in usually capture data from the caller, which makes them closures.
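For contrast, here is a minimal sketch in Python (not 1C) of what this mechanism gives you in a language that supports it; the names `filter_rows` and `threshold` are purely illustrative.

```python
# Generic filtering algorithm: takes a function as a parameter.
def filter_rows(rows, predicate):
    """Keep only the rows for which predicate(row) is true."""
    return [row for row in rows if predicate(row)]

threshold = 100  # caller-side context

rows = [{"Amount": 50}, {"Amount": 150}, {"Amount": 300}]

# The lambda is a closure: it captures `threshold` from the caller's scope,
# so the generic algorithm needs no knowledge of the condition's internals.
big = filter_rows(rows, lambda row: row["Amount"] > threshold)
```

It is exactly this combination - a generic algorithm plus a caller-supplied function with captured context - that the built-in 1C language cannot express directly.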
I considered various ways of replacing full-fledged closures with surrogates, up to and including writing, in the built-in language, a virtual machine that interprets bytecode. In the end I concluded that the sensible option is to use the language's ability to evaluate expressions written in the built-in language and passed to the library as strings (the rationale for this decision is in the project documentation). It looks like this:
= .("_. > 0",, );
The underscore character denotes the function's parameter, in this case a row of the source table. If the function takes several parameters, they are denoted "_0", "_1", and so on. A function passed to an algorithm may use data known at the call site (context). For example:
= 100;
= .("_. > _.", ("", ), );
The variable "_k" receives the context value (usually a structure) passed in the function's second parameter. To demonstrate more complex constructions, here is a somewhat contrived example:
CustomerOrderAmounts = Tables.Connection(
    "Customer = _.Link",
    "Customer = _0.Link
    | TIN = _0.INN
    | Ordered = _1.Amount",,
    Customers, Orders
);
The first parameter specifies the key fields of the join, mapping the field names of the first table ("Link") to the field names of the second ("Customer"). The second parameter sets the expressions for the result fields (Customer, TIN, Ordered). "Customers" and "Orders" are the source value tables.
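To make the mechanism concrete, here is a rough Python model (my assumption about how such a library can work internally, not its actual code) of the two ideas above: algorithm parameters are expression strings, evaluated per pair of rows with `_0` bound to the left row and `_1` to the right row. The names (Link, Customer, INN, Amount) mirror the example but are illustrative only.

```python
class Row:
    """Attribute-style row, so an expression string can say `_0.Link`."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

def connection(key_expr, field_exprs, left, right):
    """Nested-loop join: key_expr selects matching pairs of rows,
    field_exprs maps each result field name to an expression string."""
    result = []
    for l in left:
        for r in right:
            scope = {"_0": l, "_1": r}
            if eval(key_expr, {}, scope):
                result.append({name: eval(expr, {}, scope)
                               for name, expr in field_exprs.items()})
    return result

customers = [Row(Link="C1", INN="7701")]
orders = [Row(Customer="C1", Amount=500)]

result = connection(
    "_1.Customer == _0.Link",
    {"Customer": "_0.Link", "TIN": "_0.INN", "Ordered": "_1.Amount"},
    customers, orders,
)
```

The point is that the caller writes only declarative expression strings; all looping and matching lives in the generic algorithm.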
Processing data in value tables with the built-in language is not the best solution in general: the built-in language runs much slower than SQL queries. We resort to it only when the data does not come from the database or its volume is small, and for non-trivial operations (not everything can be done in queries).
I have therefore included in the library functions that look like universal algorithms but actually build a query.
Query = Queries.Select(
    "Specs.Nomenclature AS Material,
    | (Orders.Quantity * Specs.Quantity) AS Quantity",,
    Queries.Connection(
        "Orders", "Specs",
        "Orders.Specification = Specs.Reference",
        Queries.From("Document.OrderOnProduction.Products"),
        Queries.From("Catalog.Specifications.SourceComplete")
    )
)
Result = Queries.Execute(,, Query);
Why did I do this? The example above would be faster and more reliable to write in the query designer, and it still reads worse than the ordinary query language. So is it just art for art's sake? Not only. This part of the library is intended for complex queries that contain repeating elements, a variable number of fields, or subqueries. Such queries are usually built by string concatenation, which forces you to watch the syntax yourself, in particular the placement of commas and parentheses. I hope that in these cases the library will do more good than harm, although I have not yet tested this approach in practice.
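The composition idea can be sketched in a few lines of Python (the function names `select`, `connection`, `from_` are my placeholders, not the library's actual API): each constructor returns a fragment of query text, nesting the calls composes the final string, and the commas and parentheses are produced by code rather than by hand.

```python
def from_(source):
    """A query source, e.g. a table or document path."""
    return source

def connection(left, right, condition):
    """Compose an inner join of two sources on a condition."""
    return f"{left}\nINNER JOIN {right}\nON {condition}"

def select(fields, source):
    """The top-level SELECT over an already-composed source."""
    return f"SELECT\n{fields}\nFROM\n{source}"

query = select(
    "Specs.Nomenclature AS Material",
    connection(
        from_("Document.OrderOnProduction.Products AS Orders"),
        from_("Catalog.Specifications AS Specs"),
        "Orders.Specification = Specs.Reference",
    ),
)
```

Because fragments are ordinary values, a variable number of fields or repeated subqueries can be generated in a loop and dropped into place, which is exactly where string concatenation becomes error-prone.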
At the moment, the project documentation and the function signatures are completely ready. I have not started implementing the functions yet, but I can picture the implementation down to the line - the project has been worked out in great detail. Testing the whole array of functions may prove harder. I hope to get feedback from colleagues: given well-grounded remarks and suggestions, I am ready to adjust the library's interface while implementation has not yet begun. I am also going to publish the finished modules. In addition, I plan to extend the library with specific accounting algorithms: allocation over a base, FIFO write-off, and so on.
References:
- Library design documentation
- Queries module
- Tables module
- Arrays module
- The Standard Query Operators specification (LINQ)
- An inspiring presentation on the benefits of the functional approach, using Haskell as an example