
JS Transducers - Are They Necessary?

The functional approach has slowly made its way into almost every modern programming language. While some of its elements, like monads ("just a monoid in the category of endofunctors, what's the problem?"), remain controversial for the mainstream, others - map, reduce, filter - have become a de facto standard.


For all its advantages, the holy trinity of map / filter / reduce does not treat memory very well in JS. A grandiose architectural crutch - transducers - was successfully ported from Clojure to JS; it strikes neophytes with its incomprehensibility, while more or less solving the problem of excessive memory allocation.

While reading the documentation for Transducers.js (a protocol for transducers), I could not shake a strong feeling of deja vu - I had seen something similar somewhere. Aha - iterators and generators on MDN ! All sane browsers and server runtimes already support them (hussars, not a word about Internet Explorer!).

Not so long ago I experimented with these things, trying to build something that could stream data through JS without creating intermediate arrays.
So - let's go.

To be stylish, fashionable and trendy, let's swipe two functions from the Internet:

const compose = (...fns) => x => fns.reduceRight((c, f) => f(c), x) 

Lengthy comments are superfluous - this is a classic function composition: compose(f, g, k) applied to x is f(g(k(x))), just without juggling the nested brackets.
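For instance, a quick sketch of how it behaves (inc and double are made-up helpers, not from the article):

const inc = x => x + 1;
const double = x => x * 2;
const incThenDouble = compose(double, inc); // rightmost function runs first: double(inc(x))
console.log(incThenDouble(3)); // 8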

And the second one (we are doing functional programming here, remember?):

const curry = (fn, a = []) => (...args) =>
  a.length + args.length >= fn.length
    ? fn(...a, ...args)
    : curry(fn, [...a, ...args]);

It turns a function of several arguments into one that can be applied partially. For a hypothetical f(a, b), calling curry(f) returns a function g - a wrapper around f - that can be called either as g(a, b) or as g(a)(b). The main requirement is that the wrapped function has a fixed number of declared arguments (no arguments with default values).
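To see what this buys us, a minimal sketch (add3 is a made-up example function):

const add3 = (a, b, c) => a + b + c;
const cadd = curry(add3);
console.log(cadd(1, 2, 3)); // 6
console.log(cadd(1)(2)(3)); // 6
console.log(cadd(1, 2)(3)); // 6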

Now let's reinvent the map function using generators.

function* _gmap(f, a) {
  for (let i of a) yield f(i);
}
const gmap = curry(_gmap);

The code is elementary: we walk over the input a, apply the function f to each element, and yield the result outward. You can call it either as gmap(f, a) or as gmap(f)(a). The point of the latter is that you can save the partially applied gmap(f) to a variable and reuse it whenever you need.
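A tiny sanity check (doubled is just an illustrative name):

const doubled = gmap(x => x * 2);
console.log([...doubled([1, 2, 3])]); // [2, 4, 6]
// nothing is computed until the resulting iterator is actually consumed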

Now filtering:

function* _gfilter(p, a) {
  for (let i of a) if (p(i)) yield i;
}
const gfilter = curry(_gfilter);

It, too, can be called either directly - gfilter(p, a) - or, in the best traditions of functional programming, as gfilter(p)(a).
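Again, a partially applied predicate can be stored and reused; a small sketch (evens is a made-up name):

const evens = gfilter(x => x % 2 === 0);
console.log([...evens([1, 2, 3, 4])]); // [2, 4]
console.log([...evens([5, 7, 8])]); // [8]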

To make it easier, a couple more primitive functions (lisp inspired):

function* ghead(a) {
  for (let i of a) { yield i; break; }
}

function* gtail(a) {
  let flag = false;
  for (let i of a) {
    if (flag) yield i;
    else flag = true;
  }
}

ghead(a) yields only the first element of the input, gtail(a) everything except the first.
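A couple of one-liners to illustrate (expected output in the comments):

console.log([...ghead([10, 20, 30])]); // [10]
console.log([...gtail([10, 20, 30])]); // [20, 30]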

Well, a small example of how this all can be used:

let values = [3, 4, 5, 6, 7, 8, 9];
const square = x => x * x;
const squareNfirst = compose(ghead, gmap(square));
let x = [...squareNfirst(values)];

The variable x will get an array of one element: [9], the square of the first value.

const moreThan5 = gfilter(x => x > 5);
let xxx = [...moreThan5(values)]; // [6, 7, 8, 9]

The general idea is that gmap and gfilter can be fed either an array or anything else that implements the iterable protocol, and the output is again an iterator, which in ES6 can be expanded into a regular array with the three-dot spread ( let x = [...squareNfirst(values)] ).
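Since any iterable works as input, the same pipeline can even consume an infinite generator; a sketch (naturals is a made-up helper, squareNfirst is from the example above):

function* naturals() {
  let n = 1;
  while (true) yield n++;
}
console.log([...squareNfirst(naturals())]); // [1] - only one element is ever pulled from the infinite source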

And what about reduce, you may ask? There is no universal approach here: either use the classic [].reduce(f, init), or something like this:

function _greduce(f, i, a) {
  let c = i;
  for (let v of a) c = f(c, v);
  return c;
}
const greduce = curry(_greduce);

greduce(f, i, a) folds the incoming array or iterator into a single value.
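Thanks to curry, both calling styles work here as well (the sum is just an illustration):

console.log(greduce((c, v) => c + v, 0, [1, 2, 3])); // 6
console.log(greduce((c, v) => c + v)(0)([1, 2, 3])); // 6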

Example:

const mixFn = compose(greduce((c, v) => c + v, 0), gmap(square), moreThan5);
let yyy = mixFn(values); // 36 + 49 + 64 + 81 = 230

The composite function first filters out of the input the numbers greater than 5, then squares the resulting elements, and finally sums them up with the help of reduce.

Why all this fuss?


The main benefit is that processing on iterators consumes very little memory: through the whole chain of transformations we drag exactly one element at a time. On top of that, if there is a ghead(a) somewhere in the chain, we get laziness, i.e. whatever ghead() does not pull will never even be computed.
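A small sketch that makes the laziness visible (noisySquare is a made-up helper that logs every call):

const noisySquare = x => { console.log('squaring', x); return x * x; };
const firstOnly = compose(ghead, gmap(noisySquare));
console.log([...firstOnly([3, 4, 5])]); // logs "squaring 3" exactly once, then prints [9]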

Well, plus it is functional, and that is fashionable now :)

I hope this approach helps you save a bit of memory when processing arrays tens of megabytes in size. Otherwise, it is hardly worth the trouble.

Source: https://habr.com/ru/post/348170/

