Kirpichev rightly writes about the pitfalls of an intuitive understanding of imperative languages:
http://antilamer.livejournal.com/300607.html .
However, it seems to me important to point out that everything now hidden under the name "monad" is itself rather muddled from the standpoint of pedagogy and evangelism. The classic SPJ/Wadler joke goes roughly: "we should have called these things 'warm fuzzy things' so as not to frighten people with the theory." The joke is strikingly short-sighted. The problem lies in the same plane as labelling the tasks in front of you with the word "stuff" (the very thing Allen fights against in his GTD).
What currently goes by the name "monads" is a whole world: a complex of historically determined causes, problems, solutions, technical capabilities, and theoretical foundations (both algebraic and from the theory of computation).
All these layers can (and should) be separated, to a first approximation, as follows (the order is roughly arbitrary):
- the drive to make effects explicit (a pure way of introducing imperative-like moments into a computation; see Wadler's papers); this includes I/O, STM, parallel computing, and so on;
- a convenient mechanism for reifying the basic micro-strategies of computation: function calls (call-by-name / call-by-value), nondeterminism, state change (assignment!), exception handling, aborting on failure, continuations, backtracking;
- typeclasses as the mechanism by which monads are embedded into the language, and consequently a convenient mechanism for the meta-capture of computations (extremely convenient for domain-specific embedded languages);
- the strict type checking that comes with typeclasses and allows the correctness of object usage to be verified mechanically;
- the existence of the monad laws, which every monad satisfies and which make it possible to build abstract combinators; this sometimes reveals unexpected isomorphisms between different subject areas, and also helps with program optimization and verification;
- an elaborate theoretical framework (category theory) underlying monads; this makes life easier for the authors of the base libraries on which all real programming is then built;
- the fact that monads are just one class in a long chain of interesting, algebraically motivated classes, some weaker than monads and some stronger: Functor, Applicative, Monoid, Traversable, Foldable, Monad and company, Arrow and company;
- the desire to reify certain kinds of computation as an algebraic structure (monoidal computation); this opens up broad scope for optimization, program verification, the creation of abstract combinators, and the elimination of unbounded recursion; in the power of its results this is comparable to the way I/O was once reliably isolated inside the IO monad.
Let us spend a couple of paragraphs on each item.
Making effects explicit. Clean was the first (if I remember correctly) to manage to fit input/output into the ecosystem of a pure lazy language. Input/output was only the first effect to be handled this way. Nowadays everything that does not fit into purity and laziness is packaged as an effect: transactions (and their interaction with I/O), random number generators, synchronization with parallel computations. Besides what connects us to the outside world, there are also computations that do not fit the native semantics of the language, for example strict evaluation of function arguments (as in all imperative languages).
Effects let you worry about only a clearly delimited subset of the problem, because everything outside the effects consists of very "simple" things that cannot do any "harm": there are no deadlocks, no need to think about barriers or evaluation order. The monster is thus driven into a cage.
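As a minimal sketch of what this caging looks like in practice, here is essentially the bank-account transfer example from SPJ's "Beautiful concurrency" (referenced below), written against the stm library; the account type and amounts are simplified for illustration:

    import Control.Concurrent.STM

    type Account = TVar Int

    -- Move money between two accounts atomically. The STM type says the
    -- body may only read and write transactional variables: no arbitrary
    -- I/O can sneak in, so the runtime is free to retry the transaction
    -- and the programmer never thinks about locks or deadlocks.
    transfer :: Account -> Account -> Int -> IO ()
    transfer from to amount = atomically $ do
      fromBal <- readTVar from
      toBal   <- readTVar to
      writeTVar from (fromBal - amount)
      writeTVar to   (toBal + amount)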
See:
1. SPJ, "Beautiful concurrency", http://www.ece.iastate.edu/~pjscott/beautiful.pdf
2. Wadler, "The essence of functional programming", http://mynd.rinet.ru/~alexm/monads/essence.pdf
3. SPJ, "Tackling the awkward squad", http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/
4. Kiselyov et al., "Purely Functional Lazy Non-deterministic Programming", http://www-ps.informatik.uni-kiel.de/~sebf/pub/icfp09.html

Basic micro-strategies of computation. Do not forget that the people who stood at the origins of Haskell were, first and foremost, interested in programming the compiler itself. With monads you can make explicit all of the basic processes that occur during evaluation: function calls with different argument-passing conventions, lexical and dynamic scoping of variables (Env), exception handling, assignment expressed through state threading (State), ambiguous computations (amb / List), computations that may fail (Maybe / Error), continuations (Cont), backtracking.
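A minimal sketch of two of these micro-strategies (the function names here are my own, for illustration): assignment reified as the State monad, and computation-with-failure reified as Maybe:

    import Control.Monad.State

    -- "Assignment" as the State monad: the counter is bumped imperatively,
    -- yet the whole thing is a pure function from an initial state to a
    -- (result, final state) pair.
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    -- runState (tick >> tick >> tick) 0  ==>  (2, 3)

    -- Failure as the Maybe monad: any failing step short-circuits the
    -- rest of the chain without explicit error checks.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    twoDivisions :: Int -> Int -> Int -> Maybe Int
    twoDivisions total count scale = do
      perItem <- safeDiv total count   -- Nothing here aborts everything
      safeDiv perItem scale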
A separate area of research was modular interpreters implemented through monads and the associated monad transformers. This construction made it possible to assemble an interpreter for a desired language (for example, arithmetic + function calls + nondeterminism) by simply combining individual interpreter components. Each component is isolated from the rest and can be developed separately.
Clearly, this approach lets you write the relevant building block once, verify it, and reuse it for any task that requires building an interpreter or even a compiler.
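A rough sketch of the transformer idea, in the spirit of Liang and Hudak (the mini-language and all names are invented here for illustration): a state component and a failure component are stacked to obtain the interpreter's monad.

    import Control.Monad.State
    import Control.Monad.Except

    -- A tiny language: arithmetic, division that can fail, and a counter.
    data Expr = Lit Int | Add Expr Expr | Div Expr Expr | Tick

    -- The interpreter monad is assembled from reusable layers:
    -- StateT contributes the counter, Except contributes failure.
    type Interp a = StateT Int (Except String) a

    eval :: Expr -> Interp Int
    eval (Lit n)   = return n
    eval (Add a b) = (+) <$> eval a <*> eval b
    eval (Div a b) = do
      x <- eval a
      y <- eval b
      if y == 0 then throwError "division by zero" else return (x `div` y)
    eval Tick      = do n <- get; put (n + 1); return n

    runInterp :: Expr -> Either String (Int, Int)
    runInterp e = runExcept (runStateT (eval e) 0)

Adding or removing a layer (say, a Writer for logging) changes only the Interp type and the cases that use it, not the rest of the interpreter.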
See:
1. Guy L. Steele Jr., "Building Interpreters by Composing Monads", http://mynd.rinet.ru/~alexm/monads/steele.pdf
2. S. Liang, P. Hudak, "Modular Denotational Semantics for Compiler Construction", http://mynd.rinet.ru/~alexm/monads/liang2.pdf
3. S. Liang, P. Hudak, "Modular Monadic Semantics", 1998, http://flint.cs.yale.edu/trifonov/cs629/modular-monadic-semantics.pdf
The mechanism for the meta-capture of computations. Monads in Haskell are implemented through typeclasses, which are somewhat similar to interfaces in Java or concepts in C++. Typeclasses themselves are completely orthogonal to monads! Typeclasses let you "intercept" a computation and evaluate it into a different type of result.
For example, you can write a function in Haskell that computes the roots of a quadratic equation. You can evaluate it by passing it three numeric arguments, and it will return a pair of roots. With the help of typeclasses, you can "evaluate" the same function in another context so that the result is JavaScript code: a JavaScript function that computes the roots of a quadratic equation.
Thus the development of the basic algorithm can be separated from the way it is evaluated.
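A hedged sketch of how that might look; the typeclass and its methods are invented here purely for illustration (real DSELs of this kind are usually richer):

    -- An abstract numeric interface: the algorithm below is written once
    -- against it and never mentions a concrete representation.
    class Expr a where
      lit                 :: Double -> a
      add, sub, mul, divE :: a -> a -> a
      sqrtE               :: a -> a

    -- Back-end 1: ordinary numeric evaluation.
    instance Expr Double where
      lit   = id
      add   = (+)
      sub   = (-)
      mul   = (*)
      divE  = (/)
      sqrtE = sqrt

    -- Back-end 2: generate JavaScript source text instead of computing.
    newtype JS = JS { unJS :: String }

    binOp :: String -> JS -> JS -> JS
    binOp op (JS a) (JS b) = JS ("(" ++ a ++ " " ++ op ++ " " ++ b ++ ")")

    instance Expr JS where
      lit x = JS (show x)
      add   = binOp "+"
      sub   = binOp "-"
      mul   = binOp "*"
      divE  = binOp "/"
      sqrtE (JS a) = JS ("Math.sqrt(" ++ a ++ ")")

    -- One root of a*x^2 + b*x + c = 0, written once for every back-end.
    root :: Expr a => a -> a -> a -> a
    root a b c =
      divE (add (sub (lit 0) b)
                (sqrtE (sub (mul b b) (mul (lit 4) (mul a c)))))
           (mul (lit 2) a)

    -- root 2 (-4) 0 :: Double                ==> 2.0
    -- unJS (root (JS "a") (JS "b") (JS "c")) ==> the same formula as JavaScript text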
An actively developing area right now is writing DSLs for parallel computing. You can write a vectorized algorithm in Haskell and then write a series of backends for it: for SSE, for CUDA, for plain C, for any other vectorizing technology that has not even appeared yet. Naturally, the same algorithm can simply be evaluated directly in Haskell to debug its fundamental correctness.
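For instance, with the accelerate library cited below, a dot product might look roughly like this (sketched against the current accelerate API, which may differ in detail from the version available when this was written):

    import Data.Array.Accelerate             as A
    import Data.Array.Accelerate.Interpreter (run)   -- reference back-end

    -- The algorithm is written once against the accelerate DSL.
    dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
    dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

    main :: IO ()
    main = do
      let xs = fromList (Z :. 5) [1..5]  :: Vector Float
          ys = fromList (Z :. 5) [6..10] :: Vector Float
      -- `run` here is the reference interpreter; switching to a GPU or
      -- multicore back-end means importing a different `run`.
      print (run (dotp (use xs) (use ys)))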
See:
1. Lennart Augustsson, "Strongly Typed DSEL" presentation, http://www.infoq.com/presentations/Strongly-Typed-DSEL-Lennart-Augustsson (at the end there is a story about a language for generating Excel files); PDF
2. http://hackage.haskell.org/package/accelerate
3. http://cdsmith.wordpress.com/2009/09/20/side-computations-via-type-classes/ (no monads at all)
The monad laws, the theoretical foundations, and the monad's younger and older siblings. These provide the ability to formally prove some basic things, and to be at least somewhat confident that the foundation will not give way. This is interesting not only to the authors of the most basic libraries, but also to anyone who has reached problems of such complexity that every opportunity for automatic self-checking is greeted with enthusiasm.
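For reference, the laws themselves, stated here as equivalences in comments (they are properties every instance is expected to satisfy, not executable code):

    -- Left identity:   return a >>= f   ≡  f a
    -- Right identity:  m >>= return     ≡  m
    -- Associativity:   (m >>= f) >>= g  ≡  m >>= (\x -> f x >>= g)
    --
    -- Abstract combinators (e.g. those in Control.Monad) rely on exactly
    -- these equations, which is why they work for every well-behaved monad.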
Together with strict type checking, all of this lets you catch situations where the task has been misunderstood, stated inconsistently, or is simply harder than it looks. It also lets you evolve a program while dramatically reducing the likelihood of erroneous changes. Galois, for instance, say in their presentation that they constantly try to encode code requirements as type signatures.
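A hypothetical miniature of that idea (the names and the "sanitization" rule are invented here): raw and checked input get different types, so passing unchecked data where checked data is required becomes a compile-time error rather than a runtime surprise.

    newtype RawInput       = RawInput String
    newtype SanitizedInput = SanitizedInput String

    -- The only way to obtain a SanitizedInput is to go through sanitize.
    sanitize :: RawInput -> SanitizedInput
    sanitize (RawInput s) = SanitizedInput (filter (/= ';') s)  -- toy rule

    runQuery :: SanitizedInput -> IO ()
    runQuery (SanitizedInput s) = putStrLn ("querying: " ++ s)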
Also, the formal structure (including the monoidal style) opens up broad scope for optimizations at the compiler level (including optimizations controlled by the programmer through the rewrite-rules mechanism). Some rigidly structured functions can be optimized nearly down to machine code without sacrificing the abstractness of the source code.
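A minimal example of that rules mechanism: the textbook map-fusion rule from the GHC user's guide, which licenses the compiler to collapse two list traversals into one whenever the pattern on the left appears:

    {-# RULES
          "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}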
See:
1. http://www.galois.com/blog/2009/04/27/engineering-large-projects-in-haskell-a-decade-of-fp-at-galois/
2. http://comonad.com/reader/2009/iteratees-parsec-and-monoid/ (monoidal style)
3. http://www.cse.unsw.edu.au/~dons/papers/CLS07.html (dons et al., "Stream Fusion: From Lists to Streams to Nothing at All")