By Mike Vanier
In the Haskell community, it's a running joke that every Haskell programmer writes at least one monad tutorial as part of their education, and I am no exception. But I know there are already plenty of tutorials on this topic, many of them good, so why write yet another one? Two reasons:
- I think I can explain some aspects of monads better than many other tutorials I have seen.
- I have come to understand monads much better recently, and I want to share that understanding as widely as possible.
Prerequisites
Since I will be writing the examples in Haskell, it would help you, the reader, to know the language, including topics such as polymorphism and type classes. Without this knowledge the material will be hard to follow. Dozens of introductory Haskell tutorials have already been written; an unprepared reader should work through one of them first and then return to this series of articles.
What I do not require is category theory, the very abstract branch of mathematics that the theory of monads (in the sense relevant to this article) comes from. Knowing category theory certainly doesn't hurt, but it is not necessary for understanding the material presented here. I don't believe those who say you need category theory before learning about monads as they apply to programming languages; that simply isn't so. If you have studied it, great, but I see no advantage in leaning on its terminology.
Disclaimer
I am not going to teach you everything there is to know about monads, for two reasons: first, it would take very long, and second, I don't know everything and probably never will. What I want to give you is an understanding of monads at a purely conceptual level: why they are useful, how to work with them, and which common monads are used most often. The links at the end of this series of articles will let you explore monads in more depth.
Do not expect tons of code that you can drop straight into your daily work. This is not a cookbook of ready-made recipes! I really do believe you need to understand what happens when you program with monads, and this tutorial was written to explain that in detail. With it in hand, you can read other tutorials (see the links) to find good monadic solutions to practical problems; my goal is to sketch the big picture and help you genuinely understand monads and how they work.
Finally, fair warning: I will repeat the main ideas over and over until they stick, because I want you to fully understand what I am trying to say. I hope it won't be boring, though it will be long, because monads cannot be explained in a few sentences. Make a cup of coffee and make sure your chair is comfortable; understanding will take time.
Motivation: why should you think about monads?
As far as I know, monads were first used in Haskell, based on the work of Eugenio Moggi and Philip Wadler (two giants to whom I can't compare myself). Since then they have appeared in other languages, especially functional ones. But why should you, the reader (presumably a programmer who has not yet tried the functional programming drug), care about monads?
The main idea of functional programming is to use pure functions as widely as possible. A pure function is a black box: all it does is take one or more arguments, compute something, and return the result. It performs no side effects whatsoever: no reading from or writing to files, no printing to the console, no changing of global variables, no raising of exceptions, and so on. The advantage is that the behavior of a pure function is strictly defined: it always returns the same value for the same arguments. A pure function is more predictable, easier to test, and less error-prone. ({1}) By comparison, an impure function (one with side effects) will not necessarily compute the same result on identical calls. The answer may differ, for example, if the value of a global variable changes, or if the contents of a file being read are different. Impure functions are harder to test, prone to many more errors, and fail in many more situations. For these reasons, functional programming languages encourage you to write pure functions whenever possible.
However, programs built only from pure functions are too limited. There are cases where programs are easier to write using side effects, even though they could be written (painfully) with pure functions alone. And in some cases you cannot do without side effects at all. For example, a program that copies a file from one folder to another interacts with the file system and modifies it; if your pure functions are not allowed to read and write files (side effects both), they cannot solve this problem. So we need ways to work with side effects even in functional languages.
Functional languages come in two kinds: pure and impure. Impure functional languages (Scheme, OCaml) don't worry about this problem: they simply let you write functions with arbitrary side effects, although programmers in these languages usually avoid side effects without good reason. Pure functional languages (such as Haskell) are more hardcore: they flatly forbid writing functions with side effects directly (you will soon see why I wrote "directly"). So, as you can imagine, the topic of side effects in pure languages has been a major research area for a long time.
Monads were the key to solving this problem. (More precisely, one of the keys; other approaches were invented in other functional languages, uniqueness types in Clean being one option.) With monads you can perform computations with side effects without compromising the purity of the language. Monads and the type system together let us separate computations with side effects from other computations, so that they don't interfere with each other. We get all the advantages of code without side effects, guaranteed by the type system, while still being able to perform side effects as needed. That is a very powerful concept.
And as if that weren't enough, it turns out monads have many other uses besides taming side effects. Monads are a very versatile tool for organizing various kinds of computations with transparent behavior, and some programs are greatly simplified by them. In many cases the monadic code is shorter and clearer than its non-monadic counterpart; we will look at examples of this. In short, monads are useful even outside the realm of side effects in functional languages.
Monads are one of the amazing ideas in the theory of programming languages, and they are worth exploring.
Definition: what are monads?
Monads are a generalization of functions, function application, and function composition, which abstracts the very notion of computation beyond that of ordinary functions.
In the process, I hope to explain not only monads themselves and how they work, but also why they baffle programmers who haven't met them before. (Hint: it's not because programmers aren't smart enough or don't know category theory.)
The notion of computation
Well, let's begin unpacking my definition with the phrase "the notion of computation."
The simplest and most predictable notion of computation is the ordinary (pure) function (that is, a function in the mathematical sense). For simplicity, I will consider functions that map one input argument to one output. (A multi-argument function can be reduced to single-argument functions by a procedure called currying; I will say more about this later, so for now just take my word for it.) As I said earlier, the rule for a pure function is: it must always return the same result for the same input. In strongly typed languages like Haskell, a function has a type, which says that for types a and b the function maps a value of type a to a value of type b. Here is what that looks like in Haskell:
f :: a -> b
Here the double colon "::" means "has the type." So the function f has the function type a -> b, which means it takes a value of type a and returns a value of type b. In practice, a and b are usually specific types such as Int, Float, or String, but in Haskell functions can also work independently of the types of their arguments. ({3})
So pure functions are the simplest "notion of computation." What other notions of computation are there? There are many, and you know plenty of them already. They include computations that:
- work with input / output (files, console);
- cause exceptions;
- change some general state (global, local);
- can sometimes fail;
- return many results at once;
- and many others.
Note: I use the phrase "input/output," abbreviated I/O, to mean input and output involving a file or the console. I/O operations are well known to carry side effects. Do not confuse I/O operations with the input and output values of a function.
Think for a second about how you would handle these computations in a conventional language such as C or Java. Computations with I/O? No problem: any C or Java function can do that. What about raising exceptions? In C this is a little awkward, since the language has no exception support, but you can return an error code on failure. (Or you can handle errors with setjmp/longjmp if you are a hardcore low-level programmer.) In Java you simply throw an exception and hope it gets handled somewhere. Besides exceptions there is state; how do we work with that? Easily, in general: in both C and Java you can read and write variables, global and local, in various ways. And computations that can fail? They can be treated as a degenerate case of exceptions, so again no problem. Finally, what about computations that return many values? By "many values" I do not mean one object containing a bunch of results, not a C struct or a Java object; I mean functions that can return several separate results "in parallel." It is not at all obvious how to do that in C or Java. ({4})
It is important to note that in all these cases we are no longer talking about the traditional notion of computation: besides the usual mapping of input to output, "something else" happens. Moreover, there are different kinds of "something else," each with its own notion of computation. We usually don't worry about this when writing programs; we simply accept that our "functions" are not quite functions in the mathematical sense, since they have side effects: I/O, exceptions, changes to global variables, and so on. For most programmers this doesn't matter, until they get bitten by a nasty bug caused by a global variable changing unexpectedly, or the program suddenly dies with an exception, or some other problem arises from the non-functional nature of all these "functions." So we would like to use pure functions as much as possible. We would like to, but there are cases where we can't, and we have to do that "something else," that is, computations with side effects.
The conclusion: we want to have our cake and eat it too. ({5}) We would like to write code in pure functions wherever possible, reaping all the benefits: easier debugging, easier testing, and so on. But we would also like to work with that "something else" in a controlled way, because sometimes there is no way around it, or it is simply the better choice in a particular situation. And that is exactly what monads let us do.
BUT! The key phrase in the last paragraph is "in a controlled way." If the mechanism worked the same way as in C or Java, we could of course solve our problems with all these non-functional computations, but we would also lose the benefits of functional programming: we would have no guarantee that our functions are pure, and even type checking wouldn't help. Some systematic approach is needed for working with other notions of computation, one that does not compromise the purity of the code.
We will now review the familiar concepts of (pure) functions, (pure) function application, and (pure) function composition, and then compare them with the monadic machinery that achieves the same goals.
Functions, function application, and function composition
Earlier I mentioned that Haskell uses special notation for the input and output types of functions. For a function f whose input type is a and output type is b, the notation looks like this:
f :: a -> b
So f has the type a -> b (read "from a to b"). Here is a more concrete example, a function that doubles its input:
f :: Int -> Int
f x = 2 * x
f is of type Int -> Int, because it takes an integer, multiplies it by two, and returns another integer.
Executing a function is simple: we apply it to an argument (assume for now that it takes one argument). This is usually done by writing the argument after the function:

f 2     -- the value of "f 2" is 4
Note that in Haskell the arguments are not wrapped in parentheses, as in many other programming languages.
Currying
In practice, single-argument functions are insufficient for many problems. How do we define a two-argument function? How do we, for example, write the function q, which takes two integer arguments and returns the sum of their squares? The body of the function is easy to write:
q x y = x * x + y * y
I have omitted the type signature. Perhaps you are expecting something like this:
q :: Int Int -> Int
or perhaps this:
q :: ( Int , Int ) -> Int
In fact, the type of this function looks like this:
q :: Int -> Int -> Int
The arrow "->" is right-associative, so this signature means the following:
q :: Int -> ( Int -> Int )
Now it gets interesting. A function of two arguments in Haskell becomes a function of one argument (x in our case) that returns another function of one argument, which in turn takes the next argument (y) and returns the result. And this works because in Haskell, as in other functional languages, functions can be returned as values from other functions. (In other words, in a functional language, functions are just another kind of data.) This way of representing multi-argument functions as single-argument functions is called currying, after Haskell Curry, after whom the Haskell language itself is also named. (Currying was independently discovered by Moses Schönfinkel, so you may call the procedure Schönfinkelization if you prefer.) To see it spelled out, take a function r of four integer arguments w, x, y, and z that returns an integer.
r :: Int -> Int -> Int -> Int -> Int
r w x y z = ... is some function of w, x, y, and z
The right-associative arrow gives:
r :: Int -> ( Int -> ( Int -> ( Int -> Int ) ) )
r w x y z = ... is some function of w, x, y, and z
Here r is a function of one integer argument w, returning a function of type (Int -> (Int -> (Int -> Int))). That function, applied to an integer (x in our example), returns a function of type (Int -> (Int -> Int)). The next function, applied to an integer (y), returns a function of type (Int -> Int), which, applied to one more integer (z), finally returns an integer: the result of the call (r w x y z), which is really ((((r w) x) y) z). That is currying. Haskell curries functions automatically. Currying is very convenient, because you can pass arguments one at a time rather than all at once, and such partially applied functions are often quite useful in their own right. Currying is also conceptually useful to us here, because from now on it is enough to think only about functions of one argument. Perfect!
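To make currying and partial application concrete, here is a small sketch (q3 is my own name for the partially applied function, not something from the text above):

```haskell
-- q is curried: Int -> Int -> Int really means Int -> (Int -> Int).
q :: Int -> Int -> Int
q x y = x * x + y * y

-- Applying q to just one argument yields a new one-argument function,
-- which we can name and reuse like any other value.
q3 :: Int -> Int
q3 = q 3            -- q3 y == 3 * 3 + y * y

-- Full application is just repeated single application:
-- q 3 4 == (q 3) 4 == q3 4 == 25
```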
Haskell has a special operator $, the function application operator. It has the following type:
( $ ) :: ( a -> b ) -> a -> b
(In Haskell, symbolic infix operators are equivalent to functions of the same name enclosed in parentheses, so writing f $ 2 is equivalent to writing ($) f 2. Operators are usually defined in their parenthesized, functional form, for convenience. See the introductory materials on the language if you want to know more. We will use operators a lot here.)
This signature says that for any types a and b, the operator takes a function from a to b as its first argument, applies it to its second argument of type a, and returns a result of type b. In functional languages, passing functions as arguments to other functions is completely routine, so there is nothing unusual here. All of the following are equivalent:

f 2         -- returns 4
f $ 2       -- also returns 4
( $ ) f 2   -- returns 4 again

These are just three different ways of writing the same thing.
The $ operator isn't really needed here, since it is just as easy to apply the function to its argument directly. But for the sake of what follows, we can define a "reverse application" operator, call it >$>, which takes the same arguments in the opposite order:

( >$> ) :: a -> ( a -> b ) -> b
x >$> f = f x     -- the same as f $ x
We can read this as "the operator takes the value x, applies the function to it, and returns the result." If you are familiar with UNIX systems, you may notice that the Unix pipe (|) works in a similar way: you feed it some data, and it applies the next program to that data. We can use function application operators when convenient, though usually we don't use them at all and simply write arguments after functions.
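As a quick sanity check, here is the reverse application operator in use (the fixity declaration is my addition, so that pipelines chain left to right; the standard library offers a similar operator, &, in Data.Function):

```haskell
infixl 1 >$>        -- low precedence, left associative: pipelines chain nicely

( >$> ) :: a -> ( a -> b ) -> b
x >$> f = f x       -- the same as f $ x

double :: Int -> Int
double x = 2 * x

inc :: Int -> Int
inc x = x + 1

-- 5 >$> double >$> inc pipes 5 through double, then inc:
-- inc (double 5) == 11
```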
Now that we have discussed function application, the next important topic is function composition. And it really is an important topic. Suppose we have two functions f and g, and a value x, typed as follows:
x :: a
f :: a -> b
g :: b -> c
where a, b, c are some types. Here is something you could do with x, f, and g: take x, apply f to it (getting a value of type b), and then apply g to the result. The value x of type a is turned into a value of type b, and then that is turned into a value of type c. Writing this in Haskell is easier than saying it:
g ( f x )
This works only if the types of f and g are compatible, that is, if the result type of f is the same as the argument type of g (type b in our case). There is another way to look at it: we take the two functions f and g, of types a -> b and b -> c, and build a third function of type a -> c; applying it to the argument x gives a result of type c. This idea of combining two functions into a third is called function composition. Haskell even defines a composition operator:
( . ) :: ( b -> c ) -> ( a -> b ) -> ( a -> c )
g . f = \ x -> g ( f x )
Here "\ x -> ..." denotes a lambda expression (in other words, an anonymous function) of one argument x. So the composition operator takes two functions as arguments and returns a third. Once again: functions as arguments and functions as return values are perfectly ordinary things that come up at every step.
One nuisance with the composition operator is that the functions appear in what feels like the wrong order. But we can define a "reverse composition" operator >.>:
( >.> ) :: ( a -> b ) -> ( b -> c ) -> ( a -> c )
f >.> g = \ x -> g ( f x )
We can even express it through the reverse application operator >$>:
( >.> ) :: ( a -> b ) -> ( b -> c ) -> ( a -> c )
f >.> g = \ x -> x >$> f >$> g
Or even easier - through the composition operator:
( >.> ) :: ( a -> b ) -> ( b -> c ) -> ( a -> c )
f >.> g = g . f
The signature of >.> is slightly clearer and shows what happens when functions are composed: you take the functions f and g and obtain a new function; call it h. Applying h to a value gives the same result as applying f to the value first and then g to the result. That is what function composition is: a way of making new functions out of existing ones.
Let's look at an example:
f :: Int -> Int
f x = 2 * x
g :: Int -> Int
g y = 3 + y
h :: Int -> Int
h = g . f     -- or the same thing: f >.> g
What does the h function do here? It takes an integer, multiplies it by 2 and adds 3. That is, it is equivalent to the following option:
h :: Int -> Int
h x = 3 + 2 * x
Function composition may not seem like a big deal, but in reality it is one of the cornerstones of functional programming. Composition lets you assemble existing functions into more complex ones without handling arguments by hand. Instead of saying "h is the function obtained by first computing y = f (x) and then computing h = g (y)," we simply say "h is the function we get by applying f and then g." With no intermediate names, the code becomes shorter and higher level. Imagine you had to call ten functions one after another. Recording the intermediate results would give you something like this:
f11 x =
let
x2 = f1 x
x3 = f2 x2
x4 = f3 x3
x5 = f4 x4
x6 = f5 x5
x7 = f6 x6
x8 = f7 x7
x9 = f8 x8
x10 = f9 x9
x11 = f10 x10
in
x11
Very tiring, right? And now let's look at the composition of functions:
f11 = f10 . f9 . f8 . f7 . f6 . f5 . f4 . f3 . f2 . f1
or, the same:
f11 = f1 >.> f2 >.> f3 >.> f4 >.> f5 >.> f6 >.> f7 >.> f8 >.> f9 >.> f10
It is not only shorter but also more intuitive ("applying f1, then f2, then f3, and so on, we get f11"). By the way, this style of writing functions with composition and without arguments is called "point-free style." The irony is that the "point" operator (.) is used much more heavily in "point-free" code than in ordinary code. It would be more accurate to say "argument-free style," since what we omit are the arguments of the functions.
Things to think about, to fix the material in your mind:
- Functions, function application, and function composition are fundamental concepts of functional programming.
- Operators for function application and function composition can take their arguments in whichever order we find convenient.
Monadic functions, monadic values
Everything I have said so far was, I hope, fairly simple. Now we turn to more complicated things.
Earlier I said that the essence of monads is to generalize function application and composition to notions of computation that differ from pure functions, and we even looked at some examples of such "impurity." From the definition of monads it follows that we get certain "extended functions" that do something else besides computing an output from an input. In a schematic pseudo-Haskell, we could write these "extended functions" like this:
f :: a - [something else] -> b
where f is the extended function, a is the argument type, b is the result type, and the "something else" is specific to each notion of computation. In Haskell, monads are what stand behind the words "notion of computation." (We still don't know what a monad is, so for now take my word for it.) We can think of these "extended functions" as "monadic functions." This is not standard terminology; I call them that to distinguish them from ordinary pure functions.
Of course, the "- [something else] ->" notation is not valid Haskell; we will see what it really looks like a bit later, and I hope it will then be clear. For now let's stick with this notation to compare the notions of computation described above, giving each one the name of the corresponding Haskell monad.
- Functions that perform input / output operations in the console or file. The I / O operations correspond to the monad IO, so we write it this way:
f :: a - [IO] -> b
(By the way, the IO monad has other uses too, as we will see later.)
- Functions that can raise exceptions. Several monads correspond to these:
f :: a - [error] -> b
- Functions that interact with the global or local state. This is the State monad:
f :: a - [State s] -> b
- Functions that can fail. We are talking about the Maybe monad:
f :: a - [Maybe] -> b
- Functions that return multiple values simultaneously. The list monad:
f :: a - [list] -> b
I wrote "list" in lowercase because lists in Haskell look a little different thanks to syntactic sugar, so there is no separate name for them.
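Peeking ahead at how this pseudo-notation turns into real Haskell: a "function returning multiple results" is just a pure function whose result type is a list. The function intRoots below is my own illustrative example, not the author's:

```haskell
-- "Both integer square roots" of a number: zero, one, or two results,
-- all returned at once as a list.
intRoots :: Int -> [Int]
intRoots n = [ r | r <- [negate n .. n], r * r == n ]

-- intRoots 4 gives two results, intRoots 0 gives one, intRoots 3 gives none.
```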
Later I will give examples for all these monads, and now consider the functions that perform input / output operations, that is, functions related to the IO monad. We have a pseudo-record:
f :: a - [IO] -> b
One could say that f is a function from a to b acting in the IO monad. As I mentioned above, this is not valid syntax. In real Haskell, the monad has to be folded into the function's type, attached either to the input or to the output type. In principle, there would be two ways to write a monadic function's type:
f :: IO a -> b
or
f :: a -> IO b
It turns out that Haskell uses the second form of writing for monadic functions:
f :: a -> m b
for any monad m; for IO, for example. (For the hardcore, I note that there is also the notion of comonads, where every function has the form f :: c a -> b for some comonad c. Let's leave that for future articles.)
Well then, what really lies behind the notation "f :: a -> m b"? It means that there is some ordinary (pure) function f that takes a value of type a and returns a value of type m b (whatever that may be). So in Haskell, monadic functions are pure functions with a monadic return type. In other words, a pure function takes an ordinary value and returns a monadic one. And what does that mean?
The notation "m b" needs some explanation. b is some type, and m represents some monad; but what exactly is m in Haskell? In Haskell, m must be a type constructor: a special kind of function on types that takes a type as an argument and returns a type. This is not as strange as it may sound. Consider the concept of a "list of integers," whose Haskell type is [Int]. The "list of something" part can be understood as a type constructor that takes a certain type (Int) and returns another type (the list of integers, [Int]). Square brackets are built into Haskell for denoting lists, but you can define your own type constructors. Indeed, any polymorphic type has its own constructor. One of the simplest polymorphic types is Maybe, defined as
data Maybe a = Nothing | Just a
This says that Maybe is a type constructor that takes a type (called a here) and produces a new type as output. If we substitute the type Int, we get the new type Maybe Int, which we can picture as:

data Maybe Int = Nothing | Just Int   -- conceptually; not a declaration you would actually write
Thus, Maybe is a function on types that maps one type to another.
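To make the Maybe type concrete, here is a sketch (safeDiv is my own example of a "function that can fail," anticipating the Maybe monad discussed later in this series):

```haskell
-- Values of type Maybe Int are either Nothing or Just n:
five :: Maybe Int
five = Just 5

missing :: Maybe Int
missing = Nothing

-- A function that can fail: division returns Nothing instead of
-- crashing when the divisor is zero, and Just the quotient otherwise.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```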
Monads, as they exist in Haskell, are type constructors that wrap an existing type. The IO monad, in particular, is the type constructor that produces types such as IO Bool, IO Int, IO Float, IO Char, and IO String, all valid Haskell types. Similarly, the Maybe monad gives us the types Maybe Bool, Maybe Int, and so on.
I will call the types created by a monadic type constructor "monadic types." IO Bool, Maybe Int, and so on are all monadic types.
A marginal note: all monads in Haskell are type constructors, but not all type constructors are monads. Being a type constructor is merely a requirement; as we will see, a monad must also define special operations, and those operations must satisfy several "monad laws."
We arrive at a very important question: what do values of a monadic type represent? I call them monadic values. For example, what is a value of type Maybe Int? Or of type IO Float, what is that?
We have just run into the very thing that makes monads seem "hard to understand."
Let's recap.
- "Monadic functions" are extended functions: besides mapping an input value to an output value, they do "something else."
- The "something else" differs from one notion of computation to another: I/O, exceptions, state, possible failure, multiple results, and so on. In Haskell, each such notion of computation corresponds to a monad.
- In Haskell, monadic functions are pure functions that convert an input value of some type into an output value of a special monadic type. I call these values "monadic."
Now let us rephrase the question: what can we say about the essence of "monadic values"?
The answer: they do not represent anything truly intuitive!

The concept of a monadic function (one that does something else besides turning one piece of data into another) is intuitive. The concept of a "monadic value" is not intuitive at all; it is simply how Haskell happens to denote the outputs of monadic functions. You will waste your time if you try to understand monads by figuring out what monadic values "really are." Don't bother! It isn't worth it!
That said, in the Haskell literature you will find two common ways of explaining monadic values (along with a bunch of silly ways that many tutorials are guilty of):
1. A monadic value of type m a (for some monad m) is a special kind of "action" that does something and returns a value of type a. What the action consists of depends on the particular monad.
2. A monadic value of type m a (for some monad m) is a container holding a value of type a.
Studying monads by contemplating monadic values is the wrong approach; the right one is to think about monadic functions. I will try to convince you that definition (1) does make some sense. Definition (2), as we will see later, is the wrong way to learn monads: most monads are not containers at all, even though some of them can behave like containers.
As a starting point, let's take our (I hope by now reasonably clear) function:
f :: a -> m b
Then the expression f x, where x has type a, will have type m b:

x :: a
f x :: m b
f x is now a "monadic value," which is not entirely intuitive. Consider another function:
g :: a -> ( ) -> a
g x ( ) = x
What g does, literally, is take a value of any type a and wrap it in a function, so that you can get the result back by passing it an empty value. ({6}) The unit type and the unit value are both written in Haskell as parentheses, (), and this is exactly the type/value for cases where the value doesn't matter to us. (By "empty" I mean that the value is of no interest.) An example:
h = g 10
h ( )     -- evaluates to the number 10
Now, what do we get by forming g (f x)? Look at the types:

f x :: m b                 -- see above
g :: a -> ( ) -> a
g ( f x ) :: ( ) -> m b

So g (f x) has type () -> m b. In other words, it takes an empty value and returns a monadic value. Looked at another way, it is a monadic function that converts an empty value (which doesn't matter) into a value of type b, while also doing "something else" along the way. (The "something else" depends on which monad is involved.) That actually makes some sense.
Here is my point. If you feel you need to understand what a monadic value (of type m b) is, it is best to regard it as a monadic function of type () -> m b, that is, a function that not only maps an empty value to a value of type b but also does something else. It is as if a value of type m b were a function of type () -> m b, just written differently. Monadic values, so to speak, are "functions in disguise." That is why they are often called "actions": they are associated with functions without being quite functions. (We even say "perform an action," much as we say "apply a function.")
A few examples won't hurt at this point. I will use two of Haskell's I/O functions:
getLine :: IO String
putStrLn :: String -> IO ( )
getLine is a "function" (actually a monadic value, also known as a "monadic action") that reads a line of text from the console and somehow returns it. putStrLn is a function (a real function this time) that takes a string as its argument and prints it to the console, adding a newline character.
Think for a second about what the types of these functions would look like in a traditional language. You might expect something like this:
getLine :: ( ) -> String       -- not real Haskell
putStrLn :: String -> ( )      -- not real Haskell
The getLine function is easy to understand this way: it takes an empty value (which doesn't matter), somehow interacts with the console, fishes out a string, and returns that string. putStrLn takes a string argument, somehow interacts with the console (printing the string), and returns an empty value (which doesn't matter). Note that the only purpose of the empty values is to make these really be functions, that is, things with an input and an output. If we dropped the (), we would be left with:
getLine :: String
putStrLn :: String
and that is wrong: getLine is not just a string; it has to be applied to an argument before it yields a string. Likewise putStrLn is not just a string: it needs a string argument, and what it returns doesn't matter. In each case the empty value merely stands in for the missing input or output.
But back to Haskell. We have:
getLine :: IO String
putStrLn :: String -> IO ( )
The type of putStrLn is not hard to understand: it is simply a monadic function in the IO monad. The implication is that it takes a string to print, returns an empty value (which doesn't matter), and does "something else" along the way. (In this case, it interacts with the console to print the line, which is what the IO monad permits.)
The type of getLine is harder to understand: getLine is a monadic value. But it is easier for us to think of it as a monadic function of type () -> IO String. Then it makes sense: it is a function that takes a value that doesn't matter and returns a string, interacting with the console in the process (that is, waiting for you to type something).
In Haskell, however, getLine does not have the type () -> IO String; it has the type IO String. So a monadic value turns out to be a monadic function with an implicit input argument of type (). Many Haskell experts think of it as an "action": when they say getLine is an "action" that performs some I/O, they mean a monadic function. When we discuss the State monad in a later article, you will see even more clearly how something that looks like a value can act as a function.
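To see the two types working together, here is a minimal sketch. It uses do-notation, which later parts of this series explain; for now, read it as "perform the getLine action, then build and perform a putStrLn action." The names echo and greet are my own:

```haskell
-- A pure helper: no side effects, just a String -> String mapping.
greet :: String -> String
greet line = "you said: " ++ line

-- getLine  :: IO String         -- a monadic value ("action") producing a String
-- putStrLn :: String -> IO ()   -- a monadic function from String to an action
echo :: IO ()
echo = do
  line <- getLine          -- perform the action, name its String result
  putStrLn (greet line)    -- build a printing action and perform it
```

Running echo would wait for a line of input and print it back; the pure helper greet is the part we can reason about like any ordinary function.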
In the next article we will talk about two fundamental monadic operations: where they come from and what lies behind them.
Content
Part 1: The Basics
Part 2: the functions >>= and return
Part 3: Monad Laws
Part 4: Maybe Monad and List Monad
Notes
{1} In the original “error prone” - “prone to bugs”, which could be translated somewhat differently. ;)
{3} This is called “parametric polymorphism”.
{4} The author has in mind a set of objects of the same type as a result of functions. The problem, in his opinion, is that functions can return a different number of objects: from zero to n, - that is, the number of objects is not known in advance. In both C and Java, this problem is effectively solved by dynamic data types.
{5} In the original - a steady expression: "have our cake and eat it too."
{6} In the original - “single” value, unit.
From translator
Links to other materials from the author, I have not found yet. I will give my own.
1. Haskell Tutorials The most comprehensive collection of links to manuals and articles on Haskell in English.
2. Haskell on xgu.ru - many useful links.
3. Russian Lambda Planet, an excellent source of information on FP in Russian.
4. Haskell Planet, an even better source of information on Haskell and FP, in English.