By Mike Vanier
In the previous articles in this series we studied the conceptual basis of monads, but the discussion stayed fairly abstract. Now that you (I hope) understand what monads are and what they are for, it's time to examine specific monads in detail. This means defining valid instances of the Monad type class for several of the notions of computation we saw earlier. In each case we will use what we know about monadic composition to obtain monadic application (the >>= operator), and use the monad laws to derive the definition of return.
The Maybe monad
The Maybe monad is usually the first one presented in Haskell tutorials, because it is very easy to use, implement and understand. First, take a look at the definition of the Maybe data type:
data Maybe a = Nothing | Just a
This says that Maybe is a type constructor which, given a specific type a, yields a (specific) data type. Equivalently, Maybe is called a "polymorphic" data type; it means the same thing. So if a were Int, we would get:
data Maybe Int = Nothing | Just Int
Except that we never need to write this out directly, since the abstract definition above covers all types.
A value of type Maybe a may or may not be present. If the value is Nothing, it is "as if absent"; if it is Just x for some value x, it is "just" the value x. You can think of this as a container holding either zero elements or one. (Recall that I once said that monadic values are sometimes misleadingly described as containers. This is one of the cases where the picture fits.)
The polymorphic Maybe type is useful because we can use it to model an "extended function" that either produces an output value, or breaks down and cannot return any value at all. (That is, such a function may fail.) It is written like this:
f :: a -> Maybe b
The function f takes a value of type a and returns either Nothing (a sign of failure) or a value Just x, where x has type b. Functions like f operate in the Maybe monad, and the composition of two such functions looks like this:
f :: a -> Maybe b   -- assume f is defined somewhere
g :: b -> Maybe c   -- assume g is defined somewhere
h :: a -> Maybe c   -- the monadic composition of f and g
h = f >=> g         -- recall: >=> is the monadic composition operator
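To make such may-fail functions concrete, here is a small hypothetical example (safeHead is my own name, not a function the article defines): it returns the first element of a list, or fails on an empty list.

```haskell
-- A hypothetical example of a function of type a -> Maybe b:
-- safeHead returns the head of a list, or Nothing if the list is empty.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing   -- empty list: no result, signal failure
safeHead (x : _) = Just x    -- non-empty list: "just" the head
```

Here safeHead [1, 2, 3] is Just 1, while safeHead [] is Nothing, matching the Nothing-as-failure reading above.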
We said that all monads must be type constructors. Maybe is a type constructor, so we're fine there. But for Maybe to become a monad, we need to create an instance of the Monad type class, which means we have to fill in the following definition:
instance Monad Maybe where
  (>>=)  = {- definition of >>= for Maybe -}
  return = {- definition of return for Maybe -}
How can we define (>>=) and return for Maybe?
First we write the skeleton of the definition of >>=, covering the two possible cases of the left operand of type Maybe a:
Nothing >>= f = {- to be filled in -}
Just x  >>= f = {- to be filled in -}
where x has type a. The left-hand sides could equally be written in prefix form:
(>>=) Nothing  f = {- to be filled in -}
(>>=) (Just x) f = {- to be filled in -}
But it reads better if (>>=) is written as an operator rather than as a prefix function, and Haskell allows us to do so.
To complete this definition, let's think about what we want monadic composition to mean in the Maybe monad. Take our example functions f and g, compose them monadically, and call the result h:
f :: a -> Maybe b
g :: b -> Maybe c
h :: a -> Maybe c
h = f >=> g
If we pass an argument to f and it returns Nothing (that is, it fails), what should h return?
f x = Nothing
h x = (f >=> g) x = ???
It seems obvious that if f x returns Nothing, then h must also return Nothing: if part of the computation (the function f) fails to produce a result, the whole computation (the function h) cannot produce one either. The only way h can return a value is if f returns a result (call it y), y is then passed to g, and g y also returns a valid result. If either f or g fails, then h fails, which is to say that evaluating h x yields Nothing.
With this in mind, from our definition of h we get:
h = f >=> g
h = \x -> (f x >>= g)   -- equivalent (from the definition of >=>)
h x = f x >>= g         -- equivalent

-- assume that f x == Nothing
h x = (Nothing >>= g)
    = Nothing

-- therefore:
Nothing >>= g = Nothing
Now we know how the (>>=) operator handles a Nothing argument: it simply returns Nothing:
Nothing >>= f = Nothing
Just x  >>= f = {- to be filled in -}
Note that I replaced g with f here, which is fine, because the particular function names don't matter. In practice we usually get rid of unused names altogether where possible, replacing them with the wildcard _ (underscore), like this:
Nothing >>= _ = Nothing
We can't do that in the second equation, because there we actually use the function f on the right-hand side of the definition.
Now for the other case. If f x does not fail, its result is a value Just y for some y. We need to "unpack" the y from Just y so that it can be passed to g; then g y is the result of the whole function h:
h = f >=> g
h = \x -> (f x >>= g)   -- equivalent (from the definition of >=>)
h x = f x >>= g         -- equivalent

-- suppose that f x == Just y
h x = (Just y >>= g)
    = g y
Which gives us the second half of the definition:

Nothing >>= f = Nothing
Just x  >>= f = f x
Notice that I renamed y to x and g to f. Again, the names of variables and functions don't matter as long as you use them consistently.
This completes the definition of the >>= operator for the Maybe monad. Now we need to derive return for this monad:
return x = ???
for any value x. What are our options? We could simply say that
return x = Nothing
for any x. However, this would violate the monad laws:
return x >>= f == f x
Nothing  >>= f == f x
Nothing        == f x   -- WRONG!
Assuming that at least some f x is not Nothing (consider, say, the monadic function f x = Just x), we get a contradiction. The other option is:
return x = Just x
and it satisfies the monad laws:
return x >>= f
  = (Just x) >>= f   -- by the definition of return for the Maybe monad
  = f x              -- by the definition of >>= for the Maybe monad
-- the first monad law holds

Just x >>= return
  = return x         -- by the definition of >>= for the Maybe monad
  = Just x           -- by the definition of return for the Maybe monad
-- the second monad law holds for Just x

Nothing >>= return
  = Nothing          -- by the definition of >>= for the Maybe monad
-- the second monad law holds for Nothing
Since the laws hold, we take this option. The full definition of the Maybe monad is:
instance Monad Maybe where
  return x      = Just x
  Nothing >>= f = Nothing
  Just x  >>= f = f x
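As a quick sanity check, here is a sketch of the definition in action, chaining a failing function with >>=. (It uses the Maybe type that ships with GHC, where this instance is already defined, with return provided via pure; halve is a hypothetical example function of my own, not one from the article.)

```haskell
-- halve succeeds only on even numbers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Chaining with >>=: the whole pipeline fails as soon as any step does.
chained :: Int -> Maybe Int
chained x = halve x >>= halve >>= halve
```

For example, chained 40 is Just 5 (40 → 20 → 10 → 5), while chained 12 is Nothing: the chain reaches 12 → 6 → 3, and halve 3 fails.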
Wow! We just created our first monad!
Just to be safe, let's check that it satisfies the third monad law, which says:

(mv >>= f) >>= g == mv >>= (\x -> (f x >>= g))
We first check the law for the case when mv = Nothing:
(Nothing >>= f) >>= g              -- left-hand side
  = Nothing >>= g                  -- by the definition of >>=
  = Nothing                        -- by the definition of >>=

Nothing >>= (\x -> (f x >>= g))    -- right-hand side
  = Nothing                        -- by the definition of >>=
Good, that case checks out. Now let's see whether it works for mv = Just v, where v is some value:
((Just v) >>= f) >>= g             -- left-hand side
  = f v >>= g                      -- by the definition of >>=

(Just v) >>= (\x -> (f x >>= g))   -- right-hand side
  = (\x -> (f x >>= g)) v          -- by the definition of >>=
  = f v >>= g                      -- ordinary function application (beta reduction)
That checks out too, so the law holds. This really is a correct definition of the Maybe monad! And the crowd goes wild!
What's the point of all this? It means we can now easily chain together a bunch of monadic functions in the Maybe monad. Why does that matter? It's not hard to imagine many monadic functions in the Maybe monad, that is, functions that can fail. Say they have the type Int -> Maybe Int. Here are three such functions:
f :: Int -> Maybe Int
f x = if x `mod` 2 == 0 then Nothing else Just (2 * x)

g :: Int -> Maybe Int
g x = if x `mod` 3 == 0 then Nothing else Just (3 * x)

h :: Int -> Maybe Int
h x = if x `mod` 5 == 0 then Nothing else Just (5 * x)
We would like to combine them into a single function that applies f, g and h in order:
k :: Int -> Maybe Int
If any of the three functions fails, k should return Nothing. This function multiplies its input by 30 if the input is not divisible by 2, 3 or 5 (and returns Nothing if it is).
From the previous material, if you've followed it, it should be clear that you can define k via monadic composition:
k = f >=> g >=> h
Or you can use the >>= operator:
k x = f x >>= g >>= h
Or perhaps you prefer do-notation:
k x = do y <- f x
         z <- g y
         h z
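To check that the different spellings agree, here is the example assembled into one runnable sketch (kBind and kDo are my own names for the two variants):

```haskell
f, g, h :: Int -> Maybe Int
f x = if x `mod` 2 == 0 then Nothing else Just (2 * x)
g x = if x `mod` 3 == 0 then Nothing else Just (3 * x)
h x = if x `mod` 5 == 0 then Nothing else Just (5 * x)

-- k written with the >>= operator
kBind :: Int -> Maybe Int
kBind x = f x >>= g >>= h

-- the same k written with do-notation
kDo :: Int -> Maybe Int
kDo x = do y <- f x
           z <- g y
           h z
```

As expected, kBind 7 and kDo 7 both give Just 210 (that is, 7 * 30), while any input divisible by 2, 3 or 5 yields Nothing.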
Any way you slice it, it's easy.
Of course, the function k can also be defined without any monadic machinery at all; it would look like this:
k x = case f x of
        Nothing -> Nothing
        Just y  -> case g y of
                     Nothing -> Nothing
                     Just z  -> h z
Now you can see why the Maybe monad matters: it dramatically simplifies code that chains several Maybe functions together. Imagine the raw non-monadic code for composing ten Maybe functions in this style. The rightward creep of the indentation would badly hurt readability, and the overall structure of the computation would be lost in a maze of nested case expressions. With monads, the composition of ten functions is simply:
f11 = f1 >=> f2 >=> f3 >=> f4 >=> f5 >=> f6 >=> f7 >=> f8 >=> f9 >=> f10
or (using >>=):
f11 x = f1 x >>= f2 >>= f3 >>= f4 >>= f5 >>= f6 >>= f7 >>= f8 >>= f9 >>= f10
Using monads, the composition of monadic functions is as simple as the composition of ordinary (nonmonadic) functions.
The Maybe monad is very useful for explaining the basic concepts, but it can also mislead: many people come away believing that the only role of monads is to handle non-functional computations, that is, computations involving input/output (console or files), mutable global state, and so on. As we've just seen, some monadic computations can equally well be written without monads at all. Monads, it turns out, are not mandatory; they are simply very convenient. That's why I said that even though the original motivation for inventing monads was non-functional computation (working with IO), they turned out to have much wider applicability. That is what makes monads good.
Now for the next monad.
The list monad
If you liked the Maybe monad, you're going to love the list monad. ;-) This time we will fill in the following definition:
instance Monad [] where
  (>>=)  = {- definition of >>= for lists -}
  return = {- definition of return for lists -}
Note that [] here denotes the list type constructor, even though the same syntax is used for the empty list. This is a bit of a hack (lists get special syntactic support in Haskell), but there's nothing to be done about it.
As with every monad, the first task is to understand what the monadic functions of this monad look like. For lists, a monadic function f looks like this:
f :: a -> [b]
(where [b] means, of course, "a list of elements of type b"). Recall that the general definition of a monadic function is written as follows:
f :: a -> m b
for some monad m, which must be a type constructor. Lists are an obvious candidate for a monad, since "list of" is a type constructor (even though its syntax is baked into Haskell); had we wanted, we could have defined the list type ourselves:
data List a = Nil | Cons a (List a)
Monadic functions for this type would then look like:
f :: a -> List b
But we will still adhere to the standard syntax.
What kinds of functions are these? They are usually understood as functions that take an input value of type a and produce any number of values of type b, collected in one convenient container (a list). (So again we have a monad that looks like a container.) Another way to think of them is as functions that return multiple results at once: all the different output values are just the elements of the list. (I avoid saying "in parallel", because that would suggest parallel processing, and there is none here.) Useful possibilities open up when we have functions like:
f :: Int -> [Int]
g :: Int -> [Int]
Here both f and g take a single Int and return many Ints. But what if we want to take every result of f and apply g to each of them, collecting all the results? It would be great if we could do this directly, without manually unpacking each element of the result lists of f and g. And we can, using the list monad.
Let's look at concrete examples of such functions:
f :: Int -> [Int]
f x = [x - 1, x, x + 1]

g :: Int -> [Int]
g x = [-x, x]
How do we chain these two functions together? f x returns a list, and to apply g to each element we need the map function:
f 10          --> [9, 10, 11]
map g (f 10)  --> [[-9, 9], [-10, 10], [-11, 11]]
This new result is interesting, but it can't be the composition of f and g, because it has the wrong type (a list of lists of Int rather than a list of Int). We can flatten it into a plain list with the concat function (which simply joins a list of lists into one list):
-- note the type of concat: [[a]] -> [a]
concat (map g (f 10))  --> [-9, 9, -10, 10, -11, 11]
We get the set of all results obtained by applying f to an integer and then applying g to each of f's outputs. If you think of f and g as functions that produce a set of results all at once, then the combined output is the set of all possible results of applying first f and then g. We can draw this as a diagram:
          g  | -9
    | 9 ---->|
    |        |  9
    |
 f  |     g  | -10
10 ---->| 10 ---->|
    |        |  10
    |
    |     g  | -11
    | 11 --->|
             |  11
It's clear from the picture that the composition of f and g is the set of all paths through f and then g.
Curiously, we have just defined the >>= operator for the list monad! It is given as follows:
-- mv :: [a]
-- g  :: a -> [b]
mv >>= g = concat (map g mv)
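We can check this formula against GHC's built-in list bind; bindList below is just the definition above under a hypothetical name of my own:

```haskell
f, g :: Int -> [Int]
f x = [x - 1, x, x + 1]
g x = [-x, x]

-- >>= for lists, exactly as derived above
bindList :: [a] -> (a -> [b]) -> [b]
bindList mv k = concat (map k mv)
```

bindList (f 10) g gives [-9, 9, -10, 10, -11, 11], matching both the diagram and the built-in expression f 10 >>= g.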
where mv is a monadic value in the list monad (which is simply a list of values of type a). In the previous example, mv was the result of evaluating f 10. The definition even works for the empty list [], since mapping a function over an empty list gives an empty list, and concat of an empty list is again an empty list. The result is a remarkably simple definition of the >>= operator.
[Note for GHC fans: I believe the >>= operator for lists is implemented differently, and more efficiently, in the GHC compiler, although it does the same thing.]
How should we define return for this monad? Think of a monadic list value as an "action" that returns many values. Recall that return should behave as a monadic identity, as in other monads: it must take a value and produce an "action" which, when "computed", simply returns that value. This tells us that return can't just return an empty list. A reasonable guess for return is:
return :: a -> [a]
return x = [x]
That is, return simply wraps a single value in a one-element list. Let's check that the monad laws hold with this choice:
-- f :: a -> [b]
-- x :: a
return x >>= f = concat (map f (return x))    -- by the definition of >>=
               = concat (map f [x])           -- by the definition of return
               = concat [f x]                 -- by the definition of map
               = f x                          -- by the definition of concat
-- the first monad law holds

-- mv :: [a]
mv >>= return = concat (map return mv)        -- by the definition of >>=
              = concat (map (\x -> [x]) mv)   -- by the definition of return
-- Two cases:
-- Case 1: mv == []
              = concat (map (\x -> [x]) [])   -- by the definition of mv
              = concat []                     -- by the definition of map
              = []                            -- by the definition of concat
              = mv                            -- by the definition of mv
-- Case 2: mv == [v1, v2, ...]
              = concat (map (\x -> [x]) [v1, v2, ...])   -- by the definition of mv
              = concat [[v1], [v2], ...]      -- by the definition of map
              = [v1, v2, ...]                 -- by the definition of concat
              = mv                            -- by the definition of mv
-- the second monad law holds
Well, the first two monad laws are proven. You might want to try other definitions of return (say, one that always returns the specific list [0, 2, 3], or one that returns infinitely many copies of its argument) and watch them all violate the monad laws. This is a good way to practice with the monad laws.
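As one such exercise, here's a sketch showing that a hypothetical badReturn (my own name), which duplicates its argument, breaks the first (left identity) law:

```haskell
-- A "wrong" return for lists that yields two copies instead of one:
badReturn :: a -> [a]
badReturn x = [x, x]

-- Left identity demands: badReturn x >>= f == f x.
-- With f = \x -> [x] and x = 1, the two sides disagree:
leftSide, rightSide :: [Int]
leftSide  = badReturn 1 >>= \x -> [x]   -- evaluates to [1, 1]
rightSide = (\x -> [x]) 1               -- evaluates to [1]
```

Since leftSide is [1, 1] but rightSide is [1], this candidate return is ruled out.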
It remains to prove the third monad law before we can call the list a real monad. This one, admittedly, is harder, but we'll try anyway. To simplify the task, take the "nice" form of the third monad law (stated in terms of monadic composition). First we need the definition of monadic composition for lists:
-- The third monad law (nice version):
(f >=> g) >=> h = f >=> (g >=> h)

-- By definition:
f >=> g = \x -> f x >>= g

-- Substituting the definition of >>= for the list monad:
f >=> g = \x -> concat (map g (f x))

-- which can be rewritten using the composition operator (.):
f >=> g = concat . map g . f
In addition, I will use several properties of the concat and map functions. For now you'll have to take them on faith; later I'll show how to derive them:
-- equation 1:
map (f . g) = map f . map g
-- equation 2:
map f . concat = concat . map (map f)
-- equation 3:
concat . concat = concat . map concat
Recall that the dot (.) is the (pure) function composition operator. It binds less tightly than function application, so an expression like map f . map g means (map f) . (map g). Haskell programmers usually drop parentheses wherever possible. It's also important to realize that, for instance, map f is itself a function, obtained from map, which in general takes two arguments (a function to apply to the elements, and the list itself). If you recall what I said about currying, you'll see that map f is a function that takes one list and returns another, with f applied to every element. We'll be using currying a lot from here on.
So, here is the proof, using everything above:
(f >=> g) >=> h
  = (concat . map g . f) >=> h                      -- by the definition of >=>
  = concat . map h . (concat . map g . f)           -- by the definition of >=>
  = concat . map h . concat . map g . f             -- removing unnecessary parentheses

f >=> (g >=> h)
  = f >=> (concat . map h . g)                      -- by the definition of >=>
  = concat . map (concat . map h . g) . f           -- by the definition of >=>
  = concat . map ((concat . map h) . g) . f         -- equivalent regrouping
  = concat . (map (concat . map h)) . (map g) . f   -- by equation 1
  = concat . map (concat . map h) . map g . f       -- removing unnecessary parentheses
  = concat . map concat . map (map h) . map g . f   -- by equation 1
So it remains to show that:

concat . map h . concat = concat . map concat . map (map h)
Let's prove it.
-- add parentheses for clarity:
concat . (map h . concat) = concat . map concat . map (map h)
-- by equation 2:
concat . concat . map (map h) = concat . map concat . map (map h)
-- add parentheses for clarity:
(concat . concat) . map (map h) = concat . map concat . map (map h)
-- by equation 3:
concat . map concat . map (map h) = concat . map concat . map (map h)
And that's the end. Phew! In practice Haskellers rarely go through this, but a proof is what it takes to show that a would-be monad really is a monad.
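Though no substitute for the proof, it is reassuring to spot-check equations 1–3 on sample data; the functions and lists below are arbitrary choices of mine:

```haskell
f, g :: Int -> Int
f = (+ 1)
g = (* 2)

-- Each eqN checks the corresponding equation on one concrete input.
eq1, eq2, eq3 :: Bool
eq1 = map (f . g) [1, 2, 3]
        == (map f . map g) [1, 2, 3]
eq2 = (map f . concat) [[1], [2, 3]]
        == (concat . map (map f)) [[1], [2, 3]]
eq3 = (concat . concat) [[[1], [2]], [[3, 4]]]
        == (concat . map concat) [[[1], [2]], [[3, 4]]]
```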
Note in the margin: deriving the map/concat identities (equations 1, 2 and 3)
Preliminaries
Before tackling those proofs, we first need to prove a few other identities (math is hard!). Here they are:
-- equation 4:
concat (x:xs) = x ++ concat xs
-- equation 5:
concat (x ++ y) = concat x ++ concat y
-- equation 6:
map f (x ++ y) = map f x ++ map f y
Equation 4 follows directly from the definition of concat. Equation 5 is easily proved by induction on x, using equation 4:
-- base case: x is the empty list
concat ([] ++ y) = concat [] ++ concat y
concat y = [] ++ concat y          -- by the definition of concat []
concat y = concat y                -- by the definition of ++
-- Correct.

-- inductive case: x is non-empty; x1 is the head of x, xs is the tail.
concat ((x1:xs) ++ y)
  = concat (x1:(xs ++ y))          -- by the definition of ++
  = x1 ++ concat (xs ++ y)         -- by equation 4
  = x1 ++ concat xs ++ concat y    -- inductive hypothesis

concat (x1:xs) ++ concat y
  = x1 ++ concat xs ++ concat y    -- by equation 4
-- Correct, which is what we needed to prove.
Equation 6 is proved the same way:
-- base case: x is the empty list
map f ([] ++ y) = map f [] ++ map f y
map f y = [] ++ map f y
map f y = map f y
-- Correct.

-- inductive case: x is non-empty; x1 is the head of x, xs is the tail.
map f (x ++ y)
  = map f ((x1:xs) ++ y)
  = map f (x1:(xs ++ y))           -- by the definition of ++
  = f x1 : map f (xs ++ y)         -- by the definition of map
  = f x1 : (map f xs ++ map f y)   -- inductive hypothesis
  = (f x1 : map f xs) ++ map f y   -- by the definition of ++
  = map f (x1:xs) ++ map f y       -- by the definition of map
  = map f x ++ map f y             -- by the definition of x
-- Correct, which is what we needed to prove.
With these in hand, we can now prove equations 1, 2 and 3.
Equation 1:
map (f . g) = map f . map g
We use induction on the implicit list argument of both sides, plus the definition of map:
-- base case: the empty list
map (f . g) [] = []
(map f . map g) [] = map f (map g []) = map f [] = []
-- OK

-- inductive case: a non-empty list
map (f . g) (x:xs)
  = ((f . g) x) : (map (f . g) xs)     -- by the definition of map
  = (f (g x)) : (map (f . g) xs)       -- by the definition of (.)

(map f . map g) (x:xs)
  = map f (map g (x:xs))               -- by the definition of (.)
  = map f ((g x) : (map g xs))         -- by the definition of map
  = (f (g x)) : (map f (map g xs))     -- by the definition of map
  = (f (g x)) : ((map f . map g) xs)   -- by the definition of (.)
  = (f (g x)) : (map (f . g) xs)       -- inductive hypothesis
-- Correct, which is what we needed to prove.
Equation 2:
map f . concat = concat . map (map f)
We prove by induction:
-- base case: the empty list
(map f . concat) [] = map f (concat []) = map f [] = []
(concat . map (map f)) [] = concat (map (map f) []) = concat [] = []
-- OK

-- inductive case: a non-empty list
(map f . concat) (x:xs)
  = map f (concat (x:xs))                    -- by the definition of (.)
  = map f (x ++ concat xs)                   -- by equation 4
  = map f x ++ (map f (concat xs))           -- by equation 6
  = map f x ++ ((map f . concat) xs)         -- by the definition of (.)
  = map f x ++ ((concat . map (map f)) xs)   -- inductive hypothesis
  = map f x ++ concat (map (map f) xs)       -- by the definition of (.)

(concat . map (map f)) (x:xs)
  = concat (map (map f) (x:xs))              -- by the definition of (.)
  = concat (map f x : map (map f) xs)        -- by the definition of map
  = map f x ++ concat (map (map f) xs)       -- by equation 4
-- Correct, which is what we needed to prove.
Equation 3:
concat . concat = concat . map concat
As always, we use induction:
-- base case: the empty list
(concat . concat) [] = concat (concat []) = concat [] = []
(concat . map concat) [] = concat (map concat []) = concat [] = []
-- Correct.

-- inductive case: a non-empty list
(concat . concat) (x:xs)
  = concat (concat (x:xs))                -- by the definition of (.)
  = concat (x ++ concat xs)               -- by equation 4
  = concat x ++ concat (concat xs)        -- by equation 5

(concat . map concat) (x:xs)
  = concat (map concat (x:xs))            -- by the definition of (.)
  = concat (concat x : map concat xs)     -- by the definition of map
  = concat x ++ concat (map concat xs)    -- by equation 4
  = concat x ++ (concat . map concat) xs  -- by the definition of (.)
  = concat x ++ (concat . concat) xs      -- inductive hypothesis
  = concat x ++ concat (concat xs)        -- by the definition of (.)
-- Correct, which is what we needed to prove.
I hope you no longer have any doubt that the list monad really is a monad. ;-)
The most interesting question, of course, is: what can we do with the list monad that would be hard without it? Here's a simple example: find all pairs of numbers between 1 and 6 (think of two dice) whose sum is 7. With the list monad, the problem is trivial:
-- Using do-notation:
do n1 <- [1..6]
   n2 <- [1..6]
   if n1 + n2 == 7 then return (n1, n2) else []
-- Result: [(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)]
You can also rewrite this without do-notation, though it comes out more cryptic:
[1..6] >>= \n1 ->
  [1..6] >>= \n2 ->
    if n1 + n2 == 7 then return (n1, n2) else []
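Wrapped up as a definition, the dice example can be run and compared against the expected list (pairsTo7 is my own name for it):

```haskell
-- All pairs of dice values summing to 7, via the list monad.
pairsTo7 :: [(Int, Int)]
pairsTo7 = [1..6] >>= \n1 ->
           [1..6] >>= \n2 ->
           if n1 + n2 == 7 then return (n1, n2) else []
```

Evaluating pairsTo7 yields [(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)], exactly the result quoted above.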
How does it work? You could sit down and trace through all the computations involving >>= and return for lists, but here is the hand-waving explanation. [1..6] is a monadic value in the list monad, and n1 ranges over all of its elements at once; n2 does exactly the same. All pairs (n1, n2) with the right sum are returned. This is how we compute a function over all elements as if they were a single element. That is the essence of the list monad.
If you're already adept at Haskell, a siren is probably wailing in your head. "Hold on!" I hear you protest. "Why not just use list comprehensions?" And indeed:

[(n1, n2) | n1 <- [1..6], n2 <- [1..6], n1 + n2 == 7]
The list monad and list comprehensions are equally expressive; which syntax to use is a matter of preference, and may also be determined by the task at hand. In his paper "Comprehending Monads", Phil Wadler (the title is a pun, as in many of his papers) even proposed extending comprehension syntax to arbitrary monads. The proposal was ultimately rejected in favor of the current notation.
But the list monad is more than an alternative to list comprehensions. On the one hand, there are many very general functions that work with instances of any monad, and they work with the list monad too. On the other hand, there is an extension of the Monad type class called MonadPlus. It adds extra functionality to monads (specifically, it defines a "zero" element for the monad and introduces an operation for "adding" two monadic values). Lists are instances of both Monad and MonadPlus, which means that generic functions written against MonadPlus work on lists as well. (For example, there is a generalization of the concat function, the msum function, which works with all MonadPlus instances, lists included.) It is a real win to be able to write generic functions that work across many data types instead of specializing them for each type separately.
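Here is a minimal sketch of msum in action, using the real msum from Control.Monad; both behaviors below follow from the standard MonadPlus instances for Maybe and lists (the names firstHit and flattened are my own):

```haskell
import Control.Monad (msum)

-- For Maybe, msum returns the first non-Nothing value (or Nothing).
firstHit :: Maybe Int
firstHit = msum [Nothing, Just 3, Just 5]

-- For lists, msum behaves like concat.
flattened :: [Int]
flattened = msum [[1, 2], [], [3]]
```

Here firstHit is Just 3 and flattened is [1, 2, 3]: one generic function, two data types.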
Next
In the next article, we will look at monads that allow you to track errors.
Contents
Part 1: The Basics
Part 2: The functions >>= and return
Part 3: The Monad Laws
Part 4: The Maybe Monad and the List Monad