
Categories big and small

This is the fourth article in the series "Category Theory for Programmers."

You can gain a real appreciation for categories by studying a variety of examples. Categories come in all shapes and sizes and often appear in unexpected places. We will start with the simplest one.

No objects


The simplest category has no objects and, as a consequence, no morphisms. It is a very sad category on its own, but it matters in the context of other categories, for example, in the category of all categories (yes, there is one). If you think that an empty set makes sense, then why not an empty category?

Simple graphs


You can build categories just by connecting objects with arrows. Start with any directed graph and turn it into a category simply by adding more arrows. First, add an identity arrow at each node. Then, for any two arrows such that the end of one coincides with the beginning of the other (in other words, any two composable arrows), add a new arrow that serves as their composition. Every time you add a new arrow, you also have to consider its composition with every other arrow (except the identity arrows) and with itself. You usually end up with infinitely many arrows, but that's okay.

Another way of looking at this process is that you are creating a category that has an object for every node of the graph, and all possible chains of composable edges as morphisms. (You may regard the identity morphisms as chains of length zero.)

Such a category is called the free category generated by the given graph. It is an example of a free construction: the process of completing a given structure by extending it with the minimum number of elements needed to satisfy its laws (in this case, the laws of a category). We will see more examples of this construction later.
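To make this more concrete, here is a minimal Haskell sketch (not part of the original article; the names Node, Edge, Path, and compose are made up for illustration) that models morphisms of a free category as chains of composable edges:
 -- A tiny graph with nodes A, B, C and edges AB : A -> B, BC : B -> C.
 data Node = A | B | C deriving (Eq, Show)
 data Edge = AB | BC  deriving (Eq, Show)

 src, dst :: Edge -> Node
 src AB = A
 src BC = B
 dst AB = B
 dst BC = C

 -- A morphism of the free category is a chain of composable edges;
 -- the empty chain at a node plays the role of the identity arrow.
 type Path = [Edge]

 identity :: Path
 identity = []

 -- Composition concatenates chains (written right to left, like (.)).
 compose :: Path -> Path -> Path
 compose g f = f ++ g

Nothing in this sketch enforces that consecutive edges actually compose; a fuller treatment would check src and dst when composing.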

Orders


And now for something completely different! A category where morphisms are relations between objects: the relation of being less than or equal. Let's check whether this really is a category. Do we have identity morphisms? Every object is less than or equal to itself! Do we have composition? If A <= B and B <= C, then A <= C! Is composition associative? Yes! A set with such a relation is called a preorder, and we have just shown that a preorder is a category.

You can also take a stronger relation that satisfies the additional condition that if A <= B and B <= A, then A must be the same as B. That is called a partial order.

Finally, you can impose the condition that any two objects are in relation with each other, one way or another; that gives you a linear order, or total order.

Let's characterize these ordered sets as categories. A preorder is a category in which there is at most one morphism from any object A to any object B. Another name for such a category is "thin". A preorder is a thin category.

The set of morphisms from object a to object b in a category C is called the hom-set and is written C(a, b) (or, sometimes, HomC(a, b)). So every hom-set in a preorder is either empty or a singleton. That includes the hom-set C(a, a), the set of morphisms from a to itself, which must be a singleton containing only the identity morphism. In a preorder you may have cycles; in a partial order they are forbidden.
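As a small illustration (a sketch, not from the original text; the function names are made up, and Haskell's Ord is used as a stand-in for the ordering relation), the hom-set of such a category can be represented by a Bool: a morphism from a to b exists exactly when a <= b, reflexivity provides the identities, and transitivity provides composition:
 -- In a thin category every hom-set is empty (False) or a singleton (True).
 hom :: Ord a => a -> a -> Bool
 hom a b = a <= b

 -- Identity: reflexivity, a <= a.
 hasIdentity :: Ord a => a -> Bool
 hasIdentity a = hom a a

 -- Composition: transitivity, a <= b and b <= c implies a <= c.
 composable :: Ord a => a -> a -> a -> Bool
 composable a b c = not (hom a b && hom b c) || hom a c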

One can easily verify that any directed acyclic graph generates a partial order as its free category.

It is important to be able to recognize preorders, partial orders, and total orders because of sorting. Sorting algorithms such as quicksort, bubble sort, merge sort, and so on can only work correctly on total orders. Partial orders can be sorted using topological sort.
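For instance, here is a short sketch (assuming the standard containers package; the task names are invented) contrasting sorting a total order with topologically sorting a partial order given as a DAG:
 import Data.List  (sort)
 import Data.Graph (graphFromEdges, topSort)

 -- A total order: Ord Int, so ordinary sorting applies.
 sortedInts :: [Int]
 sortedInts = sort [3, 1, 2]        -- [1,2,3]

 -- A partial order given as a DAG: an edge x -> y means "x before y".
 taskOrder :: [String]
 taskOrder =
   let (graph, fromVertex, _) =
         graphFromEdges [ ("parse",   "parse",   ["compile"])
                        , ("compile", "compile", ["link"])
                        , ("link",    "link",    []) ]
       name v = case fromVertex v of (n, _, _) -> n
   in map name (topSort graph)      -- ["parse","compile","link"]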

Monoid as a set


The monoid is a surprisingly simple but amazingly powerful concept. It underlies arithmetic: both addition and multiplication form monoids. Monoids are ubiquitous in programming. They show up as strings, lists, foldable data structures, futures in concurrent programming, events in functional reactive programming, and so on.

Traditionally, a monoid is defined as a set with a binary operation. All that is required of the operation is that it be associative and that there be one special element that behaves like a unit with respect to it.

For example, natural numbers with addition and zero form a monoid. Associativity means that:
(a + b) + c = a + (b + c) 

(In other words, we can omit the brackets when adding numbers.)

The neutral element is zero because:
 0 + a = a 

and
 a + 0 = a 

The second equation is redundant, because addition is commutative (a + b = b + a), but commutativity is not part of the definition of a monoid. For instance, string concatenation is not commutative, yet it forms a monoid. The neutral element for string concatenation, by the way, is the empty string, which can be attached to either side of a string without changing it.

In Haskell, we can define a type class for monoids — a type for which there is a neutral element called mempty and a binary operation called mappend:
 class Monoid m where
     mempty  :: m
     mappend :: m -> m -> m

The type signature for the function of two arguments, m -> m -> m, may look strange at first glance, but it will become clear after we talk about currying.
A signature with multiple arrows can be interpreted in two basic ways: as a function of several arguments, with the rightmost type being the return type; or as a function of one argument (the leftmost type) that returns a function. The latter interpretation can be emphasized by adding parentheses (which are redundant, since the arrow is right-associative), as in m -> (m -> m). We will come back to this interpretation in a moment.
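As a quick illustration (not from the original text), partially applying a two-argument function yields a function of the remaining argument:
 -- (+) :: Num a => a -> a -> a can be read as a -> (a -> a)
 addFive :: Int -> Int
 addFive = (+) 5     -- a function still waiting for its second argument
 -- addFive 2 == 7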

Note that in Haskell there is no way to express the monoidal properties of mempty and mappend (that is, that mempty is neutral and that mappend is associative). Making sure they hold is the responsibility of the programmer. (Translator's note: GHC has a similar mechanism, intended for a different purpose, that can be used as a kind of annotation known to the compiler: www.haskell.org/haskellwiki/GHC/Using_rules )
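One common workaround is to state the laws as tests. The following sketch (assuming the QuickCheck library and the standard String instance discussed just below; the property names are made up) checks associativity and the unit laws for strings:
 import Test.QuickCheck (quickCheck)

 -- The compiler cannot verify these laws, but we can at least test them.
 prop_assoc :: String -> String -> String -> Bool
 prop_assoc x y z = mappend (mappend x y) z == mappend x (mappend y z)

 prop_leftUnit, prop_rightUnit :: String -> Bool
 prop_leftUnit  x = mappend mempty x == x
 prop_rightUnit x = mappend x mempty == x

 main :: IO ()
 main = do
     quickCheck prop_assoc
     quickCheck prop_leftUnit
     quickCheck prop_rightUnit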

Haskell classes are not as intrusive as C++ classes. When you define a new type, you don't have to specify its class up front. You can be lazy and declare a given type to be an instance of some class much later. As an example, let's declare String to be a monoid by providing the implementation of mempty and mappend (this is, in fact, already done for you in the standard Prelude):
 instance Monoid String where
     mempty = ""
     mappend = (++)

Here we reused the list concatenation operator (++), because a String is, in fact, a list of characters.

A little about Haskell syntax: any infix operator can be turned into a two-argument function by enclosing it in parentheses. Given two strings, you can concatenate them by inserting ++ between them:
 "Hello " ++ "world!" 

or passing them as two arguments to the (++) function:
 (++) "Hello " "world!" 

Note that function arguments are not separated by commas and are not surrounded by parentheses. (This is probably the hardest thing to get used to while learning Haskell.)

It is worth noting that Haskell makes it possible to express the equality of functions directly:
 mappend = (++) 

Conceptually, this is not the same as expressing the equality of the values returned by the functions:
 mappend s1 s2 = (++) s1 s2 

The former expresses the equality of morphisms in the category Hask (or Set, if you ignore bottoms). Such equations are not only more succinct, they often generalize to other categories. The latter is called extensional equality and states that for any two input strings the results of mappend and (++) are equal. Since argument values are sometimes called points (as in: the value of f at point x), this is called point-wise equality. Function equality without specifying the arguments is called point-free. (Incidentally, point-free equations often involve composition of functions, which is denoted by a dot, so the name may seem strange to newcomers.)

The closest analogue of this monoid declaration in C++ is the proposed syntax for concepts.
 template<class T>
 T mempty = delete;

 template<class T>
 T mappend(T, T) = delete;

 template<class M>
 concept bool Monoid = requires (M m) {
     { mempty<M> } -> M;
     { mappend(m, m) } -> M;
 };

The first definition uses a value template (another proposed feature). A polymorphic value is a family of values: a different value for each type.

The keyword delete means that there is no default value: it has to be specified on a case-by-case basis. Similarly, there is no default implementation for mappend.

The concept Monoid is a predicate (hence the type bool), which checks whether the corresponding definitions of mempty and mappend exist for a given type M.

You can implement the Monoid concept by providing the appropriate specializations and overloads:
 template<>
 std::string mempty<std::string> = {""};

 std::string mappend(std::string s1, std::string s2)
 {
     return s1 + s2;
 }

Monoid as a category


That was the “familiar” definition of a monoid in terms of the elements of a set. But, as you know, in category theory we try to get away from sets and their elements and instead talk about objects and morphisms. So let's change our point of view a bit and think of applying the binary operator as “shifting” or “moving around” elements of the set.

For example, there is the operation of adding 5 to every natural number. It maps 0 to 5, 1 to 6, 2 to 7, and so on. This is a function defined on the set of natural numbers. That's good: we have a function and a set. In general, for any number n there is a function of adding n, the “adder” of n.

How do these adders compose? The composition of the function that adds 5 with the function that adds 7 is the function that adds 12. So the composition of adders obeys rules equivalent to the rules of addition. That's good too: we can replace addition with composition of functions.

But that's not all: there is also an adder for the neutral element, zero. Adding zero does not move the elements, so it is the identity function on the set of natural numbers.

Instead of giving you the traditional rules of addition, I could give you the rules for composing adders without losing any information. Note that composition of adders is associative, because composition of functions is associative; and we have the zero adder, which corresponds to the identity function.
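Spelled out as a small Haskell sketch (the name adder is invented for illustration):
 -- An "adder" turns a number into a function that shifts the whole set.
 adder :: Int -> (Int -> Int)
 adder n = (+ n)

 -- Composition of adders mirrors addition of numbers:
 --   adder 5 . adder 7  behaves like  adder 12
 shiftTwelve :: Int -> Int
 shiftTwelve = adder 5 . adder 7

 -- The adder of zero is the identity function.
 shiftZero :: Int -> Int
 shiftZero = adder 0          -- same behaviour as id
 -- e.g. shiftTwelve 0 == 12, shiftZero 42 == 42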

An astute reader might have noticed that the mapping of numbers to adders follows from the second interpretation of the mappend signature: m -> (m -> m). The type tells us that mappend maps an element of the monoid set to a function acting on that set.

Now I want you to forget that you are dealing with the set of natural numbers and just think of it as a single object, a blob with a bunch of morphisms: the adders. A monoid is a single-object category. In fact, the name monoid comes from the Greek mono, which means single. Every monoid can be described as a category with one object and a set of morphisms that obey the appropriate rules of composition.



String concatenation is interesting because we can choose between appending on the right and appending on the left. The two models are mirror images of each other. You can easily convince yourself that appending “bar” to “foo” on the right corresponds to prepending “rab” to “oof” on the left.
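The standard library captures this mirror image with the Dual wrapper from Data.Monoid, which reverses the order of mappend (a small sketch, not part of the original text):
 import Data.Monoid (Dual(..))

 -- The plain String monoid appends on the right...
 rightAppend :: String
 rightAppend = "foo" `mappend` "bar"                     -- "foobar"

 -- ...while Dual flips the arguments of mappend, i.e. it prepends.
 leftAppend :: String
 leftAppend = getDual (Dual "foo" `mappend` Dual "bar")  -- "barfoo"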

You may ask the question: does every categorical monoid, that is, a single-object category, define a unique set with a binary operator? It turns out that we can always extract a set from a single-object category: the set of its morphisms, the adders in our example. In other words, we have the hom-set M(m, m) of the single object m in the category M. We can easily define a binary operator on this set: the monoidal product of two elements of the set is the element corresponding to the composition of the corresponding morphisms. If you give me two elements of M(m, m) corresponding to the morphisms f and g, their product corresponds to the composition g∘f. The composition always exists, because the source and the target of these morphisms are one and the same object, and it is associative by the laws of a category. The identity morphism is the neutral element of this product. Thus we can always recover a monoid set from a categorical monoid. For all practical purposes they are the same thing.
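A close Haskell analogue of this extraction is Endo from Data.Monoid: its elements wrap morphisms a -> a, mappend is composition, and mempty is the identity function. A brief sketch (the adder values are invented for illustration):
 import Data.Monoid (Endo(..))

 -- Endo a wraps functions a -> a; mappend is composition, mempty is id.
 plusFive, plusSeven :: Endo Int
 plusFive  = Endo (+ 5)
 plusSeven = Endo (+ 7)

 plusTwelve :: Endo Int
 plusTwelve = plusFive `mappend` plusSeven   -- wraps (+ 5) . (+ 7)

 -- appEndo plusTwelve 0           == 12
 -- appEndo (mempty :: Endo Int) 3 == 3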



There is just one small problem for mathematicians: morphisms are not required to form a set. In the world of categories there are things bigger than sets. A category in which the morphisms between any two objects form a set is called locally small. As promised, I will ignore such subtleties, but I felt I should mention them.

Many interesting phenomena in category theory stem from the fact that the elements of a hom-set can be viewed both as morphisms, which follow the rules of composition, and as elements of a set. Here, the composition of morphisms in M corresponds to the monoidal product in the set M(m, m).

Category Theory for Programmers: Preface
Category: essence of composition
Types and functions
Categories big and small
Categories

Source: https://habr.com/ru/post/248257/

