The move to functional programming began in earnest about a decade ago. We saw languages like Scala, Clojure, and F# attract attention. This movement was more than the usual "Ooh, a cool new language!" enthusiasm. There was something real driving it, or so we thought.
Moore's law told us that the speed of computers would double every 18 months. And it did, from the 1960s into the 2000s. And then it stopped. Frustratingly. Clock speeds reached 3 GHz and leveled off. The speed-of-light limit had been reached. Signals simply could not propagate across the surface of the chip fast enough to go any faster.
And so the hardware designers changed their strategy. To deliver more throughput, they added more processors (cores). To make room for those cores, they removed much of the caching and pipelining hardware from the chip. The individual processors therefore became somewhat slower than before, but there were more of them. Throughput grew.
I got my first dual-core machine eight years ago. Two years later I bought a quad-core machine. And so the multiplication of cores began. And we all understood that this would affect software development in ways we could not even imagine.
One of our reactions was to study functional programming (FP). FP strongly discourages changing the state of a variable once it has been initialized. This has a profound effect on concurrency. If you cannot change the state of a variable, you cannot have a race condition. If you cannot update the value of a variable, you cannot have a concurrent update problem.
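A minimal Java sketch of this idea (the class and method names here are my own, not from the article): data that cannot change can be read by any number of threads at once, with no locks and no possibility of a race, because there is no state anyone can update.

```java
import java.util.List;

public class ImmutableSum {
    // Because the list is immutable, any number of threads may traverse it
    // concurrently; there is no shared state to race on and no lock to take.
    public static int sum(List<Integer> values) {
        return values.parallelStream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // List.of produces an immutable list; any attempt to mutate it
        // throws UnsupportedOperationException.
        List<Integer> values = List.of(1, 2, 3, 4, 5);
        System.out.println(sum(values)); // 15
    }
}
```

The parallel stream is free to split the work across cores precisely because no thread can modify the list out from under another.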
This, of course, was seen as the solution to the multi-core problem. As the number of cores grew, concurrency, no, simultaneity, would become a major issue. FP promised a programming style that would reduce the problems of working with 1024 cores in a single processor.
So everyone began to learn Clojure, or Scala, or F#, or Haskell, because they knew the freight train was coming at them, and they wanted to be ready when it arrived.
But the freight train never arrived. Six years ago I bought a quad-core laptop. I have had two more since then. It seems that the next laptop I get will also be quad-core. Are we seeing another plateau?
By the way, last night I watched a film from 2007. The heroine used a laptop, browsed pages in a stylish browser, used Google, and received text messages on a flip phone. Oh, it was dated; I could see that the laptop was an older model, the browser an older version, and the phone far from a modern smartphone. And yet the changes were not as striking as the changes between 2000 and 2011, and not nearly as striking as the changes between 1990 and 2000. Are we seeing a plateau in the pace of computer and software technology?
Well, perhaps FP is not as critical a skill as we once thought. Maybe we will not be buried under cores after all. Maybe we do not need to worry about chips with 32,768 cores. Maybe we can all relax and go back to updating our variables.
I think that would be a mistake. A big one. I think it would be as big a mistake as using goto. I think it would be as dangerous as abandoning dynamic dispatch.
Why? We can start with the reason we cared in the first place: FP makes concurrency much safer. If you are building a system with many threads or processes, using FP will greatly reduce the race-condition and concurrent-update problems you might otherwise face.
Why else? Well, FP is easier to write, easier to read, easier to test, and easier to understand. I can imagine some of you waving your arms and shouting at the screen right now. You tried FP and found it anything but easy. All those maps and reduces, and all that recursion, especially tail recursion, is anything but easy. Sure. I get it. But that is just a problem of familiarity. Once you become familiar with these concepts, and that familiarity does not take long to develop, programming gets much easier.
Why does it get easier? Because you do not have to keep track of the state of the system. The state of variables cannot change, so the state of the system cannot change either. And it is not just the system you do not have to track. You do not need to keep track of the state of a list, a set, a stack, or a queue, because these data structures cannot be changed. When you push an element onto a stack in an FP language, you get a new stack; you do not change the old one. This means the programmer has fewer balls to juggle in the air at once. Less to remember. Less to track. And that makes the code easier to write, read, understand, and test.
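Here is a sketch of that idea in Java: a persistent stack whose push operation returns a new stack and leaves the old one untouched. The class is my own illustration, not something from the article; real FP languages provide far more efficient persistent structures, but the principle is the same.

```java
// A persistent (immutable) stack: push returns a NEW stack and leaves the
// old one untouched, so any reference you already hold stays valid forever.
public class PersistentStack<T> {
    private final T head;                   // top element; null for the empty stack
    private final PersistentStack<T> tail;  // the rest of the stack

    private PersistentStack(T head, PersistentStack<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    public static <T> PersistentStack<T> empty() {
        return new PersistentStack<>(null, null);
    }

    public PersistentStack<T> push(T value) {
        // No mutation: the new stack simply points at the old one.
        return new PersistentStack<>(value, this);
    }

    public T peek() { return head; }

    public PersistentStack<T> pop() { return tail; }

    public static void main(String[] args) {
        PersistentStack<String> s1 = PersistentStack.<String>empty().push("a");
        PersistentStack<String> s2 = s1.push("b");
        System.out.println(s2.peek()); // b
        System.out.println(s1.peek()); // a  -- s1 was not changed by the push
    }
}
```

Because the old stack is never modified, it can be handed to another thread, cached, or kept for undo history without any defensive copying.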
So which language should you use? My favorite is Clojure. The reason is simple: Clojure is a dialect of Lisp, a beautifully simple language. Let me demonstrate.
Here is a function call in Java: `f(x);`. Now, to turn it into Lisp, we simply move the first parenthesis to the left: `(f x)`. Now you know 95% of Lisp, and 99% of Clojure. That silly little parenthesis syntax is virtually all the syntax these languages have. They are absurdly simple.
Now, I know, maybe you have seen Lisp programs before and did not like all those parentheses. And maybe you did not like `CAR`, `CDR`, `CADR`, and so on. Don't worry. Clojure has a bit more punctuation than Lisp, so it has fewer parentheses. And in Clojure, `CAR`, `CDR`, and `CADR` are replaced by `first`, `rest`, and `second`. What's more, Clojure runs on the JVM and gives you complete access to the entire Java library, and to any other Java framework or library you want. Interoperability is quick and easy. Better yet, Clojure has full access to the object-oriented features of the JVM.
"But wait!" I hear you say. "FP and OOP are mutually incompatible!" Who told you that? That's nonsense! Oh, it is true that in FP you cannot change the state of an object, but so what? Just as pushing a number onto a stack gives you a new stack, calling a method that sets a value on an object gives you a new object instead of changing the old one. This is very easy to deal with once you get used to it.
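To make that concrete, here is a small Java sketch of my own (the `Point` class and its `withX`/`withY` names are illustrative, not from the article): instead of a setter that mutates the object, each "set" returns a fresh object.

```java
// An immutable value object: "setting" a field produces a new object.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    // These play the role of setters, but return a new Point each time.
    public Point withX(int newX) { return new Point(newX, y); }
    public Point withY(int newY) { return new Point(x, newY); }

    public int x() { return x; }
    public int y() { return y; }

    public static void main(String[] args) {
        Point p1 = new Point(1, 2);
        Point p2 = p1.withX(10);                   // p1 is untouched
        System.out.println(p1.x() + " " + p2.x()); // 1 10
    }
}
```

This is exactly the pattern immutable objects follow in FP languages: the "modified" object is a new value, and anyone still holding the old one is unaffected.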
But back to OOP. One of the features of OOP that I find most useful at the software architecture level is dynamic polymorphism. And Clojure provides full access to the dynamic polymorphism of Java. Perhaps an example will explain this best.
```clojure
(defprotocol Gateway
  (get-internal-episodes [this])
  (get-public-episodes [this]))
```
The code above defines a polymorphic interface for the JVM. In Java, that interface would look like this:
```java
public interface Gateway {
    List<Episode> getInternalEpisodes();
    List<Episode> getPublicEpisodes();
}
```
At the JVM level, the bytecode produced is identical. Indeed, programs written in Java can implement this interface just as if it had been defined in Java. Likewise, a Clojure program can implement a Java interface. In Clojure, that looks like this:
```clojure
(deftype Gateway-imp [db]
  Gateway
  (get-internal-episodes [this]
    (internal-episodes db))
  (get-public-episodes [this]
    (public-episodes db)))
```
Notice the `db` constructor argument, and how all the methods can access it. In this case, the interface implementations simply delegate to local functions, passing `db` along.
Perhaps best of all, Lisp, and therefore Clojure, is (wait for it) homoiconic, which means that code is data the program can manipulate. This is easy to see. The code `(1 2 3)` represents a list of three integers. If the first element of a list is a function, as in `(f 2 3)`, then the list becomes a function call. Thus all function calls in Clojure are lists, and lists can be manipulated by code. So a program can construct and execute other programs.
The bottom line is this: functional programming is important. You should learn it. And if you are wondering which language to learn, I suggest Clojure.
Source: https://habr.com/ru/post/335878/