
Why are our high-level languages still not so high-level?

Please don't judge too harshly: this is my first article on Habr. Feel free to point out its shortcomings in the comments, and I promise to draw the appropriate conclusions.

So, my first acquaintance with programming languages took place at school, and it was Basic. Since then I hadn't touched programming at all. More than 20 years passed before I needed to learn one of the modern programming languages. C# fully satisfied all the criteria I was able to formulate at the time, so I rolled up my sleeves, began studying it, and got the chance to see how much programming languages had changed over those years.

Much was familiar, if not in form then in concept: variables, loops, branching statements... And some things were completely new: OOP with its classes (fields, methods, properties), inheritance, and so on.
Of course, a lot of it was pleasing. Classes let you delegate responsibilities: you, bring me "that"; you, do this; you, compute "this" and report the result. It is very much like our interaction with the world around us. For example, you go to the laundry, hand over the dirty bed linen, pick up the clean. You go to the tailor, drop off the trousers, collect the finished pair. You go to a diner, place an order, get breakfast. Very convenient and intuitive.
But back to the topic of the article. There are things I never expected to still see in programming languages after 20 years:

1. Indexing from 0.
It was a great surprise to me that indexing of arrays, lists, and all other collections starts at 0, not at 1. Why? This is inconvenient, I thought.

Later I was helping my younger brother learn the basics of C# and explained that here you have to count from 0, not from 1, and he asked me: "Why? This is inconvenient!"

Then, out of curiosity, I started telling friends and acquaintances that in modern programming languages (with rare exceptions) you have to count from 0, not from 1. And every one of them asked: "Why? This is inconvenient!" And I still don't know what to answer them.

If there was once some logic behind this, then now, when modern compilers perform a huge number of optimizations, it has lost all meaning. Applying the -1 correction would cost them nothing.
Instead, the programmer has to make that correction in his head. But weren't programming languages created precisely to take part of the work off us, especially the routine, constantly repeated part? So why are we still counting from 0?
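To make that mental -1 correction concrete, here is a tiny C# snippet (my own illustrative example, nothing more):

```csharp
using System;

class ZeroBasedIndexing
{
    static void Main()
    {
        string[] days = { "Mon", "Tue", "Wed", "Thu", "Fri" };

        // The "third" day, as a person counts it, lives at index 2:
        // the mental -1 correction discussed above.
        Console.WriteLine(days[3 - 1]);            // Wed

        // The last element is not days[days.Length] (that throws
        // IndexOutOfRangeException) but days[days.Length - 1].
        Console.WriteLine(days[days.Length - 1]);  // Fri
    }
}
```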

2. Numeric data types.

Another surprise was the sheer number of numeric data types. For some reason it seemed to me that there should long since have been just one numeric type; it is the 21st century, after all. If the programmer no longer has to worry about memory allocation because the garbage collector does everything for him, if collections resize dynamically, then why do we still not have a single numeric data type that, for example, dynamically grows the space it occupies in RAM without the programmer's involvement?

Firstly, it is simply inconvenient to constantly think about which numeric type to use in each particular case.
Secondly, you constantly have to take care of correct type conversions.
Thirdly, you have to watch out for overflow. Rockets have fallen because of exactly this kind of error; hello, Ariane 5. Is 7-8 billion dollars really an adequate price for a mistake of this sort? (A small illustration follows below.)
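As it happens, .NET already has a type close to what I am describing: System.Numerics.BigInteger grows its storage as the value grows. A minimal sketch contrasting it with a fixed-size int:

```csharp
using System;
using System.Numerics;

class OverflowDemo
{
    static void Main()
    {
        // A fixed-size type overflows silently (outside a checked context)
        // and wraps around.
        int a = int.MaxValue;                     // 2147483647
        Console.WriteLine(a + 1);                 // -2147483648

        // BigInteger grows as needed, so the same operation simply
        // produces a bigger number.
        BigInteger b = int.MaxValue;
        Console.WriteLine(b + 1);                 // 2147483648
        Console.WriteLine(BigInteger.Pow(b, 5));  // a 47-digit number, no overflow
    }
}
```

Of course, BigInteger covers only integers and has to be chosen explicitly; the question here is why something like it is not the default.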

3. Splitting numeric types into integer and fractional.

The third thing that surprised me was the division of numeric types into integer and floating-point.
Again, it seemed to me that progress had moved far ahead, and that if we operate on numbers, then they are just numbers, with a single data type for all of them.

There was an article about Smalltalk on Habr that described calculations without loss of precision in Smalltalk.
Calculations without loss of precision. The main problem in many languages is loss of precision during division. For example, 1/3 is 0.3333 with 3 repeating. Accordingly, in other languages a fixed number of decimal places is used, and precision is lost. In Smalltalk this problem was solved very elegantly: the result of such a division is an instance of the Fraction class, which holds the numerator (1) and the denominator (3). That is, no division actually takes place; we get a fraction and lose nothing. All subsequent operations follow the rules of fraction arithmetic. When a decimal number is needed, you simply send the object asFixedPoint: with the required number of decimal places and get the result. The number itself remains unchanged, and we can continue calculating without losing precision.
So why do we still not have a single data type for all numbers: one that consists of a numerator and a denominator and dynamically grows its footprint in memory when the current limit is reached? A rough sketch of such a type is given below.
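Just to make the idea concrete, here is a rough C# sketch of such a type. The Fraction struct below is my own hypothetical illustration, not an existing BCL class; it is built on System.Numerics.BigInteger so that both numerator and denominator can grow without bound:

```csharp
using System;
using System.Numerics;

// A hypothetical rational type: the numerator and denominator are stored
// as BigInteger, so neither range nor precision is ever lost.
readonly struct Fraction
{
    public BigInteger Num { get; }
    public BigInteger Den { get; }

    public Fraction(BigInteger num, BigInteger den)
    {
        if (den == 0) throw new DivideByZeroException();
        // Normalize: keep the sign in the numerator, reduce by the GCD.
        if (den < 0) { num = -num; den = -den; }
        var gcd = BigInteger.GreatestCommonDivisor(BigInteger.Abs(num), den);
        Num = num / gcd;
        Den = den / gcd;
    }

    public static Fraction operator +(Fraction a, Fraction b)
        => new Fraction(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

    // Analogue of Smalltalk's asFixedPoint: message: render with the
    // requested number of decimal places without changing the stored value.
    public string ToDecimalString(int places)
    {
        var scaled = Num * BigInteger.Pow(10, places) / Den;
        var digits = BigInteger.Abs(scaled).ToString().PadLeft(places + 1, '0');
        var sign = scaled < 0 ? "-" : "";
        return $"{sign}{digits.Substring(0, digits.Length - places)}.{digits.Substring(digits.Length - places)}";
    }

    public override string ToString() => $"{Num}/{Den}";
}

class Demo
{
    static void Main()
    {
        var oneThird = new Fraction(1, 3);
        var sum = oneThird + oneThird + oneThird;       // exactly 1, not 0.999...
        Console.WriteLine(sum);                         // 1/1
        Console.WriteLine(oneThird.ToDecimalString(4)); // 0.3333
    }
}
```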

Many will probably raise counterarguments such as lower performance of the finished program and greater demands on computer resources, but we are talking about high-level programming languages. As far as I understand, the greatest performance can be squeezed out of a program written in assembler, yet what percentage of today's programs are written in assembler? Performance is far from the most important thing in a modern programming language.

If a programming language is high-level, isn't it better to accept somewhat lower performance in exchange for incomparably greater convenience in working with it?

I invite everyone to discuss this topic. Perhaps there are things I simply do not understand or fail to take into account?

P.S. I do not know who SkidanovAlex is, but thank you, kind person, for inviting me to Habr.

Source: https://habr.com/ru/post/272229/
