Lately, on popular technical forums, I keep running into heated disputes between adherents and opponents of .NET. As a rule, these disputes start with a misunderstanding and end in hard trolling, off-topic rants, and comparisons of the radii and densities of various spherical horses in a vacuum. Both sides try to prove their point, but neither wants to look at the subject of the argument through the other's eyes. Habrahabr, alas, is no exception.
Religious fanatics would envy the passion of these conversations. The only thing that keeps the opponents, armed with pitchforks and LangSpecs, from launching crusades against each other is that they are separated by the Internet.
You can't live like that, gentlemen. I wanted to set things right and take one of the sides. With this post I will try to inflict some irreparable benefit on the community and deal with the myths that the debaters' energy unfortunately goes into — mostly in the form of mutual self-mutilation rather than actual discussion. And since I once moved from C++ to C# and everything around it, I will debunk the negative myths, add positive ones, and embellish reality in every way — how could I not. And, note, it will cost M$ absolutely nothing. I'll do it in a Q&A format.
# C# and the CLR are a VM, i.e. an interpreter, and therefore slow and sad. I need it to be fast, very fast!
I will not explain here how compilation differs from interpretation. I just want to note this, gentlemen: a recent survey on Habrahabr showed that most developers already use "managed" languages in one way or another — languages compiled not to native code but to bytecode executed by interpreters, either direct or compiling. TraceMonkey, LuaJIT and YARV are examples of the latter kind. So switching to another platform of a similar architecture will certainly not make your application slower. In that sense, there is nothing to worry about.
However, while the CLR is indeed a kind of virtual machine, it is not an interpreter. Let me repeat: MS.NET is NOT a BYTECODE INTERPRETER. A special JIT compiler gradually converts the program's bytecode into native code, roughly the same kind a C++ compiler produces. The current CLR implementations, MS.NET and Mono, guarantee that ANY code about to be executed is first converted to native code. On the desktop an even stronger statement holds: any given piece of code is compiled only once. Moreover, the fact that compilation happens "on the fly" theoretically allows the JIT to exploit the features of the specific processor and therefore optimize the code even further.
Moreover, a comparison of absolute numbers on benchmarks shows that the CLR turns out to be orders of magnitude more efficient than popular scripting languages like JavaScript and Ruby, which also use JIT technology.
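The "compiled only once" point is easy to observe for yourself. Below is a minimal sketch (an illustration, not a rigorous benchmark — the exact numbers depend on your machine): the first call to a method pays the one-time JIT cost, while every later call runs the native code that has already been generated.

```csharp
using System;
using System.Diagnostics;

class JitDemo
{
    // A deliberately non-trivial method, so the one-time JIT cost of its first call is visible.
    static long SumOfSquares(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += (long)i * i;
        return sum;
    }

    static void Main()
    {
        var first = Stopwatch.StartNew();
        SumOfSquares(1);                      // first call: the IL is JIT-compiled to native code here
        first.Stop();

        var second = Stopwatch.StartNew();
        SumOfSquares(1);                      // later calls reuse the native code already produced
        second.Stop();

        Console.WriteLine("First call (includes JIT): {0} ticks", first.ElapsedTicks);
        Console.WriteLine("Second call (native only): {0} ticks", second.ElapsedTicks);
    }
}
```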
# Languages with garbage collection are slower than languages like C++.
True, here you are very close to the truth. But, like any flame warrior, you leave out a little. The correct phrase would be: "a correctly written, completely hand-optimized, bug-free native application that uses special memory-management techniques will be faster than an application with automatic garbage collection".
But for any more or less serious piece of software, creating such an application takes an enormous amount of effort — significantly more than a managed language requires.
That is why high-level languages appeared in the first place: in the long run, on average, the code produced by the compiler will contain fewer errors and run faster than hand-written code.
And — yes — the dumb numbers do not lie: memory allocation in languages with garbage collection is FASTER and does _not_ fragment the heap, unlike C++. Exception handling in managed languages is also faster.
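To make the allocation claim concrete, here is a small sketch: a `new` on the managed heap normally just advances the generation-0 allocation pointer, with the real cost paid later at collection time. This is an illustration, not a fair head-to-head comparison with malloc/new in C++.

```csharp
using System;
using System.Diagnostics;

class AllocDemo
{
    class Node { public int Value; public Node Next; }

    static void Main()
    {
        const int count = 1000000;
        var sw = Stopwatch.StartNew();

        Node head = null;
        for (int i = 0; i < count; i++)
            head = new Node { Value = i, Next = head }; // each `new` is essentially a pointer bump in generation 0

        sw.Stop();
        Console.WriteLine("{0} small objects allocated in {1} ms", count, sw.ElapsedMilliseconds);
        GC.KeepAlive(head); // keep the list reachable so the allocations are not optimized away
    }
}
```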
And then there is the time factor, and the cost of development, including the number of bugs. Because memory corruption or a memory leak... hmm... when did I last see one of those in the CLR? Ten years ago, no less.
# CLR programs consume a lot of memory. They eat everything right before your eyes and leave nothing behind...
Hmm. A comparably loaded Ruby-on-Rails application on a server eats 100-150 MB of RAM — about the same as an ASP.NET site on the CLR. There is no big difference.
Of course, for small scripting tasks the same Ruby is much more efficient. But the question is not about scripting tasks: on real projects that actually earn money, the CLR's appetite looks proportionate to other technologies, and I cannot agree with the label "devours a lot".
# Okay, okay, GC is good. But the garbage collector is a very capricious animal with a huge number of settings. Nobody can set them correctly — manual intervention only does harm. The GC in my ZZZ just works! By itself!
By the way, the CLR has one of the best garbage collectors to date. Its first version was written in Lisp, to express the semantics of the relations between objects in memory more clearly and to analyze the correctness of the algorithm automatically, and was then rewritten in C++. A lot of time has passed since then, and the GC has been broken in by millions of developers and no fewer projects. It does not leak, no matter what you do!
As for settings, there is essentially one configuration-file key: gcServer="true/false". It enables parallel garbage collection along with other optimizations. By default it is false, so as not to interfere with interactive UI mode (the GC stays invisible to the UI) on uniprocessor machines. CLR 4.0 adds new settings, but the essence is the same: it works fine out of the box, so put your pliers away.
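For reference, this is roughly what that single switch looks like in an application's App.config (server GC makes sense on multi-core server boxes; for interactive desktop apps, leave it alone):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Opt in to the server flavour of the GC: a dedicated collection thread per core, larger heap segments. -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```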
# My favorite language ZZZ has an FFI, so I can write extensions for it in C if I need speed. I never actually have, but so what! I could! And what about the CLR / C# — do you have to rewrite everything in a managed language?
Very happy for ZZZ. You may be surprised, but the CLR can also call functions from native DLLs written in good old C — and, of course, pass data in and get it back. And, unlike most FFIs, you do not have to design the DLL for the FFI with special calling conventions and special data types: the CLR is omnivorous and can be flexibly configured to digest almost any library. There is also separate, automated support for COM, for more convenient access to Windows features. This mechanism is called Interop / Platform Invoke.
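As a minimal illustration (Windows-only, since it calls user32.dll), here is the classic P/Invoke sketch: declare the native signature, and the runtime takes care of loading the DLL and marshalling the arguments:

```csharp
using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Declaration of a native entry point; the CLR loads user32.dll and marshals the strings for us.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from managed code!", "P/Invoke", 0);
    }
}
```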
# Okay, so I can write a lot of things in C. But I am not going to write everything myself! .NET does not have the libraries I need, and to work with a database you have to buy MSSQL for a hundred thousand million money!
You do not need to write everything. .NET has a great standard library called the BCL (Base Class Library). It contains a lot of what you need: files, sockets and networking, HTTP and the web, regular expressions, SQL and data manipulation, XML and web services, and so on.
If you need something that is not in the BCL, such a library most likely already exists. Or you can use a native one — that is how the wrappers for OpenGL and OpenAL, bass.dll (sound) and many other things were made.
Providers for MySQL, Oracle, SQLite and PostgreSQL have been written for .NET; they are stable and work fine. And not only SQL: there is MongoDB and there are object databases, there are clients for Memcache and RabbitMQ. There are native ServiceBus and MessageQueue implementations, and an API to an existing system is very simple to write.
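To give a feel for how much the BCL covers out of the box, here is a small sketch that touches HTTP, regular expressions and file I/O using nothing but standard classes (the URL is just a placeholder):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

class BclDemo
{
    static void Main()
    {
        // Networking/HTTP: fetch a page with the built-in WebClient.
        string html = new WebClient().DownloadString("http://example.com/");

        // Regular expressions: pull the <title> out of the markup.
        Match title = Regex.Match(html, @"<title>(.*?)</title>", RegexOptions.IgnoreCase);
        Console.WriteLine("Title: {0}", title.Groups[1].Value);

        // File I/O: save the page to disk.
        File.WriteAllText("page.html", html);
    }
}
```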
# You can only write for the CLR from Visual Studio, and only under Windows. Both of which, again, cost money.
Not true. There is SharpDevelop, which is quite good for a free environment; there is MonoDevelop, which is also good and works both on Windows and on *nix. There are plugins for Eclipse; by the way, with IKVM.NET you do not even need Java to run Eclipse — the CLR alone is enough.
The lightweight Visual Studio Express lets you create full-fledged applications on Windows. The free MS SQL Express will last most projects a long time.
There are tools for debugging, profiling and setting up Continuous Integration, themselves written in .NET. There are make/ant-like tools: NAnt, msbuild.
Download & install!
# The CLR's place is on the server. And Mono is a terrible, unreliable muck that has not grown out of the diapers of Miguel de Icaza's Labs ©.
Sure. On the application server and on the web server the CLR has its own Rails (ASP.NET MVC), its own Hibernate and dozens of other ORMs. It is suitable for everything. And is that really so scary? We are all gradually creeping onto the web anyway.
On the other hand, the creators of Unity3D do not agree with you. It is a player that hosts the CLR environment right in your browser, and its scripts are written in .NET languages. Very fast, very pretty — and 3D, right now. No need to wait for a Flash Player with GPU support.
By the way, have you heard that Mono applications compile for both the iPhone and the iPad (MonoTouch)? And the same Unity3D can do it too.
# Using the CLR forces me to switch to C#, and I don't want to learn it!
And you absolutely don't have to. Yes, C# exposes the CLR's features most fully, but nobody forces you to use it. The CLR is not just C# — it is a great platform plus the BCL, providing a quality object model and tools. A huge number of languages use the CLR as a back-end: new ones such as Boo, Nemerle and F#, and previously known ones like Delphi, Ada, Lisp, VB and PHP.
From this point of view, the CLR is similar to LLVM: it provides low-level services such as IL (bytecode) and the JIT, garbage collection, an object model, a common type system, a standard library, a security system, and so on.
# C# is a plebeian language for the enterprise, it is stuck in the last century, while my language ZZZ gets new features every six months!
Yes, C# is now firmly established in the low-cost enterprise sector — all thanks to its characteristics: it is simple enough to write in, static typing and the managed environment eliminate a whole class of errors inherent to scripting languages and lower-level languages, and the IDE gives access to all the necessary tools in a couple of clicks, with documentation and first-class IntelliSense built right in.
Because of this, C#/CLR solutions are not as expensive as Java ones.
C# respects the principle of backward compatibility, but that does not prevent new features from being added to the language. It already has parametric polymorphism (that is, things like Vector<T>), lambda functions and closures; LINQ (Language-Integrated Query), built on a limited form of quotation, appeared here before any other language; there is type inference, and a whole DLR layer has been added. Version 5.0 brings native support for asynchronous programming.
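A few of those features in one small sketch — generics, type inference, lambdas and a LINQ query over an in-memory list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FeaturesDemo
{
    static void Main()
    {
        // Parametric polymorphism: a strongly typed List<T> instead of an untyped collection.
        List<int> numbers = new List<int> { 5, 3, 8, 1, 9, 4 };

        // Type inference (var), lambdas as predicates/selectors, and LINQ tying it all together.
        var squaresOfBigOnes = numbers
            .Where(n => n > 4)
            .OrderBy(n => n)
            .Select(n => n * n);

        foreach (var square in squaresOfBigOnes)
            Console.WriteLine(square);   // prints 25, 64, 81
    }
}
```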
The CLR and C# are not as bad as you think, and they are worth a look. And if C# is not enough for you, there is F# (a Caml descendant) and Nemerle (a hybrid of C# and a functional language); there is even C++ for the CLR — take what you need from both worlds and combine.
# Hold on, hello, I just remembered something. What kind of cross-platform talk is this, when to run under Mono I have to recompile everything? That's the same as good old C — so what's the improvement?
More nonsense. I do not know who told you that, but a fully managed CLR application compiled under Windows DOES NOT NEED TO BE RECOMPILED. Just copy it to a Linux machine with Mono installed, run mono myapp.exe, and it starts. It works the other way around too. I have checked.
True, linking with libraries comes into play here. It is like Ruby gems: if a particular gem uses native libraries, then you have to install those native libraries. But then again, there are plenty of pure Ruby gems — and likewise plenty of purely managed libraries.
No magic involved, in other words.
# .NET applications use the registry. So it's that headache with version management and installing and removing programs all over again?
No. All managed .NET applications can be distributed with the deploy-by-copy model: copy the application to the right folder and launch it from there. They do not touch the registry and do not look into the system folders.
If you want to use a shared managed library, a special mechanism called the GAC (Global Assembly Cache) uses cryptography to make sure there is no duplication and that the library you reference is exactly the version you expect.
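To make that concrete: a shared assembly is bound by its full strong name, so the loader will not silently substitute a different version. A small sketch — System.Xml really does live in the GAC, although the exact version and token below are the .NET 2.0-era ones and depend on the framework installed:

```csharp
using System;
using System.Reflection;

class GacDemo
{
    static void Main()
    {
        // The full name includes the exact version, culture and the public key token
        // derived from the publisher's signing key -- the GAC resolves by all of them.
        Assembly xml = Assembly.Load(
            "System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");

        // The path points into the Global Assembly Cache, not into the application folder.
        Console.WriteLine(xml.Location);
    }
}
```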
# But my programmer friends... They said that a mandatory Rectal Vibrating Probe and instructions for wearing it continuously are included with the Visual Studio IDE and the C# developer kit!
OMFG O_o! I declare with full responsibility: your programmer friends have been deceived. I would advise you to go and help them give up wearing that home-made probe, but I am afraid they have already acquired a taste for it and will not be able to... In any case, MS and the CLR have nothing to do with it, right?
# Conclusions
Of course, you can argue with me. I would be glad if someone corrects me, adds something, or maybe even refutes me.
In general, I have told you how things are, which, I hope, has done my colleagues in the CLR trade a fair bit of good. I hope there will now be fewer silly questions like "why C#, if there is Python, and it has a GC too".
Whether to go in this direction or not is your choice. Nothing prevents you from combining. I write .NET for food and Ruby for the soul.