What finally pushed me to write this post was an old discussion here, on a topic that occasionally pops up here and there.
Many times I have had occasion to notice that not everyone understands in the same way what the declarativeness versus the procedural nature of this or that build system actually means. The main advantage of a build tool is often considered to be the ability to write build logic in a convenient language. You need a DSL, there is no getting anywhere without one.
In Gradle this DSL is based on Groovy; sbt uses Scala, Leiningen uses Clojure, and Ant uses XML (now completely out of the game, by the way). It is easy to recall build systems based on JavaScript as well. Everyone builds in the language closest to them, and new tools are being written, for example, in Kotlin.
Alas, the presence of a DSL does not mean declarativeness. Quite the contrary. As a result, a Gradle project cannot live without pom.xml at all. This is a joke, but with a lot of truth in it. Look here: this is a repository.
What do we have here? Yes, yes, it is pom.xml, there it is. And where is build.gradle? Nowhere to be seen. Do you understand?
It is simply not there.
That is, all the metadata published about the compiled project, which, let us note, is built with Gradle, consists solely and exclusively of Maven metadata.
Does this surprise you? Not me in the slightest. In my opinion, Gradle, sbt, Leiningen, and many other tools provide almost no metadata (or none at all) in a form accessible to other products.
Their metadata is really visible only to themselves, and only from within the build process, simply because the build script is Groovy (or Clojure, Scala, JavaScript) code. That is why they are so poorly supported by IDEs (compared to Maven or Ant, where the script is XML): to understand what is going on there, you have to execute the script. You need a Gradle instance running inside, and you need Gradle to hand the necessary information over to the IDE.
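This is exactly what the Gradle Tooling API does: the only reliable way for an IDE (or any other tool) to learn something about a Gradle project is to start Gradle and ask it. A minimal sketch in Java, where the project directory path is made up for illustration:

```java
import java.io.File;

import org.gradle.tooling.GradleConnector;
import org.gradle.tooling.ProjectConnection;
import org.gradle.tooling.model.GradleProject;

public class GradleMetadataProbe {
    public static void main(String[] args) {
        // To learn anything about the project we have to spin up Gradle itself:
        // the Tooling API starts (or connects to) a Gradle daemon under the hood.
        ProjectConnection connection = GradleConnector.newConnector()
                .forProjectDirectory(new File("/path/to/some/gradle/project")) // hypothetical path
                .connect();
        try {
            // The model only appears after Gradle runs its configuration phase,
            // i.e. after the Groovy/Kotlin build script has been executed.
            GradleProject project = connection.getModel(GradleProject.class);
            System.out.println("name: " + project.getName());
            project.getTasks().forEach(task -> System.out.println("task: " + task.getName()));
        } finally {
            connection.close();
        }
    }
}
```

Note that nothing here reads a file: the metadata exists only as the result of running the build logic.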
How do I see declarativeness myself, and why is it needed? To illustrate, I will give two of my own projects and one from Apache:
- Let's start with Apache Karaf. In the ssh console there is a command for installing a module (an OSGi bundle). This command needs nothing more than the module's coordinates in a Maven repository: bundle:install mvn:groupId/artifactId/version. At the same time, Karaf does not contain Maven itself; under the hood there is nothing more than Aether, plus a wrapper over it (Pax URL) — see the sketch after this list.
- I implemented a similar construction in Jython a long time ago, and it ran inside WebLogic. The same Aether was used, plus the WLST API. All this made it possible to automatically update the modules installed in the JavaEE container by looking up their new versions in the repository.
Again, Maven itself was not part of the design; only the repository, Aether, and the pom.xml that Maven packs into the module's META-INF were used.
- And the last example. In my practice there was a Maven plugin that used SVN as its input data: it scanned folders and files for changes in projects, compared them with the issue tracker, and decided which artifacts should be included in the release and deployed.
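What the first two examples rely on boils down to a single call into Aether: hand it coordinates, get back a file from the repository. A rough sketch with the Eclipse Aether (maven-resolver) API, using the classic service-locator wiring and coordinates and a repository URL chosen purely for illustration:

```java
import java.util.Collections;

import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.artifact.DefaultArtifact;
import org.eclipse.aether.connector.basic.BasicRepositoryConnectorFactory;
import org.eclipse.aether.impl.DefaultServiceLocator;
import org.eclipse.aether.repository.LocalRepository;
import org.eclipse.aether.repository.RemoteRepository;
import org.eclipse.aether.resolution.ArtifactRequest;
import org.eclipse.aether.resolution.ArtifactResult;
import org.eclipse.aether.spi.connector.RepositoryConnectorFactory;
import org.eclipse.aether.spi.connector.transport.TransporterFactory;
import org.eclipse.aether.transport.http.HttpTransporterFactory;

public class ResolveFromRepo {
    public static void main(String[] args) throws Exception {
        // Wire up Aether itself: no Maven installation is involved.
        DefaultServiceLocator locator = MavenRepositorySystemUtils.newServiceLocator();
        locator.addService(RepositoryConnectorFactory.class, BasicRepositoryConnectorFactory.class);
        locator.addService(TransporterFactory.class, HttpTransporterFactory.class);
        RepositorySystem system = locator.getService(RepositorySystem.class);

        DefaultRepositorySystemSession session = MavenRepositorySystemUtils.newSession();
        session.setLocalRepositoryManager(
                system.newLocalRepositoryManager(session, new LocalRepository("target/local-repo")));

        // The only input is the artifact's coordinates, exactly as in Karaf's
        // bundle:install mvn:groupId/artifactId/version.
        ArtifactRequest request = new ArtifactRequest();
        request.setArtifact(new DefaultArtifact("org.example:some-module:1.0.0")); // hypothetical coordinates
        request.setRepositories(Collections.singletonList(
                new RemoteRepository.Builder("central", "default",
                        "https://repo.maven.apache.org/maven2/").build()));

        ArtifactResult result = system.resolveArtifact(session, request);
        System.out.println("resolved to " + result.getArtifact().getFile());
    }
}
```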
What conclusions can be drawn from this?
First, we see that the project description (in my case it was always pom.xml) sometimes needs to be, and can be, worked with from any sufficiently mature language. It does not have to be the build tool the project was originally designed for.
Second, if the repository is designed correctly, in accordance with the principles of REST, then we need no special software to work with it beyond an HTTP server. All that is required is metadata one level above the project (a list of available versions, for example).
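For instance, the list of available versions of an artifact is just a maven-metadata.xml sitting at a path derived from the coordinates, so plain HTTP is enough to get at it. A minimal sketch with the Java 11+ HttpClient, using Maven Central and maven-core as an example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ListVersions {
    public static void main(String[] args) throws Exception {
        // The path is built from the coordinates alone: groupId/artifactId/maven-metadata.xml
        String url = "https://repo.maven.apache.org/maven2/"
                + "org/apache/maven/maven-core/maven-metadata.xml";

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Crude extraction of <version> entries; a real tool would parse the XML properly.
        Matcher m = Pattern.compile("<version>([^<]+)</version>").matcher(response.body());
        while (m.find()) {
            System.out.println(m.group(1));
        }
    }
}
```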
Third, it is very convenient that an important part of the infrastructure, in our case the plugins, are themselves just ordinary artifacts: they live in the same repository and are not tied to any specific project at all.
This is what I call the declarative approach. We have projects, lots of them. They live in the repository as built artifacts, in the VCS, and elsewhere, and can be processed by different tools rather than just one. A project descriptor is just a file in some standard format that is convenient to process. The artifact repository is likewise a standardized layout plus a simple REST API. And the build logic, in whatever DSL suits you, lives separately: in the descriptor itself if you like, or in the repository.
How would I do it all? In essence, if you take Maven as a basis, what is lacking today is flexibility with respect to plugins, their settings, and so on, because the existing XML is a serialized representation of the Java data models of the plugins and the core, and it is not flexible. If the developers decided that you do not need some piece of data about the project, all you are left with is key-value pairs in the form of properties.
What is needed instead is something arbitrary but with a regular structure, say something like RDF (not necessarily RDF itself).
A project description in the form of RDF immediately allows useful things like searching it as if it were a database (i.e., at the metadata level the repository simply becomes a SPARQL endpoint and can answer search queries). And the same kind of queries can be run against the project itself.
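A sketch of what that might look like, using Apache Jena; the file name "project.ttl" and the build: vocabulary are entirely made up for illustration, since no such standard descriptor exists. The point is only that a generic query language, not a particular build tool, does the searching:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class QueryProjectDescriptor {
    public static void main(String[] args) {
        // Load an RDF project descriptor (hypothetical format and vocabulary).
        Model model = ModelFactory.createDefaultModel();
        model.read("project.ttl", "TURTLE");

        String sparql =
                "PREFIX build: <http://example.org/build#> " +
                "SELECT ?artifact ?version WHERE { " +
                "  ?module build:artifactId ?artifact ; " +
                "          build:version ?version . " +
                "}";

        // The very same query could be sent to a repository exposing a SPARQL endpoint.
        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("artifact") + " : " + row.get("version"));
            }
        }
    }
}
```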
That would be the true polyglot Maven. And this, by the way, looks quite feasible, even within the existing infrastructure.