Continuing the topics raised in recent comment threads here on SHCHERBAK.NET!
Dear readers, I will be honest with you: all these query languages (SPARQL, SPARUL, and the rest) frankly depress me. I decided to work with them only to improve my current project's compatibility with other semantic web applications, and for that I need a SPARQL access point.
Besides, I have achieved almost linear growth in complexity for my ontology-analysis algorithm, precisely because I control that process. SPARQL and its kin, by contrast, amount to logical computation, whose complexity can grow exponentially. It is very resource-intensive, and in most tasks it is simply not needed. Why, for example, should users push site statistics into a triple store (even via mapping), when a relational database handles them better? "But then you can build a smart query on top of it!" So what. There are dedicated analytical tools that will still do it better.
The Semantic Web is good because it supports distributed and heterogeneous information sources. And again, not necessarily through a wholesale transition to RDF or OWL.
It can also work through the dynamic component of the Semantic Web: semantic web services. Their task here is to support access via SPARQL, but that is the minimum that has to be supported. It is by no means necessary that the information source (the repository) itself be in RDF or OWL. Do you need to be able to map into OWL? Of course, yes. But for which tasks does that mapping need to be supported? Only so that the system can respond adequately to external requests whose content is not known in advance, and nothing more!
Why bother with a recursive query, for example through Jena, to select the instances of a class, when you, as the system's developer, can write the query in plain SQL and have it run several times (maybe several dozen times) faster? You know the data schema! Granted, an external user or agent does not know the schema (which is why we provide an access point), but I repeat: you know it. So why run logical inference? It is not efficient. Or can you not just select directly? Inference should be performed where it is genuinely needed.
From my observations, even the developers of modern triple stores are trying to implement logical inference whose speed is comparable to relational computation, yet again for tasks where, globally, that inference is not needed at all. And when real inference is required, comparable in complexity to analytical queries, everyone just shrugs: well, that is computationally hard.
I just laughed when they told me about building queries "by hand" for ontologies with more than a thousand relations. What kind of mind would you need to construct an inference that accounts for the meaning of even half of those links? Of course, you will say, not everyone uses ontologies with a thousand relations. Do you think it will be much easier with a hundred? If so, you are a grandmaster of situation analysis (chess players can rest and smoke nervously on the sidelines).

Now, when we talk about agents: an agent should be able to examine its environment in order to "understand" it. Where am I? What is this for? It has to figure out how to perform some action on the source it is inspecting. This is where SPARQL starts to help you. There is just one problem: the logic of the action you have to explain to the agent yourself, so that it understands and acts on it. And that is pure programming, the programming of actions. It is no longer a mere description of facts and objects; it is something more complicated. Believe me, this is not nearly as simple a matter as it may seem to many! This is where the terrible words "Artificial Intelligence" begin.
P.S. I understand, of course: nowadays everyone who is not too lazy has started writing SW applications. It is a boom, and all that. Still, you need to think about where SW technology should be applied, and where it should not.