Modern search engines are at the top of their field. Their sites lead the global network in traffic, and the companies' capitalization is growing by leaps and bounds. Who better than they to know which site is most relevant to your query, after analyzing its TIC (Yandex's thematic citation index), titles, and other meta tags? But it seems we have missed the forest for the trees. As a user, I couldn't care less what TICs or "shmics" a resource has. I want the search results to show me what I am actually looking for.
However, finding anything worthwhile has lately become harder and harder. Why? Because my notion of "relevance" is at odds with the search engines'. In their view, the most relevant sites are the ones their owners have optimized in advance. Webmasters are handed a pile of conditions and restrictions on how a site should look. After all, search engines hold powerful leverage: on the Runet, an average of 30% of traffic comes from search, and for some projects it reaches 90%.
Suppose we set out to build a cool young startup and ... immediately run into the very first requirement: the site must not be young! Who decided that the older a site is, the better? In everyday life it is exactly the opposite: the newer the phone, laptop, or car, the better it is. That is the principle of progress.
And the list goes on:
- The promoted page must carry SEO text stuffed with keywords. Never mind that nobody needs this text and only the search robot reads it.
- Internal linking must be structured in a particular way, and even the placement of links on the page is dictated by optimization. Hidden links in drop-down menus, for example, are off-limits: use them and you risk a ban.
- Photos and videos are not indexed, so you might as well skip them entirely.
- Flash? God forbid. In general: less imagination, more pragmatism.
In short, if we build an interesting, modern website and then start optimizing it step by step, we end up with a nineties-style eyesore stuffed with useless text and pointless links. It seems that instead of being a catalyst for the Internet's development, search engines have become its brake. Like a tail wagging the dog, they dictate their terms to sites, and the sites, like chameleons turning green against the foliage, morph into dull, awkward, eye-straining collections of web pages.
Is there a way out of this impasse? Of course. It lies in moving away from the traditional search algorithms that Google built in the early 2000s on two pillars: "text" and "links". In the early days of the Internet a website was essentially text mixed with links, but today it has clearly evolved. Competent developers have learned to build sites "for people" and "for robots" rolled into one. The real breakthrough, though, will be a complete departure from text-based algorithms in favor of fundamentally different mechanisms. One example is collaborative filtering: for my query, I get the sites visited by users who tend to visit the same sites I do, about as often as I do. There are a number of other behavioral signals that SEO optimizers have not yet gotten their hands on, though perhaps some are already in use. In any case, I have the persistent feeling that Google is already gathering collaborative data through Chrome, but is putting it to use rather half-heartedly. This Goliath will eventually meet his David, who will topple him like a lumbering monster. We startup founders will watch that battle with interest, because either way we come out the winners.
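To make the collaborative-filtering idea concrete, here is a minimal sketch of the user-based variant described above. This is not Google's actual algorithm (which is not public); the visit log, site names, and functions are all hypothetical, invented purely for illustration.

```python
# Minimal sketch of user-based collaborative filtering for site ranking.
# All data and names below are hypothetical.
from collections import defaultdict
from math import sqrt

# Hypothetical visit log: user -> {site: visit count}
visits = {
    "me":    {"habr.com": 10, "arxiv.org": 5},
    "alice": {"habr.com": 8,  "arxiv.org": 6, "lobste.rs": 4},
    "bob":   {"news.example": 9, "cats.example": 7},
}

def cosine(a, b):
    """Cosine similarity between two sparse visit vectors."""
    common = set(a) & set(b)
    dot = sum(a[s] * b[s] for s in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(user, visits, top_n=3):
    """Rank sites the user has not seen, weighted by similar users' visits."""
    scores = defaultdict(float)
    for other, log in visits.items():
        if other == user:
            continue
        sim = cosine(visits[user], log)
        if sim <= 0:
            continue  # ignore users with no overlapping tastes
        for site, count in log.items():
            if site not in visits[user]:
                scores[site] += sim * count
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(recommend("me", visits))  # lobste.rs surfaces via alice, a similar user
```

The point of the sketch: "bob" shares no sites with "me", so his visits carry no weight, while "alice" overlaps heavily, so the sites she visits that I have not seen rise to the top. Relevance here comes from the behavior of similar users, not from keywords or meta tags on the page.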