We recently published a CMS rating as part of the Runet Rating project. Although it was one of the first ratings we started, it was the last to be completed. This post is about how the rating methodology evolved along the way.
Start - like everyone else
At the start (December 2009), the rating methodology largely repeated similar ratings for web studios and SEO companies:
all implementations were counted, their TIC and PR values were summed for each system, and places were assigned according to each system's share of the combined total (see the sketch after this list);
the general rating displayed all systems in a single list (commercial, Open Source, "studio").
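To make the original scheme concrete, here is a minimal sketch of that scoring logic. The field names and sample figures are hypothetical, used only to illustrate the "share of the combined TIC + PR total" idea, not the project's actual code or data.

    from collections import defaultdict

    # Each implementation: the CMS it runs on, plus the site's TIC and PR values.
    implementations = [
        {"cms": "CMS A", "tic": 150, "pr": 4},
        {"cms": "CMS A", "tic": 50,  "pr": 3},
        {"cms": "CMS B", "tic": 300, "pr": 5},
    ]

    # Sum TIC + PR over every implementation of each system.
    totals = defaultdict(int)
    for impl in implementations:
        totals[impl["cms"]] += impl["tic"] + impl["pr"]

    # A system's place is determined by its share of the grand total.
    grand_total = sum(totals.values())
    shares = {cms: score / grand_total for cms, score in totals.items()}

    ranked = sorted(shares.items(), key=lambda item: -item[1])
    for place, (cms, share) in enumerate(ranked, start=1):
        print(f"{place}. {cms}: {share:.1%}")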
Problems
And they turned out to be quite serious:
While developers of commercial systems can provide a database of implementations (at the very least, they know about the licensed ones :-)), Open Source is a big problem: no Open Source project maintains such a list.
The single list. Commercial CMS, Open Source, and "studio" systems are not just different, they are very different, and their implementation bases are hard to compare with one another. Although the initial version had local ratings for each of these types, the main rating is clearly what matters most.
Since some types of sites (online stores, for example) place specific requirements on the platform, ratings by specialization are important. Here the source data was a serious problem: almost all developers of the commercial CMS in question give us data on a fair share of their implementations, but in most cases that data says nothing about the type of project.
Solution one - ideological
To solve the first and third problems, we had to introduce a serious restriction: only implementations built and confirmed by web studios are counted:
studios have every incentive to publish the fullest possible portfolio on our projects, so there is no need to hunt down every implementation through the system's developer;
when adding their work, studios are required to specify the site type (and other parameters).
Solution two - visual
We abandoned a single unified rating of all CMS. The main rating and the local ratings (by site type, by web server) each consist of three sub-ratings. As a result, a site owner who has made the basic choice (buy a commercial "box", save money with Open Source, or trust a studio with its in-house system) can see which solutions are most popular among professionals in each category; a grouping sketch follows below.
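The split into sub-ratings is essentially a grouping step applied before ranking. The sketch below shows one way to express it; the categories, CMS names, and the use of a simple implementation count are illustrative assumptions, not the rating's exact formula.

    from collections import defaultdict

    implementations = [
        {"cms": "CMS A", "category": "commercial",  "site_type": "online store"},
        {"cms": "CMS B", "category": "open source", "site_type": "online store"},
        {"cms": "CMS C", "category": "studio",      "site_type": "corporate"},
    ]

    # Group implementations by CMS category; each category is ranked separately,
    # and the same split is applied inside every local rating (e.g. per site type).
    sub_ratings = defaultdict(lambda: defaultdict(int))
    for impl in implementations:
        sub_ratings[impl["category"]][impl["cms"]] += 1

    for category, counts in sub_ratings.items():
        ranked = sorted(counts.items(), key=lambda item: -item[1])
        print(category, ranked)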
What is the bottom line
A rating that reflects the popularity of CMS among professional site developers, with self-contained sub-ratings. The decisions we made brought their own problems, of course, but they put the rating on solid ground.