A few words in response to the topic of Microsoft's failures: I decided to translate the following post, which I find quite interesting and deep — "Is Windows RT the ultimate example of using Telemetry?" — about the reasons why Windows 8 and Windows RT turned out the way they did. The author, Hal Berenson, is a former Microsoft employee who held a fairly senior position there and is now an independent consultant. He maintains a very good blog, which I recommend to anyone who follows Microsoft.

Many are puzzled: why did Windows RT turn out the way it did? Especially given the absence of support for third-party desktop applications. I believe that Windows RT is the result of heavy reliance on telemetry in decision-making. Those who actively dislike Microsoft's new product can of course recall the adage that there are "lies, damned lies, and statistics". But, on the other hand, reliance on statistical analysis may explain why ordinary users seem to react to Windows 8 and Windows RT much better than experts and power users do. It is difficult to evaluate something positively when you are outside the target audience for which the product was designed.
Every decision rests on a variety of inputs, the most important of which is data. There is never enough data to decide (hence the saying that management is the art of making decisions based on insufficient data). Some might think that with excellent data, decisions become easy, because the answer is obvious. But usually you have to combine data from different sources, and the more sources there are, the more potential errors can creep into the analysis. Often these sources are themselves the result of analyzing primary data that had its own errors. In the end, you can arrive at decisions completely divorced from reality (New Coke, for example).
But what if you have near-perfect data? What if, instead of small samples gathered with crude ad-hoc collection techniques, you have a huge sample that can be said to represent reality with near-perfect accuracy? Could this lead to better decisions? Especially when those decisions are highly complex and high-risk? It seems the big Windows 8 experiment will eventually answer this question.
When designing Windows Phone 7, the development team studied the 100 most popular iPhone applications to make sure the platform could support all the necessary features. That decision was grounded in a study of usage scenarios, but this is not direct data. The Windows 8 team, in addition, had access to a huge array of data collected through the CEIP program from users who voluntarily opted in (or did not opt out when using the beta versions, where it is enabled by default; translator's note).
Anyone following the Building Windows 8 blog may have noticed how seriously Microsoft took this data when designing the new Windows 8 interface. Removing the Start button (to push desktop users into the new Start screen) is just one example of acting on telemetry about typical usage patterns. Of course, if you are in the minority whose usage pattern differs sharply from the standard one, you will not be happy with what Microsoft did. And even the fact that Microsoft has data showing how people actually use its product does not guarantee that the resulting decision will be correct. Data cannot tell you, for example, whether users want this particular way of switching between the old and new interfaces.
But back to Windows RT and how telemetry could have been used to make the key decisions. Recall netbooks. Five years ago came the realization that computer usage was moving to the web, and that users needed an inexpensive dedicated device for it. Such a device would not be a full-fledged PC replacement for most users; it was seen as a companion to the main computer. Initially, netbooks were not intended to run existing Windows applications, so manufacturers focused on Linux as their primary OS. The rapid growth of this market led Microsoft to pay attention and to offer an inexpensive version of Windows XP first, and then Windows 7 Starter. Despite the extra 15% in cost (partly the license, partly the need for more powerful hardware than Linux required), Windows netbooks instantly captured 90% of the market.
Netbook sales took off and became quite a noticeable niche in the PC market, and then immediately received a triple blow. In 2009, the App Store launched and grew quickly, providing an alternative to moving every application to the web (and the iPhone had quite a decent browser besides). Then demand for netbooks shifted toward light, thin, full-fledged 11-inch laptops. And finally, Apple introduced the iPad, which was a much better alternative to a netbook for using the web, while also being able to run applications from the App Store. Netbooks all but disappeared.
Development of Windows 8 began even before the iPad was announced. Even later, at the launch of the iPad 2, many analysts saw tablets as nothing more than netbook replacements. And until recently, tablets affected PC sales only by shifting demand away from netbooks. So, in terms of the data available for design decisions about Windows 8 and Windows RT at the end of 2009 and the beginning of 2010, telemetry from netbooks via CEIP is what Microsoft had at its disposal.
Now let's focus purely on Windows RT. By 2009 it was already clear that a huge ecosystem was growing around ARM processors. Microsoft had been watching, and had even been working on porting Windows to ARM, since the beginning of the decade. The decision to port to ARM was the easy part. The decision about what to do with the port next came through telemetry analysis. And let me be clear: I have no inside information that it actually happened this way, but I am ready to bet money that it is not far from the truth.
Since the Windows Phone team handled phones, and ARM clearly could not handle the heavy workloads that full-featured laptops and desktops are used for, it was clear that Windows on ARM should target the class of devices between the two. At that time there was only one such device: the netbook. Again, the iPad did not yet exist, and user reaction to it was unknown. But the general characteristics of a tablet, by analogy with the iPhone, could already be clearly foreseen.
So it was obvious that Windows on ARM should target the netbook niche: netbooks, netbooks with touchscreen support, and tablets with netbook-like characteristics. But how, without knowing what happened later (the disappearance of netbooks and the rise of tablets), could Microsoft make decisions? Telemetry.
Why did 90%+ of users choose to pay more for a netbook with Windows instead of Linux? If the device is used only for browsing the web, this behavior makes no sense. Of course, we can only speculate: UI familiarity, compatibility with a large number of printers, the ability to run regular Windows applications (even though this contradicted the original idea behind netbooks), and so on. As I said, we can only guess. Analysts can interview users and draw their own conclusions. But Microsoft? Microsoft has precise data from CEIP.
Microsoft could take the data and see how often users print, and which printers they use. How often they use USB, and what they do with it. How often a netbook is used with external monitors, keyboards, and mice. How often they use Wi-Fi, wired Ethernet, and 3G. Microsoft could see how often users run a browser, and which types of sites they visit. Which other applications they launch, and how much time they spend in them.
So what do you think Microsoft actually saw in the telemetry? My guess is they saw that netbooks are used mostly for the web. Then comes a small but noticeable share of Microsoft Office use. And then a complete drop-off in the use of any other applications. Netbooks are basically a device for the web and office applications. Then they looked at site-visit statistics and saw that most visits fell under the notion of content-consumption applications that had become so popular on the iPhone, which is why the new Metro application model had to be developed. They also saw massive use of standard Windows functionality: support for various peripherals, networking, and so on. Now combine this netbook-usage data with the rise of touch and the application boom in the App Store, and you get Windows RT.
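The kind of analysis described above can be sketched in miniature. The snippet below is purely illustrative: the record format, application names, and numbers are all invented for this example, and real CEIP data is of course far richer and not publicly documented. It only shows the general shape of the question "what share of foreground time goes to each application class?":

```python
from collections import Counter

# Hypothetical, simplified telemetry records: (application category,
# minutes of foreground use in one session). All values are invented
# for illustration -- this is not real CEIP data or its format.
sessions = [
    ("browser", 310), ("browser", 270), ("office", 55),
    ("office", 40), ("photo_editor", 5), ("media_player", 10),
]

def usage_share(records):
    """Return each application's share of total foreground time."""
    totals = Counter()
    for app, minutes in records:
        totals[app] += minutes
    grand_total = sum(totals.values())
    return {app: round(t / grand_total, 3) for app, t in totals.items()}

shares = usage_share(sessions)
print(shares)  # the browser dominates; office is a distant second
```

With the made-up numbers above, the browser accounts for roughly 84% of time and Office about 14%, with everything else in the noise — the long-tail drop-off the author argues Microsoft saw at scale.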
The standard question: why not have Windows RT support legacy x86 applications via emulation? At first glance there is nothing technically impossible about it. DEC Alpha processors ran x86 applications through emulation, but recall that Alpha was itself faster than x86 at the time, which gave emulated code sufficient performance. Any current ARM processor is noticeably slower than an equivalent x86 part (though far more energy-efficient), so x86 emulation on ARM would make most applications too slow for normal use. But, far more importantly, if the netbook usage data shows that users rarely launch such applications even when they can, why bother doing it at all?
Fine, emulation is hard; but why not give third-party developers tools for porting to ARM and allow third-party applications to be installed there? Of course, classic applications suffer from problems with energy consumption, memory usage, security, and so on — problems the new API was specifically designed to solve. Microsoft could have forced third-party developers to pay due attention to all this, as it did, for example, with the Microsoft Office team. But that would have distracted them from building applications for the new API. And why bother at all if users are really not going to run those applications on this class of device?
It is unlikely that many users ran Photoshop on netbooks. If they used netbooks for photos, they most likely used the kind of lightweight applications that filled the App Store and that would appear fairly quickly in Microsoft's own store. So the analysis of netbook usage telemetry led to the conclusion that only a small fraction of users really need to run desktop applications on this class of device.
Now look at Windows RT, or better yet at Surface, and see what it is. Surface is where the netbook meets the iPad. It carries into the modern era exactly what most users liked about Windows on netbooks, while removing what netbook users simply never used. This is exactly what users told Microsoft through telemetry data, extrapolated from those long-gone netbooks to the modern world.
More evidence? Domains. On the one hand, it seems strange that you cannot join Windows RT to a domain; on the other hand, how many netbooks were ever joined to domains? Microsoft could have had many other reasons not to build this functionality, but the main factor was that telemetry analysis showed it was not important for this class of device. Now recall any question about Windows RT functionality, and most likely you will find the answer in typical netbook use cases.
Using telemetry can explain why Windows 8, Windows RT, and Surface are better received by ordinary users than by experts and power users, whose usage scenarios deviate strongly from the average. Windows 8, Windows RT, and Surface were designed from actual usage data on a device class that experts and power users usually ignore, yet one that accounted for about 20% of volume before the post-PC era began. If Microsoft really did wisely use all that telemetry to capture what made netbooks so popular, plus added the ability to at least partially cover the scenarios for which Apple created the iPad, Windows RT could actually win.
And what if Windows RT fails? It may be because the disgruntled voices of experts drown the product out before it gets a real chance. Or because of bad decisions made despite excellent data. Or because of serious mistakes in marketing, sales, or partner execution that have nothing to do with the product itself. Or because it falls victim to the lie called statistics.