Note: below is a translation of the article “Fixing The Web - Part 1”, which raises the most pressing issues regarding the current state of the Web.
Does the Web need fixing?
The Web is 16 years old. In its first 10 years, Web technologies evolved tremendously quickly, but in the last 6 there have been no major changes. Is that good or bad? A stable development environment has clear advantages for any technology, but is the Web stable, or is it stagnant? Will a Web without innovation suffer the same fate as any other technology that stops following changes in user preferences and becomes obsolete?
Naturally, the Web serves, above all, as a global information source, but how well do Web technologies themselves function? To answer this question, we must consider a number of problems that exist on the Web:
Much of the Web is inaccessible
Millions of people cannot get full access to the Web, because websites are designed for people with excellent vision and manual dexterity who can operate a mouse.
The web is not device independent
Cell phones and other mobile devices with small, low-resolution screens capable of browsing the Web will soon far outnumber ordinary desktop computers. Yet most websites are designed for high-resolution screens, which makes them very difficult to view on mobile devices.
Web best practices are difficult even for experts
The basics of Web technology are easy to understand, so even a newcomer can create a website. But creating a user-friendly, accessible and device-independent website is not easy at all: it is a real challenge even for an expert. Understanding how to properly use (X)HTML, CSS and JavaScript in fact requires a very high level of knowledge.
The complexity of web design
It is not difficult to create beautiful websites with current technology; however, some of the most attractive interactive visual effects can only be achieved through proprietary extensions such as Flash. Even basic effects, such as drop shadows or rounded corners, are either impossible cross-browser or require hacks.
Web application development is a challenge
Current Web technologies limit the functionality of web applications compared to desktop ones. Web developers have very few form controls to choose from, and some basic features expected of a networked application are impossible on the Web. For example, it is impossible to determine the exact number of active users of a web application at any given moment, because of the stateless nature of the HTTP protocol. Also, servers cannot push a message to all active users, because only the client computer can initiate a connection.
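Since HTTP itself keeps no state between requests, an "active user" count can only be approximated at the application level. A minimal sketch of one common workaround, assuming the server records a last-seen timestamp per session (all names here are hypothetical, not from the article):

```python
import time

# Hypothetical in-memory registry: session id -> timestamp of last request.
last_seen = {}

def touch(session_id, now=None):
    """Record that a session made a request just now."""
    last_seen[session_id] = time.time() if now is None else now

def active_users(window=300, now=None):
    """Approximate the number of users seen within the last `window` seconds."""
    now = time.time() if now is None else now
    return sum(1 for t in last_seen.values() if now - t <= window)

touch("alice", now=1000)
touch("bob", now=1100)
print(active_users(window=300, now=1200))  # both seen within 300 s -> 2
print(active_users(window=150, now=1200))  # only bob -> 1
```

Note that this is only ever an estimate: a user who closed the browser a minute ago still counts until the window expires, which is exactly the limitation the article describes.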
Problems of web localization
One would expect that a global information system like the Web would support all of the world's languages equally. However, it does not. Unfortunately, most Web technologies are still built around ASCII. Even setting aside the debate over non-ASCII characters in URLs, the content of web pages is full of cryptic entities (for example, &auml;) and numeric character references (&#8364;) instead of letters of the actual alphabet (for example, Greek or Russian). All of this makes the text less readable and harder to maintain.
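The difference is easy to demonstrate with Python's standard `html` module: both the named entity and the numeric character reference decode to single characters that could just as well have been stored directly in a UTF-8 page.

```python
import html

# The same text as it often appears in markup vs. as plain UTF-8 characters.
escaped = "K&auml;se kostet 5 &#8364;"
print(html.unescape(escaped))  # -> Käse kostet 5 €
```

A page served as UTF-8 can contain "ä" and "€" literally, with no loss and far better readability for anyone editing the source.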
Insufficient interoperability
Creating websites and web applications that work equally well in different browsers is a real challenge for developers. Browser makers are hesitant to fix some bugs (translator's note: this probably refers to the transition from IE 6 to IE 7), because too many websites have been written with those bugs in mind, relying on erroneous or incorrect browser behavior. Many developers build websites only for specific browsers or screen resolutions. On some sites you can still see the label: "Optimized for Browser X".
Data on the Web cannot be reused
One of the expected benefits of the Web was a digital environment in which data could be repurposed (translator's note: the term repurposed is used; one definition is "reuse of content, for example by making semantic or structural changes to it, or by changing the method of organizing it, for purposes or in ways other than those of the original version"). For example, an article posted on a website might later be published on other sites, printed in a magazine, or added to the knowledge base of a desktop application, all without manual corrections to the data's structure or format. Unfortunately, this is not possible today: current web applications create pages in which markup is mixed with content. As long as this continues, there is no point in talking about reusing data on the Web.
The web is not secure
Web technologies allow attacks that are far too simple: for example, modifying the GET query string (the URL in the browser's address bar), or saving a local copy of a web form, altering it, and submitting its data directly to the server. Web developers have to be information-security experts to cope with the open nature of Web technologies.
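Because any client can rewrite the query string or a saved form, the only real defense is to re-validate every parameter on the server and never trust what arrives. A minimal sketch of such server-side validation (the parameter name and limits are invented for illustration):

```python
from urllib.parse import urlparse, parse_qs

def parse_page_param(url, max_page=1000):
    """Extract a 'page' query parameter, rejecting anything tampered or out of range."""
    qs = parse_qs(urlparse(url).query)
    raw = qs.get("page", ["1"])[0]
    if not raw.isdigit():                 # rejects "-1", "7;DROP TABLE", "abc", ...
        raise ValueError("invalid page parameter")
    page = int(raw)
    if not 1 <= page <= max_page:         # rejects absurd values like 999999999
        raise ValueError("page out of range")
    return page

print(parse_page_param("http://example.com/list?page=7"))  # -> 7
```

The same principle applies to every value a form or URL delivers: treat it as hostile input, validate it against an allow-list or range, and only then use it.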
Data on the web is too vulnerable
When publishing material on the Web, you have to beware of email addresses being harvested by spam bots, of spam sent via the HTTP Referer header, and of bots that automatically fill out forms. Website owners, in turn, abuse Web technologies to deceive search bots or to extract private information from their visitors.
The Web is not optimized for robots (not machine friendly)
People only need to glance at a text to find the information they need or understand what it is about. Robots, on the other hand, need data to be well organized in order to process it correctly. Because data on the Web is extremely poorly structured, robots cannot understand what it is about. Why does this matter? Because people find information on the Web largely through search engines, and search engines use exactly such robots to collect information. If robots cannot correctly interpret the data on web pages, search engines cannot return relevant results for your queries.
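As a rough illustration of what "well organized" means for a robot: a program can only extract meaning reliably when the markup labels the data. The sketch below (the class names are invented, in the spirit of microformats, and are not from the article) pulls event data out of labeled spans with Python's standard HTML parser; from an unlabeled blob of text the same program could extract nothing.

```python
from html.parser import HTMLParser

class EventExtractor(HTMLParser):
    """Collect the text of <span> elements whose class attribute labels the data."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None          # class of the span we are inside, if any

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self._current = dict(attrs).get("class")

    def handle_endtag(self, tag):
        if tag == "span":
            self._current = None

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data

page = ('<p><span class="summary">Web conference</span> on '
        '<span class="dtstart">2007-09-15</span></p>')
extractor = EventExtractor()
extractor.feed(page)
print(extractor.fields)  # -> {'summary': 'Web conference', 'dtstart': '2007-09-15'}
```

The text between the labeled spans ("on") is ignored, because only the labeled pieces carry machine-readable meaning; that labeling is exactly what most Web pages lack.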
Having raised these and other problems concerning the Web, we should ask: do we need new technologies, or can the current ones be improved to resolve the questions posed?
Thanks to everyone who read to the end. The authors of xhtml.com promise to publish a continuation of the article in September, in which they will presumably discuss possible measures and partially answer the questions posed.
Republished with permission from xhtml.com.