
Google is looking for ideas for building artificial intelligence applications that people can understand.

Google has consistently promoted the idea that the most promising way to develop AI for people is to include a human in machine algorithms by default. This is both safer and more effective: safer because people retain control over the processes inside the AI, and more effective because many of the problems algorithms solve today are purely mechanical, while a person contributes creativity.

Beyond these two reasons, whose realism can be debated, human-machine collaboration will also help preserve jobs. The first two control tools built under the PAIR (People + AI Research) initiative were recently announced and offered for broad use. The initiative's goal is to make people and AI work together across the widest possible range of industries and applications.

The initiative is also in Google's own interest. By publicly discussing tools for human-machine interaction, the company gains, as allies and future users of its solutions, representatives of every level that will determine how AI is applied: scientists, industry experts, and end users. Google is particularly interested in solutions for medicine, agriculture, entertainment, and manufacturing. Public experiments in these areas will give the corporation convenient new use cases for AI and prepare consumers for new applications.


PAIR is led by Fernanda Viégas and Martin Wattenberg, who specialize in visualizing what happens when large data sets are processed, which is the essence of machine learning. It is in the uncontrolled self-learning of machines that most futurists see the threat, so the machine's train of thought needs to be observed in time. To this end, Fernanda and Martin have built two big-data visualization tools, Facets Overview and Facets Dive, and plan to visualize machine learning processes as well. The first monitors summary statistics of features; the second supports detailed study of every individual item in a data set.
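
For illustration, here is a minimal sketch of feeding Facets Overview from Python in a Jupyter notebook. It follows the published usage of the open-source `facets-overview` package; the data frames are invented for the example, and exact APIs may differ between versions:

```python
import base64

import pandas as pd
from IPython.display import HTML, display
from facets_overview.generic_feature_statistics_generator import (
    GenericFeatureStatisticsGenerator,
)

# Two splits of a made-up data set; Facets Overview compares their
# per-feature statistics side by side.
train_df = pd.DataFrame({"age": [23, 35, 41, 29], "city": ["NY", "LA", "NY", None]})
test_df = pd.DataFrame({"age": [31, 27, 150, 38], "city": ["LA", "NY", "SF", "NY"]})

# Build the feature-statistics protocol buffer and encode it for the widget.
proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames(
    [{"name": "train", "table": train_df}, {"name": "test", "table": test_df}]
)
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")

# Render the facets-overview web component in a notebook cell.
HTML_TEMPLATE = """
<link rel="import"
      href="https://raw.githubusercontent.com/PAIR-code/facets/master/facets-dist/facets-jupyter.html">
<facets-overview id="elem"></facets-overview>
<script>
  document.querySelector("#elem").protoInput = "{protostr}";
</script>"""
display(HTML(HTML_TEMPLATE.format(protostr=protostr)))
```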

The tools can catch abnormal feature values, the absence of features that should be typical, anomalous result distributions, and failures in testing and tuning. Most importantly, their flexible settings reveal patterns and structures that are not obvious or did not exist in the original data. What are mere statistical generalizations to a person serve the machine as grounds for conclusions, and the machine cannot judge how valid or acceptable those conclusions are for humans. We have to see which patterns and "conclusions" the machine has built for itself in order to fix errors in time, whether they turn out to be dangerous for us or not.
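
The same class of checks can be approximated without the visual front end. Below is a minimal sketch of the idea in plain pandas; the thresholds, column names, and file name are invented for illustration:

```python
import pandas as pd

def basic_feature_audit(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column report of the signals such tools flag: missing values,
    constant features, and numeric outliers."""
    rows = []
    for col in df.columns:
        s = df[col]
        report = {
            "feature": col,
            "missing_frac": s.isna().mean(),     # absence of typical values
            "n_unique": s.nunique(dropna=True),  # 1 unique value => no signal
        }
        if pd.api.types.is_numeric_dtype(s):
            mean, std = s.mean(), s.std()
            # Flag values more than three standard deviations from the mean.
            report["n_outliers"] = int(((s - mean).abs() > 3 * std).sum())
        rows.append(report)
    return pd.DataFrame(rows)

# Hypothetical usage on a training table:
audit = basic_feature_audit(pd.read_csv("train.csv"))
print(audit[audit["missing_frac"] > 0.05])  # columns with >5% missing data
```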

Background of the PAIR Initiative


Earlier, Google co-founded the Partnership on AI with industry colleagues to develop use cases beneficial to people, added a "human-centered interaction" nomination to its award for researchers, and published recommendations for machine learning developers.

Google identified seven common mistakes worth avoiding when creating applications that end users will actually want:

  1. Do not expect machine learning to determine which problems to solve. Look for problems yourself; do the marketing before you write the code.
  2. Consider whether machine learning is a justified way to solve the problem. Many mathematical and software tools work more simply, quickly, or precisely on narrow problems; heuristic analysis may be less accurate than machine learning, but it needs less time and computation. Imagine how a person would solve the problem, how you could improve their results in each of the four cells of the confusion matrix (see the code sketch after this list), and what expectations and stereotypes users of similar tasks hold today.
  3. Try changing the input conditions of the problem and simulate how it would be solved by a person imitating the machine's reasoning.
  4. Evaluate the possible errors of the algorithms and how critical they are for the loyalty of future users. Tuning can raise the frequency of false and true decisions at the same time, or, conversely, suppress all decisions in general. You need to understand which matters more, recall or precision, and find the balance.
  5. Keep in mind that users will "get smarter" as they grow used to new technologies. Some of the "dumb" helper algorithms must be switched off in time, otherwise their presence will start to annoy users.
  6. Use reinforcement, motivating users to assign the right tags and labels.
  7. Encourage developers to imagine how users might apply and test the future application.
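
To make points 2 and 4 concrete, here is a minimal sketch of the four cells of the confusion matrix and the precision/recall trade-off they imply. The labels and predictions are invented for illustration:

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Count the four cells of the binary confusion matrix."""
    cells = Counter()
    for truth, guess in zip(y_true, y_pred):
        if truth and guess:
            cells["tp"] += 1   # true positive: correctly flagged
        elif truth:
            cells["fn"] += 1   # false negative: a real case was missed
        elif guess:
            cells["fp"] += 1   # false positive: a false alarm
        else:
            cells["tn"] += 1   # true negative: correctly ignored
    return cells

# Invented labels and predictions for illustration.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

c = confusion_counts(y_true, y_pred)
precision = c["tp"] / (c["tp"] + c["fp"])  # how many flagged cases are real
recall = c["tp"] / (c["tp"] + c["fn"])     # how many real cases got flagged
print(dict(c), f"precision={precision:.2f}", f"recall={recall:.2f}")
```

Lowering a classifier's decision threshold usually raises recall at the cost of precision, and vice versa; which side to favor depends on how costly each kind of error is for the user.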

Source: https://habr.com/ru/post/373639/

