
The first technology to fake any voice has been created.



They say that in Soviet times, wiretapping equipment was installed at telephone exchanges. Naturally, recording and physically listening to every conversation was impossible at the time, but voice recognition technology worked effectively. Given the voiceprint of a specific person, the system triggered instantly, wiretapping or recording whichever phone he called from. These technologies are still available today and are probably used in investigative work. A person's voice is as unique as his fingerprints.

Thanks to advanced developments in the field of AI, attackers will now be able to put investigators on the wrong track. On April 24, 2017, the Canadian startup Lyrebird announced the first service in the world for faking the voice of any person. A one-minute sample is enough for the system to learn.

The Lyrebird website explains that, based on the one-minute sample, the system "generates a unique key" with which it can process any other speech, giving it the characteristics of the desired voice.
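Lyrebird has not published its internals, but the "unique key" described above matches the general idea of a speaker embedding: a short sample is distilled into one fixed-length vector, which then conditions a synthesizer for arbitrary new text. The sketch below is purely conceptual; every function name and shape in it is an assumption, not Lyrebird's actual API.

```python
# Conceptual sketch of a voice-cloning pipeline (NOT Lyrebird's real code):
# (1) distill per-frame acoustic features of a ~1-minute sample into one
#     fixed-length "voice key" (speaker embedding);
# (2) condition a synthesizer on that key for any new text.
import numpy as np

def voice_key(sample_frames: np.ndarray) -> np.ndarray:
    """Reduce per-frame features (n_frames, n_features) to one vector.

    Averaging is the simplest possible speaker embedding; real systems
    use a trained neural encoder instead."""
    return sample_frames.mean(axis=0)

def synthesize(text: str, key: np.ndarray) -> np.ndarray:
    """Stand-in for a neural synthesizer: returns a dummy 1-second
    waveform whose scale depends on the speaker key."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(16000) * np.linalg.norm(key)

# One key is computed once, then reused for arbitrary new speech:
frames = np.random.default_rng(0).standard_normal((6000, 40))  # fake features
key = voice_key(frames)            # the "unique key" for this speaker
wave = synthesize("Hello from a cloned voice", key)
print(key.shape, wave.shape)
```

The point of the design is the split: the expensive step (learning the speaker) happens once per voice, while generating each new sentence only needs the cheap conditioned synthesis step.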
This system can be used to impersonate another person, for example, for practical jokes (just don't joke with the voices of individuals who are on the federal wanted list). From this day on, don't be surprised if your mom, grandmother, wife, or child calls you from an unfamiliar number, says strange things, asks for help, or asks you to transfer money to some account. Anyone can speak in the voice of your relative.

The capabilities of the system are not limited to pranks and social engineering. For example, you can develop your own unique voice and use it in communication if you are not satisfied with your own for some reason. This service will be useful for call-center operators, marketers, salespeople, and other professionals in fields where negotiations and phone calls play an important role. Want to charm a girl, win over an interlocutor, or add credibility to yourself? Just add a little bass and velvet.

It is known that a person's voice is directly related to psychological personality traits, and this information is transmitted to the interlocutor on a subconscious level. Squeaky, thin, squealing voices cause uncomfortable, anxious sensations and are subconsciously associated with youth, vigor, inexperience, and immaturity. People with low voices, on the other hand, are perceived as self-sufficient, highly intelligent, and self-confident. A person with a low voice is intuitively considered knowledgeable and authoritative. Even image-makers use these techniques, lowering the voices of political candidates in television broadcasts in order to inspire greater confidence among female voters.

In the Lyrebird service, you can select one of thousands of pre-prepared voices for your own purposes, or design your own original sound. The developers guarantee that processing a thousand sentences with a unique "key" on their GPU clusters takes less than 0.5 seconds.

Lyrebird's speech generation technology was developed at the Montreal Institute for Learning Algorithms (MILA) at the University of Montreal (Canada).

As a demonstration of the technology, the developers generated keys for the voices of Donald Trump, Barack Obama, and Hillary Clinton. In a demo audio clip, these politicians discuss the capabilities of the Lyrebird voice-faking system (audio).

Here are individual generated phrases in different voices. The same phrase is spoken by the same voice with different intonations:

Obama 0
Obama 1
Obama 2
Obama 3
Trump 0
Trump 1
Trump 2
Trump 3
Trump 4
Trump 5
Trump 6
Trump 7

The demo playlist presents two dozen voices with different characteristics, as examples of the voices that can be generated to your taste.

Lyrebird is now finishing development of its API, so that the service can actually be used in third-party applications. The developers say that Lyrebird is the first company in the world to offer technology for faking other people's voices. In this regard, they are subject to certain ethical obligations. The main one is to inform the public widely about the technology's ability to accurately counterfeit someone else's voice, so that from this day, April 24, 2017, no court in the world and no investigative activity should rely on the authenticity of a particular person's voice. From this day on, voices are no longer unique; any of them can be faked.

Citizens who care about their privacy can be advised to be careful with their voice: do not transmit it over unprotected channels, and speak in brief phrases so that an attacker cannot collect enough material to fake it.

Source: https://habr.com/ru/post/403413/
