At Voximplant we mostly deal with automated calls: automatically answering and reporting order status, automatically calling ahead of a delivery, automatically connecting the caller to the right customer. All of that runs on JavaScript in the cloud. But besides that, we enjoy building out our SDK platform: a Web SDK for calling from and to the browser, native Android and iOS SDKs for making calls over the Internet while roaming, and a React Native SDK for calling from cross-platform applications. A few days ago we released an SDK for Unity, which lets you make calls from virtual reality.
Surprise: this is not about games

When people say “Unity,” they mean games. So we talked to people from the gaming industry and asked whether they need voice and video calls in games. The unanimous chorus of “No, absolutely not!” surprised us a bit. It turns out that for most cooperative games, letting random teammates talk to each other by voice leads to nothing good: cheating, insults and profanity are what developers fear most. Only the biggest projects with very specific gameplay can afford built-in voice chat: World of Warcraft, World of Tanks and the like, but not a cooperative Clash Royale clone from the App Store.
Unity + VR = not just games

In the consumer segment, virtual reality is still a toy. Not in the sense of “playing games,” but in the sense of a “high-tech gadget”: you try it, you admire it, and then you put it on a shelf because there is no content. For business, it is a different story. Companies that need to train employees quickly appreciated how easily VR fools our vestibular system and visual cortex. A programmer can be trained with nothing but a desk, a chair and a laptop. Petroleum engineers, on the other hand, gain experience by working with very expensive equipment, which also tends to sit in remote places that trainees have to be shipped out to. VR headsets and training programs save astronomical sums, let companies reach many more candidates and give them far more material.
VR in education, training and quests
Only a few days have passed since the SDK was released, but we already see strong interest in low-latency voice and video “inside” VR applications. Here is what people tell us they want to try:
- Training in specialized schools. Physics and chemistry experiments in virtual space look spectacular and are well remembered. A teacher who puts on a headset and turns into a “talking ball” can “teleport” from one student to another, correcting their actions and answering questions. A sought-after teacher can also run classes for several schools at once while physically staying at home or in the head office.
- Presentations in virtual space. What does VR give you compared to a monitor? The ability to turn your head. Think of a typical webinar: slides filling the whole screen, a tiny video of the presenter in a corner; dull and dreary. Instead, we put a camera in front of the presenter, project the video call onto a texture in virtual reality, and put the slides on a texture next to it. Attendees not only communicate in real time but can also choose what to focus on, the slides or the speaker, simply by turning their heads, which is exactly what we do when attending a class in person.
- Training for Emergencies Ministry employees, oil workers and other folks who need special locations, scenarios and equipment to practice on. Low-latency communication, plus the ability to bring many people into the conference from different devices: personal account web pages, cell phones, SIP telephony.
- Quests. The chance to play Mafia while sitting at home is priceless :)
Implementation: a bit of pain and suffering

We have had Android and iOS SDKs built on libwebrtc for a long time, and we naively assumed that making an SDK for Unity would not be much harder. Harsh reality, as usual, made its own adjustments. The first “naive” version added a top-menu item that exported an Android or iOS project with the necessary modifications and linked libraries. The developers we showed it to twirled a finger at their temples and sent us off to study the Google Cardboard SDK. It turned out that building the native code can be integrated into the Unity Editor itself: you place the binaries in strictly designated areas of the project and add an editor plugin that hooks into the build process and performs all the modifications on the fly. Another difficulty was runtime permissions on recent Android versions: Unity itself cannot request them, so we had to do two rounds of work on the bindings: the first added the necessary entries to the manifest, and the second requested microphone and camera permissions when the SDK was initialized.
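To give an idea of what that second round looks like, here is a minimal sketch of the Android side of such a permissions plugin. The package, class and method names are illustrative assumptions, not the actual Voximplant Unity SDK API:

```kotlin
package com.example.unitycalls

import android.app.Activity
import android.content.pm.PackageManager
import android.os.Build
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical plugin object; a Unity script would reach it via
// new AndroidJavaClass("com.example.unitycalls.CallPermissions").
object CallPermissions {
    private const val REQUEST_CODE = 1001
    private val PERMISSIONS = arrayOf(
        android.Manifest.permission.RECORD_AUDIO,
        android.Manifest.permission.CAMERA
    )

    // True if microphone and camera access are already granted.
    @JvmStatic
    fun hasPermissions(activity: Activity): Boolean =
        PERMISSIONS.all {
            ContextCompat.checkSelfPermission(activity, it) ==
                PackageManager.PERMISSION_GRANTED
        }

    // On Android 6.0+ this shows the system dialog; on older versions
    // the manifest entries added at build time are already enough.
    @JvmStatic
    fun request(activity: Activity) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && !hasPermissions(activity)) {
            ActivityCompat.requestPermissions(activity, PERMISSIONS, REQUEST_CODE)
        }
    }
}
```

On the Unity side, a thin C# wrapper only needs to pass the current activity into `request` and wait for the user's answer before initializing the SDK client.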
Your opinion?
Everything described above has been possible since the first VR prototypes appeared, including voice and video. But right now every such project is a one-off with its own fleet of reinvented wheels, which makes it expensive and slow to build. Our SDK lets you add voice and video to a VR application in a few hours, so our colleagues can experiment quickly and push the industry forward. The first results will take at least another six months, but you can try our SDK right now and share your thoughts on the future of VR in the comments.