
I was motivated to write this article by another article about a sound manager suitable for use in small projects. In this post I will describe some flaws that the author did not mention, and offer my own implementation which, in my opinion, corrects them.
This article will be useful both to novice developers who want to gain experience and get a working solution, and to seasoned architects whose offices host debates about the importance of separating the view from the model and removing statics from the code. I am sure the solution I propose is not completely universal and has its own drawbacks, but what matters to me is that every interested reader takes away something useful and improves their own modules with the help of my advice.
Problems
Angry loner
Many may disagree with me, but I believe that the use of singletons, especially for things like sound playback, is unacceptable in projects of any scale. With this anti-pattern, all parts of the code become tightly coupled through a direct reference to a concrete type, which ties your hands in several ways at once. If it is possible to write a test for a singleton at all, it is very difficult, looks ugly, and is not deterministic. Nor can you elegantly write a test for any module that uses this sound manager. And because a single instance with an uncontrolled life cycle is shared everywhere, you also tie your own hands, tacitly creating logical dependencies between parts of the code that should not know about each other at all.
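To illustrate the hidden coupling (with hypothetical class and method names that are not from the referenced article), any class written against such a singleton cannot be tested without the real manager, because the dependency is invisible from its API and hard-wired inside:

```csharp
// Hypothetical illustration of the hidden coupling a static singleton creates.
public class EnemyUnit
{
    public void Die()
    {
        // Hard-wired dependency: there is no way to substitute a fake
        // in a test, and the coupling is invisible from the class's API.
        SoundManager.PlaySound("enemy_death");
    }
}
```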
Examples:
static void PlayMusic(string name);
static void PlaySound(string name, bool pausable = true);
These methods force third-party code to know the specific names of melodies. It becomes the programmer's responsibility, in every module responsible for its own sounds, to pass the arguments correctly. And there may be many such places in a project: various UI elements, units shooting and dying, the environment. In the comments to the referenced article, one reader suggests passing sound channels in the arguments, which again logically couples parts of the code:
public void PlayFX(AudioClip clip, SoundFXChannel channel = SoundFXChannel.First, bool forceInterrupt = false) { }
public void StopFX(SoundFXChannel channel) { }
Now, for example, the buttons (or the UIManager, if you prefer) calling these methods must keep track of which channel they belong to; once again, that is the programmer's responsibility.
Too much access
It has always seemed strange to me that when I call a method on someone else's code, I get back a type derived from MonoBehaviour. Is it safe to start coroutines on it? Has the developer protected it from Destroy()? And do I even want to see "using UnityEngine" further down in code that does not need MonoBehaviour? This problem partially overlaps with the previous point about the singleton: we do not need a reference to the instance itself, its API is enough to work with it. Funnily enough, even if you implement a static accessor like this:
private static SoundManager instance;
public static ISoundManager Instance { get { return instance as ISoundManager; } }
Then, even though you receive an abstraction, call sites still have to reference the concrete type:
ISoundManager sm = SoundManager.Instance;
That solves the problem only partially.
Hard-coded path and direct loading
private AudioClip LoadClip(string name)
{
    string path = "Sounds/" + name;
    AudioClip clip = Resources.Load<AudioClip>(path);
    return clip;
}
Lazy loading of sounds, in my opinion, does not always make sense. First, in Unity's audio import settings you can configure how a sound is stored: kept directly in RAM, streamed from disk, or loaded into memory compressed and decompressed right before playback.
Read more about import settings.
Second, my experience of parsing Unity build logs suggests that sound resources are on average in third place or lower by total size, so if you do start optimizing memory, sounds are hardly the only place to look. (Of course, this may not apply to projects whose gameplay is built around sound.)
Read more about build logs.
Now about the path hard-coded into the source: once again it is the programmer's responsibility to keep the path correct when moving this module from project to project. The real dancing begins when a sensible thought occurs to the team: "Why not make a git submodule, put the audio manager there, and have the latest version of the module available in every project that needs it?" Since the path is baked into the code, we cannot change it, because in other projects it would become wrong. On the other hand, if you change the path only locally, git will forever show that change as uncommitted.
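One way out, sketched below with hypothetical names that are not part of the original module, is to inject the resource path through the constructor instead of baking it into the method:

```csharp
// Sketch: the resource folder is injected, not hard-coded.
// The ClipLoader class and "soundsRoot" parameter are illustrative assumptions.
public class ClipLoader
{
    private readonly string soundsRoot;

    public ClipLoader(string soundsRoot)
    {
        this.soundsRoot = soundsRoot;
    }

    public AudioClip LoadClip(string name)
    {
        // The caller decides where sounds live, e.g. new ClipLoader("Sounds/"),
        // so the submodule itself carries no project-specific path.
        return Resources.Load<AudioClip>(soundsRoot + name);
    }
}
```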
My own solution
The module code is located at: https://github.com/hexgrimm/Audio
For publication within the article the code was simplified: I removed most of the tests and the abstractions that exist for their sake, so that the code reads more clearly. In projects under my leadership the module is used with somewhat greater extensibility and a more extensive configuration.
So, first, let's talk about architecture:
This audio module is intended to be a leaf in the dependency graph of any architecture: it requires no dependencies further down the graph, and it does not matter who creates it. There is one restriction, though: the module must have the Singleton lifestyle (not to be confused with the Singleton design pattern; see the book "Dependency Injection in .NET" by Mark Seemann for details). This is due to the Unity3D requirement that an application have only one AudioListener. If you use dependency injection in the project, the binding will look like this (using Ninject as an example):
binder.Bind<IAudioController, IAudioPlayer, IMusicPlayer>().To<AudioController>().InSingletonScope();
If you simply want to create this class and use it in the project, make sure that every site that triggers sound playback is handed an abstraction over the same instance.
As an example:
var ac = new AudioController();
IAudioController iac = ac;
IAudioPlayer iap = ac;
IMusicPlayer imp = ac;
From then on, all work is done, and all call sites are supplied, only through the abstractions iac, iap, and imp.
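For example, a hypothetical UI class would receive only the abstraction it needs through its constructor (the class and the Play2D method name below are illustrative assumptions, not the module's actual API):

```csharp
// Illustrative consumer: it sees only IAudioPlayer, never AudioController.
public class ShootButton
{
    private readonly IAudioPlayer audioPlayer;
    private readonly AudioClip shotClip;

    public ShootButton(IAudioPlayer audioPlayer, AudioClip shotClip)
    {
        this.audioPlayer = audioPlayer;
        this.shotClip = shotClip;
    }

    public void OnClick()
    {
        // The button neither knows nor cares which concrete class plays the sound.
        audioPlayer.Play2D(shotClip); // Play2D is a hypothetical method name
    }
}
```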
Abstractions
IAudioController, an interface designed for general sound control (on / off, master volume):
public interface IAudioController : IDisposable
{
    // ...
}
IAudioPlayer, an interface designed to play 2D and 3D sounds and to control them afterwards.
public interface IAudioPlayer
{
    // ...
}
IMusicPlayer, music playback and control.
public interface IMusicPlayer
{
    // ...
}
When you call a method to play a sound or music, the consumer receives a numeric code with which it can later control that sound:
for example, stop it, or update the position of the source if the emitting object is moving.
A separate method is:
SetAudioListenerToPosition(Vector3 position);
With 3D sound and a moving listener, you must provide a way to control the listener's position.
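Put together, a call site might look like the following sketch. The method names Play3D, SetSoundPosition, and Stop are hypothetical stand-ins for the module's actual API; only SetAudioListenerToPosition is mentioned above:

```csharp
// Hypothetical usage of the numeric-code handle described above.
int code = audioPlayer.Play3D(footstepsClip, unit.Position); // returns a control handle

// ...later, as the unit moves:
audioPlayer.SetSoundPosition(code, unit.Position);

// ...and when the unit dies:
audioPlayer.Stop(code);

// The listener follows the camera (this method is part of the described API):
audioController.SetAudioListenerToPosition(cameraPosition);
```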
You may have noticed that one of the arguments of the playback call is of type AudioClip. In my opinion, the logic of storing clips, or of associating clips with sound sources, does not belong in the controller itself, so I deliberately took that authority away from the module. This lets the module's consumer decide whether to create a sound storage base or to associate clips directly with their sources (which is what we do in most cases: different units have female or male voices, and that information is an integral part of the unit no matter what kind of encapsulation is applied; the unit then delivers it through the IAudioPlayer interface).
You may also have noticed that IAudioController inherits from IDisposable. This is intentional and justified by the limitations Unity3D imposes. The Dispose method removes the Unity objects created to support the module's operation. From the module's point of view those scene objects are, in my opinion, "separately managed" resources: since AudioController is not a MonoBehaviour, we cannot call Destroy() on it, and the garbage collector cannot clean up the references either, because the Unity objects they point to are still alive. By calling Dispose we guarantee that all Unity-related resources and references have been cleared. That said, in small projects the life cycle of the audio module always spans the whole application run, so perhaps you need not bother.
I also apologize for the large number of lines of the form:
source.pitch = 1 + Random.Range(-0.1f, 0.1f);
Magic numbers are of course unacceptable; they are written here deliberately, as an example, because the configuration we pass through the constructor in real projects complicates the code, and I wanted to keep the code as simple as possible for beginners.
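In a real project the pitch spread would come from a configuration object injected through the constructor. A minimal sketch, with all names being my own illustrative assumptions:

```csharp
// Hypothetical configuration object replacing the magic numbers.
public class AudioConfig
{
    public float PitchSpread = 0.1f; // random pitch deviation applied to sounds
}

public class PitchedPlayback
{
    private readonly AudioConfig config;

    public PitchedPlayback(AudioConfig config)
    {
        this.config = config;
    }

    public void ApplyPitch(AudioSource source)
    {
        // Same effect as "source.pitch = 1 + Random.Range(-0.1f, 0.1f)",
        // but the spread is configurable instead of hard-coded.
        source.pitch = 1f + UnityEngine.Random.Range(-config.PitchSpread, config.PitchSpread);
    }
}
```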
Separately, a few words about the SavableValue<> class. This utility class, which stores values of any serializable type in PlayerPrefs, had to be duplicated inside this module so as not to drag in a separate Utils namespace. I do not know how well BinaryFormatter works on non-mobile platforms.
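For readers unfamiliar with the idea, a stripped-down sketch of such a wrapper might look like this. It is my illustration of the concept (a BinaryFormatter payload stored as a base64 string in PlayerPrefs), not the module's actual implementation:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using UnityEngine;

// Hypothetical sketch: persists any serializable value in PlayerPrefs,
// e.g. the master volume, surviving between application runs.
public class SavableValue<T>
{
    private readonly string key;
    private T cached;

    public SavableValue(string key, T defaultValue)
    {
        this.key = key;
        cached = PlayerPrefs.HasKey(key) ? Load() : defaultValue;
    }

    public T Value
    {
        get { return cached; }
        set
        {
            cached = value;
            Save(value);
        }
    }

    private void Save(T value)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, value);
            PlayerPrefs.SetString(key, Convert.ToBase64String(stream.ToArray()));
        }
    }

    private T Load()
    {
        var bytes = Convert.FromBase64String(PlayerPrefs.GetString(key));
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream(bytes))
        {
            return (T)formatter.Deserialize(stream);
        }
    }
}
```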
What we ended up with
By avoiding a Singleton in the project we create a convenient seam, and can later replace the abstractions if necessary. You can now write any test for a class that plays sound simply by using a mock of the abstraction:
IAudioPlayer mock = Substitute.For<IAudioPlayer>();
var testClass = new Class(mock);
Access to the class is limited by interfaces, so nothing extraneous can be done with it (unless you count abusing it with an invalid audio code). No extra dependencies beyond the HexGrimmDev.Audio namespace are pulled in. Following Mark Seemann's recommendations, any extra responsibility placed on the class can, if necessary, be passed in through the constructor. There are no external logical couplings, so the module can be distributed as a git submodule.
I understand that not all isolation is equally useful, but in this case creating the seam did not take much effort. For more inspiration, I suggest the talk by Oleg Chumakov, "Why should your Unity project work in the console?".
I also strongly recommend passing references to modules through the constructor: it is certainly clearer for the consumer, and besides, it is damn good discipline. And most importantly, I suggest not chasing full universality. There is an excellent talk on this topic, "How not to get carried away by the pursuit of the universalization of components".
Feature list of the sample code:
- Playback and control of 2D and 3D sounds, as well as music.
- Sound balancing (a float argument in the 0-1 range is passed for fine balancing of individual sounds; it is taken into account when the volume changes).
- Optional looping.
- Changing the listener's position for 3D sounds.
- A random pitch shift of ±0.1 for all sounds except music (as an example).
- Pause and resume for music.
Specific characteristics:
- AudioMixer is not used.
- The code contains many magic numbers; refactor before use.
- There is no smooth transition between music tracks; it can be implemented in many ways.
- Because of the cuts made to the code and the deleted tests, something may not work correctly; the code is first and foremost an example, not a ready-made tool.
- For writing tests, it is recommended to introduce a seam between the Unity components and the AudioController: work with AudioSource and AudioListener through additional abstractions, and replace those abstractions with dummies in the test. That way the tests also run in minimal time.
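Such a seam might be sketched as follows. The interface and wrapper below are my illustration of the recommendation, not part of the published module:

```csharp
// Hypothetical seam: the controller talks to this interface;
// tests substitute a dummy, production wraps a real AudioSource.
public interface IAudioSourceWrapper
{
    void Play(AudioClip clip);
    void Stop();
    float Volume { get; set; }
    Vector3 Position { get; set; }
}

public class UnityAudioSourceWrapper : IAudioSourceWrapper
{
    private readonly AudioSource source;

    public UnityAudioSourceWrapper(AudioSource source)
    {
        this.source = source;
    }

    public void Play(AudioClip clip)
    {
        source.clip = clip;
        source.Play();
    }

    public void Stop()
    {
        source.Stop();
    }

    public float Volume
    {
        get { return source.volume; }
        set { source.volume = value; }
    }

    public Vector3 Position
    {
        get { return source.transform.position; }
        set { source.transform.position = value; }
    }
}
```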