
Creating a sound effects synthesizer for retro games


In this article, you will learn how to build a synthesizer-based audio engine that can generate sounds for retro-style games. The engine produces all of its sounds at run time and requires no external dependencies, such as MP3 or WAV files. The end result is a working library that can be conveniently embedded in games.

Before we start building the audio engine, we need to cover a couple of concepts: first, the waveforms the engine will use to generate sounds, and second, how sound waves are stored and represented in digital form.

This tutorial uses the ActionScript 3.0 programming language, but the techniques and concepts involved can easily be transferred to any other language that provides a low-level API for working with sound.



Waves


The audio engine we create will use four basic types of waves (also known as periodic waves, because their basic shapes repeat periodically). All of them are very commonly used in both analog and digital synthesizers, and each waveform has its own distinctive sound.
Below is a visual representation of each waveform, a sound sample, and the code needed to generate the waveform as an array of sampled data.

Pulse




The pulse wave creates a sharp and harmonious sound.

Download MP3.

To generate an array of values representing a pulse wave (in the range -1.0 to 1.0), you can use the following code, in which n is the number of values needed to fill the array, a is the array that receives the samples (declared here as a Vector.<Number>, for consistency with the rest of the engine), and p is the normalized position within the wave:

var n:int = 100;  // number of samples to generate
var a:Vector.<Number> = new Vector.<Number>( n );  // holds the sampled values
var i:int = 0;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.5 ? 1.0 : -1.0;
    i++;
}

Sawtooth




The sawtooth wave creates a sharp and harsh sound.

Download MP3.

To generate an array of values representing a sawtooth wave (in the range -1.0 to 1.0), where n is the number of values needed to fill the array, a is the array, and p is the normalized position within the wave:

var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
    i++;
}


Sine




The sine wave creates a smooth and clear sound.

Download MP3.

To generate an array of values representing a sine wave (in the range -1.0 to 1.0), you can use the following code, where n is the number of values needed to fill the array, a is the array, and p is the normalized position within the wave:

var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = Math.sin( p * 2.0 * Math.PI );
    i++;
}

Triangle




The triangular wave creates a smooth and harmonious sound.

Download MP3.

To generate an array of values representing a triangle wave (in the range -1.0 to 1.0), you can use the following code, where n is the number of values needed to fill the array, a is the array, and p is the normalized position within the wave:

var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
    i++;
}

Here is an expanded version of the nested ternary expression used in the loop above:

if( p < 0.25 ) {
    a[i] = p * 4.0;
} else if( p < 0.75 ) {
    a[i] = 2.0 - ( p * 4.0 );
} else {
    a[i] = ( p * 4.0 ) - 4.0;
}



Wave amplitude and frequency


A sound wave has two important properties: amplitude and frequency. They determine the volume and the pitch of the sound, respectively. Amplitude is the absolute peak value of the wave, and frequency is the number of times the wave repeats per second. Frequency is usually measured in hertz (Hz).

The figure below shows a 200-ms snapshot of a sawtooth wave with an amplitude of 0.5 and a frequency of 20 Hz:



Here is an example of how the frequency of a wave directly affects pitch: a wave with a frequency of 440 Hz matches the pitch of the A above middle C (A4), the standard tuning note of a modern concert piano. Given that frequency, we can calculate the frequency of any other note using the following code:

 f = Math.pow( 2, n / 12 ) * 440.0; 

The variable n in this code is the number of semitones between A4 and the note of interest. For example, to find the frequency of A5, one octave above A4, we assign n the value 12, because A5 is 12 semitones above A4. To find the frequency of E4, we assign n the value -5, because E4 is 5 semitones below A4. You can also perform the reverse operation and find the note (relative to A4) for a given frequency:

 n = Math.round( 12.0 * Math.log( f / 440.0 ) * Math.LOG2E ); 

These calculations work because the frequencies of the notes are logarithmic — multiplying the frequency by two shifts the note up one octave, and dividing the frequency by two lowers the note one octave.
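
As a quick sanity check, running the two formulas above together confirms that they are inverses of each other (the note names in the comments follow from the semitone counts given earlier):

trace( Math.pow( 2, 12 / 12 ) * 440.0 );  // "880" - A5, one octave above A4
trace( Math.pow( 2, -5 / 12 ) * 440.0 );  // roughly "329.63" - E4, five semitones below A4

var n:int = Math.round( 12.0 * Math.log( 880.0 / 440.0 ) * Math.LOG2E );
trace( n );  // "12" - twelve semitones above A4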



Digital sound waves


In the digital world, sound waves must be stored as binary data, which is usually done by taking periodic snapshots of the state of the sound wave (called samples). The number of wave samples taken per second of sound is called the sample rate: a sound with a sample rate of 44100 contains 44100 wave samples (per channel) for every second of audio.
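
In other words, the total number of samples a sound needs is simply the sample rate multiplied by the duration in seconds:

trace( 44100 * 1.0 ); // "44100" - one second of audio
trace( 44100 * 0.5 ); // "22050" - half a second of audio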

The figure below shows how to sample a sound wave:



The white dots in the figure mark the points of the wave's amplitude that are sampled and stored in digital form. You can think of this like the resolution of a bitmap: the more pixels a bitmap has, the more visual information it can store, and more information means larger files (leaving compression aside). The same is true for digital sound: the more wave samples a sound file contains, the more accurately the sound wave can be recreated.

In addition to a sample rate, digital sounds also have a bit rate, measured in bits per second. The bit rate determines how many binary bits are used to store each wave sample; this is similar to the number of bits used to store the ARGB information of each pixel in a bitmap. For example, a sound with a sample rate of 44100 and a bit rate of 705600 stores each wave sample as a 16-bit value, which we can easily verify with the following code:

 bitsPerSample = bitRate / sampleRate; 

Here is a practical example using the values given above:

 trace( 705600 / 44100 ); // "16" 

The most important thing here is to understand what sound samples are: the engine we are about to create will generate raw sound samples and manipulate them directly.



Modulators


Before we start programming the sound engine, there is one more concept to cover: modulators, which are used extensively in analog and digital synthesizers. In essence, a modulator is a standard wave, but instead of producing sound, it is used to modulate one or more properties of a sound wave (that is, its amplitude or frequency).

Take vibrato, for example. Vibrato is a periodic, pulsating change in pitch. To create this effect with a modulator, you can set the modulator's waveform to a sine wave and set the modulator's frequency to, say, 8 Hz. If you then connect the modulator to the frequency of a sound wave, the result is a vibrato effect: the modulator will smoothly raise and lower the frequency (and therefore the pitch) of the sound eight times per second.
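
To make this concrete, here is a standalone sketch of the idea, independent of the engine classes we build below; all names and constant values here are illustrative, not part of the engine. Frequency modulation amounts to adding the modulator's output to the carrier's base frequency at every instant:

var baseFrequency:Number = 440.0;    // carrier frequency, Hz
var vibratoRate:Number   = 8.0;      // modulator frequency, Hz
var vibratoDepth:Number  = 10.0;     // peak deviation, Hz
var time:Number          = 0.03125;  // a quarter of the modulator's cycle, seconds

// the sine modulator's value at this instant
var modulation:Number = Math.sin( 2.0 * Math.PI * vibratoRate * time );

// the effective (modulated) frequency of the sound at this instant
var frequency:Number = baseFrequency + modulation * vibratoDepth;
trace( frequency ); // "450" - the pitch is currently pushed up by the modulator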

The engine we create will allow you to attach modulators to sounds to provide a wide range of different effects.



Demo audio engine


In this part we will write all the core code needed for a fully working audio engine. Here is a simple demonstration of the audio engine (Flash): demo.

Only one sound is played in this demonstration, but its frequency is randomized; a modulator is also attached to the sound to create a vibrato effect (by modulating the sound's amplitude), and the modulator's frequency is randomized as well.



AudioWaveform class


The first class we create simply stores constant values for the waveforms the engine will use to generate sounds.

Start by creating a new package called noise, then add the following class to that package:

package noise {
    public final class AudioWaveform {
        static public const PULSE:int = 0;
        static public const SAWTOOTH:int = 1;
        static public const SINE:int = 2;
        static public const TRIANGLE:int = 3;
    }
}

We will also add a static public method, isValid(), that can be used to validate a waveform value. The method returns true or false depending on whether the value is a valid waveform constant.

static public function isValid( waveform:int ):Boolean {
    if( waveform == PULSE ) return true;
    if( waveform == SAWTOOTH ) return true;
    if( waveform == SINE ) return true;
    if( waveform == TRIANGLE ) return true;
    return false;
}

Finally, we should protect the class from being instantiated, because there is no reason to create instances of it. This can be done inside the class constructor:

public function AudioWaveform() {
    throw new Error( "AudioWaveform class cannot be instantiated" );
}

That completes the class.
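
A quick usage sketch (the second value is an arbitrary invalid example):

trace( AudioWaveform.isValid( AudioWaveform.SINE ) ); // "true"
trace( AudioWaveform.isValid( 7 ) );                  // "false"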

Protecting enum-style classes, fully static classes, and singletons from direct instantiation is good practice, because instances of such classes should never be created. Some languages, such as Java, enforce this automatically for most of these class types, but in ActionScript 3.0 the behavior has to be enforced inside the class constructor.



Audio class


Next on the list is the Audio class. This class is similar in spirit to the native ActionScript 3.0 Sound class: each sound in the audio engine will be represented by an Audio instance.

Add the following class skeleton to the noise package:

package noise {
    public class Audio {
        public function Audio() {}
    }
}

The first things to add to the class are the properties that tell the audio engine how to generate the sound wave when the sound is played: the type of wave the sound uses, the wave's frequency and amplitude, the sound's duration, and its release time. All of these properties are private and accessed through getters/setters:

private var m_waveform:int = AudioWaveform.PULSE;
private var m_frequency:Number = 100.0;
private var m_amplitude:Number = 0.5;
private var m_duration:Number = 0.2;
private var m_release:Number = 0.2;

As you can see, each property has a reasonable default value. amplitude is a value in the range 0.0 to 1.0, frequency is in Hz, and duration and release are in seconds.

We also need two more private properties, for the modulators attached to the sound; these are accessed through getters/setters as well:

private var m_frequencyModulator:AudioModulator = null;
private var m_amplitudeModulator:AudioModulator = null;

Finally, the Audio class contains a few internal properties that only the AudioEngine class (which we will write shortly) will access. These properties do not need to be hidden behind getters/setters:

internal var position:Number = 0.0;
internal var playing:Boolean = false;
internal var releasing:Boolean = false;
internal var samples:Vector.<Number> = null;

position is measured in seconds and lets the AudioEngine class keep track of the sound's position while it plays; this is needed to calculate the wave samples for the sound. The playing and releasing properties tell the AudioEngine what state the sound is in, and the samples property is a reference to the cached wave samples the sound uses. How these properties are used will become clear when we write the AudioEngine class.

To complete the Audio class, add getters / setters:

Audio.waveform

public final function get waveform():int {
    return m_waveform;
}
public final function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}

Audio.frequency

[Inline]
public final function get frequency():Number {
    return m_frequency;
}
public final function set frequency( value:Number ):void {
    // clamp frequency to the range 1.0 - 14080.0
    m_frequency = value < 1.0 ? 1.0 : value > 14080.0 ? 14080.0 : value;
}

Audio.amplitude

[Inline]
public final function get amplitude():Number {
    return m_amplitude;
}
public final function set amplitude( value:Number ):void {
    // clamp amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

Audio.duration

[Inline]
public final function get duration():Number {
    return m_duration;
}
public final function set duration( value:Number ):void {
    // clamp duration to the range 0.0 - 60.0
    m_duration = value < 0.0 ? 0.0 : value > 60.0 ? 60.0 : value;
}

Audio.release

[Inline]
public final function get release():Number {
    return m_release;
}
public function set release( value:Number ):void {
    // clamp release to the range 0.0 - 10.0
    m_release = value < 0.0 ? 0.0 : value > 10.0 ? 10.0 : value;
}

Audio.frequencyModulator

[Inline]
public final function get frequencyModulator():AudioModulator {
    return m_frequencyModulator;
}
public final function set frequencyModulator( value:AudioModulator ):void {
    m_frequencyModulator = value;
}

Audio.amplitudeModulator

[Inline]
public final function get amplitudeModulator():AudioModulator {
    return m_amplitudeModulator;
}
public final function set amplitudeModulator( value:AudioModulator ):void {
    m_amplitudeModulator = value;
}

You have probably noticed the [Inline] metadata tag attached to some of the getter functions. This tag is a feature of the ActionScript 3.0 compiler, and it does exactly what its name implies: it inlines (expands) the contents of the function at the call site. Used wisely, this feature is incredibly useful for optimization, and generating dynamic audio at run time is exactly the kind of task that demands optimization.



AudioModulator class


The purpose of the AudioModulator class is to allow the amplitude and frequency of Audio instances to be modulated, creating various useful effects. Modulators are much like Audio instances: they have a waveform, an amplitude, and a frequency, but they produce no audible sound themselves; they only modify other sounds.

Let's start from the beginning by creating the following class skeleton in the noise package:

package noise {
    public class AudioModulator {
        public function AudioModulator() {}
    }
}

Now add private properties:

private var m_waveform:int = AudioWaveform.SINE;
private var m_frequency:Number = 4.0;
private var m_amplitude:Number = 1.0;
private var m_shift:Number = 0.0;
private var m_samples:Vector.<Number> = null;

If you think that this is very similar to the Audio class, then you are not mistaken: everything is the same here, except for the shift property.

To understand what the shift property does, picture one of the basic waveforms the engine uses (pulse, sawtooth, sine, or triangle), and imagine a vertical line through the wave at any position. The horizontal position of that vertical line is the shift value: a number in the range 0.0 to 1.0 that tells the modulator where to start reading its wave, which in turn directly affects how the modulator modifies the sound's amplitude or frequency.

For example, if a modulator uses a sine wave to modulate the frequency of a sound, and shift is 0.0, the sound's frequency will first rise and then fall, following the curve of the sine wave. If shift is set to 0.5, however, the frequency will first fall and then rise.

Back to the code. AudioModulator contains one internal method that is used only by AudioEngine:

[Inline]
internal final function process( time:Number ):Number {
    var p:int = 0;
    var s:Number = 0.0;

    if( m_shift != 0.0 ) {
        time += ( 1.0 / m_frequency ) * m_shift;
    }

    p = ( 44100 * m_frequency * time ) % 44100;
    s = m_samples[p];

    return s * m_amplitude;
}

This function is inlined because it is used frequently; by "frequently" I mean 44,100 times per second for every playing sound that has the modulator attached (this is where inlining proves incredibly useful). The function simply fetches a sound sample from the waveform the modulator uses, scales the sample by the modulator's amplitude, and returns the result.

To complete the AudioModulator class, add getters / setters:

AudioModulator.waveform

public function get waveform():int {
    return m_waveform;
}
public function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    m_samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: m_samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     m_samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: m_samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}

AudioModulator.frequency

public function get frequency():Number {
    return m_frequency;
}
public function set frequency( value:Number ):void {
    // clamp frequency to the range 0.01 - 100.0
    m_frequency = value < 0.01 ? 0.01 : value > 100.0 ? 100.0 : value;
}

AudioModulator.amplitude

public function get amplitude():Number {
    return m_amplitude;
}
public function set amplitude( value:Number ):void {
    // clamp amplitude to the range 0.0 - 8000.0
    m_amplitude = value < 0.0 ? 0.0 : value > 8000.0 ? 8000.0 : value;
}

AudioModulator.shift

public function get shift():Number {
    return m_shift;
}
public function set shift( value:Number ):void {
    // clamp shift to the range 0.0 - 1.0
    m_shift = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

And with that, the AudioModulator class is complete.
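
Here is a hedged sketch of how an Audio instance and a modulator fit together; all parameter values are arbitrary examples, and actually playing the sound is handled by the AudioEngine class we write next:

var vibrato:AudioModulator = new AudioModulator();
vibrato.waveform  = AudioWaveform.SINE;
vibrato.frequency = 8.0;   // modulate eight times per second
vibrato.amplitude = 20.0;  // add up to +/- 20 Hz to the sound's frequency

var sound:Audio = new Audio();
sound.waveform  = AudioWaveform.PULSE;
sound.frequency = 440.0;
sound.frequencyModulator = vibrato;  // attach the vibrato effect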



AudioEngine class


And now for the serious part: the AudioEngine class. This is a fully static class that manages almost everything related to Audio instances and sound generation.

Let's start, as usual, with a class skeleton in the noise package:

package noise {
    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.utils.ByteArray;

    public final class AudioEngine {
        public function AudioEngine() {
            throw new Error( "AudioEngine class cannot be instantiated" );
        }
    }
}

As stated above, no instances of a fully static class should ever be created, so the constructor throws an exception if someone tries. The class is also final, because there is no reason to extend a fully static class.

The first things we add to this class are internal constants. These constants will cache the samples of each of the four waveforms used by the audio engine. Each cache holds 44,100 samples, which corresponds to exactly one full wave cycle at a frequency of 1 Hz. This allows the audio engine to produce very clean low-frequency sound waves.

The following constants are used:

static internal const PULSE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SAWTOOTH:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SINE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const TRIANGLE:Vector.<Number> = new Vector.<Number>( 44100 );

The class also uses two private constants:

static private const BUFFER_SIZE:int = 2048;
static private const SAMPLE_TIME:Number = 1.0 / 44100.0;

BUFFER_SIZE is the number of sound samples we supply each time the ActionScript 3.0 sound API requests more samples. This is the smallest number of samples allowed, and it yields the lowest possible sound latency. The sample count could be increased to reduce CPU load, but that would increase the latency of the sound. SAMPLE_TIME is the duration of a single sound sample, in seconds.
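
To make the trade-off concrete, the latency a buffer introduces is simply its size divided by the sample rate:

trace( 2048 / 44100 );  // roughly "0.046" - about 46 ms of latency per buffer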

And now private variables:

static private var m_position:Number = 0.0;
static private var m_amplitude:Number = 0.5;
static private var m_soundStream:Sound = null;
static private var m_soundChannel:SoundChannel = null;
static private var m_audioList:Vector.<Audio> = new Vector.<Audio>();
static private var m_sampleList:Vector.<Number> = new Vector.<Number>( BUFFER_SIZE );


Now we need to initialize the class. There are many ways to do this, but I prefer a simple and straightforward static class constructor:

static private function $AudioEngine():void {
    var i:int = 0;
    var n:int = 44100;
    var p:Number = 0.0;

    // generate and cache the samples for each of the four waveforms
    while( i < n ) {
        p = i / n;
        SINE[i]     = Math.sin( Math.PI * 2.0 * p );
        PULSE[i]    = p < 0.5 ? 1.0 : -1.0;
        SAWTOOTH[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
        TRIANGLE[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
        i++;
    }

    // create and start the sound stream
    m_soundStream = new Sound();
    m_soundStream.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleData );
    m_soundChannel = m_soundStream.play();
}

$AudioEngine();

Here is what happens in this code: the samples for each of the four waveforms are generated and cached, and this happens only once. The code also creates the sound stream instance, which starts playing immediately and keeps running until the application terminates.

The AudioEngine class has three public methods, used to play and stop Audio instances:

AudioEngine.play()

static public function play( audio:Audio ):void {
    if( audio.playing == false ) {
        m_audioList.push( audio );
    }
    // delay the sound so it starts in sync with what the listener hears:
    // a negative position compensates for samples that are already buffered
    audio.position = m_position - ( m_soundChannel.position * 0.001 );
    audio.playing = true;
    audio.releasing = false;
}

AudioEngine.stop()

static public function stop( audio:Audio, allowRelease:Boolean = true ):void {
    if( audio.playing == false ) {
        // the audio isn't playing, there is nothing to stop
        return;
    }
    if( allowRelease ) {
        // skip to the end of the audio and let it release (fade out)
        audio.position = audio.duration;
        audio.releasing = true;
        return;
    }
    audio.playing = false;
    audio.releasing = false;
}

AudioEngine.stopAll()

static public function stopAll( allowRelease:Boolean = true ):void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var o:Audio = null;

    if( allowRelease ) {
        // skip each sound to its end and let it release (fade out)
        while( i < n ) {
            o = m_audioList[i];
            o.position = o.duration;
            o.releasing = true;
            i++;
        }
        return;
    }
    // stop every sound immediately
    while( i < n ) {
        o = m_audioList[i];
        o.playing = false;
        o.releasing = false;
        i++;
    }
}
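
A brief usage sketch of the public API so far (the property values are arbitrary examples):

var blip:Audio = new Audio();
blip.waveform = AudioWaveform.SINE;
blip.frequency = 880.0;
blip.duration = 0.1;

AudioEngine.play( blip );      // start the sound
// ...later...
AudioEngine.stop( blip );      // let it fade out over its release time
AudioEngine.stopAll( false );  // or cut every playing sound immediately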

Next come the private methods that do the heavy lifting:

AudioEngine.onSampleData()

static private function onSampleData( event:SampleDataEvent ):void {
    var i:int = 0;
    var n:int = BUFFER_SIZE;
    var s:Number = 0.0;
    var b:ByteArray = event.data;

    // the sound channel isn't available yet, so write silence
    if( m_soundChannel == null ) {
        while( i < n ) {
            b.writeFloat( 0.0 );
            b.writeFloat( 0.0 );
            i++;
        }
        return;
    }

    // generate the samples for the playing sounds
    generateSamples();

    // write the samples to the stream, once per stereo channel
    while( i < n ) {
        s = m_sampleList[i] * m_amplitude;
        b.writeFloat( s );
        b.writeFloat( s );
        m_sampleList[i] = 0.0;
        i++;
    }

    // update the engine's time position
    m_position = m_soundChannel.position * 0.001;
}

Regarding the first if statement: m_soundChannel can be null here because the SAMPLE_DATA event is dispatched as soon as m_soundStream.play() is invoked, that is, before play() has returned the SoundChannel instance.

The second while loop pushes the generated samples to m_soundStream through the ByteArray; each sample is written twice, once for each stereo channel. Here is the generateSamples() method that produces those samples:

AudioEngine.generateSamples()

static private function generateSamples():void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var j:int = 0;
    var k:int = BUFFER_SIZE;
    var p:int = 0;
    var f:Number = 0.0;
    var a:Number = 0.0;
    var s:Number = 0.0;
    var o:Audio = null;

    // loop through the playing audio instances
    while( i < n ) {
        o = m_audioList[i];

        if( o.playing == false ) {
            // the audio has stopped, remove it from the list
            m_audioList.splice( i, 1 );
            n--;
            continue;
        }

        j = 0;

        // generate the samples for this buffer
        while( j < k ) {
            if( o.position < 0.0 ) {
                // the audio hasn't started playing yet
                o.position += SAMPLE_TIME;
                j++;
                continue;
            }
            if( o.position >= o.duration ) {
                if( o.position >= o.duration + o.release ) {
                    // the audio has finished
                    o.playing = false;
                    j++;
                    continue;
                }
                // the audio is releasing (fading out)
                o.releasing = true;
            }

            // grab the audio's frequency and amplitude
            f = o.frequency;
            a = o.amplitude;

            if( o.frequencyModulator != null ) {
                // modulate the frequency
                f += o.frequencyModulator.process( o.position );
            }
            if( o.amplitudeModulator != null ) {
                // modulate the amplitude
                a += o.amplitudeModulator.process( o.position );
            }

            // calculate the position within the cached waveform
            p = ( 44100 * f * o.position ) % 44100;

            // grab the waveform sample
            s = o.samples[p];

            if( o.releasing ) {
                // fade the sample out during the release phase
                s *= 1.0 - ( ( o.position - o.duration ) / o.release );
            }

            // add the sample to the buffer
            m_sampleList[j] += s * a;

            // advance the audio's position
            o.position += SAMPLE_TIME;
            j++;
        }
        i++;
    }
}

Finally, to finish the class off, we add a public getter/setter for the global m_amplitude:

static public function get amplitude():Number {
    return m_amplitude;
}
static public function set amplitude( value:Number ):void {
    // clamp amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}




The core audio engine is now complete. Here is a demonstration of it in action (Flash): demo.

In the remainder of the article, we will extend the engine with sound processors, which post-process the generated samples to add effects such as delay.



AudioProcessor class


To add effects such as delay to the engine, we need a way to process the raw samples after they have been generated. Here is the base class for all such processors:

package noise {
    public class AudioProcessor {
        public var enabled:Boolean = true;

        public function AudioProcessor() {
            if( Object(this).constructor == AudioProcessor ) {
                throw new Error( "AudioProcessor class must be extended" );
            }
        }

        internal function process( samples:Vector.<Number> ):void {}
    }
}

As you can see, the class is very simple: subclasses override the empty process() method, which the AudioEngine calls whenever samples need to be processed, and the public enabled flag makes it possible to switch a processor on and off without removing it from the engine.
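
As a hedged illustration of the pattern, here is a minimal example processor that is not part of the original engine: it simply scales every sample by a fixed factor, i.e. a gain effect.

package noise {
    // Illustrative example only: a processor that scales each sample
    // by a fixed gain factor.
    public class AudioGain extends AudioProcessor {
        public var gain:Number = 0.5;

        internal override function process( samples:Vector.<Number> ):void {
            var i:int = 0;
            var n:int = samples.length;
            while( i < n ) {
                samples[i] *= gain;  // attenuate (or boost) the sample
                i++;
            }
        }
    }
}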



AudioDelay class


AudioDelay is the class that actually creates the delay (echo) effect. It extends AudioProcessor, and here is its skeleton:

package noise {
    public class AudioDelay extends AudioProcessor {
        public function AudioDelay( time:Number = 0.5 ) {
            this.time = time;
        }
    }
}

The time argument passed to the constructor is the time of the delay tap, that is, the interval (in seconds) between successive echoes.

Now add the private properties:

private var m_buffer:Vector.<Number> = new Vector.<Number>();
private var m_bufferSize:int = 0;
private var m_bufferIndex:int = 0;
private var m_time:Number = 0.0;
private var m_gain:Number = 0.8;

m_buffer is essentially a feedback loop: it holds all of the samples that were passed to process(), and those buffered samples are continually mixed (at the position held by m_bufferIndex) back into the new samples arriving in process().

m_bufferSize and m_bufferIndex keep track of the buffer's state. m_time is the time of the delay tap, in seconds. m_gain is a multiplier used to reduce the amplitude of the buffered samples over time, so that each echo is quieter than the previous one.

Here is the process() method, which overrides the process() method of the AudioProcessor class:

internal override function process( samples:Vector.<Number> ):void {
    var i:int = 0;
    var n:int = samples.length;
    var v:Number = 0.0;

    while( i < n ) {
        v = m_buffer[m_bufferIndex];  // grab a buffered sample
        v *= m_gain;                  // reduce its amplitude
        v += samples[i];              // mix in the new sample

        m_buffer[m_bufferIndex] = v;
        m_bufferIndex++;

        if( m_bufferIndex == m_bufferSize ) {
            m_bufferIndex = 0;
        }

        samples[i] = v;
        i++;
    }
}

Finally, add the getters/setters for m_time and m_gain:

public function get time():Number {
    return m_time;
}
public function set time( value:Number ):void {
    // clamp time to the range 0.0001 - 8.0
    value = value < 0.0001 ? 0.0001 : value > 8.0 ? 8.0 : value;
    // if the time hasn't changed, there is no need to rebuild the buffer
    if( m_time == value ) {
        return;
    }
    // update the time and resize the sample buffer
    m_time = value;
    m_bufferSize = Math.floor( 44100 * m_time );
    m_buffer.length = m_bufferSize;
}

public function get gain():Number {
    return m_gain;
}
public function set gain( value:Number ):void {
    // clamp gain to the range 0.0 - 1.0
    m_gain = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

And with that, the AudioDelay class is complete. One thing to note: because earlier samples remain in m_buffer and keep feeding back, the echoes of a sound continue even after the sound itself has stopped playing.



Updating the AudioEngine class


The last thing to do is update the AudioEngine class so that audio processors can be attached to it. First, add a private vector that holds the processor instances:

 static private var m_processorList:Vector.<AudioProcessor> = new Vector.<AudioProcessor>(); 

Then add two public methods to AudioEngine, for attaching and detaching processors:

AudioEngine.addProcessor()

static public function addProcessor( processor:AudioProcessor ):void {
    if( m_processorList.indexOf( processor ) == -1 ) {
        m_processorList.push( processor );
    }
}

AudioEngine.removeProcessor()

static public function removeProcessor( processor:AudioProcessor ):void {
    var i:int = m_processorList.indexOf( processor );
    if( i != -1 ) {
        m_processorList.splice( i, 1 );
    }
}

These methods simply add AudioProcessor instances to m_processorList and remove them from it.

We also need a private method that runs through the processor list and, for each enabled processor, calls its process() method:

static private function processSamples():void {
    var i:int = 0;
    var n:int = m_processorList.length;

    while( i < n ) {
        if( m_processorList[i].enabled ) {
            m_processorList[i].process( m_sampleList );
        }
        i++;
    }
}

Finally, the last piece of the puzzle: updating the onSampleData() method of AudioEngine so that it invokes processSamples():

if( m_soundChannel == null ) {
    while( i < n ) {
        b.writeFloat( 0.0 );
        b.writeFloat( 0.0 );
        i++;
    }
    return;
}

generateSamples();
processSamples();

while( i < n ) {
    s = m_sampleList[i] * m_amplitude;
    b.writeFloat( s );
    b.writeFloat( s );
    m_sampleList[i] = 0.0;
    i++;
}

The only change here is the call to processSamples(): the processors run after the raw samples have been generated and before the samples are written to the sound stream.
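
Putting everything together, a hedged end-to-end sketch (all parameter values are arbitrary examples) might look like this:

// attach a delay processor to the engine: quarter-second echoes
var delay:AudioDelay = new AudioDelay( 0.25 );
delay.gain = 0.6;
AudioEngine.addProcessor( delay );

// create a vibrato modulator
var vibrato:AudioModulator = new AudioModulator();
vibrato.waveform  = AudioWaveform.SINE;
vibrato.frequency = 8.0;
vibrato.amplitude = 10.0;

// create and play a sound that uses the modulator
var laser:Audio = new Audio();
laser.waveform = AudioWaveform.SAWTOOTH;
laser.frequency = 600.0;
laser.duration = 0.15;
laser.release = 0.1;
laser.frequencyModulator = vibrato;
AudioEngine.play( laser );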



Conclusion


That's all: the audio engine is complete. All of its sounds are generated at run time, with no external dependencies such as MP3 or WAV files, and the resulting library can be dropped into any game that needs retro-style sound effects.

Keep in mind that generating samples on the fly is CPU-intensive, so keep an eye on performance; if it ever becomes a problem, the engine's work can be moved off the main thread (for example, into an ActionScript 3.0 worker).

Thank you for reading!

Source: https://habr.com/ru/post/338544/

