In each of the following code snippets, n is the number of values needed to fill the array, a is the array itself, and p is the normalized position within the waveform.

Pulse:

```
var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.5 ? 1.0 : -1.0;
    i++;
}
```
Sawtooth:

```
var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
    i++;
}
```
Sine:

```
var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = Math.sin( p * 2.0 * Math.PI );
    i++;
}
```
Triangle:

```
var i:int = 0;
var n:int = 100;
var p:Number;

while( i < n ) {
    p = i / n;
    a[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
    i++;
}
```

The chained ternary operations in the triangle snippet may be easier to understand in this expanded form:

```
if( p < 0.25 ) {
    a[i] = p * 4.0;
} else if( p < 0.75 ) {
    a[i] = 2.0 - ( p * 4.0 );
} else {
    a[i] = ( p * 4.0 ) - 4.0;
}
```
The frequency, in hertz, of any musical note can be calculated with the following expression:

```
f = Math.pow( 2, n / 12 ) * 440.0;
```

Here n is the number of notes (semitones) from A4 to the note of interest. For example, to find the frequency of A5, one octave above A4, you would set n to 12, because A5 is 12 notes above A4. To find the frequency of E4, you would set n to -5, because E4 is five notes below A4. You can also go the other way and find the note (relative to A4) for a given frequency:

```
n = Math.round( 12.0 * Math.log( f / 440.0 ) * Math.LOG2E );
```

This works because Math.log() returns a natural logarithm; multiplying by Math.LOG2E converts it into a base-2 logarithm.
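As a quick sanity check (this snippet is mine, not from the original article), plugging the examples above into those formulas gives the expected values:

```
trace( Math.pow( 2, 12 / 12 ) * 440.0 );  // 880 (A5)
trace( Math.pow( 2, -5 / 12 ) * 440.0 );  // ~329.63 (E4)
trace( Math.round( 12.0 * Math.log( 880.0 / 440.0 ) * Math.LOG2E ) ); // 12
```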
The number of bits per sample can be derived by dividing the bit rate by the sample rate:

```
bitsPerSample = bitRate / sampleRate;
```

For example, an uncompressed 705600 bit/s stream sampled at 44100 Hz carries 16 bits per sample:

```
trace( 705600 / 44100 ); // "16"
```
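The relationship also runs in the other direction (my example, not from the article): multiplying the sample rate by the bits per sample gives the bit rate back:

```
trace( 44100 * 16 ); // "705600"
```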
Start by creating a package called noise, and then add the following class to that package:

```
package noise {
    public final class AudioWaveform {
        static public const PULSE:int = 0;
        static public const SAWTOOTH:int = 1;
        static public const SINE:int = 2;
        static public const TRIANGLE:int = 3;
    }
}
```
We will also add a static public method to the class that can be used to validate a waveform value. The method returns true or false depending on whether the value is a valid waveform constant:

```
static public function isValid( waveform:int ):Boolean {
    if( waveform == PULSE )    return true;
    if( waveform == SAWTOOTH ) return true;
    if( waveform == SINE )     return true;
    if( waveform == TRIANGLE ) return true;
    return false;
}
```
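A quick hypothetical check of the validator (not from the original article):

```
trace( AudioWaveform.isValid( AudioWaveform.SINE ) ); // true
trace( AudioWaveform.isValid( 1234 ) );               // false
```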
The class is all-static and is never meant to be instantiated, so its constructor simply throws an error:

```
public function AudioWaveform() {
    throw new Error( "AudioWaveform class cannot be instantiated" );
}
```
Next up is the Audio class. It is similar in nature to the native ActionScript 3.0 Sound class: every sound produced by the audio engine will be represented by an Audio instance. Add the following skeleton class to the noise package:

```
package noise {
    public class Audio {
        public function Audio() {}
    }
}
```
The class begins with the following private properties:

```
private var m_waveform:int = AudioWaveform.PULSE;
private var m_frequency:Number = 100.0;
private var m_amplitude:Number = 0.5;
private var m_duration:Number = 0.2;
private var m_release:Number = 0.2;
```
The amplitude is a value in the range 0.0 to 1.0, the frequency is in hertz, and the duration and release times are in seconds. The class also holds two optional modulators:

```
private var m_frequencyModulator:AudioModulator = null;
private var m_amplitudeModulator:AudioModulator = null;
```
The Audio class should also contain a few internal properties that only the AudioEngine class (which we will write shortly) has access to. These properties do not need to be hidden behind getters/setters:

```
internal var position:Number = 0.0;
internal var playing:Boolean = false;
internal var releasing:Boolean = false;
internal var samples:Vector.<Number> = null;
```
The position is in seconds; it lets the AudioEngine class keep track of the sound's position while the sound is playing, which is needed to calculate the waveform samples for the sound. The playing and releasing properties tell the AudioEngine what state the sound is in, and the samples property is a reference to the cached waveform samples the sound is using. How these properties are used will become clear when we write the AudioEngine class.

To finish the Audio class, add the getters and setters:

Audio.waveform
```
public final function get waveform():int {
    return m_waveform;
}

public final function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}
```
Audio.frequency

```
[Inline]
public final function get frequency():Number {
    return m_frequency;
}

public final function set frequency( value:Number ):void {
    // clamp the frequency to the range 1.0 - 14080.0
    m_frequency = value < 1.0 ? 1.0 : value > 14080.0 ? 14080.0 : value;
}
```
Audio.amplitude

```
[Inline]
public final function get amplitude():Number {
    return m_amplitude;
}

public final function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}
```
Audio.duration

```
[Inline]
public final function get duration():Number {
    return m_duration;
}

public final function set duration( value:Number ):void {
    // clamp the duration to the range 0.0 - 60.0
    m_duration = value < 0.0 ? 0.0 : value > 60.0 ? 60.0 : value;
}
```
Audio.release

```
[Inline]
public final function get release():Number {
    return m_release;
}

public final function set release( value:Number ):void {
    // clamp the release time to the range 0.0 - 10.0
    m_release = value < 0.0 ? 0.0 : value > 10.0 ? 10.0 : value;
}
```
Audio.frequencyModulator

```
[Inline]
public final function get frequencyModulator():AudioModulator {
    return m_frequencyModulator;
}

public final function set frequencyModulator( value:AudioModulator ):void {
    m_frequencyModulator = value;
}
```
Audio.amplitudeModulator

```
[Inline]
public final function get amplitudeModulator():AudioModulator {
    return m_amplitudeModulator;
}

public final function set amplitudeModulator( value:AudioModulator ):void {
    m_amplitudeModulator = value;
}
```
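As a hypothetical aside (not from the article), the clamping in these setters means out-of-range values are silently corrected rather than rejected:

```
var a:Audio = new Audio();
a.amplitude = 1.5;
trace( a.amplitude ); // 1
a.frequency = 0.5;
trace( a.frequency ); // 1
```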
You may have noticed the [Inline] metadata tag attached to some of these functions. This metadata tag is a feature of the ActionScript 3.0 compiler, and it does exactly what its name suggests: it inlines (expands) the contents of the function at the call site. Used wisely, this feature is incredibly useful for optimization, and generating dynamic audio at runtime is exactly the kind of task that demands optimization.

The purpose of the AudioModulator class is to allow the amplitude and frequency of Audio instances to be modulated, to create various useful effects. Modulators are much like Audio instances: they have a waveform, an amplitude, and a frequency, but they produce no audible sound themselves; they only modify other sounds. Create the following skeleton class in the noise package:

```
package noise {
    public class AudioModulator {
        public function AudioModulator() {}
    }
}
```
Now add the private properties:

```
private var m_waveform:int = AudioWaveform.SINE;
private var m_frequency:Number = 4.0;
private var m_amplitude:Number = 1.0;
private var m_shift:Number = 0.0;
private var m_samples:Vector.<Number> = null;
```
If you are thinking this looks a lot like the Audio class, you are right: everything is the same except for the shift property.

To understand what the shift property does, picture one of the basic waveforms the audio engine uses (pulse, sawtooth, sine, or triangle), and then imagine a vertical line running through the wave at any position. The horizontal position of that vertical line is the shift value: a number in the range 0.0 to 1.0 that tells the modulator where to begin reading its waveform, and it therefore has a direct influence on the modifications the modulator makes to a sound's amplitude or frequency.

For example, if a modulator uses a sine wave to modulate a sound's frequency, and shift is 0.0, the sound's frequency will first rise and then fall, following the curvature of the sine wave. If shift is set to 0.5, however, the sound's frequency will first fall and then rise.

The AudioModulator class contains one internal method, used only by the AudioEngine:

```
[Inline]
internal final function process( time:Number ):Number {
    var p:int = 0;
    var s:Number = 0.0;

    if( m_shift != 0.0 ) {
        // offset the time by a fraction of the modulator's wavelength
        time += ( 1.0 / m_frequency ) * m_shift;
    }
    // map the time to an index within the cached waveform
    p = ( 44100 * m_frequency * time ) % 44100;
    // read the sample and scale it by the modulator's amplitude
    s = m_samples[p];
    return s * m_amplitude;
}
```
To finish the AudioModulator class, add the getters and setters:

AudioModulator.waveform

```
public function get waveform():int {
    return m_waveform;
}

public function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    m_samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: m_samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     m_samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: m_samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}
```
AudioModulator.frequency

```
public function get frequency():Number {
    return m_frequency;
}

public function set frequency( value:Number ):void {
    // clamp the frequency to the range 0.01 - 100.0
    m_frequency = value < 0.01 ? 0.01 : value > 100.0 ? 100.0 : value;
}
```
AudioModulator.amplitude

```
public function get amplitude():Number {
    return m_amplitude;
}

public function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 8000.0
    m_amplitude = value < 0.0 ? 0.0 : value > 8000.0 ? 8000.0 : value;
}
```
AudioModulator.shift

```
public function get shift():Number {
    return m_shift;
}

public function set shift( value:Number ):void {
    // clamp the shift to the range 0.0 - 1.0
    m_shift = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}
```
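As a quick sketch of how shift is meant to be used (my example, not from the article, and it assumes the finished engine described below):

```
// a hypothetical vibrato: the carrier pitch dips first because shift is 0.5
var vibrato:AudioModulator = new AudioModulator();
vibrato.waveform  = AudioWaveform.SINE;
vibrato.frequency = 4.0;   // four cycles per second
vibrato.amplitude = 20.0;  // sways the carrier frequency by +/- 20 Hz
vibrato.shift     = 0.5;   // start reading the sine wave halfway through

// later, attached to a sound:
// audio.frequencyModulator = vibrato;
```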
With that, the AudioModulator class is complete.

Now for the heart of the engine: the AudioEngine class. This is an all-static class; it manages nearly everything related to Audio instances and sound generation. Add the following skeleton to the noise package:

```
package noise {
    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.utils.ByteArray;

    public final class AudioEngine {
        public function AudioEngine() {
            throw new Error( "AudioEngine class cannot be instantiated" );
        }
    }
}
```
The class is final because there is no reason to extend an all-static class. Begin by adding the following internal constants; they will hold the cached samples for each of the four waveforms:

```
static internal const PULSE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SAWTOOTH:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SINE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const TRIANGLE:Vector.<Number> = new Vector.<Number>( 44100 );
```
Two more private constants follow:

```
static private const BUFFER_SIZE:int = 2048;
static private const SAMPLE_TIME:Number = 1.0 / 44100.0;
```

BUFFER_SIZE is the number of sound samples handed over whenever the ActionScript 3.0 sound API requests samples. This is the smallest allowable number of samples, and it provides the lowest possible sound latency. The number of samples could be increased to reduce CPU load, but that would increase the latency. SAMPLE_TIME is the duration of a single sound sample, in seconds.
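For a sense of scale (my arithmetic, not from the article), at 44100 Hz a 2048-sample buffer holds roughly 46 milliseconds of audio:

```
trace( 2048 / 44100 ); // ~0.0464, i.e. about 46 ms of latency
```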
And now the private variables:

```
static private var m_position:Number = 0.0;
static private var m_amplitude:Number = 0.5;
static private var m_soundStream:Sound = null;
static private var m_soundChannel:SoundChannel = null;
static private var m_audioList:Vector.<Audio> = new Vector.<Audio>();
static private var m_sampleList:Vector.<Number> = new Vector.<Number>( BUFFER_SIZE );
```
m_position is used to track the audio stream time, in seconds. m_amplitude is a secondary, global amplitude applied to every playing Audio instance. m_soundStream and m_soundChannel should need no explanation. m_audioList holds references to all of the Audio instances that are playing. m_sampleList is a temporary buffer used to store the sound samples when the ActionScript 3.0 sound API requests them.

Next comes the static class initializer, which fills the waveform caches and starts the sound stream:

```
static private function $AudioEngine():void {
    var i:int = 0;
    var n:int = 44100;
    var p:Number = 0.0;
    // cache one cycle of each waveform
    while( i < n ) {
        p = i / n;
        SINE[i]     = Math.sin( Math.PI * 2.0 * p );
        PULSE[i]    = p < 0.5 ? 1.0 : -1.0;
        SAWTOOTH[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
        TRIANGLE[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
        i++;
    }
    // create and start the sound stream
    m_soundStream = new Sound();
    m_soundStream.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleData );
    m_soundChannel = m_soundStream.play();
}

$AudioEngine();
```
The AudioEngine class has three public methods, used to play and stop Audio instances:

AudioEngine.play()

```
static public function play( audio:Audio ):void {
    if( audio.playing == false ) {
        m_audioList.push( audio );
    }
    // offset the position by the stream latency so playback starts in sync
    audio.position = m_position - ( m_soundChannel.position * 0.001 );
    audio.playing = true;
    audio.releasing = false;
}
```
AudioEngine.stop()

```
static public function stop( audio:Audio, allowRelease:Boolean = true ):void {
    if( audio.playing == false ) {
        // the audio isn't playing; nothing to stop
        return;
    }
    if( allowRelease ) {
        // jump to the end of the sound and let it release (fade out)
        audio.position = audio.duration;
        audio.releasing = true;
        return;
    }
    audio.playing = false;
    audio.releasing = false;
}
```
AudioEngine.stopAll()

```
static public function stopAll( allowRelease:Boolean = true ):void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var o:Audio = null;

    if( allowRelease ) {
        // let every sound release (fade out)
        while( i < n ) {
            o = m_audioList[i];
            o.position = o.duration;
            o.releasing = true;
            i++;
        }
        return;
    }
    // stop every sound immediately
    while( i < n ) {
        o = m_audioList[i];
        o.playing = false;
        o.releasing = false;
        i++;
    }
}
```
Next come the private methods that do the actual work:

AudioEngine.onSampleData()

```
static private function onSampleData( event:SampleDataEvent ):void {
    var i:int = 0;
    var n:int = BUFFER_SIZE;
    var s:Number = 0.0;
    var b:ByteArray = event.data;

    if( m_soundChannel == null ) {
        // the sound channel isn't ready yet; write silence
        while( i < n ) {
            b.writeFloat( 0.0 );
            b.writeFloat( 0.0 );
            i++;
        }
        return;
    }
    // generate the samples
    generateSamples();
    // write the samples to the stream, once per channel
    while( i < n ) {
        s = m_sampleList[i] * m_amplitude;
        b.writeFloat( s );
        b.writeFloat( s );
        m_sampleList[i] = 0.0;
        i++;
    }
    // update the stream time
    m_position = m_soundChannel.position * 0.001;
}
```
A couple of notes on that method. The first if statement checks whether m_soundChannel is null; that check is necessary because the SAMPLE_DATA event is dispatched as soon as m_soundStream.play() is invoked, before the play() method has had a chance to return the SoundChannel instance, so the first while loop simply writes silence in that case. The second while loop takes the samples produced by generateSamples(), scales them by the global amplitude, and writes each one twice to the ByteArray provided by m_soundStream, once for the left channel and once for the right.

AudioEngine.generateSamples()

```
static private function generateSamples():void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var j:int = 0;
    var k:int = BUFFER_SIZE;
    var p:int = 0;
    var f:Number = 0.0;
    var a:Number = 0.0;
    var s:Number = 0.0;
    var o:Audio = null;
    // run through the list of playing audio instances
    while( i < n ) {
        o = m_audioList[i];
        if( o.playing == false ) {
            // the audio instance has stopped; remove it from the list
            m_audioList.splice( i, 1 );
            n--;
            continue;
        }
        j = 0;
        // generate and buffer the samples for this audio instance
        while( j < k ) {
            if( o.position < 0.0 ) {
                // the audio hasn't started yet
                o.position += SAMPLE_TIME;
                j++;
                continue;
            }
            if( o.position >= o.duration ) {
                if( o.position >= o.duration + o.release ) {
                    // the audio has finished playing
                    o.playing = false;
                    j++;
                    continue;
                }
                // the audio is releasing (fading out)
                o.releasing = true;
            }
            // grab the audio frequency and amplitude
            f = o.frequency;
            a = o.amplitude;
            if( o.frequencyModulator != null ) {
                // modulate the frequency
                f += o.frequencyModulator.process( o.position );
            }
            if( o.amplitudeModulator != null ) {
                // modulate the amplitude
                a += o.amplitudeModulator.process( o.position );
            }
            // map the position to an index within the cached waveform
            p = ( 44100 * f * o.position ) % 44100;
            // grab the waveform sample
            s = o.samples[p];
            if( o.releasing ) {
                // fade the sample out
                s *= 1.0 - ( ( o.position - o.duration ) / o.release );
            }
            // add the sample to the buffer
            m_sampleList[j] += s * a;
            // advance the audio position
            o.position += SAMPLE_TIME;
            j++;
        }
        i++;
    }
}
```
Finally, to wrap up the core of the AudioEngine class, add a public getter/setter for the global m_amplitude:

```
static public function get amplitude():Number {
    return m_amplitude;
}

static public function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}
```
To round things off, let's add support for audio processors: objects that can modify the generated samples before they reach the sound stream, which is how effects such as delay can be implemented. Add the following AudioProcessor base class to the noise package:

```
package noise {
    public class AudioProcessor {
        // whether the processor is active
        public var enabled:Boolean = true;

        public function AudioProcessor() {
            if( Object(this).constructor == AudioProcessor ) {
                throw new Error( "AudioProcessor class must be extended" );
            }
        }

        // invoked by the AudioEngine; overridden by subclasses
        internal function process( samples:Vector.<Number> ):void {}
    }
}
```
The process() method is called by the AudioEngine whenever samples need to be processed, as long as the processor's enabled property is set to true.
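To illustrate the idea (a hypothetical example of mine, not part of the original engine), here is a minimal processor that hard-clips every sample to a threshold:

```
package noise {
    public class AudioClipper extends AudioProcessor {
        public var threshold:Number = 0.8;

        internal override function process( samples:Vector.<Number> ):void {
            var i:int = 0;
            var n:int = samples.length;
            while( i < n ) {
                // clamp each sample to the range -threshold .. threshold
                if( samples[i] > threshold ) samples[i] = threshold;
                else if( samples[i] < -threshold ) samples[i] = -threshold;
                i++;
            }
        }
    }
}
```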
The AudioDelay class is the processor we will actually build; it creates a delay (echo) effect. It extends AudioProcessor, as all processors must. Add the following class to the noise package:

```
package noise {
    public class AudioDelay extends AudioProcessor {
        public function AudioDelay( time:Number = 0.5 ) {
            this.time = time;
        }
    }
}
```
The time argument passed to the constructor is the time, in seconds, of the delay tap: the amount of time between each echo. Now add the private properties:

```
private var m_buffer:Vector.<Number> = new Vector.<Number>();
private var m_bufferSize:int = 0;
private var m_bufferIndex:int = 0;
private var m_time:Number = 0.0;
private var m_gain:Number = 0.8;
```
The m_buffer vector is essentially a feedback loop: it holds all of the samples passed to the process() method, and those samples are continuously modified (in this case, reduced in amplitude) as the position stored in m_bufferIndex passes through the buffer. This will become clear when we get to the process() method. m_bufferSize and m_bufferIndex keep track of the buffer's size and position respectively. m_time is the time of the delay tap, in seconds. m_gain is a multiplier used to reduce the amplitude of the buffered samples over time.

The class has a single method, and it overrides the process() method in the AudioProcessor base class:

```
internal override function process( samples:Vector.<Number> ):void {
    var i:int = 0;
    var n:int = samples.length;
    var v:Number = 0.0;

    while( i < n ) {
        // grab the buffered (delayed) sample
        v = m_buffer[m_bufferIndex];
        // reduce its amplitude
        v *= m_gain;
        // add the fresh sample
        v += samples[i];
        // write the mix back into the buffer
        m_buffer[m_bufferIndex] = v;
        m_bufferIndex++;
        // wrap the buffer index
        if( m_bufferIndex == m_bufferSize ) {
            m_bufferIndex = 0;
        }
        // write the processed sample to the output
        samples[i] = v;
        i++;
    }
}
```
Finally, add the getters/setters for m_time and m_gain:

```
public function get time():Number {
    return m_time;
}

public function set time( value:Number ):void {
    // clamp the time to the range 0.0001 - 8.0
    value = value < 0.0001 ? 0.0001 : value > 8.0 ? 8.0 : value;
    // no need to modify the buffer if the time hasn't changed
    if( m_time == value ) {
        return;
    }
    // update the time
    m_time = value;
    // resize the buffer to match the new time
    m_bufferSize = Math.floor( 44100 * m_time );
    m_buffer.length = m_bufferSize;
}
```

```
public function get gain():Number {
    return m_gain;
}

public function set gain( value:Number ):void {
    // clamp the gain to the range 0.0 - 1.0
    m_gain = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}
```
That concludes the AudioDelay class. One thing to bear in mind: changing the time property while the delay is active resizes the internal sample buffer (m_buffer), disturbing any echoes it currently holds.

The last thing to do is update the AudioEngine class so processors can be added to it and removed from it. First, add a vector to hold the processor instances:

```
static private var m_processorList:Vector.<AudioProcessor> = new Vector.<AudioProcessor>();
```
Then add two public methods to the AudioEngine class:

AudioEngine.addProcessor()

```
static public function addProcessor( processor:AudioProcessor ):void {
    if( m_processorList.indexOf( processor ) == -1 ) {
        m_processorList.push( processor );
    }
}
```

AudioEngine.removeProcessor()

```
static public function removeProcessor( processor:AudioProcessor ):void {
    var i:int = m_processorList.indexOf( processor );
    if( i != -1 ) {
        m_processorList.splice( i, 1 );
    }
}
```
These methods simply add AudioProcessor instances to m_processorList and remove them from it. The final method to add runs through the processor list and invokes each enabled processor's process() method:

```
static private function processSamples():void {
    var i:int = 0;
    var n:int = m_processorList.length;
    // run through the list of processors
    while( i < n ) {
        if( m_processorList[i].enabled ) {
            m_processorList[i].process( m_sampleList );
        }
        i++;
    }
}
```
Finally, one small change is needed in the AudioEngine.onSampleData() method:

```
if( m_soundChannel == null ) {
    while( i < n ) {
        b.writeFloat( 0.0 );
        b.writeFloat( 0.0 );
        i++;
    }
    return;
}
// generate the samples
generateSamples();
// run the processors over the generated samples
processSamples();
// write the samples to the stream, once per channel
while( i < n ) {
    s = m_sampleList[i] * m_amplitude;
    b.writeFloat( s );
    b.writeFloat( s );
    m_sampleList[i] = 0.0;
    i++;
}
```
The only addition is the processSamples() call, placed immediately after generateSamples() so that every enabled processor runs over the freshly generated sample buffer before it is written to the output stream.
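To wrap up, here is a hedged sketch of how the finished engine might be used; the property values are arbitrary choices of mine, not from the article, and the snippet assumes everything above is compiled into the noise package:

```
// hypothetical usage of the finished noise package
var tone:Audio = new Audio();
tone.waveform  = AudioWaveform.TRIANGLE;
tone.frequency = Math.pow( 2, -5 / 12 ) * 440.0; // E4, ~329.63 Hz
tone.duration  = 0.5;
tone.release   = 0.3;

// an echo applied to everything the engine produces
var delay:AudioDelay = new AudioDelay( 0.25 );
delay.gain = 0.5;
AudioEngine.addProcessor( delay );

AudioEngine.play( tone );
```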