An object (SLObjectItf) is an abstraction of a set of resources that performs a specific set of tasks and stores information about those resources. When an object is created, its type is defined; the type determines the range of tasks the object can perform.

An interface (SLEngineItf, SLSeekItf, etc.) is an abstraction of a set of related functionality exposed by a particular object. An interface contains the methods used to perform actions on the object, and its type defines the exact list of methods it supports. An interface is referred to in code by its identifier (for example, SL_IID_VOLUME or SL_IID_SEEK). All the constants and interface names are fairly self-explanatory, so they should not cause any particular trouble.

For example, you can stream a file by its descriptor via SLDataLocator_AndroidFD
, which supports an interface for seeking within a track. Alternatively, you can load the entire file into a buffer and play it from there; an object created that way, however, does not support the SL_IID_SEEK interface, so seeking within the track is not possible.

The wrapper consists of the following classes:

- OSLContext. Responsible for initializing the library and creating instances of the required players and buffers.
- OSLSound. Base class for working with sounds.
- OSLWav. Class for working with WAV files. Inherits from OSLSound so that all formats share a common interface. For OGG you could later add an OSLOgg class, as I did with OpenAL. The split exists because these formats have completely different loading processes: WAV is a raw format, so it is enough to simply read the bytes, while OGG also has to be decoded with Ogg Vorbis (to say nothing of MP3).
- OSLMp3. Class for working with MP3 files. Inherits from OSLSound for the same reason. The class implements almost nothing, because MP3 is played as a stream. If you do want to decode MP3 (with LAME or something similar), you can implement the decoding in the load(char *filename) method and use OSLBufferPlayer.
- OSLPlayer. The main class for working with sound. The mechanics of OpenSL ES differ from OpenAL: OpenAL has separate structures for buffers and sound sources (to which buffers are attached), while in OpenSL ES everything revolves around players of various kinds.
- OSLBufferPlayer. Used when we want to load an entire file into memory; as a rule, for short sound effects (shots, explosions, and so on). As noted above, it does not support the SL_IID_SEEK interface, so seeking within the track is not possible.
- OSLAssetPlayer. Streams from the assets directory (that is, without loading the whole file into memory). Use it for long tracks, such as background music.

Every object is used according to the same pattern: after creation it is realized with (*obj)->Realize(obj, async); its interfaces are obtained with (*obj)->GetInterface(obj, ID, &itf); and it is released with (*obj)->Destroy(obj).

To build with OpenSL ES, link the library in Android.mk:

```make
LOCAL_LDLIBS += -lOpenSLES
```

and include two header files:

```cpp
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
```
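The (*obj)->Method(obj, ...) calling convention used throughout OpenSL ES exists because it is a C API: an SLObjectItf is a pointer to a pointer to a struct of function pointers. The following mock (not real OpenSL ES types; all names here are illustrative) shows the same pattern in miniature:

```cpp
#include <cassert>

// Mock of the OpenSL ES calling convention. This is NOT real OpenSL ES code:
// an "interface" is a pointer to a pointer to a table of function pointers,
// which is why every call looks like (*obj)->Method(obj, ...).
struct MockObjectVtbl;
typedef const struct MockObjectVtbl *const *MockObjectItf;

struct MockObjectVtbl {
    unsigned (*Realize)(MockObjectItf self, bool async);
};

static unsigned mockRealize(MockObjectItf self, bool async) {
    (void)self;
    (void)async;
    return 0; // analogue of SL_RESULT_SUCCESS, which is 0 in the real headers
}

static const MockObjectVtbl mockVtbl = {mockRealize};
static const MockObjectVtbl *const mockObjStorage = &mockVtbl;
```

Usage mirrors the real API: `MockObjectItf obj = &mockObjStorage;` and then `(*obj)->Realize(obj, false);`, just as with a real SLObjectItf.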
First we create the engine object with the slCreateEngine function. The resulting object is the central entry point to the OpenSL ES API. Next, we initialize the object with the Realize method.

```cpp
result = slCreateEngine(&engineObj,         // pointer to the object
                        0,                  // number of elements in the options array
                        NULL,               // array of additional options
                        lEngineMixIIDCount, // interface count
                        lEngineMixIIDs,     // array of interface ids
                        lEngineMixReqs);    // array of required-interface flags
if (result != SL_RESULT_SUCCESS) {
    LOGE("Error after slCreateEngine");
    return;
}

result = (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
if (result != SL_RESULT_SUCCESS) {
    LOGE("Error after Realize");
    return;
}
```
From the engine object we obtain the SL_IID_ENGINE interface, through which we get access to the speakers, sound playback, and so on.

```cpp
result = (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);
if (result != SL_RESULT_SUCCESS) {
    LOGE("Error after GetInterface");
    return;
}
```
Then we create the output mix with the CreateOutputMix method:

```cpp
result = (*engine)->CreateOutputMix(engine, &outputMixObj,
                                    lOutputMixIIDCount, lOutputMixIIDs, lOutputMixReqs);
if (result != SL_RESULT_SUCCESS) {
    LOGE("Error after CreateOutputMix");
    return;
}

result = (*outputMixObj)->Realize(outputMixObj, SL_BOOLEAN_FALSE);
if (result != SL_RESULT_SUCCESS) {
    LOGE("Error after Realize");
    return;
}
```
When the OSLContext is created, all the necessary players are initialized. The maximum possible number of players is limited; I recommend creating no more than 20.

```cpp
void OSLContext::initPlayers() {
    for (int i = 0; i < MAX_ASSET_PLAYERS_COUNT; ++i)
        assetPlayers[i] = new OSLAssetPlayer(this);
    for (int i = 0; i < MAX_BUF_PLAYERS_COUNT; ++i)
        bufPlayers[i] = new OSLBufferPlayer(this);
}
```
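The article does not show how OSLContext hands a player out when a sound is requested. One simple allocation scheme, sketched here as a standalone class with hypothetical names, is round-robin: reuse player slots in a circle so that a still-playing sound is only cut off after every other slot has been used.

```cpp
#include <cstddef>

// Hypothetical round-robin selection of the next player slot. This is an
// assumption about OSLContext's allocation strategy, not code from the
// article: slots are handed out in a circle, 0, 1, ..., count-1, 0, ...
class PlayerRing {
public:
    explicit PlayerRing(std::size_t count) : count_(count), next_(0) {}

    // Returns the index of the player to (re)use for the next sound.
    std::size_t acquire() {
        std::size_t idx = next_;
        next_ = (next_ + 1) % count_;
        return idx;
    }

private:
    std::size_t count_;
    std::size_t next_;
};
```

OSLContext could keep one such ring per player array (bufPlayers and assetPlayers) and call acquire() each time a sound needs a player.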
To play a sound, we take a free player from the context (OSLContext). If the sound needs to loop, we take an OSLAssetPlayer; otherwise, an OSLBufferPlayer. The OSLBufferPlayer is initialized like this:

```cpp
// Describe the audio source: a simple buffer queue with raw PCM data
locatorBufferQueue.locatorType = SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE;
locatorBufferQueue.numBuffers  = 16;

SLDataFormat_PCM formatPCM;
formatPCM.formatType    = SL_DATAFORMAT_PCM;
formatPCM.numChannels   = 2;
formatPCM.samplesPerSec = SL_SAMPLINGRATE_44_1;        // header.samplesPerSec * 1000;
formatPCM.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16; // header.bitsPerSample;
formatPCM.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16; // header.fmtSize;
formatPCM.channelMask   = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
formatPCM.endianness    = SL_BYTEORDER_LITTLEENDIAN;

audioSrc.pLocator = &locatorBufferQueue;
audioSrc.pFormat  = &formatPCM;

// Describe the audio sink: the output mix created earlier
locatorOutMix.locatorType = SL_DATALOCATOR_OUTPUTMIX;
locatorOutMix.outputMix   = context->getOutputMixObject();
audioSnk.pLocator = &locatorOutMix;
audioSnk.pFormat  = NULL;

// Create the player, requesting the buffer-queue and volume interfaces
// (others, such as SL_IID_MUTESOLO or SL_IID_EFFECTSEND, could be requested here)
const SLInterfaceID ids[2] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_VOLUME};
const SLboolean     req[2] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
result = (*context->getEngine())->CreateAudioPlayer(context->getEngine(), &playerObj,
                                                    &audioSrc, &audioSnk, 2, ids, req);
assert(SL_RESULT_SUCCESS == result);

result = (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
assert(SL_RESULT_SUCCESS == result);
if (result != SL_RESULT_SUCCESS) {
    LOGE("Can not CreateAudioPlayer %d", result);
    playerObj = NULL;
}

// Obtain the play, volume, and buffer-queue interfaces
result = (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &player);
assert(SL_RESULT_SUCCESS == result);

result = (*playerObj)->GetInterface(playerObj, SL_IID_VOLUME, &fdPlayerVolume);
assert(SL_RESULT_SUCCESS == result);

result = (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bufferQueue);
assert(SL_RESULT_SUCCESS == result);
```
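The code above obtains fdPlayerVolume but never uses it. SetVolumeLevel on the volume interface takes a level in millibels (hundredths of a decibel), not a linear 0..1 gain, so a conversion helper is handy. This helper is not from the article; the -9600 mB (-96 dB) silence floor is an assumption, and real code could query GetMaxVolumeLevel for the upper bound.

```cpp
#include <algorithm>
#include <cmath>

// Convert a linear gain in [0, 1] to millibels for SetVolumeLevel.
// -9600 mB (-96 dB) is used as an assumed practical silence floor.
int gainToMillibel(float gain) {
    if (gain <= 0.0f)
        return -9600;
    float mb = 2000.0f * std::log10(gain); // 20*log10(gain) dB, times 100
    return static_cast<int>(std::max(mb, -9600.0f));
}
```

With the interface from the code above, halving the volume would look roughly like `(*fdPlayerVolume)->SetVolumeLevel(fdPlayerVolume, gainToMillibel(0.5f));`.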
Note the SLDataFormat_PCM structure. Why did I fill in the parameters explicitly instead of reading them from the WAV headers? Because all of my WAV files have the same format: the same number of channels, sample rate, bit depth, and so on. The thing is, if you create the player with 2 channels in the parameters and then try to play a track with 1 channel, the application will crash. The only way around that is to reinitialize the whole player whenever a file has a different format. But the whole point is that we initialize the player once and then only swap the buffer on it. So there are two options: either create several players with different parameters, or convert all of your .wav files to the same format. (Or, well, reinitialize the player every time.)

When creating the player you can also request:

- SL_IID_MUTESOLO for managing individual channels (for multichannel audio only, as indicated by the numChannels field of the SLDataFormat_PCM structure);
- SL_IID_EFFECTSEND for applying effects (per the specification, only the reverb effect).

Setting a sound on a buffer player then looks like this:

```cpp
void OSLBufferPlayer::setSound(OSLSound *sound) {
    if (bufferQueue == NULL)
        LOGD("bufferQueue is null");
    this->sound = sound;
    // Drop whatever is queued and enqueue the new sound's PCM data
    (*bufferQueue)->Clear(bufferQueue);
    (*bufferQueue)->Enqueue(bufferQueue, sound->getBuffer(), sound->getSize());
}
```
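The alternative dismissed above, reading the format from the WAV file itself, would look something like the sketch below. It assumes the canonical 44-byte PCM header (the "fmt " chunk directly after "WAVE") and a little-endian host, both of which hold for plain PCM files on Android; a robust loader would walk the RIFF chunks instead.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// The fields that SLDataFormat_PCM needs, read from a canonical WAV header.
struct WavFormat {
    uint16_t numChannels;   // header offset 22
    uint32_t sampleRate;    // header offset 24
    uint16_t bitsPerSample; // header offset 34
};

// Returns false if the buffer is too small or lacks the RIFF/WAVE magic.
bool parseWavHeader(const uint8_t *data, std::size_t size, WavFormat *out) {
    if (size < 44 || std::memcmp(data, "RIFF", 4) != 0 ||
        std::memcmp(data + 8, "WAVE", 4) != 0)
        return false;
    // memcpy from the little-endian file layout; assumes a little-endian host
    std::memcpy(&out->numChannels,   data + 22, sizeof out->numChannels);
    std::memcpy(&out->sampleRate,    data + 24, sizeof out->sampleRate);
    std::memcpy(&out->bitsPerSample, data + 34, sizeof out->bitsPerSample);
    return true;
}
```

This would let OSLWav reject (or route to a differently configured player) any file whose format does not match the one the player was created with, instead of crashing.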
As mentioned, the OSLMp3 class in fact only stores the file name so that it can later be set on a player. The same can be done for OGG and the other supported formats. An OSLAssetPlayer is initialized from a file in the assets directory like this:

```cpp
void OSLAssetPlayer::init(char *filename) {
    SLresult result;

    AAsset *asset = AAssetManager_open(mgr, filename, AASSET_MODE_UNKNOWN);
    if (NULL == asset) {
        return;
    }

    // Get a file descriptor for the asset
    off_t start, length;
    int fd = AAsset_openFileDescriptor(asset, &start, &length);
    assert(0 <= fd);
    AAsset_close(asset);

    // Configure the audio source: a file-descriptor locator with MIME format
    SLDataLocator_AndroidFD loc_fd = {SL_DATALOCATOR_ANDROIDFD, fd, start, length};
    SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
    SLDataSource audioSrc = {&loc_fd, &format_mime};

    // Configure the audio sink: the output mix
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, context->getOutputMixObject()};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // Create the player, requesting the seek, mute/solo, and volume interfaces
    const SLInterfaceID ids[3] = {SL_IID_SEEK, SL_IID_MUTESOLO, SL_IID_VOLUME};
    const SLboolean     req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*context->getEngine())->CreateAudioPlayer(context->getEngine(), &playerObj,
                                                        &audioSrc, &audioSnk, 3, ids, req);
    assert(SL_RESULT_SUCCESS == result);

    result = (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
    assert(SL_RESULT_SUCCESS == result);

    // Obtain the interfaces we asked for
    result = (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &player);
    assert(SL_RESULT_SUCCESS == result);

    result = (*playerObj)->GetInterface(playerObj, SL_IID_SEEK, &fdPlayerSeek);
    assert(SL_RESULT_SUCCESS == result);

    result = (*playerObj)->GetInterface(playerObj, SL_IID_MUTESOLO, &fdPlayerMuteSolo);
    assert(SL_RESULT_SUCCESS == result);

    result = (*playerObj)->GetInterface(playerObj, SL_IID_VOLUME, &fdPlayerVolume);
    assert(SL_RESULT_SUCCESS == result);

    // Enable or disable looping according to the sound's settings
    result = (*fdPlayerSeek)->SetLoop(fdPlayerSeek,
                                      sound->isLooping() ? SL_BOOLEAN_TRUE : SL_BOOLEAN_FALSE,
                                      0, SL_TIME_UNKNOWN);
    assert(SL_RESULT_SUCCESS == result);
}
```
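To see why long tracks go through OSLAssetPlayer rather than being loaded whole into an OSLBufferPlayer, it helps to estimate the memory cost of a fully decoded PCM buffer. A small helper (illustrative, not from the article) whose parameters mirror the SLDataFormat_PCM fields used earlier:

```cpp
#include <cstdint>

// Bytes needed to hold `seconds` of uncompressed PCM audio:
// seconds * sampleRate * channels * bytes-per-sample.
uint64_t pcmBufferBytes(uint32_t seconds, uint32_t sampleRate,
                        uint32_t channels, uint32_t bitsPerSample) {
    return static_cast<uint64_t>(seconds) * sampleRate * channels * (bitsPerSample / 8);
}
```

A 3-minute stereo track at 44.1 kHz / 16-bit comes out to roughly 30 MB, which is why background music is streamed from assets while only short effects live in memory.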
Source: https://habr.com/ru/post/235795/