
Broadcasting sound over the network with Java

I got curious about experimenting with transmitting sound over a network.

I chose Java for the task.

As a result, I wrote three components: a transmitter for Java SE, a receiver for Java SE, and a receiver for Android.



In Java SE, the classes from the javax.sound.sampled package handle the audio; on Android, the classes android.media.AudioFormat, android.media.AudioManager, and android.media.AudioTrack are used.

For networking, the standard Socket and ServerSocket classes are used.



With these components we successfully held a voice communication session between the Russian Far East and the Netherlands.


Another possible application: if you install a virtual sound card such as Virtual Audio Cable, you can stream music to other devices and thus listen to it simultaneously in several rooms of an apartment (given the appropriate number of devices).





1. Transmitter.





The transmission method is trivial: read the stream of bytes from the microphone and write it to the socket's output stream.



Capturing from the microphone and sending data over the network happen in separate threads:



mr = new MicrophoneReader();
mr.start();
ServerSocket ss = new ServerSocket(7373);
while (true) {
    Socket s = ss.accept();
    Sender sndr = new Sender(s);
    senderList.add(sndr);
    sndr.start();
}




The microphone thread:



public void run() {
    try {
        // open the microphone line with the shared AudioFormat
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        microphone = (TargetDataLine) AudioSystem.getLine(info);
        microphone.open(format);
        data = new byte[CHUNK_SIZE];
        microphone.start();
        while (!finishFlag) {
            synchronized (monitor) {
                // once every sender is parked on the monitor, wake them up
                // so they can send the current chunk
                if (senderNotReady == sendersCreated) {
                    monitor.notifyAll();
                    continue;
                }
                numBytesRead = microphone.read(data, 0, CHUNK_SIZE);
            }
            System.out.print("Microphone reader: ");
            System.out.print(numBytesRead);
            System.out.println(" bytes read");
        }
    } catch (LineUnavailableException e) {
        e.printStackTrace();
    }
}




UPD. Note: it is important to choose the CHUNK_SIZE parameter carefully. If the value is too small, the sound stutters; if it is too large, the playback delay becomes noticeable.
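
As a rough sizing sketch (my own illustration, not part of the original code; the 100 ms latency target is an assumption), the chunk size can be derived from the audio format used later in the article (16 kHz, 16-bit, stereo):

    int bytesPerSecond = 16000 * 2 * (16 / 8);               // 64000 bytes/s for 16 kHz, stereo, 16-bit
    int targetLatencyMs = 100;                               // hypothetical target delay per chunk
    int chunkSize = bytesPerSecond * targetLatencyMs / 1000; // 6400 bytes per microphone read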



The sending thread:



public void run() {
    try {
        OutputStream os = s.getOutputStream();
        while (!finishFlag) {
            synchronized (monitor) {
                senderNotReady++;
                monitor.wait();
                os.write(data, 0, numBytesRead);
                os.flush();
                senderNotReady--;
            }
            System.out.print("Sender #");
            System.out.print(senderNumber);
            System.out.print(": ");
            System.out.print(numBytesRead);
            System.out.println(" bytes sent");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}




Both thread classes are nested classes, and the outer class fields they share (data, numBytesRead, senderNotReady, sendersCreated, and monitor) must be declared volatile.

The monitor object is used to synchronize the threads.
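
For reference, here is a minimal sketch of how that outer class might be laid out (the class name and the CHUNK_SIZE value are my assumptions; the field names come from the snippets above):

    public class AudioTransmitter {
        static final int CHUNK_SIZE = 6400;      // see the sizing note above

        // State shared between MicrophoneReader and the Sender threads:
        volatile byte[] data;                    // the current audio chunk
        volatile int numBytesRead;               // how many bytes of data are valid
        volatile int senderNotReady;             // senders currently parked on the monitor
        volatile int sendersCreated;             // total number of senders started
        volatile Object monitor = new Object();  // assigned once; guards the hand-off

        class MicrophoneReader extends Thread { /* run() as shown above */ }
        class Sender extends Thread { /* run() as shown above */ }
    }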



2. Receiver for Java SE.





The method is equally trivial: read the stream of bytes from the socket and write it to the audio output.



try {
    InetAddress ipAddr = InetAddress.getByName(host);
    Socket s = new Socket(ipAddr, 7373);
    InputStream is = s.getInputStream();
    DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, format);
    speakers = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
    speakers.open(format);
    speakers.start();
    int numBytesRead;
    byte[] data = new byte[204800];
    while (true) {
        numBytesRead = is.read(data);
        if (numBytesRead == -1) break;   // the transmitter closed the connection
        speakers.write(data, 0, numBytesRead);
    }
} catch (Exception e) {
    e.printStackTrace();
}




3. Receiver for Android.





The approach is the same.

The only difference is that android.media.AudioTrack is used instead of javax.sound.sampled.SourceDataLine.

Keep in mind that on Android, networking must not happen on the application's main thread.

I decided not to bother with creating a Service, so the worker thread is started directly from the main Activity.



toogle.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (!isRunning) {
            isRunning = true;
            toogle.setText("Stop");
            rp = new ReceiverPlayer(hostname.getText().toString());
            rp.start();
        } else {
            toogle.setText("Start");
            isRunning = false;
            rp.setFinishFlag();
        }
    }
});




The worker thread itself:



class ReceiverPlayer extends Thread {
    volatile boolean finishFlag;
    String host;

    public ReceiverPlayer(String hostname) {
        host = hostname;
        finishFlag = false;
    }

    public void setFinishFlag() {
        finishFlag = true;
    }

    public void run() {
        try {
            InetAddress ipAddr = InetAddress.getByName(host);
            Socket s = new Socket(ipAddr, 7373);
            InputStream is = s.getInputStream();
            int bufferSize = AudioTrack.getMinBufferSize(16000,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
            int numBytesRead;
            byte[] data = new byte[bufferSize];
            AudioTrack aTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 16000,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                    bufferSize, AudioTrack.MODE_STREAM);
            aTrack.play();
            while (!finishFlag) {
                numBytesRead = is.read(data, 0, bufferSize);
                if (numBytesRead == -1) break;   // connection closed by the transmitter
                aTrack.write(data, 0, numBytesRead);
            }
            aTrack.stop();
            s.close();
        } catch (Exception e) {
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            e.printStackTrace(pw);
            Log.e("Error", sw.toString());
        }
    }
}




4. A note about audio formats.





Java SE uses the javax.sound.sampled.AudioFormat class.



On Android, the audio parameters are passed directly to the android.media.AudioTrack constructor.



Here are the constructors of these classes as used in my code.



Java SE:



AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian)

Constructs an AudioFormat with a linear PCM encoding and the given parameters.



Android:



AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)



For successful playback, the receiver and transmitter parameters must match: sampleRate / sampleRateInHz, sampleSizeInBits / audioFormat, and channels / channelConfig.



In addition, the mode parameter on Android must be set to AudioTrack.MODE_STREAM, which lets data be written to the track as it arrives over the network (unlike AudioTrack.MODE_STATIC, which expects the whole buffer up front).



It was also established experimentally that for successful playback on Android the data must be sent in signed little-endian format, that is:

signed = true; bigEndian = false.



As a result, the following formats were chosen:



// Java SE:
AudioFormat format = new AudioFormat(16000.0f, 16, 2, true, false);

// Android:
AudioTrack aTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 16000,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);




5. Testing.





Between a laptop running Windows 8 and a desktop running Debian Wheezy, everything worked right away without any problems.



The Android receiver initially produced nothing but noise, but the problem went away once the signed and bigEndian parameters of the audio format were chosen correctly.



On the Raspberry Pi (Raspbian Wheezy), playback initially stuttered; the workaround was to install the lightweight Avian JVM.



I wrote the following startup script:



case "$1" in
    start)
        java -avian -jar jAudioReceiver.jar 192.168.1.50 &
        echo "kill -KILL $!" > kill_receiver.sh
        ;;
    stop)
        ./kill_receiver.sh
        ;;
esac




The source code of all the components is here:



github.com/tabatsky/NetworkingAudio



Source: https://habr.com/ru/post/242949/


