
MIDI and OSC: the main protocols for interaction between music applications

Part 1. MIDI


MIDI (Musical Instrument Digital Interface) is a standard for exchanging data between digital musical instruments. It carries information such as note number, velocity, timecode, and so on. MIDI is supported by most music devices on the market; the exceptions are many modular-system modules (for example, Eurorack), as well as niche devices such as the Monome.

1 Background

The need for such a standard arose around the end of the 1970s. At that time, synthesizers were voltage controlled via the CV/Gate interface. Several variants of it existed, but the one proposed by Roland was the most popular: raising the control voltage by 1 V raised the pitch of the generated tone by one octave. The main drawback of this interface is that it can control only one voice of polyphony; to play an additional note, you need another CV/Gate line. Moreover, it transmits only the fact that a key was pressed and its pitch, which is clearly not enough for expressive playing.

Another disadvantage of synthesizers of that era was the complexity of tuning. For every new sound, the musician had to set the instrument up from scratch, which was inconvenient for live performance. At concerts of the time one could often see whole racks of synthesizers; that was how musicians worked around the problem. Over time, instruments acquired built-in mini-computers that could save knob positions as presets.
However, there is one more factor that strongly influenced the development of MIDI.

Undoubtedly, every synthesizer has its own character, and each model excelled at particular kinds of sounds. Many musicians of the time therefore played two instruments at once, combining the best of different models. Layering the sounds of different synthesizers became a performance technique, the signature of many musicians. [1]
2 History of appearance

By the early 1980s, most manufacturers realized the need for a single interface. The task was this: to develop a standard for transmitting a performer's actions in digital form between all types of electronic musical instruments. [1]

We will not go into the history in detail (although it is very interesting; you can read about it in [1]). Here are a few key dates:

3 Basics

MIDI is a serial communication protocol between master and slave devices: the master generates messages and sends them to the slave, which executes the received commands. "Serial" means that information is transmitted one bit at a time, bit by bit; hence it is impossible to transmit several messages simultaneously.

The protocol itself consists of three parts [1]: the data format specification, the hardware interface specification, and the data storage specification. This article deals only with the first part.

MIDI messages are divided into two types: channel messages and system messages. The former control sound generation, while the latter perform service functions such as synchronization.

Fig. Types of MIDI messages.

A message usually consists of two or three bytes. The first byte is called the status byte: it specifies the type of message and the channel number it belongs to. All subsequent bytes are called data bytes. A status byte always begins with a one and a data byte with a zero; this is how the system tells them apart. That leaves only 7 bits for MIDI data, enough to encode integers from 0 to 127, which is where the "famous" limit on note numbers and controller values comes from.
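The byte layout described above is easy to check in code. A minimal sketch (not a full MIDI parser) that distinguishes status bytes from data bytes and decodes the message type and channel:

```python
def is_status(byte: int) -> bool:
    """Status bytes always have the high bit set; data bytes never do."""
    return bool(byte & 0x80)

def parse_status(byte: int):
    """Return (message_type, channel) for a channel-message status byte."""
    assert is_status(byte), "not a status byte"
    msg_type = (byte >> 4) & 0x07   # 3 bits for the message type
    channel = byte & 0x0F           # low 4 bits: channel 0-15
    return msg_type, channel

# 0x90 = binary 1001 0000: Note On (type 001) on channel 0
print(parse_status(0x90))   # (1, 0)
print(is_status(0x3C))      # False: 0x3C = 60 is a data byte (middle C)
```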

Fig. The structure of a MIDI message.

As can be seen from the figure, only 3 bits are allotted to the message type, which can encode just 8 values. Seven of them are reserved for the most frequently used commands, and the last one is used for system messages. When a system message is transmitted, the low 4 bits of the status byte (which normally carry the channel number) determine the type of system message.

Tab. 1. Channel messages.

| Message                 | Status byte | Data byte 1       | Data byte 2 |
|-------------------------|-------------|-------------------|-------------|
| Note Off                | 1000nnnn    | Note number       | Velocity    |
| Note On                 | 1001nnnn    | Note number       | Velocity    |
| Polyphonic Key Pressure | 1010nnnn    | Note number       | Pressure    |
| Control Change          | 1011nnnn    | Controller number | Value       |
| Program Change          | 1100nnnn    | Program number    | -           |
| Channel Pressure        | 1101nnnn    | Pressure          | -           |
| Pitch Wheel Change      | 1110nnnn    | LSB               | MSB         |
| System messages         | 1111nnnn    | ...               | ...         |

Tab. 2. System messages.

| Message                  | Status byte    | Data byte 1 | Data byte 2 |
|--------------------------|----------------|-------------|-------------|
| System Exclusive:        |                |             |             |
| System Exclusive (SysEx) | 11110000       | ID          | ...         |
| System common:           |                |             |             |
| MTC Quarter Frame        | 11110001       | Timecode    | -           |
| Song Position Pointer    | 11110010       | LSB         | MSB         |
| Song Select              | 11110011       | Song number | -           |
| Tune Request             | 11110110       | -           | -           |
| End Of Exclusive (EOX)   | 11110111       | -           | -           |
| Real time:               |                |             |             |
| Timing Clock             | 11111000 (248) | -           | -           |
| Start                    | 11111010 (250) | -           | -           |
| Continue                 | 11111011 (251) | -           | -           |
| Stop                     | 11111100 (252) | -           | -           |
| Active Sensing           | 11111110 (254) | -           | -           |
| System Reset             | 11111111 (255) | -           | -           |

4 Disadvantages

MIDI was developed as an affordable and practical standard for transmitting performer gestures between any MIDI devices [2]. Not least thanks to its simplicity, it became so widespread. Whatever one may say, it handles its mission well, and time has confirmed this.
Probably the best-known drawback is the limitation of controller values to 128 steps. It is possible to transmit finer values using two data bytes (which gives 16,384 possible values), but that requires sending two Control Change messages, which heavily loads the protocol, since data travels over it at only 31,250 bps. That is very slow: for comparison, a 12-note chord takes about 10 ms to transmit, and that is without other messages such as Clock and CC. In a real performance, when many different parameters are transmitted simultaneously, synchronization problems can occur.
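The arithmetic above can be checked directly. A sketch, assuming the common 14-bit CC convention (MSB sent on controller n, 0-31, LSB on controller n + 32) and the standard MIDI serial framing of 10 bits per byte (start bit + 8 data bits + stop bit) at 31,250 bps:

```python
def cc14(channel: int, controller: int, value: int) -> bytes:
    """Encode a 14-bit controller value (0-16383) as two 3-byte CC messages."""
    assert 0 <= value < 16384 and 0 <= controller < 32
    msb, lsb = (value >> 7) & 0x7F, value & 0x7F
    status = 0xB0 | channel
    return bytes([status, controller, msb,        # MSB on CC n
                  status, controller + 32, lsb])  # LSB on CC n+32

def transmit_time_ms(num_bytes: int, baud: int = 31250,
                     bits_per_byte: int = 10) -> float:
    """Wire time for a given number of MIDI bytes, in milliseconds."""
    return num_bytes * bits_per_byte / baud * 1000

print(transmit_time_ms(6))        # one 14-bit CC pair: 1.92 ms
print(transmit_time_ms(12 * 3))   # a 12-note chord (36 bytes): 11.52 ms
```

The 11.52 ms for a 12-note chord is where the "about 10 ms" figure in the text comes from (running status would shave it down slightly).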

Part 2. Open Sound Control


"Open Sound Control is a new protocol, optimized for modern network technologies, for communication among computers, sound synthesizers and other multimedia devices." That is how OSC was presented at the International Computer Music Conference in 1997 [3]. OSC is not a protocol in the same sense that MIDI is, since it does not describe hardware requirements: the specification covers only the data transfer format. In this respect OSC is closer to XML or JSON than to MIDI [8].

For now, let's set the technical details aside and start at the very beginning, with the history.

1 History, applications

Open Sound Control was created in 1997 by Matthew Wright and Adrian Freed at the Center for New Music and Audio Technologies (CNMAT) at the University of California. The developers wanted to bring high-speed network technologies into interactive computer music [4]. OSC does not care which transport carries it, since it is just a binary message format, although most implementations use TCP/IP or UDP. Another motivation was that MIDI, with its notes, channels and controllers, did not map logically onto the CAST (CNMAT Additive Synthesis Tools) synthesizer being developed at the time, which is understandable: MIDI is a keyboard-oriented protocol designed to control one synthesizer from another [1].

The word "Open" in the name means that OSC does not predetermine which messages should be used for which parameters; that is up to the developer of a specific device. The word has a second meaning as well: the protocol is open, its specification is published on the official site, where the source code can also be downloaded.

A small (and incomplete) list of programs that use Open Sound Control:

2 Features


3 Message Anatomy

Fig. Anatomy of an OSC message.

It is worth noting that with UDP, messages transmitted in different packets will not necessarily arrive in the order in which they were sent [6]. Suppose the following messages were transmitted:

/synth1/noteoff 54
/synth1/noteon 60


In fact, they may arrive in reverse order:

/synth1/noteon 60
/synth1/noteoff 54


This can lead to problems with voice management in polyphony. In this example, the noteoff message frees a voice, and the noteon that follows starts another note. If the messages arrive in reverse order, the voice will not be freed and the new note will not be able to start.

To avoid this, you need to send the messages in a single bundle, or use TCP/IP, which differs from UDP in that it guarantees correct delivery of packets, retransmitting each one until it arrives intact. Keep in mind that the price of this convenience is a large delay compared to UDP, so the use of TCP/IP must be justified.
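The message anatomy described above is simple enough to encode by hand. A minimal sketch following the OSC 1.0 rules: the address and type-tag strings are NUL-terminated and padded to a multiple of 4 bytes, and int32 arguments are big-endian:

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """NUL-terminate a string and pad it to a 4-byte boundary."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *int_args: int) -> bytes:
    """Encode an OSC message whose arguments are all int32."""
    tags = "," + "i" * len(int_args)          # e.g. ",i" for one int
    data = osc_pad(address.encode()) + osc_pad(tags.encode())
    for a in int_args:
        data += struct.pack(">i", a)           # big-endian int32
    return data

pkt = osc_message("/synth1/noteon", 60)
print(len(pkt))   # 24: 16-byte padded address + 4-byte type tags + 4-byte int
```

Such a byte string is what would be handed to a UDP socket (or wrapped in a bundle, which adds the "#bundle" header and a timetag).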

4 Pattern matching

Characters that can be used in the address pattern [7]:

- `?` matches any single character except `/`;
- `*` matches any sequence of zero or more characters except `/`;
- `[abc]` matches any character listed in the brackets (`-` gives a range, a leading `!` negates the set);
- `{foo,bar}` matches any one of the comma-separated strings.
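A sketch of how a receiver might implement this matching, by translating an OSC 1.0 address pattern into a regular expression (note that OSC uses `!` for negation where regexes use `^`, and that wildcards never cross a `/`):

```python
import re

def osc_pattern_to_regex(pattern: str) -> str:
    """Translate an OSC address pattern into an anchored Python regex."""
    out, i = "", 0
    while i < len(pattern):
        c = pattern[i]
        if c == "?":
            out += "[^/]"                      # any single char except '/'
        elif c == "*":
            out += "[^/]*"                     # any run of chars except '/'
        elif c == "[":
            j = pattern.index("]", i)
            body = pattern[i + 1:j]
            if body.startswith("!"):           # OSC negation -> regex '^'
                body = "^" + body[1:]
            out += "[" + body + "]"
            i = j
        elif c == "{":
            j = pattern.index("}", i)
            alts = pattern[i + 1:j].split(",")
            out += "(" + "|".join(map(re.escape, alts)) + ")"
            i = j
        else:
            out += re.escape(c)
        i += 1
    return out + "$"

def osc_match(pattern: str, address: str) -> bool:
    return re.match(osc_pattern_to_regex(pattern), address) is not None

print(osc_match("/synth?/note{on,off}", "/synth1/noteon"))  # True
print(osc_match("/synth[!3]/*", "/synth3/noteon"))          # False
```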

References


[1] Series of articles about MIDI from the magazine "Musical Equipment".
[2] T. Winkler, "Composing Interactive Music", MIT Press, 2001.
[3] M. Wright and A. Freed, "Open Sound Control: A New Protocol for Communicating with Sound Synthesizers", ICMC 1997.
[4] M. Wright, "Open Sound Control", Organized Sound 10(3): 193-200, Cambridge University Press, 2005.
[5] A. Schmeder, A. Freed, D. Wessel, "Best Practices for Open Sound Control", Linux Audio Conference, Utrecht, NL, 2010.
[6] A. Fraietta, "Open Sound Control: Constraints and Limitations", 8th NIME Conference, 2008.
[7] M. Wright, "The Open Sound Control 1.0 Specification", 2002.
[8] A. Freed, A. Schmeder, "Features and Future of Open Sound Control version 1.1 for NIME", 2009.

P.S. Many thanks to Habr user 8bitjoey for finding an error in the article.

Source: https://habr.com/ru/post/139226/
