
If you have ever been to a cinema, you have probably heard Deep Note, the sound trademark of the THX company. It is one of the first sounds you hear at the start of the trailers in THX-certified theaters. I have always loved its recognizable crescendo, which starts as a menacing jumble of notes and ends in a bright, grand finale (sound). What a delight for the ear!
Yesterday, for no particular reason, I became curious about the origin of this sound and did a little research. Its story genuinely moved me, and I want to share it with you. Then we will go further and create this sound ourselves; prepare your scissors and glue!
The best source of information about the sound that I could find is, in my opinion, its fairly complete history, published on the excellent Music Thing Blog in 2005. Here is a link to the post.
Some facts about the sound:
- It was created by Dr. James Andy Moorer in 1982.
- At one point it was played around 4,000 times a day, roughly once every 20 seconds! A quote from Dr. Moorer:
“I would like to say that the THX sound is the most popular piece of computer music in the world. That may or may not be true, but it sounds cool!”
- It was created on the ASP (Audio Signal Processor), a computer capable of synthesizing sound in real time.
- A program of 20,000 lines of C code generated the data to be played on the ASP. The generated output consisted of 250,000 lines of statements that were then processed by the ASP.
- The voice oscillators use a digitized cello tone as their waveform. Dr. Moorer recalls that the sample contained about 12 harmonics. The ASP could run 30 such oscillators in real time (for comparison, my laptop right now can run more than 1,000 of them without a hiccup).
- The sound itself is copyrighted, but here is the catch: Dr. Moorer's code relies on random number generators (the process is generative), so the sound comes out slightly different each time. I therefore don't think it is safe to say that the process itself is, or even can be, "copyrighted". The sound itself, yes: the specific recording is protected.
- The sound made its debut in the THX trailer shown before the premiere of "Return of the Jedi" in 1983.
- At some point, the generative nature of the process became a problem. After the release of "Return of the Jedi", the original Deep Note recording was lost. Dr. Moorer recreated the piece for the company, but people kept complaining that it didn't sound like the original. Eventually the original recording was found and is now kept in a safe place.
- Dr. Dre asked for permission to use the sample in his music and was refused. He used it anyway and was sued.
- In Iannis Xenakis's Metastaseis (1954) there is a very similar opening crescendo (as in works by various other composers). But it starts from a single tone and ends in a dense, dissonant tone cluster rather than the fully consonant finale of Deep Note. The sound recording from the patent application can be heard here.
Be sure to listen to that recording, because we will refer to it when recreating Deep Note.
Here are some technical / theoretical facts before proceeding to sound synthesis:
- My own observation: in the original recording from the Patent Office site, the fundamental sits between D and Eb, while in newer versions it sits between E and F. We will use the original D/Eb fundamental. The newer versions are usually shorter, too, if I'm not mistaken. Naturally, I prefer the version that was filed with the patent office.
- According to Dr. Moorer (and confirmed by my ears), the fragment begins with oscillators tuned to random frequencies between 200 Hz and 400 Hz. But the oscillators don't just sit and buzz: their frequencies are randomly modulated, with smoothing filters softening the random pitch transitions. This continues until the crescendo begins.
- During the crescendo and at the end of the fragment, the random generators still modulate the oscillator frequencies, so none of them is ever perfectly stable. But the random excursion range is so narrow that it merely adds a natural, chorus-like quality to the sound.
- Dr. Moorer recalls that there were about 12 distinct harmonics in the spectrum of the digitized cello sound.
- As far as I know, the seed values for the generator (the ones used for the copyrighted recording) were never published. Dr. Moorer says he could write them down if permission were obtained from THX. But I don't think they are necessary to recreate the sound.
- The sound at the finale (technically not a chord) is, to my ear, just a stack of octaves above the fundamental. So, for our recreation, we will start with randomly tuned oscillators (between 200 and 400 Hz), perform a more or less elaborate sweep, and finish by stacking octaves on a fundamental between low D and Eb.
So let's get started. My tool of choice is SuperCollider. We'll begin with a simple patch. As the source I want to use a sawtooth wave: it has a rich harmonic spectrum with both even and odd components. Later I plan to filter off the harsh highs. Here is a fragment of the initial part of the code:
I chose 30 oscillators to generate the sound, matching the capabilities of the ASP computer as described by Dr. Moorer. I created an array of 30 random frequencies between 200 and 400 Hz, spread them randomly across the stereo field using Pan2.ar with an rrand(-0.5, 0.5) argument, and assigned the frequencies to 30 instances of a sawtooth oscillator.
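A minimal SuperCollider sketch of the starting point just described might look like this (my reconstruction, not the original listing; the 1/30 per-voice amplitude is my own choice to avoid clipping):

```supercollider
(
{
	var numVoices = 30;
	Mix(
		{
			Pan2.ar(
				// each duplicate gets its own random frequency between 200 and 400 Hz
				Saw.ar(rrand(200.0, 400.0), numVoices.reciprocal),
				rrand(-0.5, 0.5) // random position in the stereo field
			)
		} ! numVoices
	);
}.play;
)
```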
This is how it sounds.
If you study Dr. Moorer's account and/or listen closely to the original fragment, you can hear that the oscillator frequencies drift randomly up and down. I want to add this effect for a more organic sound. The frequency scale is logarithmic, so low frequencies should drift over narrower ranges than high ones. We can achieve this by sorting our randomly generated frequencies and scaling the mul argument of LFNoise2 (which generates quadratically interpolated random values) in order of each voice's position inside our Mix. I also added a low-pass filter per oscillator, with the cutoff at five times the oscillator frequency and a moderate rq:
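A sketch of how this could look in SuperCollider (my reconstruction; the LFNoise2 rate and drift depths are guesses, not values from the original, and RLPF stands in for the per-voice low-pass filter):

```supercollider
(
{
	var numVoices = 30;
	// sorting means the voice index tracks pitch, so drift depth can grow with it
	var freqs = ({ rrand(200.0, 400.0) } ! numVoices).sort;
	Mix(
		freqs.collect({ |freq, i|
			var drift = LFNoise2.kr(0.5, (i + 1) * 0.2); // wider wobble for higher voices
			Pan2.ar(
				// low-pass at five times the oscillator frequency, moderate rq
				RLPF.ar(Saw.ar(freq + drift, numVoices.reciprocal), freq * 5, 0.5),
				rrand(-0.5, 0.5)
			);
		})
	);
}.play;
)
```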
This is how the sample with the latest edits sounds.
This already looks like a good starting point, so let's move on to implementing the sweep, very roughly at first. To implement the sweep, we first need to determine the final frequency for each oscillator. This is not entirely trivial, but not hard either. The fundamental must sit between low D and Eb, so the midpoint for that pitch is MIDI note 14.5 (0 is C, counting chromatically, below the first octave). For the 30 oscillators, then, we map the random frequencies between 200 and 400 Hz onto 14.5 and its octaves. By ear, I settled on the first 6 octaves. So the final array of frequencies is obtained like this:
(numVoices.collect({|nv| (nv/(numVoices/6)).round * 12; }) + 14.5).midicps;
We will use a sweep value that runs from 0 to 1. Each random frequency is multiplied by (1 − sweep), and the corresponding target frequency is multiplied by the sweep itself. So when the sweep is 0 (the start), the frequency is the random one; when the sweep is 0.5, we get ((random + target) / 2); and when it reaches 1, the frequency is the final value. Here is the modified code:
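A rough sketch of this version (my reconstruction; Line.kr stands in for the sweep, and the 10-second duration is arbitrary):

```supercollider
(
{
	var numVoices = 30;
	var randFreqs = ({ rrand(200.0, 400.0) } ! numVoices).sort;
	// octaves of MIDI note 14.5 (between low D and Eb), 6 octaves across 30 voices
	var finalFreqs = (numVoices.collect({ |nv| (nv / (numVoices / 6)).round * 12 }) + 14.5).midicps;
	var sweep = Line.kr(0, 1, 10); // rough linear sweep for now
	Mix(
		numVoices.collect({ |i|
			// crossfade from the random frequency to the target frequency
			var freq = (randFreqs[i] * (1 - sweep)) + (finalFreqs[i] * sweep);
			Pan2.ar(
				RLPF.ar(Saw.ar(freq, numVoices.reciprocal), freq * 5, 0.5),
				rrand(-0.5, 0.5)
			);
		})
	);
}.play;
)
```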
The sound is here.
As I said, this is a very rough sweep. It rises linearly from 0 to 1, which doesn't match the original composition. You may also have noticed that the final octaves sound horrible: because they are tuned to perfect octaves, they fuse with one another as fundamentals and overtones. We will fix this by adding a random drift at the final stage, just as we did at the beginning, and it will sound much more organic.
First we need to fix the overall shape of the frequency sweep. The previous one was just a placeholder. Listening to the original, we notice that very little changes during the first 5-6 seconds. After that comes a fast, exponential sweep that carries the oscillators to their final octave intervals. Here is the envelope I chose:
sweepEnv = EnvGen.kr(Env([0, 0.1, 1], [5, 8], [2, 5]));
Here the transition from 0 to 0.1 takes 5 seconds, and the transition from 0.1 to 1 takes 8 seconds. The curves for these segments are set to 2 and 5. We will hear the result shortly, but first we need to fix the final intervals. As before, we add random drift with LFNoise2, with a range proportional to each oscillator's final frequency. This makes the finale more organic. Here is the modified code:
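A sketch of this stage (my reconstruction; the 0.5% drift depth and the filter settings are my guesses):

```supercollider
(
{
	var numVoices = 30;
	var randFreqs = ({ rrand(200.0, 400.0) } ! numVoices).sort;
	var finalFreqs = (numVoices.collect({ |nv| (nv / (numVoices / 6)).round * 12 }) + 14.5).midicps;
	// slow start, then a fast curved rise, as described above
	var sweepEnv = EnvGen.kr(Env([0, 0.1, 1], [5, 8], [2, 5]));
	Mix(
		numVoices.collect({ |i|
			// drift proportional to the target frequency keeps the octaves from fusing
			var target = finalFreqs[i] * LFNoise2.kr(0.5, 0.005, 1);
			var freq = (randFreqs[i] * (1 - sweepEnv)) + (target * sweepEnv);
			Pan2.ar(
				RLPF.ar(Saw.ar(freq, numVoices.reciprocal), freq * 6, 0.6),
				rrand(-0.5, 0.5)
			);
		})
	);
}.play;
)
```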
Here I also tweaked the cutoff frequency of the low-pass filter to my taste. I like to fiddle with things as long as the result doesn't get worse... In any case, this is what came out.
I don't really like this sweep shape. The beginning needs to stretch out and the finish needs to speed up. Or wait... do we really need the same envelope for every oscillator? Absolutely not! Each oscillator should get its own envelope with slightly different time and curvature values; I'm sure that will be more interesting. The high overtones of the random sawtooth cluster are also a little annoying, so we add a low-pass filter to the overall mix, with its cutoff controlled by a global "external" value unrelated to the individual oscillator envelopes. Here is the modified code:
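A sketch with per-voice envelopes and the global filter (my reconstruction; the randomized envelope times and curves are illustrative values):

```supercollider
(
{
	var numVoices = 30;
	var randFreqs = ({ rrand(200.0, 400.0) } ! numVoices).sort;
	var finalFreqs = (numVoices.collect({ |nv| (nv / (numVoices / 6)).round * 12 }) + 14.5).midicps;
	var snd = Mix(
		numVoices.collect({ |i|
			// each voice gets its own sweep with slightly different times and curves
			var sweep = EnvGen.kr(Env([0, rrand(0.1, 0.2), 1],
				[rrand(4.5, 6.0), rrand(7.0, 9.0)], [rrand(2.0, 3.0), rrand(4.0, 5.0)]));
			var target = finalFreqs[i] * LFNoise2.kr(0.5, 0.005, 1);
			var freq = (randFreqs[i] * (1 - sweep)) + (target * sweep);
			Pan2.ar(Saw.ar(freq, numVoices.reciprocal), rrand(-0.5, 0.5));
		})
	);
	// global low-pass tames the highs of the initial cluster
	RLPF.ar(snd, 2000, 0.5);
}.play;
)
```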
A small change made the sweep a bit more interesting. A low-pass filter at 2000 Hz helps tame the initial cluster.
This is how it sounds .
There is one more thing that will make the process more interesting. Remember we sorted the random frequencies at the beginning? Well, now we can sort them in reverse order, so that the oscillators that start at higher random frequencies end up in the lower voices after the crescendo, and vice versa. This adds more "movement" to the crescendo and matches how the original fragment is structured. I'm not sure Dr. Moorer programmed it this way, but the effect is there in the recording, and it sounds cool, whether it is a chance product of the generative process or a deliberate choice. (Oh, did I just say that? If the process allows for an option, then it is a choice... or is it?) So we change the sorting order and the code structure so that the higher-frequency saws land in the lower voices at the finale, and vice versa.
One more thing: we need a louder bass. Right now all voices have the same amplitude. I want the low voices to sound a little louder, with loudness tapering off as frequency rises, so we adjust the mul argument of Pan2 accordingly. We also retune the cutoff frequencies of the per-oscillator low-pass filters. And I'm going to add an amplitude envelope that fades in smoothly and fades out at the finale, and frees the synth on the server when it's done. A few more numeric tweaks here and there, and here is the final code:
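A sketch of what the final patch could look like (my reconstruction; the envelope times, bass weighting, and limiter are my own choices, not the author's published values):

```supercollider
(
{
	var numVoices = 30;
	// reverse sort: voices that start high land on the low final pitches
	var randFreqs = ({ rrand(200.0, 400.0) } ! numVoices).sort.reverse;
	var finalFreqs = (numVoices.collect({ |nv| (nv / (numVoices / 6)).round * 12 }) + 14.5).midicps;
	// overall amplitude: smooth fade-in, hold, fade-out; frees the synth when done
	var ampEnv = EnvGen.kr(Env([0, 1, 1, 0], [3, 21, 4], [2, 0, -4]), doneAction: 2);
	var snd = Mix(
		numVoices.collect({ |i|
			var sweep = EnvGen.kr(Env([0, rrand(0.1, 0.2), 1],
				[rrand(5.0, 6.0), rrand(8.0, 9.0)], [rrand(2.0, 3.0), rrand(4.0, 5.0)]));
			var target = finalFreqs[i] * LFNoise2.kr(0.5, 0.005, 1);
			var freq = (randFreqs[i] * (1 - sweep)) + (target * sweep);
			Pan2.ar(
				RLPF.ar(Saw.ar(freq, numVoices.reciprocal), freq * 8, 0.5),
				rrand(-0.5, 0.5),
				1 - (i / numVoices * 0.5) // low final pitches (low index) are louder
			);
		})
	);
	Limiter.ar(snd * ampEnv);
}.play;
)
```

Evaluate the block in the SuperCollider IDE; doneAction: 2 on the amplitude envelope frees the synth on the server once the sound has faded out.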
And here is the final recording of the work. You can compare it with the original.
Yes, this is my interpretation. And of course it can be tweaked to death, changing the envelopes, the frequencies, the distributions, whatever... Nevertheless, I think it is a worthy attempt at preserving this piece of sonic heritage. I would love to hear your comments and/or your own attempts at synthesizing this crescendo.
Oh, and here is one more thing I did for fun. Remember I said it took 20,000 lines of C code to generate the original? I'm pretty sure Dr. Moorer had to write everything by hand, so the figure is not surprising. But you know, thanks to the popularity of Twitter, we now try to squeeze everything into 140 characters of code. For fun, I tried to reproduce the main elements of the composition in 140 characters. I think the result still sounds cool; here is the code (this one has the fundamental between E and F):
play{Mix({|k|k=k+1/2;2/k*Mix({|i|i=i+1;Blip.ar(i*XLine.kr(rand(2e2,4e2),87+LFNoise2.kr(2)*k,15),2,1/(i/a=XLine.kr(0.3,1,9))/9)}!9)}!40)!2*a}
And here is the sound that this version generates.
Here, in one document, is all the code from this page for your experiments.
Good luck, crescendo friends!