In 2011-2012 the telecommunications world stepped into the era of 100G, and it did so massively, everywhere. If dear Habr readers are curious how that happened, a bit of background awaits under the cut.
Note: First of all, a small clarification is needed: by 100G we mean data transmission at a speed of one hundred gigabits per second. There are, however, several ways to implement such rates, for example the 802.3 family of standards for so-called "client" connections, which work at distances from 100 m to 10 km.
Hereinafter, 100G transmission over DWDM networks within a 50 GHz channel is assumed. As a rule, DWDM (Dense Wavelength Division Multiplexing) allows roughly 40-88 channels in the C band (96 in the extended band) to be carried over distances from 80 km to several thousand kilometers; these are the so-called backbone networks (Optical Transport Network). More about such networks has already been written, for example,
here. For 100G transmission in OTN networks the OTU4 frame structure is used, but that is a whole other story...
So, how was such a speed possible?
To make transmission at such a speed possible, engineers had to use a few tricks to reduce the required symbol rate:
1. The use of polarization multiplexing.
2. The use of new modulation formats.
3. The use of coherent reception.
Note: It is important to understand that bit rate and symbol rate are different things, because one symbol can encode several bits. Symbol rate is measured in baud; more about this can be read here
ru.wikipedia.org/wiki/%D0%91%D0%BE%D0%B4 or found on Google.
Polarization multiplexing.
When light propagates, it has two perpendicularly oriented polarization components:

In previous-generation systems the same information was carried by both components at once, which led to problems with signal dispersion (in particular, polarization mode dispersion, PMD). By using each component to carry its own data stream, the required symbol rate is halved (a bit-rate multiplier of 2).
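For those who prefer to see the idea in code, here is a minimal Python sketch of polarization multiplexing: one aggregate bit stream is split into two tributaries carried on orthogonal X/Y polarizations, so each polarization runs at half the rate. The numbers and names are purely illustrative, not taken from any real transponder.

```python
import numpy as np

# Toy sketch: split an aggregate bit stream across two orthogonal polarizations.
bits = np.random.randint(0, 2, size=16)      # aggregate stream (illustrative length)

x_pol = bits[0::2]                           # even bits -> X polarization
y_pol = bits[1::2]                           # odd bits  -> Y polarization

print(len(bits), "bits at the full rate")
print(len(x_pol), "bits on X and", len(y_pol), "bits on Y (half the rate each)")

# The receiver separates the two polarizations and re-interleaves them:
recovered = np.empty_like(bits)
recovered[0::2], recovered[1::2] = x_pol, y_pol
assert np.array_equal(recovered, bits)
```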
Some vendors that did not have fast enough transmitters at the start of the 100G era got around this by multiplexing two subcarriers:

The bit-rate multiplier becomes 2 × 2, so an even lower symbol rate suffices, but transmission characteristics deteriorate and manufacturing complexity grows.
The drawback of this approach is tighter dispersion-compensation requirements compared to single-carrier dual-polarization systems (i.e., the effect of dispersion is even more destructive).
New types of modulation.
For speeds of 10G and below, simple modulation formats are usually used, collectively called OOK (On-Off Keying), for example CRZ, CSRZ, ODB (one bit per symbol):

For speeds above 10G, modulation formats that pack more bits into each symbol (and thus lower the required symbol rate) are used, for example DQPSK (Differential Quadrature Phase Shift Keying):

Since one transmitted symbol now encodes two bits, the bit rate is higher for the same symbol rate (two bits per symbol). The next step is the introduction of QAM modulation, in particular 16-QAM, but that is a topic for another article.
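To make the "two bits per symbol" point concrete, here is a small Python sketch of a DQPSK-style encoder: each pair of bits selects a phase increment, and one complex symbol carries two bits. The Gray-coded mapping is my own illustrative assumption, not a standard-mandated one.

```python
import numpy as np

# Toy DQPSK encoder: bit pairs select a phase *increment*, so the receiver
# only needs phase differences, not an absolute phase reference.
PHASE_STEP = {(0, 0): 0.0,
              (0, 1): np.pi / 2,
              (1, 1): np.pi,
              (1, 0): 3 * np.pi / 2}

def dqpsk_modulate(bits):
    """Return complex symbols; 2 bits -> 1 symbol, so symbol rate = bit rate / 2."""
    assert len(bits) % 2 == 0
    phase = 0.0
    symbols = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        phase += PHASE_STEP[(b0, b1)]
        symbols.append(np.exp(1j * phase))
    return np.array(symbols)

bits = [1, 0, 0, 1, 1, 1, 0, 0]
syms = dqpsk_modulate(bits)
print(len(bits), "bits ->", len(syms), "symbols")   # 8 bits -> 4 symbols
```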
As a result, a nominal 100G signal actually has to carry about 112 Gbit/s (because of headers and service overhead), which would naively require a 112 Gbaud symbol rate; thanks to the techniques above this drops to 28 Gbaud (28 × 2 polarizations × 2 bits per symbol = 112) with a single carrier, or 14 Gbaud with two subcarriers. Of course, this is a very rough explanation "on the fingers", but perhaps it will help someone understand where to dig further.
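The same back-of-the-envelope arithmetic, written out as a tiny Python helper (the 112 Gbit/s figure is the overheaded line rate mentioned above; everything else is just division):

```python
# Rough symbol-rate arithmetic for the configurations discussed above.
def symbol_rate_gbaud(line_rate_gbps, bits_per_symbol, polarizations, subcarriers=1):
    return line_rate_gbps / (bits_per_symbol * polarizations * subcarriers)

print(symbol_rate_gbaud(112, 1, 1))        # plain OOK, 1 polarization -> 112.0 Gbaud
print(symbol_rate_gbaud(112, 2, 2))        # (D)QPSK + 2 polarizations ->  28.0 Gbaud
print(symbol_rate_gbaud(112, 2, 2, 2))     # plus 2 subcarriers        ->  14.0 Gbaud
```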
Coherent reception
The use of coherent reception is mainly motivated by the problem of dispersion (broadening) of pulses in the fiber.
The main difference between coherent detection and conventional (direct-detection) systems is that with direct detection only a scalar quantity can be read, for example the current light intensity at the photodiode, with no phase information.

The basic idea of coherent reception is that two signals arrive at the receiver (4): the one from the source and the one from a local oscillator (2) (the so-called reference signal). The two signals interfere (3), so the photodetector sees an interference pattern rather than bare intensity, which means some information about the phase can be recovered.
In addition, after the light is converted to electricity at the photodiode (4), the errors caused by dispersion and noise are compensated (together with FEC), which makes it possible to transmit signals over hundreds of kilometers without regenerating them (3R regeneration). In effect, this technology took DWDM systems to a new stage of evolution thanks to extremely effective error compensation.
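As a purely illustrative Python sketch (not a model of a real receiver), here is why direct detection loses the phase while mixing with a local oscillator preserves it; the QPSK-like phases, the strong LO and the simplified 90-degree hybrid are all assumptions of mine:

```python
import numpy as np

# Toy comparison of direct detection vs. coherent detection with a local oscillator.
rng = np.random.default_rng(0)
data_phases = rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], size=8)  # QPSK-like phases
signal = 1.0 * np.exp(1j * data_phases)          # received optical field (toy model)
lo = 10.0 * np.exp(1j * 0.0)                     # strong local oscillator, phase 0

direct = np.abs(signal) ** 2                     # photodiode alone: intensity only
# Simplified 90-degree hybrid + balanced detection: outputs ~ Re and Im of signal*conj(lo)
i_branch = np.real(signal * np.conj(lo))
q_branch = np.imag(signal * np.conj(lo))
recovered_phase = np.mod(np.arctan2(q_branch, i_branch), 2 * np.pi)

print(direct)                                    # all ones: the phase is gone
print(np.allclose(recovered_phase, data_phases)) # True: phase recovered thanks to the LO
```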
As a result, it was this mix of engineering solutions that made such a necessary and promising technology possible. New challenges are already on the doorstep: transmission at 400 Gbit/s and 1000 Gbit/s (1 Tbit/s) is being actively researched, and I think that in a couple of years these technologies will move from the labs into practical use.
Honestly, I have never written articles before, so I apologize if some parts are a little messy; I will try to answer any questions that come up.