
TV fields in computer graphics

“It has been established experimentally that in order for the human eye to see a smoothed and continuous image on a TV screen, the frequency of changing pictures must be at least 50 times per second (50 Hz) ...”

1. The Basics of Television Image Formation


The image on a TV screen (and on a CRT computer monitor) is drawn line by line. This is done by the so-called "electron gun", which emits a stream of electrons that bombard the inside of the kinescope screen, coated with phosphor, making the desired areas glow with the desired color and brightness for some time after the hit. Unlike a film projector, which shows a whole assembled picture at once, the image here is built up by very fast drawing of hundreds of thin lines in a strict order, one after another.

It has been established empirically that for the human eye to perceive a continuous image on the TV screen, rather than a series of flashing pictures, the pictures must change at least 50 times per second (50 Hz). This is related to the so-called afterglow time of the phosphor coating the inside of the kinescope screen, whose glow is what we actually see. If the rate drops below this, then by the time the next frame is drawn, part of the phosphor on the screen will have lost brightness and begun to fade. Visually this looks like a constant pulsation of the image brightness, and I hope it is clear what discomfort the viewer would experience. So it would seem that the signal must be sent to the TV screen at 50 frames per second; yet in fact only 25 are fed to our screens, and we notice no particularly visible brightness flicker. How so?! To answer that, you need to understand how the screen scan works.

Note: all figures given hereinafter are valid for the PAL and SECAM standards. NTSC uses a 30-frame scan, but that standard is not discussed here.

2. How the screen scan works


There are two types of television scanning (ways of drawing the raster of a television image with an electron beam):

Progressive - the lines of the image are drawn in order (1, 2, 3, 4, 5 ... 625). It is used in specialized equipment and in computer monitors (for example, the path from the system unit to the monitor). Each frame is drawn in a single pass (no half-frames). The advantage of this scan is the simplicity of forming and processing the signal; the drawback is strong brightness flicker at rates below 60 Hz. Many people have probably noticed how quickly the eyes tire when working at a computer monitor refreshing at 60 or even 75 Hz. Rightly so: while the beam travels from the top of the screen to the bottom, the top noticeably loses its energy charge and starts to fade ... and the whole picture begins to flicker. It is for exactly this reason that computer monitors use high refresh rates (from 85 to 150 Hz).

Interlaced - here the beam of the kinescope first draws all the odd lines on the screen. Then comes the so-called "retrace": the beam is sent back up to line 2 and proceeds to draw all the even lines in between the already-drawn odd ones (still glowing from the electron bombardment), finishing its pass in the lower right corner of the kinescope on line 624. Superimposing these two half-frames on each other produces a full frame. Compared with progressive scanning, the screen here is illuminated twice per frame, which significantly smooths out the flicker of the picture as a whole. In other words, with interlaced scanning you can halve the frame rate without much damage to viewing comfort. Cunningly devised, isn't it?

Do you see where this is going? Those two passes of the beam, which together make up a whole frame, are what are called "half-frames" or "fields". To spell it out once more: the first half-frame (the first field) is lines 1, 3, 5, 7 ... 625; the second half-frame (the second field) is lines 2, 4, 6, 8 ... 624.
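The split into fields is easy to reproduce in code. Below is a minimal sketch (NumPy assumed; the function names are mine, not from any broadcast API) that separates a frame, stored as an array of lines, into its two fields and weaves them back into a full frame:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split a full frame into its two fields (half-frames).

    Lines are numbered from 1 as in the article, so the first field
    (lines 1, 3, 5, ...) corresponds to array rows 0, 2, 4, ...
    """
    first = frame[0::2]   # first field: lines 1, 3, 5, ...
    second = frame[1::2]  # second field: lines 2, 4, 6, ...
    return first, second

def weave_fields(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Interleave two fields back into a full frame."""
    h = first.shape[0] + second.shape[0]
    frame = np.empty((h,) + first.shape[1:], dtype=first.dtype)
    frame[0::2] = first
    frame[1::2] = second
    return frame

# A toy six-line "frame" where each line stores its 1-based number:
frame = np.arange(1, 7).reshape(6, 1)
first, second = split_fields(frame)
print(first.ravel())   # [1 3 5]
print(second.ravel())  # [2 4 6]
```

Weaving the two fields back together restores the original line order, which is exactly what the kinescope does optically over two passes of the beam.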

The terms "first" and "second" indicate the dominant field in a video signal, i.e. the field from which a full frame begins to form. If someone says the movie "was delivered with the first field", it means that each frame of the material begins with the first field (with an odd line).

IMPORTANT!
But here begins something murky that is not entirely clear even to me: for all the obvious correctness of using the first field as the dominant one, the second field is also used as a dominant field in video. Where this grows from, I do not know, but it regularly causes users serious headaches. Especially now, with the active spread of the DV standard and the capture boards built around it, which work with the second field. It becomes even stranger once you realize that such boards still convert the signal to the first field at their video output, because our TV sets (including the most ancient tube ones) work with the first field. Where the logic is here, I find it hard to explain, but many years ago I came across an article about the history of the DV format. The "beloved" Bill Gates and his company, trying to make it the main video format for Windows, had a hand in its development. His company is American, and in America the TV standard is NTSC, whose first field is precisely our second (pardon the pun). Whether this is true or not, I do not know, but such an explanation could well account for the resulting absurdity.

It is clear that interlaced scanning deserves to die! It is harder to process than progressive scanning, and it causes plenty of trouble when converting from one field order to another. Nevertheless, every TV set in the world works with it (at least for broadcast television). So if everything is so bad, how did things get so entrenched?

3. The history of the appearance of fields


It all began in the middle of the 20th century, when television was being born and the radio-frequency spectrum was being divided up. The spectrum is far from elastic: there are hard limits on the bandwidth (number of channels) available to the various services (police, amateur radio, broadcast radio, aviation, taxi, television, etc.), plus the limitations of the component base of the time, which made it impossible to build microwave receivers and transmitters. In short, even then designers understood that the frequency range allocated to television would soon prove too small.

I do not know exactly how much spectrum was allocated to the entire metre-wave TV band, but I do know that a single television channel was originally supposed to occupy a band of about 12 MHz. Processing and transmitting such a wideband signal is very difficult and expensive. Moreover, it reduces the number of TV channels that can be squeezed into the frequency range allocated for broadcasting, especially considering that channels cannot be packed right next to each other, since mutual interference and spurious harmonics appear.

Engineers were puzzled. And how could they not be, when in that future you and I would have been able to watch only 4-5 channels instead of a couple of dozen. The game was not worth the candle. There was one way out: to reduce the frequency band occupied by each individual channel (that 12 MHz one). By halving the frame rate (down to 25 from the 50 required by the phosphor, remember?) and introducing fields, it was eventually narrowed to 6 MHz. It was an elegant and beautiful solution.

Fortunately, these problems already seem to be fading into the background, and the day is near when television switches from analog to digital. Then thousands of TV channels can be packed into the same frequency range, even at HD resolution, once they are digitized ... and the fields can be forgotten like a bad dream. I do not know when this will happen in the "great and mighty", but in Japan and Europe many TV stations, if they do not already broadcast digitally around the clock, are holding dress rehearsals.

In the meantime, while everything remains as it is, let us fix a couple of conclusions for ourselves:

Conclusion 1: The main advantage of interlaced scanning over progressive scanning is that, with the same perceived image-change frequency (25 frames x 2 fields = the required 50 Hz) and the same number of lines (625 per full frame), the full-frame rate is halved, and with it the band of broadcast frequencies occupied by the TV signal.
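The arithmetic behind this conclusion can be checked in a couple of lines. A sketch (the variable names are mine; the resulting 15625 Hz is the well-known PAL line frequency that falls out of exactly these numbers):

```python
# PAL timing: interlacing keeps the 50 Hz flicker rate the phosphor
# needs, while halving the full-frame rate (and hence the bandwidth).
lines_per_frame = 625
frames_per_second = 25

fields_per_second = frames_per_second * 2                # 50 Hz refresh
line_frequency_hz = lines_per_frame * frames_per_second  # lines drawn per second

print(fields_per_second)   # 50
print(line_frequency_hz)   # 15625
```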

Conclusion 2: For high-quality, comfortable viewing, the video signal fed to the television transmitter must carry information not only about the number and rate of frames, but also about the half-frames! Naturally, this is only achievable if all the computer graphics and all the video material that go on air also contain this half-frame information. Working without fields, a lamer-designer unwittingly shows the viewer a frame rate half of what could actually be shown. Pretty stupid, right?

It should be noted that this rule mainly matters for fast-moving elements and camera pans. On static shots the absence of fields will not be noticeable at all, but who broadcasts static shots on television?.. By the way, all broadcast CG is produced with fields, so measure yourself against the professionals.

4. Computer boards for video capture / output


Almost all non-linear editing boards known to me can capture and output video with fields. The exception is the Miro Video DC1 board, but it is hardly used anywhere any more, and it worked at 1/4 of normal television resolution with a square pixel (384x288), so everything described below does not apply to it.

Video is captured either in the editing application through the input device driver, or with the board's own utilities. Further work with the material takes place on the timeline of the video editor, and whether the output clip contains fields depends on the project settings. Recently, input over the 1394 interface (FireWire/iLink) has become widespread, but the capture and processing method is the same.

Before the DV format and the boards built for it appeared (the golden era of non-linear editing, the mid-90s), everything was simple enough. The overwhelming majority of boards of that time worked with MJPEG compression and had the first field as dominant. Bright representatives of this class: Truevision Targa 1000/2000; Miro Video DC30; Matrox DigiSuite; DPS Perception. Back then there were practically no problems: in most cases video was passed from studio to studio with the first field, and conversion of incoming material to "one's own" format was done in Avid MCXpress or Adobe Premiere by simply re-rendering to one's own codec. On the other side stood boards working with the second field, such as the Fast AV Master and Miro Video DC20. Every now and then, though, someone would bring in a movie in their format, and then the trouble began.

One could talk at length about the difficulties that arose (and still arise) when transferring video from one studio to another. The problem is not so much the codec as the differing resolutions, field dominance, frame size and cropping. Very often, simply flipping the fields in the most popular editing program, Adobe Premiere, cannot solve the problem properly. You need to bring in heavy artillery such as Adobe After Effects or Eyeon Digital Fusion in order to correctly change the field order, change the frame resolution, remove cropping (who still uses it, honestly), and so on.
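One common trick for changing field dominance, sketched below under my own naming (this is not how Premiere or After Effects actually implements it, just an illustration of the idea), is to shift the picture vertically by one line: lines that belonged to the "upper" field land on "lower" positions and vice versa, while each field keeps its temporal order:

```python
import numpy as np

def swap_field_dominance(frame: np.ndarray) -> np.ndarray:
    """Shift the frame down one line, so that lines of the first field
    land on second-field positions and vice versa. The topmost line is
    duplicated to keep the frame height unchanged."""
    shifted = np.empty_like(frame)
    shifted[1:] = frame[:-1]   # move every line down by one
    shifted[0] = frame[0]      # pad the top by repeating line 1
    return shifted

frame = np.arange(1, 7).reshape(6, 1)        # lines numbered 1..6
print(swap_field_dominance(frame).ravel())   # [1 1 2 3 4 5]
```

Simply relabeling the field order without this shift swaps the temporal order of the fields, which is exactly the jitter described later with Figure 3.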

With the advent of the DV standard and its second field, things became both worse and better. Worse, because the park of old boards working with the first field is still extensive, and there is no reason for it to die off quickly. To this day there are many expensive and simply excellent boards (Truevision Targa, Matrox DigiSuite, DPS Perception, etc.) working in MJPEG or uncompressed formats, designed for professional use and giving far higher quality than DV. Why DV got the second field as dominant, I do not know. But, as mentioned above, I have heard the opinion that it happened at Microsoft's instigation: the Americans quite logically made the new standard for themselves and their NTSC format. Be that as it may, the rest of the world still has to untangle this mess.

And what became better? Unification! Now (2006 - author's note) the acuteness of the problem is slowly fading: DV keeps penetrating low-budget studios and is becoming the de facto standard, as S-VHS was some 10 years ago. In effect there is now one universal codec, Microsoft DV; a single frame size; a single video bit rate; uniform sound parameters. In other words, transferring material from one studio to another has become the simplest of matters, requiring neither conversion time nor mental strain from video editors.

5. Work with video


With video footage everything is relatively easy. Video cameras were developed for television and work with fields; the video they record on the medium contains field information, and it is quite obvious that in the video editor you must also work with the captured material using fields. For that, it is enough to make the correct software settings just once. When working in the project with various built-in filters and effects, you can then be sure that the output will be a correct video signal.

Still don't see what I am getting at? That is quite understandable if you have never worked with fields before, and your whole acquaintance with them boils down to cursing the studio whose footage began to strobe or comb on the TV screen.

Below are two frames of video containing fields. The screenshots were taken from the COMPUTER screen. It is precisely the computer monitor, with its progressive scan, that lets you see and examine the essence of interlaced scanning.

The cars in the frame show noticeable stripes: these are the television fields. So why are they visible on the cars, but not on the people and the grass?

The answer is simple.

The car moves through the frame quickly. While the beam of the kinescope traced the odd lines (which took it 1/50 of a second), the car managed to come a little closer, so when the second half-frame was drawn its position was different. That is exactly how the camera captured the video and split it into fields, and that is exactly how it should be displayed on a TV screen. The movement of the people and the grass was small (if any), so the comb on them is almost invisible. But such a picture is seen only on a computer monitor, which has a progressive scan; displayed on an ordinary TV screen, there would be no stripes: moving objects would be smooth, and the objects themselves whole.

I will try to explain the same thing with animated pictures. For simplicity and convenience I took only 4 lines (2 lines per field) and only 4 frames, but that will be quite enough. So, we move a square across the screen from left to right. The dominant field in the material is the first.

(Note: for clarity, the resulting picture of odd and even lines is shown all at once, not in turn. The reader should keep in mind that the drawing sequence is first field, then second.)

Figure 1 - half-frames (fields) are present.
Here is that same "comb". You can see how the square is split into lines as it moves ... and this happens at the scale of the entire television raster! In one frame the beam of the kinescope makes two passes across the screen, and the contents of these passes are DIFFERENT (here it is, the key point, unlike a progressive signal!). Each subsequent line, as it were, continues the movement begun in the previous one (and so on until the shot changes).

Thanks to the inertia of vision, a person sees on the screen not a jerky movement of the square 25 times a second with brazen jumps to the side, but ... (how to put it ...) a smoother, "flowing" movement consisting of 50 phases, perceived as smooth gliding. A pure optical deception, just think!

Consider that this square is our fast-moving car from the screenshots above. It would be quite another matter if the square did not move (like the people or the grass): then we would see no comb in the image at all.
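The comb on the moving square can be simulated numerically. A toy sketch (NumPy assumed; the sizes and names are my own choices): render two phases of a moving square 1/50 s apart, then weave the odd lines of the first phase with the even lines of the second, exactly as interlaced capture does:

```python
import numpy as np

def square_frame(width: int, height: int, x: int, size: int = 2) -> np.ndarray:
    """One progressive frame with a square at horizontal position x."""
    f = np.zeros((height, width), dtype=int)
    f[2:2 + size, x:x + size] = 1
    return f

# Two phases 1/50 s apart: the square moves 2 pixels between fields.
phase1 = square_frame(10, 6, x=2)   # captured during the first field
phase2 = square_frame(10, 6, x=4)   # captured during the second field

# Weave: odd lines from phase 1, even lines from phase 2.
interlaced = np.empty_like(phase1)
interlaced[0::2] = phase1[0::2]
interlaced[1::2] = phase2[1::2]

# On a progressive display the square's edges now disagree between
# adjacent lines: this is the "comb" visible on the moving cars.
print(interlaced)
```

In the printed frame the square's rows are horizontally offset from one another; on a TV the two fields are shown 1/50 s apart and the eye sees a single square in smooth motion instead.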

(Note: I have involuntarily touched on the subject of encoding (compressing) video material. Many compression algorithms are based on analyzing the position of such objects in the frame (so-called "blocks"). So in the first case, because of the differing content, all 4 frames would have to be encoded, while in the second case only one, the first; for the subsequent frames only a reference to it need be transmitted during playback. This saves encoding time and disk space.)

Figure 2 - half-frames (fields) are missing.
Now consider the variant without fields. This animation shows that the square moves in the same time intervals as a whole, discretely: it was here, and now it is there. No transitional phases, no splitting into lines. Having moved to a new place, it simply sits there for 1/25 of a second, whereas with interlaced drawing it would have stood motionless "in one pose" for only 1/50 of a second.

If such movement is not somehow stylized as film, by blurring the moving objects or blending adjacent frames, the viewer will see a slight but unpleasant strobe.

To consolidate the material, I suggest an experiment. Create (for example, in Adobe Photoshop) a file at the screen resolution of your video board. Draw a white square of roughly 50 x 50 pixels in the center and save the file. Import it into a video editor and animate a horizontal movement from left to right, so that the square starts moving at the left edge of the TV monitor and finishes at the right in 2 seconds. Look through the project (and output file) settings and find the option that enables fields. Which field to choose depends on your board, but if you work in DV you will most likely have the second field (lower, bottom, second). Render and watch the finished video on the TV monitor: the square should travel smoothly and crisply from edge to edge. If so, you have seen a correct image with fields, consisting of as many as 100 static pictures (25 frames x 2 fields per frame x 2 seconds). Now render the same clip with fields turned off and watch it again: the square will still cross the screen, but with a strobe, losing sharpness and shape, and you will have watched only 50 static pictures. Change the field to the other one, and you will see the square start to jitter quite unchildishly.
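The counts from this experiment are easy to verify. A sketch of the arithmetic only (the function and parameter names are mine):

```python
def motion_phases(duration_s: int, fps: int = 25, with_fields: bool = True) -> int:
    """Number of distinct positions a moving object passes through.

    With fields, motion is sampled once per field (2 x fps);
    without them, once per full frame.
    """
    samples_per_second = fps * 2 if with_fields else fps
    return samples_per_second * duration_s

print(motion_phases(2, with_fields=True))   # 100 positions over 2 seconds
print(motion_phases(2, with_fields=False))  # 50 positions over 2 seconds
```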



Here are photographs of the TV monitor screen taken with a long exposure. Although the lines forming the image are not visible here, you can see the discreteness of the square's movement on the screen. Note the differences between Figures 1 and 2: you can see how the square's step differs. In Figure 1 the movement consists of two small steps (the first field is drawn for 1/50 of a second, then the second field for the same time, and during that time the square manages to shift to the right). In Figure 2, over the same time, the square makes a single big jump lasting 1/25 of a second. Now guess in which case the movement will be smoother. As for Figure 3, here our square literally jumps about, because the field order is violated when the video is output from the board. In this case material with the second field was shown, and I tried to output it through a board that works with the first (Matrox DigiSuite LE).

Note: if you did everything correctly and are sure of the settings, but saw no difference on the TV monitor (crooked hands?) or the difference did not convince you (peculiarities of eyesight?), then I am afraid this can no longer be cured.

6. Work with computer graphics


Here everything is much more complicated, and we will be sorting it out for a while. True, everything written below concerns only those who have nevertheless accepted the rules of the game and, if only vaguely, begun to grasp the difference between working with and without fields, and are ready to continue their studies.

So, it is clear to us that all material going to TV must be served with fields. But while in footage shot by cameras they are there from the very start, generated forcibly at the hardware level, in computer graphics we will have to introduce them ourselves, just as forcibly. And you need to track the presence of fields carefully throughout the work so as not to lose them anywhere (incidentally, this is probably the most common brake on working with fields: some comrades are simply too lazy to bother, preferring to feed the public 25 strobing pictures instead of 50 much smoother ones).

A typical workflow for creating a graphics clip looks like this:
1. animating the objects in the 3D editor and rendering the scene to an image sequence;
2. importing the sequence, processing it and adding objects in the compositing program;
3. final rendering of the composition and recording to tape or to the on-air server.
At every stage of this work you must keep an eye on the half-frames. Lose them in just one place and hello: you get a fieldless clip. It is recommended to keep the same field order in all programs, although in most cases incorrectly rendered source material can be fixed without re-rendering it in the 3D editor.

Now about the difficulties. To render graphics with fields in a 3D editor, ticking the usual "Use fields" checkbox in the render settings may not be enough. 3ds Max, for example, also requires the correct setting in Preference Settings / Rendering / Field Order, where you must select Odd or Even.

7. Working without fields, and when fields are not needed.


Strange as it may seem, situations in which fields are not merely superfluous but actually harmful are not so rare.

The first example is working with source material shot on film. Indeed, why use fields with material that never had them? On film, fast-moving objects in the frame have no sharp boundaries: their movement carries natural blur and lacks the crispness of computer graphics. Look at the screenshots and note how the character of the film material differs from the video material shown in the screenshots on the previous page (the areas to pay attention to are marked with a white frame). Conversely, computer graphics composited into such film material must be rendered without fields, and in such a way that they do not differ in look from the film-blurred source. This is achieved in various ways, for example by applying motion blur to dynamic objects, so that computer objects in the frame are also blurred in motion and look very film-like. Here fields would sooner harm the work than help it.

The second example is when sequences prepared in advance are heavily reworked at the compositing stage: all sorts of transformations, rotations, changes of perspective and playback speed, warps of every kind, and so on.


DimSUN

Source: https://habr.com/ru/post/13811/

