The video signal explained
Video started life as an analogue signal and remained so for over 50 years. However, digital television via satellite, cable and terrestrial transmission is now rapidly replacing analogue. Digital signals are less expensive to transmit because more information can be carried within a given bandwidth using digital multiplex and compression techniques. Digital signals also hold a significant advantage when it comes to video recording, editing and reproduction: whereas an analogue recording degrades with each generation of copying, a digital recording can maintain the original quality through numerous generations of copying. At first glance, picture quality is also superior to analogue, with no apparent interference, ghosting or other problems. On closer inspection, however, errors can be detected, especially in rapidly moving pictures, for example during a football match.
Analogue Video
Analogue video is commonly distributed as a composite signal, an almost universal connection between video cameras, VCR / DVD players and video monitors. When superimposed on a radio frequency carrier it forms the aerial signal transmitted to homes. It is also the most usual input connection found on a videoconference picture monitor. This and other signal types are described below:
- The composite signal: The composite signal is composed of three parts: the black and white information (Luminance), the colour information (Chrominance) and the synchronisation (Sync) signals, which ensure that the displayed pictures stay in close time synchrony with the transmission source. A drawback of composite signals is that the three elements have to be encoded so that they can be combined into a single signal, and the picture monitor then has to decode them in order to display an image. These coding / decoding processes introduce unwanted noise and distortion. Three mutually incompatible coding systems are used worldwide: PAL, NTSC and SECAM. The NTSC (National Television Standards Committee) system is used in North and South America and Japan, SECAM (SEquentiel Couleur Avec Memoire) is used in France and Eastern Europe, and PAL (Phase Alternating Line) is used in the UK and the rest of Europe.
- S-Video / YC: To reduce coding / decoding distortion the TV signal can be transmitted as an S-Video or YC signal. S-Video has two separate parts, Luminance (Y) and Chrominance (C), and so requires two separate connection channels between equipment in the chain. This involves less signal processing (decoding) in the picture monitor, which means less noise / distortion and thus a better picture.
- Alternatives: To reduce decoding noise even further, the TV information may be transmitted as three signals: a luminance channel (Y) and two colour component, or colour difference, channels, Red minus Y (R-Y) and Blue minus Y (B-Y). This minimises processing in the display monitor but requires three connection paths. An alternative method transmits Red (R), Green (G) and Blue (B) as separate components. A simple sketch of how the colour-difference components are derived from RGB appears after this list.
- SCART: The popular SCART interface includes three separate R G B channels together with composite and stereo sound signals, with the connected devices choosing the most appropriate video connection.
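As a rough illustration of the relationship between RGB and the colour-difference components described above, the sketch below applies the conventional luma weighting used in 601-based systems to derive Y, R-Y and B-Y from a single RGB pixel. It is a minimal example only; the scaling applied to the difference signals before transmission varies between systems and is omitted here.

```python
# Minimal sketch: deriving luminance and colour-difference components from an
# RGB pixel using the conventional luma weights (0.299, 0.587, 0.114).
# Scaling of the difference signals into transmitted chroma (e.g. Cr/Cb)
# is system-specific and not shown.

def rgb_to_colour_difference(r: float, g: float, b: float):
    """Take R, G, B in the range 0.0-1.0 and return (Y, R-Y, B-Y)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (black and white signal)
    return y, r - y, b - y                  # the two colour-difference signals

if __name__ == "__main__":
    # Pure red carries relatively little luminance but a large R-Y component.
    y, r_y, b_y = rgb_to_colour_difference(1.0, 0.0, 0.0)
    print(f"Y={y:.3f}, R-Y={r_y:.3f}, B-Y={b_y:.3f}")
```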
Digital Video Formats
Analogue video signals degrade when material is recorded or distributed. To overcome this, digital signals are now used throughout broadcasting. Another important advantage of digital signals is that massive compression is possible.
- CCIR-601 / 4:2:2: CCIR-601 was one of the first high-quality digital video standards to be introduced. It is also known as 4:2:2 or Y Cr Cb. It comprises Luminance (Y) and two Chrominance components (Cr and Cb), but as it requires a very wide bandwidth for transmission (around 166 Mbit/s), it is rarely found outside broadcast studio environments.
- 4:2:0: To reduce the required bandwidth, and thus cost, other formats were developed, including 4:2:0. This has the same picture rate and luminance resolution as 4:2:2 but a reduced colour resolution; the reduction is largely imperceptible because the human eye is much more sensitive to the luminance signal than to the chrominance signal. It is important because it forms the basis of the MPEG-2 coding (a video compression standard) used extensively for distributing digital television, including SKY and Freeview. More information on MPEG-2 coding is available at: Videoconferencing Standards.
- SIF and CIF: For less demanding applications SIF (Source Intermediate Format) was introduced. This has reduced frame rate and chrominance resolutions. An even lower quality format, CIF (Common Intermediate Format), which is a cross between the US and UK SIF formats, is used in videoconferencing. Other formats found in videoconferencing include Quarter CIF (QCIF), 4CIF and 16CIF (used for still images). QCIF has the lowest resolution and frame rate and is the baseline format used within IP and ISDN conferencing for compatibility. A big advantage of CIF and its derivatives is that they are independent of the originating television standards (PAL, NTSC etc.), which allows communication from the UK to the USA without standards conversion. A worked calculation of the approximate raw data rates of these formats follows this list.
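To put some numbers on the bandwidths quoted above, the sketch below works out approximate uncompressed data rates for the active picture of these formats, assuming 8-bit samples, the nominal frame rates shown and no blanking, audio or ancillary data. The 601 4:2:2 case reproduces the roughly 166 Mbit/s figure mentioned earlier; the 4:2:0 and CIF-family figures show how chroma subsampling and smaller picture sizes cut the raw rate before any compression is applied. The figures are illustrative only, and the conferencing formats are normally run at reduced frame rates and heavily compressed in practice.

```python
# Rough sketch: uncompressed data rates for the digital formats discussed above.
# Active picture only, 8 bits per sample; real systems add blanking, audio and
# ancillary data, and apply compression (e.g. MPEG-2) before transmission.

def raw_mbit_per_s(width, height, fps, chroma_w_div, chroma_h_div, bits=8):
    """Uncompressed rate in Mbit/s for one luma plane plus two chroma planes
    subsampled by the given horizontal/vertical divisors."""
    luma_samples = width * height
    chroma_samples = 2 * (width // chroma_w_div) * (height // chroma_h_div)
    return (luma_samples + chroma_samples) * fps * bits / 1e6

formats = {
    # name: (width, height, frames/s, chroma horizontal divisor, vertical divisor)
    "601 4:2:2":  (720, 576, 25, 2, 1),   # chroma halved horizontally only
    "601 4:2:0":  (720, 576, 25, 2, 2),   # chroma halved in both directions
    "CIF 4:2:0":  (352, 288, 30, 2, 2),   # videoconferencing, up to 30 frames/s
    "QCIF 4:2:0": (176, 144, 30, 2, 2),   # quarter-CIF baseline format
}

for name, params in formats.items():
    print(f"{name:11s} ~{raw_mbit_per_s(*params):6.1f} Mbit/s uncompressed")
```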
The main digital formats are summarised below.