
Terms and Abbreviations in Television Technology

The vast majority of these terms and abbreviations come from English - English is simply the world language of the entire TV business. Even the dominant Japanese manufacturers had to swallow that, and the rulers in Beijing of the People's Republic of China (PRC) had to learn it too.

Yet the English language also has its drawbacks. For Americans, "lines" are just lines, whereas in German the TV standards are specified in "Zeilen" (scan lines) while the effective resolution of the picture is measured in "Linien" (resolution lines). Occasionally English makes this distinction by calling the latter "TV lines".

Americans also love to abbreviate everything to three or four letters, which frequently leads to collisions that then need explaining. On top of that, established abbreviations have been replaced over time, e.g. CCIR by ITU-R.

Part of this content is taken from an English book

The "Digital Fact Book", first published by Quantel in 1987, is very well known. We refer to the 2008 edition, but where terms have changed in the meantime we restore their original meanings.

Digital technology in TV began around 1987 as a small island

so the author writes about starting his fact book in 1987. We naturally want to link it with the technological innovations presented at the Montreux technology exhibitions, so that the chronological development of this technology also becomes visible.


Digital terms (numerical)


1000/1001 (of little practical interest in PAL countries)

The nominal 30 frames/60 fields per second of NTSC color television is usually multiplied by 1000/1001 (= 0.999) to produce slightly reduced rates of 29.97 and 59.94 Hz.
The reason for the 1000/1001 offset is based in monochrome legacy. Back in 1953, the NTSC color subcarrier was specified to be half an odd multiple (455) of line frequency to minimize the visibility of the subcarrier on the picture.
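The 1000/1001 arithmetic above can be checked directly; a small sketch in Python (variable names are illustrative):

```python
# NTSC color rates: nominal rates scaled by 1000/1001 (~0.999)
nominal_frame_rate = 30
frame_rate = nominal_frame_rate * 1000 / 1001   # 29.97002997...
field_rate = 2 * frame_rate                     # 59.94005994...

# Subcarrier: half an odd multiple (455) of the line frequency
line_frequency = 525 * frame_rate               # ~15,734.266 Hz
subcarrier = 455 / 2 * line_frequency           # ~3.579545 MHz

print(round(frame_rate, 2), round(field_rate, 2))  # 29.97 59.94
```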

10-bit lin

A type of digital sampling of analog images that creates 10-bit (2^10 = 1024 possible levels) numbers to describe the post-gamma-corrected analog brightness levels of an image. "Lin", short for "linear", means the levels are assigned equally to the levels of the post-gamma-corrected analog signal they describe.

So an LSB change describes the same change in level if it is in a bright area or a dark area of the picture. Most professional HD and some SD television is sampled this way according to ITU-R BT.601 and 709.

10-bit lin sampling allows good quality to be maintained through TV production and post production where the processes can make particular demands outside the range of normal viewing, and so produce good results for viewers. However if color grading is required then the useful wide dynamic range that can be described by 10-bit log would be preferable.
See also: 10-bit log, gamma
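The idea of equally spaced levels can be sketched as uniform quantization; a minimal illustration (hypothetical helper, assuming a 0.0-1.0 signal range):

```python
def quantize_lin(level, bits=10):
    """Uniformly map a gamma-corrected level in [0.0, 1.0] to an n-bit code.

    'Lin' sampling: an LSB step represents the same level change
    everywhere, in dark and bright picture areas alike.
    """
    codes = 2 ** bits                       # 1024 possible levels for 10-bit
    return min(int(level * codes), codes - 1)

print(quantize_lin(0.0), quantize_lin(0.5), quantize_lin(1.0))  # 0 512 1023
```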

10-bit log

This usually refers to a 10-bit sampling system that maps analog values logarithmically rather than linearly. It is widely used when scanning film images which are themselves a logarithmic representation of the film’s exposure. This form of sampling is now available directly from some digital cinematography cameras.

13.5 MHz (the origin of the "4" in 4:2:2)

This is the sampling frequency of luminance in SD digital television. It is represented by the 4 in 4:2:2.

The use of the number 4 is pure nostalgia (just so the "child" has a name) as 13.5 MHz is in the region of 14.3 MHz, the sampling rate of 4 x NTSC color subcarrier (3.58 MHz), used at the very genesis of digital television equipment.

Reasons for the choice of 13.5 MHz belong to politics, physics and legacy. Politically it had to be global and work for both 525/60 (NTSC) and 625/50 (PAL) systems.

The physics is the easy part; it had to be significantly above the Nyquist frequency so that the highest luminance frequency, 5.5 MHz for 625-line PAL systems, could be faithfully reproduced from the sampled digits - i.e. sampling in excess of 11 MHz - but not so high as to produce unnecessary, wasteful amounts of data. Some math is required to understand the legacy.

The sampling frequency had to produce a static pattern on both 525 and 625-line standards, otherwise it would be very complicated to handle and, possibly, restrictive in use. In other words, the frequency must be a whole multiple of the line frequencies of both standards.

The line frequency of the 625/50 system is simply 625 x 25 = 15,625 Hz
(NB 50 fields/s makes 25 frames/s)

So line length is 1/15,625 = 0.000064 s, or 64 µs.

The line frequency of the 525/60 system is 525 x 30 x 1000/1001 = 15,734.266 Hz, giving a line length of about 63.5 µs. The lowest frequency that is a whole multiple of both line frequencies is 2.25 MHz.

Now, back to the physics. The sampling frequency has to be well above 11 MHz, so 11.25 MHz (5 x 2.25) is not enough. 6 x 2.25 gives the sampling frequency that has been adopted - 13.5 MHz.

Similar arguments have been applied to the derivation of sampling for HD. Here 74.25 MHz (33 x 2.25) is used.
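The whole-multiple relationship can be verified numerically; a sketch using exact fractions:

```python
from fractions import Fraction

line_625 = Fraction(625 * 25)                # 15,625 Hz
line_525 = Fraction(525 * 30 * 1000, 1001)   # ~15,734.266 Hz
fs = Fraction(13_500_000)                    # 6 x 2.25 MHz

# Samples per total line must be whole numbers for a static pattern
print(fs / line_625)   # 864 samples per total 625-system line
print(fs / line_525)   # 858 samples per total 525-system line
print(fs / 2_250_000)  # 6 - i.e. 13.5 MHz is 6 x 2.25 MHz
```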

14 : 9 (aspect ratio)

A picture aspect ratio that has been used as a preferred way to present 16:9 images on 4:3 screens. It avoids showing large areas of black above and below letterboxed pictures, yet includes more of the 16:9 image than a 4:3 cut would. It is commonly used for analog transmissions that are derived from 16:9 digital services.

16 : 9 (aspect ratio)

Picture aspect ratio used for HDTV and some SDTV (usually digital). See also: 14:9, 4:3, Widescreen


24P

Refers to 24 frames-per-second, progressive scan. 24 f/s has been the frame rate of motion picture film since talkies arrived.

It is also one of the rates allowed for transmission in the DVB and ATSC digital television standards - so they can handle film without needing any frame-rate change (3:2 pull-down for 60 fields/s ‘NTSC’ systems or running film fast, at 25f/s, for 50 Hz ‘PAL’ systems).

24P is now accepted as a part of television production formats - usually associated with high definition 1080 lines to give a ‘filmic’ look on 60 Hz TV systems.

A major attraction is a relatively easy path from this to all major television formats as well as offering direct electronic support for motion picture film and D-cinema. However, the relatively slow refresh rate has drawbacks.

For display it needs to be double shuttered - showing each frame twice to avoid excessive flicker, as in cinema film projection, and fast pans and movements are not well portrayed. Faster vertical refresh rates are preferred for sports and live action.

See also: 24PsF, 25P, 3:2 Pull-down, ATSC, Common Image


25P

Refers to 25 f/s, progressive scan. Despite the international appeal of 24P, 25P is widely used for HD productions in Europe and other countries using 50 Hz TV systems. This is a direct follow-on from the practice of shooting film for television at 25 f/s.


2K

See Film formats

3:2 Pull-down (a.k.a. 2:3 Pull-down)

relevant only for NTSC
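3:2 (or 2:3) pull-down maps the 24 film frames per second onto 60 video fields per second by copying film frames alternately to two and then three fields. A minimal sketch of that cadence (helper name is illustrative):

```python
def pulldown_23(film_frames):
    """2:3 pull-down: copy film frames alternately to 2 then 3 fields,
    turning 24 film frames/s into 60 video fields/s."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

# 4 film frames become 10 fields: A A B B B C C D D D
print(pulldown_23(['A', 'B', 'C', 'D']))
```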


4:1:1

This is a set of sampling frequencies in the ratio 4:1:1, used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal.

The 4 represents 13.5 MHz (74.25 MHz at HD), the sampling frequency of Y, and the 1s each represent 3.75 MHz (18.5625 MHz at HD) for R-Y and B-Y (i.e. R-Y and B-Y are each sampled once for every four samples of Y).

With the color information sampled at half the rate of the 4:2:2 system, this is used as a more economic form of sampling where video data rates need to be reduced.

Both luminance and color difference are still sampled on every line but the latter has half the horizontal resolution of 4:2:2 while the vertical resolution of the color information is maintained. 4:1:1 sampling is used in DVCPRO (625 and 525 formats), DVCAM (525/NTSC) and others.


4:2:0

A sampling system used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. The 4 represents the 13.5 MHz (74.25 MHz at HD) sampling frequency of Y while the R-Y and B-Y are sampled at 6.75 MHz (37.125 MHz) - effectively on every other line only (i.e. one line is sampled at 4:0:0, luminance only, and the next at 4:2:2).

This is used in some 625-line systems where video data rate needs to be reduced. It decreases the overall data by 25 percent against 4:2:2 sampling and the color information has a "reasonably even" resolution in both the vertical and horizontal directions.

4:2:0 is widely used in MPEG-2 coding, meaning that the broadcast and DVD digital video seen at home is usually sampled this way. 625 DV and DVCAM coding also use 4:2:0. However, the different H and V chroma bandwidths make it inappropriate for post applications.

4:2:2 (everyone uses it, few explain it)

A ratio of sampling frequencies used to digitize the luminance and color difference components (Y, R-Y, B-Y) of an image signal.

The term 4:2:2 denotes that for every four samples of the Y luminance, there are two samples each of R-Y and B-Y, giving less chrominance (color) bandwidth in relation to luminance.

This compares with 4:4:4 sampling, where the same full bandwidth is given to all three channels - in this case usually sampled as RGB.

(CCIR stands for "Consultative Committee for International Radio" and is the old name of what is now the ITU-R.)

The term "4:2:2" originated from the ITU-R BT.601 digital video sampling where 4:2:2 sampling is the standard for digital studio equipment.

The terms ‘4:2:2’ and ‘601’ are commonly (but technically incorrectly) used synonymously in TV. For SD the sampling frequency of Y is 13.5 MHz and that of R-Y and B-Y is each 6.75 MHz, providing a maximum color bandwidth of 3.37 MHz - enough for high quality chroma keying. For HD the sampling rates are 5.5 times greater, 74.25 MHz for Y, and 37.125 MHz for R-Y and B-Y.

The origin of the term is steeped in digital history and should strictly only be used to describe a specific format of "standard definition" digital television sampling.

However, it is widely used to describe the sampling frequency ratios of image components (Y, B-Y, R-Y) of HD, film and other image formats.

See also: 13.5 MHz, Co-sited sampling, Digital keying, ITU-R BT.601, ITU-R BT.709, Nyquist
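The sampling ratios translate directly into raw data rates; a sketch assuming 10-bit samples (helper name is illustrative):

```python
def raw_rate_mbps(y_mhz, ratio=(4, 2, 2), bits=10):
    """Raw video data rate in Mb/s for a given Y sampling frequency,
    sampling ratio and bit depth (blanking intervals included)."""
    y, cb, cr = ratio
    samples_mhz = y_mhz * (y + cb + cr) / y   # total Msamples/s
    return samples_mhz * bits

print(raw_rate_mbps(13.5))             # 270.0 - the familiar SD-SDI rate
print(raw_rate_mbps(74.25))            # 1485.0 - the HD-SDI rate
print(raw_rate_mbps(13.5, (4, 1, 1)))  # 202.5
```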


4:2:2:4

This is the same as 4:2:2 but with the key signal (alpha channel) included as the fourth component, also sampled at 13.5 MHz (74.25 MHz at HD).
See also: Dual link

4:3 (aspect ratio)

The aspect ratio of traditional PAL and NTSC television pictures, originally chosen to match 35mm film. All broadcast television pictures were 4:3 until the introduction of high definition, when a wider image was considered to be more absorbing for viewers.

For display tube manufacturers the most efficient aspect ratio would be 1:1 - square - as this is inherently the strongest, uses less glass and weighs less.

16:9 tubes are more expensive to produce. Such constraints do not apply to panels based on LCD, Plasma or SED technologies.

4:4:4 (in effect, plain ordinary RGB)

One of the ratios of sampling frequencies used to digitize the luminance and color difference components (Y, B-Y, R-Y) or, more usually, the RGB components of a video signal.

In this ratio there is always an equal number of samples of all components. RGB 4:4:4 is commonly used in standard computer platform-based equipment, when scanning film or for high-end post including that used for cinematography.

In the converged media world, big screen requirements for cinema demand a new high level of picture quality. Film is commonly scanned in RGB for digital intermediate and effects work, and recorded directly to disks. The signal is then kept in the RGB form all the way through the DI process to the film recorder - making the best use of the full RGB data.

For the rapidly growing market of digital cinema exhibition the DCI has recommended X´Y´Z´ chromaticity which can be derived from RGB using a 3D LUT.


4:4:4:4

As 4:4:4, except that the key signal (a.k.a. alpha channel) is included as a fourth component, also sampled at 13.5 MHz (74.25 MHz at HD).


4fsc

A sampling rate locked to four times the frequency of color subcarrier (fsc). For example, D2 and D3 digital VTRs, little used today, sample composite video at the rate of 4 x color subcarrier frequency (i.e. 17.7 MHz PAL and 14.3 MHz NTSC).

Its use is declining as all new digital equipment is based on component video where color subcarrier does not exist and sampling clock signals are derived from the line frequency.


4K

See Film formats

5.1 Audio

See Discrete 5.1 Audio

50P and 60P

These indicate a video format that has 50 or 60 progressive frames per second and usually refers to high definition. The original digital television standards only included progressive frame rates above 30 Hz for image sizes up to 720 lines - thus limiting the total video data.

More recently this has been expanded up to 60 Hz for the larger 1080-line television standards to provide the best of the best - the maximum HD image size with a fast rate for rendition of fast action and progressive frames for optimum vertical resolution (better than interlaced scans). The baseband signal produces twice the data rates of the equivalent interlaced (50I and 60I) formats, pushing up equipment specifications.

Digital terms (alphabetical)



"Advanced Audio Coding", a codec originally known as MPEG-2 NBC (non-backwards compatible), is considered the successor to MP3, with about 25 percent efficiency improvement. However this performance has more recently been considerably enhanced with aacPlus, also known as High Efficiency AAC (HE-AAC), and included in MPEG-4 and delivers CD quality stereo at 48 kb/s and 5.1 surround sound at 128 kb/s.


AC-3

See Dolby Digital

Active line

The part of a television line that actually includes picture information. This is usually over 80 percent of the total line time. The remainder of the time was reserved for scans to reset to the start of the next line in camera tubes and CRT screens.

Although the imaging and display technologies have moved on to chips and panels, there remains a break (blanking) in the sampling of digital TV as in ITU-R BT.601 and ITU-R BT.709. These ‘spaces’ carry data for the start of lines and pictures, as well as other information such as embedded audio tracks.

Active picture

The area of a TV frame that carries picture information. Outside the active area there are line and field blanking which roughly, but not exactly, correspond to the areas defined for the original 525- and 625-line analog systems.

(Note: here "lines" means scan lines, i.e. Zeilen.)

In digital versions of these, the blanked/active areas are defined by ITU-R BT.601, SMPTE RP125 and EBU-E.

For 1125-line HDTV (1080 active lines), which may have 60, 30, 25 or 24 Hz frame rates (and more), the active lines are always the same length - 1920 pixel samples at 74.25 MHz - a time of 25.86 microseconds - defined in SMPTE 274M and ITU-R BT.709-4.

Only their line blanking differs so the active portion may be mapped pixel-for-pixel between these formats.

DTV standards tend to be quoted by only their active picture content, eg 1920 x 1080, 1280 x 720, 720 x 576, as opposed to analog where the whole active and blanked areas are included, such as 525 and 625 lines.

For both 625 and 525 line formats active line length is 720 luminance samples at 13.5 MHz = 53.3 microseconds. In digital video there are no half lines as there are in analog. The table below shows blanking for SD and some popular HD standards.

Format           625/50   525/60    1125/60I   1125/50I   1125/24P
Active lines     576      487       1080       1080       1080
Blanking:
Field 1 lines    24       19        22         22         45/frame
Field 2 lines    25       19        23         23         -
Line blanking    12 µs    10.5 µs   3.8 µs     9.7 µs     11.5 µs

(Figure: total line length - 64 µs for the 625-line system, 63.5 µs for the 525-line system.)

ADC or A/D

Analog to Digital Conversion. Also referred to as digitization or quantization. The conversion of analog signals into digital data - normally for subsequent use in digital equipment.

For TV, samples of audio and video are taken, the accuracy of the process depending on both the sampling frequency and the resolution of the analog amplitude information - how many bits are used to describe the analog levels.

For TV pictures 8 or 10 bits are normally used; for sound, 16, 20 or 24 bits are common. The ITU-R BT.601 standard defines the sampling of video components based on 13.5 MHz, and AES/EBU defines sampling of 44.1 (used for CDs) and 48 kHz for audio.

For pictures the samples are called pixels, which contain data for brightness and color.
See also: AES/EBU, Binary, Bit, Into digits (Tutorial 1), Pixel


ADSL

Asymmetrical Digital Subscriber Line - working on the copper ‘local loop’ normally used to connect phones. ADSL provides a broadband downstream channel (to the user) of maximum 8 Mb/s and a narrower-band upstream channel (from the user) of maximum 128-1024 kb/s, according to class.

Exactly how fast it can run ultimately depends on the performance of the "line" (meaning the copper cable itself), often dictated by the distance from the telephone exchange (the local exchange) where the DSLAM terminates the line.

The highest speeds are usually only available within 1.5 km of the DSLAM. The service is normally always-on, no need to dial up. Its uses include high-speed Internet connections and streaming video.

A newer version, ADSL-2 can run up to 12 Mb/s up to 2.5 km, and ADSL-2+ can deliver 24 Mb/s over up to 1.5 km. ADSL-2/2+ effectively doubles this rate by putting two services together. (All distances are approximate). These are sufficient to carry live SD or HD provided that the service is continuous, or can be recorded before viewing.


AES/EBU

The Audio Engineering Society (AES) and the EBU (European Broadcasting Union) together have defined a standard for digital audio, now adopted by ANSI (American National Standards Institute).

Commonly referred to as ‘AES/EBU’ and officially as AES3, this digital audio standard permits a variety of sampling frequencies, for example CDs at 44.1 kHz, or DATs and digital VTRs at 48 kHz. 48 kHz is widely used in broadcast TV production although 32-192 kHz are allowed. One cable and connector, usually an XLR, carries two channels of digital audio.


ARC

Aspect Ratio Converters change picture aspect ratio - usually between 16:9 and 4:3. Other aspect ratios are also allowed for, such as 14:9, and custom values can be used. Technically, the operation involves independent horizontal and vertical resizing, and there are a number of choices for the display of 4:3 originals on 16:9 screens and vice versa (e.g. letterbox, pillar box, full height and full width). Whilst changing the aspect ratio of pictures, the objects within should retain their original shape, with the horizontal and vertical axes expanded equally.

ASCII (a term from computing)

American Standard Code for Information Interchange.

This is a standard computer character set used throughout the industry to represent keyboard characters as digital information. The ASCII table contains 128 characters, covering all the upper and lower case letters and non-displayed controls such as carriage return, line feed, etc. Variations and extensions of the basic code are used in special applications.


ASIC

Application Specific Integrated Circuit. A custom-designed integrated circuit with functions specifically tailored to an application. (Example: in the Digital Diamond vision mixers from BTS, an ASIC was one of the central CPUs.) These replace the many discrete devices that could otherwise do the job, but work up to ten times faster with reduced power consumption and increased reliability.

ASICs are now only viable for very large-scale, high-volume products due to high startup costs and their inflexibility, as other programmable devices, such as FPGAs (field programmable gate arrays), offer more flexible and cheaper options for small to medium-sized production runs.

Aspect ratio (the width-to-height ratio)

1. - of pictures. The ratio of width to height of pictures. All TV screens used to be 4:3, i.e. four units across to three units in height, but now almost all new models, especially where there is digital television, are widescreen, 16:9.

Pictures presented this way are believed to absorb more of our attention and have obvious advantages in certain productions, such as sport. In the change towards 16:9 some in-between ratios have been used for transmission, such as 14:9.

2. - of pixels. The aspect ratio of the area of a picture described by one pixel. The ITU-R BT.601 digital coding standard defines luminance pixels which are not square. In the 525/60 format there are 486 active lines (scan lines), each with 720 samples of which 711 may be viewable due to blanking. Therefore the pixel aspect ratios on 4:3 and 16:9 screens are:

486/711 x 4/3 = 0.911 (tall)
486/711 x 16/9 = 1.215 (wide)

For the 625/50 format there are 576 active lines (scan lines), each with 720 samples of which 702 are viewable, so the pixel aspect ratios are:

576/702 x 4/3 = 1.094 (wide)
576/702 x 16/9 = 1.458 (wider)

The digital HD image standards all define square pixels.

Account must be taken of pixel aspect ratios when, for example, executing DVE moves such as rotating a circle. The circle must always remain circular and not become elliptical.

Another area where pixel aspect ratio is important is in the movement of images between platforms, such as computers and television systems. Computers generally use square pixels so their aspect ratio must be adjusted for SD television-based applications.
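The pixel aspect arithmetic above can be written out directly; a sketch (helper name is illustrative):

```python
def pixel_aspect(active_lines, viewable_samples, screen_w, screen_h):
    """Pixel aspect ratio for ITU-R BT.601-style non-square pixels:
    (lines / viewable samples) scaled by the screen aspect ratio."""
    return (active_lines / viewable_samples) * (screen_w / screen_h)

print(round(pixel_aspect(486, 711, 4, 3), 3))  # 0.911 (525/60 on 4:3, tall)
print(round(pixel_aspect(576, 702, 4, 3), 3))  # 1.094 (625/50 on 4:3, wide)
print(round(pixel_aspect(1080, 1920, 16, 9), 3))  # 1.0 - HD pixels are square
```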

Asynchronous (data transfer) (a term from computing)

Carrying no separate timing information. There is no guarantee of the time taken, but a transfer uses only small resources as these are shared with many others. A transfer is ‘stop and go’ - depending on handshakes to check data is being received before sending more. IBM's Token Ring was synchronous; Ethernet is asynchronous. Being indeterminate, asynchronous transfers of video files are used between storage devices, such as disks, but are not ideal for ‘live’ operations.

ATM (a term from fiber-optic networking)

Asynchronous Transfer Mode (ATM) provides connections for reliable transfer of streaming data, such as television. With speeds ranging up to 10 Gb/s it is mostly used by telcos; 155 and 622 Mb/s are most appropriate for television operations.

Unlike Ethernet and Fibre Channel, ATM is connection-based: offering good Quality of Service (QoS) by establishing a path through the system before data is sent.

Sophisticated lower ATM Adaptation Layers (AAL) offer connections for higher layers of the protocol to run on. AAL1 supports constant bit rate, time-dependent traffic such as voice and video. AAL3/4 supports variable bit rate, delay-tolerant data traffic requiring some sequencing and/or error detection. AAL5 supports variable bit rate, delay-tolerant connection-oriented data traffic - often used for general data transfers.


ATSC

The (US) "Advanced Television Systems Committee". Established in 1982 to co-ordinate the development of voluntary national technical standards for the generation, distribution and reception of high definition television.

In 1995 the ATSC published “The Digital Television Standard” which describes the US Advanced Television System. This uses MPEG-2 compression for the video and AC-3 for the audio and includes a wide range of video resolutions (as described in ATSC Table 3) and audio services (Table 2). It uses 8 and 16 VSB modulation respectively for terrestrial and cable transmission.


"Advanced Television". The term used in North America to describe television with capabilities beyond those of analog NTSC. It is generally taken to include digital television (DTV) and high definition (HDTV).


AVCHD

Advanced Video Codec High Definition, a joint development between Panasonic and Sony, applies MPEG-4's AVC video coding and Dolby Digital (AC-3) or linear PCM audio coding to meet the needs of the high definition consumer market with 1080i and 720p formats. The use of AVC provides at least twice the efficiency of the MPEG-2 coding used in HDV, offering longer recording times or better pictures - or both. Possible recording media include standard DVD disks, flash memory and hard drives.


AVC-Intra

A codec that is H.264-compliant and uses only intra-frame compression. AVC-Intra technology, aimed at professional users, has been adopted by Panasonic for its P2 cameras (AVC-Intra P2) and offers considerably more efficient compression than the original DVCPRO HD codec - maybe as much as 2:1.

It is significant that Panasonic has chosen to stay with intra-frame coding (a GOP of 1), making the coded material easily editable at every frame. This comes at a time when long-GOP coding is being used in products including HDV and XDCAM HD. With increasing coding efficiency, some believe the use of long-GOP coding in professional recorders will fade.

AVI (.avi)

Audio Video Interleave - a Microsoft multimedia container format introduced in 1992 as part of its Video for Windows technology. AVI files can hold audio and video data in a standard container and provide synchronous video/audio replay. Most AVI files also use the OpenDML file format extensions, forming AVI 2.0 files.

Some consider AVI outdated, as there are significant overheads using it with popular MPEG-4 codecs that seemingly unduly increase file sizes. Despite that, it remains popular among file-sharing communities - probably due to its high compatibility with existing video editing and playback software, such as Windows Media Player.

Bandwidth (Bandbreite)

The amount of information that can be passed in a given time. In television a large bandwidth is needed to show sharp picture detail in realtime, and so is a factor in the quality of recorded and transmitted images.

For example, ITU-R BT.601 and SMPTE RP 125 allow an analog luminance bandwidth of 5.5 MHz and a chrominance bandwidth of 2.75 MHz for standard definition video. 1080-line HD has a luminance bandwidth of 30 MHz (ITU-R BT.709).

Digital image systems generally require large bandwidths hence the reason why many storage and transmission systems revert to compression techniques to accommodate the signal.


Betacam

An analog component VTR system for PAL and NTSC standard television introduced in 1982, using a half-inch tape cassette - very similar to the domestic Betamax.

This was developed by Sony and was marketed by them and several other manufacturers. Betacam records the Y, R-Y and B-Y component signals onto tape; many machines were operated with coded (PAL or NTSC) video in and out.

Initially developed for the industrial and professional markets the system was enhanced to offer models with full luminance bandwidth (Betacam SP 1986), PCM audio and SDI connections with a great appeal to the broadcast market.

Digital Betacam - Introduced in 1993, it was a development of the original analog Betacam VTR that records SD component video and audio digitally onto Betacam-style cassettes. It uses mild intra-field compression to reduce the ITU-R BT.601 sampled video data by about 2:1 to provide a good and much cheaper alternative to the uncompressed D1 format.

Betacam SX (1996) was a digital tape recording format which uses a constrained version of MPEG-2 compression at the 4:2:2 profile, Main Level (422P@ML). The compression is 10:1 and uses a 2-frame GOP (one I and one B frame), making it more difficult to edit. It uses half-inch tape cassettes.


Binary

Mathematical representation of numbers to base 2, i.e. with only two states, 1 and 0; on and off; or high and low. This is the basis of the mathematics used in digital systems and computing. Binary representation requires a greater number of digits than the base 10, or decimal, system most of us commonly use every day. For example, the base 10 number 254 is 11111110 in binary.

There are important characteristics which determine good digital video equipment design. For example, the result of a binary multiplication can contain as many digits as the two original numbers combined. For example:

10101111 x 11010100 = 1001000011101100
(in decimal 175 x 212 = 37,100)

Each digit is known as a bit. This example multiplies two 8-bit numbers and the result is a 16-bit number. So, for full accuracy, all the resulting bits should be taken into account.

Multiplication is a very common process in digital television equipment (e.g. keying, mixes and dissolves).
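The worked example above, in runnable form:

```python
a = 0b10101111          # 175, an 8-bit number
b = 0b11010100          # 212, an 8-bit number
product = a * b         # 37,100

# The product of two 8-bit numbers always fits in 16 bits
print(format(product, '016b'))   # 1001000011101100
print(product.bit_length() <= 16)
```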


Bluetooth

Short-range (up to 100 m) wireless data connection in a Personal Area Network. Bluetooth is used in products such as phones, printers, modems and headsets, and is suitable where two or more devices are in proximity to each other and do not need high bandwidth (2 Mb/s max.).

It is easy to set up without configuration as Bluetooth devices advertise all services they provide making using the service easily accessible, without network addresses, permissions and all the other considerations that go with typical networks.

Blu-ray Disc (BD)

This optical disk can hold 25 GB on a single-layer CD-sized (12cm) disk using 405 nanometer blue-violet lasers. Dual layer disks hold up to 50 GB.

The companies that established the basic specifications are: Hitachi Ltd., LG Electronics Inc., Matsushita Electric Industrial Co. Ltd., Pioneer Corporation, Royal Philips Electronics, Samsung Electronics Co. Ltd., Sharp Corporation, Sony Corporation, and Thomson Multimedia.

Players must be able to decode MPEG-2, H.264/AVC (MPEG-4 part 10) and SMPTE VC-1 coded material. MPEG-2 offers backward compatibility for DVDs while the other two more modern codecs are at least 50 percent more efficient, using less disk space or producing higher quality results.

Audio codecs supported are Linear PCM, Dolby Digital, Dolby Digital Plus, Dolby TrueHD, DTS Digital Surround, DTS-HD.

The baseline data rate is 36 Mb/s - giving over one-and-a-half hours recording of HD material on a single layer, or about 13 hours of SD.

For Blu-ray Disc movies (BD-ROM) the maximum transfer rate is 54 Mb/s for audio and video, with a maximum of 40 Mb/s for video. Random access allows easy video editing and simultaneous record and playback.

The first BD disks were contained in a protective plastic caddy to avoid scratch damage. This made them somewhat bulky so now they are coated with a hard top layer to reduce the possibility of scratch damage and there is no caddy. Blu-ray won the competition with HD DVD.

Broadband (Breitband)

General term referring to "faster-than-telephone-modem" connections, i.e. receiving (download) much faster than 56 kb/s and transmitting (upload) faster than 28 kb/s. Broadband connects subscribers to the internet via DSL or ADSL over the original copper telephone lines. Cable can offer higher data rates. The higher broadband speeds are capable of carrying live digital TV to homes.

Byte (B), kilobyte (kB), megabyte (MB), gigabyte (GB), terabyte (TB) and petabyte (PB)

1 Byte (B) = 8 bits (b) which can describe 256 discrete values (brightness, color, etc.).

Traditionally, just as computer folk like to start counting from zero, they also ascribe powers of two - 2^10, 2^20, 2^30, etc. - to the values kilo, mega, giga, etc., which thus become 1,024, 1,048,576, 1,073,741,824, etc.

This can be difficult to handle for those drilled only in base-10 mathematics. Fortunately, disk drive manufacturers, who have to deal in increasingly vast numbers, describe their storage capacity in powers of 10, so a 100 GB drive has 100,000,000,000 bytes capacity. Observation suggests both systems are continuing in use... which could lead to some confusion.

Traditional                             New       Approx duration
                                                  @601        @709 1080/60i   2K
1 kB = 2^10 bytes = 1,024 B             10^3 B    2/3 line    1/5 line        1/8 line
1 MB = 2^20 bytes = 1,048,576 B         10^6 B    1 frame     1/5 frame       130 lines
1 GB = 2^30 bytes = 1.074 x 10^9 B      10^9 B    47 sec      6.4 sec         3.5 sec
1 TB = 2^40 bytes = 1.099 x 10^12 B     10^12 B   13 1/4 hrs  1 3/4 hrs       58 min
1 PB = 2^50 bytes = 1.126 x 10^15 B     10^15 B   550 days    74 days         40 days

Here "lines" means scan lines (German: "Zeilen")
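The two counting systems contrasted in the table can be sketched in a few lines of code; this is purely illustrative, using hypothetical helper names:

```python
# Illustrative sketch: binary (2^(10n)) vs decimal (10^(3n)) storage
# prefixes, as contrasted in the table above.
def binary_size(n, power):
    """n units of 2^(10*power) bytes (kilo=1, mega=2, giga=3, ...)."""
    return n * 2 ** (10 * power)

def decimal_size(n, power):
    """n units of 10^(3*power) bytes, as disk manufacturers count."""
    return n * 10 ** (3 * power)

# A "100 GB" drive, counted decimally, holds fewer binary gigabytes:
drive_bytes = decimal_size(100, 3)       # 100,000,000,000 B
print(drive_bytes / binary_size(1, 3))   # ~93.13 "binary" GB
```

This is exactly the kind of discrepancy that can "lead to some confusion" between drive labels and operating-system readouts.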

Currently (as of 2008!) 3.5-inch hard disk drives store from about 70 GB to 1 TB. Solid-state memory chips (RAMs) quadruple in capacity with each generation, now offering up to 8 Gb chips (i.e. 8 x 2^30 bits). Flash memory is now used in some camcorders such as Panasonic’s P2 series.

A full frame of standard definition digital television, sampled at 10 bits according to ITU-R BT.601, requires around 1 MB of storage (1.037 MB for 576-line, 876 kB for 480-line systems). HDTV frames comprise up to 5 or 6 times more data, and 2K digital film frames sampled in RGB or X´Y´Z´ (DCI colorspace) are about 12 MB.
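The per-frame figures quoted above can be verified arithmetically. The 2K figure here assumes a full-aperture 2048 x 1556 scan at 10 bits per RGB sample (an assumption; the text only says "about 12 MB"):

```python
# Sketch of the frame-storage arithmetic above.

# ITU-R BT.601, 576-line system, 4:2:2 sampling: 720 luma plus
# 2 x 360 chroma samples per active line, 576 active lines, 10 bits.
samples_per_line = 720 + 2 * 360         # -> 1440
bits = samples_per_line * 576 * 10
print(bits / 8 / 1e6)                    # ~1.037 MB per frame

# 2K film frame, assuming a full-aperture 2048 x 1556 scan with
# 10-bit log RGB (3 samples per pixel), as in the Cineon format:
bits_2k = 2048 * 1556 * 3 * 10
print(bits_2k / 8 / 1e6)                 # ~11.95 MB, i.e. "about 12 MB"
```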


Charge Coupled Device (CCD) - either assembled as a linear or two-dimensional array of light sensitive elements. Light is converted to an electrical charge in a linear fashion - proportional to the brightness impinging on each cell. The cells are coupled to a scanning system which, after analog to digital conversion, presents the image as a series of binary digits.

Early CCD arrays were unable to work over a wide range of brightness but they now offer low noise, high resolution imaging up to HDTV level and for digital cinematography.

CCIR - absorbed into ITU-R

Comité Consultatif International des Radiocommunications. This has been absorbed into the ITU under ITU-R.
See also: ITU


CCITT

International Telegraph and Telephone Consultative Committee. As the name suggests this was initially set up to establish standards for the telephone industry in Europe. It has now been superseded by ITU-T, so putting both radio frequency matters (ITU-R) and telecommunications under one overall United Nations body.
See also: ITU

Chroma keying

The process of overlaying one video signal over another, the areas of overlay being defined by a specific range of color, or chrominance, on the background signal.

For this to work reliably, the chrominance must have sufficient resolution, or bandwidth. PAL or NTSC coding systems restrict chroma bandwidth and so are of very limited use for making a chroma key which, for many years, was restricted to using live, RGB camera feeds.

An objective of the ITU-R BT.601 and 709 digital sampling standards was to allow high quality chroma keying in post production. The 4:2:2 sampling system allows far greater bandwidth for chroma than PAL or NTSC and helped chroma keying, and the whole business of layering, to thrive in post production.

High signal quality is still important to derive good keys so some high-end operations favor using RGB (4:4:4) for keying - despite the additional storage requirements. Certainly anything but very mild compression tends to result in keying errors appearing - especially at DCT block boundaries.

Chroma keying techniques have continued to advance and use many refinements, to the point where totally convincing composites can be easily created.

You can no longer see the join and it may no longer be possible to distinguish between what is real and what is keyed.
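The basic principle - replace foreground pixels whose color falls in a defined range with the background - can be sketched as a toy example. This is a hard-mask illustration only, not any real keyer's algorithm; production keyers derive soft mattes, as the entry explains:

```python
import numpy as np

# Toy chroma-key sketch: foreground pixels whose color lies inside an
# assumed "green" range are replaced by the background pixel.
def chroma_key(fg, bg, lo=(0, 200, 0), hi=(100, 255, 100)):
    """fg, bg: HxWx3 uint8 RGB arrays. Returns a hard-keyed composite."""
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((fg >= lo) & (fg <= hi), axis=-1)  # True where "green"
    out = fg.copy()
    out[mask] = bg[mask]
    return out

fg = np.zeros((4, 4, 3), np.uint8); fg[:] = (0, 255, 0)  # green screen
fg[0, 0] = (200, 50, 50)                                 # "actor" pixel
bg = np.full((4, 4, 3), 255, np.uint8)                   # white background
comp = chroma_key(fg, bg)
```

A hard mask like this is precisely what produces the telltale edges that modern soft-matte keyers avoid.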


Chrominance

The color part of a television signal, relating to the hue and saturation but not to the brightness or luminance of the signal. Thus pure black, gray and white have no chrominance, but any colored signal has both chrominance and luminance.

Although imaging equipment registers red, green and blue, television pictures are handled and transmitted as U and V, Cr and Cb, or (R-Y) and (B-Y) - all of which represent the chrominance information of a signal - together with the pure luminance (Y).
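The relation between RGB and the luminance/color-difference form can be written out directly. A minimal sketch using the well-known BT.601 luma coefficients, on normalized [0, 1] values (real broadcast signals add offsets and scaling):

```python
# BT.601 luma and color-difference relations on normalized RGB.
def rgb_to_y_diff(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Y)
    return y, r - y, b - y                  # Y, (R-Y), (B-Y)

# Pure gray carries luminance but no chrominance:
y, ry, by = rgb_to_y_diff(0.5, 0.5, 0.5)
print(y, ry, by)                            # 0.5, 0.0, 0.0
```

This shows directly why "pure black, gray and white have no chrominance": for equal R, G and B the difference signals vanish.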


CIE

International Commission on Illumination (Commission Internationale de l’Eclairage) is devoted to international cooperation and exchange of information among its member countries on all matters relating to the science and art of lighting.

It is a technical, scientific and cultural, non-profit autonomous organization that has grown out of the interests of individuals working in illumination. It is recognized by ISO as an international standardization body.


CIF

See: Common Image Format


CIFS

"Common Internet File System" - a platform-independent file-sharing system that supports rich, collaborative applications over the internet, which can be useful for collaborative post-production workflows.

It defines a standard remote file-system access protocol, enabling groups of users to work together and share documents via the Internet or within intranets. CIFS is an open, cross-platform technology based on native file-sharing protocols in Windows and other popular PC operating systems, and is supported on other platforms, so users can open and share remote files on the Internet without installing new software or changing work methods.

CIFS allows multiple clients to access and update the same file, while preventing conflicts with sophisticated file-sharing and locking semantics. These mechanisms also permit aggressive caching and read-ahead/write-behind without loss of cache coherency. CIFS also supports fault tolerance in the face of network and server failures.

In Quantel’s Genetic Engineering teamworking infrastructure, the Sam data server virtualizes media on-the-fly to give third-party applications instant access to all stored media using the CIFS protocol for no-API, out-of-the-box connectivity.

Cineon (file)

An RGB bitmap file format (extension .cin) developed by Kodak and widely used for storing and transferring digitized film images. It accommodates a range of film frame sizes, up to full VistaVision.

In all cases the digital pictures have square pixels and use 10-bit log sampling. The sampling is scaled so that each of the code values from 0-1023 represents a density difference of 0.002 - describing a total density range of 2.046, equivalent to an exposure range of around 2,570:1 or about 11.3 stops. Note that this is beyond the range of current negative film.
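The figures above can be checked with a short calculation. Note that the step from a density range of 2.046 to an exposure range of about 2,570:1 only works out if one assumes a typical negative gamma of about 0.6 - an assumption not stated in the text:

```python
import math

# Checking the Cineon numbers: 1024 code values in steps of 0.002 density.
density_range = 1023 * 0.002                 # = 2.046
gamma = 0.6                                  # assumed negative gamma
exposure_range = 10 ** (density_range / gamma)
stops = math.log2(exposure_range)
print(round(exposure_range), round(stops, 1))  # ~2570:1, ~11.3 stops
```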

The format was partly designed to hold virtually all the useful information contained in negatives and so create a useful ‘digital negative’ suitable as a source for post production processing and creating a digital master of a whole program.


Clip

The name is taken from the film industry and refers to a segment of sequential frames made during the filming of a scene. In television terms a clip is the same but represents a segment of sequential video frames. In Quantel editing systems, a clip can be a single video segment or a series of video segments spliced together. A video clip can also be recorded with audio or have audio added to it.


Clone

An exact copy, indistinguishable from the original - as in copying recorded material, e.g. copying a non-compressed recording to another non-compressed recording. When cloning compressed material, care must be taken not to decompress it as part of the process or the result will not be a clone.

Color cube

A representation of color space by a three-dimensional diagram. For example, all definable colors of an RGB color space can be contained in an RGB color cube where R, G and B are axes at right angles to each other (like x, y and z at the corner of a cube). Different color spaces and interpretations of color are defined by different color cubes.
If the exact spectral values of R, G and B are defined, that cube defines an absolute color space. Such cubes are available from a number of vendors.
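The geometry described here is easy to enumerate: a normalized RGB cube has eight corners - black, the three primaries, the three secondaries, and white. A trivial illustrative sketch:

```python
from itertools import product

# The eight corners of a normalized RGB color cube: each of R, G, B
# is either 0.0 or 1.0.
corners = list(product((0.0, 1.0), repeat=3))   # (R, G, B) triples
print(len(corners))                             # 8
# (0,0,0) = black, (1,0,0) = red, (1,1,0) = yellow, (1,1,1) = white
```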

Color Decision List (CDL)

The American Society of Cinematographers’ Color Decision List (ASC-CDL) is a proposed metadata interchange format for color correction, developed to ensure that images appear the same when displayed in different places and on different platforms. This should enable consistency of look from on-set monitoring through post production to the final grade.

Color space

The color range between specified references. Typically three references are quoted in television: for example, RGB; Y, R-Y, B-Y; and Hue, Saturation and Luminance (HSL) are all color spaces. In print, Cyan, Magenta, Yellow and Black (CMYK) is used.

Film is RGB while digital cinema uses X´Y´Z´. Moving pictures can be moved between these color spaces but it requires careful attention to the accuracy of processing involved. Operating across the media - print, film and TV, as well as between computers and TV equipment - will require conversions in color space.

Electronic light sensors detect red, blue and green light but TV signals are usually changed into Y, R-Y and B-Y components at, or very soon after they enter the electronic realm via camera, scanner or telecine (Filmabtaster).

There is some discussion about which color space is best for post production - the most critical area of use being keying. However, with most video storage and infrastructure being component-based, the full RGB signal is usually not available so any of its advantages can be hard to realize for television-based productions.

However, in the rapidly growing Digital Intermediate process, where film ‘post production’ is handled digitally, RGB color space predominates.

With the increasing use of disk storage, networking able to carry RGB, and digital cameras with RGB outputs, RGB infrastructure and operations are becoming more widely used. Even so, RGB takes up 50 percent more storage and, for most productions, its benefits over component working are rarely noticed.

One area that is fixed on RGB use is in 2K and 4K digital film (digital intermediates). Modern digital techniques allow the use of both RGB and Y R-Y B-Y to best suit production requirements.

CMOS (a semiconductor technology)

Complementary metal-oxide-semiconductor technology very widely used to manufacture electronic integrated circuits (chips). CMOS chip digital applications include microprocessors, RAM memory and microcontrollers. There are also a wide variety of analog applications.

CMOS devices are favored for their immunity to high levels of noise, low static power drain, with significant power only drawn while the transistors switch, and high density packing of logic functions. Being so widely used, the technology is relatively cheap to manufacture.

Recently the application of CMOS technology to image sensors has increased. The chips are cheaper than the alternative CCDs, they consume less power, can be more sensitive (faster), have less image lag and can include image-processing functions on the sensor chip. Later developments are aimed at improving performance in the areas of noise, dynamic range and response.

Color management (in monitors)

The management of color through a process - such as DI (digital intermediate) or video grading. Television engineering folklore says that a picture never looks exactly the same on two picture monitors, and certainly it has been hard to achieve a convincing match... until now.

By the use of probes to measure the colors on a screen, and equipment with adjustable color LUTs, the look of color can be set to the same across all monitors - within their technical limits. Such care is needed in DI where grading is a big part of the process.

Today DI suites may be equipped with a digital projector and large screen. The look of colors on the screen can be set to match those expected for a chosen print stock, so the DoP and grader can see the footage exactly as it will look in the cinema, or other chosen audience. This is color management.

Common Image Format (CIF)

The ITU has defined common image formats. A standard definition image of 352 x 240 pixels is described for computers. For HDTV production the HD-CIF preferred format is defined in ITU-R BT.709-4 as 1920 x 1080, 16:9 aspect ratio with progressive frame rates of 24, 25 and 30 Hz (including segmented) and interlace field rates of 50 and 60 Hz. This has helped to secure the 1920 x 1080 format as the basis for international program exchange.

Component video

The normal interpretation of a component video signal is one in which the luminance and chrominance remain as separate components, e.g. analog components in Betacam VTRs, digital components Y, Cr, Cb in ITU-R BT.601 and 709.

RGB is also a component signal. Pure component video signals retain maximum luminance and chrominance bandwidth and the frames are independent of each other. Component video can be edited at any frame boundary.

Composite video (known in German as FBAS)

Luminance and chrominance are combined along with the timing reference sync and color burst information using one of the coding standards - NTSC, PAL or SECAM - to make composite video.

The process, which is an analog form of video compression, restricts the bandwidths (image detail) of the color components. In the composite result color is literally added to the monochrome (luminance or Y) information using a visually acceptable technique.

As our eyes have far more luminance resolving power than for color, the color sharpness (bandwidth) of the coded signal is reduced to well below that of the luminance. This provides a good solution for transmission and viewing but it becomes difficult, if not impossible, to accurately reverse the process (decode) back into pure luminance and chrominance.

This limits its use in post production as repetitive decode, recode cycles severely impair the pictures. Deriving keys from composite video gives poor results.


Compositing

Multi-layering for moving pictures. Modern composites often use many techniques together, such as painting, retouching, rotoscoping, keying/matting, digital effects and color correction as well as multi-layering to create complex animations and opticals for promotions, title sequences and commercials as well as in program content.

Besides the creative element there are other important applications for compositing equipment such as image repair, glass painting and wire removal - especially in motion pictures.

The quality of the finished work, and therefore the equipment, can be crucial especially where seamless results are demanded. For example, adding a foreground convincingly over a background - placing an actor into a scene - without any telltale blue edges or other signs that the scene is composed.

Compression (audio)

Reduction of bandwidth or data rate for audio. Many digital schemes are in use, all of which make use of the way the ear hears (e.g. that a loud sound will tend to mask a quieter one) to reduce the information sent. Generally this is of benefit in areas where bandwidth or storage is limited, such as in delivery systems to the home, handheld players, etc.

Compression (video)

The process of reducing the bandwidth or data rate of a video stream. The analog broadcast standards used today, PAL, NTSC and SECAM are, in fact, compression systems which reduce the information contained in the original RGB sources.

Digital compression systems analyze their picture sources to find and remove redundancy both within and across picture frames. The techniques were primarily developed for digital data transmission but have been adopted as a means of reducing broadcast transmission bandwidths and storage requirements on disks and VTRs.
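The idea of removing redundancy "within and across picture frames" can be shown with a deliberately simple toy example - run-length coding for spatial redundancy and frame differencing for temporal redundancy. This is an illustration of the principle only, not how MPEG actually works:

```python
# Toy illustration of redundancy removal (not MPEG itself).

def run_length(samples):
    """Collapse runs of identical values into [value, count] pairs."""
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

frame1 = [10] * 6 + [80] * 2        # flat areas compress well
frame2 = [10] * 6 + [80, 81]        # nearly identical to frame1

intra = run_length(frame1)                       # within-frame redundancy
inter = [b - a for a, b in zip(frame1, frame2)]  # across-frame redundancy
print(intra)   # [[10, 6], [80, 2]]
print(inter)   # [0, 0, 0, 0, 0, 0, 0, 1]
```

Real codecs use transforms, quantization and motion compensation rather than raw differencing, but the two kinds of redundancy exploited are the same.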

A number of compression techniques are in regular use for moving images. These include ETSI, JPEG, Motion JPEG, JPEG 2000, DV, MPEG-1, MPEG-2, MPEG-4, AVC Intra, Windows Media and Real.

Where different techniques are used in the same stream, problems can occur and picture quality can suffer more than if the same method is used throughout.

The MPEG-2 family of compression schemes, which was originally designed for program transmission, has been adapted for studio use in Betacam SX and IMX recorders.

While there is much debate, and new technologies continue to be developed, it remains true that the best compressed results are produced from the highest quality source pictures. Poor inputs do not compress well. Noise, which may be interpreted as important picture detail, is the enemy of compression.

Compression ratio

This is the ratio of the amount of data in the non-compressed digital video signal compared to the compressed version.

Modern compression techniques start with component television signals, but a variety of sampling systems are used:

4:2:2 (‘Studio’ MPEG-2),
4:2:0 (MPEG-2),
4:1:1 (NTSC, DVCPRO), etc.

The compression ratio should not be used as the only method to assess the quality of a compressed signal. For a given technique, greater compression can be expected to result in lower picture quality but different techniques give widely differing quality of results for the same compression ratio.

The more modern technologies - MPEG-4 AVC (H.264), VC-1 and JPEG 2000 - are more efficient than MPEG-2. The only sure method of judgment is to make a very close inspection of the resulting pictures - where appropriate, reassessing their quality after onward video processing.
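A compression ratio is simply the raw data rate divided by the compressed rate. A sketch with illustrative figures - a 720 x 480, 8-bit, 4:1:1 source at about 30 frames/s (as used by DVCPRO), compressed to a 25 Mb/s video stream:

```python
# Illustrative compression-ratio calculation (figures are assumptions
# chosen to match a typical DV-class recorder, not a specification).
width, height, fps, bits = 720, 480, 30, 8
samples_per_pixel = 1 + 0.25 + 0.25      # 4:1:1 -> Y + Cr/4 + Cb/4
raw_mbps = width * height * samples_per_pixel * bits * fps / 1e6
ratio = raw_mbps / 25                    # compressed video rate: 25 Mb/s
print(round(raw_mbps), round(ratio, 1))  # ~124 Mb/s raw, ~5:1 ratio
```

As the entry stresses, the ratio alone says nothing about quality: two codecs at the same 5:1 can look very different.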

Zur Startseite - © 2006 / 2021 - Deutsches Fernsehmuseum Wiesbaden - Copyright by Dipl. Ing. Gert Redlich - DSGVO - Privatsphäre - Redaktions-Telefon - zum Flohmarkt