CPC Definition - Subclass H04N
This place covers:
Television systems
- Television systems, whether general or specially adapted for colour television
- Details of television systems, whether of general applicability or specific to colour television, including scanning details of television systems
- Coding, decoding, compressing or decompressing of digital video signals
- Stereoscopic television systems, whether general or specially adapted for colour television, or details therefor
- Selective distribution of pictorial content, in particular interactive television or video on demand [VOD]
- Diagnosis, testing or measuring for television systems or details therefor
- Transmission of pictures or their transient or permanent reproduction either locally or remotely, by methods involving both the following steps:
- Step (a): the scanning of a picture, i.e. resolving the whole picture-containing area into individual picture-elements and the derivation of picture-representative electric signals related thereto, simultaneously or in sequence;
- Step (b): the reproduction of the whole picture-containing area by the reproduction of individual picture-elements into which the picture is resolved by means of picture-representative electric signals derived therefrom, simultaneously or in sequence;
- (In group H04N 1/00) Systems for the transmission or the reproduction of arbitrarily composed pictures or patterns in which the local light variations composing a picture are not subject to variation with time, e.g. documents (both written and printed), maps, charts, photographs (other than cinematograph films);
- Circuits specially designed for dealing with pictorial communication signals, e.g. television signals, as distinct from merely signals of a particular frequency range.
Other pictorial communication
- Scanning, transmission or reproduction of documents or the like, in particular facsimile transmission
- Details pertaining to scanning, transmission or reproduction of documents or the like, in particular details of facsimile transmission
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Instruments for performing medical examinations of the interior of cavities or tubes of the body combined with television appliances | |
Arrangements of television sets in vehicles; Arrangement of controls thereof | |
Mounting of cameras operative during drive of a vehicle; | |
Arrangements of cameras in aircraft | |
Controlling or regulating single-crystal growth by pulling from a melt, using television detectors | |
Inspecting textile materials by television means | |
Scanning a visible indication of a measured value and reproducing this indication at a remote place, e.g. on the screen of a cathode-ray tube | |
Burglar, theft, or intruder alarms using television cameras | |
Structural combination of reactor core or moderator structure with television camera |
Attention is drawn to the following places, which may be of interest for search:
Video games, i.e. games using an electronically generated display having two or more dimensions | |
Systems for the reproduction according to the above-mentioned step (b) of pictures comprising alphanumeric or like character forms and involving the generation according to the above-mentioned step (a) of picture-representative electric signals from a pre-arranged assembly of such characters, or records thereof, forming an integral part of the systems | |
Printing, duplication or marking processes, or materials therefor | |
Systems for the reproduction according to step (b) of Note (1) of pictures comprising alphanumeric or like character forms but involving the production of the EQUIVALENT of a signal which would be derived according to the above-mentioned step (a), e.g. by cams, punched card or tape, coded control signal, or other means | |
Systems for the direct photographic copying of an original picture in which an electric signal representative of the picture is derived according to the said step (a) and employed to modify the operation of the system, e.g. to control exposure, | |
Systems in which legible alphanumeric or like character forms are analysed according to step (a) of Note (1) to derive an electric signal from which the character is recognised by comparison with stored information | |
Image data processing or generation, in general | |
Circuits or other parts of systems which form the subject of other subclasses | |
Broadcasting |
In this place, the following terms or expressions are used with the meaning indicated:
television systems | Systems for the transmission and reproduction of arbitrarily composed pictures in which the local light variations composing a picture MAY change with time, e.g. natural "live" scenes, recordings of such scenes such as cinematograph films |
CCD | Charge-coupled device, that is, a device made up of semiconductors arranged in such a way that the electric charge output of one semiconductor charges an adjacent one |
MPEG | Moving Picture Experts Group; a family of standards used for coding audio-visual information in a digital compressed format |
NTSC | National Television System Committee |
PAL | Phase Alternating Line |
Picture signal generator | Circuits or arrangements receiving as input an image of a scene and delivering as output an electric signal that contains all the information required to reproduce the image of the scene |
Picture reproducer | Circuits or arrangements receiving as input an electric signal characteristic of an image of a scene and producing as output a visual display of that image |
SECAM | Séquentiel couleur à mémoire (Sequential Colour with Memory) |
This place covers:
- systems for the transmission or the reproduction of arbitrarily composed pictures or patterns in which the local light variations composing a picture are not subject to variation with time, e.g. documents (both written and printed), maps, charts, photographs (other than cinematograph films);
- transmission of time-invariant pictures, e.g. documents (both written and printed), maps, charts, photographs (other than cinematograph films), or their transient or permanent storage or reproduction either locally or remotely by methods involving both scanning and reproduction;
- systems involving the generation, transmission, storage or reproduction of time-invariant pictures; image manipulation for such reproduction on particular output devices;
- devices applied to the transmission, storage or reproduction of time-invariant pictures, e.g. facsimile apparatus, digital copiers, (digital) scanners, multifunctional peripheral devices;
- circuits specially designed for dealing with pictorial communication signals, e.g. facsimile signals or colour image signals, as distinct from merely signals of a particular frequency range;
- storage or transmission aspects of still video cameras.
- H04N 1/00 is an application place for a large number of IT technologies, which are covered per se by the corresponding functional places
- Image servers, hosts and clients use internally specific computing techniques. Corresponding techniques used in general computing are found in G06F or G06Q. This concerns data storage, software architectures, error detection or correction in general computing, monitoring, image retrieval, browsing, Internet browsing, computer security, billing or advertising
- Image servers, hosts and clients use specific telecommunication techniques for the image transmission process. Corresponding techniques used in generic telecommunication networks are found in subclasses H04B, H04H, H04L, H04M, H04W. This concerns monitoring or testing of transmitters/receivers, broadcast or multicast, maintenance, administration, testing, data processing in data switching networks, home networks, real-time data network services, data network security, applications for data network, wireless networks per se
- Image scanners use specific scanning techniques. Corresponding techniques are found in G02B. This concerns optical scanning systems
- General image processing techniques are found in G06T
Attention is drawn to the following places, which may be of interest for search:
Scanning details of electrically scanned solid-state devices | |
Scanning of motion picture films | |
Television signal recording | |
Circuits for processing colour television signals | |
Capture aspects of still video cameras | |
Printing mechanisms | |
Supporting or handling copy material in printers | |
Handling thin or filamentary material | |
Colorimetry | |
Electrography; Magnetography | |
Handling of copy material in photocopiers | |
Constructional details of equipment or arrangements specially adapted for portable computer application | |
Power management in computer systems | |
Input and output arrangements for computers | |
Interaction techniques for graphical user interfaces | |
Storage management | |
Digital output to printers | |
Addressing or allocating within memory systems or architectures | |
Image retrieval | |
Retrieval from Web | |
Computer security | |
Sensing record carriers | |
Producing a permanent visual presentation of output data | |
Payment schemes, Commerce | |
General-purpose image data processing | |
Image watermarking | |
Geometric image transformation in the plane of the image | |
Image enhancement or restoration | |
Image analysis | |
Image coding | |
Editing figures and text; Combining figures or text | |
Methods or arrangements for recognising scenes | |
Character recognition, recognising digital ink or document-oriented image-based pattern recognition | |
Methods or arrangements for recognising human body or animal bodies or body parts | |
Methods or arrangements for acquiring or recognising human faces, facial parts, facial sketches, facial expressions | |
Access-control involving the use of a pass | |
Access-control by means of a password | |
Coding, decoding or code conversion, for error detection or error correction | |
Monitoring or testing of transmitters/receivers | |
Broadcast communication | |
Secret communication; Jamming of communication | |
Arrangements for detecting or preventing errors in the information received | |
Arrangements for secret or secure communication; Encryption | |
Charging arrangements in data networks | |
Data processing in data switching networks | |
Data network security | |
Real-time data network services | |
Applications for data network services | |
Simultaneous speech and telegraphic or other data transmission over the same conductors | |
Telephonic metering arrangements | |
Connection management in wireless communications networks |
In this main group Indexing Codes are used:
The numbering of the codes is based on the numbering of the subgroups;
- codes, e.g. H04N 1/0455, which have a numbering the first part of which corresponds to a subgroup which is at the tip end of a subgroup branch, e.g. H04N 1/0402, are used to classify detailed information and may be applied to that subgroup, e.g. H04N 1/0455 may be used in combination with H04N 1/0402;
- codes, e.g. H04N 2201/0402, which have a numbering the first part of which corresponds to a subgroup which is at the head or node end of a subgroup branch, e.g. H04N 1/04, are used to classify orthogonal information and may be applied to any subgroups in the corresponding subgroup branch, e.g. H04N 2201/0434 may be used in combination with H04N 1/0402 and/or H04N 1/1013.
In this place, the following terms or expressions are used with the meaning indicated:
Additional information | any information other than the still picture itself, but nevertheless associated with the still picture |
Documents or the like | documents (both written and printed), maps, charts, photographs (other than cinematograph films) |
Main-scan | the first completed scan |
Mode | way or manner of operating |
Scanning | the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa |
Still picture apparatus | any apparatus generating, storing, transmitting or reproducing non-transient images |
Single-mode communication | a communication in which the mode is not changed |
In patent documents, the following abbreviations are often used:
IP | Internet Protocol |
OS | Operating System |
PC | Personal Computer |
GPS | Global Positioning System |
MFP | Multifunctional peripheral |
MFD | Multifunctional device |
RFID | Radio-frequency identification |
In patent documents, the following words/expressions are often used as synonyms:
- Complex device and Multifunctional peripheral
- Complex machine and Multifunctional peripheral
- Hybrid device and Multifunctional peripheral
- Hybrid machine and Multifunctional peripheral
- Digital camera and Still video camera
- Metadata and Additional information
- Fast scan and Main scan
- Slow scan, Subscan and Sub scan
This place does not cover:
Determining the necessity for preventing unauthorised reproduction | |
Detecting scanning velocity or position | |
Fault detection in circuits or arrangements for control supervision between transmitter and receiver or between image input and image output device | |
Discrimination between different image types | |
Discrimination between the two tones in the picture signal of a two-tone original | |
Control or modification of tonal gradation or extreme levels, e.g. dependent on the contents of the original or references outside the picture, |
This place does not cover:
Transmitting or receiving computer data via an image communication device | |
Transmitting or receiving image data via a computer or computer network | |
Circuits or arrangements for control or supervision between transmitter and receiver |
Attention is drawn to the following places, which may be of interest for search:
Data processing systems for commerce |
Attention is drawn to the following places, which may be of interest for search:
Message switching systems, e.g. e-mail systems |
This place does not cover:
Push-based network services |
Attention is drawn to the following places, which may be of interest for search:
Portable computers comprising integrated printing or scanning devices |
Typically with apparatus of the kind classified in G03G.
Typically with apparatus of the kind classified in B41J or G06K 15/00.
Typically with apparatus of the kind classified in other H04N main groups.
This place does not cover:
Television studio circuitry, devices or equipment per se |
Typically with apparatus of the kind classified in H04N 5/222 and subgroups.
Attention is drawn to the following places, which may be of interest for search:
Supporting or handling copy material in printers | |
Handling thin or filamentary material | |
Handling of copy material in photocopiers |
This place does not cover:
Marking an unauthorised reproduction with identification | |
Restricting access |
Attention is drawn to the following places, which may be of interest for search:
Preventing copies being made in photocopiers |
Attention is drawn to the following places, which may be of interest for search:
Details of scanning arrangements |
Attention is drawn to the following places, which may be of interest for search:
Television cameras |
Attention is drawn to the following places, which may be of interest for search:
Light-guides per se |
This place does not cover:
Means for collecting light from a line or an area of the original and for guiding it to only one or a relatively low number of picture element detectors |
This place does not cover:
Composing, repositioning or otherwise modifying originals |
Attention is drawn to the following places, which may be of interest for search:
Details of scanning heads | |
Optical scanning systems | |
Projection optics in photocopiers | |
Character printers involving the fast moving of a light beam in two directions |
Where possible both the main and sub scanning arrangements should be classified, using a class for the invention and an Indexing Code for subsidiary information. Manual scanning and scanning using two-dimensional arrays are exceptions to this rule.
In this place, the following terms or expressions are used with the meaning indicated:
Main scan direction | The direction of the first completed scan line |
This place does not cover:
Scanning different formats; Scanning with different densities of dots per unit length, e.g. different numbers of dots per inch (dpi); Conversion of scanning standards | |
The scanning speed being dependent on content of picture |
Attention is drawn to the following places, which may be of interest for search:
Detection, control or error compensation of scanning velocity or position in photographic character printers involving the fast moving of an optical beam in the main scan direction |
Where possible, when classifying in this subgroup, details of the main and subscan should also be classified using other subgroups of H04N 1/04.
Where possible, when classifying in this subgroup, details of the main scan should also be classified using other subgroups of H04N 1/04.
Where possible, when classifying in this subgroup, details of the subscan should also be classified using other subgroups of H04N 1/04.
Attention is drawn to the following places, which may be of interest for search:
Feeding a sheet in the subscanning direction by rotation about its axis only |
Attention is drawn to the following places, which may be of interest for search:
Arrangements for the main-scanning |
This place does not cover:
Optical details of the scanning system |
Attention is drawn to the following places, which may be of interest for search:
Character printers involving the fast moving of a light beam in two directions |
This place does not cover:
Optical details of the scanning system |
Attention is drawn to the following places, which may be of interest for search:
Optical printers using dot sequential main scanning by means of a light deflector | |
Character printers involving the fast moving of an optical beam in the main scan direction |
This place does not cover:
Scanning arrangements using multi-element arrays |
Attention is drawn to the following places, which may be of interest for search:
Character printers involving the fast moving of an optical beam in the main scan direction |
Attention is drawn to the following places, which may be of interest for search:
Optical printers using arrays of radiation sources | |
Photographic character printers simultaneously exposing more than one point |
Attention is drawn to the following places, which may be of interest for search:
Photographic character printers simultaneously exposing more than one point on more than one main scanning line |
Attention is drawn to the following places, which may be of interest for search:
Details of the sub-scanning | |
Photographic character printers simultaneously exposing more than one point on one main scanning line |
This place covers:
Where the storage results in a record that is not merely transient.
This place does not cover:
Storage resulting in a transient record, for control or supervision between image input and image output device | |
Composing, repositioning or otherwise modifying originals | |
Bandwidth or redundancy reduction |
In this place, the following terms or expressions are used with the meaning indicated:
Intermediate | having no limiting meaning |
Attention is drawn to the following places, which may be of interest for search:
Image capture in digital cameras | |
Still video cameras |
This place covers:
Arrangements providing the output copy of a document in a system performing the scanning, transmission and reproduction of documents or the like, e.g. printing arrangements integrated within a facsimile device
Attention is drawn to the following places, which may be of interest for search:
Details of scanning heads | |
Scanning arrangements | |
Perforating or marking objects by electrical discharge | |
Selective printing mechanisms per se |
Attention is drawn to the following places, which may be of interest for search:
Magnetography |
This place covers:
Where the control or supervision is between two devices that can be embedded within the same apparatus or in multiple apparatuses.
Where the front-end device holds images intended to be sent to the back-end device, and the control can come from the front-end device, the back-end device, or both.
For example:
- a still-image camera or scanner and another separate device (e.g. printer, display, server)
- a still-image camera or multi-function peripheral [MFP] and its internal memory
This place does not cover:
Circuits or arrangements for blanking or otherwise eliminating unwanted parts of pictures | |
Composing, repositioning or otherwise modifying originals |
Attention is drawn to the following places, which may be of interest for search:
Devices for controlling television cameras or cameras comprising electronic image sensors | |
Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device | |
Digital output from electrical digital data processing unit to print unit | |
Real-time session management in data packet switching networks | |
Session management in data packet switching networks |
Attention is drawn to the following places, which may be of interest for search:
Automatic arrangements for answering calls in telephonic equipment |
Attention is drawn to the following places, which may be of interest for search:
Telephonic equipment for signalling identity of wanted subscriber |
Attention is drawn to the following places, which may be of interest for search:
Television systems for the transmission of television signals using pulse code modulation, using bandwidth reduction involving the insertion of extra data | |
Television bitstream transport arrangements involving transporting of additional information | |
Broadcast communication systems specially adapted for using meta-information |
In patent documents, the following words/expressions are often used as synonyms:
- "metadata" for "additional information"
Attention is drawn to the following places, which may be of interest for search:
Image watermarking | |
Audio watermarking |
This place does not cover:
In colour image data |
Attention is drawn to the following places, which may be of interest for search:
Transmission of digital television signals using bandwidth reduction and involving the insertion of extra data |
This place covers:
Storage results in a transient record, e.g. buffering
Attention is drawn to the following places, which may be of interest for search:
Store-and-forward switching systems in transmission of digital information |
Attention is drawn to the following places, which may be of interest for search:
Coding, decoding or code conversion, for error detection or error correction | |
Arrangements for detecting or preventing errors in received digital information |
Attention is drawn to the following places, which may be of interest for search:
Simultaneous speech and other data transmission over the same conductors in telephonic communication systems |
Attention is drawn to the following places, which may be of interest for search:
Negotiation of communication capabilities for communication control in transmission of digital information |
Attention is drawn to the following places, which may be of interest for search:
Systems modifying digital information transmission characteristics according to link quality |
Attention is drawn to the following places, which may be of interest for search:
Coin-freed or like apparatus per se | |
Telephonic metering |
This place covers:
Obsolete subject matter: analogue facsimile communication.
Attention is drawn to the following places, which may be of interest for search:
Synchronisation of pulses |
This place covers:
Removing parts of the image, e.g. smudges; extracting part of an image; screening out unwanted image regions; removing finger shadows; removing perforation holes when copying a perforated original.
This place does not cover:
Composing, repositioning or otherwise modifying originals |
Dropping out parts of the image while changing colour is classified in H04N 1/62; form drop-out data in H04N 1/4177.
This place covers:
Composing, e.g. combining two images; reading of books, with correction of geometric distortions due to the curved (book page) original; geometric modifications caused by warping of the image.
Attention is drawn to the following places, which may be of interest for search:
Photoelectric composing of characters | |
Editing, producing a composite image by copying with focus on copy machine | |
Text processing | |
Pagination | |
Image data processing or generation, in general | |
Geometric modification and warping in general | |
Teaching/communicating with deaf, blind, mute people |
This place covers:
E.g. combining a chart, text or logo (low resolution/bit depth) with a photo (high resolution/bit depth), or foreground with background, with the focus on image processing. Also high dynamic range [HDR] imagery when combined with H04N 1/407.
Attention is drawn to the following places, which may be of interest for search:
Inserting foreground into background with focus on camera | |
Combining objects while rendering PDL |
This place covers:
Image cropping, cutting out, masking with arbitrary shape.
Attention is drawn to the following places, which may be of interest for search:
Selection / ordering of images (from movies) |
This place covers:
The user defines the corner coordinates to extract an image region for repositioning; cutting out and cropping, where the number of points is important. Also a low-resolution pre-scan followed by a high-resolution main scan of part of the platen.
This place covers:
Part of the image is enlarged or reduced to fit a new position, e.g. reducing for the medium, zooming, use of a belief map.
This place does not cover:
Corrections or small zoom factors | |
Enlarging or reducing |
This place covers:
Combining two images which have been scanned by a scanner that does not cover the entire image; panoramic image creation, combination or stitching. The process is done digitally, not mechanically.
Attention is drawn to the following places, which may be of interest for search:
Mechanical corrections | |
Mosaic images or mosaicing. | |
Determination of transform parameters for the alignment of images, i.e. image registration |
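As an illustrative sketch (not part of the classification text), digitally stitching two partially overlapping scan strips can be reduced to finding the overlap that minimises the pixel difference between the strips; the function names and the 1-D simplification are our own assumptions, since real systems perform 2-D registration.

```python
def find_overlap(left, right, max_overlap):
    """Return the overlap width (in pixels) that best aligns the tail
    of `left` with the head of `right` (lists of grey values)."""
    best_k, best_err = 1, float("inf")
    for k in range(1, min(max_overlap, len(left), len(right)) + 1):
        # Mean squared difference over the k overlapping pixels.
        err = sum((a - b) ** 2 for a, b in zip(left[-k:], right[:k])) / k
        if err < best_err:
            best_k, best_err = k, err
    return best_k

def stitch(left, right, max_overlap=8):
    """Join two strips, dropping the duplicated overlap from `right`."""
    k = find_overlap(left, right, max_overlap)
    return left + right[k:]
```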
This place covers:
Rotating the image by any amount, e.g. 90 degrees; also when printing double-sided or four images on one page.
Attention is drawn to the following places, which may be of interest for search:
When focus is on image processing. |
This place covers:
Limited to detecting and correcting skew, i.e. errors during scanning, normally less than 45 degrees.
See also G06V 10/243.
Attention is drawn to the following places, which may be of interest for search:
Mechanical skew detection |
This place covers:
Mainly the mechanical enlargement process of the whole image, e.g. from DIN A4 to DIN A3 (larger than DIN A4).
This subgroup takes precedence over H04N 1/04.
This place covers:
Digitally enlarging or reducing images with a change of resolution, including e.g. (digital) interpolation.
Note that H04N 1/40068 covers resolution conversion where the physical size is irrelevant.
Attention is drawn to the following places, which may be of interest for search:
Interpolation in general |
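A minimal sketch of digital scaling by interpolation (illustrative only, not part of the CPC definition; a real scaler works in two dimensions, and the function name is our own):

```python
def resample(row, new_len):
    """Resize a 1-D line of pixel values to `new_len` samples by
    linear interpolation between neighbouring pixels."""
    if new_len == 1:
        return [row[0]]
    scale = (len(row) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        x = i * scale           # position in the source line
        j = int(x)
        frac = x - j
        if j + 1 < len(row):
            out.append(row[j] * (1 - frac) + row[j + 1] * frac)
        else:
            out.append(row[j])  # last source pixel
    return out
```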
This place covers:
General documents regarding quality aspects: quantisation (errors), scanning of either B/W or colour originals, video printers, frame grabbers, memory arrangement or management, smear reduction for CCDs.
This place does not cover:
Composing, repositioning or otherwise modifying originals |
Attention is drawn to the following places, which may be of interest for search:
Moving images, e.g. television |
This place covers:
Converting coloured documents into B&W so that they can be printed on monochrome printers, e.g. changing green into stripes and red into dots; converting from RGB to greyscale via thresholding.
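The RGB-to-greyscale-and-threshold path mentioned above can be sketched as follows (illustrative only; the Rec. 601 luma weights and the threshold of 128 are our assumptions):

```python
def to_grey(rgb):
    """Approximate luminance of an RGB triple (Rec. 601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarise(pixels, threshold=128):
    """Map each RGB pixel to black (0) or white (255) so that the
    result can be printed on a monochrome printer."""
    return [255 if to_grey(p) >= threshold else 0 for p in pixels]
```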
This place covers:
Writing: control of print heads, stylus heads, electrostatic heads; continuous driving signals.
There is overlap with G06K 15/12.
This place does not cover:
Compensating positionally unequal response of the pick-up or reproducing head | |
Control or modification of tonal gradation or of extreme levels |
Attention is drawn to the following places, which may be of interest for search:
Multipass inkjet |
This place covers:
Writing: multiple print elements, essentially LED or thermal printheads, but also using several lasers in parallel.
This place covers:
Mainly continuous tone laser printers.
This place covers:
Control of light during reading of a document; circuits for driving diodes, analogue switches for light control. Also exposure time of sensor etc.
This place does not cover:
Compensating positionally unequal response of the pick-up or reproducing head | |
Control or modification of tonal gradation or of extreme levels |
Attention is drawn to the following places, which may be of interest for search:
Mechanical details | |
Lamps per se |
This place covers:
Image segmentation, finds regions in bitmap image e.g. text, table, photo, line image; also "mixed raster content" or "MRC".
Attention is drawn to the following places, which may be of interest for search:
Segmentation; Edge detection in general | |
Character recognition, OCR |
This place covers:
Changing the resolution while the physical size is irrelevant, e.g. the original image is 600 dpi but the printer is only capable of printing 300 dpi, so conversion is necessary.
This place does not cover:
With modification of image resolution, i.e. determining the values of picture elements at new relative positions |
Attention is drawn to the following places, which may be of interest for search:
Increasing or decreasing spatial resolution |
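A sketch of the 600 dpi to 300 dpi example above (illustrative only; averaging neighbouring pixels is one simple choice of conversion, and the function name is our own):

```python
def downsample(row, factor=2):
    """Reduce the resolution of a scan line by an integer factor by
    averaging each group of `factor` neighbouring pixels, e.g. a
    600 dpi line becomes 300 dpi with factor=2."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row) - factor + 1, factor)]
```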
This place covers:
Descreening and/or rescreening.
Attention is drawn to the following places, which may be of interest for search:
Resolution enhancement by intelligently placing sub-pixels, when the focus is on write head control. |
General edge enhancement |
This place covers:
Provides more than just levels 0 and 255 for the image, e.g. levels 0, 127 and 255, i.e. multi-level halftoning. Typical documents: EP817464 (Seiko) shows two types of ink C1 and C2 (colour multi-toning in H04N 1/52); EP889639 (Xerox) shows levels of white, light grey, dark grey and black.
Attention is drawn to the following places, which may be of interest for search:
Variation of dot size | |
General bit depth reduction |
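Mapping each pixel to the nearest of the available output levels, as in the white/grey/black example above, can be sketched as follows (illustrative only; the function name and the three-level default are our own):

```python
def quantise_multilevel(pixel, levels=(0, 127, 255)):
    """Map an 8-bit grey value to the nearest available output level,
    e.g. white, mid-grey and black rather than just white and black."""
    return min(levels, key=lambda v: abs(v - pixel))
```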
This place covers:
Very few applications; local modifications, e.g. lightening and posterisation of natural images.
This place covers:
Limited to image readers, mostly line sensors: shading correction, illumination profile, head calibration, positionally varying noise, etc.; also defects in the image sensors and compensation of ambient illumination.
This place does not cover:
Discrimination between the two tones in the picture signal of a two-tone original |
Attention is drawn to the following places, which may be of interest for search:
Ambient illumination also in | |
Control of light source | |
Correction of isolated defects in image | |
Defect maps for area sensors |
This place covers:
Printers, corrects misaligned or defective heads; head calibration.
Attention is drawn to the following places, which may be of interest for search:
Malfunctioning inkjet nozzles |
Attention is drawn to the following places, which may be of interest for search:
Shaping pulses by limiting or thresholding, in general |
This place covers:
Halftoning in general, either B&W only or each color layer separately. Examples are EP126782, EP673150.
This place covers:
Dispersed dots, i.e. dots that are not concentrated in clusters spreading out from a central point. Examples are US5317418 (e.g. Gaussian filter, blue noise) and US5426122 (FM rasters).
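Dispersed-dot behaviour can be illustrated with a small ordered-dither sketch using a 2x2 Bayer matrix (our own simplification for illustration; blue-noise and FM rasters, as in the cited documents, use larger and less regular threshold arrays):

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic dispersed-dot threshold order

def ordered_dither(image):
    """Binarise an 8-bit greyscale image against a tiled 2x2 Bayer
    matrix. The thresholds are spread over the cell, so mid-grey
    areas render as dispersed (non-clustered) dot patterns."""
    out = []
    for y, line in enumerate(image):
        row = []
        for x, pixel in enumerate(line):
            # Scale the matrix entry to an 8-bit threshold.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 256 / 4
            row.append(255 if pixel > threshold else 0)
        out.append(row)
    return out
```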
This place covers:
Error diffusion for halftoning; note that error diffusion is also used for other purposes in other parts of H04N 1/00. Examples are EP507354 and EP808055.
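A minimal sketch of error diffusion on a single scan line (illustrative only; 2-D schemes such as Floyd-Steinberg distribute the quantisation error over four neighbours rather than one):

```python
def error_diffuse(row):
    """Threshold each pixel of an 8-bit greyscale line and push the
    quantisation error into the next pixel, so local average density
    is preserved even though each output pixel is only 0 or 255."""
    out, err = [], 0.0
    for p in row:
        value = p + err
        q = 255 if value >= 128 else 0
        out.append(q)
        err = value - q  # carry the error forward
    return out
```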
This place covers:
Illustrative examples of subject matter classified in this group are WO8906080, EP715451.
This place covers:
Halftone dots grow from a central point and spread in all directions. Also dispersed clusters. Illustrative examples are US3688033, EP651560.
This place covers:
Growth of halftone dot in one direction only, includes Pulse Width Modulation. Illustrative examples are EP212990, US4951152.
This place covers:
Different dot sizes, each dot has the same density. Illustrative examples are EP647059 (fig.5), US4680645 (fig.1).
For dots of different densities (inks) classify in H04N 1/40087.
This place covers:
Illustrative examples are GB2026283, WO9307709.
This place does not cover:
Pattern varying in one dimension only |
This place covers:
Selection of particular gamma correction table, correction depending on media scanned or printed on, film type correction, correction of tone scale for dot gain.
Similar to H04N 1/6027 for colour.
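The selection of a particular gamma correction table amounts to applying a per-level look-up table; a minimal sketch, illustrative only (the function names are chosen here):

```python
def gamma_lut(gamma):
    """Build a 256-entry tone-correction table: out = 255*(in/255)**(1/gamma)."""
    return [round(255 * (i / 255) ** (1 / gamma)) for i in range(256)]

def apply_lut(pixels, lut):
    """Map each 8-bit input level through the selected correction table."""
    return [lut[p] for p in pixels]
```

Different media or film types would simply select different tables (different gamma values or measured curves).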
This place covers:
Analysis of image content to determine final correction to be applied, e.g. automatic background deletion.
Attention is drawn to the following places, which may be of interest for search:
Conversion to binary |
This place covers:
Histogram analysis to determine tone correction parameters.
Attention is drawn to the following places, which may be of interest for search:
In context of pure image processing |
This place covers:
Pre-scanning to read B&W reference strips, which are used to set the max and min levels. Very limited test patterns containing only black (offset correction) and white (gain correction), e.g. printed next to an image or as a separate image. Standard pattern on a monitor (no light for the black reference, light on for the white reference).
Attention is drawn to the following places, which may be of interest for search:
Monitor calibration per se |
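The black/white reference correction described above is typically a per-pixel offset (black) and gain (white) normalisation; a minimal sketch with illustrative names:

```python
def shading_correct(raw, black_ref, white_ref, max_out=255.0):
    """Per-pixel offset/gain correction from black and white references:
        corrected = (raw - black) / (white - black) * max_out
    """
    out = []
    for r, b, w in zip(raw, black_ref, white_ref):
        span = max(w - b, 1e-6)                 # guard against dead pixels
        v = (r - b) / span * max_out
        out.append(min(max(v, 0.0), max_out))   # clip to the valid range
    return out
```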
This place covers:
Test pattern analysis for gray scale corrections.
Attention is drawn to the following places, which may be of interest for search:
For colour |
This place covers:
Noise or error correction. Elimination of "streaky effects".
Attention is drawn to the following places, which may be of interest for search:
Scanning correction due to reader error | |
Image enhancement or restoration | |
Noise filtering in arrangements for image or video recognition or understanding |
This place covers:
Fairly self-explanatory. Also covers approaches that first detect edges and then apply a correction. Edge emphasis, sharpness correction, unsharp masking, smoothing.
Attention is drawn to the following places, which may be of interest for search:
For color | |
For cameras | |
Deblurring; Sharpening |
In patent documents, the following words/expressions are often used as synonyms:
- "show-through" and "see-through"
This place covers:
Removal of streaks, dust, blemishes, tears, scratches, hairs. Removing scratches from photographs using infrared image.
This place covers:
General coding groups for still images, B&W, gray scale or each color component separately. This head group covers the use of different coding techniques within the same document, combinations of different techniques, or choosing from several available coding methods (e.g. characters with technique 1, pictures with technique 2).
This place does not cover:
Bandwidth or redundancy reduction by scanning | |
Television systems for the transmission of television signals using bandwidth reduction | |
For mixed image compression |
Attention is drawn to the following places, which may be of interest for search:
Coding of color images | |
Bandwidth or redundancy reduction for data acquisition | |
Coding for image data processing in general | |
Data Compression in general |
This place covers:
The image to be coded must be a halftoned image.
This place covers:
B&W images, i.e. binary coding.
Attention is drawn to the following places, which may be of interest for search:
Continuous tone compression |
This place covers:
E.g. Huffman coding.
This place covers:
Lossless coding; covers a variety of coding methods, e.g. comparing different codings of a line and choosing the shortest code; universal coding.
This place covers:
Block coding, also mix of Huffman and run-length coding.
This place covers:
Predictive coding, arithmetic coding.
This place covers:
Different resolutions of the image, wavelet coding for binary images.
This place covers:
Differential coding, i.e. coding the change data between two lines.
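For a binary image, coding the change data between two lines can be sketched as an XOR with the previous scan line, so that unchanged pixels become 0 and only the differences carry information (illustrative Python, names chosen here):

```python
def encode_lines(lines):
    """Code each binary scan line as the XOR with the previous line;
    unchanged pixels become 0, so the difference compresses well."""
    prev = [0] * len(lines[0])
    coded = []
    for line in lines:
        coded.append([a ^ b for a, b in zip(line, prev)])
        prev = line
    return coded

def decode_lines(coded):
    """Invert the line-differential coding by re-applying the XOR."""
    prev = [0] * len(coded[0])
    lines = []
    for diff in coded:
        line = [a ^ b for a, b in zip(diff, prev)]
        lines.append(line)
        prev = line
    return lines
```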
This place covers:
Templates; only the changed data is encoded, i.e. the difference of the image from a known template, e.g. scanned images of filled-out form sheets.
Attention is drawn to the following places, which may be of interest for search:
Color form drop-out |
This place covers:
B&W run-length encoding.
This place does not cover:
Baseband signal showing more than two values or a continuously varying baseband signal is transmitted or recorded | |
Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information using predictive or differential encoding |
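The B&W run-length coding covered above can be sketched as follows (illustrative only; the convention of starting with a white run, as in Group 3 fax coding, is an assumption of this example):

```python
def rle_encode(bits):
    """B&W run-length coding: alternating run lengths, starting with white (0).
    A leading black pixel is coded as a zero-length white run."""
    runs, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def rle_decode(runs):
    """Expand alternating white/black run lengths back into pixels."""
    bits, colour = [], 0
    for n in runs:
        bits.extend([colour] * n)
        colour ^= 1
    return bits
```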
This place does not cover:
Circuits or arrangements for control or supervision between transmitter and receiver |
Attention is drawn to the following places, which may be of interest for search:
Television systems for two-way working | |
Selective content distribution, e.g. interactive television |
This place does not cover:
Preventing unauthorised reproduction |
Attention is drawn to the following places, which may be of interest for search:
Analogue secrecy television systems | |
Security arrangements for protecting computers or computer systems against unauthorised activity | |
Secret communication in general | |
Arrangements for secret or secure communication in transmission of digital information |
Attention is drawn to the following places, which may be of interest for search:
Restricting access to computer systems | |
Access-control involving the use of a pass | |
Verifying the identity or authority of a user of a system for the transmission of digital information | |
Protecting transmitted digital information from access by third parties | |
Access control in transmission of digital information |
Attention is drawn to the following places, which may be of interest for search:
Systems rendering a television signal unintelligible and subsequently intelligible | |
Ciphering or deciphering apparatus for cryptographic purposes | |
Secret communication by adding a second signal to make the desired signal unintelligible |
Attention is drawn to the following places, which may be of interest for search:
Arrangements for secret or secure communication using public key encryption algorithm |
This place covers:
Colour edit systems, printers with different recording modes for color and monochrome, decision as to print/scan color or B&W, general color applications for fax. Very general group.
Attention is drawn to the following places, which may be of interest for search:
Colorimetry |
This place covers:
Very straightforward, conversion into color documents, e.g. pattern chart to color (opposite to H04N 1/40012). Generating false color representations.
This place covers:
Filter wheels to separate components.
This place covers:
The use of different lights to read the image, e.g. first R, then G, finally B, e.g. successive RGB LED lighting.
This place covers:
Using separate R, G and B sensor elements, typically line sensors. Also covers documents on correcting chromatic aberrations of 3-line CCD sensors, as well as RGB sensors with an additional monochrome sensor.
Attention is drawn to the following places, which may be of interest for search:
For area sensors (Bayer matrix) | |
Demosaicing |
This place covers:
Splitting light using prisms, half (dichroic) mirrors, diffraction grating - most applications deal with line sensors.
This place covers:
Dot by dot printing, point-wise scanning, essentially either inkjet or laser beam printer.
Attention is drawn to the following places, which may be of interest for search:
More details on inkjets |
This place covers:
Line-by-line printing.
Attention is drawn to the following places, which may be of interest for search:
Alignment of dots |
This place covers:
Picture-by-picture printing, i.e. one complete color separation after the other. Focus on image signal circuits, e.g. start-of-scan determination, sync marks on the print medium, misregistration correction correcting misalignment of the individual print heads with respect to each other. Facet or face-to-face errors. This is the typical way color laser printers work: one latent image is generated after the other, and each is developed and transferred in turn.
Attention is drawn to the following places, which may be of interest for search:
Trapping is also used against misregistration, but is an image modification | |
Temperature | |
Purely mechanical corrections | G03G15/01 |
This place covers:
Using one drum for more than one color, thermal transfer printers.
This place covers:
Colour halftoning, colour multi-toning e.g. with use of more than one cyan (C1 and C2), screens, error diffusion.
H04N 1/40087 or some subgroup of H04N 1/405 may be applied additionally to H04N 1/52.
This place covers:
Printing with additional colours, e.g. additionally using orange and brown pigments, or white or gold; CMYKRGB printers.
This place covers:
General color image processing, color to 2-color conversion (e.g. RGB to black/red). Film type, document type, slide type, text+image, detection of mouse marker.
This place does not cover:
Circuits or arrangements for halftone screening |
This place covers:
Self-explanatory regarding noise and edge. A substantial part of this subgroup deals with trapping (spreading and choking image objects), either on bitmap or on page description language (PDL) level.
This place does not cover:
Retouching, i.e. modification of isolated colours only or in isolated picture areas only |
Attention is drawn to the following places, which may be of interest for search:
For integration of trapping in PDL workflow |
This place covers:
All kinds of color correction. Estimating spectrum from XYZ input.
This place does not cover:
Conversion of colour picture signals to a plurality of signals some of which represent particular mixed colours |
This place covers:
Color corrections involving representation of the image on monitor, e.g. for interactive correction or for use as soft proofer.
This place does not cover:
Matching printer and monitor for softproofing per se | |
With simulation on a subsidiary picture reproducer |
This place covers:
Fairly self-explanatory; typically the user selects one of several simulated, corrected images.
This place covers:
Usually transformations from RGB to CMY, but also used generally for transformations to output device values, as far as the focus is on the transformation. Here (matrix) equations are used.
This place covers:
Look-up tables for color conversion, typically to CMY. Also interpolation methods to calculate the in-between values not stored in the tables, e.g. tetrahedral or cubic interpolations.
This place does not cover:
Generating a fourth subtractive colour signal using look-up tables |
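The look-up-table interpolation covered above can be sketched as follows; tetrahedral interpolation is common in practice, but the simpler trilinear variant is shown here (illustrative only, names and the uniform node spacing are assumptions of this example):

```python
import numpy as np

def lut3d_trilinear(lut, rgb):
    """Trilinear interpolation in an N x N x N x 3 colour look-up table.

    lut: numpy array with nodes uniformly spaced over 0..1 on each RGB axis.
    rgb: input colour in 0..1; returns the interpolated output triple.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    i0 = np.minimum(pos.astype(int), n - 2)   # lower node index per axis
    f = pos - i0                              # fractional position in the cell
    out = np.zeros(3)
    # Accumulate the 8 cell corners weighted by their trilinear factors
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
    return out
```

An identity table reproduces its input exactly, which is a convenient sanity check for any interpolation scheme.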
This place covers:
Essentially the transformations to CMYK which involve use of equations. Gray component replacement (GCR), undercolor removal (UCR).
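Grey component replacement can be sketched with a naive equation-based conversion (illustrative only; real transformations account for ink behaviour, and the `gcr` fraction is a parameter invented for this example):

```python
def rgb_to_cmyk(r, g, b, gcr=1.0):
    """Naive RGB->CMYK with grey-component replacement.

    r, g, b in 0..1. gcr is the fraction of the grey component
    moved into the K channel (1.0 = full replacement, 0.0 = none).
    """
    c, m, y = 1 - r, 1 - g, 1 - b
    k = gcr * min(c, m, y)          # grey component carried by K ink
    return c - k, m - k, y - k, k
```

Undercolor removal follows the same pattern but typically removes only part of the grey component under dark areas.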
This place covers:
Four-colour look-up tables, also their interpolation.
This place covers:
General control and correction of tone reproduction curves. Gray balance, white balance as result thereof. Aspects of saturation correction.
This place does not cover:
Reduction of colour to a range of reproducible colours |
Attention is drawn to the following places, which may be of interest for search:
When focus is on white balance per se. | |
White balance in cameras |
This place covers:
Device profiles, e.g. ICC profiles, profile management for several devices, profile editing.
This place covers:
Printer or scanner calibration using color test patterns.
This place does not cover:
Matching two or more picture signal generators or two or more picture reproducers using test pattern analysis |
Attention is drawn to the following places, which may be of interest for search:
For B&W | |
Camera calibration | |
Color charts as such | G01J3/52
In electrophotography |
This place covers:
Specifically matching two (or more) devices to each other, e.g. for proofing, i.e. printer to printer or printer to monitor.
This place covers:
Limited to the two device scenario.
This place covers:
Gamut mapping and gamut conversion. Mainly within a device-independent space, in order to map the color reproducibility of one device onto that of another.
Attention is drawn to the following places, which may be of interest for search:
In relation to general image processing and computer graphics |
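A very simple form of gamut mapping is clipping an out-of-range colour toward its own grey axis; a minimal sketch (illustrative only, and assuming the grey level itself lies in 0..1):

```python
def clip_to_gamut(rgb):
    """Minimal gamut-mapping sketch: desaturate an out-of-range colour
    toward its own grey axis just enough to bring it into 0..1.

    Assumes the grey level (channel mean) is itself within 0..1.
    """
    grey = sum(rgb) / 3.0
    # Find the largest scale s in 0..1 so that grey + s*(rgb-grey) is in gamut
    s = 1.0
    for v in rgb:
        if v > 1.0:
            s = min(s, (1.0 - grey) / (v - grey))
        elif v < 0.0:
            s = min(s, (0.0 - grey) / (v - grey))
    return tuple(grey + s * (v - grey) for v in rgb)
```

Practical gamut mapping works in a device-independent space and may preserve hue or lightness selectively; this sketch only illustrates the clipping idea.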
This place covers:
Corrections to an image which depend on the type of image object, i.e. different corrections within one page, e.g. text and picture corrected differently.
Attention is drawn to the following places, which may be of interest for search:
Discrimination of image (object) types per se - (B&W), | |
Discrimination of image (object) types per se - (colour). |
This place covers:
Only hue changes, not luminance or chroma or saturation.
This place does not cover:
Saturation correction |
This place covers:
Correction of e.g. color fog or blue shift in image.
H04N 1/6027 has precedence.
This place covers:
E.g. histogram techniques in L*a*b* color space.
This place covers:
Environmental factors.
This place covers:
E.g. correction for sunlight on the monitor, artificial lighting, flare.
This place covers:
Different film types have different properties and thus need different corrections. For newspaper, correction for yellowing is necessary.
This place covers:
Correction limited to particular colors, e.g. the red of a red apple is selected and enhanced. Changing color information in a region.
Attention is drawn to the following places, which may be of interest for search:
For skin color |
This place covers:
With display of image on monitor for user selection and editing.
This place covers:
Colour coding closely related to apparatus.
Attention is drawn to the following places, which may be of interest for search:
Compression of B&W | |
Compression per se |
This place does not cover:
For different coding for different image types, but limited to B&W | |
Similar but for colour correction and not coding |
This place covers:
Palettized colors, including methods of obtaining the palettization and their coding. Rounding, conversion from true color to 8-bit using a palette.
This place covers:
Limited to e.g. YUV, Lab, etc.
This place does not cover:
Adapting to different types of images, e.g. characters, graphs, black and white image portions | |
Using a reduced set of representative colours, e.g. each representing a particular range in a colour space |
This place covers:
Limited to CMY or RGB, raw sensor data.
This place does not cover:
Adapting to different types of images, e.g. characters, graphs, black and white image portions | |
Transmitting or storing colour television type signals |
Attention is drawn to the following places, which may be of interest for search:
Demosaicing |
This place covers:
- Scanning arrangements using moving aperture, refractor, reflector or lens
- Scanning arrangements using switched light sources, solid-state devices or cathode-ray tubes by deflecting electron beams
- Scanning arrangements for motion picture films
This place does not cover:
Scanning of motion picture films |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Picture signal generators using optical-mechanical scanning means only, for colour television systems |
Attention is drawn to the following places, which may be of interest for search:
Scanning systems using movable or deformable optical elements for controlling the intensity, colour, phase, polarisation or direction of light |
This place does not cover:
Scanning of motion picture films |
Attention is drawn to the following places, which may be of interest for search:
Devices or arrangements for the control of the direction of light arriving from an independent light source |
This place covers:
Scanning details of electrically scanned solid-state picture reproducers.
This place does not cover:
Transforming light or analogous information into electric information using solid-state image sensors |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Picture reproducers using solid-state colour display devices |
This place covers:
Deflection circuits for cathode-ray tubes, when specially adapted for television
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Picture reproducers, specially adapted for colour television systems, using cathode-ray tubes |
Attention is drawn to the following places, which may be of interest for search:
Cathode ray oscillographs | |
Deflection circuits, of interest only in connection with cathode-ray tube indicators | |
Control arrangements or circuits using single beam cathode-ray tubes, the beam directly tracing characters, the information to be displayed controlling the deflection as a function of time in two spatial coordinates | |
Electric discharge tubes of discharge lamps | |
Linearisation of ramp of a sawtooth shape pulse |
Attention is drawn to the following places, which may be of interest for search:
Regulation of dc voltage in general |
This place does not cover:
Circuits for controlling dimensions of picture on screen by maintaining the cathode-ray tube high voltage constant |
Attention is drawn to the following places, which may be of interest for search:
Control arrangements or circuits using single beam cathode-ray tubes, the beam tracing a pattern independent of the information to be displayed, this latter determining the parts of the pattern rendered respectively visible and invisible |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Arrangements for convergence or focusing in cathode-ray tubes specially adapted for colour television systems |
Attention is drawn to the following places, which may be of interest for search:
Focussing circuits in general |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Scanning of colour motion picture films, e.g. for telecine |
Attention is drawn to the following places, which may be of interest for search:
Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine |
This place covers:
Hardware-related or software-related aspects of television signal processing at the transmitter side or the receiver side
- H04N 5/00 features transmitter techniques specially adapted to analog transmission of television signals. The corresponding function places for generic transmission are found in H04N 21/00 and subclasses H04B, H04H, H04L, H04W. This concerns servers, broadcast or multicast, home networks, and wireless networks per se.
- H04N 5/00 features receiver techniques specially adapted to the reception of analog television signals. The corresponding place for digital television receivers is H04N 21/00.
This place does not cover:
Scanning details of television systems; Combination thereof with generation of supply voltages |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Details of colour television systems | |
Wall TV displays | |
Details of stereoscopic television systems | |
Servers specifically adapted for the selective distribution of content | |
Client devices specifically adapted for the reception of, or interaction with, content in selective content distribution |
Attention is drawn to the following places, which may be of interest for search:
Selective content distribution | |
Constructional details related to the housing of computer displays | |
Constructional details or arrangements for portable computers | |
Power management in computer systems | |
Image enhancement or restoration | |
Image analysis | |
Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes | |
Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators | |
Diversity receivers | |
Broadcast synchronizing | |
Broadcast receivers | |
Synchronizing in TDMA | |
Receiver synchronizing | |
Home automation networks |
H04N 5/00 features a number of symbols, each with a corresponding Indexing Code (e.g. H04N 5/4448 as a symbol and H04N 5/4448 as an Indexing Code symbol).
Allocation of symbols and/or Indexing Code symbols:
- A document containing invention information relating to details of television elements will be given a H04N 5/00 group.
- A document containing additional information relating to details of television elements will be given a H04N 5/00 group.
- A document merely mentioning further details of television elements will not be given a group, but it may receive an Indexing Code if the disclosure is considered relevant, e.g. when conversion of interlace to progressive scanning (H04N 7/012) involves motion estimation, H04N 5/145 is added.
In this place, the following terms or expressions are used with the meaning indicated:
Edging | detection of edges |
Movement estimation | motion vector generation |
KTC | Thermal noise on capacitor |
In patent documents, the following abbreviations are often used:
GPS | global positioning system |
PC | personal computer |
STB | set top box |
This place does not cover:
Synchronising systems used in the transmission of pulse code modulated video signals with other pulse code modulated signal |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Colour synchronization | |
Synchronisation processes at server side for selective content distribution |
Attention is drawn to the following places, which may be of interest for search:
Synchronisation between a display unit and other units, e.g. other display units, video-disc players | |
Synchronisation of pulses having essentially a finite slope or stepped portions | |
Synchronisation of generators of electronic oscillations or pulses | |
Arrangements for synchronising receiver with transmitter in the transmission of digital information |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Circuits for reinsertion of dc and slowly varying components of colour signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Circuits for controlling the amplitude of colour signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Suppression of noise in television signal recording |
This place covers:
Circuitry, devices and other equipment specially adapted to be used in television studio, e.g. for mixing images or generation of special effects.
This place does not cover:
Cameras or camera modules comprising electronic image sensors; Control thereof |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Studio equipment related to broadcast communication |
Attention is drawn to the following places, which may be of interest for search:
Buildings for studios for broadcasting, cinematography, television or similar purposes |
This place covers:
Picture signal generation by scanning motion picture films, i.e. converting cinematographic films into video signals, e.g. telecine.
This place does not cover:
Scanning details therefor | |
Standards conversion therefor |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Scanning of colour motion picture films, e.g. for telecine |
This place covers:
Obsolete technology.
This place does not cover:
Picture signal generating by scanning motion picture films or slide opaques |
This place covers:
Studio circuits providing video special effects like combining different images, changing image aspect (geometric, orientation, etc.) or aesthetic/artistic aspect, providing transitions between images, background and foreground images synthesizing, mixing and switching.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Arrangements for broadcast or for distribution combined with broadcast |
This place covers:
Mobile studios, e.g. television studio equipment installed in vehicles for outdoor broadcasting.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Broadcast communication aspects of mobile studios |
This place covers:
Circuitry (electronic circuits) and driving.
This place does not cover:
Scanning details of television systems | |
Cameras or camera modules comprising electronic image sensors or control thereof | |
Circuitry of solid-state image sensors [SSIS]; Control thereof |
This place covers:
X-ray imaging systems that directly or indirectly detect incident X-ray photons.
This place does not cover:
Cameras or camera modules for generating image signals from X-rays | |
Circuitry of solid-state image sensors [SSIS] for transforming X-rays into image signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment | |
Measuring length, thickness or similar linear dimensions, angles, areas or irregularities of surfaces or contours, using X-rays | |
Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays, by transmitting the radiation through the material and forming a picture |
Attention is drawn to the following places, which may be of interest for search:
Measurement performed on radiation beams, e.g. position or section of the beam; Measurement of spatial distribution of radiation | |
Photographic processes for X-rays, infrared or ultraviolet light | |
Electrographic processes using X-rays, e.g. electroradiography | |
Apparatus for electrographic processes using X-rays, e.g. electroradiography |
This place covers:
Image sensors other than solid state image sensors and control thereof for near and far infrared [IR] cameras, for example pyroelectric imaging tubes or image intensifier tubes.
This place does not cover:
Cameras or camera modules for generating image signals from infrared radiation | |
Circuitry of solid-state image sensors [SSIS] for transforming only infrared radiation into image signals |
Attention is drawn to the following places, which may be of interest for search:
Radiation pyrometry, e.g. infrared or optical thermometry | |
Investigating or analysing materials using infrared light | |
Photographic processes for X-ray, infrared or ultraviolet ray | |
Organic devices sensitive to infrared radiation | |
Thermoelectric devices comprising a junction of dissimilar materials | |
Thermoelectric devices without a junction of dissimilar materials |
This place does not cover:
Picture signal circuitry for video frequency region |
Attention is drawn to the following places, which may be of interest for search:
Characteristics or internal components of servers | |
Transmitter circuits per se |
Attention is drawn to the following places, which may be of interest for search:
Digital communication modulator circuits | |
Modulation per se |
This place does not cover:
Picture signal circuitry for video frequency region |
Attention is drawn to the following places, which may be of interest for search:
Characteristics or internal components of clients | |
Receiver circuits per se |
Attention is drawn to the following places, which may be of interest for search:
Characteristics of or internal components of the client for processing graphics | |
Generation of individual character patterns for visual indicators | |
Graphics pattern generators for visual indicators |
Attention is drawn to the following places, which may be of interest for search:
Displaying supplemental content in a region of the screen |
Attention is drawn to the following places, which may be of interest for search:
Decoding digital information by demodulating in clients | |
Digital communication demodulator circuits | |
Demodulation per se |
Attention is drawn to the following places, which may be of interest for search:
Accessing communication channels in clients, tuning | |
Tuning resonant circuits per se | |
Automatic frequency control per se |
Attention is drawn to the following places, which may be of interest for search:
Muting the audio signal in clients |
Attention is drawn to the following places, which may be of interest for search:
Invisible or silent tuning | |
Processing of audio elementary streams in clients |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Generation of supply voltages, in combination with electron beam deflecting |
Attention is drawn to the following places, which may be of interest for search:
Power management in clients | |
Regulating of voltage or current in general | |
Transformers | |
Supplying or distributing electric power, in general | |
Static converters |
This place does not cover:
Furniture aspects of television cabinets |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Combinations of a television receiver with apparatus having a different main function |
This place does not cover:
Scanning details of television systems |
This place covers:
Circuit details of cathode-ray display tubes pertaining to the conversion of electrical information into light information, when specially adapted for television displays.
This place does not cover:
Scanning in television systems by deflecting electron beam in cathode-ray tube |
This place covers:
Circuit details of the conversion of electric information into light information in electroluminescent television displays.
Attention is drawn to the following places, which may be of interest for search:
Control arrangements or circuits using electroluminescent elements for presentation of a single character by selection from a plurality of characters, or by composing the character by combination of individual elements | |
Control arrangements or circuits using electroluminescent panels for presentation of an assembly of a number of characters, by composing the assembly by combination of individual elements arranged in a matrix |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Projection devices for colour picture display |
This place covers:
Video data recording:
- Specially adapted recording devices such as a VCR, PVR, high speed camera, camcorder or a specially adapted PC
- Interfaces between recording devices and other devices for input and/or output of video signals such as TVs, video cameras, other recording devices
- Video recorder programming
- Adaptations of the video signal for recording on specific recording media such as HDD, tape, drums, holographic support, semiconductor memories
- Adaptations for reproducing at a rate different from the recording rate such as trick play modes and stroboscopic recording
- Processing of the video signal for noise suppression, scrambling, field or frame skip, bandwidth reduction
- Impairing the picking up, for recording, of a projected video signal
- Regeneration of a recorded video signal, or of a video signal for recording
- Video signal recording wherein the recorded video signal may be accompanied by zero, one or more further video signals (e.g. stereoscopic signals or video signals corresponding to different story lines)
- Production of a motion picture film from a television signal
Details specific to this group:
- The recording equipment is for personal use and not for studio use
- The subgroups H04N 5/92, H04N 5/93, H04N 5/94 and H04N 5/95 are for black and white (monochrome) video signals only, while the remaining subgroups H04N 5/7605, H04N 5/765, H04N 5/78, H04N 5/80, H04N 5/84, H04N 5/89, H04N 5/903, H04N 5/907 and H04N 5/91 are for both black and white and colour video signals.
- The subject-matter in the range H04N 5/92 - H04N 5/956 deals with recording, and processing for recording, of only black and white video signals, while H04N 9/79 - H04N 9/898 deals with recording and processing for recording of colour video signals.
- H04N 5/76 (video recording) distinguishes itself from editing, which is found in G11B 27/00, in that the signals recorded and reproduced are video signals.
- H04N 5/76 is a function place for recording or processing for recording. H04N 21/433 describes applications for recording in a distribution system.
- H04N 5/76 features recording devices specially adapted to video data recording that can be programmed. The programming may be done by a user or by using an algorithm. Business methods where the video recording feature or step is well known are generally classified in G06Q 30/02.
- H04N 5/76 contains video cameras that record video data to a recording medium. Constructional details of video cameras are found in H04N 23/00.
- H04N 5/76 is an application place for video data trick play. Reproducing data in general at a rate different from the recording rate is found in G11B 27/005.
- H04N 5/76 contains applications of video data processing for scrambling/encrypting video data for recording. Systems for rendering a video signal unintelligible are found in H04N 7/16 and H04N 21/00.
- H04N 5/76 is an application place for video data reduction for recording. Video data compression is found in H04N 19/00.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Video surveillance | |
Selective content distribution | |
Controlling video cameras | |
Alarm system using video cameras |
Attention is drawn to the following places, which may be of interest for search:
Production of a video signal from a motion picture film | |
Interfaces | |
Video data coding | |
Network video distribution | |
Personal video recorder in selective content distribution systems | |
User interface of set top boxes | |
Video camera constructional details | |
Video data processing for printing | |
Systems for buying and selling, e.g. of video content | |
Business methods related to the distribution of video data content | |
Information storage based on relative movement between record carrier and transducer | |
Video editing | |
Recording techniques specially adapted to a recording medium for recording digital data in general | |
Control of video recorders where the video signal is not substantially involved | |
Static stores |
A document that does not explicitly mention that the video signal is a monochrome video signal is to be interpreted as relating to a colour video signal. As a consequence, some classes in H04N 5/76 specific to monochrome signal recording have fallen out of use; instead, the corresponding colour symbols should be given to such documents:
Allocation of CPC symbols:
- A document containing invention information relating to video data recording will be given an H04N 5/76 CPC group.
- A document containing additional information relating to video data recording (in particular, if the document discloses a detailed video recording device) will be given an H04N 5/76 Indexing Code symbol.
- A document containing invention information for more than one invention may be given more than one H04N 5/76 CPC group.
- A document merely mentioning recording will not be given a CPC group, but it may receive an Indexing Code if the disclosure is considered relevant.
Allocation of Indexing Code symbols in combination with CPC:
- When assigning H04N 5/76 as CPC group, giving an additional Indexing Code is mandatory.
Combined use of Indexing Code symbols:
- Indexing Code symbols may be allocated as necessary to describe additional information in a document.
Symbol allocation rules:
- Documents defining recording devices that have an interface, e.g., connected to a network, should have at least one of the more specific H04N 5/765 Indexing Code symbols.
- Documents dealing with invention information about measures to prevent recording of projected images should be given the H04N 2005/91392 Indexing Code symbol.
In this place, the following terms or expressions are used with the meaning indicated:
Video or video data | Video signal, analogue or digital, with or without accompanying audio |
Attention is drawn to the following places, which may be of interest for search:
Arrangements for the associated working of recording or reproducing apparatus with related apparatus |
This place covers:
Video cameras as recording devices.
Attention is drawn to the following places, which may be of interest for search:
Television cameras |
Attention is drawn to the following places, which may be of interest for search:
TV-receiver details | |
Recording/reproduction devices integrated in TV-receivers | |
Synchronisation between a display unit and video-disc players |
This place covers:
Magnetic disks.
Attention is drawn to the following places, which may be of interest for search:
Recording on, or reproducing or erasing from, magnetic drums | |
Recording on, or reproducing or erasing from, magnetic disks | |
Magnetic drum carriers | |
Magnetic disk carriers |
This place covers:
Video recorder programming applications, even though the group title reads (recording) "on tape".
Video recorder programming (reservation recording).
Attention is drawn to the following places, which may be of interest for search:
Recording on, or reproducing or erasing from, magnetic tapes | |
Magnetic tape carriers | |
Arrangements for device control affected by the broadcast information |
Attention is drawn to the following places, which may be of interest for search:
Fixed mountings of heads relative to magnetic record carriers |
Attention is drawn to the following places, which may be of interest for search:
Disposition or mounting of heads on rotating support |
This place covers:
- Trick play modes as well as processing for recording to enable the reproduction of video data at a rate different from the recording rate.
- High speed recording cameras.
- Speed control during recording or reproducing, e.g. reproducing at variable speed.
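The trick-play modes covered above amount to reproducing recorded frames at a rate different from the recording rate, i.e. skipping frames for fast playback or repeating them for slow motion. The following sketch is purely illustrative (the function name and structure are not from any standard):

```python
def trick_play_indices(frame_count, speed):
    """Select recorded-frame indices for playback at `speed` times the
    recording rate: speed > 1 skips frames (fast forward), 0 < speed < 1
    repeats frames (slow motion). Illustrative sketch only."""
    indices = []
    position = 0.0
    while int(position) < frame_count:
        indices.append(int(position))
        position += speed
    return indices

# 10 recorded frames reproduced at double speed: every second frame.
print(trick_play_indices(10, 2.0))  # [0, 2, 4, 6, 8]
# Half speed: each frame is shown twice.
print(trick_play_indices(4, 0.5))   # [0, 0, 1, 1, 2, 2, 3, 3]
```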
This place does not cover:
Television signal processing recording |
Attention is drawn to the following places, which may be of interest for search:
Recording or reproducing using electrostatic charge injection; record carriers therefor |
Attention is drawn to the following places, which may be of interest for search:
Recording or reproducing by optical means; record carriers therefor |
This place covers:
Optical discs.
Attention is drawn to the following places, which may be of interest for search:
Recording or reproducing by optical means with cylinders | |
Recording or reproducing by optical means with disks |
This place does not cover:
Television signal processing recording |
Attention is drawn to the following places, which may be of interest for search:
Recording, reproducing or erasing by using optical interference patterns, e.g. holograms |
This place does not cover:
Television signal processing recording |
Attention is drawn to the following places, which may be of interest for search:
Recording or reproducing using record carriers having variable electrical capacitance; Record carriers therefor |
This place does not cover:
Television signal processing recording |
Attention is drawn to the following places, which may be of interest for search:
Television signal recording based on relative movement between record carrier and transducer | |
Static stores per se |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Processing of colour television signals in connection with recording |
This place does not cover:
Regeneration of analogue synchronisation signals |
Attention is drawn to the following places, which may be of interest for search:
Circuitry for suppressing or minimizing disturbance in television systems in general |
This place covers:
- Scrambling and encryption of video data for recording.
- Copy-protection systems.
At least one H04N 5/913 Indexing Code symbol should be allocated to such a document to further specify the scrambling method.
This place covers:
Compression of analogue video signals.
Attention is drawn to the following places, which may be of interest for search:
Transformation of the television signal for recording by pulse code modulation | |
Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal | |
Systems for the transmission of television signals using pulse code modulation | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Processing of colour television signals for recording the signal in a plurality of channels, the bandwidth of each channel being less than the bandwidth of the signal |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Transformation of the colour television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback |
The corresponding colour symbol should be allocated: H04N 9/82
This place does not cover:
Television signal processing for recording for bandwidth reduction by dividing samples or signal segments |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components | |
Transformation of the television signal for recording involving pulse code modulation of the composite colour video-signal |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components with processing of the sound signal |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Regeneration of colour television signals by assembling picture element blocks in an intermediate memory |
The corresponding colour symbol should be allocated: H04N 9/87
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Regeneration of colour television signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Signal drop-out compensation in the regeneration of colour television signals |
The corresponding colour symbol should be allocated: H04N 9/88
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Signal drop-out compensation for signals recorded by pulse code modulation in the regeneration of colour television signals |
Attention is drawn to the following places, which may be of interest for search:
Error detection or correction of digital signals for recording in general |
This place does not cover:
Regeneration of analogue synchronisation signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Time-base error compensation in the regeneration of colour television signals |
The corresponding colour symbol should be allocated: H04N 9/89
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Time-base error compensation using an analog memory, e.g. a CCD-shift register, the delay of which is controlled by a voltage controlled oscillator, in the regeneration of colour television signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Time-base error compensation using a digital memory with independent write-in and read-out clock generators, in the regeneration of colour television signals |
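Time-base error compensation with a digital memory, as referred to above, relies on writing samples with the (jittery) recovered clock and reading them out with an independent stable clock. A minimal sketch, assuming a simple FIFO model (the class and method names are hypothetical, not drawn from any standard):

```python
from collections import deque

class TimeBaseCorrector:
    """Minimal sketch of a digital time-base corrector: samples arrive with
    timing jitter and are written into a FIFO by the write-in clock; a
    stable, independent read-out clock drains the FIFO at a constant rate,
    removing the jitter. Illustrative only."""

    def __init__(self, depth=8):
        # Bounded memory; if the writer overruns, the oldest sample is lost.
        self.fifo = deque(maxlen=depth)

    def write(self, sample):
        # Driven by the (jittery) write-in clock.
        self.fifo.append(sample)

    def read(self, last=0):
        # Driven by the stable read-out clock; on underrun, repeat the
        # previous sample as a simple concealment strategy.
        return self.fifo.popleft() if self.fifo else last

tbc = TimeBaseCorrector()
for s in [10, 20, 30]:                # burst of jittery writes
    tbc.write(s)
out = [tbc.read() for _ in range(4)]  # steady reads; one underrun
print(out)                            # [10, 20, 30, 0]
```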
This place covers:
- structural or hardware-related aspects of television systems, involving
- analogue television signals or digital television signals processed at low level (e.g. physical layer in the OSI model);
- details on conversion of television standards;
- circuits for recovering digital non-picture data in analogue television signals;
- specific arrangements allowing transmission of television signals via electric cables, optical fibres or using a GHz frequency band.
H04N 5/00 covers details of television systems and circuitry for processing analogue television signals or digital television signals processed at pixel level. Conversion between television standards and circuits for recovering digital non-picture data (slicers) are however classified in H04N 7/00.
H04N 9/00 and H04N 11/00 are to be considered when the focus is on colour aspects.
Aspects of diagnosis, testing and measuring for television systems are covered by H04N 17/00.
Television systems involving digital television signals not processed at low level should normally be classified in H04N 21/00.
Broadcast systems which are not specifically adapted for television signals should be classified in H04H.
This place does not cover:
Scanning details of television systems; Combination thereof with generation of supply voltages | |
Generation of supply voltages, in combination with electron beam deflecting | |
Details of television systems | |
Details of systems specific to colour television | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals | |
Selective content distribution |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Colour television systems | |
Stereoscopic television systems |
Attention is drawn to the following places, which may be of interest for search:
Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes | |
Aspects of video games | |
Arrangements in vehicles for holding or mounting or controlling radio sets, television sets, telephones, or the like | |
Mounting of cameras operative during drive (of a vehicle) | |
Arrangements for entertainment or communications for passenger or crew in aircraft, e.g. radio, television | |
Recognition of data in general | |
Commerce, e.g. shopping or e-commerce | |
Image data processing or generation, in general | |
Remote control devices | |
Electrically-operated teaching apparatus or devices working with questions and answers | |
Simulators for teaching or training purposes | |
Miscellaneous advertising or display means not provided for elsewhere | |
Combined visual and audible advertising or displaying, e.g. for public address | |
Broadcast communication in general | |
Wireless networks in general |
A document containing invention information relating to one of the subgroups will be given the corresponding CPC symbol.
A document containing additional information relating to one of the subgroups will be given the corresponding Indexing Code.
A document merely mentioning a television system will not be given a CPC symbol, but it may receive an Indexing Code if the disclosure is considered relevant.
In this place, the following terms or expressions are used with the meaning indicated:
3:2 pull-down pattern | a pattern of images where the first image is repeated 3 times and the second image is repeated twice |
HDTV | High Definition TeleVision |
ISDN | Integrated Services Digital Network (ISDN): a circuit-switched telephone network system |
Letter-box system | television system which displays images comprising a central part and black bars above and below the central part |
MAC | Multiplexed Analogue Components (MAC): a satellite television transmission standard |
Video signal | television signal |
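The 3:2 pull-down pattern defined above can be illustrated by a short sketch that expands film frames (e.g. 24 per second) into the repeated-field sequence used for 60-field video (the helper below is hypothetical, for illustration only):

```python
def pulldown_3_2(film_frames):
    """Expand film frames into a 3:2 repetition pattern: the first frame
    is repeated three times, the next twice, alternating. Maps 24 film
    frames to 60 video fields. Illustrative sketch only."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

print(pulldown_3_2(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```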
In patent documents, the following abbreviations are often used:
CRT | Cathode Ray Tube, a technology of display |
CATV | Community Antenna Television |
CCTV | Closed Circuit TeleVision |
EPG | Electronic Programme Guide |
GUI | Graphics User Interface |
MATV | Master Antenna TeleVision |
MPEG | Motion Picture Experts Group; a family of standards used for coding audio-visual information in a digital compressed format |
PC | Personal Computer |
PVR | Personal Video Recorder |
STB | Set-Top Box |
URL | Uniform Resource Locator |
VOD | Video On Demand |
This place does not cover:
Transmission of still pictures via a television channel |
Examples of places in relation to which this place is residual:
Systems with supplementary picture signal insertion during a portion of the active part of a television signal, e.g. during top and bottom lines in a HDTV letter-box system | |
Conversion of standards | |
High-definition television systems | |
Systems for the transmission of digital non-picture data, e.g. of text during the active part of a television frame | |
Systems for the transmission of one television signal, i.e. both picture and sound, by a single carrier | |
Systems for the simultaneous transmission of one television signal, i.e. both picture and sound, by more than one carrier | |
Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band | |
Adaptations for transmission by electric cable | |
Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal | |
Systems for two-way working | |
Analogue secrecy systems; Analogue subscription systems | |
Closed circuit television systems, i.e. systems in which the signal is not broadcast | |
Adaptations for transmission via a GHz frequency band, e.g. via satellite | |
Adaptations for optical transmission | |
Systems for the transmission of television signals using pulse code modulation |
This place covers:
- systems with auxiliary information data allowing improved picture quality transmitted during the active part of the TV signal, e.g. in black bands at the upper and lower edges of the picture;
- letter-box systems.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Conversion of colour television standards |
Attention is drawn to the following places, which may be of interest for search:
Studio circuits for television systems involving alteration of picture size or orientation | |
Receiver circuitry for receiving on more than one standard at will | |
Saving bandwidth in low bit-rate video transmission | |
Circuits specific for processing colour signals | |
Processing specific to video coder/decoder: subsampling at the coder and/or sample restitution by interpolation at the coder or decoder | |
Processing specific to video coder/decoder: transcoding to realise interoperability between different video coding standards | |
Image scaling in general | |
Video signal processing specific to visual displays | |
Adapting incoming signals to the display format of the display terminal | |
Frame rate conversion for reducing blurring effect in a hold-type liquid crystal display (LCD) | |
Interpolation filters |
In patent documents, the following abbreviations are often used:
FRC | Frame Rate Converter |
FRUC | Frame Rate Up-Converter |
MC-FRC | Motion Compensation - Frame Rate Converter |
NTSC | National Television System Committee |
PAL | Phase alternating line |
SECAM | Séquentiel couleur à mémoire (Sequential Colour with Memory) |
This place covers:
- analogue high-definition systems;
- digital high-definition systems when they do not fall within the scope of other groups (H04N 13/00, H04N 19/00, H04N 21/00).
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
High-definition colour television systems |
This place covers:
Systems using the active part of a television signal, or part of it, for transmitting digital non-picture data not intended to be watched as such on a display.
This place does not cover:
Transmission of still pictures via a television channel | |
Transmission of digital non-picture data during the vertical blanking interval |
This place covers:
- Circuits for recovering data transmitted during the non-active part of the television signal, e.g. vertical or horizontal blanking interval;
- Circuits for recovering data transmitted during the active part of the television signal instead of the pictorial signal.
This place covers:
Systems for transmitting in a particular way both picture and sound by a single carrier.
This place does not cover:
Systems for the transmission of more than one television signal with signal insertion during the horizontal blanking interval | |
Systems for the transmission of more than one television signal with signal insertion during the vertical blanking interval |
This place covers:
Systems for transmitting in a particular way both picture and sound by more than one carrier.
This place does not cover:
Systems for the transmission of more than one television signal with signal insertion during the horizontal blanking interval | |
Systems for the transmission of more than one television signal with signal insertion during the vertical blanking interval |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Systems with supplementary picture signal insertion during a portion of the active part of a television signal, e.g. during top and bottom lines in a HDTV letter-box system |
In patent documents, the following abbreviations are often used:
CC | Closed Caption |
CRI | Clock Run-In |
HBI | Horizontal Blanking Interval |
RIC | Run-In Clock |
VBI | Vertical Blanking Interval |
This place does not cover:
Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal |
Attention is drawn to the following places, which may be of interest for search:
Coaxial connectors for coaxial cables | |
Networks for connecting several sources or loads, working on different frequencies or frequency bands, to a common load or source, particularly adapted for use in common antenna systems | |
Networks for connecting several sources or loads, working on the same frequency or frequency band, to a common load or source, particularly adapted for use in common antenna systems | |
Line transmission systems, in general | |
Repeater circuits for signals in two different frequency ranges transmitted in opposite directions over the same transmission path | |
Arrangements of wired systems for broadcast | |
CATV systems | |
Home automation networks | |
Distribution of signals within a home automation network, e.g. involving splitting/multiplexing signals to/from different paths |
This place does not cover:
Systems for the transmission of television signals using pulse code modulation |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Colour television systems with bandwidth reduction |
Attention is drawn to the following places, which may be of interest for search:
Scanning details of television systems | |
High-definition television systems |
This place does not cover:
Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal | |
Analogue secrecy or subscription systems with two-way working, e.g. subscriber sending a programme selection signal |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Client devices specifically adapted for the reception of, or interaction with, content, e.g. STB [set-top box]; Operations thereof |
Attention is drawn to the following places, which may be of interest for search:
Systems for two-way working in the scanning, transmission or reproduction of documents or the like | |
Telephonic communication systems combined with television receiver for reception of entertainment or information matter |
Attention is drawn to the following places, which may be of interest for search:
Systems for two-way working between two video terminals, e.g. videophone | |
Arrangements for conference in data switching networks | |
Telephonic conference arrangements | |
Multimedia conference systems |
This place covers:
Television systems where transmitters such as head-ends distribute analog television signals to television receivers. Typically, access to analog television information is restricted by a subscription system, where the television viewer is charged for accessing the programs he/she has selected. To prevent eavesdropping, the transmitted analog signals are scrambled by the transmitter, e.g. in the time or amplitude domain, and descrambled at reception. Such systems can work in a unidirectional mode, where the transmitter decides which analog television programs the subscriber is entitled to view, or in a bidirectional mode, where the user can request to view a movie.
Unidirectional or bidirectional television systems involving the distribution of digital video signals fall within the scope of H04N 21/00.
This place does not cover:
Coin-freed apparatus |
Attention is drawn to the following places, which may be of interest for search:
Coin-freed and like apparatus in general |
This place covers:
- Entitlement systems, where the receiver is entitled to access the analog television program. Usually the user is billed therefor
- Analog conditional access systems
Attention is drawn to the following places, which may be of interest for search:
Payment schemes | |
E-commerce |
This place covers:
Systems in which television programs are broadcast in a scrambled form and only receivers fitted with e.g. a conditional access card can descramble them.
This place covers:
Systems where typically head-ends select which receivers are entitled to receive the analog television programs.
This place does not cover:
Registering at central by two-way working | |
Centralised control of user terminal subsequent to an upstream request signal |
This place covers:
- Systems operating in the time domain, e.g. by displacing synchronisation signals relative to active picture signals or vice versa or by changing or reversing the order of active picture signal portions
- Systems operating in the amplitude domain, e.g. by modifying synchronisation signals or by inverting the polarity of active picture signal portions
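The time-domain technique above (changing or reversing the order of active picture signal portions) can be sketched as a key-seeded permutation of the picture lines, which also connects to the digital-key-based scrambling of H04N 7/1675. All function names here are illustrative; real systems classified in this group operate on analogue waveforms:

```python
import random

def line_order(n, key):
    # Key-seeded pseudo-random permutation of the active picture lines;
    # transmitter and receiver derive the same order from the shared key.
    rng = random.Random(key)
    order = list(range(n))
    rng.shuffle(order)
    return order

def scramble(lines, key):
    # Transmit the lines in permuted order (time-domain scrambling).
    return [lines[i] for i in line_order(len(lines), key)]

def descramble(scrambled, key):
    # Recompute the permutation from the key and invert it.
    clear = [None] * len(scrambled)
    for pos, i in enumerate(line_order(len(scrambled), key)):
        clear[i] = scrambled[pos]
    return clear

lines = [f"line{i}" for i in range(8)]
assert descramble(scramble(lines, key=7), key=7) == lines
```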
Attention is drawn to the following places, which may be of interest for search:
Secret communication by adding a second signal to make the desired signal unintelligible |
This place covers:
Scrambling or descrambling of analog television signals based on digital keys
This place does not cover:
Pseudo-random number generators in general | |
Computer security | |
Cryptography in general | |
Network security |
This place covers:
Bidirectional systems
Attention is drawn to the following places, which may be of interest for search:
Systems for two-way working in the scanning, transmission or reproduction of documents or the like | |
Client devices specifically adapted for the reception of, or interaction with, content, e.g. STB [set-top-box]; Operations thereof |
This place covers:
Details of analog signal processing, coding or modulating in the upstream channel
This place covers:
Typically on-demand systems for analog TV programs
This place covers:
Details of analog Video-on-Demand servers
Attention is drawn to the following places, which may be of interest for search:
Signal generation from motion picture films | |
Instruments for performing medical examinations of the interior of cavities or tubes of the body combined with television appliances | |
Real time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles | |
Arrangements in vehicles for holding or mounting or controlling radio sets, television sets, telephones, or the like | |
Mounting of cameras operative during drive of a vehicle; Arrangements of control thereof relative to the vehicle | |
Arrangements for entertainment or communications for passenger or crew in aircraft, e.g. radio, television | |
Scanning a visible indication of a measured value and reproducing this indication at a remote place, e.g. on the screen of a cathode-ray tube | |
Recognition of data in general | |
Image processing in general | |
Burglar, theft, or intruder alarms using television cameras |
Attention is drawn to the following places, which may be of interest for search:
Space-based or airborne stations for radio transmission systems | |
Arrangements of satellite networks for broadcast |
Attention is drawn to the following places, which may be of interest for search:
Transmission systems employing electromagnetic waves other than radio waves, e.g. light | |
Arrangements of optical systems for broadcast |
This place does not cover:
Source coding or decoding of a digital video signal | |
Error protection or correction of a digital video signal | |
Selective content distribution, e.g. interactive television |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Colour television systems using pulse code modulation |
Attention is drawn to the following places, which may be of interest for search:
Transmission systems using pulse code modulation, in general |
H04N 21/00 takes precedence, except for source coding or decoding of a digital video signal (H04N 19/00 takes precedence in this case), and for error protection, detection or correction of a digital video signal (H04N 19/89 takes precedence in this case):
- in particular, for video bitstream assembling and disassembling, H04N 21/236 and H04N 21/434 take precedence;
- for channel coding and decoding of a digital video signal, H04N 21/2383 and H04N 21/4382 take precedence.
This place does not cover:
Assembling of a multiplex stream, by combining a video stream with other content or additional data, remultiplexing of multiplex streams, insertion of stuffing bits into the multiplex stream, assembling of a packetised elementary stream at server side | |
Disassembling of a multiplex stream, remultiplexing of multiplex streams, extraction or processing of Service Information, disassembling of packetised elementary stream at client side |
This place covers:
Older multiplexing/demultiplexing and transport technologies used before the introduction of the MPEG system layer, based on a format, e.g. a frame format, usable for transmission or recording of compressed or uncompressed video data, possibly combined with other content, e.g. audio
This place does not cover:
Multiplexing/demultiplexing of asynchronous signals, e.g. MPEG system layer type signals, involving the use of transport streams, program streams | |
Use of PCR for clock recovery | |
Use of time stamps (PTS, DTS) for content synchronisation |
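For background, the timing concepts named in the exclusions above (PCR for clock recovery, PTS/DTS for content synchronisation) can be sketched briefly. The sketch below is illustrative only: the 27 MHz / 90 kHz clock relationship and the PCR base/extension split follow the MPEG-2 Systems model, while the function names are ours.

```python
# Illustrative sketch of MPEG-style timing: PCR samples discipline a 27 MHz
# system time clock, and 90 kHz PTS values decide when each access unit is
# presented. Names and values are hypothetical examples, not from a standard text.

PCR_HZ = 27_000_000   # system clock frequency carried by PCR samples
PTS_HZ = 90_000       # resolution of presentation/decoding time stamps

def pcr_to_seconds(pcr_base: int, pcr_ext: int) -> float:
    """Convert a PCR (33-bit base at 90 kHz + 9-bit extension at 27 MHz) to seconds."""
    return (pcr_base * 300 + pcr_ext) / PCR_HZ

def ready_to_present(pts: int, stc_seconds: float) -> bool:
    """A frame is presented once the recovered system time clock reaches its PTS."""
    return stc_seconds >= pts / PTS_HZ

stc = pcr_to_seconds(pcr_base=90_000, pcr_ext=0)      # recovered clock at 1.0 s
print(ready_to_present(pts=90_000, stc_seconds=stc))  # -> True
```

In a real decoder the PCR samples drive a phase-locked loop rather than being read directly; the sketch only shows the unit relationships.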
Multiplexing/demultiplexing video and audio: H04N 21/2368, H04N 21/4341 take precedence;
multiplexing/demultiplexing video and additional data: H04N 21/23614, H04N 21/4348 take precedence;
multiplexing/demultiplexing several video streams: H04N 21/2365, H04N 21/4347 take precedence;
multiplexing/demultiplexing isochronously with video sync, e.g. according to bit-parallel or bit-serial interface formats, such as SDI: H04N 21/23602, H04N 21/4342 take precedence.
This place covers:
Synchronisation for signals falling under H04N 7/54
This place does not cover:
Use of PCR for clock recovery | |
Use of time stamps (PTS, DTS) for content synchronisation |
Attention is drawn to the following places, which may be of interest for search:
This place covers:
- Picture signal generators
- Picture reproducers using opto-mechanical scanning, cathode-ray tubes, solid-state colour displays or projection devices
- Conversion of monochrome to colour image signals
- Colour synchronisation
- Processing the brightness and chrominance signals in relation to each other
- Processing of colour signals in general as well as specifically for recording
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Details of stereoscopic colour television systems |
Attention is drawn to the following places, which may be of interest for search:
In this place, the following terms or expressions are used with the meaning indicated:
ESLM | Electronic Spatial Light Modulator |
DMD | Deformable mirror device |
LCLV | Liquid Crystal Light Valve |
D-ILA | Direct Drive Image Light Amplifier |
HDR | High Dynamic Range |
LCOS | Liquid Crystal On Silicon |
DSP | Digital Signal Processor |
DLP | Digital Light Processor |
CRT | Cathode Ray Tube |
RGB | Red Green Blue |
CYM | Cyan Yellow Magenta |
Attention is drawn to the following places, which may be of interest for search:
Details of scanning of motion picture films, e.g. for telecine, applicable to television systems in general | |
Picture signal generating by scanning motion picture or slide opaques, e.g. for telecine |
This place covers:
Video walls (excluding multi-projection displays)
This place does not cover:
Scanning of colour motion picture films, e.g. for telecine | |
Video walls or multiscreen displays when each modular display is a projection device. |
Attention is drawn to the following places, which may be of interest for search:
This place does not cover:
Scanning of colour motion picture films, e.g. for telecine |
Attention is drawn to the following places, which may be of interest for search:
Scanning by optical-mechanical means only, applicable to television systems in general |
This place does not cover:
Scanning of colour motion picture films |
Attention is drawn to the following places, which may be of interest for search:
Scanning by deflecting electron beam in cathode-ray tube, applicable to television systems in general | |
Control arrangements or circuits using colour cathode-ray tube indicators | |
Cathode-ray tubes per se |
This place covers:
Details of degaussing circuits.
Attention is drawn to the following places, which may be of interest for search:
Details of CRT or electron-beam tubes |
This place covers:
- Image projection using an electronic spatial light modulator [ESLM], i.e. processing of electrical image signals provided to the ESLM for the generation of projector control signals, for controlling the ESLM, e.g. control of the light source
- light conditioning, based on the electronic image signal, specially adapted for the ESLM
- in-projector image processing, electronic image data manipulation, e.g. during display or projection
- details of projectors peculiar to the use of an ESLM, e.g. dichroic and polarizing arrangements specially adapted for the ESLM
- remote control of projectors peculiar to the ESLM, e.g. affecting their operation, or based on a generated image signal
- adaptations peculiar to the use of an ESLM and/or the display, the transmission, recording or other use of electrical image data, and related circuitry, e.g. mounting of ESLM, integrated
Subclass G03B contains subject-matter relating to the following aspects:
- Aspects of apparatus/methods for projecting or viewing images using an electronic spatial light modulator [ESLM], insofar as they correspond to those of said apparatus/methods for projecting or viewing images using film stock, photographic film or slides, i.e. insofar as not peculiar to the presence of the ESLM, e.g. mounting of optical elements not peculiar to the presence of the ESLM, and their related controls not peculiar to the presence of the ESLM, e.g. cooling, beam shaping, optical keystone correction;
- (opto-)mechanical image enhancement in printers or projectors (e.g. keystone correction);
- constructional aspects of projectors, e.g. cooling, beam shaping, light-integrating means not peculiar to the ESLM;
Subclass G02B contains subject-matter relating to the following aspects:
- Optical image modulation using directional light control, e.g. deformable mirror devices (DMDs),
- laser speckle optics,
- head-up projection displays (head-mounted displays).
Subclass G02F contains subject-matter relating to the following aspects:
- Control of light using liquid crystals.
Subclass G09F 9/00 contains subject-matter relating to the following aspects:
- Indicating arrangements for variable information (e.g. street or stadium displays).
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Image reproducers |
Attention is drawn to the following places, which may be of interest for search:
Projection arrangements for image reproduction e.g. using eidophor | |
Optical systems in general | |
Devices for controlling the direction of light, e.g. DMDs | |
Head-up displays | |
Speckle reduction | |
Light control | |
Film projection and photography | |
Projection devices using film stock, photographic film or slides | |
Details of film projectors | |
Projection screens | |
Image processing per se | |
Displaying of variable information using colour tubes | |
Control of colour illumination sources | |
Liquid crystal colour display with specific pixel layout | |
Characterised by the way in which colour is displayed | |
Using circuits for interfacing with colour displays | |
Using colour palettes |
This place covers:
Scanning projection devices wherein a light beam (e.g. a point beam or a linear beam from a laser or an LED) is scanned across a screen (e.g. using scanning mirrors).
Attention is drawn to the following places, which may be of interest for search:
Projectors using laser light sources in general | |
XY Scanning, scanning systems in general | |
Laser speckle optics | |
Semiconductor lasers |
This place covers:
- Synchronisation of the modulated colour signal in relationship with the colour subcarrier,
- colour subcarrier generation in relationship with the extracted burst.
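The synchronous modulation and demodulation referred to above can be sketched as quadrature demodulation locked to the colour subcarrier: the modulated chroma signal is multiplied by two subcarrier phases and low-pass filtered to recover the colour-difference components. The sample rate and function names below are illustrative assumptions; only the NTSC subcarrier frequency is a standard value.

```python
# Illustrative sketch of synchronous (quadrature) demodulation of a chroma
# signal: multiply by two subcarrier phases locked to the burst, then low-pass
# filter (here a simple average over whole cycles). Names are illustrative.
import math

FSC = 3_579_545.0   # NTSC colour subcarrier frequency, Hz
FS = 8 * FSC        # assumed sample rate: 8 samples per subcarrier cycle

def modulate(u: float, v: float, n: int) -> float:
    """Chroma sample n: two colour-difference signals on quadrature carriers."""
    ph = 2 * math.pi * FSC * n / FS
    return u * math.sin(ph) + v * math.cos(ph)

def demodulate(samples: list) -> tuple:
    """Multiply by locked sin/cos carriers and average (acts as a low-pass filter)."""
    n = len(samples)
    u = sum(2 * s * math.sin(2 * math.pi * FSC * k / FS)
            for k, s in enumerate(samples)) / n
    v = sum(2 * s * math.cos(2 * math.pi * FSC * k / FS)
            for k, s in enumerate(samples)) / n
    return u, v

chroma = [modulate(0.3, -0.2, k) for k in range(8 * 100)]  # 100 subcarrier cycles
u, v = demodulate(chroma)
print(round(u, 3), round(v, 3))  # -> 0.3 -0.2
```

This is why a correctly phased subcarrier, regenerated from the extracted burst, is essential: a phase error mixes the two colour-difference components into each other.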
This place covers:
- Colour video sampling format conversion, e.g. 4:2:2 to 4:2:0
- Gamut mapping and colour space conversions
- Multiprimary colour signal conversion
- Colour sampling in digital video, e.g. 4:4:4, 4:2:0, 4:1:1
- Processing of the modulated or demodulated colour television signal
- Input colour signal detection relating to the type and standard of colour signals
- Synchronous modulation and demodulation of the colour signals
- Image enhancement or disturbance suppression specific to the modulated or demodulated colour television signal
- Colour space transformation of the demodulated colour signal
- Amplitude control and gamma control of the modulated or demodulated colour television signal
- DC control of the modulated colour television signal according to vertical blanking reference
- White balance control of the demodulated colour signal for display
- Mixing of foreground and background colour video signals using chroma keying
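Two of the operations listed above, colour space matrixing and colour sampling format conversion, can be sketched in a few lines. The luma coefficients below follow the common BT.601-style form; the function names and data are illustrative, not taken from any classified document.

```python
# Illustrative sketch of RGB -> YCbCr matrixing (BT.601-style coefficients)
# and 4:4:4 -> 4:2:0 chroma subsampling by 2x2 averaging. Names are ours.

def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple:
    """Matrix an RGB pixel (components in 0..1) into luma Y and chroma Cb, Cr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 * (b - y) / (1 - 0.114)   # scaled so Cb, Cr stay within -0.5..0.5
    cr = 0.5 * (r - y) / (1 - 0.299)
    return y, cb, cr

def subsample_420(chroma: list) -> list:
    """Average each 2x2 block of a chroma plane: one sample per four pixels."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)   # white: full luma, (near-)zero chroma
print(abs(y - 1.0) < 1e-9, abs(cb) < 1e-9, abs(cr) < 1e-9)  # -> True True True
print(subsample_420([[0.5, 0.5], [0.5, 0.5]]))  # -> [[0.5]]
```

Converting 4:2:2 to 4:2:0, as in the first bullet above, is the same idea applied vertically only, since 4:2:2 chroma is already halved horizontally.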
With respect to colour or chrominance aspects, main group H04N 1/00 contains subject-matter relating to the following aspects:
- Aspects of apparatus/methods for controlling or correcting colour video signals originating from a scanned picture signal, e.g. facsimile, document, photo.
Subclass G06T contains subject-matter relating to the following aspects:
- General purpose data processing of an image or enhancement of such image not particularly adapted to a motion video signal.
Subclass H03D contains subject-matter relating to the following aspects:
- Demodulation of amplitude modulated signals.
Demodulation circuits adapted to a particular standard are classified in:
- H04N 11/146 for NTSC,
- H04N 11/165 for PAL, and
- H04N 11/186 for SECAM.
This place does not cover:
Circuits for processing the brightness signal and the chrominance signal relative to each other | |
Camera processing pipelines for processing colour signals |
Attention is drawn to the following places, which may be of interest for search:
This place covers:
Circuits for multiple input selection or for selecting a particular colour signal type.
This place does not cover:
Multi-standard receivers |
This place covers:
Multistandard colour decoding circuits.
This place covers:
- Face detection circuits,
- Hue control.
This place does not cover:
Acquiring or recognising human faces, facial parts, facial sketches, facial expressions |
Attention is drawn to the following places, which may be of interest for search:
Hue control relating to non-moving picture signals |
Attention is drawn to the following places, which may be of interest for search:
Demodulation circuits adapted to the NTSC standard | |
Demodulation circuits adapted to the PAL standard | |
Demodulation circuits adapted to the SECAM standard |
This place covers:
Colour space transformation circuits.
This place does not cover:
Camera processing pipelines for matrixing of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Colour space transformation circuits relating to non-moving picture signals |
This place does not cover:
Circuits for processing colour signals for colour killing combined with colour gain control | |
Colour balance circuits | |
Camera processing pipelines for controlling the colour saturation of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Circuitry for controlling amplitude response, applicable to television systems in general |
This place does not cover:
Camera processing pipelines for reinsertion of DC or slowly varying components of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Circuitry for reinsertion of dc and slowly varying components of signals, applicable to television systems in general |
This place covers:
Colour balance control.
This place does not cover:
Camera processing pipelines for colour balance |
Attention is drawn to the following places, which may be of interest for search:
Colour balance control relating to non-moving picture signals |
This place covers:
- Separation of luminance and chrominance signals from a multiplexed composite colour television signal
- Processing of luminance and chrominance signals in relation to each other (differential gain, differential phase, correlated enhancement or noise suppression of luminance and chrominance, etc.)
This place does not cover:
Circuits for matrixing |
This place covers:
Video data recording:
- Specially adapted recording devices such as a VCR, PVR, high speed camera, camcorder or a specially adapted PC
- Interfaces between recording devices and other devices for input and/or output of video signals such as TVs, video cameras, other recording devices
- Video recorder programming
- Adaptations of the video signal for recording on specific recording media such as HDD, tape, drums, holographic support, semiconductor memories
- Adaptations for reproducing at a rate different from the recording rate such as trick play modes and stroboscopic recording
- Processing of the video signal for noise suppression, scrambling, field or frame skip, bandwidth reduction
- Impairing the picking up, for recording, of a projected video signal
- Regeneration of either a recorded video signal or for recording the video signal
- Video signal recording wherein the recorded video signal may be accompanied by none, one or more video signals (stereoscopic signals or video signals corresponding to different story lines)
- Production of a motion picture film from a television signal
Details specific to this group:
- The recording equipment is for personal use and not for studio use
- The subgroups of H04N 9/79 are for colour video signals
- Recording and processing for recording of video signals covered by the subject-matter in the range H04N 5/76 - H04N 5/907 is classified in said range irrespective of said video signals being in colour or black and white.
- The range H04N 9/79 - H04N 9/898 deals with recording and processing for recording colour video signals while the corresponding range H04N 5/92 - H04N 5/956 deals with recording and processing for recording black and white video signals.
- H04N 9/79 (video recording) distinguishes itself from editing, which is found in G11B 27/00, in that the signals recorded and reproduced are video signals.
- H04N 9/79 is a function place for recording or processing for recording. H04N 21/433 describes applications for recording in a distribution system.
- H04N 9/79 features recording devices specially adapted to video data recording that can be programmed. The programming may be done by a user or by using an algorithm. Business methods where the video recording feature or step is well known are generally classified in G06Q 30/02.
- H04N 9/79 contains video cameras that record video data to a recording medium. Constructional details of video cameras are found in H04N 23/00.
- H04N 9/79 is an application place for video data trick play. Reproducing data in general at a rate different from the recording rate is found in G11B 27/005.
- H04N 9/79 contains applications of video data processing for scrambling/encrypting video data for recording. Systems for rendering a video signal unintelligible are found in H04N 7/16 and H04N 21/00.
- H04N 9/79 is an application place for video data reduction for recording. Video data compression is found in H04N 19/00.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Video surveillance: | |
Selective content distribution | |
Controlling video cameras: | |
Alarm system using video cameras |
Attention is drawn to the following places, which may be of interest for search:
Production of a video signal from a motion picture film | |
Interfaces | |
Television signal processing in connection with recording, in general | |
Video data coding | |
Network video distribution | |
User interface of set top boxes | |
Video camera constructional details | |
Video data processing for printing | |
Systems for buying and selling, i.a. video content | |
Business methods related to the distribution of video data content | |
Video editing | |
Recording techniques specially adapted to a recording medium for recording digital data in general | |
Control of video recorders where the video signal is not substantially involved |
A document that does not explicitly mention that the video signal is a monochrome video signal is to be interpreted as relating to a colour video signal. As a consequence, some classes in H04N 5/76 specific to monochrome signal recording have fallen out of use. Instead, the corresponding colour symbols should be given to such documents.
Allocation of CPC symbols:
- A document containing invention information relating to video data recording will be given an H04N 9/79 CPC group.
- A document containing additional information relating to video data recording (in particular, if the document discloses a detailed video recording device) will be given a H04N 9/79 Indexing Code symbol.
- A document containing invention information for more than one invention may be given more than one H04N 9/79 CPC group.
- A document merely mentioning recording will not be given a CPC group, but it may receive an Indexing Code if the disclosure is considered relevant.
- Allocation of Indexing Code symbols in combination with CPC:
- When assigning H04N 9/79 or a subclass thereof as CPC group, giving an additional Indexing Code is optional.
- Combined use of Indexing Code symbols:
- Indexing Code symbols may be allocated as necessary to describe additional information in the document.
- Symbol allocation rules:
- Documents defining recording devices that have an interface, e.g., connected to a network, should have at least one of the more specific H04N 5/765 Indexing Code symbols.
- Documents dealing with invention information about measures to prevent recording of projected images should be given the H04N 2005/91392 Indexing Code symbol.
In this place, the following terms or expressions are used with the meaning indicated:
Video or video data | An analogue or digital video signal, with or without accompanying audio |
Attention is drawn to the following places, which may be of interest for search:
Recording a plurality of video formats |
This place does not cover:
Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components | |
Transformation of the television signal for recording the individual colour picture signal components being recorded sequentially only | |
Transformation of the television signal for recording the individual colour picture signal components being recorded simultaneously only |
Attention is drawn to the following places, which may be of interest for search:
Television signal processing for bandwidth reduction, by dividing samples or signal segments among a plurality of recording channels, in general |
Attention is drawn to the following places, which may be of interest for search:
Transformation of the television signal for recording by pulse code modulation, in general; Inverse transformation for playback thereof |
This place covers:
Coding/decoding when done using an MPEG standard.
This place covers:
Systems where additional information necessary to retrieve the video data, e.g. chapter marks, navigation packs or time stamps, is recorded with the video information, either on the same recording medium or on an associated recording medium.
This place does not cover:
Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback |
Attention is drawn to the following places, which may be of interest for search:
Regeneration of the television signal or of selected parts thereof, in general |
Attention is drawn to the following places, which may be of interest for search:
Regeneration of the television signal or of selected parts thereof, by assembling picture element blocks in an intermediate store in general |
Attention is drawn to the following places, which may be of interest for search:
Signal drop-out compensation for signals recorded by pulse code modulation, applicable to television systems in general | |
Error detection or correction of digital signals for recording, in general |
Attention is drawn to the following places, which may be of interest for search:
Time-base error compensation by using an analogue memory, e.g. a CCD-shift register, the delay of which is controlled by a voltage controlled oscillator, applicable to television systems in general |
This place does not cover:
Transformation of the television signal for recording, the recorded chrominance signal occupying a frequency band under the frequency band of the recorded brightness signal |
Attention is drawn to the following places, which may be of interest for search:
Time-base error compensation by using an analogue memory, e.g. a CCD-shift register, the delay of which is controlled by a voltage controlled oscillator, applicable to television systems in general |
This place covers:
Hardware-related or software-related aspects specific to transmission of colour television signal, in particular for transmission of analog colour television signal (e.g. NTSC, PAL, SECAM)
H04N 11/00 is distinguished from transmission systems using pulse code modulation with bandwidth reduction, in which the chrominance component, or any type of colour component, is subjected to processing equivalent to that of the luminance component, e.g. the MPEG standards; such systems are found in H04N 7/00 and H04N 21/00.
This place does not cover:
Attention is drawn to the following places, which may be of interest for search:
High-definition television systems |
H04N 11/00 features a number of EC symbols corresponding to the same number of Indexing Codes (e.g. H04N 11/14 as EC symbol and H04N 11/14 as Indexing Code symbol).
Allocation of EC symbols and/or Indexing Code symbols:
A document containing invention information relating to colour television systems will be given a H04N 11/00 EC group
A document containing additional information relating to colour television systems will be given a H04N 11/00 Indexing Code symbol
A document merely mentioning details of colour television systems will not be given an EC group, but it may receive an Indexing Code if the disclosure is considered relevant.
This place does not cover:
Using pulse code modulation | |
High definition television systems |
Attention is drawn to the following places, which may be of interest for search:
Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal, in general | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals |
This place does not cover:
High definition television systems |
Attention is drawn to the following places, which may be of interest for search:
Systems for the transmission of television signals using pulse code modulation, in general | |
Pulse code modulation in general | |
Transmission systems using pulse code modulation, in general |
This place does not cover:
Dot sequential systems |
Attention is drawn to the following places, which may be of interest for search:
Conversion of standards in television systems in general |
Attention is drawn to the following places, which may be of interest for search:
High-definition television systems in general |
This place covers:
Systems that generate stereoscopic or multi-view signals from cameras, or provide stereoscopic or multi-view signals to displays. It also covers electronic signal processing aspects of such systems.
Examples:
- Stereoscopic and multi-view electronic image pick up devices (video cameras, digital still cameras)
- Stereoscopic and multi-view display devices
- Electronic signal processors: for stereoscopic signal processing; monoscopic to stereoscopic conversion; for stereoscopic image generation (including from a computer model); for stereoscopic displays (e.g. for left/right synchronisation, stereoscopic format conversion or depth adaptation); for displays providing different 2D images to different viewers (e.g. for use in vehicles); for devices that generate a two-dimensional "look around" effect, e.g. non-stereoscopic multi-view systems (see however exclusions here below)
- Devices generating a real 3D image, i.e. an image having a volume (volumetric displays)
- Pseudo-stereoscopic systems
Systems in which the viewer's eyes do not see different images, but which may provide a pseudo-stereoscopic effect, are classified in H04N 13/00. The effect must go beyond that provided by the mere display of a 3D object on a 2D screen (like in a CAD system).
Example: wiggle stereoscopy: pseudo-stereo systems providing a three-dimensional effect by means of normal 2D image signals, by periodic oscillating motion of a 3D object.
- Multi-view systems: systems providing different 2D or 3D views of the same scene to one or more viewers according to the viewpoint location (called "look around" effect); systems providing different 2D or 3D views of different scenes to different viewers (called "privacy" systems)
These systems are classified in H04N 13/00 if they provide said views simultaneously or at least at a sufficiently high frame rate so as to be simultaneously viewed by the viewers.
However, multi-view systems wherein said 2D views are provided to a viewer one at a time, e.g. by user selection, are not classified in H04N 13/00, because they are actually normal 2D systems although the viewpoint can be selected at will.
Examples of multi-view devices falling under H04N 13/00:
- "look-around" display systems including displays in which a lenticular lens provides different views of a common scene from different viewing positions
- "privacy" display systems including displays in which a parallax barrier provides different views of different scenes to different viewers in 2D or 3D (for example in a vehicle, wherein on a common screen the driver is watching GPS while the passenger is watching a movie)
- Multi-user displays displaying different pictures for different viewers wearing shutter glasses to select one of said pictures (this is also "privacy"), wherein said pictures are 2D or 3D pictures.
Subgroups under H04N 5/00 and H04N 7/00 relate to the basic monoscopic video aspects from which corresponding stereoscopic aspects are derived.
Classification and search in these sections is therefore to be considered every time no specifically stereoscopic aspects are present.
Analysis of video signals to perform real-time control of stereoscopic video cameras, or to identify the image transmission format to drive a stereoscopic display, is classified in H04N 13/00.
Ordinary 2D displays arranged to display solid objects, e.g. in a CAD system, are sometimes called 3D displays. Such displays allow the viewer to rotate 3D objects to see them from any direction. Such displays are not classified in H04N 13/00. This is because a viewer sees the same picture with both eyes and because, if there is more than one viewer, all viewers see the same picture. The manipulation of 3D models or images for computer graphics is covered by G06T 19/00.
Volumetric displays are classified in H04N 13/388 and holographic displays are classified in G03H 1/26, whereas autostereoscopic displays are classified in H04N 13/302.
Attention is drawn to the following places, which may be of interest for search:
Projection displays | |
Video standard conversion | |
Colour signal processing circuits | |
Video stream synchronisation / multiplexing /packetisation aspects | |
Video signal reformatting | |
Aspects concerning subtitles or other OSD information | |
Generation or processing of metadata | |
Television cameras | |
Arrangements of television cameras | |
Optical systems | |
Stereoscopic photography | |
Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them | |
Image processing or generation in general | |
Calculation or rendering of a monoscopic view of a 3D graphics object | |
Generation of 3D graphical models or scenes for digital data transmission as such |
In this place, the following terms or expressions are used with the meaning indicated:
Stereoscopic | Providing (exactly) two different views, one for the left eye and one for the right eye |
2D | Two dimensional |
3D | Three dimensional, sometimes also used to mean stereoscopic |
Autostereoscopic displays | A display device not requiring glasses to provide a stereoscopic effect to the viewer. An autostereoscopic display uses a parallax-generating optic which projects or displays different images to the viewer, thus creating a sense of depth. The parallax-generating optic may include, for example: parallax barriers; lenticular lenses; an array of controllable light sources, or a moving aperture or light source; fly-eye lenses; dual- and multi-layer devices driven by algorithms to implement compressive light-field displays (such devices are also called content-adaptive parallax barriers); varifocal lenses or mirrors. It is noted that volumetric displays are classified in H04N 13/388 and holographic displays in G03H 1/26, whereas autostereoscopic displays are classified in H04N 13/302. |
Multi-view | Providing more than two different views to one or more viewers according to their viewing position or direction; the views can be 2D or 3D |
Automultiscopic displays | This is a shorter synonym for the expression "multi-view autostereoscopic 3D display" |
Volumetric displays | A device generating a "solid" image, i.e. not an image on the surface of a display, but one having a real depth, for example by projecting 2D image slices at different planes within a viewing volume. Such systems have been considered to fall within the definition of stereoscopic systems because the viewer's eyes perceive two different pictures. |
Lenticular lenses | An array of thin cylindrical lenslets (normally less than 1mm wide) placed vertically in front of, or behind a display or light modulator in order to generate optically directive views in autostereoscopic displays or cameras. |
Parallax barriers | An array of opaque strips and thin slits arranged to occlude portions of a displayed image in left and right viewing regions. The slits are spatially arranged to ensure that the left/right image portions are only visible in the corresponding left/right viewing regions for which they are intended. The parallax barrier may be provided by a static physical layer in which the slits are precisely positioned, or electronically generated on an adaptive intermediate LCD layer. The parallax barrier may also be adjacent to camera circuitry for image collection. |
Fly-eye lenses | An array of very small two-dimensional lenses (typically circular or hemispherical) placed in front of a display, light modulator or image sensor like a normal lenticular lens, providing two-dimensional parallax. |
Pseudo-stereoscopic | Relating to stereoscopic or 3D visual effects obtained without sending different views to the viewer's eyes. The same term is sometimes used to denote the effect whereby the left and right images are seen by the wrong eyes, due to viewing from an unsuitable position in front of an autostereoscopic display. |
Integral imaging | A technique of image capture or display which uses a fly's eye or a lenticular lens in front of the image sensor/display in order to capture/display images with parallax. |
Plenoptic cameras | A camera, normally non-stereoscopic, using a technique allowing focusing after image capture, by means of a lenticular lens array combined with a plurality of (small) image sensors. A plenoptic camera is also known as a light-field camera. |
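The post-capture refocusing principle of plenoptic cameras can be sketched informally as shift-and-add over sub-aperture views: each view is shifted in proportion to its position in the lens array and the results are averaged, so that objects at the chosen depth align while everything else blurs. The following Python sketch is illustrative only (the function name, the dict-of-views input and the integer, wrap-around shifts are simplifying assumptions, not any standard API):

```python
def refocus(views, slope):
    """Shift-and-add refocusing over a grid of sub-aperture images.

    views: dict mapping (u, v) lens-array offsets to equally sized
           2D pixel lists.
    slope: synthetic focal-plane parameter; each view is shifted by
           slope * (u, v) pixels (rounded to integers) before
           averaging. Borders wrap around for brevity.
    """
    first = next(iter(views.values()))
    h, w = len(first), len(first[0])
    out = [[0.0] * w for _ in range(h)]
    for (u, v), img in views.items():
        dy = int(round(slope * v))
        dx = int(round(slope * u))
        for y in range(h):
            for x in range(w):
                # accumulate the view shifted by (dy, dx)
                out[y][x] += img[(y - dy) % h][(x - dx) % w]
    n = len(views)
    return [[p / n for p in row] for row in out]
```

With three views of a point whose apparent position shifts by one pixel per lens offset, choosing the matching slope realigns the point to full brightness, while slope 0 leaves it smeared across the views.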
In patent documents, the following abbreviations are often used:
LCD | Liquid Crystal Display |
SLM | Spatial Light Modulator |
OSD | On-Screen Display |
CAD | Computer Aided Design |
DMD | Digital Micromirror Device |
In patent documents, the following words/expressions are often used as synonyms:
- "3D" and "stereoscopic"
- "automultiscopic" and "multi-view autostereoscopic"
- "lenticular screen", "lenticular lens array" and "lenticular array"
- "plenoptic camera" and "light-field camera"
This place covers:
Device-independent processing of stereoscopic or multi-view image signals
This place does not cover:
Multi-view video sequence encoding |
Attention should be paid to the word "transformation": here a new virtual image is generated starting from one or more already existing stereoscopic images, e.g. by interpolation. In contrast, new computer-generated stereoscopic images not derived from existing images are classified in H04N 13/275.
This place covers:
Modification of image signals to enhance the viewer's perception of the 3D effect. Such modification may include:
- Addition of depth cues such as defocusing, colouring, shadows
- Geometric correction or warping
- Left/right or temporal crosstalk reduction
If the content is not modified, this group is not relevant.
If the 3D impression is improved by horizontally shifting one of the images with respect to the other, or by modifying the depth map, then the document should be classified in H04N 13/128.
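As an informal illustration of left/right crosstalk reduction by signal pre-compensation (a sketch only; real displays require measured, often level-dependent leakage models), assume a simple additive model in which each eye receives its own drive signal plus a fraction k of the other eye's. Inverting that 2x2 mixing exactly gives the pre-compensated drive values, clipped to the displayable range:

```python
def precompensate(left, right, k):
    """Invert the assumed additive crosstalk model
        observed_L = drive_L + k * drive_R
        observed_R = drive_R + k * drive_L
    for pixel lists `left`/`right` with values in [0, 1]; `k` is the
    assumed leakage fraction. Results outside [0, 1] are clipped, in
    which case cancellation is only approximate."""
    d = 1.0 - k * k  # determinant of the 2x2 mixing matrix

    def clip(v):
        return min(1.0, max(0.0, v))

    drive_l = [clip((l - k * r) / d) for l, r in zip(left, right)]
    drive_r = [clip((r - k * l) / d) for l, r in zip(left, right)]
    return drive_l, drive_r
```

When no clipping occurs, feeding the drive values back through the leakage model reproduces the target pixel values exactly.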
This place does not cover:
Adjusting depth or disparity |
This place covers:
Depth adjustment, e.g.:
- Control of disparity between L and R images
- Processing of depth maps
- Non-linear processing of depth in order to adapt it to display features such as screen size
Reduction of depth parameters to reduce eye strain (fatigue) caused by flicker should be classified here and in H04N 13/144, provided that the depth parameters are controlled by the image signal and not by the display parameters.
If depth adjustment is obtained by acting only on device parameters, i.e. there is no stereoscopic image signal processing, the document should not be classified here but only in the relevant device groups, H04N 13/20 and H04N 13/30.
For example, if depth is adjusted by controlling the baseline (the physical distance between two cameras of a stereo camera), the adjustment should be classified in H04N 13/239 in combination with H04N 13/296.
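The kind of non-linear depth processing mentioned above (adapting depth to display features such as screen size) can be sketched as remapping a normalised depth map into a display-dependent disparity budget. The function below is a hypothetical illustration; its name, parameters and the power-curve choice are assumptions, not a standard method:

```python
def remap_depth(depth, d_near, d_far, gamma=0.6):
    """Map normalised depth values in [0, 1] into the display's
    comfortable disparity range [d_near, d_far] (e.g. in pixels).
    gamma < 1 compresses far depths more than near ones, one simple
    way to fit a large scene depth onto a small screen."""
    return [d_near + (d_far - d_near) * (z ** gamma) for z in depth]
```

The endpoints of the depth range always map to the chosen disparity limits; only the distribution in between changes with gamma.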
This place covers:
Conversion of any kind of stereoscopic format into another one, e.g. from side-by-side to top-bottom or "2D+depth", or still to side-by-side but with a different size, resolution or frame rate
The generation of stereoscopic signals from monoscopic source signals is classified in H04N 13/261 or in relevant groups under H04N 13/20. Format conversion should be classified here only if it concerns stereoscopic (or multi-view) signals and if the conversion goes beyond the equivalent processing of monoscopic image signals.
Standards conversion of monoscopic TV signals (e.g. PAL to NTSC), or the adaptation of signals to the display format of a display terminal, should be classified in H04N 7/01.
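A minimal sketch of such a format conversion, repacking a side-by-side frame into a top-and-bottom frame of the same overall size using nearest-neighbour resampling (illustrative only; real converters use proper filtering and also handle chroma, metadata and frame rate):

```python
def sbs_to_tab(frame):
    """Convert a side-by-side packed stereo frame (left view in the
    left half, right view in the right half) into top-and-bottom
    packing of the same overall dimensions.

    Each half-view (h x w/2) is resampled to (h/2 x w) by dropping
    every other row and duplicating columns (nearest neighbour)."""
    h, w = len(frame), len(frame[0])
    half = w // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]

    def resample(view):
        return [[view[2 * y][x // 2] for x in range(w)]
                for y in range(h // 2)]

    return resample(left) + resample(right)
```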
This place covers:
The generation of stereoscopic (or multi-view) images from at least two source images, wherein the contents of both source images remain visible in the resultant mixed image, i.e. the generation of one image including the weighted sum of said two source images.
The reproduction of mixed stereoscopic images or mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay on a monoscopic image background, is classified in H04N 13/361.
Overlays such as subtitles and similar graphic images are to be classified in H04N 13/183.
Attention is drawn to the following places, which may be of interest for search:
Mixing monoscopic television image signals |
This place covers:
- Aspects of manipulating the structure of the stereoscopic video signal, i.e. how the different image signals which constitute a stereoscopic (or multi-view) image signal are encoded or combined in order to form a complete video signal, e.g. for storage or transmission.
- The separation of stereoscopic or multi-view image signals into their respective constituent (e.g. left and right) components.
"Multiplexing" and "demultiplexing" are to be interpreted in the general sense mentioned above, i.e. any manner of forming a stereoscopic image frame, stream or signal from e.g.
- left and right signals
- a 2D image and a depth image by arranging the components in a format having e.g.
- alternate L/R frames or fields
- side by side L/R images
- top/down L/R images
- main layer / enhancement layer
- component images having different resolutions
Aspects relating to the general encoding of stereoscopic or multi-view image signals are classified here. Prediction encoding to compress the image signal (e.g. using temporal or spatial prediction techniques) specially adapted for multi-view video sequences is classified in H04N 19/597.
Further, attention should be paid to the term "image signal components" which is used in a strict sense. Non-image signal components are to be classified in H04N 13/172 and subgroups thereof.
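In the general sense used here, multiplexing and demultiplexing can be as simple as interleaving and de-interleaving whole frames; the sketch below shows the frame-alternate (temporal) case with illustrative helper names (not any standard API):

```python
def mux_frame_sequential(left_frames, right_frames):
    """Interleave two equal-length view sequences into one stream of
    alternating L/R frames."""
    stream = []
    for l, r in zip(left_frames, right_frames):
        stream.extend([l, r])
    return stream


def demux_frame_sequential(stream):
    """Separate an alternating-L/R stream back into its two
    constituent view sequences."""
    return stream[0::2], stream[1::2]
```

The same pattern generalises to the other packings listed above (side-by-side, top/down, layered), with spatial rather than temporal interleaving.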
This place does not cover:
Prediction encoding specially adapted for multi-view video sequences |
This place covers:
Metadata concerning stereoscopic features included in a stereoscopic video stream or image file
This place covers:
Details relating to subtitles or other OSD information, which are included in a stereoscopic video stream separate from the image(s), e.g. information describing how to merge subtitles with the main image, or how to avoid depth conflicts, depth interference etc.
This group is used to classify aspects concerning the recording of stereoscopic or multi-view image signals and the reproduction thereof. Recording of monoscopic video signals and monoscopic aspects of stereoscopic video signals is classified in H04N 5/76.
This group is used to classify aspects relating to the transmission of stereoscopic or multi-view image signals. Such aspects are often quite close to the corresponding monoscopic ones, because once a stereoscopic video stream has been assembled, it is generally recorded or transmitted with monoscopic techniques. Transmission of monoscopic image signals is classified elsewhere in H04N, e.g. H04N 5/38, H04N 21/00 (for selective content distribution systems).
This place covers:
- The generation of electronic image signals representative of stereoscopic or multi-view images
- Computer-generated stereoscopic or multi-view image signals;
- Signal processing and control systems therefor.
Note:
The generated stereoscopic signals may be in any format, e.g. L + R, 2D + depth map, 3D + depth map. Note however that devices which do not capture optical images (e.g. 3D scanners, time-of-flight cameras, rangefinders etc.) are not classified in H04N 13/00: they are classified in the groups indicated below.
Monoscopic plenoptic cameras generating a single viewpoint are classified in the relevant groups H04N 23/00.
Plenoptic cameras / integral imaging cameras, which provide more than one viewpoint, are to be classified in H04N 13/20, in particular in H04N 13/282 if they provide more than two different geometrical viewpoints.
Attention is drawn to the following places, which may be of interest for search:
Projection displays | |
Recording, including multiplexing another television signal | |
Video standards conversion | |
Colour signal processing circuits | |
Video stream synchronization / multiplexing / packetization aspects | |
Video signal reformatting | |
Aspects concerning subtitles or other OSD information | |
Generation or processing of metadata | |
Television cameras | |
Arrangements of television cameras (not for capturing stereoscopic images) | |
Time-of-flight [TOF] cameras | |
Optical systems for producing stereoscopic or other three dimensional effects | |
Stereoscopic photography by sequential recording | |
Stereoscopic photography by simultaneously recording | |
3D scanners | |
Depth or shape recovery | |
Generation of a depth map from stereoscopic image signals | |
Calculation or rendering of a monoscopic view of a 3D graphics object | |
Generation of 3D graphical models or scenes | |
Manipulating 3D models or images for computer graphics |
This place covers:
Alternate acquisition of images from different viewpoints, each image acquired at a different time
This place covers:
Simultaneously capturing images from several geometrical viewpoints, each image having different spectral characteristics
This place covers:
Simultaneously capturing images from several geometrical viewpoints on different parts of the image pickup sensor
Plenoptic cameras, i.e. lens array cameras for providing stereoscopic or 3D images, are classified here even if each lens of the fly-eye lens array is placed over a different chip (the image sensor is considered to be one even if it is composite)
This place does not cover:
using three or more 2D image sensors |
This place covers:
Aspects relating to the control of a stereoscopic camera in order to obtain aligned images, i.e. images that differ only by a horizontal disparity and have no relative rotation or other geometric distortion therebetween.
The so-called stereo (camera) calibration aspects, wherein an already captured image pair is processed to determine and compensate the same above-mentioned distortions, are to be classified in G06T 7/80; such aspects differ from those classified in this group in that they do not "relate to the control of a stereoscopic camera".
This place covers:
Aspects relating to the use of light for obtaining a stereoscopic image, e.g. illumination with structured light in order to capture depth, or illumination from different sides or with different colours to obtain left and right images.
Normal illumination devices (flash or continuous illumination) are classified in H04N 23/00 and if exposure aspects are involved, in H04N 23/70. If structured illumination is used for measuring contours or curvatures, see G01B 11/25. Procedures and apparatus for illuminating a scene in general, see G03B 15/02.
Attention is drawn to the following places, which may be of interest for search:
Laser ranging using the projection of structured light to facilitate image analysis for depth or shape recovery |
This place covers:
Devices obtaining a stereoscopic image from one or more existing monoscopic images
In this group the capturing conditions of the monoscopic images are unknown or irrelevant, whereas in H04N 13/207 and subgroups stereoscopic images are generated from a camera controlled to provide images of different viewpoints, so that no "conversion" is necessary.
This place covers:
Systems using a computer for generating stereoscopic images, e.g. a fully synthetic stereoscopic image from a CAD-type 3D object model
The generation of a new image from a virtual viewpoint from existing stereoscopic images is covered by H04N 13/111 and its subgroups. 3D modelling for computer graphics is classified in G06T 17/00.
This place covers:
- Devices for stereoscopic or multi-view electronic image signal display.
- Devices for electronic image signal display for generating different views of a scene according to the viewpoint location.
- Devices for electronic image signal display for generating different views for different viewers.
- Devices for electronic image signal display for generating a view visible only by a specific viewer.
- Devices for volumetric three dimensional electronic image signal display.
- Devices for pseudo-stereoscopic display systems. For example: wiggle stereoscopy or pseudo-stereo systems providing a three-dimensional effect by means of normal 2D image signals, by periodic oscillating motion of a 3D object.
- Devices which generate different two-dimensional views in the vertical direction by using a horizontally arranged parallax optic and displaying different images in the vertical direction.
- Electronic signal processing and control therefor. For example signal processors and controllers.
- for left/right synchronization, stereoscopic format conversion or depth adaptation
- for backlight control or electrical control of properties of a lenticular lens
- for providing different 2D images to different viewers (e.g. for use in vehicles)
- for devices which generate a two-dimensional "look around" effect, e.g. non-stereoscopic multi-view systems, when the user's position is tracked or when different images are displayed in the vertical direction on a display using a horizontally arranged parallax optic
- for controlling image flipping (or inverse image), caused by the noticeable transition between the viewing zones
- for controlling picket fence effect, a moiré-like artefact caused by the gaps between sub-pixels being magnified by the lenticular sheet, for example by use of a slanted parallax optic. (Blurring the boundaries between the viewing zones can increase the apparent number of views, broadening the observation angle of the pixels)
- for reducing of ghosting or crosstalk
- for controlling resolution loss of images with high perceived depth, for example by controlling the distance between the pixels and the array of lenticular lens elements,
- for controlling the stereoscopic image generation in dependence on the user position and orientation
- for controlling the stereoscopic image generation in dependence on the display position and orientation
- Constructional arrangements and manufacturing methods for stereoscopic display devices for example details related to:
- colour pixel arrangement with respect to the parallax optic layout or shape of pixels
- mechanical control of position of the parallax optic
- user interfaces for controlling or indicating the stereoscopic image display properties, like amount of displayed depth or switching between 2D/3D mode
- arrangements for improving the stereoscopic impression, e.g. by using an additional frame placed in front of the screen
This place does not cover:
Optical systems for producing stereoscopic or other three dimensional effects |
Attention is drawn to the following places, which may be of interest for search:
Holographic volumetric displays |
In patent documents, the following words/expressions are often used as synonyms:
In patent documents, the expression "multi-view display" is often used as a synonym for "privacy" displays, for example multi-user displays displaying different pictures for different viewers wearing shutter glasses to select one of said pictures, wherein said pictures may be 2D or 3D. Such privacy displays are not multi-view displays for the purpose of this classification. However, these privacy display devices (which, for example, use an image separation optic, e.g. a parallax optic, or shutter or polarisation glasses for generating privacy images for a specific viewer) also fall under H04N 13/30.
In patent documents, the following words/expressions are often used with the meaning indicated:
In patent documents, the expression "Three dimensional (3D)" is often used with the meaning "stereoscopic". However, this expression has a broader meaning and encompasses, for instance, 2D images displayed with monoscopic depth cues, computer generated (CG) 3D models or stacks of images arranged in the depth direction (e.g. tomographic images).
This place covers:
Electronic signal processors and controllers specially adapted for driving and controlling of autostereoscopic displays, automultiscopic displays, integral imaging displays or privacy displays using a parallax generating optic which projects or displays different images to the left and right eyes, thus creating a sense of depth. The parallax generating optic may include:
- parallax barriers;
- lenticular lenses;
- an array of controllable light sources or a moving aperture or light source;
- a fly-eye lens;
- dual and multilayer devices that are driven by algorithms such as computed tomography, non-negative matrix factorisation or non-negative tensor factorisation to implement compressive light field displays; such devices are also called Content-Adaptive Parallax Barriers;
- a varifocal lens or mirror.
Constructional arrangements and manufacturing methods for autostereoscopic displays, automultiscopic displays, integral imaging displays or privacy displays, for example, details related to the colour pixel arrangement with respect to the parallax barrier, layout or shape of pixels or mechanical control of position of the lenticular lens.
Illustrative examples
(Autostereoscopic displays showing dependence on the user position - blended zones where the left and right images are seen with both eyes, inverse image zone where the left image is seen by the right eye and the right image is seen by the left eye)
Volumetric displays and holographic displays are not autostereoscopic displays for the purpose of this group. Examples of relevant classification places for volumetric and holographic displays can be found under the informative references below.
Attention is drawn to the following places, which may be of interest for search:
Constructional details related to television receivers | |
Volumetric displays, i.e. systems where the image is built up from picture elements distributed over a volume | |
Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics | |
Structural association of optical devices, e.g. polarisers, reflectors or illuminating devices, with liquid crystal display cells | |
Stereoscopic photography by sequential viewing | |
Stereoscopic photography by simultaneous viewing | |
Stereoscopic photography by simultaneous viewing using aperture or refractive resolving means on screen or between screen and eye | |
Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them | |
Constructional details related to the housing of computer displays | |
Advertising or display means | |
Displaying different signs depending upon the view-point of the observer | |
Involving the use of mirrors | |
Control arrangements or circuits to produce spatial visual effects, for example rotating displays |
This group should be assigned when no explicit reference to the particular type of the autostereoscopic display device is disclosed and when the autostereoscopic display device is not defined in the subgroups.
(Autostereoscopic displays using an array forming a moving aperture (120) and lenticular lens (110) placed behind the display should be classified in H04N 13/305 in combination with H04N 13/32)
(Autostereoscopic displays using an array of controllable light sources and lenticular lenses placed in front of and behind the display should be classified in H04N 13/305 in combination with H04N 13/32)
This place covers:
Electronic signal processors and controllers specially adapted for driving and controlling of autostereoscopic displays using lenticular lenses, as well as constructional arrangements and manufacturing methods for such autostereoscopic display devices.
Autostereoscopic displays using lenticular lenses are not limited to the cases where the lenticular lenses are arranged in front of the display only.
Illustrative examples
(Autostereoscopic displays using lenticular lenses placed in front of the display)
(Electrical control of properties of the lenticular lens
(In the first mode, the light output directing function is provided by an array 35 that is closer to the display panel 33. This mode provides a limited amount of perceived depth, but high resolution, and is therefore suitable for use in a "monitor" application where high resolution is more important. In the second mode, the light output directing function is provided by an array 37 that is further from the display panel 33.)
The present group should be assigned in combination with other groups of H04N 13/00 for example:
a lenticular lens is used in combination with a moving aperture or controllable light sources | |
when the lenticular lens is used in combination with a parallax barrier | |
when the lenticular lens is slanted | |
when the autostereoscopic display is for multi-view display | |
when the user is tracked |
In patent documents, the following words/expressions are often used as synonyms:
In patent documents, the expressions "lenticular" and "lens array" are often used as synonyms, although these terms may also be used for fly-eye lens arrays.
This place covers:
Volumetric or integral imaging displays that use a fly-eye lens array.
Integral imaging systems consist of a two-dimensional (2D) lens array and a display system. An elemental image on a 2D panel gives a different perspective to each elemental lens, as shown in the figure below. The lens array integrates the elemental images to form a 3D image with full parallax (horizontal and vertical) and an almost continuous view.
(A display comprises a pixel array (104) and an optical element array (102) disposed in close proximity to the pixel array. The pixel array is operated to display two or more images. The optical element array is configured and operated to direct each image to an associated viewing position, enabling a viewer to separately view each image from the respective associated viewing position.)
(A naked eye stereoscopic display includes a plurality of projectors (1), a microlens array (2) for collecting light beams of an image projected from the projectors, and a diffuser panel (120) for diffusing the light beams collected by the microlens array. Furthermore, the diffuser panel is arranged such that a virtual light collection point is formed among a plurality of light collection points of light beams by a plurality of microlenses constituting the microlens array.)
This group is the only group where integral imaging displays are classified.
In patent documents, the following words/expressions are often used as synonyms:
- "microlens array", "lens array" and "fly-eye lens array"
This place covers:
Autostereoscopic displays which use parallax barriers. A parallax barrier is a device placed in front of or behind an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses.
(A display panel 10 and a parallax barrier 20)
This group should always be assigned when the parallax barrier is a device placed in front of the image source.
This group should be assigned in combination with other groups of H04N 13/30 for example:
a parallax barrier is used in combination with a moving aperture or controllable light sources | |
when the user is tracked |
This place covers:
Autostereoscopic displays which use a parallax barrier behind an image source. If the parallax barrier is placed behind the LCD pixels, the light from a slit passes the left image pixel in the left direction, and vice versa. This produces the same basic effect as a front parallax barrier. In both cases the image displayed is column interlaced.
(The parallax barrier slit is visible at the same horizontal position within each pixel (R pixel, L pixel) of one view (R, L)).
(Figure 1 shows two images displayed on the display layer 4, with the two images displayed on alternate columns of pixels; one image is displayed on pixel columns Cl, C3, C5 and a second image is displayed on pixel columns C2, C4, C6. The image display device is illuminated by light 7 from a light source.)
This group should always be assigned when the parallax barrier is placed behind the image source.
Optical masks which form part of a controllable light source should not be classified in the group, but in H04N 13/32.
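The column-interlaced composite described above can be sketched as follows (the even/odd assignment is one convention only; the actual mapping depends on the barrier geometry and viewing position):

```python
def column_interlace(left, right):
    """Build a column-interlaced composite from two equal-sized 2D
    pixel arrays: even-indexed columns carry the left view, odd-indexed
    columns the right view. Behind (or in front of) a parallax barrier,
    each eye then sees only the columns intended for it."""
    return [[l if x % 2 == 0 else r
             for x, (l, r) in enumerate(zip(row_l, row_r))]
            for row_l, row_r in zip(left, right)]
```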
This place covers:
Autostereoscopic displays where the parallax optic, for example a lenticular lens or parallax barrier, is slanted with respect to the pixel matrix of the SLM.
Autostereoscopic displays where the pixels or the pixel matrix is slanted with respect to the parallax optics.
In 1996, van Berkel proposed that the lenticular sheet could be placed at a slant over a standard LCD screen. This approach removes the picket fence effect, creates a smooth transition between the views and at the same time balances the horizontal vs. vertical resolution of a view. Another solution with similar effects is called a "wavelength-selective filter array". Essentially, the filter is a slanted parallax barrier which covers the display and defines the particular light penetration direction of each sub-pixel.
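One published form of the slanted-optic sub-pixel-to-view mapping (after van Berkel) assigns each sub-pixel a fractional view number from its column index k (in sub-pixel widths), its row index l (in pixels, hence the factor 3 for RGB stripe panels), the slant angle and the lens pitch. The sketch below uses that formula with illustrative parameter names; concrete values vary per panel design:

```python
def view_number(k, l, slant_tan, pitch, n_views, k_off=0.0):
    """Fractional view number of sub-pixel (column k, row l) under a
    slanted parallax optic:

        N = ((k + k_off - 3 * l * tan(alpha)) mod pitch) * n_views / pitch

    where `pitch` is the lens pitch expressed in sub-pixel widths,
    `slant_tan` is tan(alpha) of the slant angle, and `k_off` is a
    horizontal alignment offset."""
    phase = (k + k_off - 3.0 * l * slant_tan) % pitch
    return phase * n_views / pitch
```

The mapping is periodic in the lens pitch, which is what distributes each view diagonally across the sub-pixel grid and softens view transitions.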
Illustrative examples
(A display panel (10) which includes a plurality of pixels (21) arranged in a plurality of coloured sub-pixels and displays an image frame; a viewing area separating unit (120) arranged as a filter in front of the display panel.)
(The display system 100 comprises a pixel array 102 and lenses 106 disposed over the pixel array 102. In an embodiment, pixel array 102 may include pixels 104 that are slanted relative to the lenses 106.)
This group should be assigned in combination with other groups of H04N 13/30, for example:
Colour aspects of stereoscopic or multi-view image producers, e.g. for control or arrangement of colour sub-pixels |
This place covers:
Autostereoscopic displays using controllable light sources or arrangements, adjustment of which directs the light in different directions, so as to direct a displayed image (or portion thereof) toward a viewer's eye.
Autostereoscopic displays in which the direction of the displayed image is manipulated by movement of apertures, by movement of light sources or by using optical masks that form part of a controllable light source.
Illustrative example
(The backlight module (1) comprises a first light guide plate (21) and a second light guide plate (22) which are stacked, and comprises a first light source (11) disposed opposite to the first light guide plate (21), and a second light source (12) disposed opposite to the second light guide plate (22).)
Illumination arrangements using parallax barriers are classified in this group and not in H04N 13/312.
Attention is drawn to the following places, which may be of interest for search:
Light guides |
Backlight modules commonly comprise lenticular lenses or parallax barriers. In such cases, this group should be assigned in combination with:
using a lenticular lens | |
using a parallax barrier |
In these examples the lenticular lenses or parallax barriers are part of the backlight modules:
(Fig. 2 and 4 show 3D display systems that use a lenticular lens 22 or a parallax barrier 26, along with a shutter plate 30, as a light directing device to allow a viewer's right eye to see a right image and the left eye to see a left image on a display panel. The right and left images are alternately displayed.
Although the parallax barrier 26 is placed behind the spatial light modulator [SLM] 10, H04N 13/312 shall not be allocated, since it is a part of the controllable illumination arrangement.)
Volumetric display systems where the image is built up from picture elements distributed over a volume are classified in H04N 13/388.
This place covers:
- Colour or brightness adjustment of the stereoscopic images, taking into account the specific optical and constructional properties of a particular stereoscopic display type
- Geometric correction of stereoscopic images with respect to errors arising from the relative positions between the different optical elements, such as the pixels and the parallax optic;
- Mechanical or electrical change of properties or position of optical elements, such as the lenticular screen, to compensate for misalignments between the optical elements.
Calibration can be performed automatically or by the user when viewing a predetermined calibration or test image.
Attention is drawn to the following places, which may be of interest for search:
Improving the 3D impression of a stereoscopic image by modifying image signal contents | |
Equalising the characteristics of different image components in stereoscopic images, e.g. colour balance |
This group should always be assigned in combination with a respective display type. For example, calibration of autostereoscopic displays should be classified in both H04N 13/302 and H04N 13/327.
This place covers:
- Stereoscopic displays using an anaglyph display method, e.g. by displaying the image for each eye using filters of different (usually chromatically opposite) colours, typically red and cyan. When viewed through "anaglyph glasses", wherein each lens comprises a corresponding colour filter, an integrated stereoscopic image is perceived by the viewer.
- Stereoscopic displays using a full colour anaglyph display method, in which different images represented by triplets of slightly different primary colours (e.g. R_L G_L B_L and R_R G_R B_R, where the subscripts denote the left-eye and right-eye primaries) are presented to the left and right eyes respectively and viewed through glasses with selective filters. This technique may also be referred to as "wavelength multiplex visualization".
- Stereoscopic displays using Pulfrich display method obtained from a light/dark filter arrangement.
An example of spectral multiplexing comprises simultaneously displaying left and right images separated by using glasses with different spectral characteristics
In the example, the dotted lines represent the wavelengths seen by the left eye and the continuous lines represent those seen by the right eye. The left eye sees RGB image components of slightly different wavelengths than those seen by the right eye. When provided with the correct set of filters, e.g. Fabry-Perot filters, which let through light within limited, chosen ranges of wavelengths, the viewer will perceive a full colour stereoscopic image.
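The basic (non-colour-corrected) red-cyan anaglyph composition can be sketched per pixel as below; production encoders instead apply a full 3x3 mixing matrix per eye to reduce retinal rivalry, so this is illustrative only:

```python
def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine two lists of (R, G, B) pixels into one anaglyph image:
    the red channel is taken from the left view, and the green and blue
    channels (i.e. cyan) from the right view. Viewed through red/cyan
    glasses, each eye then recovers (a tinted version of) its view."""
    return [(lr, rg, rb)
            for (lr, _lg, _lb), (_rr, rg, rb) in zip(left_rgb, right_rgb)]
```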
Attention is drawn to the following places, which may be of interest for search:
Stereoscopic photography by simultaneous viewing using polarised or coloured light for separating different viewpoint images |
This place covers:
- Display systems which display stereoscopic images simultaneously or sequentially, each image presented by light of a different polarisation. Such systems conventionally require passive glasses having different polarising characteristics for each eye.
- Display systems which display different images simultaneously or sequentially for different viewers wearing glasses having differing polarising characteristics. Viewers wearing differently polarised glasses see different displayed images (e.g. "privacy" displays).
An example of using polarisation multiplexing comprises simultaneously displaying left and right images which are separated by using glasses with different polarising characteristics
(Odd pixel lines (running horizontally) are rotated clockwise, and even pixels line counter-clockwise, using circular polarisation. Viewing glasses make it possible for the right eye to see only the odd lines, and the left only the even lines, again using polarising films, producing the 3D image.)
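The line-by-line composition described in the example can be sketched as row interleaving (the even/odd assignment below is one convention; it must match the retarder pattern of the actual panel):

```python
def row_interlace(left, right):
    """Line-interleaved composite for a pattern-retarder polarised
    display: even-indexed rows carry the left view, odd-indexed rows
    the right view. Each eye's polarised glasses then pass only the
    rows with the matching polarisation."""
    return [row_l if y % 2 == 0 else row_r
            for y, (row_l, row_r) in enumerate(zip(left, right))]
```

Note that each eye thereby receives only half the vertical resolution of the panel, a well-known trade-off of line-interleaved polarised displays.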
Attention is drawn to the following places, which may be of interest for search:
Stereoscopic photography by simultaneous viewing using polarised or coloured light for separating different viewpoint images |
This group should not be assigned for shutter-type displays which use polarisers in the glasses as part of the shutter system for completely blocking the light; such displays are classified in H04N 13/385.
However, this group should be assigned in combination with H04N 13/341 when both eyes see an image and polarisation-alternating glasses are used.
This place covers:
Formation of a stereoscopic image by simultaneously displaying left and right images on different parts of a display and using glasses to optically recombine the stereoscopic image, e.g. with prisms or mirrors
This place does not cover:
Stereoscopic displays using polarisation multiplexing, for simultaneously displaying left and right images |
This place covers:
Formation of a stereoscopic image by alternately displaying left and right images separated in time and by using glasses, e.g. with shutters, alternately to block the right and left eye.
Shutter type display systems using a frame sequential method of displaying 3D images. Full high-definition (HD) images are alternated between left and right eyes each frame, using glasses with synchronised liquid crystal shutters alternately to block left and right eye vision.
Frame sequential methods of displaying 3D images when the optical properties, such as colour filtering or polarisation characteristics, of each lens of the shutter glasses are alternated with each frame, i.e. active glasses.
Shutter type display systems using frame sequential method of displaying different pictures for different viewers wearing shutter glasses to select one of said pictures ("privacy"), wherein said pictures are 2D or 3D pictures.
(Timing diagrams for shutter type stereoscopic displays showing synchronisation between the LCD panel, the LED backlight and the Glasses of Shutter type stereoscopic displays)
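The frame-sequential alternation described above can be modelled as a simple schedule (an illustrative sketch only; real systems additionally synchronise the backlight to reduce crosstalk, as in the timing diagrams mentioned):

```python
def shutter_schedule(n_frames):
    """Frame-sequential shutter schedule: the panel alternates left/right
    images each frame, and the synchronised shutter glasses open only the
    matching eye. Simplified model for illustration."""
    schedule = []
    for f in range(n_frames):
        image = "L" if f % 2 == 0 else "R"  # panel alternates L, R, L, R, ...
        schedule.append({
            "frame": f,
            "panel": image,
            "open_eye": "left" if image == "L" else "right",
            "closed_eye": "right" if image == "L" else "left",
        })
    return schedule
```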
Attention is drawn to the following places, which may be of interest for search:
Frame sequential stereoscopic displays using passive glasses |
This place covers:
- head-mounted displays for stereoscopic viewing
- head-mounted displays specially adapted for augmented reality systems
- head-mounted displays comprising viewer tracking for generating look around images
(The head-mounted display 100 illustrated has display panels 104L and 104R for the left eye and the right eye at a side surface facing a face of a user.)
Attention is drawn to the following places, which may be of interest for search:
Optical head-up displays | |
Manipulating 3D images for computer graphics, e.g. for virtual reality (VR) or augmented reality (AR) display |
This group may be assigned in combination with several further groups if the head-mounted display is used, for example, in augmented or mixed reality systems or if the user position is tracked. For example:
Stereo video generation from a 3D object model, e.g. computer-generated stereoscopic image signals | |
Mixing stereoscopic image signals |
This place covers:
- 3D display arrangements which use a semi-transparent mirror or prism for optically mixing or separating left and right images.
This group should normally be assigned in combination with a respective display type, for example:
Volumetric display with depth sampling | |
stereoscopic displaying with polarisation multiplexing, for simultaneously displaying left and right images |
This place covers:
Multi-view displays which simultaneously or sequentially display multiple (three or more) viewpoints (perspectives or views) of the same scene in different directions (zones, lobes, cones) with respect to the optical axis of the display in order to generate a look-around effect (motion parallax) when the user moves around the display. The viewpoints (views) are displayed irrespective of whether the viewer is tracked or not.
Multi-view displays which simultaneously or sequentially display multiple (three or more) viewpoints of different scenes in different directions (zones, lobes, cones), for example for privacy displays. The viewpoints (views) are displayed irrespective of whether the viewer is tracked or not.
Multi-view displays that display three or more viewpoints (perspectives or views of one or more scene) in different directions (zones, lobes, cones) with respect to the optical axis of the display.
The definition "without viewer tracking" does not mean that such display systems cannot include viewer tracking. Some displays can include viewer tracking, e.g. for preventing image flipping, but not for the creation of the multi-view effect as such.
This place does not cover:
Autostereoscopic displays using fly-eye lenses |
Attention is drawn to the following places, which may be of interest for search:
Volumetric displays |
The generation of multiple viewpoints (look around or motion parallax effect) of a scene according to the viewer position is classified in H04N 13/117 for the image signal processing aspects or in H04N 13/279 for the image signal generation aspects. The displaying of such viewpoints on a display to simulate a look around effect does not mean that the display is multi-view. Therefore, such stereoscopic systems should not be classified in H04N 13/349 if they do not comprise a multi-view display as defined above.
This group should normally be assigned in combination with a respective display type, which is normally of autostereoscopic type.
The term "multi-view" is also used for privacy display devices which display different video content to different viewers. Such displays, however, are not necessarily multi-view displays if they cannot generate multiple (three or more) viewpoints and cones, irrespective of whether the viewer is tracked or not. For example, one type of privacy display (similar to an autostereoscopic display) displays only two different views in two different directions. Such a privacy display does not fall within the above definition of multi-view displays and should not be classified in H04N 13/349.
This place covers:
Sequential display of different images for different viewpoints at different time intervals. By controlling the display with a sufficiently high frame rate, viewers at different viewpoints will see different content, depending upon their position.
(A first lens structure LS1 emits two viewpoint images displayed on the display panel 200 to viewpoint positions VW1, VW2, during the first interval of the frame. Then, the second lens structure LS2 emits two viewpoint images displayed on the display panel 200 to viewpoint positions VW3, VW4, during the second interval of the frame.)
This place covers:
Stereoscopic displays that are selectively switchable between a monoscopic (2D) mode and a stereoscopic (3D) mode.
The change in mode may be effected by electrically or mechanically modifying the properties of the display device or by change of the image content - for example:
- by switching off the parallax optic
- by removing the parallax optic
- by controlling the shutter glasses
- by displaying the same image in a stereoscopic display mode
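The mode-switching behaviour listed above can be sketched as a minimal controller for a display with a switchable parallax optic (class and attribute names are invented for illustration):

```python
class SwitchableDisplay:
    """Minimal model of a 2D/3D-switchable display: in 3D mode the parallax
    optic (e.g. a switchable barrier) is energised; switching to 2D mode
    de-energises it so that the full display resolution reaches both eyes.
    Illustrative sketch only."""

    def __init__(self):
        self.mode = "2D"
        self.barrier_on = False

    def set_mode(self, mode):
        if mode not in ("2D", "3D"):
            raise ValueError("unknown mode: %s" % mode)
        self.mode = mode
        # Switching off the parallax optic is one of the mechanisms listed
        # above; mechanical removal or content change are alternatives.
        self.barrier_on = (mode == "3D")
```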
This group should always be assigned in combination with groups representing the respective display type.
This place covers:
Details relating to the switching of the display between monoscopic and stereoscopic modes, e.g.:
- Synchronisation between the displayed image and the time of switching off the parallax barrier or the shutter glasses
- Control of backlight level or brightness in 2D mode and in 3D mode
- The display of warning messages before switching to 2D mode
- Switching to 2D mode upon detection of a specific event, like detection of user fatigue, or that a user's position is not suitable for stereoscopic viewing
This place covers:
Displays capable of simultaneously displaying both monoscopic and stereoscopic video content.
Image display control, e.g. determining the position at which a parallax barrier should be activated to display a stereoscopic image upon a monoscopic image background.
(Among the plurality of pixel regions included in the display panel 10, in the pixel region in which the user views an image through a region 20a in the selectively light-blocking panel 20, a 3D image (L) for the left eye and a 3D image (R) for the right eye are displayed, whilst 2D images are displayed in the pixel regions other than the region.)
Generating mixed monoscopic and stereoscopic images when the mixing is performed irrespective of the display type is not covered by this group.
Examples of classification places which may be relevant for search, e.g. mixing a stereoscopic GUI or subtitles with a stereoscopic or monoscopic image, can be found in the informative references below.
Attention is drawn to the following places, which may be of interest for search:
Mixing stereoscopic image signals | |
Subtitles or other on-screen display [OSD] information, e.g. menus |
This group should always be assigned in combination with a respective display type, for example:
for autostereoscopic displays |
This place covers:
Stereoscopic display systems using projection devices
This place does not cover:
Volumetric displays |
Attention is drawn to the following places, which may be of interest for search:
Projection displays | |
Stereoscopic photography by simultaneous viewing using two or more projectors | |
Stereoscopic photography by simultaneous viewing using single projector with stereoscopic-base-defining system |
This group should always be assigned in combination with a respective display type, for example:
projection devices for autostereoscopic displays |
This place covers:
- Stereoscopic or multi-view display systems using micromechanical devices, e.g. MEMS mirror devices or DMD based spatial light modulators (SLMs)
This place covers:
Different aspects of viewer tracking for control of stereoscopic systems, for example:
- adjusting the viewing zones of an autostereoscopic display
- adjusting the depth according to a user's position or orientation
- generating different perspectives
- controlling the image capturing process, e.g. adjusting the camera separation between real (or virtual) cameras used for image generation
- determining user fatigue
- performing geometrical corrections, e.g. vertical parallax
- switching the display between 2D and 3D mode
- rotating the display toward the viewer
- generating motion parallax
- adjusting depth parameters or for crosstalk cancellation
This group also covers display systems which detect the presence of a viewer in front of the display, e.g. by detecting that the shutter glasses are switched on.
Attention is drawn to the following places, which may be of interest for search:
Input arrangements or combined input and output arrangements for interaction between user and computer, for example viewer tracking for gesture recognition |
This group should be assigned in combination with a group for the respective display type, when specific properties of the display device are controlled, e.g. when the parallax optic is moved as a function of the user position:
(Control of the parallax barrier as a function of the user position)
This group should be assigned in combination with groups under H04N 13/20 for respective stereoscopic picture signal generators when specific properties of the picture signal generator (e.g. camera base line distance, convergence point, zoom or orientation) are controlled as a function of the viewer position with respect to the display screen.
This group should be assigned in combination with groups under H04N 13/10 for respective stereoscopic image processing, when specific image signal properties are controlled as a function of the viewer position.
This place covers:
Generation of respective viewing zones for multi-viewer autostereoscopic displays.
Providing different perspectives to different viewers depending on their positions.
Providing different images to different viewers upon detection of multiple users (e.g. for privacy purposes).
Adjusting viewing zones of multi-view image displays when viewed by several viewers.
(A multi-user autostereoscopic display)
(Stereoscopic display for providing different perspective to different user depending on their position)
(A multi-view image display when viewed by several viewers, with backlight 10, light emitting area control unit 300 with barrier part BP and transparent slit part TP, directional control unit 400 with a lenticular lens, display unit 100 with a plurality of color pixels, liquid crystal barrier panel as viewpoint generating unit 200 and image plane IP which includes an illumination area LA and a non-illumination area NLA.)
Attention is drawn to the following places, which may be of interest for search:
Input arrangements or combined input and output arrangements for interaction between user and computer, e.g. eye tracking input arrangements |
This place covers:
- Temporally multiplexed displays
Polarisation multiplexing displays, using time alternating display of left and right images and passive polarising glasses, are classified in H04N 13/337.
Autostereoscopic displays, using an array of controllable light sources or a moving aperture or light source when the left and right images are alternately displayed in time, are classified in H04N 13/32.
This place does not cover:
Autostereoscopic displays using time-variant parallax barriers | |
Stereoscopic displays for viewing with the aid of special glasses or head-mounted displays [HMD], using temporal multiplexing |
This group should always be assigned in combination with a respective display type.
This place covers:
Display devices forming a visual representation of an object in 3D
The volumetric display creates 3D images by the selective emission, scattering, or relaying of illumination from defined points within the 3D viewing volume.
Neither holographic nor multi-view displays should be classified in this group.
Most volumetric 3D displays create 3D imagery visible to the unaided eye. However, other displays not relying upon additional viewing aids (e.g. glasses) should be classified in their relevant groups. For example, autostereoscopic displays are classified in H04N 13/302, whilst multi-view displays are classified in H04N 13/349.
Attention is drawn to the following places, which may be of interest for search:
Autostereoscopic displays | |
Multi-view displays | |
Holographic processes or apparatus using light for obtaining images from holograms |
This place covers:
Volumetric displays wherein image content is displayed upon, and synchronised with, the position of a moving surface such that the viewer perceives a 3D volume. Examples include swept-volume displays in which a 3D object is decomposed into 2D slices which are sequentially displayed or projected upon a rotating planar surface. If the rate of sequential display and corresponding surface rotation are sufficiently high, the human eye perceives a displayed 3D volume, due to persistence of vision.
(The volumetric 3D display includes a transparent enclosure 252, a projection screen 254, rasterization electronics 256, a projection engine 258, and relay optics 260.)
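The synchronisation between slice display and screen rotation described above can be sketched as a simple scheduling calculation (function and parameter names are invented for illustration):

```python
def slice_for_time(num_slices, rotation_hz, t):
    """Swept-volume scheduling sketch: select which 2D slice to project at
    time t (seconds) so that it coincides with the current angle of the
    rotating projection surface. Illustrative model only."""
    angle = (t * rotation_hz * 360.0) % 360.0   # screen angle in degrees
    return int(angle / 360.0 * num_slices) % num_slices
```

At a sufficiently high rotation rate, stepping through the slices in this angle-locked fashion is what lets persistence of vision fuse them into a perceived 3D volume.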
This place covers:
Volumetric displays in which the 3D image volume is decomposed into a series ('stack') of constituent 2D image planes, each of which is displayed individually, for example by separate display units or by changing the depth of focus of the image projection optics. When the 2D image planes are viewed together, or in rapid succession, the viewer perceives a 3D volume.
(The 3D display apparatus comprises an array of multiple layers of display units comprising at least two layers of display units 10)
This place covers:
Synchronisation between left and right images output to a display.
Synchronisation between a temporally varying parallax optic and the corresponding image signal provided to the display.
Synchronisation between shutter glasses and the image display period of a shutter display.
Controlling the position of a parallax optic in order to change the depth resolution.
Controlling a display to switch between different modes of operation.
Controlling shutter glasses to switch off when not in use.
Controlling the display timing, backlight or shutter glasses in order to reduce crosstalk.
Controlling the synchronisation protocols between shutter glasses and shutter type display.
Controlling the number of generated views depending upon user selection or upon the number of detected viewers.
User interfaces for controlling stereoscopic display properties.
This group should normally be assigned in combination with a respective display type, for example:
Synchronisation or control aspects for autostereoscopic displays | |
Control arrangements or circuits to produce spatial visual effects, for example rotating displays |
When classifying in this group, classification in G09G 3/00 should also be considered, particularly if aspects of synchronisation or control are present, which relate to the type of display panel (e.g. whether it is an LCD, an OLED, etc.).
This place covers:
Hardware-related or software-related aspects specific to the measuring or testing of values involved in the television signal processing at the transmitter side and/or the receiver side, for analog or digital television signals.
H04N 17/00 covers test techniques for all the devices belonging to the television chain: television cameras, transmission paths, television receivers or recorders, and distribution systems, which are found in H04N 5/00, H04N 7/00, H04N 9/00, H04N 11/00 and H04N 21/00.
Attention is drawn to the following places, which may be of interest for search:
Arrangements for locating faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere | |
Testing correct operation of photographic apparatus or parts thereof | |
Electronic inspection or testing of displays and display drivers, e.g. of LED or LCD displays | |
Monitoring of transmission systems; Testing of transmission systems |
H04N 17/00 features a limited number of CPC symbols and has an associated Indexing Code scheme with additional subdivisions: H04N 17/00
Allocation of CPC symbols and/or Indexing Code symbols:
- a document containing invention information relating to testing of television systems or details will be given an H04N 17/00 CPC symbol as invention information
- a document containing additional information relating to testing of television systems or details will be given an H04N 17/00 Indexing Code symbol as additional information
- a document merely mentioning details of colour television systems will not be given a CPC symbol as invention information, but it may receive an Indexing Code if the disclosure is considered relevant.
Monitoring aspects are also covered in the appropriate main groups, e.g. H04N 5/00, H04N 7/00, H04N 21/00.
This place covers:
- Methods or arrangements for coding or compressing an input digital video sequence for the purpose of onward transmission (e.g. by broadcasting), or of storage (e.g. at servers, set-top boxes or hard-disks) for subsequent reproduction in viewers' premises.
- Processing in accordance with standards such as MPEG-x or H.26x.
- Methods or arrangements for transform coding of static images.
- The scope of H04N 19/00 and its subgroups is limited to the part of digital video coding and compression strictly comprised between the digital video input and the compressed video output.
- Processing of the compressed video (e.g. fragmentation in packet units, encapsulation, medium adaptation for transport, video distribution) is covered by H04N 21/00 or H04H.
- Processing of not yet compressed video signals or after decoding, such as re-sampling, interpolation, cropping, rotation, is generally covered by G06T, unless it interacts with aspects of processing for compression, in which case it is covered by relevant subgroups of H04N 19/00.
- Computer graphics compression is covered by G06T 9/00.
- General compression algorithms are covered by H03M 7/30.
- Processing of documents or images for scanning, transmission or reproduction (e.g. telefax) is covered by H04N 1/00.
- Details of digital television cameras, digital television receivers and digital video recorders are covered by H04N 5/00.
Attention is drawn to the following places, which may be of interest for search:
Processing of documents or images for scanning, transmission or reproduction (e.g. telefax) | |
Bandwidth or redundancy reduction for scanning, transmission or reproduction of documents or the like, e.g. compression of two-tone or discrete tone static images | |
Colour conversion | |
Studio equipment, e.g. video cameras or devices for controlling television cameras | |
Television receivers | |
Video recording and play (e.g. trick play) | |
Closed circuit TV systems, details of video-surveillance cameras and circuits | |
Stereoscopic or multiview television systems | |
Diagnosis, testing or measuring for television systems | |
Selective content distribution | |
Information retrieval and database structures therefor, e.g. in image databases | |
Pattern recognition | |
General purpose image data processing, e.g. hardware for image processing | |
Geometric image transformation in the plane of the image | |
Image restoration | |
Image analysis, e.g. analysis of motion | |
Image coding | |
2D image generation | |
2D image animation (e.g. sprites in general) | |
3D image rendering | |
3D image modelling | |
Arrangements for image or video recognition or understanding | |
Scenes; Scene-specific elements | |
Character recognition, recognising digital ink or document-oriented image-based pattern recognition | |
Recognition of biometric, human-related or animal-related patterns in image or video data | |
Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis | |
Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel, e.g. signal processing for video editing and recording on a special recording medium | |
General data coding | |
Details of multimedia broadcast systems |
In patent documents, the following abbreviations are often used:
JPEG | Joint Photographic Experts Group |
AVC | Advanced Video Coding |
SVC | Scalable Video Coding |
HEVC | High Efficiency Video Coding |
This place covers:
Static or dynamic adaptation in the interaction of the different building blocks or processes of the digital video compressor or decompressor, e.g. regulation of the parameters involved in the compression algorithm as a function of the channel capacity or of the desired quality of the reconstructed video signal.
Attention is drawn to the following places, which may be of interest for search:
Controlling the complexity of the video stream at the transmitter side, e.g. by scaling the resolution or bitrate of the video stream | |
Content or additional data management, e.g. controlling the complexity of the video stream at the receiver side |
When classifying in this group, each aspect relating to adaptive coding should, insofar as possible, be classified in each of the subgroups H04N 19/102, H04N 19/134, H04N 19/169 and H04N 19/189.
This place covers:
The definition of the element, parameter or selection, which is affected by the adaptive coding, wherein element is to be understood as a functional block or process in the digital video compressor or decompressor.
This place covers:
The selection of the reference unit (as contained e.g. in the memories in the figure below) for prediction within a chosen coding or prediction mode, e.g.:
- weighted prediction
- adaptive choice of position and number of pixels used for prediction
- choice between different motion estimators or compensators (e.g. between diamond search and full search, or between global and local motion compensation), skip mode, merge mode
- adaptive choice of the reference frame or block in predictive encoding, e.g. spatial, temporal, interlayer or interview compensation.
- adaptive reference picture list management
Attention is drawn to the following places, which may be of interest for search:
Non-adaptive reference picture list management | |
Multiple frame prediction | |
Bidirectional image interpolation, B-frames | |
Long-term prediction |
This place covers:
The selection between spatial and temporal predictive coding, e.g. picture refresh by insertion of an intra-coded frame, as e.g. periodically or at scene change, or decision among intra-mode and inter-mode as in the figure.
In this place, the following terms or expressions are used with the meaning indicated:
Intra-frame, I-frame | Frame coded with spatial prediction |
Inter-frame, P-frame | Frame coded with temporal prediction in one temporal direction |
Bidirectional-frame, B-frame | Frame coded with temporal prediction in both temporal directions |
Anchor frame | A frame usable for prediction of other frames, i.e. an intra-frame or an inter-frame |
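A minimal frame-level decision consistent with the terms above can be sketched as follows (illustrative only; real encoders typically use rate-distortion criteria rather than this simple rule, and the parameter names are invented):

```python
def choose_frame_type(frame_idx, intra_period, scene_change):
    """Selection between spatial and temporal predictive coding at frame
    level: periodic intra refresh plus forced intra at scene changes,
    otherwise inter (temporal) coding. Simplified sketch."""
    if scene_change or frame_idx % intra_period == 0:
        return "I"   # spatial prediction (picture refresh)
    return "P"       # temporal prediction
```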
This place covers:
The selection among a plurality of temporal predictive coding modes, e.g. a plurality of inter-prediction modes as in the standard H.263 or H.264.
This place covers:
The selection among a plurality of spatial predictive coding modes, e.g. a plurality of intra-prediction modes as the directional block intra-prediction modes in the standard H.264 shown below.
This place covers:
The selection of a given display mode, e.g. interlaced or progressive as in the figure (as in MBAFF of H.264), and of the associated coding or prediction mode.
Attention is drawn to the following places, which may be of interest for search:
Conversion of standards in television systems, e.g. at the pixel level of a picture from interlaced to progressive display mode and vice versa |
In this place, the following terms or expressions are used with the meaning indicated:
MBAFF | Macroblock-adaptive frame-field coding |
This place covers:
- The adaptation of the length or the composition of a GOP, e.g. by changing the number of B-frames between anchor frames or by changing the number of P-frames between I-frames.
- The selection of the structure of a group-of-pictures [GOP], e.g. of the number of P-frames, B-frames between two anchor frames, e.g. as in the figure below.
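The GOP structure selection described above can be sketched as pattern generation (a simplified model; the parameters follow the common MPEG convention of GOP length n and anchor spacing m):

```python
def gop_pattern(n, m):
    """Build an H.26x/MPEG-style GOP of length n with anchor spacing m:
    one I-frame, then P anchors every m frames with m-1 B-frames between
    anchors (e.g. n=12, m=3 gives IBBPBBPBBPBB). Illustrative sketch."""
    frames = []
    for i in range(n):
        if i == 0:
            frames.append("I")       # GOP starts with an intra frame
        elif i % m == 0:
            frames.append("P")       # anchor frame
        else:
            frames.append("B")       # bidirectionally predicted frame
    return "".join(frames)
```

Adapting the GOP, as covered here, would then amount to changing n or m, e.g. shortening the GOP at scene changes.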
This place does not cover:
The selection between spatial and temporal predictive coding |
Attention is drawn to the following places, which may be of interest for search:
Bidirectional image interpolation, B-frames |
In this place, the following terms or expressions are used with the meaning indicated:
Group-of-pictures | A group of successive pictures forming a logical unit within a coded video sequence in H.26x and MPEG standards. |
Open GOP | A GOP which uses referenced pictures from the previous GOP at the current GOP boundary. |
Closed GOP | A GOP that uses no referenced pictures from the previous GOP at the current GOP boundary (e.g. the classic GOP starting with an I frame). |
In patent documents, the following abbreviations are often used:
GOF | Group of frames. |
GOP | Group of pictures. |
This place covers:
The selection of the target rate or code volume assigned to a coding unit before coding the unit itself, e.g. to a picture or a group-of-pictures, as done within the rate controller in the figure below, or selection of frame rate.
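Pre-coding bit allocation of this kind can be sketched as a proportional split over frame types (the complexity weights are invented illustrative values, not taken from any standard):

```python
def allocate_bits(gop_budget, frame_types, weights=None):
    """Assign a target code volume to each coding unit (here: frame) before
    coding, proportionally to a per-type complexity weight, as a rate
    controller might do. Illustrative sketch."""
    weights = weights or {"I": 4.0, "P": 2.0, "B": 1.0}  # assumed values
    total = sum(weights[t] for t in frame_types)
    return [gop_budget * weights[t] / total for t in frame_types]
```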
Attention is drawn to the following places, which may be of interest for search:
Data rate or code amount at the encoder output |
This place covers:
Subject matter wherein the filtering is required to be part of an adaptive coding process, e.g. quantization controlling the filtering process, adaptive switching function after filtering process, optional filtering characteristics, adaptive selection of a filter type or of filter parameters, like strength and taps, as within the filter indicated in the figure below in function of a threshold determination.
This place does not cover:
Sub-band based transform characterised by filter definition or implementation details |
Attention is drawn to the following places, which may be of interest for search:
Details of filtering operations specially adapted for video compression and not necessarily of adaptive nature | |
Pre-processing or post-processing specially adapted for video compression | |
Image enhancement or restoration by use of local operators | |
Impedance networks; Resonators |
This place covers:
- Adaptive segmentation aspects during video compression, e.g. ROI segmentation.
- The selection of the subdivision of a picture into coding blocks, i.e. the determination of the grid of blocks covering a picture.
- The selection may involve the shape, e.g. rectangular or non-rectangular, or the size of the blocks, e.g. in the standard H.264 with selection among 4 x 4, 4 x 8, 8 x 4, 8 x 8 pixel block sizes as shown in the figures below.
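Adaptive selection of the block subdivision can be sketched as a quadtree-style split (variance_fn, the threshold and the minimum size are assumptions for illustration):

```python
def partition(x, y, size, variance_fn, min_size=4, threshold=100.0):
    """Recursively subdivide a square block while its content is detailed
    (variance above a threshold), yielding smaller blocks in busy areas and
    larger blocks in flat areas. Illustrative sketch of quadtree-style
    partitioning; variance_fn(x, y, size) is an assumed callback."""
    if size <= min_size or variance_fn(x, y, size) <= threshold:
        return [(x, y, size)]                      # keep block as a leaf
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += partition(x + dx, y + dy, half, variance_fn,
                                min_size, threshold)
    return blocks
```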
In this place, the following terms or expressions are used with the meaning indicated:
Macroblock | A MPEG coding unit including 16 x 16 pixels subdivided into four 8 x 8 blocks. |
In patent documents, the words "block", "sub-block" and "tile" are often used as synonyms.
In patent documents, the word "tile" is often used in the context of the standard JPEG 2000 and of transform coding of static images.
This place covers:
Selection from a plurality of alternative compression algorithms within a video compressor, e.g.
- Selection among discrete cosine transforms [DCT] and subband transforms.
- Selection from a plurality of video compression standards, e.g. selection among H.263 and H.264, selection among MPEG-2 and MPEG-4.
- Selection between lossy and lossless compression.
- Transform skip mode (cf. HEVC).
Attention is drawn to the following places, which may be of interest for search:
Video compression based on transform coding | |
Special coding techniques and algorithms |
This place covers:
The selection of transform size within the same predetermined transform algorithm, e.g. 4x4 or 8x8 DCT as in the figure below, or 8x8 or 2x4x8 DCT for frame-based and for field-based block compression, respectively, or sub-band transforms of varying hierarchical structure or type.
This place covers:
Subject matter wherein specific details of a controlled quantiser are provided, e.g. frame type or input video characteristics controlling the quantiser, adaptive quantisation based on output or transmission buffer fullness, or the choice between fine and coarse quantisation.
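Buffer-driven quantiser control can be sketched as a simple feedback rule (the thresholds are invented, and the QP clipping range merely follows H.264 as an example):

```python
def adapt_qp(qp, buffer_fullness, low=0.3, high=0.7, qp_min=1, qp_max=51):
    """Adapt the quantisation parameter from output-buffer fullness
    (0.0 = empty, 1.0 = full): quantise more coarsely when the buffer risks
    overflowing, more finely when it drains. Illustrative sketch."""
    if buffer_fullness > high:
        qp += 1   # buffer filling up: coarser quantisation, fewer bits
    elif buffer_fullness < low:
        qp -= 1   # buffer draining: finer quantisation, better quality
    return max(qp_min, min(qp_max, qp))
```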
This place covers:
Special algorithms used for quantisation in video compression, e.g. the choice of normalisation parameters or matrices, details of variable uniform quantisers or the calculation of quantisation weighting matrices.
This place covers:
The control of resource allocation or assignment (e.g. CPU time, memory, allocation of digital processing units, workload distribution among processors), e.g. skipping of encoding or decoding steps or switching off computing or hardware units, such as motion estimation/compensation or transform units.
Attention is drawn to the following places, which may be of interest for search:
Filtering control | |
Sampling, masking or truncation of coding units | |
Availability of hardware or computational resources, e.g. adapting coding based on assigned resources | |
Implementation details or hardware specially adapted for video compression or decompression |
This place covers:
The adaptation of the scanning of coding units, e.g. the choice of a zig-zag scan of transform coefficients in a transform compressor, as in the figure, or the use of flexible macroblock ordering [FMO].
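The conventional zig-zag scan that such adaptation chooses between can be generated as follows (a sketch of the classic anti-diagonal pattern for an n x n transform block):

```python
def zigzag_order(n):
    """Classic zig-zag scan order for an n x n transform block: traverse
    anti-diagonals, alternating direction, so that low-frequency
    coefficients are scanned first. (row, col) tuples, 0-based."""
    order = []
    for d in range(2 * n - 1):                       # one pass per diagonal
        diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        order += diag if d % 2 else diag[::-1]       # alternate direction
    return order
```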
Attention is drawn to the following places, which may be of interest for search:
Definition of the coding unit | |
Video coding involving rearrangement of data among different coding units |
This place covers:
Subject matter wherein the entropy coding is adapted, e.g. frame type determining the coding table, CABAC, CAVLC, adaptive Huffman coding, choosing among different VLC methods for coding as in the figure.
Attention is drawn to the following places, which may be of interest for search:
Non-adaptive entropy coding for video compression | |
Non-adaptive run-length coding for video compression | |
Conversion to or from variable length codes in general | |
Conversion to or from run-length codes in general |
In patent documents, the following abbreviations are often used:
VLC | Variable Length Coding |
CABAC | Context-Adaptive Binary Arithmetic Coding |
CAVLC | Context-Adaptive Variable Length Coding |
This place covers:
Adaptive sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high frequency transform coefficient masking, i.e. suppression or setting to zero, macroblock skipping, as in the figure.
Attention is drawn to the following places, which may be of interest for search:
Adaptive prioritisation of hardware or computational resources | |
Definition of the coding unit | |
Temporal sampling or interpolation for video coding | |
Spatial sampling or interpolation for video coding |
This place covers:
The definition of an element, parameter or criterion that exercises control over an adapted element or selection, as classified in H04N 19/102, in the adaptive coding; "element" is to be understood as a functional block or process in the digital video compressor or decompressor.
This place covers:
Determination of motion inside a coding unit, e.g. the amount of temporal prediction error, such as an average difference calculated on a field, a frame or a block at two different time instants.
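The average-difference motion measure described above can be sketched as follows (a hypothetical Python illustration; the function name is invented for this example):

```python
def mean_absolute_difference(block_a, block_b):
    """Mean absolute difference between two co-located pixel blocks
    taken at two different time instants; a large value suggests that
    the coding unit contains motion."""
    assert len(block_a) == len(block_b)
    return sum(abs(a - b) for a, b in zip(block_a, block_b)) / len(block_a)

still = mean_absolute_difference([10, 20, 30, 40], [10, 20, 30, 40])
moving = mean_absolute_difference([10, 20, 30, 40], [40, 30, 20, 10])
```

An encoder can compare such a measure against a threshold to switch, for instance, between finer and coarser quantisation for moving content.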
Attention is drawn to the following places, which may be of interest for search:
Motion estimation or compensation for video compression | |
Analysis of motion in general |
This place covers:
The measurement of motion performed by explicitly using motion vectors (e.g. magnitude, direction, variance, reliability measures).
Attention is drawn to the following places, which may be of interest for search:
Motion estimation or compensation for video compression | |
Analysis of motion in general |
This place covers:
Determination of coding unit complexity, e.g. by means of an activity detection as in the figure below, by flatness detection or the energy of transform coefficients, by detection of edge presence, or by determination of the amount of spatial prediction error.
This place does not cover:
Measure of complexity defined by data rate or code amount at the encoder output |
This place covers:
The adaptive control of the video compression in response to detected scene cut or change.
Attention is drawn to the following places, which may be of interest for search:
Picture signal circuitry for video frequency region, e.g. scene change detection in television systems | |
Methods involving scene cut or scene change detection in combination with video compression |
This place covers:
The adaptive control of video compression by using information about the data rate or code amount at the encoder output.
Attention is drawn to the following places, which may be of interest for search:
Adaptation of the selection of the code volume for a coding unit prior to coding |
This place covers:
The adaptation of encoding as a function of data rate or code amount determined according to rate-distortion criteria, e.g. as a function of a cost function.
This place does not cover:
Rate distortion as a criterion for motion estimation |
Attention is drawn to the following places, which may be of interest for search:
Adaptation based on measured or subjectively estimated visual quality after decoding | |
Adaptation using optimisation based on Lagrange multipliers |
In this place, the following terms or expressions are used with the meaning indicated:
Cost function | A function of target parameters, as output rate and quality measurement after decoding (e.g. distortion). |
This place covers:
The estimation of the code amount by means of a model, e.g. a mathematical model or a statistical model, as done in the MPEG-2 Test Model 5 (TM5).
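As a rough illustration of model-based code amount estimation, the following sketch uses a first-order model of the TM5 global-complexity kind (this is a simplification, not the actual TM5 equations; real TM5 keeps separate complexity state per picture type):

```python
def update_complexity(bits_used, avg_quantiser):
    """Global complexity measure in the style of TM5: the bits produced
    by the previous picture multiplied by the average quantiser used for
    it. (Simplified illustration of the idea only.)"""
    return bits_used * avg_quantiser

def estimate_bits(complexity, quantiser):
    """Predict the code amount a picture would produce at a given
    quantiser, using the model R ~ X / Q."""
    return complexity / quantiser

X = update_complexity(bits_used=400_000, avg_quantiser=10)
predicted = estimate_bits(X, quantiser=20)
```

With such a model the encoder can choose a quantiser to hit a target code amount without actually encoding the picture first.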
Attention is drawn to the following places, which may be of interest for search:
Methods or arrangements, for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding |
This place covers:
The estimation of the code amount by off-line encoding, i.e. encoding without storing in the transmission buffer, e.g. by means of a separate encoder as in the figure below, and counting the actual data size of the compressed elementary stream.
Attention is drawn to the following places, which may be of interest for search:
Data rate or code amount at the encoder output by estimating the code amount by means of a model |
This place covers:
The control of the video coding by using the measurement of fullness of the transmission buffer, where the buffer may be implicit, e.g. in the case of a storage medium, a memory or a physical channel having a certain bit capacity.
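A minimal sketch of buffer-fullness-driven quantiser control (illustrative only; the linear mapping and the quantiser range 1 to 31 are assumptions for this example, not taken from any standard):

```python
def quantiser_from_fullness(fullness, capacity, q_min=1, q_max=31):
    """Map transmission-buffer fullness to a quantiser step: the fuller
    the buffer, the coarser the quantisation, so the output rate drops
    before the buffer overflows."""
    ratio = max(0.0, min(1.0, fullness / capacity))
    return round(q_min + ratio * (q_max - q_min))
```

Real rate-control schemes typically smooth this mapping and combine it with a code amount model, but the feedback principle is the one shown.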
Attention is drawn to the following places, which may be of interest for search:
Processing of video elementary streams |
This place covers:
The control of video coding by means of the quality after decoding, as measured e.g. by distortion measurement, or as estimated by means of subjective tests.
This subgroup should be assigned when quality is not particularly linked to output bit-rate.
This place does not cover:
Use of rate-distortion criteria |
Attention is drawn to the following places, which may be of interest for search:
Data rate or code amount at the encoder output, e.g. where the quality measure is directly linked to output bit-rate |
This place covers:
The control of video coding in dependence of the availability of hardware or computational resources, e.g. encoding based on power-saving criteria, time constrained encoding.
Attention is drawn to the following places, which may be of interest for search:
Prioritisation of hardware or computational resources, e.g. adaptively controlling the assignment of coding resources | |
Implementation details or hardware specially adapted for video compression or decompression |
This place covers:
- The control of video coding as a function of the coding mode assigned to the unit to be coded, i.e. the coding mode of the unit to be coded is predefined or preselected.
- The subgroup H04N 19/159 covers the case that the coding mode is the prediction type used for the unit to be coded, e.g. intra, inter or bidirectional, as in the figure directly below.
- The subgroup H04N 19/16 covers the case that the assigned coding mode is for a given display mode, e.g. for interlaced or progressive display mode, as in the figure directly below.
This place covers:
The control of the video encoding by means of the input from a user, e.g. from a user interface.
This place covers:
- The control of encoding the elementary video stream as a function of the feedback from the client/receiver or from the transmission channel, as e.g. in the figure below.
- The subgroup H04N 19/166 covers in particular the case that the feedback indicates a certain amount of transmission errors, e.g. obtained by means of bit- or packet-error-rate detection.
Attention is drawn to the following places, which may be of interest for search:
Embedding additional information in the video signal during the compression process | |
Control signalling related to video distribution between receiver, transmitter, and network components | |
Transmission of management data between client and server |
In patent documents, the following abbreviations are often used:
BER | Bit Error Rate |
PER | Packet Error Rate |
This place covers:
- The control of the video encoding as a function of a coding unit's position within a video image, e.g. the adoption of coding parameters adapted to a region of interest, different coding of foreground and of background, different coding at the image centre and at the image borders.
- Adaptive video coding generally depends only indirectly on the position within an image, e.g. coding parameters may be varied across coding units, e.g. blocks.
- The present subgroup covers the case when the spatial position within the image is explicitly and directly defined as a criterion.
Attention is drawn to the following places, which may be of interest for search:
Image region as coding unit |
In patent documents, the following abbreviations are often used:
ROI | Region Of Interest |
This place covers:
Definition of the video coding units that are controlled by or controlling the adaptive coding. The subgroups of H04N 19/169 define explicitly which coding units are meant.
Attention is drawn to the following places, which may be of interest for search:
with respect to H04N 19/179, referring to scene or shot as coding unit:
Methods involving scene cut or scene change detection in combination with video compression |
with respect to H04N 19/187, referring to scalable layer as coding unit:
Hierarchical and scalability techniques |
In this place, the following terms or expressions are used with the meaning indicated:
(Video) Object | MPEG-4 object, i.e. a region of the image with arbitrary shape |
Slice | A set of blocks within an image, e.g. a line of blocks. |
Block | A rectangular matrix of pixels. |
Macroblock | MPEG coding unit formed by four blocks arranged as a 2 x 2 matrix. |
Group of pictures | MPEG coding unit formed by a set of consecutive pictures. |
Scalable video layer | Coding unit of a scalable encoded video elementary stream |
In patent documents, the following abbreviations are often used:
GOB | Group of Blocks |
GOP | Group of Pictures |
GOF | Group of Frames |
FMO | H.264 Flexible Macroblock Ordering |
In patent documents, the following words/expressions are often used as synonyms:
- "slice" and "GOB"; "block" and "tile"
This place covers:
Adaptive coding applied to regions of interest [ROI].
This place covers:
Adaptive coding on any groups of blocks as long as these are linked to each other in a well-defined manner, such as slices in AVC and tiles in HEVC.
This place covers:
Special mathematical or algorithmic formulations for the methods or tools used for video coding adaptation.
This group is residual with respect to its subgroups.
This place covers:
The formulation in terms of optimisation based on Lagrange multiplier techniques, as e.g. in the cost function defined as C = R + LD, where R is the output rate, L is the Lagrange multiplier, and D is the distortion after decoding.
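The cost function C = R + LD defined above lends itself directly to a sketch of Lagrangian mode decision (illustrative Python; the candidate modes and their rate/distortion figures are invented for this example):

```python
def rd_cost(rate, distortion, lagrange_multiplier):
    """Lagrangian cost C = R + L * D, with R the output rate, L the
    Lagrange multiplier and D the distortion after decoding."""
    return rate + lagrange_multiplier * distortion

def best_mode(candidates, lagrange_multiplier):
    """Pick the coding mode minimising the rate-distortion cost.
    Each candidate is a tuple (mode_name, rate_in_bits, distortion)."""
    return min(candidates,
               key=lambda m: rd_cost(m[1], m[2], lagrange_multiplier))[0]

modes = [("intra", 1200, 40.0), ("inter", 300, 90.0), ("skip", 10, 400.0)]
```

A large L favours low-rate modes (e.g. skip), while a small L favours low-distortion modes, which is exactly the trade-off the multiplier controls.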
This place covers:
- Iterative and recursive algorithms and techniques applied to the adaptation of video coding.
- The special case of two-pass or two-step algorithms is covered by H04N 19/194.
This place covers:
Details of the mathematical laws or algorithms used for computation of encoding parameters (e.g. quantisation step, coding mode), e.g. estimating a current encoding parameter by averaging previously computed encoding parameters, or deriving the coding mode for the current coding unit from the coding modes of the neighbouring coding units. Neighbouring coding units may relate to views, layers, or spatial or temporal neighbours.
This place does not cover:
Formulations for processing of calculated motion vectors |
Attention is drawn to the following places, which may be of interest for search:
Formulations for initializing motion vector search |
This place covers:
Details of object-based video coding, as e.g. according to the standard MPEG-4.
Attention is drawn to the following places, which may be of interest for search:
Hierarchical and scalability techniques (cf. H04N 19/29) | |
Processing of video elementary streams in the server, e.g. for generating or manipulating the scene composition of objects | |
Processing of video elementary streams in the server involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, e.g. by decomposing video signals into objects | |
Processing of video elementary streams in the client device, e.g. involving rendering scenes according to scene graphs | |
Contour coding |
In this place, the following terms or expressions are used with the meaning indicated:
(Video) Object | MPEG-4 object, i.e. a region of the image with arbitrary shape |
Alpha-plane | A discrete bitmap (generally binary) defining the part of a frame constituting a given object, e.g. in terms of the position of the pixels belonging to the object or in terms of the position of the blocks covering the object. |
Sprite | A unified background image derived by compositing the backgrounds of the single frames of a video sequence, e.g. having a camera motion throughout a video segment (within e.g. a scene, a shot, a GOP, a sequence). It may be static or dynamic. |
Scene description coding | The coded representation of the spatiotemporal positioning of audio-visual objects as well as their behaviour in response to interaction, as e.g. in the standard MPEG-4 Part 11. |
Synthetic/natural hybrid coding | Part of the MPEG-4 standard relating to coding facial animation and mesh compression. |
Synthetic picture component | A picture component that is coded by geometric modelling with synthesizing at reconstruction (e.g. avatar). |
Natural picture component | A picture component that is coded "as it stands" without geometric modelling. |
In patent documents, the following abbreviations are often used:
BIFS | BInary Format for Scenes |
SNHC | Synthetic/Natural Hybrid Coding |
VOL | Video Object Layer |
VOP | Video Object Plane |
In patent documents, the following words/expressions are often used as synonyms:
- "object", "video object", and "video object plane (VOP)"
This place covers:
- Details of video coding, where the elementary video stream is coded so that it contains a hierarchy of different compressed representations of the same video sequence, wherein each representation may correspond e.g. to a different video resolution or video format. Layered coding is also covered here.
- The hierarchy may be incremental, as e.g. in scalable video coding (like the extension of the standard H.264 called Scalable Video Coding [SVC]).
This place does not cover:
Transform coding using sub-band based transform, e.g. wavelets |
Attention is drawn to the following places, which may be of interest for search:
Processing of video elementary streams in the server involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, e.g. by decomposing video signals into layers at the transmitter side | |
Controlling the complexity of the video stream at the transmitter side, e.g. by scaling the resolution or bitrate of the video stream | |
Processing of video elementary streams in the client device involving reformatting operations of video signals for household redistribution, storage or real-time display, e.g. by decomposing video signals into layers at the receiver side | |
Content or additional data management, e.g. controlling the complexity of the video stream at the receiver side |
In this place, the following terms or expressions are used with the meaning indicated:
Temporal scalability | Scalability in terms of frame rate, meaning that a given bit stream includes different sub-streams each with a different frame rate or sub-streams that, when combined, increase the output frame rate. |
Spatial scalability | Scalability in terms of spatial video sampling rate or resolution (e.g. quantisation step size, pixel bit depth), meaning that a given bit stream includes different sub-streams each with a different frame size or resolution or sub-streams that, when combined, increase the output frame size or resolution. |
This place covers:
Performing hierarchical or layered coding by acting on temporal resolution, e.g. temporal scalability.
Attention is drawn to the following places, which may be of interest for search:
Predictive coding using temporal sub-sampling or interpolation |
This place covers:
Performing hierarchical or layered coding by acting on spatial resolution, e.g. spatial scalability.
Attention is drawn to the following places, which may be of interest for search:
Predictive coding involving spatial sub-sampling or interpolation |
This place covers:
The preliminary organisation of the video elementary stream with assignment of different priorities or importance to data to be further transmitted, e.g. for transmission or dropping.
Attention is drawn to the following places, which may be of interest for search:
Error resilience techniques for digital video coding involving data partitioning | |
Control signalling in networks for selective content distribution, e.g. multimode transmission | |
Cryptographic protocols | |
Network security protocols | |
Protocols for real-time services in data packet switching networks | |
Network protocols for data switching network services |
This place covers:
Transcoding of the elementary video stream at the level of digital video coding, i.e. partial or full decoding of a coded input stream and re-encoding of the decoded output stream.
Attention is drawn to the following places, which may be of interest for search:
Video standard conversion at the pixel level, e.g. for analog television | |
Video conference systems, e.g. reformatting video signals | |
Processing of video elementary streams at a server involving reformatting operations of video signals | |
Processing of video elementary streams at a client device involving reformatting operations of video signals | |
Information retrieval, e.g. distillation of HTML documents for optimising the visualization of content or computer file format conversion | |
Cryptographic protocols | |
Network security protocols | |
Protocols for real-time services in data packet switching networks | |
Network protocols for data switching network services |
This place covers:
Implementation details or hardware specific for elementary video compression or decompression, e.g. dedicated software implementation, memory arrangements, parallel processing or hardware for motion estimation or compensation.
This place does not cover:
Filter definition or implementation details for defining sub-band transforms |
Attention is drawn to the following places, which may be of interest for search:
Decoder specific implementations | |
Binary arithmetic | |
Execution of machine instructions | |
Pipelines | |
Resource allocation | |
Transfer of information, buses | |
Digital computing | |
Complex mathematical operations | |
Software or hardware implementations of Fourier, Walsh or analogous domain transformations |
This place covers:
- Details of memory arrangements or management specifically dedicated to video compression.
- The subgroup H04N 19/426 covers details of memory downsizing techniques.
This place does not cover:
Techniques for memory access in motion estimation or compensation |
Attention is drawn to the following places, which may be of interest for search:
Accessing, addressing or allocating within memory systems or architectures in general | |
Memory management for general purpose image data processing | |
Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, e.g. display memories | |
Static storage for general purpose data processing, e.g. memories, shift registers |
This place covers:
Video decoders not symmetric with the corresponding encoders, i.e. where the decoding means or steps are not a mere reversal of the corresponding encoding means or steps, or specific hardware or software implementation details for the video decoder.
Attention is drawn to the following places, which may be of interest for search:
Implementation details or hardware specific for video encoding and decoding | |
Complex mathematical operations |
This place covers:
- Subject matter wherein additional information is provided and transmitted within the compressed video signal, e.g. flag information, ancillary encoding information without details of the syntax-related data structure, or watermarking.
- Encoding parameters are generally included for transmission in the video elementary stream.
- This group or its subgroups should be assigned if special details are provided about their insertion for transmission in the stream; e.g. their compression is covered by H04N 19/463.
This place does not cover:
Motion vector coding and transmission | |
Insertion of resynchronisation markers into the bitstream | |
Syntax aspects related to video coding |
This place covers:
Details of the embedding of additional information during the coding process, which is embedded into the image part or into the auxiliary information of the elementary video bit stream in order to be invisible, e.g. by watermarking.
Attention is drawn to the following places, which may be of interest for search:
Circuits or arrangements for control or supervision between transmitter and receiver, e.g. display, printing, storage or transmission of additional information in scanning, transmission or reproduction of documents or the like | |
Generation or processing of content or additional data for video distribution by content creator independently of the distribution process; Content for video distribution per se | |
Generation of protective data involving watermarking as additional data for video distribution |
This place covers:
Details of compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, of VLC data or of run-length data, filtering in the compressed domain.
This place does not cover:
Processing of decoded motion vectors | |
Motion estimation in a transform domain |
This place covers:
Predictive digital video coding techniques not otherwise provided for in other subgroups.
This place does not cover:
Transform coding used in combination with predictive coding, where the transform constitutes a significant non-trivial detail |
This place covers:
- Predictive digital video coding techniques involving temporal prediction not otherwise provided for in other subgroups.
- Details of temporal prediction are classified here.
This place does not cover:
Adaptive coding with adaptive selection between spatial and temporal predictive coding | |
Adaptive coding with adaptive selection among a plurality of temporal predictive coding modes |
This place covers:
- Temporal predictive coding using conditional replenishment, i.e. transmitting only a portion of a picture, in which a change has been detected with respect to the corresponding co-located portion of the immediately previous picture.
- Conditional replenishment may be seen also as motion compensated temporal predictive encoding, using only skipping or transmission with zero motion vector.
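Conditional replenishment as described above can be sketched as follows (a hypothetical Python illustration operating on frames represented as lists of pixel blocks; the threshold and names are invented for this example):

```python
def replenish(prev_frame, curr_frame, threshold):
    """Conditional replenishment: transmit only the blocks of the
    current frame whose change with respect to the co-located block of
    the immediately previous frame exceeds a threshold. Frames are
    lists of blocks; each block is a list of pixel values."""
    transmitted = {}
    for i, (prev, curr) in enumerate(zip(prev_frame, curr_frame)):
        change = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if change > threshold:
            transmitted[i] = curr        # changed block: send it
    return transmitted                   # unchanged blocks are skipped

prev = [[10, 10], [50, 50], [90, 90]]
curr = [[10, 10], [80, 20], [90, 91]]
```

The decoder keeps the previous frame's content for every block index absent from the transmitted set, which is the "zero motion vector or skip" view of the technique.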
This place covers:
- Details of disparity estimation and compensation in stereoscopic or multi-view video coding are also covered in this subgroup and in its subgroups. For a synopsis of motion estimation techniques in video coding, see the figure below.
Attention is drawn to the following places, which may be of interest for search:
Picture signal circuitry for video frequency region, e.g. for movement detection in television systems not related to digital video coding | |
Conversion of standards for analogue television systems, e.g. at pixel level involving interpolation processes involving the use of motion vectors | |
Analysis of motion by image analysis in general |
In this place, the following terms or expressions are used with the meaning indicated:
Motion vector | A two-dimensional vector used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture. |
Global motion estimation | Process to estimate the part of motion in a video sequence caused by camera motion, e.g. background motion by panning or zooming. |
Multiresolution motion estimation | Motion estimation performed on the same picture of a video sequence at different spatial sampling resolutions (coarse-to-fine: starting from the lowest resolution; fine-to-coarse: starting from the highest resolution). |
Block-based matching motion estimation | Classic motion estimation based on the search of a best matching block in a reference frame. |
Occlusion | A part of background or of a foreground object that is hidden in one frame and then uncovered in a following frame. |
(Motion) Search window | A region in a reference frame, where the search for the block or feature best matching the current block or feature is performed. |
In patent documents, the following abbreviations are often used:
MV | Motion Vector |
GMV | Global Motion Vector |
MAE | Mean Absolute Error |
MAD | Mean Absolute Difference |
SAD | Sum of Absolute Differences |
MSE | Mean Squared Error |
CCF | Cross-Correlation Function |
PDC | Pixel Difference Classification |
DFD | Displaced Frame Difference |
In patent documents, the following words/expressions are often used as synonyms:
- "reference frame" and "anchor frame"
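Classic block-based matching motion estimation with the SAD criterion, as defined in the tables above, can be sketched as follows (illustrative Python performing a full search over the search window; names and the toy reference frame are invented for this example):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur_block, top, left, radius):
    """Full-search block matching: test every candidate position inside
    the search window of the reference frame and return the motion
    vector (dy, dx) of the best-matching block."""
    n = len(cur_block)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                cand = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(cand, cur_block)
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]

# A distinctive 2x2 pattern located at offset (1, 1) in the reference:
ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
```

Full search is exhaustive and therefore costly; the fast-search techniques classified nearby reduce the number of candidate positions tested.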
This place covers:
- Subject matter wherein the determined or existing motion vectors are subjected to further processing or modification, e.g. scaling of motion vectors for scalability or transcoding purposes, encoding of motion vectors, reducing or dropping of motion vectors.
- Motion vector coding and predictive coding are covered in the subgroups.
Attention is drawn to the following places, which may be of interest for search:
Processing of encoding parameters different from motion vectors |
This place covers:
Motion estimation wherein motion vectors are attached to specific feature points or points of a mesh, e.g. affine motion models.
Attention is drawn to the following places, which may be of interest for search:
Rate distortion as a criterion for adaptive coding |
This place covers:
Uni-directional or bi-directional motion compensation with more than one reference frame per direction.
In this place, the following terms or expressions are used with the meaning indicated:
Bi-directional motion frame interpolation | Temporal interpolation where a frame is predicted as a function both of a preceding anchor frame and of a succeeding anchor frame, e.g. by averaging. |
This place covers:
Bi-directional motion compensation with one or more than one reference frame per direction.
In this place, the following terms or expressions are used with the meaning indicated:
Bi-directional motion frame interpolation | Temporal interpolation where a frame is predicted as a function both of a preceding anchor frame and of a succeeding anchor frame, e.g. by averaging. |
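Bi-directional prediction by averaging, as in the definition above, can be sketched as follows (illustrative Python on co-located pixels only; the motion compensation of each reference frame is omitted for brevity, and the weights are invented for this example):

```python
def bidirectional_predict(prev_ref, next_ref, w_prev=0.5, w_next=0.5):
    """Bi-directional prediction: each pixel is predicted as a weighted
    average of the co-located pixels in a preceding and a succeeding
    reference (anchor) frame, e.g. plain averaging with equal weights."""
    return [w_prev * p + w_next * n for p, n in zip(prev_ref, next_ref)]
```

Unequal weights model the case where the frame to be predicted is temporally closer to one anchor than to the other.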
This place covers:
Prediction of a frame (Ppred) from an anchor frame (Panc) that is not the closest anchor frame preceding or succeeding the frame to be predicted, cf. figure.
This place does not cover:
Video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic |
This place covers:
Sub-sampling or interpolation in the temporal domain during digital video compression or decompression.
Attention is drawn to the following places, which may be of interest for search:
Conversion of standards for analogue television systems, at pixel level involving interpolation processes | |
Adaptive sampling for adaptive digital video coding | |
Video compression using hierarchical techniques in the temporal domain |
This place covers:
- Sub-sampling or interpolation in the spatial domain during digital video compression or decompression.
- Details of sub-sampling or interpolation operations during motion estimation and compensation with sub-pixel accuracy are also covered here.
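Spatial interpolation as used for sub-pixel motion accuracy can be sketched as follows (illustrative Python; a one-dimensional bilinear half-pel filter, whereas real codecs prescribe longer, standard-specific filter taps):

```python
def half_pel_row(row):
    """Bilinear half-pel interpolation along one row of pixels: insert,
    between each pair of integer-position samples, their average,
    doubling the horizontal sampling density."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)          # interpolated half-pel sample
    out.append(row[-1])
    return out
```

Motion compensation with half-pel accuracy matches blocks against this denser grid, which reduces the prediction error for motion that is not an integer number of pixels.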
Attention is drawn to the following places, which may be of interest for search:
Conversion of standards for analogue television systems, at pixel level involving interpolation processes | |
Adaptive sampling for adaptive digital video coding | |
Video compression using hierarchical techniques in the spatial domain | |
Motion estimation or motion compensation with sub-pixel accuracy | |
Scaling the whole image or part thereof, e.g. by interpolation based image scaling |
This place covers:
Digital video compression involving spatial prediction techniques, e.g. details of intra prediction.
Attention is drawn to the following places, which may be of interest for search:
Adaptive coding with adaptive selection between spatial and temporal predictive coding | |
Adaptive coding with adaptive selection among a plurality of spatial predictive coding modes |
This place covers:
Details of stereoscopic or multi-view digital video coding including processing (e.g. compression) of depth maps.
Attention is drawn to the following places, which may be of interest for search:
Motion estimation or compensation, e.g. details of vector based interview estimation and compensation. |
Attention is drawn to the following places, which may be of interest for search:
Fourier, Walsh or analogous domain transformations in general, e.g. implementation details of DCT or wavelet transforms |
In patent documents, the following abbreviations are often used:
DCT | Discrete Cosine Transform |
KLT | Karhunen-Loève Transform |
DST | Discrete Sine Transform |
FFT | Fast Fourier Transform |
WLT | Wavelet Transform |
MCTF | Motion Compensated Temporal Filtering |
EZW | Embedded Zerotrees of Wavelets |
In patent documents, the following words/expressions are often used as synonyms:
- "discrete cosine transform" and "cosine transform"
This place covers:
Transform based predictive video coders of the type displayed in the figure below, i.e. where the transform is applied before or after the prediction loop.
Attention is drawn to the following places, which may be of interest for search:
Implementation details or hardware specially adapted for video compression or decompression |
This place covers:
Techniques applied at the level of encoding the elementary video stream for the purpose of increasing the error resilience thereof.
Attention is drawn to the following places, which may be of interest for search:
Systems for detection or correction of transmission errors in the transmission of television signals using pulse code modulation | |
Selective content distribution, e.g. error resilience techniques for storage at video servers or for channel coding adapted to video distribution | |
Channel coding of digital bit-stream for video distribution | |
Coding, decoding or code conversion, e.g. for error correction in general | |
Arrangements for detecting or preventing errors in the information received, e.g. preventing errors by adapting the channel coding |
In this place, the following terms or expressions are used with the meaning indicated:
Resynchronisation marker | A special Variable Length Coding binary word inserted to allow re-initialisation of VLC decoding, which is forced by the marker. |
Reversible Variable Length Coding | VLC allowing backward decoding of the stream, i.e. decoding of a VLC coded binary string starting from the end to the beginning. |
In patent documents, the following abbreviations are often used:
Resync marker | Resynchronisation marker |
RVLC | Reversible Variable Length Coding |
UEP | Unequal Error Protection |
This place covers:
Subject matter wherein details about standards related coding syntax or about using the syntax in the coding process are provided, e.g. H.264 supplemental enhancement information [SEI], headers definitions, details of elementary stream parsing.
In this place, the following terms or expressions are used with the meaning indicated:
Syntax | The definition of the binary codes and values that make up a conforming elementary video bit stream. |
Semantics | The definition of the meaning of the syntax and of the process flow for decoding the syntax elements to produce the digital video output. |
Profile/Level | Operational level of a standard compliant decoder, which uses a predefined subset of the features defining the complete decoder according to the standard. The definition of the predefined subset falls also within the prescriptions of the standard. |
This place covers:
Subject matter wherein a filtering operation specially adapted to video compression or decompression is present, though not necessarily adaptive, and wherein details of the filtering operation are provided.
This place does not cover:
Filter definition or implementation for sub-band based transform | |
Filtering for removal of coding artifacts |
Attention is drawn to the following places, which may be of interest for search:
Adaptive filtering operation | |
Pre-processing or post-processing specially adapted for video compression | |
Image filtering for image enhancement or restoration using local operators | |
Impedance networks, e.g. resonant circuits, filters in general |
This place covers:
- The insertion of the filtering within a prediction loop and details of such a filter.
- This subgroup is relevant only if the subject matter contributes to defining non-trivial details of the filtering operation as in-loop filtering, regardless of whether the filtering is adaptive in the sense of H04N 19/117 or not.
Attention is drawn to the following places, which may be of interest for search:
Adaptive filtering operation | |
Filter definition or implementation for sub-band based transform |
This place covers:
- Subject matter wherein the pre- or post-processing operation is present as a functional block in the video coding process, though not necessarily adaptive; i.e. the pre- or post-processing is performed, respectively, before the input of, or after the output of, the video coding process.
- This subgroup is relevant only if the subject matter to be classified contributes to defining non-trivial details of the pre- or post-processing, regardless of whether the filtering is adaptive in the sense of H04N 19/117 or not.
Attention is drawn to the following places, which may be of interest for search:
In-loop filtering |
This place covers:
Processing techniques (e.g. filtering or interpolation in the spatial or in the temporal domain) adapted to reduce artefacts caused by digital video compression, e.g. blockiness from block-based transform compression, frame freeze or jerkiness from dropping frames at compression or transmission, false contours from limited bit depth resolution.
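As an illustration only, a hedged sketch of a 1-D deblocking filter of the kind used against block-based transform artefacts: the samples adjacent to each block boundary are smoothed only when the step across the boundary is small (a large step is assumed to be a real image edge rather than a coding artefact). The block size and threshold below are illustrative and not taken from any standard.

```python
def deblock_row(row, block=4, threshold=8):
    out = list(row)
    for b in range(block, len(row), block):
        p, q = out[b - 1], out[b]       # samples left/right of the boundary
        if abs(p - q) < threshold:      # small step: treat as blockiness
            avg = (p + q) // 2
            out[b - 1] = (p + avg) // 2 # pull both sides towards the mean
            out[b] = (q + avg) // 2
    return out
```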
Attention is drawn to the following places, which may be of interest for search:
Circuitry for suppressing or minimising disturbance (e.g. moiré, halo) in television systems | |
In-loop filtering | |
Filtering or interpolation as an error concealment technique |
Attention is drawn to the following places, which may be of interest for search:
Picture signal circuitry for video frequency region, e.g. circuitry for scene change detection in television systems | |
Scene cut detection in adaptive video coding |
This place covers:
Techniques for the rearrangement of data among different coding units at the level of a single elementary video stream within the operation of the video coder, e.g. shuffling, interleaving, scrambling, permutation of pixel data or permutation of transform coefficient data among different blocks.
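As an illustration only of such data rearrangement, a minimal sketch of a row/column block interleaver (a generic technique; the dimensions are illustrative): data is written row-wise into a matrix and read out column-wise, so that neighbouring values end up separated in the output stream.

```python
def interleave(data, rows, cols):
    """Write row-wise into a rows x cols matrix, read it out column-wise."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # Reading column-wise from a row-wise write is undone by interleaving
    # again with the matrix dimensions swapped.
    return interleave(data, cols, rows)
```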
Attention is drawn to the following places, which may be of interest for search:
Analogue secrecy systems in television systems | |
Adaptive scanning of coding units | |
Processing of video elementary streams for video distribution involving video stream encryption at the transmitter side | |
Processing of video elementary streams involving video stream decryption | |
Processing of video elementary streams involving video stream encryption at the receiver side |
This place covers:
- Techniques for detecting transmission errors at the digital video decoder and at the level of the elementary video stream.
- The subgroup H04N 19/895 covers details of detection in combination with error concealment.
Attention is drawn to the following places, which may be of interest for search:
Decoders specifically adapted for coding, decoding, compressing or decompressing digital video signals | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience | |
Interfacing the downstream path of the transmission network originating from a server, e.g. channel decoding in selective content distribution | |
Monitoring of processes or resources, e.g. of downstream path of the transmission network at the receiver side | |
Monitoring of client processing errors or hardware failure in selective video distribution | |
Control signalling between network components and server or clients, e.g. monitoring network process errors by the network | |
Coding, decoding or code conversion for error detection or error correction in general |
This place does not cover:
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals | |
using adaptive coding | |
using video object coding | |
using hierarchical techniques, e.g. scalability | |
using video transcoding | |
Implementation details or hardware specially adapted for video compression or decompression | |
Decoders specifically adapted for coding, decoding, compressing or decompressing digital video signals | |
Embedding additional information in the video signal during the compression process | |
using compressed domain processing techniques other than decoding | |
using predictive coding | |
using transform coding | |
using error resilience | |
characterised by syntax aspects related to video coding | |
Details of filtering operations specially adapted for video compression | |
Pre-processing or post-processing specially adapted for video compression |
This place covers:
Subject matter wherein the entropy coding is specially adapted to video compression, e.g. specifics of table entries for fixed and variable length coding, details of MPEG Huffman coding, details of H.264 arithmetic coding.
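As an illustration only, a minimal sketch of unsigned Exp-Golomb coding, the universal variable length code used for many H.264 syntax elements (the ue(v) descriptor): a value k is sent as M leading zeros, a one, and the remaining M bits of the binary representation of k + 1.

```python
def ue_encode(k):
    body = bin(k + 1)[2:]              # binary of k+1; its MSB is always '1'
    return "0" * (len(body) - 1) + body

def ue_decode(bits):
    m = bits.index("1")                # count the leading zeros
    value = int(bits[m:2 * m + 1], 2) - 1
    return value, bits[2 * m + 1:]     # decoded value and remaining bits
```

For example, k = 0 is coded as "1" and k = 3 as "00100"; the code is prefix-free, so consecutive values can be decoded from a concatenated bitstring.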
Attention is drawn to the following places, which may be of interest for search:
Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC] | |
Run-length coding for video compression | |
Conversion to or from variable length codes in general |
In patent documents, the following abbreviations are often used:
VLC | Variable Length Coding |
This place covers:
- Subject matter wherein the run-length coding is specially adapted to video compression.
- In run-length coding a run, i.e. a sequence of identical data values, is coded by a representation of the data value together with the length of the sequence.
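The principle above can be sketched in a few lines (an illustration only; video codecs typically apply it to runs of zero-valued transform coefficients rather than raw samples):

```python
def rle_encode(values):
    """Represent each run of identical values as a (value, run_length) pair."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1            # extend the current run
        else:
            out.append([v, 1])         # start a new run
    return [tuple(p) for p in out]

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]
```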
Attention is drawn to the following places, which may be of interest for search:
Variable length coding in an adaptive video coding process | |
Conversion to or from run-length codes in general |
In patent documents, the following abbreviations are often used:
RLE | Run-Length Encoding |
This place covers:
Video compression using vector quantisation, i.e. by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them, and by representing each group by a single code, which is associated with its centroid point.
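As an illustration only, a minimal sketch of the quantisation step with a fixed, hypothetical codebook of 2-D centroids (real systems train the codebook from the data): each input vector is replaced by the index of its nearest centroid, and the decoder simply looks that centroid up again.

```python
# Hypothetical codebook of four centroid vectors.
CODEBOOK = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def vq_encode(vectors):
    def dist2(a, b):                   # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(CODEBOOK)), key=lambda i: dist2(v, CODEBOOK[i]))
            for v in vectors]

def vq_decode(indices):
    return [CODEBOOK[i] for i in indices]
```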
Attention is drawn to the following places, which may be of interest for search:
Compression in general, e.g. vector coding |
In patent documents, the following abbreviations are often used:
VQ | Vector Quantisation |
This place covers:
- Video compression using tree coding.
- Two-dimensional tree coding is called quad-tree coding. It is performed by partitioning an image or a video frame by recursively subdividing it into four quadrants or regions, until each region can be represented by a single colour or code word, and by coding the resulting tree data structure, in which each internal node has exactly four children and each terminal node (leaf) corresponds to a resulting region with the colour or code word associated with it; cf. R. Finkel and J.L. Bentley (1974), "Quad Trees: A Data Structure for Retrieval on Composite Keys", Acta Informatica 4 (1): 1–9.
- Tree coding in higher dimensions is defined correspondingly (e.g. octree coding, performed in three dimensions by subdivision into eight volumetric regions).
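The recursive subdivision described above can be sketched as follows (an illustration only, using lists of lists for the image and nested lists for the tree):

```python
def quadtree(img):
    """Recursively split a square image into quadrants until uniform.
    A leaf is the single value of a uniform region; an internal node is a
    list of its four children in NW, NE, SW, SE order."""
    first = img[0][0]
    if all(px == first for row in img for px in row):
        return first                    # uniform region -> leaf
    h = len(img) // 2
    nw = [row[:h] for row in img[:h]]
    ne = [row[h:] for row in img[:h]]
    sw = [row[:h] for row in img[h:]]
    se = [row[h:] for row in img[h:]]
    return [quadtree(nw), quadtree(ne), quadtree(sw), quadtree(se)]
```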
Attention is drawn to the following places, which may be of interest for search:
Image coding using tree coding, e.g. quadtree, octree |
This place covers:
Video compression using matching pursuit coding, cf. S. G. Mallat and Z. Zhang, "Matching Pursuits with Time-Frequency Dictionaries", IEEE Transactions on Signal Processing, December 1993, pp. 3397–3415.
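As an illustration only, a minimal sketch of the greedy matching pursuit iteration over a tiny hypothetical dictionary of unit-norm atoms: at each step the atom best correlated with the residual is selected, its coefficient recorded, and the residual updated.

```python
def matching_pursuit(signal, atoms, steps):
    """Greedy decomposition: returns (atom_index, coefficient) pairs
    and the final residual. Atoms are assumed to have unit norm."""
    residual = list(signal)
    chosen = []
    for _ in range(steps):
        # inner product of the residual with every atom
        corr = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        chosen.append((best, corr[best]))
        residual = [r - corr[best] * a
                    for r, a in zip(residual, atoms[best])]
    return chosen, residual
```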
In patent documents, the following abbreviations are often used:
MP | Matching Pursuit |
This place covers:
- Video compression using adaptive-dynamic-range coding, cf. Kondo et al., "Adaptive dynamic range coding scheme for future HDTV digital VTR", Proceedings of Signal Processing of HDTV, III. Fourth International Workshop on HDTV and Beyond, Turin, Italy, 4-6 Sept. 1991, p. 43-50.
The term "adaptive" in the "Adaptive-Dynamic-Range Coding" refers to the dynamic range being adaptive and not to the coding being adaptive, which is covered by H04N 19/10 and subgroups.
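As an illustration only, a minimal sketch of the ADRC principle: a block is described by its minimum value and its dynamic range, and each sample is requantised with a fixed number of bits spanning only that range. The 2-bit depth below is illustrative, not taken from the cited scheme.

```python
def adrc_encode(block, bits=2):
    lo, hi = min(block), max(block)
    dr = max(hi - lo, 1)               # dynamic range (avoid division by 0)
    levels = 1 << bits
    codes = [min((x - lo) * levels // dr, levels - 1) for x in block]
    return lo, dr, codes               # side info + per-sample codes

def adrc_decode(lo, dr, codes, bits=2):
    levels = 1 << bits
    # reconstruct each sample at the midpoint of its quantisation interval
    return [lo + (2 * c + 1) * dr // (2 * levels) for c in codes]
```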
In patent documents, the following abbreviations are often used:
ADRC | Adaptive-Dynamic-Range Coding |
This place covers:
- Lossy video compression using fractal algorithms, as described in Y. Fisher, D. N. Rogovin and T.-P. J. Shen, "Fractal (Self-VQ) Encoding of Video Sequences", Proc. of the Conference on Visual Communications and Image Processing '94, Chicago, IL, USA, 25-29 Sept. 1994, SPIE, vol. 2308, p.1359-1370 (1994).
Attention is drawn to the following places, which may be of interest for search:
Methods for coding digital video signals using vector quantisation |
This place covers:
- Interactive video distribution processes, systems, or elements thereof, which are characterised by point-to-multipoint system configurations, and which are mainly used for unidirectional distribution or delivery of motion video data resulting from interactions between system operators, e.g. access or service providers, or users, e.g. subscribers, and system elements.
- Such systems include dedicated communication systems, such as television distribution systems, which primarily distribute or deliver motion video data in the manner indicated, which may, in addition, provide a framework for further, diverse data communications or services in either unidirectional or bi-directional form. However, video will occupy most of the downlink bandwidth in the distribution process.
- Typically, system operators interface with transmitter-side elements, or users interface with receiver-side elements, in order to facilitate, through interaction with such elements, the dynamic control of data processing or data flow at various points in the system. This interaction is typically occasional or intermittent in nature.
- Processes, systems or elements thereof specially adapted to the generation, distribution and processing of data, which is either associated with video content, e.g. metadata, ratings, or related to the user or his environment and which has been actively or passively gathered. This data is either used to facilitate interaction or to alter or target the content.
- H04N 21/00 is an application place for a large number of IT technologies, which are covered by the corresponding functional places
- Video servers and clients internally use specific computing techniques. Corresponding techniques used in general computing are found in G06F. This concerns data storage, software architectures, error detection or correction, monitoring, video retrieval, browsing, Internet browsing, computer security, billing or advertising.
- Video servers and clients use specific telecommunication techniques for the video distribution process. Corresponding techniques used in generic telecommunication networks are found in subclasses H04B, H04H, H04L, H04W. This concerns monitoring or testing of transmitters/receivers, synchronisation in time-division multiplex, broadcast or multicast, maintenance, administration, testing, data processing in data switching networks, home networks, real-time data network services, data network security, applications for data network, wireless networks per se.
This place does not cover:
Real-time bi-directional transmission of motion video data |
Attention is drawn to the following places, which may be of interest for search:
Synchronising circuits with arrangements for extending range of synchronisation at the transmitter end | |
Television picture signal circuitry for Scene change detection | |
Reproduction of recorded television signals | |
Interface circuits between an apparatus for recording television signals and a television receiver | |
Television signal recording using magnetic recording on tape for reproducing at a rate different from the recording rate | |
Conversion of standards in analog television systems | |
Adaptations for transmission by electric cable for domestic distribution in television systems | |
Signal processing in analog two-way television systems | |
Reproduction of recorded television signals | |
Diagnosis, testing or measuring for television receivers | |
Systems for the transmission of television signals using pulse code modulation using bandwidth reduction involving transcoding | |
Flight-deck installations for entertainment or communications | |
Resetting in general | |
Constructional details of equipment or arrangements specially adapted for portable computer application | |
Power management in computer systems | |
Input arrangements for interaction with the human body based on nervous system activity detection | |
Interaction techniques for graphical user interfaces | |
Storage management | |
RAID arrays per se | |
Interfaces to printers | |
Digital output for controlling a plurality of local displays | |
Software architectures; Program control | |
Error detection or correction; Monitoring | |
Addressing or allocating within memory systems or architectures | |
Prefetching while addressing of a memory level in which the access to the desired data or data block requires associative addressing means within memory systems or architectures | |
Retrieval of video data | |
Retrieval from the web | |
Computer security | |
Printing data | |
Computer systems using learning methods | |
Billing; Advertising | |
Banking in general | |
Image watermarking in general | |
Image enhancement or restoration in general | |
Methods or arrangements for recognising scenes | |
Methods or arrangements for recognising human body or animal bodies or body parts | |
Methods or arrangements for acquiring or recognising human faces, facial parts, facial sketches, facial expressions | |
Methods or arrangements for recognising movements or behaviour | |
Adapting incoming signals to the display format of the display terminal | |
Details of formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes | |
Details of audio signal transcoding | |
Arrangements for data linking, networking or transporting, or for controlling an end to end session in a satellite broadcast system | |
Arrangements for wireless networking or broadcasting of information in indoor or near-field type systems | |
Monitoring or testing of transmitters/receivers | |
Broadcast communication | |
Synchronisation in time-division multiplex | |
Allocation of channels according to the instantaneous demands of the users in time-division multiplex systems | |
Arrangements for detecting or preventing errors in the information received by adapting the channel coding | |
ARQ protocols | |
Arrangements for synchronising receiver with transmitter | |
Arrangements for synchronising receiver with transmitter by comparing receiver clock with transmitter clock | |
Arrangements for synchronising receiver with transmitter wherein the receiver takes measures against momentary loss of synchronisation | |
Key distribution for secret or secure communication | |
Key distribution for secret or secure communication, using a key distribution center, a trusted party or a key server | |
Arrangements for secret or secure communication including means for verifying the identity or authority of a user of the system | |
Charging arrangements in data networks | |
Broadcast or multicast in data switching networks | |
Data processing in data switching networks | |
Analog front ends or means for connecting modulators, demodulators or transceivers to a transmission line | |
Maintenance or administration in data switching networks | |
Message switching systems | |
Data network security | |
Real-time data network services | |
Network arrangements or protocols in data packet switching networks for supporting network services or applications | |
Wireless networks |
The classification scheme has a matrix structure and symbols taken from its different cells allow classification of the relevant aspects of a document as seen below.
In cases where a document does not clearly teach whether operations are performed on the server side or the client side, it should be placed by default in the server part of the scheme (H04N 21/20). Further, if both embodiments (server side and client side) are present, symbols from both H04N 21/20 and H04N 21/40 should be allocated.
H04N 21/20 Server
Architectures
Specialised | |
Source | |
Secondary | |
Hardware |
Elementary / Bitstream Operation
Storage | |
Retrieval | |
Audio | |
Video | |
Add Data | |
Multiplex | |
Add Server | |
Downstream | |
Upstream | |
Monitoring | |
OS | |
SYNC |
Management operations
Learn | |
Shop | |
Client Data | |
Scheduling | |
Content Management |
End-User App
Storing Data | |
Directory Services |
H04N 21/40 Client
Architectures
Peripherals | |
Specialised | |
External | |
Input | |
Internal |
Elementary / Bitstream Operation
SYNC | |
Visual Interface | |
Retrieval | |
Storage | |
DeMultiplex | |
Add Data | |
Local Net | |
Upstream | |
Downstream | |
Video | |
User ID | |
Monitoring | |
OS |
Management operations
User Data | |
Filtering | |
Scheduling | |
Data Management | |
Learn |
End-user App
Request | |
Inputting | |
Add Services | |
Selection | |
Configuration | |
Data Services |
for details of servers or processes related to the reception of the content from the content provider or related to the distribution of content to clients. Network interfaces are included but not the communication aspects with clients
for structural details of client devices or processes related among others to the processing, storing or displaying of the received content as well as user interfaces for accessing video services
for the nature of the downlink / uplink or the exchange of control signals or data between clients, servers, network
for specific multimedia content or processes taking place before distribution (usually by the content provider) and independently according to their appropriate layer:
System architecture and topology
Functional and application aspects related to bit-stream processing or elementary operations
Functional and application aspects related to system management
Services and functionalities offered to the end-user
In this place, the following terms or expressions are used with the meaning indicated:
Additional data | designates still pictures, textual, graphical or executable data such as software. It is used to convey supplemental information and can be generated prior to or during the distribution process itself, e.g. metadata, keys |
Content | designates video or audio streams, which may be combined with additional data. Video data will always be present and occupy most of the downlink bandwidth in the distribution process |
Server | designates an apparatus designed for adapting the content received from the content provider to the distribution network. It also manages the distribution to client devices or intermediate components over a network. Further servers may also be present for gathering or generating additional data, e.g. rights management server |
Additional data server | designates a server whose sole purpose is the distribution or management of additional data. It is not in charge of the distribution of video or audio data |
Client | designates an apparatus such as a TV receiver, a set-top-box, a PC-TV, a mobile appliance (e.g. mobile phone or receiver in a vehicle), for receiving video, audio and possibly additional data from one or several servers or intermediate components via a network for further processing, storing or displaying. It can also transmit this data on a home-based local network to further devices, e.g. a home server transmitting video to PCs and set-top-boxes within a home |
Local network | pertains to a restricted area, e.g. a home or a vehicle, and designates the link between a client and its peripheral devices |
Network | is to be distinguished from "local network": "network" designates the link between the server and the clients, or between the server and the intermediate components, or between the intermediate components and the clients, or between remotely located clients |
Distribution | encompasses broadcasting, multicasting and unicasting techniques for transmitting content from one or more sources to one or more receiving stations. The distribution follows either a request by a receiving station to the source, e.g. VOD, or a customisation of the content by the source, e.g. targeting advertisements to a demographic group, in a unidirectional or bidirectional system. Additionally, distribution encompasses techniques where one client acts as a source and another client acts as a receiving station, e.g. a peer-to-peer system for sharing video among client devices |
End-user | designates a physical person, e.g. a TV viewer, who consumes the content using the client device. He is the final recipient of the content distributed by the server |
Interaction | covers actions occurring between or among two or more objects that have an effect upon one another, wherein objects comprise users, system operators, system elements, or content. The user may interact with content locally at the client device, e.g. for requesting additional data stored within the client device. The user may interact with content remotely through a server e.g. for VOD playback control or for uploading video to a server. The client device may interact with the content e.g. selecting content based upon the user profile. The client device may interact with a server using a return channel, e.g. for authenticating client or uploading client hardware capabilities. The server may interact with a client device, e.g. to force a client to tune to an advertisement channel |
Upstream | designates the direction of data flow towards the source, e.g. a server receiving a request via a mobile phone network |
Downstream | designates the direction of data flow towards a client, e.g. a client receiving data originating from a server |
Elementary stream | An elementary stream (ES) as defined by the MPEG communication protocol designates the output of an audio or video encoder |
In patent documents, the following abbreviations are often used:
VOD | Video On Demand |
SI | Service Information |
IP | Internet Protocol |
OS | Operating System |
PCR | Program Clock References |
STB | Set-top-box |
PC | Personal Computer |
PVR | Personal Video Recorder |
GPS | Global Positioning System |
ECM | Entitlement Control Message |
EMM | Entitlement Management Message |
ROI | Region Of Interest |
PIN | Personal identification number |
DSM-CC | Digital Storage Media - Command and Control Protocol |
RTP | Real-time Transport Protocol |
UMID | Unique Material Identifier |
MHEG | Hypermedia information coding Expert Group |
XML | eXtensible Markup Language |
This place covers:
Subject matter comprising methods and components in the main broadcast server, headend, video-on-demand server, or a server associated with the headend/video-on-demand server, including services, management and operations performed on the bitstream for distribution to client devices or an intermediate component over a network. The server adapts the content received from the content provider to the distribution network and only provides a network interface. Addressing issues and the exchange of control signals with the clients or the network are placed by definition in the T-model.

The first layer of this subgroup pertains to the physical description of the server, e.g. its internal components and the sources of the content. The server may consist of a single physical entity or of a plurality of interconnected sub-servers. The second layer is directed to elementary specialised functions such as the storage and retrieval of the content, the processing of the elementary multimedia streams, the multiplexing thereof, the insertion of additional data, the processing of the data at the downstream and upstream network interfaces (e.g. channel coding, network adaptation, handling of client requests), and the monitoring of internal processes, e.g. server load, or of network interfaces, e.g. downstream bandwidth. The third layer describes the management of the content and of the system, such as client device or user management, scheduling issues, e.g. according to bandwidth or billing policies, creation of virtual channels, and management of services not directly linked to the distribution of multimedia content, e.g. billing, shopping, rights. The last layer is directed to data services directly accessible by the user, such as hosting of private data.

The subgroup is directed to documents related to the insertion of server-related data into a signal, such as time information inserted into EPG information. Raw multimedia data per se is placed in H04N 21/80.
The subgroup is directed to documents related to server functions, such as transmitting data to the user; however, server characteristics initiated or performed on behalf of a user request are placed in H04N 21/60.

Examples of documents placed in the S-model:
(1) This subgroup is directed towards a server which could be the source of additional information related to the World Wide Web.
(2) This subgroup is directed towards alteration of the scene composition with regard to video objects (e.g. MPEG-2 or MPEG-4 objects).
(3) This subgroup is directed towards multiplexing of video and audio streams for transmission.
(4) This subgroup is directed towards the distribution of video data throughout a dwelling where the user is unaware of other users (e.g. a hotel, airplane or train). Systems that provide video distribution within a dwelling where the user is aware of other users (e.g. a home gateway) are classified elsewhere.
(5) This subgroup is directed to local storage built into (or next to) the server (H04N 21/218) and to placement of the data onto the local storage device (H04N 21/231) (note: this is typically used in a VOD environment). Systems concerned with the specific details of storage or recording of video data, where the claimed invention is directed to how the video is stored or recorded (e.g. placement of the recording heads within a local storage device on a server), are classified elsewhere.
This place does not cover:
Streaming audio/video via internet | |
Generation of the timestamps used for synchronization purposes | |
URLs sent in the video signal |
This place covers:
Physical description of the multimedia server. As most of the components are always present (e.g. modulator, memory), a symbol should be allocated only if one of the components has a critical function in the invention. It should further be noted that most of the components already have an entry in other technical fields and that, for example, the circuitry of a modulator is not part of this model.
This place covers:
Servers specially adapted to systems located in a confined environment.
This place does not cover:
Arrangements specially adapted for local area broadcast systems |
This place covers:
The server is used to distribute the content in a very limited geographical area, such as a single building, and is localised in the same building. It can be, for example, a hotel, multiple dwelling units, a hospital, a museum, or a movie theater if serving different projection rooms.
This place does not cover:
Adaptations for transmission by electric cable for domestic distribution in television systems | |
Arrangements specially adapted for plural spots in a confined site in broadcast systems |
This place covers:
Server and clients are localized in a movable object, such as an aircraft, a train or a bus.
This place does not cover:
Flight-deck installations for entertainment or communications | |
Arrangements specially adapted for transportation systems in broadcast systems | |
Moving wireless networks |
This place covers:
The source, from which the multimedia server accesses the multimedia content.
This place does not cover:
Details of retrieval in video databases |
This place covers:
- The same scene shot by different cameras under different angles.
- Panoramic video.
This place covers:
The source is located remotely, e.g. in other video servers, where all available movies are distributed over a plurality of video servers of equal importance.
Example(s) of documents found in this subgroup: WO0158163
This place does not cover:
Distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS] |
Attention is drawn to the following places, which may be of interest for search:
Systems involving a hierarchy between servers |
This place covers:
The video source is built into the server or next to it. It is typical for a VOD server.
This place covers:
Videos stored on disk arrays.
This place does not cover:
RAID arrays per se | |
Use of parity to protect data in RAID systems |
This place covers:
Videos retrieved from magnetic or optical tapes.
This place covers:
Physical aspects of the cache.
Example(s) of documents found in this subgroup: EP1315091
This place does not cover:
Caches in web servers or browsers | |
Intermediate storage and caching in data networks |
Attention is drawn to the following places, which may be of interest for search:
Caching operation on the server side |
This place covers:
Live feeds from cameras, or satellite at a headend.
This place covers:
- Local servers for serving mobile terminals.
- The concept of secondary server is used to describe a hierarchy among several servers, as for example in distributed systems.
This place does not cover:
Provisioning of proxy services in data packet switching networks |
In this place, the following terms or expressions are used with the meaning indicated:
Secondary server | A server belonging to a hierarchy of servers, e.g. servers forming part of distributed systems or local servers for serving mobile terminals |
This place covers:
Local server in a broadcast system.
This place does not cover:
CATV in broadcast systems |
This place covers:
Public access point, where content can be downloaded to / uploaded from clients.
This place does not cover:
Arrangements specially adapted to plural spots in a confined site in broadcast systems |
This place covers:
Local VOD server to serve a small area.
This place covers:
Identification number of the server. It can be used for authenticating the server.
This place does not cover:
Network arrangements in data packet switching networks, protocols or services for addressing or naming |
This place covers:
Elementary specialised functions. They can be implemented in software or hardware. Their task is to control the corresponding hardware component and to provide a service to the upper layer, e.g. network synchronisation using a master clock for downstream/upstream transmissions, or synchronisation of transmitters.
Attention is drawn to the following places, which may be of interest for search:
Handling or recovery of errors occurring in the server |
This place covers:
Organization of storage as well as writing actions. Storage can be performed in disk arrays as found in VOD servers, as well as in internal databases; also covers caching of movies or data, or any memory-related problem.
Attention is drawn to the following places, which may be of interest for search:
Retrieving and reading data in the server | |
Server-side memory management |
This place covers:
Methods describing the placement or distribution of content on different disks or different servers with the aim of providing a balanced load within the (distributed) system.
Example(s) of documents found in this subgroup: US 2004/0202444 A1
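The balanced-placement idea described above can be sketched as a greedy assignment: items are sorted by expected load and each one is placed on the currently least-loaded server or disk. This is a minimal illustration with hypothetical names, not the method of the cited document.

```python
import heapq

def place_content(items, n_servers):
    """Greedy balanced placement: each content item goes to the
    currently least-loaded server; largest items are placed first."""
    heap = [(0.0, s) for s in range(n_servers)]   # (accumulated load, server id)
    heapq.heapify(heap)
    placement = {}
    for name, load in sorted(items, key=lambda x: -x[1]):
        total, server = heapq.heappop(heap)       # least-loaded server so far
        placement[name] = server
        heapq.heappush(heap, (total + load, server))
    return placement
```

Placing five items of loads 5..1 on two servers yields totals that differ by at most one unit, the usual guarantee of this longest-processing-time heuristic.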
This place does not cover:
Storage management | |
Allocation of resources considering the load in multiprogramming arrangements | |
Techniques for rebalancing the load in a distributed system | |
Access to distributed or replicated servers, e.g. load balancing, in data networks |
Attention is drawn to the following places, which may be of interest for search:
Data replication on different disks or servers |
This place covers:
Caching action, for example of movies in a local VOD server. The storage has a temporary aspect and must be distinguished from buffering as performed in the video encoder, which holds the multimedia data for a brief period of time.
Example(s) of documents found in this subgroup: US 2002/0169926 A1
This place does not cover:
Buffering on the encoder side | |
Prefetching while addressing of a memory level in which the access to the desired data or data block requires associative addressing means within memory systems or architectures | |
Caching at an intermediate stage in a data network |
This place covers:
Details of the generation and the management of a local database. Local data are always stored in some kind of database, from simple lists to complex structures.
This place does not cover:
Details of retrieval of video data and associated meta data in video databases |
This place covers:
Algorithms describing which data are prioritized for deletion, e.g. the oldest or least used data, are classified here.
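As an illustration of such a deletion-priority policy, the sketch below (hypothetical class name) evicts the least recently used item when the cache is full — one of the strategies the group names:

```python
from collections import OrderedDict

class LRUCache:
    """Cache that deletes the least recently used entry on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key in self._items:
            self._items.move_to_end(key)      # mark as most recently used
            return self._items[key]
        return None

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict least recently used
```

An "oldest first" policy would differ only in not reordering entries on access.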
This place does not cover:
Storage management, e.g. defragmentation | |
Unloading stored programs | |
Housekeeping operations in file systems, e.g. deletion policies | |
Buffering arrangements in a network node or in an end terminal in packet networks |
This place covers:
Content replicated over different servers or over different hard disks.
This place does not cover:
Synchronization of replicated data | |
Error detection or correction by means of data replication | |
Replication in distributed file systems | |
Replication or mirroring of data in data networks |
This place covers:
Data block placement strategies in the disk array of video servers.
This place does not cover:
Data placement in general |
This place covers:
- Successive file blocks stored on different disks.
- A whole sector localized on one disk only.
This place covers:
A data sector distributed over several disks (RAID technology).
In this place, the following terms or expressions are used with the meaning indicated:
RAID | Redundant array of independent disks. A storage technology that combines multiple disk drive components into a logical unit |
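The redundancy mechanism behind the RAID definition above can be sketched with XOR parity (as in RAID 4/5): the parity block is the XOR of the data blocks in a stripe, and any single lost block can be reconstructed from the survivors. Function names are hypothetical.

```python
def xor_parity(blocks):
    """Parity block of a stripe: byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover_block(surviving_blocks, parity):
    """Reconstruct one lost data block from the others plus the parity."""
    return xor_parity(list(surviving_blocks) + [parity])
```

Because XOR is its own inverse, recovery is the same operation as parity generation.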
This place covers:
Operations linked to the retrieval of the multimedia stream from the disks, e.g. disk scheduling and file mapping.
This place does not cover:
Storage management | |
Details of querying and searching of video data from a database |
Attention is drawn to the following places, which may be of interest for search:
Content storage operation |
This place does not cover:
Monitoring, identification or recognition of audio in broadcast systems |
Attention is drawn to the following places, which may be of interest for search:
Details of formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes |
This place covers:
Reformatted audio stream, e.g. by converting from one coding standard to another.
This place does not cover:
Details of audio signal transcoding |
This place covers:
- Video stream management.
- The control of the encoder, video scaling and transcoding aspects, synchronization, interactive control of playback, composition of MPEG-4 objects or embedding of graphics or text.
Attention is drawn to the following places, which may be of interest for search:
Video encoding or transcoding processes per se | |
Involving embedding information at multiplex stream level, e.g. embedding a watermark at packet level | |
Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs |
This place covers:
Buffer level control.
This place covers:
Spatial composition of MPEG-4 objects at the program generation using a scene graph.
Attention is drawn to the following places, which may be of interest for search:
Scene rendering using a scene graph |
This place covers:
Detection of features (e.g. logo) in a video stream, extraction of characteristics directly from the video stream.
This place does not cover:
Television picture signal circuitry for scene change detection | |
Filtering for image enhancement | |
Methods or arrangements for recognising scenes | |
Arrangements characterised by components specially adapted for monitoring, identification or recognition of video in broadcast systems |
Attention is drawn to the following places, which may be of interest for search:
Image analysis per se |
This place covers:
Splicing of at least one video stream with another stream (video or not) at the server level. It can be used for inserting or substituting a piece of video such as a commercial.
This place covers:
The original A/V stream received from the content provider is reformatted. The output format is defined here.
Example(s) of documents found in this subgroup: US 2008/001791 A1
This place does not cover:
Video transcoding | |
Media packet handling at the source in data packet switching networks |
Attention is drawn to the following places, which may be of interest for search:
Details of conversion of video standards at pixel level |
This place covers:
Transcoding between standards (e.g. MPEG-2 to MPEG-4) or between formats, e.g. QuickTime to RealVideo.
This place does not cover:
Conversion of standards in analog television systems |
This place covers:
The components have been coded according to MPEG-4 and become objects.
This place covers:
- Content divided in layers, e.g. base layer and one or more enhancement layers.
- Multiple Description Coding [MDC].
This place covers:
- Transcoding between modalities, e.g. audio to text.
- Slideshow of still pictures transformed into a video.
This place covers:
The reformatting operation is performed on part of the stream, the part being a spatial region of the image or a time segment.
This place covers:
- New quantization parameters are introduced, allowing the resolution of each video frame to be changed.
- Degradation of the signal by addition of noise.
This place covers:
The server provides a video with a spatial resolution commensurate with, e.g., the display capabilities of the client.
This place covers:
Server reformats video to alter aspect ratio, e.g. between 4:3 and 16:9.
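The geometry behind such an aspect-ratio change can be sketched as follows (a minimal illustration assuming simple letterbox/pillarbox padding, not cropping or anamorphic stretching; names are hypothetical):

```python
def fit_aspect(src_w, src_h, dst_w, dst_h):
    """Scale a picture into a target frame while preserving its aspect
    ratio; returns the scaled size and the bar thickness on each side
    (letterbox for horizontal bars, pillarbox for vertical ones)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    w, h = round(src_w * scale), round(src_h * scale)
    return w, h, (dst_w - w) // 2, (dst_h - h) // 2
```

For example, a 16:9 picture reformatted into a 4:3 frame is scaled down and letterboxed with horizontal bars.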
This place covers:
Different versions of the same audio/video stream are created and stored for later immediate retrieval.
This place covers:
- Scrambling of the video stream, encryption of the content stream.
- Scrambling of multimedia content in general.
Example(s) of documents found in this subgroup: US 2002/0085734 A1
This place does not cover:
Multiplex stream encryption in the server | |
Arrangements using cryptography for the use of broadcast information or broadcast-related information |
Attention is drawn to the following places, which may be of interest for search:
Analogue secrecy systems | |
Arrangements for secret or secure communication | |
Arrangements for preventing the taking of data from a data transmission channel without authorisation | |
Security arrangements in wireless networks |
This place covers:
Encryption of content before storage in a (VOD) server, also known as off-line encryption.
This place covers:
Not all of the signal is scrambled or different parts are encrypted differently, e.g. to reduce processor load or to enable a reduced quality presentation.
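A minimal sketch of such selective scrambling (a toy XOR keystream standing in for a real cipher, selector and names hypothetical): only packets matching a predicate, e.g. those carrying I-frames, are scrambled, which reduces processor load while still denying a usable presentation.

```python
def selective_scramble(packets, key, select):
    """Scramble only packets for which select(index, packet) is true;
    XOR with a repeating key is used here purely for illustration."""
    out = []
    for i, pkt in enumerate(packets):
        if select(i, pkt):
            out.append(bytes(b ^ key[j % len(key)] for j, b in enumerate(pkt)))
        else:
            out.append(pkt)     # left in the clear
    return out
```

Because XOR is an involution, applying the function twice with the same key and selector restores the original stream.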
This place covers:
Insertion of software modules and additional data in the video stream. The specific nature of the additional data is not considered.
Attention is drawn to the following places, which may be of interest for search:
Calculation of the repetition rate and of the timing of insertion of additional data by the server-side scheduler | |
Processing of additional data on the client side | |
Arrangements for simultaneous broadcast of plural pieces of information |
This place does not cover:
Arrangements using cryptography for the use of broadcast information or broadcast-related information |
This place covers:
Coding/compression or more generally modification of additional data associated with the content.
This place covers:
Additional information, such as an HTML page, is reformatted by the server. Translation into a different language.
Example(s) of documents found in this subgroup: WO 02/071264 A2
This place does not cover:
Optimising the visualization of content for information retrieval from the Internet | |
Tracking of instant messages | |
Media packet handling at the source in data packet switching networks |
This place covers:
Modified resolution of the additional information. It can be used, e.g. to reformat additional data for different destination client devices.
This place covers:
The server generates at least one other version of the original additional data, which is available together with the original version.
This place covers:
Transport stream generation. Takes as input video or audio streams, or an already multiplexed A/V stream (remultiplexing), and outputs a single Transport Stream.
This place does not cover:
Multiplexing of data packets for data networks |
This place covers:
Modification of bitstream parameters, e.g. restamping, transmultiplexing, remapping of PIDs.
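The PID remapping named above operates on the 13-bit PID field in the 4-byte MPEG-TS packet header (sync byte 0x47, PID spread over bits of bytes 1-2). A hedged sketch, assuming standard 188-byte packets:

```python
def remap_pid(packet, pid_map):
    """Rewrite the 13-bit PID in the header of an MPEG-TS packet.
    pid_map maps old PID -> new PID; unmapped PIDs pass through."""
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    new = pid_map.get(pid, pid)
    out = bytearray(packet)
    out[1] = (out[1] & 0xE0) | (new >> 8)   # keep the 3 flag bits
    out[2] = new & 0xFF
    return bytes(out)
```

Restamping of clock references (PCR/PTS) would follow the same read-modify-write pattern on other header fields.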
This place covers:
Insertion of stuffing bits/bytes/packets in the packetised stream, e.g. to obtain a constant bitrate.
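In MPEG-2 transport streams the stuffing units are null packets carrying the reserved PID 0x1FFF. A minimal sketch of padding an interval up to a constant packet rate (function name hypothetical):

```python
# A 188-byte MPEG-TS null packet: sync 0x47, PID 0x1FFF, payload ignored.
NULL_PACKET = bytes([0x47, 0x1F, 0xFF, 0x10]) + b"\xff" * 184

def pad_to_rate(packets, target_count):
    """Append null packets until the interval carries exactly
    target_count packets, yielding a constant bit rate on the wire."""
    if len(packets) > target_count:
        raise ValueError("interval already exceeds the target rate")
    return packets + [NULL_PACKET] * (target_count - len(packets))
```

Receivers discard PID 0x1FFF packets, so the stuffing is transparent to decoding.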
This place does not cover:
Synchronisation arrangements in time-division multiplex systems using bit stuffing for systems with different or fluctuating information rates |
This place covers:
Multiplexing in an MPEG stream according to the DVB standard or generally speaking, insertion of additional data in the streaming of a digital TV system.
This place does not cover:
Arrangements for simultaneous broadcast of plural pieces of information |
This place covers:
Insertion in a DVB carousel.
This place does not cover:
Arrangements for broadcast or for distribution of identical information repeatedly in broadcast distribution systems |
This place covers:
Generation of MPEG SI and PSI tables.
This place covers:
The typical structure of a stat mux is a multiplexer which sends command signals back to the video coder(s) to make them change parameters (e.g. bitrate) so as to optimise the global use of the bandwidth.
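The feedback loop described above can be sketched as a rate-allocation step: the multiplexer distributes the channel capacity among the coders in proportion to the scene complexity each reports, subject to a guaranteed minimum. This is an illustrative model only; names and the proportional rule are assumptions.

```python
def stat_mux_allocate(complexities, total_rate, min_rate=0.0):
    """Share total_rate among encoders in proportion to reported
    complexity, with a guaranteed minimum rate per encoder."""
    n = len(complexities)
    spare = total_rate - n * min_rate
    if spare < 0:
        raise ValueError("total rate too small for the guaranteed minima")
    total_c = sum(complexities)
    if total_c == 0:
        return [total_rate / n] * n          # idle: split evenly
    return [min_rate + spare * c / total_c for c in complexities]
```

The resulting rates would be sent back to the coders as the command signals mentioned above.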
Attention is drawn to the following places, which may be of interest for search:
Generation of timestamps for synchronization purposes |
This place covers:
Processing the transport stream after its assembly and sending it over the network.
This place does not cover:
Hybrid Fiber Coaxial [HFC] networks for downstream channel allocation for video distribution | |
Flow control in packet networks | |
Real-time communication protocols in data switching networks | |
Scheduling or organising the servicing of application requests in data packet switching networks |
This place covers:
The video pump is responsible for feeding the program content to the network at the correct data rate, for example after having received a control signal from the network.
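The pacing job of the video pump can be sketched as computing the departure time of each chunk so the stream leaves the server at its nominal rate (a real pump would sleep until each send time; names hypothetical):

```python
def pump_plan(total_bytes, rate_bytes_per_s, chunk):
    """Return (send_time_s, size) pairs that deliver total_bytes at
    exactly rate_bytes_per_s, one chunk at a time."""
    plan, sent, t = [], 0, 0.0
    while sent < total_bytes:
        size = min(chunk, total_bytes - sent)
        plan.append((round(t, 6), size))
        sent += size
        t += size / rate_bytes_per_s   # next chunk leaves when this one drains
    return plan
```

A control signal from the network (e.g. congestion feedback) would simply change `rate_bytes_per_s` for the remaining chunks.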
Attention is drawn to the following places, which may be of interest for search:
Video streams retrieval |
This place covers:
Bitstream adapted to a specific network. The type of network or protocol used is classified elsewhere.
This place does not cover:
Transmission of MPEG streams over ATM |
Attention is drawn to the following places, which may be of interest for search:
Adapting the video stream to a specific local network, e.g. a Bluetooth® network |
This place covers:
Protection of the digital bitstream (e.g. RS coding) and modulation.
This place does not cover:
Arrangements for detecting or preventing errors in the information received by adapting the channel coding | |
Analog front ends or means for connecting modulators, demodulators or transceivers to a transmission line |
This place covers:
Channel and bandwidth allocation.
Example(s) of documents found in this subgroup: WO 03/088667 A1
This place does not cover:
Allocation of channels according to the instantaneous demands of the users in time-division multiplex systems | |
Admission control, resource allocation in open networks | |
Arrangements for maintenance or administration in data switching networks involving bandwidth and capacity management | |
Negotiating bandwidth in wireless networks |
This place covers:
Management of the video stream after receiving an upstream playback control signal from the client, for example in a VOD system to pause or fast-forward the video stream.
This place covers:
Processing of the transport stream as received from the network and before being adapted to the delivery medium.
This place covers:
- Embedding of data in a piece of content, for example picture, text in a video.
- The operations performed by a content provider at a workstation to create an interactive multimedia presentation.
This place covers:
Only the descrambling/decrypting of the transport stream is described here. The descrambling/decrypting of the video stream is described elsewhere.
This place covers:
This interface manages the uplink signals coming from all the clients and is used for example to handle requests (e.g. requests for a particular multimedia service).
This place does not cover:
Hybrid Fiber Coaxial [HFC] networks for upstream channel allocation for video distribution | |
Flow control in data networks | |
Real-time communication protocols in data switching networks | |
Scheduling or organising the servicing of application requests in data packet switching networks |
This place does not cover:
Scheduling or organising the servicing of application requests in data packet switching networks |
This place covers:
Admission policies of clients in video servers.
This place does not cover:
Admission control, resource allocation in open networks | |
Arrangements for network security using user profiles for access control | |
Access security in wireless networks |
This place covers:
Monitoring is an internal process which permanently checks user requests, the bandwidth available at the different network interfaces, or any internal processes. It can generate reports of system usage.
This place does not cover:
Monitoring of server performance or load | |
Monitoring or testing of transmitters in general | |
Arrangements for observation, testing or troubleshooting for broadcast or for distribution combined with broadcast |
This place covers:
The server monitors the client buffer.
This place covers:
Monitoring of the available bandwidth or bit rate.
This place does not cover:
Traffic monitoring in data switching networks | |
Monitoring data switching networks utilization |
This place covers:
Detection of an error during content distribution, content loading or multiplex management, or of a hardware failure.
This place does not cover:
Error or fault detection | |
Monitoring in general |
This place covers:
The load or processing capabilities of the server are monitored.
This place does not cover:
Allocation of resources in multiprogramming arrangements | |
Performance measurement of computer activity |
This place covers:
Monitoring of aired content for logging and verification purposes. It can be sent to a rights server or an advertiser for billing. Includes the number of times content has been downloaded (not requested, which is classified elsewhere).
This place does not cover:
Arrangements for monitoring programmes for broadcast or for distribution combined with broadcast |
This place covers:
- Requests from clients received at the upstream interface are monitored.
- Includes log files of client requests.
This place does not cover:
Monitoring data switching networks utilization | |
Scheduling or organising the servicing of application requests |
This place covers:
Basic functions provided by the operating system like memory management, event handling, multitasking, multithreading, setup.
Attention is drawn to the following places, which may be of interest for search:
OS processes, e.g. booting a STB, implementing a Java virtual machine in a STB or power management in a STB | |
Program loading or initiating in general | |
Multiprogramming arrangements |
This place covers:
Synchronization issues.
This place does not cover:
Arrangements for synchronising broadcast or distribution via plural systems in broadcast distribution systems |
Attention is drawn to the following places, which may be of interest for search:
Synchronising circuits with arrangements for extending range of synchronisation at the transmitter end | |
Synchronisation arrangements in time-division multiplex systems | |
Arrangements for synchronising receiver with transmitter |
This place covers:
Server-side system management.
This place does not cover:
Maintenance or administration in data networks |
This place covers:
Server-side agents are similar to the agents implemented on the client and perform similar operations.
This place does not cover:
Details of learning user preferences for the retrieval of video data in a video database | |
Computer systems using learning methods |
Attention is drawn to the following places, which may be of interest for search:
Learning process for intelligent management |
This place covers:
Preference data are processed to determine similarities between users. They can be clustered to have a limited number of groups of viewers. They are used to enrich the profile of one user by adding data from similar users.
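The similarity computation mentioned above is commonly done with a cosine measure over preference vectors; the sketch below is a generic illustration (names hypothetical), not a method prescribed by this group:

```python
import math

def cosine_similarity(prefs_a, prefs_b):
    """Similarity between two users' preference vectors
    (mapping, e.g., genre -> rating); 1.0 means identical taste."""
    common = set(prefs_a) & set(prefs_b)
    dot = sum(prefs_a[g] * prefs_b[g] for g in common)
    na = math.sqrt(sum(v * v for v in prefs_a.values()))
    nb = math.sqrt(sum(v * v for v in prefs_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Users whose similarity exceeds a threshold could then be clustered, and a sparse profile enriched with data from its nearest neighbours.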
Attention is drawn to the following places, which may be of interest for search:
Deriving a common profile for several users on the same client, e.g. family profile |
This place covers:
- Non-video distribution application.
- A whole range of services which do not deal directly with the distribution of multimedia content. They play a crucial part in the associated business model but, because of their non-technical nature, they are separated from the other management functions. They can also be provided by a third party.
- Support/help center, or the HLR of a mobile phone network for collecting the position of a mobile client.
This place does not cover:
Arrangements for maintenance or administration in data networks | |
Network services using third party service providers |
This place covers:
External server specially adapted to perform rights management operations.
This place does not cover:
Protecting software against unauthorised usage in a vending or licensing environment | |
Security in data switching network management | |
Security management or policies for network security | |
Access security in wireless networks |
Attention is drawn to the following places, which may be of interest for search:
Client-side monitoring of content usage | |
Definition of usage data |
This place covers:
Shopping and product management aspect. The shopping application is classified elsewhere.
This place does not cover:
Payment schemes, payment architectures or payment protocols for electronic shopping systems |
This place covers:
Billing aspects.
This place does not cover:
Payment architectures, schemes or protocols | |
Commerce, e.g. shopping or e-commerce | |
Arrangements for billing for the use of broadcast information or broadcast-related information |
Attention is drawn to the following places, which may be of interest for search:
Billing systems or methods specially adapted for commercial purposes | |
Charging arrangements in data networks |
This place covers:
The price depends on the nature of the program offered. It can also be inversely proportional to the amount of commercials inserted.
This place covers:
Billing aspects not pertaining to the end-user or subscriber but to a third party such as an advertiser. Billing can be performed according to monitored viewer selections.
This place covers:
Customer management: maintains databases for storing data about the clients it is connected to and their users.
This place does not cover:
Arrangements for services using the result on the distributing side of broadcast systems | |
Profiles in network data switching protocols |
This place covers:
The management system stores data pertaining to the client device, regardless of its user.
This place does not cover:
Terminal profiles in network data switching protocols |
This place covers:
The server authenticates the client device.
This place does not cover:
Restricting access to computer systems by authenticating users using a predetermined code | |
Cryptographic authentication protocols | |
Network authentication protocols | |
Authentication in wireless network security |
This place covers:
Clients may be diverse by nature and have different display capabilities, e.g. TV, PC, mobile phone or PDA.
This place does not cover:
Optimising the visualisation of content during browsing in the Internet | |
Processing of terminal status or physical abilities in wireless networks | |
Authentication in wireless network security |
Attention is drawn to the following places, which may be of interest for search:
Reformatting of the video stream by the server, e.g. based on client parameters |
This place covers:
A hardware profile contains a client ID, a STB manufacturer, model, general processing and memory/storage capabilities, except for display.
This place does not cover:
Allocation of resources considering hardware capabilities in multiprogramming arrangements | |
Allocation of resources considering software capabilities in multiprogramming arrangements |
This place covers:
The server determines or is aware of the location of the client device. The determination can be performed by retrieving data from an HLR in a mobile phone network or by triangulation methods.
This place does not cover:
Retrieval from the Internet by querying based on geographical locations | |
Arrangements for identifying locations of receiving stations in broadcast systems | |
Location of the user terminal in data switching networks | |
Services making use of the location of users or terminals in wireless networks | |
Locating users or terminals in wireless networks |
This group must be distinguished from user demographic data, classified elsewhere. This group is typically used for targeting location-dependent programs or additional information.
This place covers:
The server keeps a list of client devices which have been reported to have been involved in acts of piracy, such as falsifying the decryption card.
This place covers:
The software profile contains a record of the type of software installed on the client, including version number for automatic upgrades.
Attention is drawn to the following places, which may be of interest for search:
Details of operating systems in clients | |
Executable data, e.g. software |
This place covers:
The management system stores data related to its users regardless of the client device they use.
This place does not cover:
Customer care in data networks |
This place covers:
Storage of physical characteristics of the user (e.g. fingerprint). The server authenticates the user of the client device.
This place does not cover:
Restricting access to computer systems by authenticating users using a predetermined code | |
Arrangements for secret or secure communication including means for verifying the identity or authority of a user of the system | |
Network authentication protocols | |
Authentication in wireless network security |
This place covers:
When the user registers for the first time, he provides demographic data such as his gender, age, family status, profession, address and ZIP code. Covers general interests but not viewing interests.
This place does not cover:
Arrangements for identifying locations of users in broadcast systems |
This place covers:
Preferences may be derived from the viewing history of the user and collected dynamically. Preferences can also be collected at user registration, by the user providing general interests.
This place does not cover:
Retrieval of video data in a video database based on user preferences | |
Arrangements for recognizing users' preferences | |
User profiles in network data switching protocols | |
Processing of user preferences or user profiles in wireless networks |
Attention is drawn to the following places, which may be of interest for search:
Client-side monitoring of end-user | |
Uploading data stored on the client to server |
This place covers:
The function of the scheduler is to plan the distribution of the multimedia content over time. It must guarantee that the client can access the content when it is supposed to. The scheduler considers a number of constraints, such as different available bandwidths (for example by day or by night), higher priorities if a user has paid a higher fee, or the best timing for inserting a commercial (prime time). It also has to perform allocation tasks, for example assigning a time and channel to a TV program.
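One of the scheduler's constraint-handling steps can be sketched as a greedy selection: within a transmission window of limited bandwidth, the highest-priority items (e.g. those of users charged a higher fee) are placed first. A toy illustration with hypothetical names:

```python
def schedule_window(items, capacity):
    """Pick items for one transmission window, highest priority first,
    subject to the available bandwidth (capacity).
    items: iterable of (name, size, priority)."""
    chosen, used = [], 0
    for name, size, prio in sorted(items, key=lambda x: -x[2]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

Items that do not fit are deferred to a later window, where their priority may be raised.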
This place does not cover:
Scheduling strategies for dispatcher in multiprogramming arrangements | |
Arrangements for scheduling broadcast services or broadcast-related services | |
Flow control in packet networks | |
Establishing a time schedule or organising the servicing of application requests in data packet switching network |
This place covers:
The scheduling algorithm performs optimization operations under constraints received as input data.
This place covers:
The scheduler prioritizes the items to be transmitted according to the available network bandwidth.
This place does not cover:
Admission control, resource allocation in open networks | |
Flow control in packet networks | |
Establishing a schedule or organising the servicing of application requests taking into account QoS in data packet switching networks |
This place covers:
The scheduler defines priorities for the different items to be sent, for example according to billing policy (the user who has been charged the most will be served first).
This place covers:
Duration of a movie or TV program.
This place covers:
Pertains to the time of the day, week, etc., for example the best time of the day for inserting a commercial or airing a program suitable for children.
This place covers:
A TV program is delayed because of, e.g., an extended sports event.
This place covers:
Generation of a playlist and scheduling content items according to a playlist.
This place does not cover:
Retrieval of multimedia data based on playlists |
This place covers:
Algorithms considering at which frequency a piece of data should be repeated in the carousel, for example according to its importance. Also pertains to data which is repeated at a constant frequency.
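Such a repetition-frequency algorithm can be sketched as an earliest-deadline loop: each item has a repetition period (shorter for more important data), and the next carousel slot always goes to the item whose transmission is due first. An illustrative sketch with hypothetical names:

```python
import heapq

def carousel_schedule(items, slots):
    """Fill `slots` carousel slots; items maps name -> period (in slots).
    Items with a shorter period are inserted more often."""
    heap = [(period, name, period) for name, period in items.items()]
    heapq.heapify(heap)
    out = []
    for _ in range(slots):
        due, name, period = heapq.heappop(heap)  # most overdue item
        out.append(name)
        heapq.heappush(heap, (due + period, name, period))
    return out
```

With equal periods the schedule degenerates to a plain round-robin carousel, the constant-frequency case also covered here.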
This place does not cover:
Arrangements for broadcast or for distribution of identical information repeatedly in broadcast distribution systems |
This place covers:
Scheduling of NVOD services. Movies are repeated on different channels in a time-staggered manner.
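The time-staggering can be expressed in one line: with n channels looping a movie of duration d, start times are offset by d/n, so a viewer never waits longer than d/n for the next showing. A minimal sketch (names hypothetical):

```python
def nvod_start_times(movie_minutes, n_channels):
    """Start offsets (minutes) of the staggered NVOD channels."""
    stagger = movie_minutes / n_channels
    return [round(ch * stagger, 2) for ch in range(n_channels)]

def max_wait(movie_minutes, n_channels):
    """Worst-case wait until the next showing starts."""
    return movie_minutes / n_channels
```

For a 120-minute movie on 4 channels, showings start every 30 minutes.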
This place covers:
Metadata, such as program descriptors, are received from the content provider, which itself is not aware of a transmission schedule. The creation of the EPG data, consisting of metadata and time information, is therefore performed by the scheduler. The EPG user interface for program selection by the user is classified elsewhere.
This place covers:
The scheduler decides to update data or software resident in the client for example on a regular basis or according to special events.
This place does not cover:
Deployment, distribution, installation, update of software | |
Error detection or correction during software upgrading | |
Arrangements for updating broadcast information or broadcast-related information |
This place covers:
Gathering multimedia content from different sources, analyzing it and creating appropriate channels for the clients. It receives input from the scheduler.
This place covers:
Documents describing the processing of these descriptors, for example in case-based systems, where an incoming piece of content is classified with other similar ones.
This place covers:
Generation and management of entitlement messages in a conditional access system. Pertains to ECM and EMM only.
This place does not cover:
Arrangements for conditional access to broadcast information or to broadcast-related services |
This place covers:
Trans-encryption of the ECMs resulting from pre-encryption (or re-encryption of the control words used for pre-encryption) for use with a different transmission key, also known as encryption renewal.
This place covers:
Generation and management of keys on the server side.
This place does not cover:
Key distribution for secret or secure communication, using a key distribution center, a trusted party or a key server | |
Network support of key management | |
Key management for network security in communication control or processing |
This place covers:
Different channels can be merged into a single one. For example, in a VOD application, a client served by a unicast channel catches up with a multicast channel. Stream merging minimizes bandwidth usage.
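The catch-up arithmetic behind such merging can be sketched as follows: if the multicast stream is `lead` seconds ahead and the late client's unicast patch stream is delivered `speedup` times faster than real time, the client closes the gap and can join the multicast after lead/(speedup - 1) seconds. A toy model under these assumptions:

```python
def merge_time(multicast_lead, speedup):
    """Seconds until a late unicast client, buffering `speedup` times
    faster than real time, catches up with the running multicast."""
    if speedup <= 1:
        raise ValueError("the patch stream must run faster than real time")
    # client position: t * speedup; multicast position: lead + t
    return multicast_lead / (speedup - 1)
```

Once merged, the unicast channel is released, which is where the bandwidth saving comes from.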
This place does not cover:
Data multicast over packet-switching network |
This place covers:
The server controls the complexity of the video stream, for example based on its capabilities.
This place covers:
How content is retrieved from different sources (e.g. satellite and internet) or from different content providers.
This place covers:
- Generation of a personalized channel for one or a group of clients, according to their preferences. It also receives data from the scheduler.
- Describes the insertion of targeted commercials by the server (with no further details at bitstream level).
This place does not cover:
Information retrieval from the Internet by querying with filtering and personalisation | |
Arrangements for replacing or switching information during the broadcast | |
Push services over packet-switching network | |
Adaptation of message content in packet-switching networks |
This place covers:
Applications where the end-user is aware that they reside on the server, such as video hosting.
This place covers:
Storage of private data based on the explicit request of the end-user. The server holds private data received from the client as an extra service.
Attention is drawn to the following places, which may be of interest for search:
Management of user data for administrative purposes |
This place covers:
Personal videos have been uploaded by clients, for example to be viewed by other users.
This place covers:
The source can be storage dedicated to each user, for example to record movies if the capacity of the user's hard disk is not sufficient.
This place covers:
Creation of directory services, for example by indexing metadata for easy retrieval (keyword search of movies).
This place does not cover:
Details of content or meta data based information retrieval of video data in video databases |
This place covers:
Subject matter directed to the structure or operation of the client or end-user device, such as a TV receiver, a set-top box, a PC-TV or a mobile appliance (e.g. mobile phone or vehicle), defined as receiving video and possibly additional data from one or several servers or intermediate components via a network for further processing, storing and displaying. This also includes the transmission of these data on a home-based local network to further devices. The client extracts the raw multimedia content from the streams received from possibly heterogeneous sources (e.g. internet, broadcast network) and only provides a network interface. The exchange of control signals with the server or the network, as well as the uploading of client data to the server, is placed by definition in the T-model.
The first layer of this subgroup pertains to the physical description of the client and attached devices, e.g. its internal components, plug-in cards, input means, peripherals.
The second layer is directed to elementary specialized functions such as the processing of the data received from the downstream network interface (e.g. channel decoding, descrambling) and transmitted from the upstream network interface, the demultiplexing into elementary streams and the processing thereof, the extraction of additional data, the local storage of the content within the client device or its forwarding to peripheral devices via a local network, the combined display of several pieces of content on the same screen (e.g. news ticker, advertisement in a separate window), the monitoring of e.g. internal processes, user actions, network bandwidth. This layer also encompasses the software structure of the client device.
The third layer describes high level functions such as the selection of content (e.g. in unidirectional systems where the whole content is sent to the client), the management of content usage (e.g. conditional access, rights), the creation of local virtual channels (e.g. by combining streams retrieved from the broadcast network and the hard disk), the adaptation by learning of internal parameters (e.g. viewer profile).
The last layer is directed to services or applications as provided to the end user of the system such as defining setting parameters, selecting programs, making requests to the server or accessing additional services (e.g. banking, shopping, WWW browsing, gaming).
The subgroup is directed to documents related to the reception and processing of received data.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast | |
Arrangements specially adapted for receiving broadcast information |
Attention is drawn to the following places, which may be of interest for search:
Raw multimedia data per se |
This place covers:
Hardware level. Physical description of the multimedia client.
As most of the components are always present, e.g. tuner or memory, there is no need to describe them all; this avoids unnecessary classification work. An index should be allocated only if one of the components has a critical function in the invention.
It should further be noted that most of the components already have an entry in other technical fields and that, for example, the circuitry of a tuner is not part of this model.
This place covers:
A peripheral is considered here as an external device which receives multimedia data from the client (which excludes cards) and is in the immediate vicinity of the client (same room or house). The most common example is the video recorder, but other devices on a home network are possible. PDAs, game consoles or remote controls with a display are also covered.
Attention is drawn to the following places, which may be of interest for search:
Input-only peripherals | |
Communication aspects with peripherals |
This place covers:
Identification number of the peripheral device, network address on the local network.
This place does not cover:
Protecting specific internal or external computer components used for computing or processing information by creating or determining hardware identification | |
Network arrangements in data packet switching network, protocols or services for addressing or naming |
This place covers:
The client device, typically a STB here, is connected to a personal computer in order to extract data or software multiplexed with the video signal and forward it to the PC.
This place covers:
The printer can be used for printing coupons or any additional data received by the STB. Also covers electronic paper.
This place does not cover:
Interfaces to printers | |
Printing data |
This place covers:
Additional display device, e.g. projector, not being the main display device, e.g. TV set, which is always present.
This place does not cover:
Digital output for controlling a plurality of local displays |
This place covers:
Device receiving data from the client device, being typically a remote control with a display, a PDA or a mobile phone. Excludes PC, printer, additional display, recorder.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Constructional details of equipment or arrangements specially adapted for portable computer application |
This place covers:
A home appliance can be a lighting or an air conditioning system or metering devices.
This place does not cover:
Home automation data switching networks exchanging configuration information on appliance services |
This place covers:
A recording device can be a VCR, an external hard disk or a DVD player as a source of video or additional data. Personal video recorders with an internal hard disk are covered elsewhere.
This place does not cover:
Interface circuits between an apparatus for recording television signals and a television receiver |
This place covers:
Different embodiments of video client platforms.
This place covers:
The client device is a mobile phone, a PDA or any portable device.
This place does not cover:
Constructional details of equipment or arrangements specially adapted for portable computer application | |
Arrangements specially adapted for mobile receivers in broadcast systems |
This place covers:
Display device viewable by several users in a public space outside their home, e.g. movie theater or information kiosk. Excludes access points for downloading information.
This place covers:
The client device is located in a vehicle, e.g. car entertainment systems.
This place does not cover:
Arrangements specially adapted for transportation systems in broadcast systems |
This place covers:
The client device is a personal computer but not a portable device.
This place covers:
The client device is a personal video recorder, STB with hard disk.
This place does not cover:
Television signal recording | |
Arrangements for broadcast with accumulation-type receivers |
This place covers:
Cards being external components, which can be inserted in a dedicated slot, e.g. smart cards for a conditional access system or extension modules to upgrade the STB capabilities.
Attention is drawn to the following places, which may be of interest for search:
Interfacing a plurality of external cards, e.g. through a DVB Common Interface |
This place covers:
Cards holding a key for Conditional Access purposes, e.g. descrambling, or other decryption operations.
This place covers:
Cards holding identification data of the user, preferences, personal settings or any other kind of personal data.
This place does not cover:
Restricting access to computer systems by authenticating users using a predetermined code in combination with an additional device, e.g. dongle or smart card |
This place covers:
Cards having their own processing capabilities, e.g. an external module for video decoding.
This place covers:
Extension module or storage capabilities, e.g. memory sticks.
This place covers:
Bank, credit card or prepaid card, to be used e.g. in TV shopping applications.
Attention is drawn to the following places, which may be of interest for search:
Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock | |
Information-bearing cards or sheet-like structures characterised by identification or security features for use in combination with accessories specially adapted for information-bearing cards | |
Payment schemes, architectures or protocols | |
Payment architectures where the payment is settled via telecommunication systems | |
Payment architectures, schemes or protocols characterised by the use of cards | |
E-commerce | |
Mechanisms actuated by coded identity card or credit card to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus | |
Charging arrangements in data networks |
This place covers:
Devices used to send information and control signals from the user and their environment to the client. This includes remote controls, keyboards, mice and microphones, as well as cameras or biosensors.
This place does not cover:
Peripherals receiving signals from client devices |
Attention is drawn to the following places, which may be of interest for search:
Input arrangements or combined input and output arrangements for interaction between user and computer |
This place covers:
Devices used as passive input from the user. Such devices can be heat sensors for presence detection, EEG sensors or any limb-activity sensors worn by the user.
This place does not cover:
Input arrangements for interaction with the human body based on nervous system activity detection |
This place covers:
Sensors connected to the client which allow temperature, luminosity, pressure or seismic activity to be determined. Includes position sensors, e.g. GPS.
This place covers:
Any sound input device can be used to generate audio streams or to enter voice commands.
This place covers:
Only remote control devices transmitting input data to the client device and located in the direct vicinity thereof.
This place does not cover:
Constructive details of casings for the remote control device | |
User interfaces for controlling a tuning device of a television receiver through a remote control | |
Remote control of peripheral devices connected to a television receiver through the remote control device of the television receiver |
Attention is drawn to the following places, which may be of interest for search:
Computer pointing devices in general | |
Interaction techniques for graphical user interfaces in general | |
Remote control devices in general |
Attention is drawn to the following places, which may be of interest for search:
Touch pads in general |
This place covers:
Cameras allowing the client to become a video source, e.g. for uploading videos to a server or for identification purposes.
This place does not cover:
Television cameras |
In this place, the following terms or expressions are used with the meaning indicated:
Camera | Camera has the meaning of image generating device and covers also scanner (paper, fingerprint, retina) or any kind of imaging device. The camera allows the client to become a video source. It can be used for identifying the user or uploading videos to the server. |
This place covers:
The client device is controlled by input devices located at a distant location. A possible application could be that a user uses a mobile phone, a PDA or an office PC to program their STB at home. The input and client devices are connected by a wide area network, e.g. the internet.
This place covers:
- Internal components such as tuner, demodulator, demultiplexer, descrambler, video/audio decoder, CPU, volatile memory, hard disk, graphics board/circuitry, modem. It should be noted that certain components are typical for a multimedia client, like the ones used for video processing or the receiver circuitry.
- Additional built-in cards.
This place does not cover:
Receiver circuitry |
This place covers:
Pieces of hardware processing the incoming bitstream.
This place covers:
The presence of at least 2 tuners in a client device.
This place covers:
Internal reader / writer for DVD's, CD-ROM's and similar disks.
This place covers:
Removable hard disk within a client.
This place covers:
Hardware identification or serial number, also MAC address, socket ID.
This place does not cover:
Network arrangements in data packet switching network, protocols or services for addressing or naming |
This place covers:
Elementary specialized functions. They can be implemented in software or hardware. Their task is to control the corresponding hardware component and to provide a service to the upper layer.
This place does not cover:
Real-time communication protocols in data switching networks |
This place covers:
Clock recovery, e.g. extraction of the PCR packets.
This place does not cover:
Arrangements for synchronising receiver with transmitter by comparing receiver clock with transmitter clock | |
Arrangements for synchronising receiver with transmitter wherein the receiver takes measures against momentary loss of synchronisation |
This place covers:
Synchronised presentation of the multimedia content according to the time stamps. Additional data can be synchronized to the main content. Also locking items at given times.
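The timestamp-driven presentation decision can be sketched as follows, as a minimal illustration assuming a recovered system clock (STC) and per-access-unit presentation time stamps (PTS) in 90 kHz units; the function and tolerance value are illustrative, not prescribed:

```python
def ready_to_present(stc: int, pts: int, tolerance: int = 900) -> bool:
    """Present a decoded unit once the recovered clock reaches its PTS
    (90 kHz units; the default tolerance is ~10 ms)."""
    return stc >= pts - tolerance

def schedule(units: list[tuple[int, str]], stc: int) -> list[str]:
    """Return the (pts, payload) units whose presentation time has come,
    in timestamp order."""
    return [payload for pts, payload in sorted(units) if ready_to_present(stc, pts)]
```

The same comparison, applied to the timestamps of an additional data stream, synchronises it with the main content.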
This place covers:
Details of the generation of visual interfaces on a video client, involving graphical features, screen space management. They must be differentiated from applications making use of them such as EPG
Attention is drawn to the following places, which may be of interest for search:
Receiver circuitry for displaying additional information | |
End-user applications using user interfaces | |
Interaction techniques for graphical user interfaces |
This place covers:
Layout arrangement on the screen, overlays, menus in general.
This place covers:
Creation of a grid into which all the textual information is fitted, e.g. a rectangular grid.
This place covers:
A separate window or region is provided to display additional data, like a commercial or additional text.
This place covers:
- The displayed image can be altered according to certain parameters provided for example by an access control or censoring system. The image can be totally blanked or blurred or a region can be masked.
- Only details of the filtering action.
- Damping of brightness.
This place does not cover:
Image enhancement or restoration in general |
Attention is drawn to the following places, which may be of interest for search:
High-level filtering performed on a region of the image |
This place covers:
Retrieval from local storage.
This place does not cover:
Details of retrieval of video data and associated meta data in video databases |
This place covers:
Local storage. The client uses part of its volatile or non-volatile memory, e.g. a hard disk, to store part of the received multimedia data or data it has generated itself, e.g. monitored data.
This place covers:
Caching operations, for example of commercials for later insertion, or the generation of an internal database, for example for holding EPG data. The storage has a temporary aspect and must be distinguished from recording, which aims at long-term archiving at the request of the user. It is not meant either to describe the buffering performed in the video decoder, which holds the multimedia data for a brief period of time. Caching is an action which, unlike recording, is transparent to the end-user.
This place covers:
Particular details of the use of a local database. Also generation of directory structure, within the file system of the client device.
This place does not cover:
Interfaces, Database management systems or updating for information retrieval | |
Details of retrieval of video data and associated meta data in video database |
This place covers:
The incoming video stream, e.g. from live broadcast, can be paused. It is stored locally to allow the user to resume viewing later on.
This place covers:
Recording of received data for archiving purposes, i.e. permanent; not caching, which has a temporary aspect.
This place does not cover:
Recording of a television signal | |
Arrangements for recording or accumulating broadcast information or broadcast-related information |
This place covers:
The client device has a limited memory. Algorithms describing which data are prioritized for deletion, e.g. the oldest or least-used data.
This place does not cover:
Storage management, e.g. defragmentation | |
Unloading stored programs | |
Storage management in file systems | |
Buffering arrangements in a network node or in an end terminal in packet networks |
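The deletion-prioritisation idea described above can be sketched as follows. This is an illustrative toy model (the class `RecordingStore` and its logical clock are invented for the example) of a limited-capacity client store that evicts the least recently viewed recordings first:

```python
class RecordingStore:
    """Toy model of a limited-capacity client store that deletes the
    least recently viewed recordings first when room is needed."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._clock = 0                       # logical timestamp
        self._items: dict[str, int] = {}      # title -> last-access time

    def _tick(self) -> int:
        self._clock += 1
        return self._clock

    def view(self, title: str) -> None:
        if title in self._items:
            self._items[title] = self._tick()

    def store(self, title: str) -> list[str]:
        """Store a recording; return the titles evicted to make room."""
        evicted = []
        while len(self._items) >= self.capacity:
            oldest = min(self._items, key=self._items.get)  # least recently used
            del self._items[oldest]
            evicted.append(oldest)
        self._items[title] = self._tick()
        return evicted
```

An "oldest first" policy would be obtained by simply never updating the timestamp on viewing.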
This place covers:
Transport stream demultiplexing. It takes transport streams as input and, after demultiplexing, generates A/V streams, or remultiplexes several transport streams into a new transport stream. Demultiplexing includes PID filtering.
This place does not cover:
Multiplexing of data packets for data networks |
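The PID-filtering step of transport stream demultiplexing can be sketched as follows, as a minimal illustration assuming the standard MPEG-2 TS packet layout (188-byte packets, 0x47 sync byte, 13-bit PID in header bytes 1-2):

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def demultiplex(ts_data: bytes, wanted_pids: set[int]) -> dict[int, list[bytes]]:
    """Split a transport stream into per-PID lists of packets."""
    streams: dict[int, list[bytes]] = {pid: [] for pid in wanted_pids}
    for offset in range(0, len(ts_data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = ts_data[offset:offset + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue  # lost sync; a real demultiplexer would resynchronise
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        if pid in streams:  # PID filtering: keep only the elementary streams of interest
            streams[pid].append(packet)
    return streams
```

The per-PID packet lists would then feed the audio, video or section parsers of the client.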
This place covers:
Isochronously with the horizontal video sync, according to bit-parallel or bit-serial interface formats.
This place covers:
Modification of bitstream parameters, e.g. re-stamping, trans-multiplexing, or remapping of PIDs.
This place covers:
Retrieval of the system information (SI).
This place covers:
Extraction of the software or additional data that have been inserted in the packetised stream by replacement of (or by using the bandwidth occupied by) the stuffing bits/bytes/packets.
This place does not cover:
Synchronisation arrangements in time-division multiplex systems with different or fluctuating information rates |
This place covers:
Extraction of the additional data from a digital video stream.
This place covers:
Extraction process of the data out of the DVB carousel.
This place covers:
Software, additional data and, generally speaking, non-streaming data. This part is dedicated to the retrieval of software modules and non-audio-video information, such as additional data (descriptors, WWW pages, ...), for example by extracting them from a DVB carousel or receiving them from an internet site. Modules are reordered according to a directory module, checked for consistency and eventually the complete package is rebuilt.
This place covers:
After recovery, all modules are ordered and the initial package rebuilt.
This place does not cover:
Arrangements using cryptography for the use of broadcast information or broadcast-related information |
This place covers:
Additional information, such as an HTML page, is reformatted by the client device.
This place does not cover:
Optimising the visualization of content for information retrieval from the Internet | |
Adaptation of message content in packet-switching networks | |
Media handling at the source in data packet switching networks |
This place covers:
The resolution of the additional information is modified. It can be used, e.g., to reformat additional data on a handheld device attached to the STB.
This place covers:
The client device generates at least one other version of the original additional data, which is available together with the original version.
This place covers:
- A series of interfaces allowing communication with cards and peripheral devices. These include, for example, the DVB Common Interface, secure local communication (e.g. with a smart card or a video recorder) and connection to other video devices via IEEE 1394.
- Communication aspects with these devices when the client becomes a home server.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Arrangements specially adapted for plural spots in a confined site in broadcast systems |
This place covers:
Connection via common interface (DVB-CI), multiple conditional access.
This place covers:
- Several peripherals connected to a home network.
- Documents describing communication aspects on the home network or describing a home network with no special emphasis on the connected peripheral devices.
This place does not cover:
Home Audio Video Interoperability (HAVI) data switching networks |
This place covers:
Communication between the client and an external connected recording device.
This place covers:
Adaptation of the bitstream to the local network, e.g. transport of video over firewire.
This place covers:
Devices and peripherals connected via a firewire (IEEE1394) link.
This place does not cover:
High-speed IEEE 1394 serial bus |
This place covers:
Devices and peripherals connected via an HDMI connection.
This place does not cover:
Arrangements for wireless networking or broadcasting of information in indoor or near-field type systems |
Attention is drawn to the following places, which may be of interest for search:
Wireless local area data switching networks | |
Flow control in wireless networks |
This place covers:
Secure communication with the peripheral or with a smart card.
Attention is drawn to the following places, which may be of interest for search:
Security arrangements for protecting computers or computer systems against unauthorised activity | |
Arrangements for secret or secure communication per se | |
Security arrangements in wireless networks |
This place covers:
Non-physical details of phone or cable modems. Communication aspects with the server are to be found elsewhere.
This place does not cover:
Flow control in data networks | |
Streaming protocols, e.g. RTP or RTCP | |
Scheduling or organising the servicing of application requests in data packet switching networks |
This place covers:
The downstream network interface processes the electromagnetic waves received from the network and outputs multimedia streams. It comprises channel tuning to obtain the baseband signal, channel decoding and descrambling.
This place does not cover:
Transmission of MPEG streams over ATM | |
Flow control in data networks | |
Real-time communication protocols in data switching networks |
This place covers:
The bitstream is adapted to a specific network. The type of network or protocol used is classified elsewhere.
This place does not cover:
Transmission of MPEG streams over ATM |
This place covers:
Demodulation and error correction.
This place does not cover:
Analog front ends or means for connecting modulators, demodulators or transceivers to a transmission line |
This place covers:
Channel selection.
Attention is drawn to the following places, which may be of interest for search:
Tuning indicators; Automatic tuning control |
This place covers:
Fast channel change or rapid tuning relates to techniques where the STB tries to display an image as quickly as possible in the interval between the user issuing a channel change command and the decoding buffer being filled. Such techniques comprise, for example, decoding a low-resolution stream or a stream sent at a higher rate.
This place covers:
Processing of the transport stream as received from the network and before being sent to the demultiplexer.
This place covers:
Only the descrambling/decrypting of the transport stream.
This place does not cover:
Arrangements using cryptography for the use of broadcast information or broadcast-related information |
Attention is drawn to the following places, which may be of interest for search:
Multiplex stream encryption | |
Video stream decryption |
This place covers:
Audio stream management.
This place covers:
The audio stream is parsed to extract or recognize some features or to detect embedded triggers.
This place does not cover:
Arrangements characterised by components specially adapted for monitoring, identification or recognition of audio in broadcast systems |
This place covers:
The audio stream is muted because, for example, rights have been violated or for censoring purposes.
This place covers:
Reformatted audio stream, e.g. by converting from one coding standard to another.
This place does not cover:
Details of audio signal transcoding |
This place covers:
- Video stream management. Receives the video stream from the demultiplexer and performs MPEG decoding, synchronization with other streams.
- Management of the video decoder buffer.
This place covers:
Detection of features (e.g. logo) in a video stream, extraction of characteristics or generation of metadata in the client directly from the video stream.
This place does not cover:
Arrangements characterised by components specially adapted for monitoring, identification or recognition of video in broadcast systems |
This place covers:
- Spatial composition of a scene according to the scene graph in the rendering process. Scene graph updating following client/user control is covered here as well as the animation of objects.
- Specific to the processing of MPEG-4 objects.
This place covers:
Splicing of at least one video stream with another stream (video or not) at the client level. It can be used for inserting or substituting a piece of video such as a commercial.
This place does not cover:
Details of conversion of video standards at pixel level | |
Video transcoding | |
Adapting incoming signals to the display format of the display terminal | |
Media handling at the source in data packet switching networks |
This place covers:
The MPEG stream is preprocessed for formatting and recording on a DVD.
This place covers:
The client device generates a layered video stream from the original one.
This place covers:
Transcoding between modalities, e.g. audio to text.
This place covers:
The reformatting operation is performed on part of the stream, the part being spatial region of the image or a time segment.
This place covers:
New quantization parameters are introduced, allowing the resolution of each video frame to be changed.
This place covers:
Client-side alteration of the spatial resolution, mainly for displaying on a peripheral device.
This place covers:
Conversion of signal for displaying with a different aspect ratio or with a different resolution.
This place covers:
Alteration of the frame rate.
This place does not cover:
Television signal recording using magnetic recording on tape for reproducing at a rate different from the recording rate |
This place covers:
Client devices generating at least one other version of the original content, which is available together with the original version.
This place covers:
Descrambling of the video stream, decryption of the content stream.
Attention is drawn to the following places, which may be of interest for search:
Arrangements using cryptography for the use of broadcast information or broadcast-related information | |
Arrangements for secret or secure communication |
This place covers:
Not all of the signal is scrambled or different parts are encrypted differently, e.g. to reduce processor load or to enable a reduced quality presentation.
This place covers:
Client devices re-encrypting the decrypted video stream, e.g. with another key. It can be used for secure recording.
Attention is drawn to the following places, which may be of interest for search:
Arrangements using cryptography for the use of broadcast information or broadcast-related information | |
Arrangements for secret or secure communication |
This place covers:
The user identification can be used to retrieve the user's settings or viewing preferences, or in financial transactions. Monitoring of the user's actions is to be classified elsewhere.
Attention is drawn to the following places, which may be of interest for search:
Restricting access to computer systems by authenticating users using a predetermined code | |
Authentication in wireless communication networks |
This place covers:
The user is passively identified by facial/fingerprint/voice recognition.
This place does not cover:
Cryptography using biological data | |
Authentication in networks using biometric |
Attention is drawn to the following places, which may be of interest for search:
Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints | |
Restricting access to computer systems by authenticating users using biometric data | |
Authentication in wireless network security |
When classifying in this group, a corresponding symbol should be added to describe the type of sensor / input device used.
This place covers:
Monitoring is an internal process which permanently checks user inputs, the bandwidth available at the different network interfaces or any internal processes. It can generate history data, which is later processed, for example by a recommender system, to build a user profile automatically (implicit profile). Since profiles can also be defined explicitly by the user via a menu, only the passive monitoring of user selections is to be classified here. Creation of explicit profiles is indexed elsewhere.
This place does not cover:
Monitoring of user activities for profile generation for accessing a video database | |
Arrangements for monitoring broadcast services or broadcast-related services | |
Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information | |
Monitoring in wireless networks |
This place covers:
Clients keeping track of how often a piece of content has been viewed, copied or recorded. Also covers records of the content ID or the percentage of the program viewed/recorded.
This place does not cover:
Monitoring of user activities for profile generation for accessing a video database | |
Protecting generic digital content where the protection is independent of the precise nature of the content | |
Arrangements for monitoring the use made of the broadcast services in broadcast systems |
This place covers:
The connection to the server is monitored, for example availability, bandwidth.
Example(s) of documents found in this subgroup: US 2008/0104653 A1
This place does not cover:
Measuring or estimating channel quality parameters | |
Arrangements for maintenance or administration in data switching networks involving bandwidth and capacity management |
This place does not cover:
Arrangements for monitoring the users' behaviour or opinions in broadcast systems |
This place covers:
Sensors are used to detect, for example, the presence of individuals in front of the TV set, as well as whether somebody is entering or leaving the room. User reactions, like movements or facial expressions, should also be classified here.
This place does not cover:
Methods or arrangements for recognising human body or animal bodies or body parts | |
Methods or arrangements for acquiring or recognising human faces, facial parts, facial sketches, facial expressions | |
Methods or arrangements for recognising movements or behaviour | |
Arrangements for identifying users in broadcast systems |
This place covers:
User selections using, for example, a remote control are monitored. This covers the selection of programs, duration of viewing, purchase activity, setting reminders, and answers to quizzes, questionnaires or advertisements. A log file is generated. Also covers clickstream data.
This place does not cover:
Monitoring of user selections in data processing systems | |
Arrangements for monitoring the user's behaviour or opinions in broadcast systems |
Attention is drawn to the following places, which may be of interest for search:
Monitoring of user activities for profile generation for accessing a video database | |
Lawful interception | |
Tracking the activity of the user |
This place covers:
The status of the connection or bandwidth variations on the local network are monitored. Also, detection of new devices in the local network.
This place does not cover:
Configuring of peripheral devices in general | |
Monitoring connectivity in data switched networks |
This place covers:
The status of the connected peripheral devices is monitored, e.g. to detect the failure of a VCR or a hard disk problem of an external storage device.
This place does not cover:
Configuring of peripheral devices in general | |
Reporting information sensed by appliance or service execution status of appliance services in a home automation network | |
Monitoring the status of connected device in data switched networks |
This place does not cover:
Computer virus detection and handling | |
Protecting computer platforms against harmful, malicious or unexpected behaviour or activities using intrusion detection and counter measures |
This place covers:
- The client monitors if all its components, internal processes are running properly and reports possible troubles.
- CPU and memory load, processing speed, buffer (other than decoder buffer), timer, counter, percentage of the hard disk space used, authentication of internal components.
This place does not cover:
Diagnosis, testing or measuring for television receivers | |
Error monitoring in general | |
Arrangements for monitoring conditions of receiving stations in broadcast systems |
Attention is drawn to the following places, which may be of interest for search:
Monitoring or testing of receivers with feedback of measurements to the transmitter |
This place covers:
Monitoring of the upstream connection, e.g. its availability or bandwidth.
Attention is drawn to the following places, which may be of interest for search:
Measuring or estimating channel quality parameters |
This place covers:
Monitoring of errors related e.g. to content uploading, demultiplexing or due to hardware failure.
Attention is drawn to the following places, which may be of interest for search:
Monitoring in electrical digital data processing | |
Error detection in general | |
Monitoring in general |
This place covers:
- Operating system aspects.
- Basic functions provided by the operating system like memory management, event handling or details of dedicated software libraries.
Attention is drawn to the following places, which may be of interest for search:
Boot device selection; Loading of operating system | |
Arrangements for program loading or initiating | |
Program loading or initiating in general using non-volatile memory from which the program can be directly executed |
This place covers:
Details of dedicated software libraries or APIs.
This place covers:
Setup parameters can be stored locally or received from the server.
Describes the action of powering on or booting the client device.
This place does not cover:
Resetting in general | |
Bootstrapping in general | |
Program loading or initiating in general | |
Secure boots of computer platforms |
This place covers:
Details of memory access. Pertains only to the RAM and not the hard disk.
This place does not cover:
Allocation of memory to service a request | |
Addressing or allocating within memory systems or architectures |
This place covers:
Battery power management of the receiver, e.g. DVB-H, stand-by mode or shutting down unused parts of the receiver.
This place does not cover:
Power management in computer systems | |
Hibernate or awake process in computer systems |
This place covers:
Presence or details of the implementation of a virtual machine.
This place does not cover:
Virtual machines in general |
This place covers:
- A window manager represents a technical evolution with respect to older techniques of displaying non video data on a screen such as PiP or OSD.
- The creation, management of windows or drawing primitives and generally speaking the management of the interaction with a GUI including event handling.
This place covers:
System management. This layer describes high-level functions of the multimedia client, but which are still transparent for the end-user.
This place covers:
Management functions implemented in the client device.
This place covers:
The hardware profile describing the processing capabilities of the client is used to discard data streams that the client cannot handle, and to retrieve software modules or streams compatible with its capabilities.
It covers hardware and software resources, e.g. the version of the installed software.
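The capability-based discarding can be sketched as follows, assuming hypothetical `codec` and `height` fields in the stream announcements and a made-up profile layout:

```python
def compatible_streams(streams, profile):
    """Discard announced streams that the client's hardware profile
    cannot handle (field names are hypothetical)."""
    return [s for s in streams
            if s["codec"] in profile["codecs"]
            and s["height"] <= profile["max_height"]]

profile = {"codecs": {"mpeg2", "h264"}, "max_height": 720}
streams = [{"name": "HD h264", "codec": "h264", "height": 720},
           {"name": "UHD hevc", "codec": "hevc", "height": 2160}]
print([s["name"] for s in compatible_streams(streams, profile)])
# → ['HD h264']
```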
Attention is drawn to the following places, which may be of interest for search:
Allocation of resources considering hardware capabilities | |
Allocation of resources considering software capabilities | |
Message adaptation based on network or terminal capabilities in packet switching networks | |
Terminal profiles in network data switching protocols | |
Processing of terminal status or physical abilities in wireless networks |
This place covers:
The geographical position of the client, e.g. a region or ZIP code for a fixed client, or data provided by a GPS receiver for a mobile client, is used to provide the user with information related to his geographical environment, such as regional news or advertisements.
Example(s) of documents found in this subgroup: US 6,948,183 B1
This place does not cover:
Retrieval from the Internet by querying based on geographical locations | |
Systems specially adapted for using geographical information in broadcast systems | |
Protocols in which the network application is adapted for the location of the user terminal in communication control or processing | |
Services making use of the location of users or terminals in wireless networks | |
Locating users or terminals in wireless networks |
This place covers:
The viewer profile is either compiled from history data, defined explicitly by the user, or received from the server as demographic data.
Example(s) of documents found in this subgroup: WO 2006/1296988 A1
This place does not cover:
Monitoring of user activities for profile generation for accessing a video database | |
User profiles in network data switching protocols | |
Processing of user preferences or user profiles in wireless networks |
This place covers:
The server sends the same content to a plurality of clients, as it does not have any prior knowledge of their requirements. The filtering module extracts the part relevant to the client according to criteria. Advanced filtering systems use learning algorithms to adapt the criteria according to explicit user inputs and/or monitored data. Details of image filtering are described elsewhere.
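The extraction step can be sketched as below, with hypothetical `genre` and `rating` fields and made-up filtering criteria:

```python
def filter_stream(items, criteria):
    """Extract from the incoming stream the items matching the
    client-side filtering criteria (item and criteria fields are
    hypothetical)."""
    return [it for it in items
            if it["genre"] in criteria["genres"]
            and it["rating"] <= criteria["max_rating"]]

stream = [
    {"title": "News at 9", "genre": "news", "rating": 0},
    {"title": "Action Movie", "genre": "movie", "rating": 16},
    {"title": "Kids Show", "genre": "kids", "rating": 0},
]
wanted = {"genres": {"news", "kids"}, "max_rating": 12}
print([it["title"] for it in filter_stream(stream, wanted)])
# → ['News at 9', 'Kids Show']
```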
Attention is drawn to the following places, which may be of interest for search:
Filtering and selective blocking of messages over packet-switching networks |
This place covers:
Usually the filter will extract a small portion of the data from the incoming stream. As this operation is trivial, it does not need to be described. However, some applications consider removing a small part of the information from the incoming stream. Examples can be censoring of scenes, image regions or blocking of advertisements.
This place covers:
Once the filtering criterion is defined, the filter needs to know to which kind of stream or additional data it has to be applied.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Filtering for image enhancement or restoration |
This place covers:
The filter could process MPEG-4 streams, for example to delete some objects.
This place covers:
A region of interest is defined and displayed, blurred or masked. This can be applied to analog or MPEG-2 streams, where the image has been encoded as a whole and not as a set of objects; therefore, the region has to be marked and tracked. Censoring systems may require masking or blurring of some regions of the image.
This place covers:
A time segment of the video is filtered out.
This place covers:
The scheduler has a similar function to the scheduler on the server side. It processes incoming streams of data as well as data cached on an internal disk and creates virtual channels. It can also be controlled by the server. It can generate a stream of personalized content.
This place does not cover:
Arrangements for replacing or switching information during the broadcast or during the distribution |
This place covers:
The client automatically resolves scheduling conflicts, such as having to perform two operations at the same time, e.g. recording two different movies in the same time slot.
This place covers:
The client checks itself if an update operation needs to be performed. This could be implemented by comparing the version of software modules in a DVB carousel with the local version.
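The version comparison can be sketched as follows, with the carousel directory modelled as a plain dictionary (a hypothetical simplification of a DVB object carousel listing):

```python
def needs_update(local_modules, carousel_modules):
    """Compare the version of each locally installed software module
    with the version announced in the carousel directory and list the
    modules that must be upgraded (data layout is hypothetical)."""
    return [name for name, ver in carousel_modules.items()
            if local_modules.get(name, (0,)) < ver]

local = {"epg": (1, 2), "browser": (2, 0)}
carousel = {"epg": (1, 3), "browser": (2, 0), "games": (1, 0)}
print(needs_update(local, carousel))  # → ['epg', 'games']
```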
This place does not cover:
Deployment, distribution, installation, update of software | |
Error detection or correction of the data by redundancy during software upgrading | |
Arrangements for updating broadcast information or broadcast-related information |
Attention is drawn to the following places, which may be of interest for search:
Program updating while running in general |
This place covers:
Management functions implemented in the client device.
This place covers:
Scalability control is performed by the client device, for example to forward the data to a low-resolution device on a home network.
This place does not cover:
Arrangements for using the results of monitoring on user's side in broadcast systems | |
Flow control in packet networks |
This place covers:
- The client combines data received from different sources, e.g. EPG data from cable operators, satellite services, internet or internally stored.
- Describes also the connection to the same source via different networks, e.g. a part of the content is distributed via TV broadcast and another one via internet.
This place does not cover:
Web site content organization and management for information retrieval from the Internet | |
Transmission by internet of broadcast information | |
Stock exchange data over packet-switching network | |
Push services including data channel over packet-switching network |
This place covers:
Processing of the ECM, EMM messages received from the server. Details of the descrambling are found elsewhere.
This place does not cover:
Arrangements for conditional access to broadcast information or to broadcast-related services |
This place covers:
Described here is the management of the rights attached to the content; the rights associated with the content are retrieved. The rights of the different users are defined using an application described elsewhere.
This place does not cover:
Security in data switching network management | |
Security management or policies for network security | |
Access security in wireless networks |
Attention is drawn to the following places, which may be of interest for search:
Generation of protective data, e.g. certificates | |
Protecting software against unauthorised usage in a vending or licensing environment |
This place covers:
The agent is an intelligent system, which learns and tries to adapt its output to its inputs. It receives input data directly from the viewer (explicit profile) via a corresponding user interface (e.g. movie ratings) as well as from the monitoring module (implicit profile). Its output can be a control signal to the filter or a recommendation list, which will be displayed in a corresponding user interface. Learning can be implemented using one of the methods described below, or a combination of several methods.
Attention is drawn to the following places, which may be of interest for search:
Monitoring of user activities for profile generation for accessing a video database | |
Computer systems using learning methods | |
Services using the results of monitoring in broadcast systems |
This place covers:
In contrast to a case-based agent, a collaborative system is based on the similarity between user profiles. A user is compared to other users and if he is found, for example, to belong to a user group, the recommendation list of this group will be used for him. As this system is implemented on the client side, only documents related to user profiles on the same client (stored locally), or to profiles from other clients provided by the server to the client concerned, should be classified here.
Example(s) of documents found in this subgroup: WO 03/043337 A1
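The profile-similarity principle can be sketched as follows, with profiles modelled as hypothetical genre-to-weight dictionaries and similarity measured by cosine distance (one common choice among several):

```python
import math

def cosine(a, b):
    """Cosine similarity between two user profiles represented as
    genre -> weight dictionaries (a hypothetical representation)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(target, others, lists):
    """Reuse the recommendation list of the most similar profile."""
    best = max(others, key=lambda name: cosine(target, others[name]))
    return lists[best]

me = {"sports": 0.9, "news": 0.1}
others = {"u1": {"sports": 0.8, "movies": 0.2},
          "u2": {"news": 0.2, "kids": 0.8}}
lists = {"u1": ["Match of the Day"], "u2": ["Cartoon Hour"]}
print(recommend(me, others, lists))  # → ['Match of the Day']
```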
This place does not cover:
User profiles in network data switching protocols |
Attention is drawn to the following places, which may be of interest for search:
Deriving collaborative data from a large group of end-users on the server |
This place covers:
Types of learning method used by the agent.
This place covers:
Bayesian (probabilistic) networks are used.
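A minimal illustration of the probabilistic idea, reduced to a single-variable Bayesian estimate with Laplace smoothing; a full Bayesian network would factor in further variables (time of day, channel, etc.), and the history data below is made up:

```python
def bayes_like_probability(history, genre):
    """Estimate P(user likes a programme | its genre) from the viewing
    history by a simple Bayesian frequency update with a Laplace prior."""
    liked = sum(1 for g, ok in history if g == genre and ok)
    total = sum(1 for g, _ in history if g == genre)
    return (liked + 1) / (total + 2)  # Laplace smoothing

history = [("sports", True), ("sports", True),
           ("sports", False), ("news", False)]
print(bayes_like_probability(history, "sports"))  # → 0.6
```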
This place covers:
Decision trees or any type of classifier are used.
This place covers:
The agent will try to match its output to the feedback provided by the user using a neural network. The learning process requires several iterations to converge.
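The iterative convergence can be illustrated with a toy single-neuron agent (real systems would use a multi-layer network; the features and feedback values below are made up):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Toy single-neuron agent: over several iterations it adjusts its
    weights so that its like/dislike output matches the viewer's
    feedback."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):              # several iterations to converge
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # viewer feedback vs agent output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# features: (is_sports, is_late_night); target: did the viewer like it?
samples = [((1, 0), 1), ((0, 1), 0), ((1, 1), 1), ((0, 0), 0)]
w, b = train_perceptron(samples)
print(1 if w[0] * 1 + w[1] * 0 + b > 0 else 0)  # → 1 (likes sports)
```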
This place covers:
- User selections are recorded in a history file, i.e. describing processing operations of the history, e.g. trend analysis or clustering.
- The generation of the user profile, if disclosed explicitly. The dynamic adaptation of the profile is performed by an intelligent agent.
This place covers:
Generation of a recommendation or suggestion list.
This place covers:
End user applications in the sense of services provided by the multimedia system to the users. There are basically two categories of applications: the ones providing local interactivity and the ones requiring an uplink.
Attention is drawn to the following places, which may be of interest for search:
Receiver circuitry for displaying additional information | |
Interaction techniques for graphical user interfaces | |
Software engineering for user interfaces | |
Services or applications for real-time multimedia communications |
This place covers:
A request application allows the user to request a program or any additional information. Covers all on-request applications. The request may be fulfilled immediately, with a small delay, or later in the future. The head group also covers requests for downloading music.
Example(s) of documents found in this subgroup: EP 1 947 855 A1
This place does not cover:
End-user interfaces for retrieving video data from a database | |
Network services for supporting unicast streaming |
This place covers:
True VOD systems allowing the user to request and receive a movie within a short delay. The movie is therefore streamed only to the requesting user, or it is made available on a multicast channel. This group also covers details of the menu to stop, pause, FFWD, RWD or play a movie.
This place covers:
- Users interacting with MPEG-4 objects.
- Editing by the end-user on the client device.
This place covers:
Movies are sent on a regular basis with a time offset (staggered) on different broadcast channels.
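The resulting maximum waiting time follows directly from the stagger interval; a sketch (the numbers are made up):

```python
def nvod_wait(now_min, stagger_min):
    """Waiting time before the movie restarts on one of the staggered
    NVOD channels: with a stagger interval of `stagger_min` minutes,
    a copy of the movie starts every `stagger_min` minutes, so the
    wait is the time until the next start slot."""
    return (-now_min) % stagger_min

# Movie staggered every 15 minutes; a viewer arriving 7 minutes
# after a start waits 8 minutes for the next copy.
print(nvod_wait(7, 15))  # → 8
```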
This place covers:
Broadcast programs, other than NVOD, associated with a request for purchasing, e.g. free preview programs.
This place does not cover:
Payment schemes, payment architectures or payment protocols |
This place covers:
- The viewer can mark a program displayed in an EPG for later viewing or recording. Pertains to the reservation of time, channel or a piece of content.
- Bookmarking operations as well as the request for notification when an event has occurred, e.g. sport results or stock exchange above a given level.
This place does not cover:
Stock exchange data over packet-switching network | |
Push services over packet-switching network | |
Notification of incoming messages in packet switching networks |
This place does not cover:
Specific graphical features in visual interfaces |
This place covers:
The user actively requests additional data, e.g. by pressing a button on the remote control when an icon signalling the presence of interactive content is displayed on the screen.
This place covers:
Additional data is accessed by clicking on a hotspot.
This place does not cover:
Processing chained hypermedia data for information retrieval | |
Details of information retrieval from the Internet by using URLs |
This place covers:
Manual selection of a portion of the displayed frame on the screen by the user.
This place covers:
Profile applications allow the user to define parameters which control the viewing experience.
This place covers:
Master users, e.g. parents, defining several accounts for the users of the client, e.g. for children.
This place covers:
The user identifies himself to the client by entering a password or a PIN. Passive identification is to be found elsewhere.
This place does not cover:
Cryptographic authentication protocols | |
Network authentication protocols |
This place covers:
The user enters for example his favorite channels, actors, directors, program genre or just a rating level (as used with a V-chip). Covers menus for parental control in general.
Example(s) of documents found in this subgroup: US 2002/0140728 A1
This place does not cover:
Retrieval personalisation and generation of user profiles for the retrieval of video data | |
User profiles in network data switching protocols |
This place covers:
This application is required for example by the agent module during its learning phase. Items are displayed on the screen and the user is requested to provide a rating.
This place covers:
Questions and answers. It can be used to poll users about their opinion regarding a problem raised during the TV broadcast, to react on an advertisement or in a TV quiz. This group also covers voting.
This place covers:
Described here are applications which are provided as additional services to the users but do not belong to the core services of a multimedia system.
This place covers:
On-line banking applications, including the trading of stocks.
This place does not cover:
Banking in general |
This place covers:
Only games which do not interact with the video stream (unlike, e.g., MPEG-4 based games) and are not of a question-and-answer type, e.g. quizzes. They can be local or played with remote opponents.
This place covers:
TV home-shopping applications, including the request of quotes for services; excludes the request for additional data.
This place does not cover:
Payment schemes, payment architectures or payment protocols for electronic shopping systems |
This place covers:
The TV terminal is used as a WWW browser (e.g. WebTV) to display WWW pages. It must not be confused with systems where video or program-related data is retrieved from the Internet without active browsing by the user. This place should not be used for PCs with an internet connection either.
Attention is drawn to the following places, which may be of interest for search:
Retrieval from the web | |
Web-based protocols |
This place covers:
Users receive awards, coupons, prizes, points or air miles.
Attention is drawn to the following places, which may be of interest for search:
Payment schemes, architectures or protocols | |
E-commerce | |
Charging arrangements in data networks |
This place covers:
E-mail application as known from computer systems but implemented on a Set-Top-Box or TV receiver.
Attention is drawn to the following places, which may be of interest for search:
Message switching systems, e.g. electronic mail systems | |
Wireless messaging |
This place covers:
Systems allowing users of distinct clients to communicate with each other, for example to exchange videos or any kind of data, but not e-mails. Covers chat applications, bulletin boards, forums.
This place does not cover:
Arrangements for providing for computer conferences, e.g. chat rooms, to substation in data switching networks | |
Distributed application using peer-to-peer [P2P] networks |
This place covers:
Selection menus allowing the user to actively select a piece of content from a plurality, e.g. a function provided by an Electronic Program Guide.
This place does not cover:
Broadcast systems using EPGs |
This place covers:
Programs are displayed in a grid, sorted by channel and broadcast time.
This place covers:
Channels are selected by entering their name instead of their number.
This place covers:
A recommendation list of desirable items has been compiled by the agent module and is displayed to the user. It is mostly an ordered list, where items are sorted according to their score, which may also be displayed next to the item descriptor. Items can be programs or channels.
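The ordering and display of scores can be sketched as follows (item names and scores are made up):

```python
def render_list(recommendations):
    """Sort the agent's recommendations by descending score and format
    them for display, score next to the item descriptor."""
    ranked = sorted(recommendations, key=lambda r: r[1], reverse=True)
    return [f"{title} ({score:.1f})" for title, score in ranked]

recs = [("Documentary", 3.5), ("Late Show", 4.8), ("Quiz Night", 1.2)]
print(render_list(recs))
# → ['Late Show (4.8)', 'Documentary (3.5)', 'Quiz Night (1.2)']
```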
This place covers:
The application provides a search function, for example using keywords to retrieve an actor's name.
This place does not cover:
Retrieval of video data |
This place covers:
Configuration applications allow the user to define the settings of the client.
This place covers:
Image brightness, contrast, setting of the color channels.
This place covers:
Language selection for e.g. configuration or setup menus or subtitles.
This place covers:
Layout parameters such as colors, fonts, size of the windows.
This place covers:
Presentation of information and data services. Only applications pertaining to the display of such data should be classified here.
This place does not cover:
Systems specially adapted for using meteorological information in broadcast systems |
This place covers:
- Display of warnings or reminders. Pertains usually to textual or graphical information, which is displayed for a brief period of time.
- Download status bar.
This place does not cover:
Arrangements for providing short real-time information to substation in data switching networks |
This place covers:
Subtitles or closed-caption.
This place covers:
News, stock exchange, weather data are displayed as a scrolling banner on the screen.
This place covers:
Teletext service.
This place covers:
- Subject matter comprising processes and structures involving the exchange of data and control signals between servers, clients and intermediate components connected within a network or networks. These processes and structures generally involve communication between intermediate components, the network interface of the servers and that of the clients, resulting in data or control signals being exchanged therebetween in a particular manner and/or for a particular purpose, and processing operations performed by the network itself.
- The first layer is directed to the physical description of the network such as the nature of the network used for the downlink and uplink connection (e.g. satellite, cable, internet, GSM) and the components used for the transmission of the electromagnetic waves (e.g. taps, splitters, amplifiers).
- The second layer describes communication aspects such as addressing (e.g. multicasting), the type of protocol used (e.g. DSM-CC, ATM). It also includes the exchange of low-level control signals originating from server (e.g. encryption key), client or network as well as processing operations by the network itself (e.g. protocol conversion, dropping of packets).
- The third layer pertains to the exchange of high-level control signals originating from client and transmitted to the server (e.g. viewing history, VOD control parameters), or issued by the server and sent to the client (e.g. trigger recording, channel tuning or sending setup parameters).
- The subgroup is directed to the invention information including data transactions necessitating communication between server, client and network. Documents where the invention information is related to the transmitter or receiver per se are placed in the S or C models, respectively.
Examples of documents placed in H04N 21/60:
- (1) The subgroup is directed to the invention information including physical level components of the distribution model. The physical level components of the T-model preclude physical components of the server and client models. For example, documents where the invention information includes physical level components (such as taps, splitters, amplifiers, etc.) are placed in H04N 21/6106, whereas server physical components (such as modulators, multiplexers, etc.) are placed in H04N 21/21; similarly, client physical components (such as tuners, demultiplexers, etc.) are placed in H04N 21/41.
- (2) This subgroup is directed to describing the type of protocol used (e.g. ATM) for transport on a specific type of network. The actual adaptation process is described elsewhere.
Attention is drawn to the following places, which may be of interest for search:
Cryptographic protocols | |
Data switching networks | |
Network security protocols | |
Network arrangements or protocols for real-time communications | |
Media handling, encoding, streaming or conversion | |
Network protocols for data switching network services | |
Protocols for client-server architecture | |
Wireless communication networks |
This place covers:
Physical level and network topology.
This place does not cover:
Transmission |
This place covers:
Type of the downlink connection. The first three are trivial and should only be used if they play a significant part in the description of a document.
This place covers:
Terrestrial transmission. Preferably, DVB-T documents should be classified here.
This place covers:
- Typically video streaming via Internet. It must not be confused with browsers or WWW servers providing additional data.
- Computer networks in general (e.g. ATM).
This place does not cover:
Transmission by internet of broadcast information |
Attention is drawn to the following places, which may be of interest for search:
IP communication protocol |
This place covers:
Multimedia data is transported over a mobile phone network. Excluded here is wireless transmission in home networks; covered are wireless wide area networks and wireless connections to a public access point.
This place does not cover:
Wireless downlink channel access |
This place covers:
Video over a phone line, e.g. using the H.223 multiplexing and H.245 control standards. Excludes xDSL-type connections, which are described elsewhere.
This place covers:
Details of signal processing at lowest ISO level.
This place does not cover:
Signal processing in analog two-way television systems |
This place covers:
Type of the uplink connection.
This place covers:
Terrestrial transmission. Preferably, DVB-T documents should be classified here.
This place covers:
Typically for uplinks using a cable modem.
This place does not cover:
Broadcast-related systems characterised by the transmission system being the Internet |
This place covers:
Low-cost clients do not provide a return channel; the uplink can still be established if the user owns a mobile phone. Excluded here is wireless transmission in home networks; covered are wireless wide area networks and wireless connections to a public access point.
This place does not cover:
Arrangements for providing broadcast or conference services to substation in data switching networks in combination with wireless systems | |
Wireless uplink channel access |
This place covers:
Uplink using an analog phone modem, for example in 2-way TV systems.
This place does not cover:
Arrangements for data linking, networking or transporting, or for controlling an end to end session in a satellite broadcast system |
This place covers:
Low-level control signals exchanged between client and server.
This place does not cover:
Real-time session protocols | |
Distributed application using peer-to-peer [P2P] networks |
This place covers:
Transmission of structured video content (i.e. video which is provided as layers, different versions, objects, etc.) over different transmission paths, or with different error corrections, different scrambling keys, different transmission protocols, etc. It describes the presence of different transmission "conditions" for the layers, versions or objects. Application example: high error protection and a network with good QoS are used for important video layers, while less important layers are sent with no error correction on a more error-prone network.
This place covers:
Content can be accessed on the storage medium of client devices, e.g. parts of a movie can be retrieved from the hard disk of other users, instead of using the cache of a server. Chatting applications are classified elsewhere.
This place does not cover:
Broadcast-related systems characterised by transmission among terminal devices | |
Distributed application in data packet switching network using peer-to-peer [P2P] networks |
This place covers:
Server-side control signals. Described are here low-level control signals issued by the server for controlling the network or the client.
This place does not cover:
Management of faults, events, alarms in data networks |
This place does not cover:
Wireless communications network key management | |
Wireless communications network access security |
Attention is drawn to the following places, which may be of interest for search:
Arrangements for secret or secure communication |
This place covers:
Download of keys, for example transmission of ECM/EMM in a conditional access system.
This place does not cover:
Key distribution for secret or secure communication | |
Network support of key management |
This place covers:
Control of the video decoder by the server.
This place covers:
Control signals sent by the client device, e.g. for controlling the network or the server.
This place covers:
The client sends a control signal to the server (e.g. encoder) or to the network requesting a bitrate modification.
This place does not cover:
Flow control in packet networks |
This place covers:
The client asks the server or the network to retransmit some data packets that have been lost or corrupted.
This place does not cover:
ARQ protocols | |
Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP] |
This place covers:
Control signals issued by the client to the server.
This place does not cover:
One-way streaming services wherein the source is controlled by the destination |
This place covers:
Describes clients communicating a public key to the server.
This place does not cover:
Key management | |
Network support of key management |
This place covers:
Control of the video encoder, also requests for transcoding.
This place does not cover:
Network arrangements in data packet switching network, protocols or services for addressing or naming | |
Support for multicast or broadcast of one-way stream services in data packet switching network |
This place covers:
Describes the process of allocating addresses to the clients.
This place does not cover:
Address allocation in data networks |
This place covers:
Data is sent to a group of clients.
This place does not cover:
Data broadcast and multicast in packet switching networks |
This place covers:
Data is sent to only one client on a dedicated channel.
This place covers:
Details of protocols are classified elsewhere.
This place does not cover:
Network streaming protocols in data packet switching network, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP] |
This place covers:
Adaptation of MPEG packets for the transport on the ATM network.
This place covers:
DSM-CC has been designed for MPEG systems.
This place covers:
- Network control and processing.
- Low-level control signals issued by the network for controlling the server or the client as well as operations performed by the network on the content.
This place does not cover:
Real-time session protocols in data packet switching network |
This place covers:
The input signal is routed during the transport to a different network, for example the video stream is sent by the server on an IP network and received by the client via a wireless network.
This place covers:
Additional measures to protect the data from forbidden alterations during the transport.
This place does not cover:
Verifying the information received for network security in communication control or processing | |
Integrity in wireless network security |
This place does not cover:
Traffic related reporting in data switching networks |
This place covers:
Monitoring of error during network processing.
This place does not cover:
Recovering in data packet switching network from a failure of a protocol instance or entity |
This place covers:
Monitoring by the network of the congestion level, bandwidth, BER, status of the connection (dropped).
This place does not cover:
Data switched network analysis | |
Monitoring functioning in data switched networks | |
Flow control in packet networks |
This place covers:
Network-side control signals.
This place covers:
The network sends a control signal to the server (e.g. encoder or pump) requesting a bitrate adaptation to the bandwidth.
This place does not cover:
Flow control in packet networks |
This place covers:
The network asks the server to retransmit some data packets that have been lost or corrupted.
This place does not cover:
ARQ protocols | |
Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP] |
This place covers:
The data stream can be altered by the transport medium.
This place does not cover:
Data processing in packet switching systems | |
Flow control in packet networks | |
Intermediate storage or scheduling | |
Provisioning of proxy services in data packet switching networks |
Attention is drawn to the following places, which may be of interest for search:
Secondary or local servers, which could also alter the data |
This place covers:
The control of the complexity is performed in the network / within the transmission medium (e.g. routers dropping packets).
This place does not cover:
Intermediate media network packet handling | |
Proxy provisioning conversion or adaptation for reducing the amount or size of exchanged application data | |
Negotiation of resources in wireless networks |
This place covers:
High-level control signals.
This place covers:
Server-side control. Described here are all the functions provided in a server for high-level control of the clients.
This place covers:
A further category relates to the actions which the server forces the client to execute, e.g. channel tuning, retrieving from cache and inserting, recording, retrieving OS software from a carousel and upgrading, generating monitoring data, or activating a trigger.
This place does not cover:
Remote booting in general |
This place covers:
It includes the download of system parameters, e.g. for the decoder, the display of the graphical user interface, or the setup (including OS software) of the client.
This place covers:
Client-side control. Nature of the uplink signal sent to the server.
This place covers:
It can also transmit reference data, such as a URL for accessing a WWW page, a movie ID for ordering a movie, or a product ID for a home shopping application.
This place covers:
The client can transmit stored data, e.g. viewing habits, hardware capabilities or a credit card number.
This place does not cover:
Arrangements where receivers interact with the broadcast |
This place covers:
The client responds to an action triggered by the server, for example confirms that a download was successful.
This place covers:
The client sends parameters to control, for example, a VOD server (pause, fast-forward, etc.). Also includes viewpoint change when an event is shot with different cameras.
This place covers:
- Subject matter comprising video data and data related or unrelated thereto, generated by the content provider, wherein the defining feature is the presence of the data per se or processing operations to convert the data into a form suitable for the distribution process or to create an interactive application. This subgroup is directed to raw multimedia objects and processing operations thereof, wherein the operations involved are independent of the distribution process. The resulting data is then provided to the server for distribution purposes. Processing operations dependent on the distribution process are placed in H04N 21/20, H04N 21/60 or H04N 21/40, according to the entity (respectively server, network or client) performing the operation.
- The first layer of this subgroup pertains to the nature of the raw multimedia content and covers e.g. video, audio, data, commercials, graphics and software.
- The second layer describes processing functions such as protecting the content by adding e.g. a watermark, certificate, signature, identification or defining content usage, or adding metadata or structuring the content, e.g. by decomposing it into layers, objects and segments.
- The next layer is directed to the assembling of the content, e.g. authoring of an interactive application. Examples of documents placed in the M-model: (1) This subgroup is directed to the definition and generation of metadata. (2) This subgroup is directed to protection of rights and covers the identification of the source, content identification, rights specification (e.g. content can be displayed or copied within a certain time period or number of times and by a specific group of users) as well as adding certificates or calculating signatures. Scrambling of the content for transmission purposes is classified elsewhere. Systems that describe the blocking of specific video content transmitted over a network are classified elsewhere. (3) This subgroup is directed towards high-level tools or processes to generate a multimedia application from basic components (such as compiling an interactive application to be run on a target STB). It pertains e.g. to the design of the scene graph, the generation of a trailer or of timestamps, the packaging of the content into an XML file and the linking of multimedia objects to URLs.
This place does not cover:
Arrangements for generating broadcast information |
Attention is drawn to the following places, which may be of interest for search:
Compilation of EPG data containing metadata, also adding additional broadcast schedule data |
This place covers:
They are the basic monomedia components of multimedia content. Classifying these data types can be very useful to describe the kind of data processed in the system. This data will be distributed electronically later on (from a server to a client using a WAN, or from the client to its peripherals using a LAN).
This place covers:
Audio. The audio component is usually present and related to the video component. Therefore, this place must be restricted to non-trivial aspects, such as the presence of several tracks for different languages.
This place covers:
Music, songs, MP3 files. Distinct from the audio track of a movie.
This place covers:
The commercial will be itself a mono-media or multimedia object, but may be considered as an external element, which is added to the original content for commercial purposes.
This place does not cover:
Advertising per se |
This place covers:
Data provided as an extra service in the multimedia distribution system, e.g. stocks, sport results, news tickers or weather information.
Attention is drawn to the following places, which may be of interest for search:
Operation of end-user applications for supplemental services |
This place covers:
Additional data that are related to the multimedia content, e.g. the biography of the actors in a movie or detailed information about an article seen in a video program, but not program descriptors in the sense of metadata.
This place does not cover:
Arrangements specially adapted for emergency or urgency in broadcast systems | |
Arrangements for providing alarms, notifications, alerts to substation in data switching networks |
This place covers:
Graphical objects can be combined with video, for example in MPEG-4. They can be of 2D or 3D nature. Text can also be considered, as long as it is purely graphical and the content of the textual information does not matter.
This place covers:
Still images, e.g. a texture, a background or any other image to be used in a menu, are classified here.
This place covers:
- Video is the main component in the area of interactive television and will normally be present in all documents. This entry should thus only be used to describe further details.
- Motion vectors.
This place covers:
Executable code can be sent for example to distribute commercial packages or upgrades to clients.
This place does not cover:
Arrangements for executing specific programs | |
Broadcasting computer programmes in broadcast systems | |
Involving the movement of software or configuration parameters in data packet switching networks |
This place covers:
High-level user applications, e.g. new browser, game to be run on the client only.
This place covers:
Software module for the STB operating system.
This place covers:
Software to be transmitted by the client to a peripheral, e.g. PDA software. Also covers IR codes to reprogram a remote control.
This place covers:
STB tools, e.g. decoder software, realplayer, mediaplayer or IPMP tool.
This place covers:
Manipulating or adding information to the content to ensure its appropriate distribution.
This place covers:
Identification of the source (e.g. motion picture studio), content identification, rights specification as well as adding certificates or calculating signatures to guarantee the integrity of the content and the rights of its provider. Protection is added at the generation of the content, before it enters the distribution system.
This place does not cover:
Protecting software against unauthorised usage in a vending or licensing environment |
Attention is drawn to the following places, which may be of interest for search:
Involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network |
This place covers:
The content receives an identification number, e.g. UMID, describing for example a video clip number or the source (motion picture studio) it comes from.
This place covers:
- The content provider defines how this content has to be used, e.g. whether it can be displayed or copied, how often, and by which group of users. This information is processed by the client-side rights manager or by the server-side rights management.
- Covers also rental period of a movie.
This place covers:
Structured language for describing usage rules of the content, i.e. a rights expression language [REL].
This place covers:
Watermarks being embedded in the content for later verification purposes.
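To illustrate the embed-then-verify idea only, here is a toy least-significant-bit watermark on a list of 8-bit pixel values. Real watermarking schemes are far more robust (spread-spectrum, transform-domain, etc.); every name below is hypothetical:

```python
def embed_lsb(pixels, bits):
    """Toy watermark: replace each pixel's least-significant bit
    with one watermark bit. Assumes 8-bit pixel values."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Recover the first n embedded bits for later verification."""
    return [p & 1 for p in pixels[:n]]
```

Because only the LSB changes, the embedded mark is nearly invisible (each pixel value changes by at most 1) but is also trivially destroyed by recompression, which is why production schemes differ.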
This place does not cover:
Watermarks inserted in still images for transmission purposes | |
Inserting watermarks during video coding | |
Protecting executable software by watermarking | |
Image watermarking in general |
This place covers:
- Program descriptors, e.g. abstract or actors, as video-specific metadata defined in MPEG-7. As metadata is a widely used word in a large range of applications, attention should be paid not to classify here aspects like identifiers, watermarks or additional data.
- Covers also program categories, reviews by other viewers and scene descriptors for MPEG-4 objects.
This place does not cover:
Systems specially adapted for using meta-information in broadcast systems |
Attention is drawn to the following places, which may be of interest for search:
Compilation of the EPG data as such by adding broadcast schedule data to metadata | |
Supplemental data specifically related to the content |
This place covers:
Version of the content, e.g. version of a software module.
This place does not cover:
Arrangements for version control in computers |
This place covers:
Metadata is available as keywords for quicker matching.
This place covers:
Structuring of the content, for example by decomposing the content into layers, objects.
This place covers:
This place is used to indicate the presence of video structured as in the Advanced Video Coding [AVC] standard, also referred to in the literature as JVT, H.264, H.26L or MPEG-4 part 10 (a misleading name, as the video is not coded in object form, as is generally the case in MPEG-4).
This place covers:
A piece of content has a set of features, which can be locked or enabled, e.g. optional functionalities in an executable program. Covers keyframes in video signals.
This place covers:
Entry points in the video stream.
This place covers:
A video stream is divided into time slices, e.g. segments or scenes.
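A crude stand-in for such segmentation is shot-change detection on per-frame mean brightness; the sketch below (hypothetical names, arbitrary threshold) marks a segment boundary wherever the brightness jumps:

```python
def segment_boundaries(frame_means, threshold=50):
    """Mark a segment boundary at each frame whose mean brightness
    differs from the previous frame by more than the threshold;
    a simplistic stand-in for shot-change detection."""
    return [i for i in range(1, len(frame_means))
            if abs(frame_means[i] - frame_means[i - 1]) > threshold]
```

Real segmenters use colour histograms or motion statistics rather than a single mean, but the boundary list they produce plays the same role: it defines the time slices of the stream.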
This place covers:
Content assembly, performed typically by an operator on a work station in a production studio.
This place covers:
High-level tools or processes to generate a multimedia application from basic components. It compiles for example multimedia descriptors, e.g. MHEG, into an interactive application to be run on target STBs.
This place covers:
Applications having several sub-scenarios, allowing different story developments.
This place covers:
Multimedia application described using a standard description language such as MHEG or XML.
This place does not cover:
Information retrieval of semistructured data, the underlying structure being taken into account, e.g. mark-up language structure data |
This place covers:
Generation of scripts or executable, e.g. applets, to make an application interactive.
This place covers:
Describes the generation of timestamps for synchronising different pieces of content such as video, audio or different objects.
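For example, MPEG systems express presentation timestamps in units of a 90 kHz clock; the sketch below (function names are illustrative) generates matching timestamps for video frames and audio samples so they can be synchronised:

```python
def video_pts(frame_index, fps, clock=90_000):
    """Presentation timestamp of a video frame in 90 kHz ticks,
    the clock used by MPEG systems timestamps."""
    return frame_index * clock // fps

def audio_pts(sample_index, sample_rate, clock=90_000):
    """Presentation timestamp of an audio sample on the same clock."""
    return sample_index * clock // sample_rate
```

One second of 25 fps video (frame 25) and one second of 48 kHz audio (sample 48000) map to the same tick count, which is what keeps the two components in sync at playback.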
This place covers:
Generation of a trailer, i.e. selected scenes from the original video, or any edited version from an original, e.g. previews.
This place does not cover:
Retrieval in video databases by using presentations in form of a video summary |
This place covers:
References between one of the components and anything else, e.g. between two TV programs, or between a program and additional information on the internet (URL) or shopping information. Also covers ATVEF triggers in general.
This place covers:
Objects or regions of the visual content are associated with further resources, e.g. hypervideo. Excludes URLs. Covers details of marking regions of an image.
This place covers:
Multimedia components are linked in the editing process to internet resources on a WWW server. This place is used to describe the automatic access to a WWW server via an embedded URL.
Example(s) of documents found in this subgroup: EP1850594
This place does not cover:
Processing chained hypermedia data for information retrieval | |
Information retrieval from the Internet by using URLs | |
URL in broadcast information | |
Web-based protocols |
This place covers:
Processes and apparatus related to the concept of electronic image capture using an electronic image sensor and the related control and processing of the generated electronic image signals.
Image pickup devices using electronic image sensors such as digital cameras, video cameras, TV cameras, CCTV cameras, surveillance cameras, camcorders, digital cameras embedded in mobile phones, aspects peculiar to the presence of electronic image sensors in electronic still cameras, digital still cameras, etc.
Electronic image capture by methods or arrangements involving at least the following step: the scanning of a picture, i.e. resolving the whole picture-containing area or scene into individual picture-elements and the derivation of picture-representative electrical signals related thereto, simultaneously or in sequence, e.g. by reading an electronic solid-state image sensor [SSIS] pickup device (e.g. CCD or CMOS image sensor) as an electronic image sensor converting optical image information into said electrical signals.
In colloquial speech, said step is frequently formulated as, e.g., "capturing a video sequence" or "digital photographing".
Concerning cameras:
- video cameras, TV cameras (e.g. in studios), CCTV cameras, surveillance cameras, camcorders; constructional and mechanical details related to such cameras even when not peculiar to the presence of the electronic image sensor e.g. housings;
- arrangements/methods for image capture using an electronic image sensor, i.e. (i) sensor read-out; (ii) processing or use of electrical image signals from the electronic image sensor for the generation of camera control signals;
- for controlling the electronic image sensor or its read-out for, e.g. exposure, scene selection for auto-focusing or electronic image enhancement, or processing of image signals captured by the electronic image sensor, e.g. white balance, electronic motion blur correction, noise suppression;
- for controlling other camera functions, e.g. exposure, anti-shake compensation by influencing optical parts of the camera, focusing;
- in-camera image processing, e.g. correction of lens distortion, defective pixel correction, noise suppression, removal of motion blur, improving the dynamic range of the final image;
- electronic viewfinders, control of image pickup devices based on information displayed by the electronic viewfinder;
- electrical and mechanical aspects of camera modules using electronic image sensors and related constructional details as in webcams or mobile phones;
- remote control of cameras peculiar to the electronic image sensor, e.g. affecting their operation, or being based on a generated image signal;
- adaptations peculiar to the presence or use of an electronic image sensor, the transmission, recording or other use of electrical image data and related circuitry, e.g. mounting of electronic image sensor, integrated cleaning system for the electronic image sensor, dust mapping, cooling of the electronic image sensor, controlling the operation of the electronic image sensor by external input signals;
- cameras wherein the inventive contribution lies in the interaction of features covered above with those covered by G03B, e.g. switch-over between electronic motion-blur correction of the electronic viewfinder during focusing and optical motion-blur correction of the lens during exposure, electronic motion-blur correction of the electronic image signal based on output signals of an additional sensor, or interaction between a mechanical shutter and electronic control of the charge accumulation period of the electronic image sensor;
- applications concerning studios and image capturing devices that cannot be classified in lower groups such as camera operation in general, e.g. for studio or TV events, processing for simulating film artefacts, virtual studio, virtual depth image, video assist systems, other studio equipment, e.g. autocues and teleprompters.
Groups in G03B are to be considered when the following aspects are concerned:
- apparatus/methods for taking photographs using light sensitive film for image capture, apparatus/methods for printing, projecting or viewing images using film stock, photographic film or slides by optical means, e.g. mounting of optical elements, flashes, and their related controls, e.g. exposure, focus, (opto-)mechanical motion blur (anti-shake), cooling, beam shaping;
- aspects of apparatus/methods for taking photographs using electronic image sensors for image capture, insofar as they correspond to those of said apparatus/methods for taking photographs using light-sensitive film, i.e. not peculiar to the presence or use of the electronic image sensor, e.g. mounting of optical elements or flashes, and their related controls insofar as they are not peculiar to the presence or use of the electronic image sensor, e.g. exposure, focus, (opto-)mechanical motion blur correction (anti-shake);
- optical viewfinders;
- remote control of cameras not peculiar to the electronic image sensor, e.g. not affecting their operation, or being based on a generated image signal;
- optical aspects of camera modules using electronic image sensors and related constructional details (e.g. lens actuators).
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Videophones | |
Closed circuit television systems | |
Cameras adapted for vehicles | |
Image or video recognition or understanding | |
Surveillance systems with alarms | |
Mobile phones |
Attention is drawn to the following places, which may be of interest for search:
Intermediate information storage using still video cameras | |
Video recording | |
Testing of cameras | |
Cameras used as input-only client peripherals for selective content distribution | |
Circuitry of solid-state image sensors [SSIS] or control thereof | |
Radiation diagnosis, diagnostic aspect of medical imaging devices | |
Pyrometry, measuring temperature | |
Measuring X-rays, gamma radiation | |
Optical systems | |
Apparatus or arrangements for taking photographs | |
Image processing in general, i.e. not being exclusively adapted to be used in an image pickup device containing an electronic image sensor, or in studio devices or equipment | |
Editing of recorded image information | |
Associated working of recording or reproducing apparatus with TV camera or receiver in which the television signal is not significantly involved | |
Electric discharge tubes | |
Semiconductor technology of solid-state imaging devices, e.g. CMOS image sensors | |
CCD image sensors | |
Broadcasting | |
Constructional features of telephone sets |
In this place, the following terms or expressions are used with the meaning indicated:
additional sensor | Sensor, other than the electronic image sensor, used for controlling a camera |
camera | Device capturing image information represented by light patterns reflected from or emitted by objects, and exposing a light sensitive film or an electronic image sensor during a timed exposure, usually through an optical lens, and producing an image on a light sensitive film or an electrical image information signal respectively |
electronic image sensor | Optoelectronic transducer, converting optical image information into an electrical signal susceptible of being processed, stored, transmitted or displayed |
electronic spatial light modulator | Optoelectronic transducer converting electric signals representing image information into optical image information |
projector | Device displaying image information by projection of light patterns, usually through an optical lens, wherein the light patterns are generated by illuminating an image, e.g. film or slide, or by converting an electric image signal into an optical signal using an electronic spatial light modulator |
record | Registration (e.g. of sound or images) in permanent form by optical or electrical means for later reproduction |
In patent documents, the following abbreviations are often used:
ADAS | Advanced driver assistance system |
ADC | Analog to digital converter |
AE | Automatic exposure control |
AF | Autofocus |
AFE | Analog front end |
AGC | Automatic gain control |
AI | Artificial intelligence |
ANN | Artificial neural network |
APD | Avalanche photodiode |
APS | Active pixel sensor |
CCD | Charge-coupled device |
CDS | Correlated double sampling |
CFA | Colour filter array |
CID | Charge injection device
CIS | CMOS image sensor |
CMOS | Complementary metal–oxide–semiconductor |
CNN | Convolutional neural network |
DSP | Digital signal processor
EMCCD | Electron multiplying charge-coupled device |
ENG | Electronic news gathering |
ESLM | Electronic spatial light modulator |
EVF | Electronic viewfinder |
EVS | Event-based vision sensor |
FOV | Field of view |
FPN | Fixed pattern noise |
FLIR | Forward looking infrared |
FPA | Focal plane array |
FPD | Flat panel detector |
FPGA | Field programmable gate array |
GPU | Graphics processing unit |
GUI | Graphical user interface |
HDR | High dynamic range |
LFM | Light flicker mitigation, LED flicker mitigation |
LWIR | Long wavelength infrared |
MWIR | Mid wavelength infrared |
MTF | Modulation transfer function |
NIR | Near infrared |
NN | Neural network |
NUC | Non-uniformity correction |
OVF | Optical viewfinder |
PD | Phase detection (pixel), phase difference (pixel) |
PDAF | Phase-detection autofocus |
PMD | Photonic mixer device |
PTZ | Pan tilt zoom |
QIS | Quanta image sensor |
QWIP | Quantum well infrared photodetector |
ROIC | Readout integrated circuit |
SBNUC | Scene-based non-uniformity correction (NUC) |
SPAD | Single-photon avalanche diode |
SPD | Single-photon detection |
SSIS | Solid state image sensor |
SWIR | Short wavelength infrared |
TDI | Time delay and integration |
TEC | Thermoelectric cooler |
TFA | Thin film on ASIC |
TOF | Time of flight |
WDR | Wide dynamic range |
In patent documents, the following words/expressions are often used as synonyms:
"digital camera", "camcorder", "video camera", "still video camera", "camera" and "digital still camera"
This place covers:
Camera architectures:
- for generation of colour signals by using switchable colour filters or light sources, or by using different image sensors;
- for generation of RGB; RGBIR; RGBW; RW; R+(N)IR, G+IR, B+IR, W+R signals;
- comprising visible and IR sensors;
- comprising partial IR filters;
- comprising visible light sensors without IR filter, i.e. a pixel captures both visible and IR light (Y+IR);
- comprising switchable IR filters, i.e. the pixels are controlled to capture either only the visible light (Y) or both visible and IR light (Y+IR);
- comprising multiple image sensors, at least one of which is sensitive to IR light.
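For architectures whose pixels capture both visible and IR light (Y+IR), the visible-only response is typically estimated by subtracting the separately measured IR channel. A minimal sketch, assuming a hypothetical scalar calibration factor k:

```python
def visible_from_y_ir(y_plus_ir, ir, k=1.0):
    """Estimate the visible-only response of (Y+IR) pixels by
    subtracting the scaled IR channel measured by IR pixels.
    k is a hypothetical calibration constant; results are
    clamped at zero since a response cannot be negative."""
    return [max(0, round(v - k * i)) for v, i in zip(y_plus_ir, ir)]
```

In practice k varies per channel and per illuminant, so real pipelines calibrate a matrix rather than a single scalar.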
Attention is drawn to the following places, which may be of interest for search:
Arrangement of colour filter arrays [CFA] or filter mosaics characterised by the spectral characteristics of the filter elements including elements passing infrared wavelengths | |
Investigating the spectrum | |
Imaging spectrometer |
Image sensors comprising pixels sensitive to visible light and IR light and image sensors comprising pixels sensitive to both visible and IR light (Y+IR) and pixels sensitive to IR light (IR) are classified in group H04N 25/131.
Attention is drawn to the following places, which may be of interest for search:
Beam splitting or combining systems per se |
Attention is drawn to the following places, which may be of interest for search:
Scanning by optical-mechanical means only, applicable to television systems in general |
Attention is drawn to the following places, which may be of interest for search:
Transforming infrared radiation |
This place covers:
Cameras or camera modules for generating image signals from X-rays.
Attention is drawn to the following places, which may be of interest for search:
Transforming X-rays into electric information | |
Circuitry of SSIS for transforming X-rays into image signals | |
Measuring X-radiation, gamma radiation, corpuscular radiation or cosmic radiation |
This place covers:
Constructional details of cameras or camera modules (housing, mounting of optical parts, mounting of image sensing part, other camera parts).
Constructional details not peculiar to the presence or use of the electronic image sensor in electronic still picture cameras or digital still picture cameras are classified in subclass G03B.
This place covers:
Internal or external camera control for:
- autofocusing operations;
- computer-aided image capturing;
- application programs for camera control;
- detecting malfunction;
- face recognition;
- generating a panoramic field of view;
- power saving or management;
- compensating for shutter delay;
- changing the image capture speed;
- performing zoom operations;
- remote control;
- camera shake detection or correction.
Camera control using a GUI (graphical user interface).
Camera control using remote control.
Camera control via network.
Camera control in different operation modes like viewfinder or playback mode, autofocus mode, video mode or still capture mode.
Attention is drawn to the following places, which may be of interest for search:
Circuitry for compensating brightness variation in the scene | |
Mountings, adjusting means or light-tight connections, for optical elements |
Attention is drawn to the following places, which may be of interest for search:
Image or video recognition or understanding |
Attention is drawn to the following places, which may be of interest for search:
Recognition of human faces |
This place covers:
User interfaces to control camera parameters which can be separated from or integrated in the camera.
Attention is drawn to the following places, which may be of interest for search:
Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, e.g. touchscreens |
This place covers:
Camera viewfinders displaying image signals provided by an electronic image sensor and optionally displaying additional information related to control or operation of the camera.
Optical viewfinders are classified in G03B 13/02.
This place covers:
A graphical user interface, e.g. a touchscreen, which is integrated on an electronic viewfinder.
Attention is drawn to the following places, which may be of interest for search:
Control of parameters via user interfaces |
Details of circuitry for controlling the generation or management of the power supply for a solid-state image sensor [SSIS] are classified in H04N 25/709.
Details of energy supply or management for control of exposure for digital still cameras not peculiar to the electronic image sensor are classified in group G03B 7/26.
Mounting of focusing coils is classified in H04N 23/54.
Focusing aids not based on image signals provided by an electronic image sensor are classified in group G03B 13/18.
Constructional details of means for focusing for cameras are classified in group G03B 13/32.
Attention is drawn to the following places, which may be of interest for search:
Generation of focusing signals, in general |
Rangefinders coupled with focusing arrangements are classified in group G03B 13/20.
Attention is drawn to the following places, which may be of interest for search:
Bracketing for compensating for variations in the brightness |
Attention is drawn to the following places, which may be of interest for search:
Imaging systems using optical elements for stabilisation of the lateral and angular position of the image |
Attention is drawn to the following places, which may be of interest for search:
Fluid-filled or evacuated lenses of variable focal length |
Attention is drawn to the following places, which may be of interest for search:
Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors | |
TV type tracking system | |
Analysis of motion by image processing in general | |
Determining position or orientation of objects by image processing in general | |
Tracking of movement using TV cameras of a target in burglar, theft or intruder alarms |
This place covers:
Circuitry for compensating for variations in the brightness of the object, e.g. dynamic range increase, bracketing, use of brightness histograms, or brightness compensation by controlling shutter, filter, gain or illumination means.
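The histogram- or mean-based control loop can be pictured as follows; this is a simplified sketch (target value and names are hypothetical), not the circuitry this place actually defines:

```python
def next_exposure(current_exposure, frame_pixels, target_mean=118):
    """One auto-exposure step: scale the exposure so the mean pixel
    value of the next frame moves toward a mid-grey target.
    Assumes 8-bit pixel values and a roughly linear sensor."""
    mean = sum(frame_pixels) / len(frame_pixels)
    if mean == 0:
        return current_exposure * 2  # fully dark frame: just double
    return current_exposure * target_mean / mean
```

Real implementations damp this correction over several frames and weight the histogram (e.g. centre-weighted or face-weighted) instead of using a plain mean.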
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Control of solid-state image sensor [SSIS] exposure |
Attention is drawn to the following places, which may be of interest for search:
Exposure control for film cameras or cameras using an additional sensor |
This place covers:
Bracketing used for increasing the dynamic range.
Bracketing for image capture at varying focusing conditions is classified in group H04N 23/676.
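The dynamic range increase from bracketing can be sketched as merging two exposures: wherever the long exposure is clipped, the short exposure (rescaled by the exposure ratio) still holds usable information. All names and the 8-bit clip level are illustrative:

```python
def merge_brackets(long_exp, short_exp, gain=4, clip=255):
    """Merge a long and a short exposure taken with an exposure
    ratio of `gain`: keep the long exposure where it is valid and
    substitute the rescaled short exposure where it is clipped,
    extending the representable dynamic range beyond 8 bits."""
    return [s * gain if l >= clip else l
            for l, s in zip(long_exp, short_exp)]
```

Note the merged values can exceed 255, which is precisely the extended dynamic range; a later tone-mapping step compresses them back for display.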
Attention is drawn to the following places, which may be of interest for search:
Circuitry for suppressing or minimising disturbance in the image signal generation |
This place covers:
Circuitry for suppressing impulsive noise, for gamma control and for processing colour signals.
This group does not cover image signal processing as such, or pipelines thereof, which are covered by G06T 1/00.
Attention is drawn to the following places, which may be of interest for search:
General purpose image data processing |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Noise reduction or noise suppression involving solid-state image sensors |
Attention is drawn to the following places, which may be of interest for search:
Circuitry for suppressing or minimising impulsive noise of video signals | |
Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination |
Attention is drawn to the following places, which may be of interest for search:
Circuitry for gamma control of video signals |
Attention is drawn to the following places, which may be of interest for search:
Circuits for modifying colour signals by gamma correction |
Attention is drawn to the following places, which may be of interest for search:
Circuits for processing colour signals |
This place covers:
Demosaicing, i.e. interpolating colour pixel values, only if jointly performed in combination with pixel scanning, image readout or other video processing operations within the image sensor.
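The interpolation itself can be illustrated on a single RGRG... Bayer row, where green is measured only at every other site and must be estimated in between. A bilinear sketch (simplified to one dimension; the function name is hypothetical):

```python
def interpolate_green_row(row):
    """Bilinear demosaicing sketch for one RGRG... Bayer row:
    green is measured at odd positions and interpolated at even
    (red) positions as the mean of the horizontal green neighbours."""
    green = []
    for i, v in enumerate(row):
        if i % 2 == 1:           # green site: keep the measured value
            green.append(v)
        else:                    # red site: average neighbouring greens
            nbrs = [row[j] for j in (i - 1, i + 1) if 0 <= j < len(row)]
            green.append(sum(nbrs) / len(nbrs))
    return green
```

Full 2-D demosaicers average four neighbours and add edge-aware weighting, but the principle, reconstructing the missing colour plane from surrounding samples of that colour, is the same.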
Attention is drawn to the following places, which may be of interest for search:
Computational demosaicing |
Attention is drawn to the following places, which may be of interest for search:
Circuits for matrixing of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Circuits for controlling the amplitude of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Circuits for reinsertion of DC and slowly varying components of colour signals |
Attention is drawn to the following places, which may be of interest for search:
Colour balance circuits |
This place covers:
Systems using several cameras.
Attention is drawn to the following places, which may be of interest for search:
Constructional details of cameras |
This place covers:
Computational photography requiring combination of optical light modulation and computational reconstruction for acquiring dimensions of the plenoptic function.
Light field imaging systems for light field acquisition:
- using an array of cameras
- using single sensor with temporal, spatial or frequency-domain multiplexing
- temporal multiplexing with a programmable aperture
- spatial multiplexing using an array of lenses or prisms
- frequency multiplexing by placing heterodyne mask
Camera systems comprising different types of image sensors, sensors of different resolutions, or sensors with different fields of view or focus.
Lensless imaging using:
- coded aperture masks
- zone plates
- angle-sensitive pixels using diffraction gratings
Coded-aperture imaging;
Extended Depth of Field Photography:
- using focal stacks
- focal sweep (moving the camera during the exposure)
- coded apertures
High speed imaging using:
- multiple devices
- high speed illumination
- stroboscopic illumination
- synthetic shutter speed imaging
This group covers image pickup devices using electronic image sensors for computational photography.
Devices for acquisition of the colour spectrum, which is one dimension of the plenoptic function, are classified in group G01J 3/00.
Pure image processing techniques used regardless of optical light modulation caused by the image pickup device are classified in groups G06T 3/00, G06T 5/00 and H04N 23/80.
High dynamic range imaging and exposure bracketing are classified in groups H04N 23/741 and H04N 23/743, respectively.
High resolution imaging by shifting the sensor relative to the scene is classified in group H04N 25/48.
This place covers:
Circuitry and driving details of solid-state image sensors, in particular circuitry and driving details directed to the following purposes and functions:
- Reading out image data from the image sensor;
- Performing image processing within the image sensor;
- Control of exposure time by an electronic shutter;
- Noise removal;
- Improvement of resolution;
- Extension of dynamic range.
Solid-state image sensors encompass charge-coupled devices [CCDs], charge injection devices [CIDs], addressable photodiode arrays, complementary metal oxide semiconductor [CMOS] image sensors, etc.
Solid-state image sensors normally capture and output image data as raw images. However, there are special image sensors that capture, process and output the image data. Details of such sensors are classified in the main group H04N 25/00, for example:
- image sensors having on-chip compression means for data rate reduction purposes, e.g. DCT, wavelet transformation in the sensor;
- image sensors having on-chip compression means for data rate reduction purposes by outputting differential data, such as the difference between two exposures or events detecting a predetermined change of the image signal or differences between neighbouring pixels;
- compressive sensing sensors
- image sensors performing global operations such as generation of histograms, sorting, region segmentation/labelling, convolution functions, character recognition, or detecting maximum/minimum level;
- image sensors with edge detection in the sensor, for detecting differences between pixel signals in the spatial domain, for spatial filtering;
- image sensors with motion or event detection in the sensor, i.e. detecting change between pixel signals over time;
- image sensors comprising a dedicated temperature sensor or being controlled by the sensor temperature;
- SSIS with power optimisation;
- SSIS with processing time optimisation, for example by using parallel processing circuitry.
While main group H04N 25/00 is, inter alia, used for classifying electronic circuits of solid-state image sensors and their driving, control and readout, the groups in main group H01L 27/00 cover details related to the implementation of the electronic circuits on a semiconductor chip.
Attention is drawn to the following places, which may be of interest for search:
Details of scanning heads | |
Scanning arrangements | |
Receivers for pulse based Lidars | |
Receivers for non-pulse based Lidars | |
Computer systems using neural network models | |
General purpose image data processing | |
Arrangements for image or video recognition | |
Imager structures consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate | |
Charge-coupled imagers | |
Compressive sampling or sensing | |
Organic image sensors |
Where the solid-state image sensor function is classified in groups H04N 25/00 - H04N 25/683, classification should also be made in the group corresponding to the sensor technology, i.e. H04N 25/71, H04N 25/76 or H04N 25/79. For example, dark current correction for CCDs should be classified in both H04N 25/63 and H04N 25/71.
In this place, the following terms or expressions are used with the meaning indicated:
image sensor | Sensor that detects and conveys the information that constitutes an image. An image sensor may do so by producing a signal that represents location-dependent attenuation of light (as the light passes through or reflects off a medium). The signal is an electric signal such as an electric voltage or current. The light an image sensor may detect is not limited to visible light, but can be electromagnetic radiation in other wavelengths (e.g., infrared, ultraviolet, X-rays, gamma rays). |
In patent documents, the following abbreviations are often used:
ADC | Analog to digital converter |
AE | Automatic exposure control |
AF | Autofocus |
AFE | Analog front end |
AGC | Automatic gain control |
AI | Artificial intelligence |
ANN | Artificial neural network |
APD | Avalanche photodiode |
APS | Active pixel sensor |
BSI | Back-side illumination |
CCD | Charge-coupled device |
CDS | Correlated double sampling |
CFA | Colour filter array |
CID | Charge injection device |
CIS | CMOS image sensor |
CMOS | Complementary metal–oxide–semiconductor |
CNN | Convolutional neural network |
CTIA | Capacitive transimpedance amplifier |
DPS | Digital pixel sensor |
DSP | Digital signal processor |
EMCCD | Electron multiplying charge-coupled device |
EVS | Event-based vision sensor |
FD | Floating diffusion |
FOV | Field of view |
FPN | Fixed pattern noise |
FLIR | Forward looking infrared |
FPA | Focal plane array |
FPD | Flat panel detector |
FPGA | Field programmable gate array |
GPU | Graphics processing unit |
HDR | High dynamic range |
LFM | Light flicker mitigation, LED flicker mitigation |
LWIR | Long wavelength infrared |
MWIR | Mid wavelength infrared |
MTF | Modulation transfer function |
NIR | Near infrared |
NN | Neural network |
NUC | Non-uniformity correction |
OVF | Optical viewfinder |
PD | Phase detection (pixel), phase difference (pixel) |
PDAF | Phase-detection autofocus |
PMD | Photonic mixer device |
PTZ | Pan tilt zoom |
QIS | Quanta image sensor |
QWIP | Quantum well infrared photodetector |
ROIC | Readout integrated circuit |
SBNUC | Scene-based non-uniformity correction (NUC) |
SPAD | Single-photon avalanche diode |
SPD | Single-photon detection |
SSIS | Solid state image sensor |
SWIR | Short wavelength infrared |
TDI | Time delay and integration |
TEC | Thermoelectric cooler |
TFA | Thin film on ASIC |
TIA | Transimpedance amplifier |
TOF | Time of flight |
WDR | Wide dynamic range |
This place covers:
- Architectures of colour filter arrays, e.g. arrangement of the colours in the colour filter array [CFA], number of the colours in the CFA, CFA comprising white or (N)IR pixels;
- Filter arrays characterised by the selection of primary colours, complementary colours, other colours, e.g. emerald, panchromatic filters, elements with different spectral sensitivity for the same colour, e.g. G1 and G2;
- Elements passing: IR, RGB+IR, W+IR;
- Random arrangement of the colour filter elements;
- CFA characterised by the size of the periodically replicated pattern;
- CFA using repeating patterns with more than one element of the same colour adjacent to each other, e.g. Quad Bayer;
- Sensors for performing colour separation based on photon absorption depth;
- Circuitry of the sensor for performing colour imaging operations.
Attention is drawn to the following places, which may be of interest for search:
Transforming only infrared radiation into image signals |
This place covers:
Solid state image sensors and control thereof for near and far infrared [IR] cameras.
Attention is drawn to the following places, which may be of interest for search:
Transforming infrared radiation | |
Non-uniformity correction | |
Radiation pyrometry | |
Integrated devices comprising at least one thermoelectric or thermomagnetic element |
In many cases it is necessary to add a code for an identified function or circuitry design covered in group H04N 25/00.
Attention is drawn to the following places, which may be of interest for search:
Details of photometry | |
Electric circuits of radiation detectors for photometry | |
Thermography | |
Formed in or on a common substrate controlled by radiation |
This place covers:
Electronic circuitry of X-ray imaging detectors that directly or indirectly detect incident X-ray photons, including:
- current integrating detectors (CID) or energy integrating detectors (EID);
- photon counting detectors (PCD). Some X-ray PCDs rely on continuous time current monitoring and pulse counting implementation of photon counting. Each pixel typically contains a pulse shaping circuit along with a thresholding system connected to a counter;
- details of generating control signals based on data from the image sensor, like irradiation start/stop detection based on dummy readouts or on signals from specific pixels;
- operation and control of different sensor modes, such as entering and controlling sleep mode.
Documents focusing on the measurement of X-radiation, on a measurement principle or its technological implementation, or on a dedicated measurement-related circuit design should be classified in G01T 1/00.
When the imaging X-ray sensor is described with details related to systems for measuring of X-ray radiation with semiconductor detectors, classification should also be made in G01T 1/00. This is especially the case if details of circuitry for detecting, measuring or adapting the detected signal in order to obtain a correct signal are described, e.g. corrections for pile-up, for trapped charges, for dead-time, to determine energy or spatial corrections.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Apparatus for radiation diagnosis | |
Investigation of materials using radiation | |
Nuclear Magnetic Resonance imaging systems | |
X-ray apparatus or circuits therefor |
Attention is drawn to the following places, which may be of interest for search:
Transforming X-rays into electric information | |
Cameras or camera modules for generating image signals from X-rays | |
Measuring X-radiation, gamma radiation, corpuscular radiation or cosmic radiation | |
Circuit arrangements not adapted to a particular type of detector for measuring radiation intensity | |
Measuring X-radiation, gamma radiation, corpuscular radiation or cosmic radiation with semiconductor detectors | |
Apparatus for taking X-ray photographs | |
X-ray photographic processes | |
Image data processing | |
Medical informatics | |
Collimators | |
X-ray tubes | |
Integrated devices comprising at least one organic element specially adapted for switching | |
Integrated devices comprising organic radiation-sensitive element specially adapted for detecting X-ray radiation | |
Electric solid-state thin-film or thick-film devices |
In many cases, it is necessary to add a symbol for an identified function or circuitry design covered in group H04N 25/00.
This place covers:
Details of extracting pixel data from an image sensor by controlling scanning circuits, for example:
scanning individual pixels or pixel regions;
using specific scanning sequences, like scanning in blocks, pyramidal, in different directions;
scanning and reading out data from a pixel while the pixel accumulates new charges, or scanning or reading out data from a block while the block processes the next data; normally, additional storage elements like double buffers or parallel processing circuits are used, e.g. for reading a pixel while the next exposure is running, reading out digital ADC data while the ADC is running the next conversion cycle, etc.;
scanning for high-speed operations, where a number of frames are successively captured and stored in the sensor and then read out from the memories;
reading out more than one sensor
- for increasing the field of view by combining the outputs of two or more sensors, e.g. panoramic imaging;
- having different imaging characteristics, e.g. exposure time, aperture size, gain, resolution or colour;
for performing data compression
- by compressive sensing or sparse sampling;
- by DCT or wavelet transforms;
- by data differencing.
by controlling the frame rate
- of different regions of the image array;
- the regions being variable.
for extracting focusing pixel data;
by non-destructive readout to read signals two or more times during the integration time of the pixel. The figure shows non-destructive readouts at time instants from T1 to T7, while the pixel signal (72, 74 or 76) increases as a result of the exposure during the integration time T;
for performing global operations, e.g. histogramming, sorting, region segmentation/labelling, convolution functions
- for detecting maximum or minimum level
adapted to implement artificial neural networks [ANNs];
for push broom scanning or together with relative movement.
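The non-destructive readout mentioned above can be sketched in a few lines. This is an illustrative Python model (class and method names are invented for this sketch): the pixel integrates photocharge during the exposure and can be sampled several times without resetting, so successive samples increase with the accumulated charge, unlike a conventional destructive read that clears the pixel.

```python
# Sketch of non-destructive readout: sampling the pixel signal several
# times during one integration period without resetting the pixel.

class Pixel:
    def __init__(self):
        self.charge = 0.0

    def integrate(self, photocurrent, dt):
        self.charge += photocurrent * dt   # accumulate during exposure

    def read_nondestructive(self):
        return self.charge                 # sample without clearing the pixel

    def read_and_reset(self):
        signal, self.charge = self.charge, 0.0
        return signal                      # conventional destructive read

pixel = Pixel()
samples = []
for _ in range(7):                         # readouts at T1 .. T7
    pixel.integrate(photocurrent=5.0, dt=1.0)
    samples.append(pixel.read_nondestructive())

print(samples)   # monotonically increasing signal over the integration time
```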
Attention is drawn to the following places, which may be of interest for search:
Circuitry for scanning or addressing the pixel array in CCD sensors | |
Circuitry for scanning or addressing the addressed pixel array |
In addition to classification in H04N 25/40, classification should also be made in H04N 25/74 or H04N 25/779 when specific details of the scanning circuits are provided.
The readout operations in most cases influence the exposure time of the pixels. Accordingly, classification should also be made in H04N 25/53 when details related to the control of the exposure/integration time are disclosed.
This place covers:
Arrangements and scanning details of image sensing units comprising a plurality of image sensor arrays or panels, for example:
- compound image sensing units arranged to direct light from a different section of the field of view onto different image sensors or different image sensor regions;
- large X-ray image sensing unit realised by tessellating several sensor panels;
- image sensing units that form images of the same or at least partially overlapping photographic region upon each of a plurality of pixel regions;
- an imaging unit forms images of the same or at least partially overlapping photographic region upon each of a plurality of pixel regions wherein the pixels are offset at a fraction of the pixel pitch;
- details of correction and alignment between the image sensors and the respective optical systems by selective scanning of the image sensors;
- the image sensors may not be on the same plane or on the same chip, and the optical system may comprise mirrors or prisms;
- the image sensors or the different image sensor regions have different imaging characteristics like exposure time, aperture size, gain, resolution;
- the image sensors or the different image sensor regions having different focal planes;
- the image sensors or the different image sensor regions having fields of view of different sizes;
- the image sensors or the different image sensor regions have different resolution;
- the image sensors or the different image sensor regions have different colours and normally overlapping FOV;
- the image sensors or the different image sensor regions have different colours, one of which is for IR or for depth measurement;
- used in push broom scanning images.
Attention is drawn to the following places, which may be of interest for search:
Cameras or camera modules comprising electronic image sensors or control thereof for generating image signals from different wavelengths with multiple sensors | |
Cameras using two or more image sensors | |
Constructional details of television cameras | |
Linear arrays using abutted sensors | |
Modular detectors for measuring radiation intensity |
This place covers:
Image sensors comprising or being switchable between different readout modes, for example between interlaced or non-interlaced mode, or between high- and low-resolution modes, etc.
One of the modes can be related to readout of specific pixels only, for example there may be different modes for reading out focussing pixels and for reading out exposure pixels. The switching between different modes can be initiated, for example:
- upon change of the camera mode - auto exposure, auto focus, AWB, preview mode, video/still picture mode or
- upon scene parameters like motion or object detection.
This place covers:
Partial readout of an SSIS during one frame or sub-frame, including where the image sensor performs scanning of different image sensor regions at different resolutions.
This place covers:
- Scanning only selected rows or columns of the array, for example interlaced scanning or reading only every N-th line of pixels in a frame.
Classification should also be made in H04N 25/46 if the interlaced scanning is combined with binning of the neighbouring pixels. However, if all pixel signals are read out (i.e. provided to the column output lines or to the charge transfer lines of the CCD), and some of these are then added or binned, classification is only made in H04N 25/46.
This place covers:
Scanning details for reading selected regions of the array, e.g. for performing electronic zooming.
This place covers:
Scanning details for thinned-out reading of pixel signals.
This place covers:
Binning charges in CCD sensors wherein:
- the colours of the colour filter array are preserved;
- the colours of the colour filter array are mixed;
- weighted addition or low pass filtering is performed.
Binning of charges or adding signals in CMOS sensors wherein:
- the colours of the colour filter array are preserved;
- the colours of the colour filter array are mixed;
- weighted addition or low pass filtering is performed.
Binning of charges in CMOS sensors wherein:
- charges of different photodiodes are added to a shared floating diffusion;
- a photodiode is connectable to a different shared floating diffusion.
Combining of pixel voltage or current signals in CMOS sensors wherein:
- the combining is implemented in the ADC – typically the counter or the memory of the ADC is arranged to perform addition of the pixel signals;
- the combining is implemented in a column amplifier;
- column processing analogue circuits are used to perform addition in h- or v- direction;
- summing of the currents of several source followers is used.
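The colour-preserving binning listed above can be illustrated for a Quad Bayer layout, where each 2x2 group of pixels shares one colour filter, so summing a group's charges preserves the colour while quartering the resolution. This is an illustrative Python sketch (the function name and values are invented), not a model of any specific sensor circuit.

```python
# Illustrative 2x2 charge binning on a Quad Bayer layout: each
# non-overlapping 2x2 block shares one colour filter, so summing its
# four charges preserves the colour of the binned pixel.

def bin2x2(array):
    """Sum each non-overlapping 2x2 block of a 2D pixel array."""
    binned = []
    for r in range(0, len(array), 2):
        row = []
        for c in range(0, len(array[0]), 2):
            row.append(array[r][c] + array[r][c + 1]
                       + array[r + 1][c] + array[r + 1][c + 1])
        binned.append(row)
    return binned

# 4x4 Quad Bayer frame: top-left 2x2 group is red, top-right green, etc.
frame = [
    [1, 2, 5, 6],
    [3, 4, 7, 8],
    [9, 10, 13, 14],
    [11, 12, 15, 16],
]
print(bin2x2(frame))   # [[10, 26], [42, 58]]
```

In a CCD this addition happens as charge transfer into a common well; in CMOS sensors it can be realised on a shared floating diffusion or in the column/ADC circuitry, as the bullets above enumerate.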
This place covers:
Dynamic vision sensors [DVS]: scanning individual pixels or pixel regions based on image data, for example where the scanning is based upon time events, level changes or exposure level. The figure below shows an example of a pixel for such a sensor.
Circuitry and control thereof of DVS pixels:
- pre-amplifier stages, which often include a current to voltage converter (e.g. block 410);
- difference or subtractor stages (e.g. block 430);
- comparator stages (e.g. block 440);
- storages of event flags in DVS pixels (e.g. memories in the pixel arbiter to store events until they are read);
- circuitry of arbiters and read-out stages of DVS pixels;
- DVS pixels with an intensity pixel output, e.g. including an active pixel sensor [APS] pixel part; the photodiode can be shared, or two separate photodiodes can be provided in the pixel;
- DVS pixels used to control an active pixel sensor [APS] pixel output;
- DVS pixels with multiple thresholds to detect more than an ON/OFF event;
- control of threshold and bias settings of DVS pixels, e.g. a global or local signal setting or controlling either bias currents or threshold voltages of DVS pixels;
- reduction of noise events detected by DVS sensors;
- binning or spatial filtering of DVS pixels.
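The event-generation principle of a DVS pixel can be sketched as follows: an ON (or OFF) event fires whenever the log intensity rises (or falls) by more than a threshold relative to the level stored at the last event. This is an illustrative Python model with invented names and values; real DVS pixels implement this with the analogue pre-amplifier, subtractor and comparator stages listed above.

```python
# Sketch of DVS event generation: compare the change in log intensity
# against a threshold and emit ON (+1) / OFF (-1) events, resetting the
# reference level after each event.

import math

def dvs_events(intensities, threshold=0.2):
    """Return (sample_index, polarity) events for a pixel intensity sequence."""
    events = []
    ref = math.log(intensities[0])         # level stored at the last event
    for i, value in enumerate(intensities[1:], start=1):
        delta = math.log(value) - ref
        if delta > threshold:
            events.append((i, +1))         # ON event: brighter
            ref = math.log(value)
        elif delta < -threshold:
            events.append((i, -1))         # OFF event: darker
            ref = math.log(value)
    return events

# Slow drift produces no events; a jump up and a later jump down
# produce one ON and one OFF event respectively.
print(dvs_events([100, 101, 102, 150, 149, 90]))   # [(3, 1), (5, -1)]
```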
Attention is drawn to the following places, which may be of interest for search:
Pixels for event detection |
This place covers:
Circuits and arrangements for increasing the resolution by shifting the sensor relative to the scene, including:
- implementing the micro-scanning or pixel shift by moving optical parts of the camera;
- implementing the micro-scanning or pixel shift by moving the sensor;
- increasing resolution by moving or exposing at subpixel positions;
- increasing resolution by using the relative motion of the images captured caused by the camera shake.
This place covers:
Circuitry and means realised inside the sensor [SSIS] chip, or even inside each pixel circuit, to control the exposure settings, e.g. rolling shutter, global shutter, exposure time, gain, etc.
Attention is drawn to the following places, which may be of interest for search:
Circuitry for compensating brightness variation in the scene |
This place covers:
Details of control of the integration time, in particular:
- details of performing global shutter operations in an image sensor;
- details of performing rolling shutter operations in an image sensor;
- integration time control and synchronisation of the electronic shutter in combination with a light source;
- integration time control and synchronisation of the electronic shutter in combination with mechanical shutter control;
- integration time control and synchronisation of the electronic shutter as a function of motion in the scene;
- coded exposure for a flutter shutter camera.
Attention is drawn to the following places, which may be of interest for search:
Control of camera brightness compensation by influencing the exposure time | |
Control of camera brightness compensation by influencing the scene brightness using illuminating means | |
Control of camera brightness compensation by influencing optical camera components |
This place covers:
- Details of controlling rolling shutters.
This place covers:
Details of controlling the integration times of different regions of the image sensor wherein:
- the different regions can be predetermined;
- the different regions can be dynamically selected, for example based upon exposure conditions, ROI, speed or user selection;
- the integration time is controlled for each pixel.
If the control of the integration times is related to extension of dynamic range, classification in H04N 25/57 should also be considered.
This place covers:
Details of controlling the integration times depending on the colour of the pixel.
This place covers:
Circuitry and control thereof for extending the dynamic range of an SSIS. The dynamic range of an SSIS is defined as the ratio of the maximum possible, non-saturating input signal (full well capacity) to the minimum detectable input signal, which is limited by the total noise floor signal (in the dark). Circuitry and control thereof for converting the brightness of the scene into signal values by a non-linear response function.
Circuitry and control thereof for reading image signals from which an HDR image can be generated.
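The dynamic range defined above is commonly quoted in decibels. A minimal worked example (the full well and noise figures are illustrative only, not taken from any particular sensor):

```python
# Dynamic range of an image sensor in dB: ratio of the full well capacity
# (maximum non-saturating signal) to the noise floor (minimum detectable
# signal), both in electrons.

import math

def dynamic_range_db(full_well_e, noise_floor_e):
    return 20.0 * math.log10(full_well_e / noise_floor_e)

# A pixel with a 20 000 e- full well and a 2 e- noise floor:
print(round(dynamic_range_db(20_000, 2)))   # 80 dB
```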
This place covers:
- Controlling the sensor dynamic range using image sensors with pixel circuits having a non-linear response;
- Driving and control thereof.
The non-linear response can be achieved in different ways, for example, by using a specific photodetector, by controlling the reset or the transfer gate driving signals, by controlling the gain or by using non-linear amplifiers.
Details of control of the charge storable in the pixel are classified in group H04N 25/59.
While group H04N 25/58 covers extending the dynamic range by using multiple exposures, group H04N 25/571 covers the response characteristic (or opto-electronic conversion function) of the sensor during a single exposure.
This place covers:
- Image sensors comprising pixel circuits having a logarithmic characteristic;
- Image sensors comprising pixel circuits having a linear log characteristic;
- Driving and control thereof.
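The benefit of a logarithmic characteristic can be sketched numerically: the output compresses many decades of photocurrent into a small voltage swing, whereas a linear pixel clips at saturation. The constants below are illustrative assumptions, not parameters of any specific sensor.

```python
# Illustrative comparison of a logarithmic and a linear pixel response.

import math

def log_pixel(photocurrent, v0=0.5, slope=0.06):
    # Subthreshold-style response: output grows with the log of the input.
    return v0 + slope * math.log10(photocurrent)

def linear_pixel(photocurrent, gain=0.01, v_sat=1.0):
    return min(gain * photocurrent, v_sat)   # clips at saturation

# Five decades of illumination span only ~0.3 V for the log pixel,
# while the linear pixel saturates after two decades.
for current in (1, 10, 100, 1_000, 10_000, 100_000):
    print(current, round(log_pixel(current), 3), linear_pixel(current))
```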
This place covers:
Image sensors comprising pixel circuits having multi-slope characteristics and driving and control thereof.
This place covers:
Details for driving and control of image sensors wherein the dynamic range is extended by multiple exposures. The term exposure is not limited to exposure time but rather specifies the overall amount of detected light, which further depends on the pixel size, pixel sensitivity, conversion gain, etc.
Attention is drawn to the following places, which may be of interest for search:
Combination of exposures for increasing the dynamic range | |
Bracketing, i.e. taking a series of images with varying exposure conditions |
This place covers:
Image sensors and driving circuits for controlling sensor dynamic range using two or more simultaneously acquired exposures, including:
- providing pixels for multiple exposures, like long- and short-time exposure pixels, high- and low-sensitivity pixels;
- reading out pixels non-destructively several times within a frame to provide multiple exposures;
- partial readout of pixels of the array (partial charge transfer or charge skimming) during the exposure time.
This place covers:
Controlling sensor dynamic range using two or more simultaneously acquired exposures with different integration times, including:
- providing pixels for multiple exposures, such as long and short exposure time pixels;
- providing pixels that are read out several times non-destructively during a single exposure period, wherein the readout signals are combined to generate a high dynamic range signal;
- providing pixels that have charge partially transferred to a storage node (charge skimming) during the exposure period, wherein the signals from the partial readout and from the end of exposure are combined to generate a high dynamic range signal.
While group H04N 25/533 covers control of exposure time in different regions of the image sensor, group H04N 25/583 provides for the simultaneous acquisition of two or more exposures using different integration times which are combined in such a way that a new high dynamic range image signal is generated. If a partial or non-destructive readout is used only for setting the exposure period of the pixel, classification should be made in H04N 25/533.
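The combination of two simultaneous integration times into one high dynamic range signal can be sketched as follows. This is an illustrative Python model with invented constants: the long exposure is used where it has not saturated, and otherwise the short exposure is scaled by the exposure ratio to reconstruct the bright part of the scene.

```python
# Sketch of merging a long and a short integration into one HDR value.

FULL_SCALE = 4095          # 12-bit readout (illustrative)
RATIO = 16                 # long/short integration time ratio (illustrative)

def merge_hdr(long_sample, short_sample):
    if long_sample < FULL_SCALE:
        return long_sample                 # long exposure still in its linear range
    return short_sample * RATIO            # reconstruct from the short exposure

print(merge_hdr(1000, 62))     # dark region: long exposure is used -> 1000
print(merge_hdr(4095, 1000))   # saturated: 1000 * 16 = 16000
```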
This place covers:
Controlling sensor dynamic range using two or more simultaneously acquired exposures using different pixel sensitivities, including the use of sensors and driving circuits with:
- different sensitivities;
- different sizes;
- different conversion gains.
The combination of these signals is used to generate an HDR signal.
This place covers:
Driving and control of image sensors for sequentially taking multiple exposures for extending the dynamic range. The signals from the multiple exposures can be stored in the pixel or outside of the pixel array.
This place covers:
Controlling sensor dynamic range using two or more sequentially acquired exposures with different integration times, e.g. using long and short integration times.
This place covers:
- Details related to image sensors comprising pixels that can modify the charge conversion ratio of the floating node. If a transfer gate is used, the amount of electric charge generated in the photoelectric converter PD is not controlled, but rather the charge to voltage conversion ratio of the floating diffusion.
- Details related to image sensors comprising pixels which can store and read out overflow charges.
Attention is drawn to the following places, which may be of interest for search:
Pixel circuitry comprising storage means other than floating diffusion |
This place covers:
- Noise processing circuits for reduction of random noise, line noise, high frequency noise, temporal noise caused by voltage drop of power supply or of driving circuits when implemented as part of the image sensor;
- Circuits for control of bandwidth of amplifiers or comparators implemented in the image sensor as far as related to the overall noise level of the image sensor;
- Noise processing circuits for reduction of optical crosstalk, light leakage, colour mixing and other noise originating from the components of the associated optical system;
- Noise processing circuits for reduction of frame-to-frame variations caused by the image sensor and not by external illumination variation;
- Image sensor noise characterisation, e.g. methods to derive parametric models to quantify different sensor noise types (such as readout noise or photo-shot noise) in the sensed image according to e.g. Gaussian, Poisson or uniform probability distribution functions; methods to calibrate and obtain noise levels of sensor data for further use, for example in filtering applications.
Attention is drawn to the following places, which may be of interest for search:
Camera processing pipelines or components thereof for suppressing or minimising disturbance in the image signal generation |
This place covers:
- Circuits for detecting and correcting flare;
- Circuits for detecting and correcting shading and vignetting;
- Circuits for detecting and correcting geometrical distortions.
Although not always specific to SSIS, the noise/distortion produced by a lens is nevertheless classified in group H04N 25/61 and not in group H04N 23/81. This has been done to facilitate the search. Corrections of chromatic aberrations, which can also be related to lenses, are classified in group H04N 25/611. All other noise suppression or disturbance minimisation in picture signal generation, e.g. in a camera having an electronic image sensor, should be classified in group H04N 23/81.
Attention is drawn to the following places, which may be of interest for search:
Camera processing pipelines or components thereof for suppressing or minimising disturbance in the image signal generation | |
Correction of chromatic aberration | |
Geometric image transformation in the plane of the image | |
Image enhancement performing geometric correction |
This place covers:
Circuits for detecting and correcting noise originating from the associated optical system involving a transfer function modelling the optical system.
This place covers:
Details of circuits for implementing:
- double sampling [DS] – these circuits compensate for offsets caused by the varying characteristics of pixel amplifiers (source followers);
- correlated double sampling [CDS] – these circuits further reduce the kTC (reset) noise;
- multiple sampling – multiple sampling of a reset signal and an image signal from a pixel is used to reduce or average the random noise;
- (correlated) double/multiple sampling function implemented in the analogue domain, i.e. by using clamping circuits, or by using separate sampling capacitors for the reset signal and the image signal;
- (correlated) double sampling function implemented at least partially in the ADC;
- (correlated) double sampling function implemented in the digital domain;
- CDS circuits per pixel;
- details of arrangement of the CDS circuit as part of the readout circuit;
- CDS arranged per column;
- CDS arranged at the output of the sensor.
If the specific position of the CDS in the image sensor is to be classified, classification should be made under H04N 25/70 according to the respective SSIS architecture. Correlated double sampling is a noise reduction technique in which the reference voltage of the pixel (i.e. the pixel's voltage after it is reset) is subtracted from the signal voltage of the pixel (i.e. the pixel's voltage at the end of integration) at the end of each integration period, to cancel kTC noise (the thermal noise associated with the sensor's capacitance). Therefore, classification should not be made in H04N 25/65 (reduction of kTC noise) if only CDS is used for kTC noise reduction.
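The subtraction underlying correlated double sampling can be sketched numerically as follows (an illustrative model; the function name and all voltage values are invented for the example, and real CDS operates on analogue samples):

```python
def correlated_double_sample(reset_sample, signal_sample):
    """Subtract the end-of-integration sample from the reset-level sample.
    The kTC (reset) noise and the source-follower offset appear in both
    samples, so they cancel in the difference."""
    return reset_sample - signal_sample

# Both samples carry the same 4 mV offset error; the subtraction removes it,
# leaving only the 0.3 V swing produced by the integrated photo-charge.
offset_error = 0.004
video_level = correlated_double_sample(2.0 + offset_error, 1.7 + offset_error)
```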
This place covers:
Circuits for detecting and reducing electromagnetic interferences and clocking noises.
Such electromagnetic interference can be caused by sources internal or external to the sensor, such as from lens focusing motors.
This place covers:
Circuits for detecting and reducing excess charges produced by the exposure.
This place covers:
- Circuits for control of blooming by resetting pixels that are not read out but are adjacent to pixels that are read out, so that saturation of the non-read pixels does not affect the adjacent read-out pixels;
- Circuits for sweeping out excess charges beforehand so that they do not leak into a prior row while it is being exposed;
- Circuits for controlling pixels comprising a storage element for storing the overflow photo-charges; the stored overflow charge is then used to extend the dynamic range of the image sensor;
- Evacuation of excess charges produced by the exposure via the output lines or the reset lines of addressed sensors;
- Active CMOS pixel sensors comprising a dedicated reset or overflow transistor directly connected to the photoelectric converter; such a pixel is known as a 5T pixel.
Details related to image sensors comprising pixels that can store and read out overflow charges are to be classified in group H04N 25/59.
Attention is drawn to the following places, which may be of interest for search:
Partially reading an SSIS array | |
Controlling the dynamic range by controlling the amount of charge storable in the pixel |
This place covers:
Anti-blooming drains used in the CCD sensors.
This place covers:
- Evacuation of excess charges produced by the exposure via the output lines or the reset lines of addressed sensors.
- Active CMOS pixel sensors may comprise a dedicated reset or overflow transistor directly connected to the photoelectric converter. Such a pixel is known as a 5T pixel.
This place covers:
Circuits for control of smearing in CCD sensors – the smearing noise appears as vertical stripes in the image.
This place covers:
Circuits for reduction of residual charges.
This place covers:
Circuits for detection and reduction of inverted contrast or eclipsing.
Eclipsing can occur when at least some pixels of the CMOS imager are exposed to strong light, such as direct illumination from the sun. The strong light may cause electrons to spill over from the photodiode into the floating diffusion region, causing an erroneous reset signal to be sampled (e.g. reset signals sampled during reset operations may exhibit voltage levels below the desired reset level). Consequently, the pixel signal computed by the column readout circuitry becomes an undesirably small value, so that an over-illuminated pixel appears dark when it should be bright.
A typical anti-eclipse circuit is configured to correct the voltage level of the reset signal by pulling the reset level up to a corrected voltage, thereby minimizing the eclipse effect.
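The clamping behaviour described above can be sketched as follows (a minimal illustrative model; the function name and all voltage values are invented for the example):

```python
def anti_eclipse_clamp(sampled_reset, clamp_level):
    """If strong light has pulled the sampled reset level below the clamp
    level, substitute the clamp level, so that the subsequent reset/signal
    difference stays large and the over-illuminated pixel reads bright
    rather than dark."""
    return max(sampled_reset, clamp_level)

normal = anti_eclipse_clamp(2.0, 1.8)    # unaffected pixel: reset level kept at 2.0 V
eclipsed = anti_eclipse_clamp(0.4, 1.8)  # eclipsed pixel: reset level pulled up to 1.8 V
```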
In patent documents, the following words/expressions are often used as synonyms:
- "eclipse", "darkening", "blackening", "dark defect", "black crush", "black sun", "dark sun", "black inversion", "white-black inversion", "black dot", "black grave", "black core", "black point", "tanning phenomenon", "sunspot phenomenon", "solar blackening", "blackening phenomenon", "spotlight blackening", "high-brightness darkening", "black depression", "black sinking", "high-intensity blackening".
This place covers:
Circuits for control of noise that appears as horizontal stripes in the image and is normally caused by voltage variations or coupling effects caused by sampling or resetting overexposed pixels. It is also called streaking, pseudo-smear or band-like pattern noise.
This place covers:
- Circuits for detection and reduction of dark current.
- Circuits performing dark frame subtraction, which removes an estimate of the mean fixed pattern; a temporal noise component remains, because the dark current itself has shot noise.
- Circuits using optical black pixels for dark current compensation.
- Circuits using optical black pixels provided for each pixel or group of pixels.
The pattern of different dark currents can result in a fixed-pattern noise, which is classified in H04N 25/67. Dark current is caused by charges generated in the detector when no radiation is entering the detector. Accordingly, only the fixed-pattern noise caused by the dark current can be corrected or compensated. Dark current is temperature, exposure and pixel-size dependent.
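The dark frame subtraction mentioned above can be sketched as a per-pixel difference (an illustrative sketch; the function name and values are invented for the example):

```python
def subtract_dark_frame(raw, dark):
    """Dark-frame subtraction: remove the per-pixel mean dark level
    (the fixed pattern).  The shot noise of the dark current itself is
    temporal and therefore remains after the subtraction."""
    return [[r - d for r, d in zip(raw_row, dark_row)]
            for raw_row, dark_row in zip(raw, dark)]
```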
This place covers:
Pixels shielded from incident light for detecting only dark current.
This place covers:
Circuits for reduction of reset noise:
- by applying soft reset or combination of soft and hard reset;
- by feeding back the reset readout signal to the floating diffusion.
Attention is drawn to the following places, which may be of interest for search:
Noise processing involving a correlated sampling function, e.g. correlated double or triple sampling |
This place covers:
Circuits for detection and reduction of fixed-pattern noise.
This place covers:
Circuits and arrangements for detecting and correcting non-uniformity caused by sensor characteristics such as:
- different pixel characteristics – sensitivity, gain, offset, response curve;
- different characteristics of sampling circuits, amplifiers, ADCs used for different groups of pixels;
- the resistive or capacitive properties of long readout or control lines.
Circuits and arrangements for detecting and correcting non-uniformity by:
- using dummy pixels and/or dummy structures (not OB pixels) for detecting offset variations;
- using correction circuits for correcting gain variations between pixels or groups of pixels;
- performing measurement of the gain variations;
- using correction circuits for correcting offset variations between pixels or groups of pixels;
- performing measurement of the offset variations.
Non-uniformity correction modes for
- measuring the gain responses of the pixels;
- measuring the offset responses of the pixels.
There is a certain similarity between the circuits and methods for correcting dark current (H04N 25/63) and for correcting offset non-uniformities of the pixels. Since both can be temperature dependent, both can be corrected by using a dark frame.
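The relation between offset measurement (dark frame) and gain measurement (flat, uniformly illuminated frame) can be sketched as a per-pixel two-point correction (an illustrative sketch; function names, the calibration procedure and all values are assumptions of the example):

```python
def two_point_calibration(dark, flat, target):
    """Per-pixel calibration: the dark frame gives each pixel's offset;
    a uniformly illuminated (flat) frame gives each pixel's gain."""
    offset = dark
    gain = [target / (f - d) for f, d in zip(flat, dark)]
    return offset, gain

def apply_nuc(raw, offset, gain):
    """Corrected = gain * (raw - offset): pixels with different gains and
    offsets viewing the same scene now report the same value."""
    return [g * (r - o) for r, o, g in zip(raw, offset, gain)]
```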
This place covers:
Circuits and arrangements for detecting and correcting non-uniformity between adjacent regions or output registers.
This place covers:
- Circuits that use dedicated dummy pixels for detecting and correcting non-uniformity;
- Circuits that use a reference voltage source;
- Circuits that use a dark image of the scene.
This place covers:
Circuits that use information from the captured image for determining non-uniformity characteristics, e.g.:
- the scene information may be selected from expected uniform regions;
- the scene information can be defocused to generate a uniform-like scene;
- the scene can be captured by using pixel shifting, and the difference between the pixels that capture the same part of the scene can be used for detecting non-uniformity.
This place covers:
Details of reducing column or line fixed pattern noise. This noise is caused by different characteristics of column parallel circuits.
This place covers:
Circuits for correction of defects caused by:
- defects or non-responsive pixels;
- defects in column readout lines;
- defects in readout circuits;
- defects in the scanning circuits;
- defects in the control lines.
This place covers:
Details of circuits that detect defects such as non-responsive pixels in real time by using the image signal.
This place covers:
Details of SSIS architecture.
Attention is drawn to the following places, which may be of interest for search:
Imager structures, as devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate, per se |
Attention is drawn to the following places, which may be of interest for search:
Details of scanning heads for picture-information pick-up with photodetectors arranged in a substantially linear array |
This place covers:
SSIS with
- non-planar (e.g. foveal) or curved pixel layout;
- non-identical or non-equidistant pixels distributed over the pixel array.
Attention is drawn to the following places, which may be of interest for search:
Controlling sensor dynamic range using two or more simultaneously acquired exposures using different pixel sensitivities | |
Imager structures |
This place covers:
SSIS comprising dedicated pixels or control thereof, e.g.
- pixels specially for white balance measurement;
- pixels for exposure or ambient light measurement;
- pixels for triggering an exposure period;
- pixels for edge detection;
- pixels for event detection, for motion or difference detection or for level detection;
- pixels for storing additional non-volatile information;
- pixels for measuring substrate temperature.
This place covers:
SSIS using pixels specially adapted for focusing, including:
- SSIS comprising phase difference pixels.
- SSIS comprising only phase difference pixels, i.e. all pixels comprise more than one photodiode per micro lens. The photodiodes can have shared amplifiers or can be connected to different (shared) amplifiers.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Focusing based on the difference in phase signals |
Attention is drawn to the following places, which may be of interest for search:
Systems for automatic generation of focusing signals using different areas in a pupil plane |
This place covers:
SSIS using pixels for depth measurement, e.g. using time of flight [TOF] or using photonic mixer devices [PMD].
Attention is drawn to the following places, which may be of interest for search:
Image signal generators for stereoscopic video systems constituting depth maps or disparity maps | |
Pixels specially adapted for focusing, e.g. phase difference pixel sets | |
Detector arrays as receiver circuits of Lidar systems | |
Time delay measurement, e.g. time-of-flight measurement, at Lidar receivers | |
Lidar systems, specially adapted for specific applications |
Attention is drawn to the following places, which may be of interest for search:
Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data |
This place covers:
SSIS with circuitry for controlling the power supply, including
- for controlling the control signal levels;
- for controlling different bias and reference voltages;
- biasing circuits for adjusting or controlling the bias of the substrate or other circuitry.
Attention is drawn to the following places, which may be of interest for search:
Charge-coupled imagers |
This place covers:
- Details of transfer registers;
- Details of readout registers:
  - having, for example, changeable transfer direction;
  - electron multiplying CCD [EMCCD];
- Split readout registers;
- Multiple readout registers:
  - for readout in H and V directions;
  - for reading out different colours.
This place covers:
- Addressing circuits for CCD pixel arrays.
- CCD timing and clock generating circuits typically generate the vertical and horizontal sync signals VT, VH, which determine the timing of the vertical and horizontal scanning operations. A further driver circuit generates driving signals that force the CCD to transfer the information through the transfer registers. The parent group covers circuits for generating the driving signals and details related to those driving signals or pulses.
If the document does not provide any specific details related to the row scanning/addressing circuits but rather functionally describes details of performing different sensor readout operations, then only group H04N 25/40 should be used for classification. Similarly, if the document specifies only functional details related to control of the exposure time, then group H04N 25/50 and/or group H04N 25/57 should be used for classification.
Attention is drawn to the following places, which may be of interest for search:
Arrangements for selecting an address in a digital store |
This place covers:
Readout circuits that are applicable to a CCD image sensor.
Readout circuits for CCD sensors arranged at the output of the sensor:
- CCD output stages like output buffers and source followers.
- CCD output stages which are column parallel, i.e. provided for each column.
CCD circuitry for modifying or processing image signals from the pixel array.
This place covers:
Circuits for driving and controlling addressed sensors.
There is a wide variety of addressed image sensors using different ways of transforming light into electrical current or voltage. The following aspects are classified in this group.
Active pixel sensors [APS]:
- using photodiodes or two-terminal semiconductor elements as photodetector;
- using a graphene layer as photodetector;
- using a photo-conversion layer as photodetector;
- having pixels with small full-well capacity (e.g. 200 e-), high conversion gain (e.g. 1 mV/e-) and small pixel size (e.g. 900 nm), e.g. QIS or binary pixels.
Passive pixel sensors:
- using photodiodes or two-terminal semiconductor elements as photodetector;
- using bipolar transistors as photodetector;
- using charge injection devices [CID];
- using charge modulation devices, static induction transistors [SIT] or base-stored image sensors [BASIS];
- using CMOS-CCD structures;
- using diodes as (row) selection switches.
Bolometers used for far infrared imaging.
This group also covers addressed image sensors:
- comprising an additional frame memory;
- comprising testing structures;
- implemented within a display panel;
- providing specific details of the sensor input/output interfaces;
- providing details of partitioning of the signal processing circuits between the sensor and another chip;
- being a camera on chip.
Examples of places in relation to which this place is residual:
Detection or reduction of inverted contrast or eclipsing effects | |
Detection or reduction of noise due to excess charges produced by the exposure for reducing horizontal stripes caused by saturated regions of CMOS sensors |
Attention is drawn to the following places, which may be of interest for search:
Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors | |
Semiconductor technology of imager structures other than CCD as, e.g. CMOS | |
Charge-coupled imagers per se |
This place covers:
Addressed sensors comprising control lines used for a plurality of control functions, for example:
- a control line used to control the transfer gate of one pixel and to control the reset gate of another pixel;
- a control line used as power line, pixel select line or column output line.
This place covers:
- Arrangement of scanning circuits for generating control signals for a multiplexer or an arrangement of switches that connects the column lines of the sensor array to the sensor output. In contrast to CCD sensors, addressed image sensors do not necessarily comprise transfer or readout registers that transfer the image signal to the output.
- Details of analogue (pixel signal) shift registers and scanning circuits thereof.
- Bucket-brigade type shift registers.
- Details of digital (signal) shift registers and scanning circuits thereof.
- Horizontal and vertical lines to read out the pixel array in x- and y- directions.
- Multiple horizontal readout lines for different sensor regions.
- Multiple horizontal readout lines for different colours.
- Details of multiplexers or switches for horizontal scanning used for performing horizontal binning between signals from different column lines.
- Details of multiplexers or switches for outputting signals from a column line to different readout lines.
Circuits like AD converters, correlated double sampling or amplifiers provided for each column are not part of the readout registers, but all these circuits can be part of a readout circuit. Details of column parallel AD converters, CDS circuits or column amplifiers are classified in group H04N 25/78. Group H04N 25/767 covers details of how the data is transmitted to the output.
This place covers:
Details of pixel circuits and control thereof. However, since pixel circuits known as 3T, 4T, 5T or passive pixels are well known, these pixel structures as such are not classified in this group unless the invention relates to specific pixel properties.
Pixels characterised by their mode of operation:
- pixels having different modes – e.g. a pixel configurable to work as TOF, as a photon counter, for event detection, as an integration pixel, etc.
- pixels having different read-out modes.
Pixel details related to the pixel output interface. For example, pixels:
- having multiple outputs;
- having digital and analogue output;
- having passive and active output, i.e. pixels which can be read out as passive and active pixels.
- characterised by the type and the characteristics of the amplifier used. For example, pixels having specific details related to the source follower in the APS and of the source follower transistor, e.g. type of the SF transistor, load of the SF implemented in the pixel, control of the SF voltage;
- multistage amplifiers, e.g. two-stage source followers;
- multiple source followers per pixel connected in parallel;
- distributed amplifiers, i.e. pixels comprising only part of the amplifier, the remaining part is shared for a group of pixels or for a column of pixels;
- CTIA or common drain amplifiers, not source followers.
- pixels characterised by the type and the characteristics of the charge transfer elements. For example, pixels:
- with details of control of the transfer gate;
- with details of transfer gate transistor: enhancement-, p- type;
- with plurality of transfer gates connected in parallel;
- with plurality of transfer gates connected in series.
Note: a plurality of transfer gates for connecting additional storage means within the pixel are classified in group H04N 25/771.
- having direct injection gate.
- having charge multiplying portion.
- having time segregation structure for arrival time measuring.
- reading the photocurrent.
Pixels characterised by the type and the characteristics of the reset switch. For example, pixels:
- with reset level control;
- with details of the reset transistor: enhancement-, p- type.
Pixels comprising control circuits using signals from the neighbouring pixels, e.g. for control of pixel conversion gain or exposure time in function of the average signal value of the neighbouring pixels.
Pixels comprising capacitors for applying control signals (RST, SEL) through them.
H04N 25/77 and subgroups do not cover associated circuits. For example, an A/D converter [ADC] in the readout circuit outside the matrix is classified in group H04N 25/78 and not in subgroup H04N 25/772.
This place covers:
Addressed sensors with pixel circuits comprising additional storage means, i.e. storage means other than the floating diffusion.
The storage means can be analogue storage means:
- in the charge domain;
- in the voltage domain, i.e. after the source follower.
The storage means can be digital memories or non-volatile memories.
The storage means are used for different purposes. For example:
- for storing reset and exposure signals for performing CDS;
- for storing several exposure periods;
- for performing high frame rate imaging;
- for performing HDR imaging;
- for storing overflow charges during the exposure period;
- for storing non-destructive readout signals during the exposure period.
This group is not used for memories provided in the AD converters. Such pixels are classified in group H04N 25/772.
Attention is drawn to the following places, which may be of interest for search:
Extending the dynamic range of solid-state image sensors [SSIS] by controlling the amount of charge storable in the pixel | |
Noise processing involving correlated double sampling [CDS] performed in the pixel | |
CDS performed in readout circuits for addressed sensors |
This place covers:
Addressed sensors with pixels or groups of pixels comprising A/D, V/T, V/F, I/T or I/F converters. The converters should be at least partially implemented in the pixel array.
Stacked chip structures in which a pixel or a group of pixels is connected to an A/D converter implemented on a different chip.
This group does not cover image sensors in which a column of pixels is connected to an A/D converter.
A/D converters can be of any type and can be specifically designed for photoelectric pixel circuits and/or to work in combination with other pixel elements like transfer gates, reset gates, source followers, etc. A/D converters can be used to convert the image signal from the pixel to a digital value. A/D converters can be used to generate a digital value for controlling different characteristics of the pixel like its exposure time or sensitivity.
Some pixel circuits comprising converters provide an analogue and a digital output, or a multiplexed digital and analogue output.
The converters convert current or voltage levels to signals of different frequency, i.e. current-to-frequency [I/F] converters, or use a voltage-controlled oscillator to perform voltage-to-frequency conversion [V/F].
The converters convert the signal from the photo sensor to a time-dependent signal (V/T or I/T converter). These circuits are sometimes called ADC using pulse width modulation [PWM]. A comparator measures the duration of the exposure time needed for the pixel to reach a predetermined threshold. The duration of the pulse corresponds to the pixel level. The duration of the pulse can be converted to a digital value by using a counter or to analogue signal using a ramp signal.
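The time-to-threshold principle described above can be sketched as follows (an illustrative behavioural model; the function name and all parameter values are invented for the example):

```python
def pwm_adc(photocurrent, threshold, max_time, clock_hz):
    """Time-to-threshold (PWM) conversion: a comparator measures how long
    the integrating pixel needs to reach the threshold, and a counter
    digitises that pulse width.  Brighter pixels cross the threshold
    sooner, so they yield a smaller count."""
    t_cross = min(threshold / photocurrent, max_time)  # clip at end of exposure
    return int(t_cross * clock_hz)                     # counter value = pulse width
```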
The converters are ADC converters that count the number of exposure periods. These circuits are sometimes also called voltage to frequency converters or ADC using pulse frequency modulation [PFM]. The duration of each exposure period is defined by a control circuit that determines when the signal from the photodiode reaches a predetermined threshold. The control circuit normally performs a reset operation and starts the new exposure period. Note that a part of or the entire control circuit can be implemented outside the pixel array.
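The pulse-frequency principle can likewise be sketched as counting self-reset events over the exposure (an illustrative behavioural model with an idealised, noise-free pixel; the function name and values are invented for the example):

```python
def pfm_adc(photocurrent, threshold, exposure_time):
    """Pulse-frequency (PFM) conversion: each time the integrated charge
    reaches the threshold, the pixel self-resets and a counter increments,
    so the count over the exposure is proportional to the light level."""
    total_charge = photocurrent * exposure_time  # charge integrated over the exposure
    return int(total_charge // threshold)        # number of threshold/reset events
```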
The converters are part of photon counting pixels that generate one-bit signals corresponding to a detected photon, and the number of detected photons for a predetermined time is counted to provide a digital value (Details for such pixel circuits can be found in groups G01T 1/247, G01J 1/46 as a part of a radiation measuring system).
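The accumulation of one-bit photon flags into a multi-bit value can be sketched as follows (an illustrative sketch; the function name is invented for the example):

```python
def photon_counting_readout(one_bit_frames):
    """Each readout contributes a one-bit 'photon detected' flag per pixel;
    summing the flags over many readouts yields the multi-bit digital
    pixel value."""
    return [sum(bits) for bits in zip(*one_bit_frames)]
```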
The converters are single slope ADCs (Details of single slope ADCs as such can be found in group H03M 1/56).
The converters are flash type ADCs.
The converters are sigma delta ADCs.
In this place, the following terms or expressions are used with the meaning indicated:
A/D converter | Circuit for analogue-to-digital conversion [ADC] of a signal |
V/T converter | Circuit for converting a pixel output voltage to a time signal |
V/F converter | Circuit for converting a pixel output voltage to a frequency signal |
I/T converter | Circuit for converting a pixel output current to a time signal |
I/F converter | Circuit for converting a pixel output current to a frequency signal |
Attention is drawn to the following places, which may be of interest for search:
Photometry using electric radiation detectors | |
Semiconductor sensitive to radiation in which the potential barrier is working in avalanche mode, e.g. avalanche photodiode |
This place covers:
Addressed sensors with pixel structures in which multiple photodiodes are provided. Respective transfer gates are used to transfer the charges accumulated in the photodiodes to a floating diffusion, and the floating diffusion is connected to a gate of an amplifier transistor. The amplifier is implemented within the pixel array.
Passive pixel sensors comprising a shared amplifier per column are classified in group H04N 25/76.
Active pixel sensors comprising column-parallel amplifiers are classified in group H04N 25/78.
Where shared pixel structures are used for different applications, classification is also made in the groups relating to the application. For example, when charges of the shared photodiodes are binned in the floating diffusion, classification is also made in H04N 25/46. Where shared photodiodes have different sensitivities, classification is also made in H04N 25/585. Pixels specially adapted for focusing (e.g. phase difference pixel sets) are also classified in H04N 25/704.
Shared photodiodes have different sensitivity | |
Pixels specially adapted for focusing, e.g. phase difference pixel sets | |
Charges of the shared photodiodes are binned in the floating diffusion |
This place covers:
- Addressed image sensors such as CMOS image sensors using row and column scanning or addressing circuits.
- Addressed sensor circuitry where the column scanning/addressing circuits are only used to provide row pixel data to the output of the sensor.
- Addressed sensor circuitry where the row scanning/addressing circuits, in addition to the row select signals, provide further control signals to the pixels such as transfer gate [TG] or reset [RST] signals.
- Details of scanning/addressing circuits for addressed image sensors;
- Details related to the electronic circuitry of the scanning circuits, multiple scanning circuits, details related to the generation of driving pulses for TG, RST, ROW SEL.
If the document does not provide any specific details related to the row scanning/addressing circuits but instead describes functional details of performing different sensor readout operations, then classification should only be made in H04N 25/40. Similarly, if the document only specifies functional details related to control of the exposure, then classification should be made in H04N 25/50 and/or H04N 25/57 as appropriate.
Attention is drawn to the following places, which may be of interest for search:
Arrangements for selecting an address in a digital store |
This place covers:
Details of timing or clock signal generating circuits. These circuits drive the row electronics, the column electronics and control the readout of the pixel area.
This place covers:
Readout circuits for addressed image sensors defining details related to the column readout lines and the circuits associated with them. Although the readout lines are placed in the sensor array, they are a functional part of the readout circuits.
These details are, for example, readout arrangements with:
- several column readout lines per column of pixels;
- column lines connectable by switches to perform analogue signal averaging/binning;
- multiple column lines multiplexed to be processed by common processing means, like CDS, ADC, buffers;
- column lines connectable to different processing means (CDS, ADC, buffers) to randomise the column pattern noise;
- column lines randomly connectable to different processing means (CDS, ADC, buffers) to randomise the column pattern noise;
- a column line being shared for pixels in a row;
- several storage capacitors per column used for CDS, binning, multi frame storage, etc;
- reset or clamping circuits connected to the column lines.
Details related to the load circuit, e.g. current source of the source follower and control thereof.
Details related to ADC circuits (ADC circuits as such – group H03M 1/12) used in sensor array readout circuits.
These details are for example, related to:
- ADC type, like single slope, flash, SAR, sigma-delta, or ADC combined with the gain of a programmable gain amplifier (ADCs of this type as such: group H03M 1/18);
- ADC arrangement in the readout circuit;
- ADC arranged per-column or for group of columns;
- ADC arranged at the output of the sensor;
- ADC ramp voltage generation - different slopes and directions, non-linear, ramp amplitude;
- Processing implemented in the ADC, like CDS, binning.
- Details related to output amplifiers:
- CTIA amplifiers (normally used in passive image sensors);
- Amplifiers with controllable gain GCA, PGA;
- Amplifiers arranged per-column or for group of columns;
- Amplifiers arranged at the output of the sensor.
- Details of arrangement of the CDS circuit as part of the readout circuit:
- CDS arranged per column;
- CDS arranged at the output of the sensor.
CDS circuits as such are classified in group H04N 25/616.
This place covers:
Solid state image sensor [SSIS] circuitry divided between different or multiple substrates, including:
- Details of circuits and control thereof adapted for stacked image sensors and the like;
- Details of partitioning the image sensor functional blocks such as the pixel array, scanning circuits, readout circuits and memories between different stacked chips;
- Details of pixel circuitry distributed between different layers;
- Details of ADC circuitry distributed between different layers;
- Details of specific control arrangements or control lines adapted for stacked sensors.
Attention is drawn to the following places, which may be of interest for search:
Flat panel detectors for transforming X-rays into image signals with butting of tiles | |
Line sensors using abutted sensors arranged in a long line |