CPC Definition - Subclass G06T
This place covers:
- Processor architectures or memory management for general purpose image data processing
- Geometric image transformations
- Image enhancement or restoration
- Image analysis
- Image coding
- Two-dimensional image generation
- Animation
- Three-dimensional image rendering
- Three-dimensional modelling for computer graphics
- Manipulating three-dimensional models or images for computer graphics
G06T is the functional place for image data processing or generation. Image data processing or generation specially adapted for a particular application is classified in the relevant application subclass. Documents that merely mention the general use of image processing or generation, without detailing how it is carried out, are classified in the application place. Where the essential technical characteristics of an invention relate both to the image processing or generation and to its particular use or special adaptation, classification is made in both G06T and the application place.
Attention is drawn to the following places, which may be of interest for search:
- Apparatus for radiation diagnosis
- Aspects of games using an electronically generated display having two or more dimensions
- Measuring, by optical means, length, thickness or similar linear dimensions, angles, areas, irregularities of surfaces or contours
- Reading or recognising printed or written characters or recognising patterns, e.g. fingerprints
- Coding, decoding or code conversion
- Pictorial communication, television systems
Symbols under G06T 1/00 - G06T 19/20 may only be allocated as invention information.
Whenever possible, additional information should be classified using one or more of the Indexing Codes from the range of G06T.
The indexing codes under G06T 2200/00 - G06T 2219/2024 may only be allocated to documents to which a symbol under G06T 1/00 - G06T 19/20 is allocated as invention information as well.
The following list of symbols from the series G06T 2200/00 are for allocation to documents within the whole range of G06T except G06T 9/00:
- Indexing scheme for image data processing or generation, in general - Not used for classification
- involving 3D image data - processing of 3D image data, i.e. voxels; relevant for G06T 3/00, G06T 5/00, G06T 7/00 or G06T 11/00
- involving all processing steps from image acquisition to 3D model generation - complete systems from acquisition to modelling
- involving antialiasing - dejagging, staircase effect
- involving adaptation to the client's capabilities - adapting the colour or resolution of an image to the client's capabilities
- involving computational photography
- involving graphical user interfaces [GUIs]
- involving image processing hardware - relevant for groups not directly related to hardware; not used in G06T 1/20, G06T 1/60, G06T 15/005
- involving image mosaicing - image mosaicing, panoramic images
- Review paper; Tutorial; Survey - basic documents describing the state of the art
There are further series of symbols for G06T whose use is reserved to particular maingroups or ranges of maingroups and whose full list and description are given in the FCRs of the respective maingroups:
G06T 2201/00 for G06T 1/0021 only
G06T 2207/00 for G06T 5/00 and G06T 7/00 only
G06T 2219/00 for G06T 9/00 only
G06T 2210/00 for G06T 11/00 - G06T 19/00 only; see list below
G06T 2211/40 for G06T 11/003 only
G06T 2213/00 for G06T 13/00 only
G06T 2215/00 for G06T 15/00 only
G06T 2219/00 for G06T 19/00 only
G06T 2219/20 for G06T 19/20 only
Symbols from the series G06T 2210/00 for allocation in the range of G06T 11/00 - G06T 19/00 only:
- Indexing scheme for image generation or computer graphics - Not used for classification
- architectural design, interior design - interior/garden/facade design, architectural layout plans
- bandwidth reduction
- bounding box - convex hull for polygons or 3D objects
- cloth - animation, rendering or modelling of cloth/garment/textile, virtual dressing rooms
- collision detection, intersection - intersection/collision detection of 3D objects
- cropping - cropping of image borders
- fluid dynamics - animation, rendering or modelling of fluid flows
- force feedback - virtual force
- image data format - conversion between different image or graphics formats
- level of detail - level of detail, also for textures (e.g. mip-mapping)
- medical - medical applications concerning e.g. heart, lung, brain, tumours
- morphing - morphing or warping
- parallel processing
- particle system, point based geometry or rendering - rendering and animation of particle systems (e.g. fireworks, dust, clouds), point clouds, splatting
- scene description - scene graphs, scene description languages, e.g. VRML
- semi-transparency - screen-door effect, change of transparency values
- weathering - weathering effects, e.g. aging, corrosion
In this place, the following terms or expressions are used with the meaning indicated:
2D | Two-dimensional |
3D | Three-dimensional |
4D | Four-dimensional, 3D in time |
CAD | Computer-Aided Design (in computer graphics); Computer-Aided Detection (in image analysis) |
MR | Magnetic Resonance (in image analysis); Mixed Reality (in computer graphics) |
Stereo | Treatment of the images of exactly two cameras in a pairwise manner |
In patent documents, the following abbreviations are often used:
ANN | Artificial Neural Network |
AR | Augmented Reality |
CT | Computed Tomography |
DCE-MRI | Dynamic Contrast-Enhanced Magnetic Resonance Imaging |
DCT | Discrete Cosine Transform |
DRR | Digitally Reconstructed Radiograph |
DTS | Digital Tomosynthesis |
GUI | Graphical User Interface |
IC | Integrated Circuit |
ICP | Iterative Closest Point |
LCD | Liquid Crystal Display |
MRF | Markov Random Field |
MRI | Magnetic Resonance Imaging |
PCB | Printed Circuit Board |
RGB | Red, Green, Blue |
ROI | Region of Interest |
SLAM | Simultaneous Localisation And Mapping |
SNR | Signal-to-Noise Ratio |
SPECT | Single Photon Emission Computed Tomography |
US | Ultrasound |
VOI | Volume of Interest |
VR | Virtual Reality |
This place covers:
Capturing or storing images from or to memory
This place does not cover:
- Scanning, transmission or reproduction of documents or the like
- Television cameras
This place covers:
- Machine vision or tool control
- Image feedback for robot navigation or walking
- 3D vision systems.
This place does not cover:
- Vision controlled manipulators
- Accessories fitted to manipulators including video camera means
- Control of position, course, altitude or attitude of land, water, air or space vehicles using means capturing signals occurring naturally from the environment for determining position or orientation
This place covers:
- Image watermarking in general.
- Applications or software packages for watermarking.
Illustrative example - Hiding a digital image (message) into another digital image (carrier) (US6094483 - UNIV NEW YORK STATE RES FOUND):
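The cited example hides one digital image (the message) inside another (the carrier). As a hedged, minimal sketch of this general idea, and not the specific method of US6094483, the snippet below embeds a binary message image into the least significant bits of an 8-bit grayscale carrier; all function names are hypothetical.

```python
def embed_lsb(carrier, message):
    """Hide a 1-bit message image in the LSBs of an 8-bit carrier image."""
    return [(c & 0xFE) | (m & 1) for c, m in zip(carrier, message)]

def extract_lsb(stego):
    """Recover the hidden 1-bit message from the stego image."""
    return [p & 1 for p in stego]

carrier = [120, 121, 200, 37]   # 8-bit grey values
message = [1, 0, 1, 1]          # binary message image
stego = embed_lsb(carrier, message)
print(stego)                    # [121, 120, 201, 37]: each pixel changed by at most 1
print(extract_lsb(stego))       # [1, 0, 1, 1]
```

Because only the lowest bit of each carrier pixel changes, the visible distortion is minimal, which is why LSB embedding is a standard textbook introduction to image watermarking.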
This place does not cover:
- Testing specially adapted to determine the identity or genuineness of paper currency or similar valuable papers
- Audio watermarking
- Arrangements for secret or secure communication using encryption of data
- Arrangements for secret or secure communication using electronic signatures
Attention is drawn to the following places, which may be of interest for search:
- Security arrangements for protecting computers or computer systems against unauthorised activity
- Circuits for prevention of unauthorised reproduction or copying
- Scanning, transmission or reproduction of documents involving image watermarking
This place covers:
- Adaptations based on Human Visual System [HVS].
- Perceptual masking.
- Preservation of image quality; Distortion minimization.
- Methods to measure quality of watermarked images.
- Measuring the balance between quality and robustness, e.g. keeping robustness fixed while adapting quality, or vice versa.
Illustrative example - Changing a portion of an image based on an embedding strength map (EP1170938 - HITACHI LTD):
This place covers:
- Embedding without modifying the size of input.
- Embedding or modifying the watermark directly in a coded image or video stream, without decoding first.
This place covers:
- Birthday attacks.
- Forgery.
Illustrative example - Changing pixels at selected positions according to a replacement table (WO2011021114 - NDS LIMITED):
This place covers:
- Resistance; Resistance to attacks or distortions; Distortion compensation.
- Strength.
- Collusion attacks; Average attacks; Averaging.
- Reliable detection, e.g. with reduced likelihood of false positive/negative.
Illustrative example - Watermarking an image using the difference of average intensity of two adjacent blocks (EP1927948 - FUJITSU LTD):
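The cited example encodes a watermark bit in the difference of average intensity of two adjacent blocks. A hedged sketch of that general principle (not the EP1927948 method itself; all names are hypothetical) is to raise one block's mean and lower the other's, so that the sign of their difference carries the bit:

```python
def block_mean(img, cols, r0, c0, size):
    """Mean intensity of a size x size block with top-left corner (r0, c0)."""
    vals = [img[(r0 + r) * cols + (c0 + c)] for r in range(size) for c in range(size)]
    return sum(vals) / len(vals)

def embed_bit(img, cols, r0, c0, size, bit, strength=4):
    """Encode `bit` in the sign of mean(left block) - mean(right block)."""
    out = list(img)
    for r in range(size):
        for c in range(size):
            i = (r0 + r) * cols + c0 + c          # pixel in the left block
            j = (r0 + r) * cols + c0 + size + c   # pixel in the right block
            if bit:
                out[i] = min(255, out[i] + strength)
                out[j] = max(0, out[j] - strength)
            else:
                out[i] = max(0, out[i] - strength)
                out[j] = min(255, out[j] + strength)
    return out

def detect_bit(img, cols, r0, c0, size):
    left = block_mean(img, cols, r0, c0, size)
    right = block_mean(img, cols, r0, c0 + size, size)
    return 1 if left > right else 0

flat = [128] * 16                      # 4x4 image, all mid-grey
marked = embed_bit(flat, 4, 0, 0, 2, 1)
print(detect_bit(marked, 4, 0, 0, 2))  # 1
```

Detection relies only on the block statistics, not on the original image, which makes such schemes comparatively robust to mild noise and compression.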
This place covers:
Watermarking techniques for JPEG or MPEG or for a wavelet transformed image.
Illustrative example - Embedding a watermark in a DC component region of a wavelet transformed image (US2004047489 - KOREA ELECTRONICS TELECOMM):
This place covers:
- Robust against resizing or rotation or cropping, etc.
- Determining the rescaling factor or rotation angle from the watermark, i.e. using it as a calibration signal, so as to compensate the image.
- Desynchronization attacks.
Illustrative example - Combining a reference mark with an identification mark and embedding them in image textures to detect the applied transformations (GB2378602 - CENTRAL RESEARCH LAB LTD):
This place covers:
- Many, possibly different, watermarks on the same image, e.g. for copy or distribution control.
- Same watermark repeated on different parts of the image.
Illustrative example - Encoding payload in relative positions and/or polarities of multiple embedded watermarks (WO0111563 - KONINKL PHILIPS ELECTRONICS NV):
This place covers:
Using thresholds to define ranges of detection probability or ranges of robustness.
Illustrative example - Multiple thresholds for reducing false detection likelihood (EP1271401 - SONY UK LTD):
This place covers:
Watermarks spread over several images or frames or a sequence.
Illustrative example - Alternating watermark patterns (e.g. by translation, mirror, rotation) to improve the reliability of scale factor measurement (WO2005109338 - KONINKL PHILIPS ELECTRONICS NV):
This place covers:
Illustrative example - Calculating the capacity of the DCT coefficients of a digital image file and selecting the ones suited to embedding, thereby providing robustness (US6724913 - HSU WEN-HSING):
This place covers:
- Graphics accelerators; Graphic processing units (GPUs).
- Graphics pipelines.
- Parallel or massively parallel data bus specially adapted for image data processing.
- Architecture or signal processor specially adapted for image data processing.
- VLSI or SIMD or fine-grained machines specially adapted for image data processing.
- Multiprocessor or multicomputer or multi-core specially adapted for image data processing.
Illustrative example - Ring architecture for image data processing:
Attention is drawn to the following places, which may be of interest for search:
- Architectures of general purpose stored program computers
In this place, the following terms or expressions are used with the meaning indicated:
Pipelining | the use of a sequence (pipeline) of image processing stages for execution of instructions in a series of units, arranged so that several units can be used for simultaneously processing appropriate parts of several instructions. |
Multiprocessor | processor arrangements comprising a computer system consisting of two or more processors for the simultaneous execution of two or more programs or sequences of instructions. |
In patent documents, the following abbreviations are often used:
GPU | Graphics Processing Unit |
This place covers:
- Address generation or addressing circuit or BitBlt for image data processing.
- 3D or virtual or cache memory specially adapted for image data processing.
- Frame or screen or image memory specially adapted for image data processing.
Illustrative example - Cache memory for image processing (EP0589724 - QUANTEL LTD)
This place does not cover:
- Accessing, addressing or allocating within memory systems or architectures
- Ping-pong buffers
- Arrangements for selecting an address in a digital store
- Digital stores characterised by the use of particular electric or magnetic storage elements
This place covers:
Geometric image transformations in the plane of the image.
Attention is drawn to the following places, which may be of interest for search:
- Image enhancement or restoration
- Image animation
- Geometric effects for 3D image rendering
- Perspective computation for 3D image rendering
- Geographic models in 3D modelling for computer graphics
- Matrix or vector computation
- Conversion of standards for television systems
This place covers:
- Affine transformations not further specified.
- Combinations of affine transformations including rotation, scaling or shear.
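An affine transformation in the image plane expresses any combination of rotation, scaling, shear and translation as a single 2x3 matrix. As a hedged illustration (function names are assumptions, not part of the classification scheme), the sketch below warps a grayscale image by inverse mapping each output pixel through the matrix and sampling with nearest neighbour:

```python
def affine_warp(img, rows, cols, a, b, c, d, tx, ty, fill=0):
    """Inverse-map each output pixel (x, y) through the affine matrix
    [[a, b, tx], [c, d, ty]] and sample the input with nearest neighbour."""
    det = a * d - b * c                          # invert the 2x2 linear part
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    out = []
    for y in range(rows):
        for x in range(cols):
            sx = ia * (x - tx) + ib * (y - ty)   # source coordinates
            sy = ic * (x - tx) + id_ * (y - ty)
            xi, yi = round(sx), round(sy)
            out.append(img[yi * cols + xi]
                       if 0 <= xi < cols and 0 <= yi < rows else fill)
    return out

img = [1, 2,
       3, 4]                                     # 2x2 image, row-major
print(affine_warp(img, 2, 2, 1, 0, 0, 1, 1, 0)) # shift right by 1: [0, 1, 0, 3]
```

Inverse mapping (output to input) avoids holes in the result, which is why it is the usual implementation choice over forward mapping.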
This place does not cover:
- Transformations for image registration using affine transformations
- Image mosaicing, e.g. composing plane images from plane sub-images
This place covers:
- Selective warping according to an importance map; Smart image reduction.
- Seam carving; Liquid resizing; Image retargeting.
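Seam carving reduces image width by repeatedly removing the connected vertical path of pixels with least "importance" (energy), so that salient content survives while flat regions shrink. A simplified, hedged sketch (gradient-magnitude energy, dynamic programming, one seam) on a grayscale image given as a list of rows:

```python
def remove_vertical_seam(img):
    """Remove one minimum-energy vertical seam from a grayscale image
    given as a list of rows (seam carving / content-aware reduction)."""
    rows, cols = len(img), len(img[0])
    # energy: absolute horizontal gradient per pixel
    energy = [[abs(img[r][min(c + 1, cols - 1)] - img[r][max(c - 1, 0)])
               for c in range(cols)] for r in range(rows)]
    # dynamic programming: cumulative minimum energy to reach each pixel
    cost = [energy[0][:]]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([energy[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
                     for c in range(cols)])
    # backtrack the cheapest connected seam from bottom to top
    c = min(range(cols), key=lambda j: cost[-1][j])
    seam = [c]
    for r in range(rows - 2, -1, -1):
        lo, hi = max(seam[-1] - 1, 0), min(seam[-1] + 2, cols)
        seam.append(min(range(lo, hi), key=lambda j: cost[r][j]))
    seam.reverse()
    return [row[:sc] + row[sc + 1:] for row, sc in zip(img, seam)]

img = [[10, 10, 200],
       [10, 10, 200],
       [10, 10, 200]]
print(remove_vertical_seam(img))   # a low-energy flat column is removed
```

The low-gradient column is carved out while the strong edge between 10 and 200 is preserved, which is the defining behaviour of importance-map-driven retargeting.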
Illustrative example of subject matter classified in this place:
This place does not cover:
- Panospheric to cylindrical image transformation
This place covers:
Establishing a lens for a region-of-interest.
Illustrative example of subject matter classified in this place:
This place covers:
- Side or corner panels; Perspective wall.
- Document lens.
Illustrative example of subject matter classified in this place:
This place does not cover:
- Fisheye, wide-angle transformation
This place covers:
Flattening the scanned image of a bound book.
Illustrative example of subject matter classified in this place:
Attention is drawn to the following places, which may be of interest for search:
- Panospheric to cylindrical image transformation
- Texture mapping
- Manipulating 3D models or images for computer graphics
This place covers:
Curved planar reformation [CPR].
Attention is drawn to the following places, which may be of interest for search:
- Manipulating 3D models or images for computer graphics
This place covers:
Mapping a surface of revolution to a plane, e.g. mapping a pot or a can to a plane.
Illustrative example of subject matter classified in this place:
This place covers:
- Geometric image transformation for projecting an image on a multi-projectors system or on a geodetic screen; Dome imaging.
- Geometric image transformation for projecting an image through multi-planar displays.
Attention is drawn to the following places, which may be of interest for search:
- Texture mapping
This place covers:
- Selecting the interpolation method depending on the scale factor.
- Selecting the interpolation method depending on media type or image appearance characteristics.
Illustrative example of subject matter classified in this place:
This place covers:
- Omnidirectional or hyperboloidal to cylindrical image transformation or mapping; Catadioptric transformation, e.g. images from surveillance cameras.
- Panospheric image transformation or mapping by using the output of a multiple cameras system.
Illustrative example of subject matter classified in this place:
This place covers:
Geometric image transformations:
- for iterative image registration;
- for spline-based image registration;
- for mutual-information-based registration;
- for phase correlation or FFT-based methods;
- using fiducial points, e.g. landmarks;
- for maximised mutual information-based methods.
Attention is drawn to the following places, which may be of interest for search:
- Determination of transform parameters for the alignment of images, i.e. image registration
This place covers:
- Elastic mapping or snapping or matching; Deformable mapping.
- Diffeomorphic representations of deformations to control the image registration process.
Illustrative example of subject matter classified in this place:
This place covers:
- Video cubism; Video cube.
- Dynamic panoramic video.
- Stylized video cubes.
Attention is drawn to the following places, which may be of interest for search:
- Image animation
This place covers:
- Resampling; Resolution conversion.
- Zooming or expanding or magnifying or enlarging or upscaling.
- Shrinking or reducing or compressing or downscaling.
- Pyramidal partitions; Storing sub-sampled copies.
- Area based or weighted interpolation; Scaling by surface fitting, e.g. piecewise polynomial surfaces, B-splines or Beta-splines.
- Two-steps image scaling, e.g. by stretching.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
- Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
Attention is drawn to the following places, which may be of interest for search:
- Polynomial surface description for image modeling
- Enlarging or reducing for scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission
- Studio circuits for television systems involving alteration of picture size or orientation
- Frame rate conversion; De-interlacing
- Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
This place covers:
- Linear or bi-linear or tetrahedral or cubic image interpolation.
- Adaptive interpolation, e.g. the coefficients of the interpolation depend on the pattern of the local structure.
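Bilinear interpolation, the simplest of the listed methods, weights the four nearest pixels by the fractional distance to each. A hedged, self-contained sketch (all names hypothetical):

```python
def bilinear_sample(img, rows, cols, y, x):
    """Bilinearly interpolate a grayscale image (flat row-major list)
    at fractional coordinates (y, x)."""
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, rows - 1), min(x0 + 1, cols - 1)
    fy, fx = y - y0, x - x0
    top = img[y0 * cols + x0] * (1 - fx) + img[y0 * cols + x1] * fx
    bot = img[y1 * cols + x0] * (1 - fx) + img[y1 * cols + x1] * fx
    return top * (1 - fy) + bot * fy

img = [0, 10,
       20, 30]                                  # 2x2 grayscale image
print(bilinear_sample(img, 2, 2, 0.5, 0.5))     # centre value: 15.0
```

Adaptive schemes replace the fixed (1 - f, f) weights with coefficients chosen from the local structure, but the sampling framework is the same.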
Illustrative example of subject matter classified in this place:
This place does not cover:
- Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
- Edge-driven scaling; Edge-based scaling
This place covers:
- CFA demosaicing or demosaicking or interpolating.
- Bayer pattern.
- Colour-separated images, i.e. one colour in each image quadrant.
Illustrative examples of subject matter classified in this place:
1. Image demosaicing
2. Colour-separated image
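As a hedged sketch of CFA interpolation (not any cited document's method; names are hypothetical), the snippet below estimates the missing green value at a red or blue site of an RGGB Bayer mosaic by averaging its four green neighbours:

```python
def green_at(bayer, rows, cols, r, c):
    """Estimate the green value at a red/blue site of an RGGB Bayer
    mosaic by averaging the 4-connected green neighbours."""
    neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    vals = [bayer[nr * cols + nc] for nr, nc in neigh
            if 0 <= nr < rows and 0 <= nc < cols]
    return sum(vals) / len(vals)

# 3x3 Bayer-patterned mosaic; the centre site (1, 1) lacks a green sample,
# but its four direct neighbours are all green sites
bayer = [ 50, 100,  50,
         100,  80, 100,
          50, 100,  50]
print(green_at(bayer, 3, 3, 1, 1))   # 100.0
```

Practical demosaicing additionally interpolates red and blue planes and often follows edges to avoid colour fringing; this plain averaging is only the baseline case.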
This place covers:
- Pixel or row deletion or removal.
- Pixel or row insertion or duplication or replication.
- Decimating FIR filters.
- Array indexes or tables, e.g. LUT.
Illustrative example of subject matter classified in this place:
Decimating by using two arrays of indexes
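The indexed-decimation idea above can be sketched as follows: precompute two lookup tables (one for rows, one for columns) that map each output position to a kept input position, then gather. All helper names are hypothetical.

```python
def make_index_lut(n_in, n_out):
    """Precompute a lookup table mapping output positions to input
    positions for nearest-neighbour resampling."""
    return [i * n_in // n_out for i in range(n_out)]

def decimate(img, rows, cols, out_rows, out_cols):
    """Shrink an image by row/column deletion using two index LUTs."""
    row_lut = make_index_lut(rows, out_rows)
    col_lut = make_index_lut(cols, out_cols)
    return [img[r * cols + c] for r in row_lut for c in col_lut]

img = list(range(16))               # 4x4 image with values 0..15
print(decimate(img, 4, 4, 2, 2))    # keeps rows/cols 0 and 2: [0, 2, 8, 10]
```

Because the tables are computed once per output size, the per-pixel work reduces to a single indexed read, which is why LUT-based decimation suits hardware implementations.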
This place covers:
- Edge adaptive or directed or dependent or following or preserving interpolation; Edge preservation.
- Edge map injecting or projecting or combining or superimposing.
Illustrative example of subject matter classified in this place:
Correcting for abnormalities next to boundaries
This place covers:
- Image mosaicing or mosaicking.
- Panorama views.
- Mosaic of video sequences; Salient video still; Video collage or synopsis.
Illustrative example of subject matter classified in this place:
Image mosaicing for microscopy applications
Attention is drawn to the following places, which may be of interest for search:
- Image processing arrangements associated with discharge tubes with provision for introducing objects or material to be exposed to the discharge
This place covers:
- Using neural networks specially adapted for image interpolation.
- Using neural networks specially adapted for interpolation coefficient selection.
Illustrative example of subject matter classified in this place:
Using a neural network to select the coefficients of a polynomial interpolation
Attention is drawn to the following places, which may be of interest for search:
- Image enhancement or restoration using machine learning, e.g. neural networks
- Neural networks
- Machine learning
- Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
This place covers:
- Super resolution by fitting the pixel intensity to a mathematical function.
- Super resolution from image sequences; Images or frames addition, coaddition or combination.
- Super resolution by iteratively applying constraints, e.g. energy reduction, on the transform domain and inverse transforming.
Illustrative example of subject matter classified in this place:
Fitting a mathematical function and resampling:
Attention is drawn to the following places, which may be of interest for search:
- Image enhancement or restoration using two or more images, e.g. averaging or subtraction
This place covers:
Fusion of multi-sensor or multiband images.
This place covers:
Illustrative example of subject matter classified in this place:
Displaying sub-frames at spatially offset positions
This place covers:
Illustrative example of subject matter classified in this place:
Iterative correction of the high-resolution image:
This place covers:
- DCT coefficients decimation or insertion for image scaling.
- Zero padding DCT coefficients for image scaling.
- Downscaling by selecting a specific wavelet sub-band.
Illustrative example of subject matter classified in this place:
Enlargement/reduction through DCT interpolation/decimation
This place covers:
Adapting the image resolution to the client's capabilities.
Illustrative example of subject matter classified in this place:
In the figure above, the processing unit is coupled downstream of the video cross-point switcher and generates additionally scaled video streams by applying further scaling to the initially scaled video stream.
Attention is drawn to the following places, which may be of interest for search:
- Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
- Server adapted for processing of video elementary streams, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- Selective content distribution in client devices adapted for processing of additional data
- Selective content distribution in client devices adapted for processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
This place covers:
- Transpose or continuous write-transpose-read.
- Mirror.
- Run-length (RL) rotation.
Attention is drawn to the following places, which may be of interest for search:
- Scanning, transmission or reproduction of documents involving composing, repositioning or otherwise modifying originals
- Studio circuits for television
This place covers:
Illustrative example of subject matter classified in this place:
Rotation by recursive reversing
This place covers:
Illustrative example of subject matter classified in this place:
Continuous read-transpose-write
This place covers:
- Shift processing;
- Rotation by shearing.
Illustrative example of subject matter classified in this place:
Image rotation by two-pass de-skewing
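Shear-based rotation is attractive because each shear pass is only a row or column shift. As a hedged illustration of the classic three-shear decomposition (shown here on a single coordinate rather than a full raster; the function name is hypothetical):

```python
import math

def rotate_by_shears(x, y, theta):
    """Rotate a point by three shear passes (x-shear, y-shear, x-shear),
    each of which corresponds to a simple row or column shift when
    applied to a raster image."""
    a = -math.tan(theta / 2)
    b = math.sin(theta)
    x = x + a * y        # first x-shear: shift each row
    y = y + b * x        # y-shear: shift each column
    x = x + a * y        # second x-shear
    return x, y

x, y = rotate_by_shears(1.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6))   # 0.0 1.0, i.e. a 90-degree rotation
```

Composing the three shear matrices reproduces the rotation matrix exactly, so no resampling other than per-row and per-column shifts is needed.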
This place covers:
Image enhancement or restoration:
- using non-spatial domain filtering;
- using local operators;
- using morphological operators, i.e. erosion or dilation;
- using histogram techniques;
- using two or more images, e.g. averaging or subtraction;
- using machine learning, e.g. neural networks;
- Denoising; Smoothing;
- Deblurring; Sharpening;
- Unsharp masking;
- Retouching; Inpainting; Scratch removal;
- Geometric correction;
- Dynamic range modification of images or parts thereof.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
- Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors
- Camera processing pipelines in cameras or camera modules comprising electronic image sensors
- Noise processing, e.g. detecting, correcting, reducing or removing noise in circuitry of solid-state image sensors [SSIS]
Attention is drawn to the following places, which may be of interest for search:
- Neural networks
- Image preprocessing for image or video recognition or understanding
- Image processing adapted to be used in scanners, printers, photocopying machines, displays or similar devices, including composing, repositioning or otherwise modifying originals
- Picture signal circuits adapted to be used in scanners, printers, photocopying machines, displays or similar devices
- Processing of colour picture signals in scanners, printers, photocopying machines, displays or similar devices
- Computational photography systems, e.g. light-field imaging systems
This group focuses on image processing algorithms. Although such algorithms sometimes need to take characteristics of the underlying image acquisition apparatus into account, inventions directed to the image acquisition apparatus per se are outside the scope of this group.
Whenever possible, additional information should be classified using one or more of the indexing codes from the ranges of G06T 2200/00 (see definitions re. G06T) or G06T 2207/00 (see definitions re. G06T 2207/00).
The classification symbol G06T 5/00 should be allocated to documents concerning:
- Interactive / multiple choice image processing, e.g. choosing outputs from multiple enhancement algorithms;
- Image restoration based on properties or models of the human vision system [HVS]
In patent documents, the following abbreviations are often used:
HDR | high dynamic range |
HDRI | high dynamic range imaging |
HMM | hidden Markov model |
PSF | point spread function |
SDR | standard dynamic range |
This place covers:
All transform domain-based enhancement methods, e.g. using:
- Fourier transform, discrete Fourier transform [DFT] or fast Fourier transform [FFT];
- Hadamard transform;
- Discrete cosine transform [DCT];
- Wavelet transform, discrete wavelet transform [DWT].
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
- Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine
- Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors
- Camera processing pipelines in cameras or camera modules comprising electronic image sensors
- Noise processing, e.g. detecting, correcting, reducing, or removing noise in circuitry of solid-state image sensors [SSIS]
This place covers:
- Convolution with a mask or kernel in the spatial domain;
- High-pass filter, low-pass filter;
- Gauss filter, Laplace filter;
- Averaging filter, mean filter, blurring filter;
- Differential filters (e.g. Sobel operator);
- Median filter;
- Bilateral filter;
- Minimum, maximum or rank filtering;
- Wiener filter;
- Phase-locked loops, detectors, mixers;
- Recursive filter;
- Distance transforms;
- Local image processing architectures.
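The listed local operators share one mechanism: a small mask slides over the image and each output pixel is computed from its neighbourhood. A hedged sketch of the generic convolution loop with replicated borders (all names hypothetical):

```python
def convolve2d(img, kernel):
    """Apply a small kernel to a grayscale image (list of rows),
    replicating the border pixels (a typical local operator)."""
    rows, cols = len(img), len(img[0])
    k = len(kernel) // 2
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            acc = 0.0
            for dr in range(-k, k + 1):
                for dc in range(-k, k + 1):
                    rr = min(max(r + dr, 0), rows - 1)   # clamp to border
                    cc = min(max(c + dc, 0), cols - 1)
                    acc += img[rr][cc] * kernel[dr + k][dc + k]
            row.append(acc)
        out.append(row)
    return out

box = [[1 / 9] * 3] * 3                   # 3x3 averaging (blur) kernel
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(convolve2d(img, box)[1][1])         # centre spread out: approximately 1.0
```

Strictly, convolution flips the kernel; for symmetric kernels such as the box blur or Gaussian, this correlation-style loop gives the identical result.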
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
- Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine
- Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors
- Camera processing pipelines in cameras or camera modules comprising electronic image sensors
- Noise processing, e.g. detecting, correcting, reducing, or removing noise in circuitry of solid-state image sensors [SSIS]
Attention is drawn to the following places, which may be of interest for search:
- Applying local operators during image preprocessing for image or video recognition or understanding
This place covers:
All morphology-based operations for image enhancement, e.g. using:
- Thickening, thinning;
- Opening, closing;
- Erosion, dilation;
- Structuring elements;
- Skeletons;
- Geodesic transforms.
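Erosion and dilation, the two primitive operations above, can be sketched on binary images as follows (a hedged, minimal illustration; names hypothetical). Erosion keeps a pixel only if the structuring element fits entirely inside the foreground; dilation sets a pixel if the element touches any foreground pixel.

```python
def erode(img, se):
    """Binary erosion of a list-of-rows image by structuring element se."""
    rows, cols, k = len(img), len(img[0]), len(se) // 2
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ok = all(not se[dr + k][dc + k]
                     or (0 <= r + dr < rows and 0 <= c + dc < cols
                         and img[r + dr][c + dc])
                     for dr in range(-k, k + 1) for dc in range(-k, k + 1))
            out[r][c] = 1 if ok else 0
    return out

def dilate(img, se):
    """Binary dilation of a list-of-rows image by structuring element se."""
    rows, cols, k = len(img), len(img[0]), len(se) // 2
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            hit = any(se[dr + k][dc + k]
                      and 0 <= r + dr < rows and 0 <= c + dc < cols
                      and img[r + dr][c + dc]
                      for dr in range(-k, k + 1) for dc in range(-k, k + 1))
            out[r][c] = 1 if hit else 0
    return out

se = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]    # cross-shaped structuring element
blob = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # single foreground pixel
print(erode(blob, se))    # pixel eroded away: all zeros
print(dilate(blob, se))   # pixel grown into the cross shape
```

Opening (erosion then dilation) and closing (dilation then erosion) are simply compositions of these two primitives.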
Illustrative examples of subject matter classified in this place:
Attention is drawn to the following places, which may be of interest for search:
- Segmentation or edge detection involving morphological operators
- Smoothing or thinning of patterns during image preprocessing for image or video recognition or understanding
This place covers:
All histogram-based image enhancement methods.
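The archetypal histogram-based method is histogram equalisation: intensities are remapped through the normalised cumulative histogram so the output uses the full dynamic range. A hedged, self-contained sketch (names hypothetical):

```python
def equalize(img, levels=256):
    """Histogram equalisation of a grayscale image (flat list of ints)."""
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    cdf, total = [], 0                     # cumulative histogram
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(img)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]                   # remapping table
    return [lut[p] for p in img]

# a low-contrast image is stretched across the full range
img = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(img))                       # [0, 0, 85, 85, 170, 170, 255, 255]
```

Variants such as adaptive (local) equalisation apply the same remapping per tile rather than globally.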
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors | |
Camera processing pipelines in cameras or camera modules comprising electronic image sensors |
Attention is drawn to the following places, which may be of interest for search:
Dynamic range modification of images or parts thereof | |
Histogram techniques adapted to be used in scanners, printers, photocopying machines, displays or similar devices | |
Equalising the characteristics of different image components, e.g. their average brightness or colour balance, in stereoscopic or multi-view video systems |
This place covers:
- Image averaging;
- Image fusion, image merging;
- Image subtraction;
- Enhanced final image by combining multiple, e.g. degraded, images, while maintaining the same number of pixels (for increased number of pixels: see G06T 3/40);
- Full-field focus from multiple depth-of-field images, e.g. from confocal microscopy;
- Processing of synthetic aperture radar [SAR] images;
- Energy subtraction;
- Bright field, dark field processing;
- Angiography image processing;
- High dynamic range [HDR] image processing;
- Multispectral image processing;
- Computational photography, e.g. coded aperture imaging.
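The two simplest multi-image operations above can be sketched as follows (a hedged illustration; names hypothetical). Averaging registered frames suppresses independent noise, roughly by the square root of the frame count; subtraction isolates what changed between two exposures, as in subtraction angiography.

```python
def average_images(frames):
    """Pixel-wise average of several registered frames (flat lists)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def subtract_images(a, b):
    """Pixel-wise difference, e.g. mask image minus contrast image."""
    return [pa - pb for pa, pb in zip(a, b)]

frames = [[10, 20], [14, 18], [12, 22]]     # three noisy captures
print(average_images(frames))               # [12.0, 20.0]
print(subtract_images([50, 60], [30, 25]))  # [20, 35]
```

HDR merging and image fusion generalise the same idea with per-pixel weights instead of a plain mean.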
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
- Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors
- Camera processing pipelines in cameras or camera modules comprising electronic image sensors
- Scaling of whole images or parts thereof based on super-resolution
- Unsharp masking
- Radar or analogous systems, specially adapted for mapping or imaging using synthetic aperture techniques
- Spatial compounding in short-range sonar imaging systems
- Confocal scanning microscopes
- Computational photography systems, e.g. light-field imaging systems
This place covers:
All machine learning-based image enhancement methods, e.g. using:
- artificial neural networks [ANN], convolutional neural networks [CNN], generative adversarial networks [GAN] or deep learning;
- decision trees;
- support-vector machines;
- regression analysis;
- Bayesian networks;
- Gaussian processes;
- genetic algorithms.
Illustrative example of subject matter classified in this place:
Attention is drawn to the following places, which may be of interest for search:
- Neural networks
- Learning methods
- Machine learning
- Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
This place covers:
- Removing noise from images;
- Temporal denoising, spatio-temporal noise filtering;
- Removing pattern noise from images;
- Image smoothing;
- Image blurring, adding motion blur to images, adding blur to images;
- Edge-adaptive smoothing;
- Smoothing of depth map in stereo or range images;
- Antialiasing by image filtering;
- Denoising or smoothing using singular value decomposition [SVD].
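The spatial denoising covered here can be illustrated with a minimal sketch (illustrative code, not taken from any classified document): a naive 3x3 median filter, a classic remover of impulse ("salt-and-pepper") noise.

```python
import numpy as np

def median_filter3(img):
    """Naive 3x3 median filter: replace each interior pixel by the
    median of its 3x3 neighbourhood (borders are left unchanged)."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

# A flat grey patch with a single 'salt' pixel: the outlier disappears.
noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0
clean = median_filter3(noisy)
```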
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Camera processing pipelines for suppressing or minimising disturbance in the image signal generation | |
Noise processing in circuitry of solid-state image sensors [SSIS], e.g. detecting, correcting, reducing or removing noise |
Attention is drawn to the following places, which may be of interest for search:
Antialiasing during drawing of lines | |
Antialiasing during filling a planar surface by adding surface attributes, e.g. colour or texture | |
Noise filtering in image pre-processing for image or video recognition or understanding | |
Noise or error suppression in colour picture communication systems | |
Processing image signals for flicker reduction in stereoscopic or multi-view video systems |
This place covers:
- Deblurring;
- Removing motion blur from images;
- Point-spread function [PSF] model of blurring;
- Deconvolution;
- Modulation transfer function [MTF];
- Sharpening, crispening;
- Edge enhancement, edge boosting.
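As an illustrative sketch of deconvolution with a known point-spread function (the function and parameter names are ours, not from any cited document), a naive inverse filter divides the image spectrum by the PSF's transfer function:

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-12):
    """Naive frequency-domain deconvolution: divide the spectrum of the
    blurred image by the optical transfer function (the FFT of the
    point-spread function [PSF]); eps regularises frequencies where the
    PSF response is near zero (larger eps = more robust to noise, but
    more residual blur)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Blur a test image by circular convolution with a small Gaussian-like
# PSF, then restore it; with a noise-free image the restoration is
# essentially exact.
rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, size=(15, 15))
psf = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = inverse_filter(blurred, psf)
```

In practice the plain inverse filter amplifies noise; the eps term is the simplest stand-in for Wiener-style regularisation.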
Illustrative examples of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Vibration or motion blur correction for stable pick-up of the scene in cameras or camera modules comprising electronic image sensors |
Attention is drawn to the following places, which may be of interest for search:
Edge-driven scaling | |
Edge or detail enhancement for scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission | |
Edge or detail enhancement in colour picture communication systems |
This place covers:
- Unsharp masking;
- Adding or subtracting a processed version of an image to or from the image.
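The unsharp masking operation described above (adding back the difference between the image and a low-pass version of it) can be sketched as follows; the box blur stands in for any low-pass filter:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur (edge-replicated borders), used here as
    the low-pass 'unsharp' version of the image."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y+k, x:x+k].mean()
    return out

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding the difference between the image and its
    blurred version back to the image."""
    blurred = box_blur(img)
    return img + amount * (img - blurred)

# A step edge gains overshoot/undershoot, i.e. is visibly sharpened,
# while flat regions are left unchanged.
edge = np.tile(np.array([0.0] * 4 + [100.0] * 4), (8, 1))
sharp = unsharp_mask(edge)
```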
Illustrative example of subject matter classified in this place:
This place covers:
- Concealing defective pixels in images;
- Scratch removal;
- Inpainting by image filtering or by replacing patches within an image using a generated image or texture patch, or a patch retrieved from another source, e.g. image databases or the internet;
- Correcting red-eye defects.
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Scratch removal adapted to be used in scanners, printers, photocopying machines, displays or similar devices | |
Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine | |
Noise processing, e.g. detecting, correcting, reducing or removing noise in circuitry of solid-state image sensors [SSIS] |
Attention is drawn to the following places, which may be of interest for search:
Segmentation or edge detection in image analysis | |
Analysis of geometric attributes in image analysis | |
Determining position or orientation of objects or cameras in image analysis | |
Determination of colour characteristics in image analysis | |
Texture generation as such | |
Recognition of eye characteristics | |
Modification of content of picture, e.g. retouching | |
Retouching colour images adapted to be used in scanners, printers, photocopying machines, displays or similar devices | |
Red-eye correction adapted to be used in scanners, printers, photocopying machines, displays or similar devices |
This place covers:
- Correcting lens distortions or aberrations;
- Correcting pincushion, barrel, trapezoidal or fish-eye distortions;
- Calibrating parameters of lens distortion;
- Reference grids, coordinate mapping.
Illustrative example of subject matter classified in this place:
Attention is drawn to the following places, which may be of interest for search:
Geometric image transformations in the plane of the image | |
Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration | |
Normalisation of the pattern dimension during image preprocessing for image or video recognition |
This place covers:
Contrast enhancement based on a combination of local and global properties.
Illustrative examples of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors | |
Bracketing, i.e. taking a series of images with varying exposure conditions | |
Control of the dynamic range in circuitry of solid-state image sensors [SSIS] |
Attention is drawn to the following places, which may be of interest for search:
Equalising the characteristics of different image components of stereoscopic or multi-view image signals, e.g. their average brightness or colour balance |
This place covers:
- Global contrast enhancement or tone mapping to increase the dynamic range of an image, based on properties of the whole image, e.g. global statistics or histograms;
- Contrast stretching, brightness equalisation;
- Gamma and gradation correction in general;
- Tone mapping for high dynamic range [HDR] imaging;
- Intensity mapping, e.g. using lookup tables [LUT].
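Histogram equalisation is a classic instance of the global intensity mapping via a lookup table [LUT] listed above; a minimal sketch for 8-bit images (illustrative names, assumes a non-constant image):

```python
import numpy as np

def equalise_histogram(img):
    """Global histogram equalisation of an 8-bit image: build a lookup
    table [LUT] from the cumulative histogram and map every pixel
    through it, stretching the overall contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Normalise the CDF to the full 0..255 output range.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image (values 100..131) is stretched to 0..255.
img = np.arange(100, 132, dtype=np.uint8).reshape(4, 8)
eq = equalise_histogram(img)
```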
Illustrative example of subject matter classified in this place:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Picture signal circuitry for controlling amplitude response in television systems | |
Gamma control in television systems | |
Circuitry for compensating brightness variation in the scene | |
Camera processing pipelines in cameras or camera modules comprising electronic image sensors |
This place covers:
- Local contrast enhancement, e.g. locally adaptive filtering;
- Retinex processing.
Illustrative examples of subject matter classified in this place:
Attention is drawn to the following places, which may be of interest for search:
Unsharp masking |
This place covers:
- Analysis of motion, i.e. determining motion of an image subject, or of the camera having acquired the images; Tracking; Change detection, e.g. by block matching, feature-based methods, gradient-based methods, hierarchical or stochastic approaches, or motion estimation from a sequence of stereo images;
- Analysis of texture, i.e. analysis of colour or intensity features which represent a perceived image texture, e.g. based on statistical or structural descriptions;
- Analysis of geometric attributes, e.g. area, perimeter, diameter, volume, convexity, concavity, centre of gravity, moments or symmetry;
- Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration; Calibration of stereo cameras, e.g. determining the transformation between left and right camera coordinate systems;
- Computational analysis of images to determine information, e.g. parameters or characteristics, therefrom;
- Inspection or detection on images, e.g. flaw detection; Industrial image inspection, e.g. using a design-rule based approach or an image reference; Industrial image inspection checking presence/absence; Biomedical image inspection;
- Segmentation, i.e. partitioning an image into regions, or edge detection, i.e. detection of edge features in an image, e.g. involving probabilistic or graph-based approaches, deformable models, morphological operators, transform domain-based approaches or the use of more than two images;
- Motion-based segmentation;
- Determination of transform parameters for the alignment of images, i.e. image registration, e.g. by correlation-, feature- or transform domain-based or statistical approaches;
- Depth or shape recovery, i.e. determination of scene depth parameters by consideration of image characteristics; Depth or shape recovery from shading, specularities, texture, perspective effects, e.g. vanishing points, or line drawings; Depth or shape recovery from multiple images involving, amongst others, contours, focus, motion, multiple light sources, photometric stereo or stereo images;
- Determining the position or orientation of objects, e.g. by feature- or transform domain-based or statistical approaches;
- Determination of image colour characteristics.
G06T 7/00 covers the details of image analysis algorithms, insofar as it deals with the related image processing algorithms per se. Documents which merely mention the general use of image analysis, without details of the underlying image analysis algorithms, are classified in the application place. Where the image analysis is functionally linked and restricted to specific image acquisition or display hardware or processes, it is classified in the application place; otherwise, it is classified in G06T 7/00. Where the essential technical characteristics relate both to the image analysis detail and to its particular use or special adaptation, classification is made in both G06T 7/00 and the application place.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Computed tomography | |
Signal processing for Nuclear Magnetic Resonance (NMR) imaging systems | |
ICT specially adapted for processing medical images, e.g. editing | |
Scanning, transmission or reproduction of documents or the like | |
Stereoscopic television systems | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals | |
Transforming light or analogous information into electric information using solid-state image sensors |
Attention is drawn to the following places, which may be of interest for search:
Image acquisition | |
Processor architectures; Processor configuration, e.g. pipelining | |
Processing seismic data | |
Methods or arrangements for reading or recognising printed or written characters or for recognising patterns | |
Bioinformatics | |
Medical informatics |
Where the essential technical characteristics of the invention relate both to the image analysis detail and to its particular use or special adaptation, classification is made in both G06T 7/00 and the relevant application place in other subclasses.
G06T 7/00 focuses on image processing algorithms. Although such algorithms sometimes need to take into account characteristics of the underlying image acquisition apparatus, inventions to the image acquisition apparatus per se are outside the scope of this group.
Additional information should be classified using one or more of the Indexing Codes from the ranges of G06T 2200/00 or G06T 2207/00. Their use is obligatory.
The classification symbol G06T 7/00 is allocated to documents concerning:
- Architectures of image analysis systems, if not provided for elsewhere
- Extraction of MPEG-7 descriptors, if not provided for elsewhere
In this place, the following terms or expressions are used with the meaning indicated:
Stereo | Treatment of two images, e.g. from two cameras or a single camera that is displaced, in a pairwise manner |
Feature | a significant image region or pixel with certain characteristics, for example a feature point, landmark, edge, corner or blob, typically determined by image operators. |
Image analysis | the extraction of information from images through the use of image processing techniques acting upon image data, such as intensity, colour, motion and spatial frequency characteristics. |
In patent documents, the following abbreviations are often used:
AAM | Active appearance model |
ASM | Active shape model |
HMM | Hidden Markov Model |
LBP | Local Binary Pattern |
LPE | ligne de partage des eaux (French expression for watershed segmentation) |
RANSAC | Random Sample Consensus |
CAD | Computer-Aided Detection |
SLAM | Simultaneous Localization and Mapping |
This place covers:
- Quality, conformity control
- Defects, abnormality, incompleteness
- Acceptability determination
- User interface for automated visual inspection
- Database-to-object inspection
- Image quality inspection
Attention is drawn to the following places, which may be of interest for search:
Determining position or orientation of objects | |
Validation, performance evaluation or active pattern learning techniques | |
Pattern matching criteria, e.g. proximity measures | |
Clustering techniques for pattern recognition | |
Classification techniques for pattern recognition | |
Image or video pattern matching | |
Pattern recognition or machine learning using clustering within arrangements for image or video recognition or understanding | |
Detection or correction of errors in pattern recognition | |
Evaluation of the quality of the acquired pattern in pattern recognition |
In relation to the remaining, function-oriented groups of G06T 7/00, this subgroup is an application-oriented group. Therefore, documents classified herein should also be classified in a function-oriented group under G06T 7/00 if they contain a considerable contribution to the respective function.
For image quality inspection, Indexing Code G06T 2207/30168 (Image quality inspection) should be added.
This place covers:
- Quality, conformity control in industrial context
- Defects, abnormality in industrial context
- Acceptability determination in industrial context
- User interfaces for automated visual inspection in industrial context
- "Teaching" (macros for inspection algorithms)
- Database-to-object inspection in industrial context
- Printing quality
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Investigating the presence of flaws or contamination on materials |
Attention is drawn to the following places, which may be of interest for search:
Contactless testing using optical radiation for printed circuits | |
Contactless testing using optical radiation for individual semiconductor devices | |
Photolithography mask inspection | |
Component placement (in PCB manufacturing) |
When classifying in this group, the use of the indexing scheme G06T 2207/30108 - G06T 2207/30164 is mandatory for additional information related to industrial image inspection.
For user interfaces for automated visual inspection in industrial context, Indexing code G06T 2200/24 (involving graphical user interfaces [GUIs]) should be added.
This place covers:
Verifying geometric design rules or known geometric parameters, e.g. width or spacing of structures, repetitive patterns
Illustrative example:
This place covers:
- Detecting the absence of an item that should be there
- Detecting incompleteness
Illustrative examples:
This place covers:
- Industrial image inspection where an image is compared to a reference image, standard image, ground truth image, gold standard: either by image comparison at image level, e.g. by image correlation, or by comparison of parameters extracted from the images
- Reference images originated from an image acquisition apparatus or derived from computer-aided design data
Illustrative examples:
Attention is drawn to the following places, which may be of interest for search:
Determining representative reference patterns or generating dictionaries | |
Determining representative reference patterns or generating dictionaries for image or video recognition or understanding |
This place covers:
Defects, abnormality in biomedical context
Computer-aided detection [CAD]
Detecting, measuring, scoring, grading of
- Disease, pathology, lesions
- Cancer, tumor, tumour, malignancy, nodule
- Emphysema
- Microcalcifications
- Polyps
- Scar, non-viable tissue
- Osteoporosis, fracture risk prediction, Arthritis
- Alzheimer disease
- Scoring wrinkles, ageing
- Tissue abnormalities in microscopic images, e.g. inflammation, deformations
- Grading of living plants
Illustrative examples:
Characterising skin imperfections
Evaluating spine balance
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Apparatus for radiation diagnostics | |
Diagnosis using ultrasound | |
Signal processing for Nuclear Magnetic Resonance (NMR) imaging systems | |
Ultrasound imaging | |
ICT specially adapted for processing medical images, e.g. editing |
Attention is drawn to the following places, which may be of interest for search:
Recognising microscopic objects | |
Bioinformatics | |
Medical informatics |
When classifying in this group, the use of the indexing scheme G06T 2207/30004 - G06T 2207/30104 is mandatory for additional information related to biomedical image processing.
In this place, the following terms or expressions are used with the meaning indicated:
Biomedical | biological or medical |
This place covers:
- Comparison to a reference image, standard image, atlas, etc.
- Reference image taken from different patient or patients, or reference image taken from spatially different anatomical regions of the same patient, e.g. comparison of left and right body parts.
Illustrative examples:
Superposition of a perfusion image and the brain atlas images in contour representation
Attention is drawn to the following places, which may be of interest for search:
Determining representative reference patterns or generating dictionaries | |
Determining representative reference patterns or generating dictionaries for image or video recognition or understanding |
This place covers:
- Follow-up studies, comparison of images from different points of time, temporal difference images, temporal subtraction images, biomedical change detection.
- Reference image taken from the same patient and the same anatomical region.
- Subtraction angiography for abnormality detection.
- Assessment of dynamic contrast enhancement, wash-in/wash-out for abnormality detection.
- Plethysmography based on image analysis.
Illustrative example:
Floating image, reference image and temporal subtraction image
Attention is drawn to the following places, which may be of interest for search:
Analysis of motion, e.g. change detection in general | |
Pattern matching criteria, e.g. proximity measures | |
Temporal feature extraction for image or video recognition or understanding | |
Image or video pattern matching |
For plethysmography based on image analysis, Indexing Code G06T 2207/30076 should be added.
This place covers:
- Segmentation, i.e. partitioning an image into regions
- Edge detection, i.e. detection of edge features in an image
This place does not cover:
Motion-based segmentation |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Separation of touching or overlapping patterns for pattern recognition, e.g. character segmentation for optical character recognition (OCR) | |
Extraction of image features/characteristics for pattern recognition | |
Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition |
Attention is drawn to the following places, which may be of interest for search:
Analysis of texture | |
Determination of colour characteristics | |
Clustering techniques in pattern recognition | |
Classification techniques in pattern recognition | |
Feature extraction related to colour, for pattern recognition | |
Pattern recognition or machine learning using clustering within arrangements for image or video recognition or understanding |
This place covers:
Methods evaluating properties or features of image regions to determine the segmentation result, e.g.:
- Thresholding, fixed threshold binarisation, multiple and histogram-derived thresholds
- Region growing, splitting and merging
- Colour-based segmentation
- Texture-based segmentation
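As a sketch of a histogram-derived threshold, Otsu's classic criterion (chosen here purely for illustration) picks the threshold maximising the between-class variance of the foreground/background split:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximising the between-class
    variance of the resulting background/foreground partition."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum_w = 0.0   # pixel count of the background class
    cum_m = 0.0   # sum of intensities in the background class
    for t in range(256):
        cum_w += hist[t]
        cum_m += t * hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        m_bg = cum_m / cum_w
        m_fg = (mean_all * total - cum_m) / (total - cum_w)
        var_between = cum_w * (total - cum_w) * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity populations: the threshold falls
# between them and the binarisation recovers the bright half.
img = np.array([20] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
t = otsu_threshold(img)
```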
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition | |
Extraction of features or characteristics of the image related to colour, for pattern recognition | |
Cutting or merging image elements, e.g. region growing, watershed, clustering-based techniques, for pattern recognition |
This place covers:
Methods evaluating (closed) contours, edges or outlines of image portions to determine the segmentation result, e.g.:
- Contour-based segmentation
- Detection of straight edge-lines (e.g. buildings or roads from aerial images) which partition an image into regions
- Finding and linking edge candidate points or segments (edgels)
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition | |
Extraction of features or characteristics of the image by coding the contour of a pattern, for pattern recognition |
This place covers:
In contrast to G06T 7/12, this group covers documents pertaining purely to edge-detection without partitioning an image into regions, e.g.:
- Derivative methods (first-order or gradient, second order, e.g. Laplacian)
- Zero crossing
- Corner detection
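A first-order (gradient) method of the kind listed above can be sketched with the Sobel operators (illustrative code):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(img):
    """First-order edge detection: apply the Sobel operators at every
    interior pixel and take the gradient magnitude (border pixels are
    left at zero)."""
    img = img.astype(float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y-1:y+2, x-1:x+2]
            gx = (patch * SOBEL_X).sum()
            gy = (patch * SOBEL_Y).sum()
            mag[y, x] = np.hypot(gx, gy)
    return mag

# A vertical step edge: the response peaks on the edge columns only.
img = np.tile(np.array([0.0] * 4 + [100.0] * 4), (6, 1))
mag = gradient_magnitude(img)
```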
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition | |
Extraction of features or characteristics of the image by coding the contour of a pattern, for pattern recognition |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition |
This place covers:
- Statistical/Probabilistic methods for segmentation
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Classification techniques based on a parametric (probabilistic) model, for pattern recognition | |
Markov models or related models, Markov random fields or networks embedding Markov models for pattern recognition | |
Detecting partial patterns or configurations by analysing connectivity relationship of elements of the pattern, for pattern recognition | |
Pattern recognition or machine learning using classification within arrangements for image or video recognition or understanding |
This place covers:
- Model-based segmentation (in particular when applied to biomedical images)
- Methods based on active shape models
- Methods based on active appearance models
- Methods based on active contours, active surfaces, snakes or deformable templates
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Pattern recognition techniques involving a deformation of the sample or reference pattern or elastic matching | |
Matching of contours based on a local optimisation criterion, e.g. snakes or active contours, for pattern recognition | |
Matching based on shape statistics, e.g. active shape models, for pattern recognition | |
Matching based on statistics of image patches, e.g. active appearance models, for pattern recognition |
For Active shape model [ASM], Indexing Code G06T 2207/20124 should be added.
For Active appearance model [AAM], Indexing Code G06T 2207/20121 should be added.
For Active contour; Active surface; Snakes, Indexing Code G06T 2207/20116 should be added.
This place covers:
- Morphological methods
- Watersheds
- Toboggan-based methods
Illustrative examples:
Figure 1. The 1D profile I(x), representing the intensity of a dark object of interest on a light background, forms three basins, which correspond to the local minima Min1, Min2 and Min3 of the intensity of the segmented region. The three basins give rise to two watershed lines LPE1 and LPE2, which divide the segmented region into three sub-regions SR1, SR2 and SR3.
Figure 2. Toboggan-based object segmentation
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Smoothing or thinning the pattern, e.g. by morphological operators, for pattern recognition | |
Combinations of pre-processing functions using a local operator, for pattern recognition | |
Cutting or merging image elements, e.g. region growing, watershed, for pattern recognition |
For Morphological image processing, an Indexing Code from the range of G06T 2207/20036 - G06T 2207/20044 should be added.
This place covers:
- Graph-cut methods
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Feature extraction by graphical representation, e.g. directed attributed graphs, for pattern recognition |
Attention is drawn to the following places, which may be of interest for search:
Hierarchical clustering techniques, for pattern recognition | |
Non-hierarchical partitioning techniques based on graph theory, for pattern recognition | |
Graph matching, for pattern recognition |
This place covers:
- Fourier-, FFT-, Wavelet-based methods
- Gabor-, Laplace-transform-based methods
- Discrete cosine transform [DCT]-based methods
- Walsh-Hadamard transform [WHT]-based methods
- Hough transform
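The Hough transform listed above can be sketched as a minimal (rho, theta) accumulator (illustrative code): every edge pixel votes for all lines through it, and accumulator peaks correspond to straight lines.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Standard (rho, theta) Hough transform over a binary edge map:
    each edge pixel votes for every line passing through it; peaks in
    the accumulator correspond to straight lines in the image."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        for t, theta in enumerate(thetas):
            rho = int(round(x * np.cos(theta) + y * np.sin(theta))) + diag
            acc[rho, t] += 1
    return acc, thetas, diag

# A vertical line x = 4: the strongest accumulator cell sits at
# |rho| = 4 with theta near 0 (or, equivalently, near pi).
edges = np.zeros((10, 10), dtype=bool)
edges[:, 4] = True
acc, thetas, diag = hough_lines(edges)
rho_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
```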
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition | |
Detecting partial patterns using transforms (e.g. Hough transform), for pattern recognition | |
Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition |
For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.
This place covers:
- Using information from multiple images to determine segmentation result
- Segmentation based on several images taken under varying illumination, focus, exposure, etc.
- Segmentation of a video frame involving several image frames of the video sequence, e.g. neighbouring frames
- Temporal and spatio-temporal segmentation, if not based on motion information
- Segmentation using several (neighbouring) slices of a tomographic data set (CT, MRI, PET, etc.), propagation of segmentation results between neighbouring slices
- Hierarchical segmentation methods (including wavelet-based schemes), if final segmentation result is derived from (partial) results at different resolution levels
- Multispectral image segmentation using information from different spectral bands (beyond the visible spectrum)
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Motion-based segmentation |
This place covers:
Image segmentation or edge detection methods based on
- edge growing
- edge linking
- edge following
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Detecting partial patterns by analysis of the connectivity relationships of elements of the pattern, e.g. by edge linking, connected component or neighbouring slice analysis, for pattern recognition |
This place covers:
Image segmentation methods based on
- region growing; region merging
- split-and-merge
- connected component labelling
Illustrative example:
Figure 1. Region growing method which accumulates costs along a pixel path; as soon as the accumulated cost between neighbouring pixels (91, 92) exceeds a threshold, the growing is stopped.
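A minimal region-growing sketch (breadth-first search from a seed, admitting 4-connected neighbours whose intensity stays within a tolerance of the seed value; a simplification of cost-based growing, with illustrative names):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` by breadth-first search, merging
    4-connected neighbours whose intensity lies within `tol` of the
    seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - ref) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# A bright 3x3 square on a dark background is recovered exactly.
img = np.zeros((7, 7))
img[2:5, 2:5] = 100
mask = region_grow(img, (3, 3), tol=10)
```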
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Detecting partial patterns by analysis of the connectivity relationships of elements of the pattern, e.g. by edge linking, connected component or neighbouring slice analysis, for pattern recognition | |
Segmentation of touching or overlapping patterns, cutting or merging image elements, e.g. region growing, watersheds, for pattern recognition |
This place covers:
Image segmentation or edge detection methods based on a separation of foreground, i.e. relevant parts, and background, i.e. non-relevant parts of an image.
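One common realisation of such a separation (sketched here under the assumption that a clean background image is available; names are illustrative) thresholds the absolute difference to a background model:

```python
import numpy as np

def foreground_mask(frame, background, tol=20):
    """Mark as foreground every pixel that deviates from the
    background model by more than `tol`."""
    return np.abs(frame.astype(float) - background.astype(float)) > tol

# A bright object entering a flat scene is separated from the
# background.
background = np.full((6, 6), 50.0)
frame = background.copy()
frame[1:3, 1:4] = 200.0
mask = foreground_mask(frame, background)
```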
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition |
This place covers:
- Image analysis algorithms for determining motion of an image subject, or of the camera having acquired the images; Determination of scene movement between image frames, e.g. change detection
- Tracking
- Motion capture
- Determining camera ego-motion
- Medical motion analysis, e.g. of the left ventricle of the heart
- Trajectory representation
- Stabilisation of video sequences (see also G06T 7/30)
This place does not cover:
Motion estimation for coding, decoding, compressing or decompressing digital video signals |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Scene recognition | |
Recognising video content | |
Recognising scenes under surveillance | |
Recognising scenes perceived from a vehicle | |
Recognising scenes inside a vehicle | |
Gesture recognition | |
Burglar, theft or intruder alarms using cameras and image comparison |
Attention is drawn to the following places, which may be of interest for search:
Determination of transform parameters for the alignment of images, i.e. image registration | |
Depth or shape recovery from motion | |
Determining position or orientation of objects | |
Video games | |
Target following using TV type tracking systems | |
Light barriers | |
Data indexing of video sequences | |
Surveillance systems using closed-circuit television systems (CCTV) |
For camera pose, Indexing Code G06T 2207/30244 should be added. For heart or cardiac motion, Indexing Code G06T 2207/30048 should be added. For trajectory details, Indexing Code G06T 2207/30241 should be added. For sports video or sports images, Indexing Code G06T 2207/30221 should be added.
This place covers:
Illustrative example:
This place does not cover:
Multi-resolution motion estimation or hierarchical motion estimation for coding, decoding, compressing or decompressing digital video signals |
This place covers:
- Figure-ground segmentation by detection of moving object(s) from dense motion representation
- Partitioning an image into regions of homogeneous 2D (apparent) motion
- Based on analysis of motion vector field or motion flow
- Grouping from optical flow
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Retrieval of video data using motion, e.g. object motion | |
Segmenting video sequences, e.g. scene change analysis | |
Scene change analysis |
Attention is drawn to the following places, which may be of interest for search:
Segmentation; Edge detection |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Movement estimation for television pictures | |
Predictive coding in television systems using temporal prediction with motion detection |
Attention is drawn to the following places, which may be of interest for search:
Image coding using predictors | |
Use of motion vectors for image compression, coding using predictors, video coding |
This place covers:
Full, exhaustive, brute force search
Illustrative example:
Figure 1. A motion vector between the m-th frame (1) and the (m+n)-th frame (2) is detected. First, the image data of the m-th frame 1 is divided into a plurality of first blocks 11, and the first blocks 11 are extracted sequentially. A second block 12 of the same size and shape as the extracted first block 11 is extracted from the image data of the (m+n)-th frame 2. The absolute difference value of the corresponding pixels of the extracted first block 11 and the extracted second block 12 is computed for every pixel.
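The exhaustive search described above can be sketched in a few lines (an illustrative sketch only, not part of the definition; the function name, block size and search radius, and the SAD criterion are assumptions for the example):

```python
import numpy as np

def full_search(ref_block, target, top, left, radius):
    """Exhaustive (brute-force) block matching: test every displacement
    within +/- radius and keep the one minimising the sum of absolute
    differences (SAD) with ref_block, whose top-left corner in the
    reference frame is (top, left)."""
    h, w = ref_block.shape
    best_vec, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > target.shape[0] or x + w > target.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(target[y:y + h, x:x + w].astype(int)
                         - ref_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad
```

For a block translated by (2, 3) between two frames, the search returns the motion vector (2, 3) with zero SAD.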
This place covers:
- Non-full, layered structure, fast, adaptive, efficient search
- Three-Step, New Three-Step, Four-Step Search
- Simple and Efficient Search
- Binary Search
- Spiral Search
- Two-Dimensional Logarithmic Search
- Cross Search Algorithm
- Adaptive Rood Pattern Search
- Orthogonal Search
- One-at-a-Time Algorithm
- Diamond Search
- Hierarchical search
- Spatial dependency check
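Of the strategies listed, the Three-Step Search illustrates the common pattern: evaluate a small set of candidate displacements, re-centre on the best one, and reduce the step size (a minimal sketch; the function name and default step are illustrative assumptions):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def three_step_search(ref_block, target, top, left, step=4):
    """Three-Step Search: test the centre and its 8 neighbours at the
    current step, move the centre to the best match, halve the step and
    repeat -- far fewer SAD evaluations than an exhaustive search."""
    h, w = ref_block.shape
    cy, cx = top, left
    while step >= 1:
        best, best_sad = (cy, cx), np.inf
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= target.shape[0] - h and 0 <= x <= target.shape[1] - w:
                    s = sad(target[y:y + h, x:x + w], ref_block)
                    if s < best_sad:
                        best_sad, best = s, (y, x)
        cy, cx = best
        step //= 2
    return cy - top, cx - left
```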
Illustrative example of a hierarchical search:
For Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform, Indexing Code G06T 2207/20016 should be added.
This place covers:
- Feature points, e.g. determined by image operators; also matching of point descriptors, feature vectors; significant segments, blobs
- Feature, landmark, marker, fiducial, edge, corner, etc.
Illustrative example:
In this place, the following terms or expressions are used with the meaning indicated:
Feature | a significant image region or pixel with certain characteristics |
This place covers:
- Involving correlation of "true to reality" image patches, templates, regions of interest
- Correlation used for 1) finding features in each image or for 2) finding regions of interest from one image in the other images
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Face recognition using comparisons between temporally consecutive images |
Attention is drawn to the following places, which may be of interest for search:
Analysis of motion using block-matching (where blocks are arbitrarily defined by a grid, not as a significant image region) | |
Image matching for pattern recognition or image matching in general |
This place covers:
- Involving matching of intermediary 2D or 3D models extracted from each image before motion analysis, e.g. skeletons, stick models, ellipses, geometric models of all kinds, polygon models, active appearance and shape models, as opposed to reference images or patches
- Model matching used for 1) finding features in each image or for 2) finding structure of interest from one image in the other images
Illustrative example:
For each frame of a captured video sequence, a basic human body model 800 for diving competitions is superimposed on the frame and adjusted to provide an accurate representation of the diver's positioning in that frame, the sequence of adjusted models describing the entire motion sequence of the diver.
Attention is drawn to the following places, which may be of interest for search:
Matching of contours in general or matching of contours for pattern recognition | |
Syntactic or structural pattern recognition, e.g. symbolic string recognition |
This place covers:
- Subtraction of previous image
- Subtraction of background image, background maintenance, background models therefor
- Also involving ratio or more general comparison of corresponding pixels in successive frames
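The subtraction-based principle can be sketched as a running-average background model with thresholded differencing (an illustrative sketch; the update rate and threshold are arbitrary example values):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Background maintenance by exponential running average."""
    return (1.0 - alpha) * background + alpha * frame

def moving_mask(background, frame, threshold=25):
    """Flag as foreground the pixels whose absolute difference from the
    background model exceeds the threshold."""
    return np.abs(frame.astype(float) - background) > threshold
```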
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Burglar, theft or intruder alarms using cameras and image comparison |
Attention is drawn to the following places, which may be of interest for search:
Change detection in biomedical image inspection |
This place covers:
- Fourier, DCT, Wavelet, Gabor, etc.
- Using phase correlation
Illustrative examples:
Figure 1.
Figure 2.
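Phase correlation recovers a global translation from the phase of the cross-power spectrum; the inverse transform peaks at the shift (a minimal sketch for circular shifts, with illustrative names):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation of b relative to a: normalise
    the cross-power spectrum to unit magnitude (keeping phase only) and
    locate the peak of its inverse FFT."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12          # keep the phase term only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # indices beyond half the size correspond to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```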
Attention is drawn to the following places, which may be of interest for search:
Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition | |
Detecting partial patterns using Hough transform for pattern recognition | |
Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition |
For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.
This place covers:
Optic (optical) flow involving the calculation of spatial and temporal gradient
Illustrative example:
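A minimal gradient-based sketch in the Lucas-Kanade style, solving the constraint Ix*u + Iy*v + It = 0 in the least-squares sense over a single window (illustrative only; the function name and single-window simplification are assumptions):

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one translational flow vector (u, v) for the whole
    window from the spatial gradients (Ix, Iy) and temporal gradient
    It, via least squares on Ix*u + Iy*v + It = 0."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)          # gradients along rows, columns
    It = curr - prev
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```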
This place covers:
- Bayesian methods
- HMM
- Particle filtering
Illustrative examples:
Figure 1.
Figure 2. Kalman filter-based tracking of 3D heart model
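The stochastic approaches listed above can be illustrated by a constant-velocity Kalman filter over noisy 1D position measurements (a minimal sketch; the state model and noise parameters are arbitrary example choices):

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.1):
    """Track a 1D position with a constant-velocity Kalman filter;
    returns the filtered position estimates."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update with measurement z
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```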
Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/20.
This place covers:
Illustrative example:
This place covers:
- Algorithms for camera networks
- Interaction, cooperation between trackers
- Multi-view tracking, multi-camera tracking
- The cameras view the same scene (cooperation, e.g. by voting, fusion)
- The cameras view different scenes (cooperation, e.g. by handover, tracklet joining, trajectory joining)
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Classification of unknown faces, i.e. recognising the same non-enrolled faces, e.g. recognising the unknown faces across different face tracks |
Attention is drawn to the following places, which may be of interest for search:
Analysis of motion using a sequence of stereo pairs, e.g. cooperative motion analysis from a single stereo camera pair or motion analysis from at least three views, wherein at least one pair of views is processed as stereo pair |
Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/20.
In particular, in the case of motion analysis from multiple monocular views with subsequent merging or joining of analysis results, details about the respective analysis algorithm per view should be classified in the subgroups of G06T 7/20 as well.
In this place, the following terms or expressions are used with the meaning indicated:
Multi-camera | Treatment of multiple image sequences, not in a pairwise manner |
Stereo | Treatment of two images, e.g. from two cameras or a single camera that is displaced, in a pairwise manner |
This place covers:
Image analysis algorithms for determining geometric transformations required to register (i.e. align) separate images. The process involves the estimation of transform parameters. Registration means determining the alignment of images or finding their relative position.
- Registration of image subparts for the construction of image mosaics
- Multi-modal, cross-modal, across-modal registration of medical image data sets
- Registration with a medical atlas
- Registration of pre-operative and intra-operative medical image data sets
- Registration for change detection in biomedical or remote sensing images (for change detection, see also G06T 7/20)
- Registration of models
- Registration of a model with an image
- Registration of range data, point clouds (ICP algorithm)
- 2D/2D, 2D/3D, 3D/3D registration
- Interactive registration
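The core estimation step, determining the transform parameters that align corresponding points, can be sketched for the rigid 2D case using the least-squares Procrustes/Kabsch solution (an illustrative sketch; correspondences are assumed known, whereas real registration pipelines must also find them):

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rigid transform (rotation R, translation t) that
    maps the point set src onto dst, given known correspondences:
    centre both sets, take the SVD of the cross-covariance, and read
    the rotation off the singular vectors (Procrustes/Kabsch)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```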
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Segmentation involving deformable models | |
Recognising three-dimensional objects, e.g. range data matching for pattern recognition |
Attention is drawn to the following places, which may be of interest for search:
Geometric image transformation in the plane of the image for image registration | |
Analysis of motion | |
Combining images from different aspect angles, e.g. spatial compounding | |
Pattern matching criteria, e.g. proximity measures | |
Image or video pattern matching | |
Comparing pixel values or logical combinations thereof, e.g. template matching |
For registration of medical image data, an Indexing Code from the range of G06T 2207/30004 - G06T 2207/30104 (Biomedical image processing) should be added.
For involving image mosaicing, Indexing Code G06T 2200/32 should be added.
For Interactive image processing based on input by user, an Indexing Code from the range of G06T 2207/20092-G06T 2207/20108 should be added.
In patent documents, the following words/expressions are often used with the meaning indicated:
Recalage (French) | Registration (English) |
This place covers:
- Global correlation
- Block-matching like correlation, if not for motion analysis
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Analysis of motion using block-matching |
This place covers:
- Feature points, e.g. determined by image operators; also matching of point descriptors, feature vectors; significant segments, blobs
- Feature, landmark, marker, fiducial, edge, corner, etc.
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Extraction of features or characteristics of the image, for pattern recognition |
In this place, the following terms or expressions are used with the meaning indicated:
Feature | significant image region or pixel with certain characteristics |
This place covers:
Involving correlation with "true to reality" image patches, templates, regions of interest; correlation used for 1) finding features in each image, or for 2) finding regions of interest from one image in the other image.
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Image registration using correlation of complete images or block-matching-like registration (where blocks are arbitrarily defined by a grid, not as a significant image region, region of interest) | |
Pattern matching criteria, e.g. proximity measures | |
Image or video pattern matching |
This place covers:
- Involving matching of intermediary 2D or 3D models extracted from each image before registration, e.g. geometric models of all kinds, polygon models, active appearance and shape models, as opposed to reference images or patches
- Corresponding models are adapted to each image to be registered, respectively, transform parameters between the images are determined from a comparison/matching of the adapted models
- Model matching used for 1) finding features in each image, or for 2) finding structure of interest from one image in the other image
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Matching of contours | |
Syntactic or structural pattern recognition, e.g. symbolic string recognition |
This place covers:
- Involving probabilistic feature points, statistical features or reference images / patches, statistical models, statistical matching
- Approaches based on mutual information
- RANSAC
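RANSAC can be sketched for the simplest case, a pure translation between matched point sets contaminated by gross outliers (illustrative parameters only; in practice richer transform models are estimated the same way):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.1, seed=0):
    """RANSAC for a 2D translation: hypothesise the translation from a
    single random correspondence, count the correspondences it explains
    within tol (the consensus set), keep the largest set, and refit the
    translation on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```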
Attention is drawn to the following places, which may be of interest for search:
Matching configurations of points or features for pattern recognition, e.g. using RANSAC | |
Image matching by comparing statistics of regions for pattern recognition |
Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/30.
This place covers:
Fourier, DCT, Wavelet, Gabor, etc.
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition | |
Detecting partial patterns using transforms (e.g. Hough transform) for pattern recognition | |
Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition |
For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.
This place covers:
- Aligning one image sequence or image set to the other, i.e. finding spatially or temporally corresponding frames between one image sequence and the other (inter-sequence alignment), as opposed to spatial alignment of image frames within a single image sequence (intra-sequence alignment)
- Temporal alignment = alignment along the t-axis, e.g. alignment of two video sequences
- Spatial alignment = alignment along the z-axis, e.g. alignment of two stacks of CT slices
- Additionally, spatially aligning the temporally or spatially corresponding frames in the x-y-plane (intra-sequence alignment) is possible
- Source sequences can be of any orientation
Illustrative examples:
Figure 1. Spatial alignment
Figure 2.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Matching video sequences for pattern recognition | |
Document matching for pattern recognition |
Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/30.
This place covers:
Analysis of the spatial arrangement of image colour or intensity characteristics representative of a perceived image texture.
This place does not cover:
Depth or shape recovery from texture |
Attention is drawn to the following places, which may be of interest for search:
Segmentation; Edge detection | |
Depth or shape recovery from shading | |
Filling a planar surface by adding texture in 2D image generation | |
Texture mapping in 3D image rendering |
This place covers:
Analysis of texture using:
- First-order statistics
- Global histogram-based measures: mean, variance, skewness, kurtosis, energy, entropy
- Autocorrelation
- Run-length based algorithms
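The listed first-order measures derive directly from the grey-level histogram (a minimal sketch computing a subset of them; the quantisation to 256 levels is an assumption for the example):

```python
import numpy as np

def first_order_texture(img, levels=256):
    """First-order (histogram-based) texture measures: mean, variance,
    energy and entropy of the grey-level distribution."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                  # grey-level probabilities
    g = np.arange(levels)
    mean = (g * p).sum()
    variance = ((g - mean) ** 2 * p).sum()
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return {"mean": mean, "variance": variance,
            "energy": energy, "entropy": entropy}
```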
This place covers:
Fourier, DCT, Wavelet, Gabor, etc.
Illustrative example:
Texture-based image retrieval method using a Gabor filter in the frequency domain, wherein the frequency domain representation is divided according to a predetermined layout for extracting texture descriptors of respective feature channels.
Attention is drawn to the following places, which may be of interest for search:
Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition | |
Detecting partial patterns using transforms (e.g. Hough transform), for pattern recognition | |
Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition |
For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.
This place covers:
- Laws texture energy measure
- Texture analysis using edge operators
- Texture analysis using difference of Gaussians
- Texture analysis using local linear transforms
- Local Binary Pattern [LBP]
- Grey level difference method
- Local rank order correlation
This place covers:
- Second-order statistics
- Generalised co-occurrence matrix
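A co-occurrence matrix for one displacement vector can be sketched as follows (illustrative only; second-order statistics such as contrast or correlation are then derived from the normalised matrix):

```python
import numpy as np

def cooccurrence(img, dy, dx, levels):
    """Grey-level co-occurrence matrix for displacement (dy, dx):
    C[i, j] counts pixel pairs whose values are i at (y, x) and j at
    (y + dy, x + dx)."""
    C = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            C[img[y, x], img[y + dy, x + dx]] += 1
    return C
```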
This place covers:
- Markov Random Fields, Gaussian Random Fields, Gibbs Random Fields
- Autoregressive Model
This place covers:
- fractal texture analysis methods
- fractal dimension
- box counting methods
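A box-counting sketch: the fractal dimension is estimated as the slope of log(box count) against log(1/box size) (illustrative only; practical estimators choose the scales and fitting range more carefully):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask:
    count, for each box size, the boxes containing foreground, then fit
    the slope of log(count) versus log(1/size)."""
    h, w = mask.shape
    counts = []
    for s in sizes:
        occupied = 0
        for y in range(0, h, s):
            for x in range(0, w, s):
                if mask[y:y + s, x:x + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```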
This place covers:
- Shape chain grammars, graph grammars
- Grouping of primitives in hierarchical textures
Illustrative example:
Figure 1 (top) and 2 (bottom). Method for finding periodic structures in a layer of an integrated circuit that have identical optical properties. Fig. 2 illustrates a geometric hierarchy of the periodic elements in the cell layer of Fig. 1.
This place covers:
- Image analysis algorithms for determining scene depth parameters from image characteristics.
- Shape from X
- Depth map determination
- Disparity calculation for shape recovery
Attention is drawn to the following places, which may be of interest for search:
Picture taking arrangements specially adapted for photogrammetry or photographic surveying | |
LIDAR systems for mapping or imaging |
This place covers:
Shape from shading or shadows
Illustrative example:
This place does not cover:
Depth or shape recovery from multiple light sources, e.g. photometric stereo |
This place covers:
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Image acquisition and arrangements for measuring contours or curvatures of an object by projecting a pattern thereupon |
In this place, the following terms or expressions are used with the meaning indicated:
Structured | characterises the illumination |
This place covers:
- shape from texture
- shape from blur in a single image
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Depth or shape recovery from focus |
This place covers:
Illustrative example:
This place covers:
- shape from line drawings
- shape from contours in a single image
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Volumetric display with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes |
Attention is drawn to the following places, which may be of interest for search:
Determining parameters from multiple pictures, e.g. disparity calculation as such |
For documents concerning trilinear computations, trifocal tensor: add the Indexing Code G06T 2207/20088: Trinocular vision calculations; trifocal tensor.
This place covers:
Depth reconstruction using, or based on, light field representations, i.e. 5D plenoptic function, 4D light field, lumigraph, ray space; such light field representations may originate, e.g. from plenoptic cameras, light field cameras, cameras with a lenslet array or integral imaging.
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Depth using trinocular vision calculations/trifocal tensor | |
Depth from focus | |
Depth from motion | |
Depth from multiple light sources | |
Depth from stereo images |
This place covers:
- Shape from contours
- Shape from silhouettes
- Shape from visual hulls
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Depth or shape recovery from line drawings, e.g. shape from contours involving one image only |
This place covers:
- Shape from focus
- Shape from defocus of multiple images
Illustrative example:
Figure 1
Figure 2
Figures 1 and 2. Input image sequence and resulting depth map
Attention is drawn to the following places, which may be of interest for search:
Shape from texture, e.g. shape from blur in a single image | |
Systems for automatic generation of focusing signals | |
Focusing aids for cameras; Autofocus systems for cameras |
This place covers:
- Shape from motion, structure from motion
- Extracting the shape of a scene from the spatial and temporal changes occurring in an image sequence (camera or scene moves)
- Simultaneous Localisation and Mapping [SLAM]
Illustrative examples:
Figure 1
Figure 2. Shape from motion reconstruction
Attention is drawn to the following places, which may be of interest for search:
Determining position or orientation of objects or cameras |
For Camera pose, Indexing Code G06T 2207/30244 should be added.
This place covers:
Algorithms for the determination of scene depth parameters from multiple images for which more than one source of illumination has been used. Typically, different illumination sources are used when capturing each of the multiple images to produce different images of the same scene under the different lighting conditions. The different images are used to determine depth and shape parameters in the scene.
- Different illumination intensities, e.g. ambient and flash
- Different directions of illumination
Illustrative example:
In this place, the following terms or expressions are used with the meaning indicated:
Photometric stereo | a technique for estimating the normal vectors at different points on an object's surface by observing the object under different lighting conditions. |
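Under the Lambertian model the per-pixel intensities are linear in the surface normal, so normals and albedo follow from least squares over the known light directions (a minimal sketch; shadows and specularities are ignored, and all names are illustrative):

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover unit surface normals and albedo per pixel from the
    Lambertian model I = albedo * (L @ n), given intensities of shape
    (k, npix) under k known unit light directions of shape (k, 3)."""
    G, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, npix)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals, albedo
```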
This place covers:
Shape from stereo images or sequences of stereo images
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Stereoscopic or multiview image generation wherein the generated image signals comprise depth maps or disparity maps |
Attention is drawn to the following places, which may be of interest for search:
Depth or shape recovery from multiple images using trilinear computations / the trifocal tensor | |
Depth or shape recovery from multiple images using the quadrifocal tensor |
This place covers:
Multi-baseline stereo (special case only where
- each view is always treated together with the same reference view and
- the lengths of the respective baselines differ from each other)
Illustrative example:
This place covers:
- Analysis of image subjects to determine geometric attributes thereof, e.g. area, centre of mass, perimeter, diameter or volume.
- Ellipse detection
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Extraction of image features for pattern recognition by deriving geometrical properties of the whole image |
Attention is drawn to the following places, which may be of interest for search:
Measuring arrangements characterised by the use of optical means |
This place covers:
Illustrative example:
This place covers:
Convexity, concavity, curvature, circularity, sphericity, roundness
Illustrative examples:
Figure 1
Figure 2
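Compactness measures of the kind listed can be computed from a boundary polygon, e.g. circularity 4*pi*A / P^2, which equals 1 for a disc (an illustrative sketch using the shoelace formula; all names are assumptions):

```python
import numpy as np

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed polygon given as an
    (n, 2) array of ordered vertices."""
    x, y = pts[:, 0], pts[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.dot(x, ys) - np.dot(y, xs))
    perimeter = np.sqrt((xs - x) ** 2 + (ys - y) ** 2).sum()
    return area, perimeter

def circularity(pts):
    """4*pi*A / P**2: 1.0 for a circle, smaller for elongated shapes."""
    area, perimeter = polygon_area_perimeter(pts)
    return 4.0 * np.pi * area / perimeter ** 2
```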
This place covers:
Following centres of gravity of sections along elongated or tubular structure
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Computation of moments, for pattern recognition |
This place covers:
- Determination of lines of symmetry, midlines
- Measurement of symmetry and asymmetry
Illustrative example:
This place covers:
- Image processing algorithms for determining the position or orientation of an image subject, or of the camera having acquired the image
- Position or orientation of the camera
- Estimation of position, pose, posture, attitude in 2D and 3D
- Gaze direction, head pose
- Bin picking
This place does not cover:
Camera calibration |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Orientation detection before recognition | |
Acquiring or recognising human faces, facial parts, facial sketches, facial expressions, eyes |
Attention is drawn to the following places, which may be of interest for search:
Image feed-back for automatic industrial control | |
Analysis of motion | |
Measuring position in terms of linear or angular dimensions | |
Locating or presence-detecting by the use of the reflection or reradiation of radio or other waves | |
Pattern matching criteria, e.g. proximity measures | |
Image or video pattern matching | |
Mask, wafer positioning, alignment | |
Studio circuitry, e.g. for position determination of a camera in a television studio | |
Aligning or positioning of tools relative to the circuit board for manufacturing printed circuits |
For camera pose, Indexing Code G06T 2207/30244 should be added. For workpiece; machine component, Indexing Code G06T 2207/30164 should be added.
In patent documents, the following words/expressions are often used as synonyms:
- "Repérage" (in French documents), "location", and "locating"
This place covers:
- Feature points, e.g. determined by image operators; also point descriptors, feature vectors; significant segments, blobs
- Feature, landmark, marker, fiducial, edge, corner, etc.
Illustrative example:
In this place, the following terms or expressions are used with the meaning indicated:
Feature | significant image region or pixel with certain characteristics. |
This place covers:
Involving correlation with "true to reality" reference images, templates of various poses; for "directly" determining pose; correlation with "true to reality" templates of landmarks, markers, fiducials; for finding features in the image.
Illustrative examples:
Figure 1
Figure 2
Attention is drawn to the following places, which may be of interest for search:
Pattern matching criteria, e.g. proximity measures | |
Image or video pattern matching |
This place covers:
- Involving matching to a 2D or 3D model, e.g. geometric models of all kinds, polygon models, active appearance and shape models, also abstract models of landmarks, markers, fiducials with spatial extent, as opposed to reference images or patches
- Matching of a graphical, e.g. polygon model, may involve intermediate rendering of the model
- Model matching used for 1) finding features in each image, or for 2) "directly" determining pose of structure of interest
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Segmentation involving deformable models | |
Analysis of motion involving models | |
Matching of contours | |
Syntactic or structural pattern recognition, e.g. symbolic string recognition |
This place covers:
- Involving probabilistic feature points, statistical models, statistics of positions
- Features, reference images, patches or method itself can be statistical
- RANSAC
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Segmentation or edge detection involving probabilistic approaches | |
Analysis of motion involving a stochastic approach | |
Image matching by comparing statistics of regions for pattern recognition |
Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/70.
This place covers:
The use of methods/algorithms to analyse camera images for the determination of intrinsic parameters defining the camera's properties, or for the determination of extrinsic parameters defining the camera's position and orientation. Camera calibration enables pixel positions in a captured 2D image to be mapped to real-world 3D coordinates of the subject represented in the image.
Illustrative example:
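The mapping that calibration determines can be sketched with the pinhole model: the extrinsics (R, t) take world points to camera coordinates, and the intrinsic matrix K takes the normalised image plane to pixels (illustrative values only):

```python
import numpy as np

def project(point_w, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole
    camera: apply the extrinsics, then the intrinsics, then divide by
    depth (perspective division)."""
    p_cam = R @ point_w + t          # world -> camera coordinates
    p = K @ p_cam                    # camera -> homogeneous pixels
    return p[:2] / p[2]
```

With focal length 800 px and principal point (320, 240), a point on the optical axis projects to the principal point.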
Attention is drawn to the following places, which may be of interest for search:
Geometric correction, e.g. of lens distortion | |
Determining position or orientation of objects, e.g. of the camera, without calibration context | |
Calibration patterns | |
Systems for automatic generation of focusing signals | |
Focusing aids for cameras; Autofocus systems for cameras | |
Colour balance, e.g. colour cast correction | |
Calibration of stereoscopic cameras | |
Picture signal generators using solid state devices, e.g. correction of chromatic aberrations | |
Suppressing or minimising disturbance in picture signal generation |
In this place, the following terms or expressions are used with the meaning indicated:
Intrinsic parameters | The geometric and optical characteristics of a camera, including effective focal length, a scale factor and the image centre or "principal point". |
Extrinsic parameters | The three-dimensional position and orientation of the camera in real-world coordinates. |
In patent documents, the following words/expressions are often used as synonyms:
- "Camera calibration", "Geometric camera calibration", and "Camera re-sectioning".
This place covers:
Camera calibration for stereoscopic cameras, e.g. for determining the transformation between left camera coordinate system and right camera coordinate system
Illustrative example:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Calibration aspects relating to the control of a stereoscopic camera |
This place covers:
- Determining colour characteristics by image analysis
- Redeye detection
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Colour image segmentation | |
Acquiring or recognising eyes, e.g. iris verification | |
Retouching, i.e. modification of isolated colours only or in isolated picture areas only |
Attention is drawn to the following places, which may be of interest for search:
Correcting redeye defects by retouching or inpainting |
For redeye defect, Indexing Code G06T 2207/30216 should be added.
This place covers:
- Disparity, correspondence, stereopsis, if not provided for elsewhere
- Disparity calculation for the production of 3D images from 2D images without intermediate modelling
This place does not cover:
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Industrial image inspection using an image reference approach | |
Biomedical image inspection using an image reference approach | |
Segmentation involving the use of two or more images | |
Computing motion using a sequence of stereo image pairs | |
Determination of transform parameters for the alignment of images, i.e. image registration |
Attention is drawn to the following places, which may be of interest for search:
Image-based rendering | |
3D from 2D images with intermediate modelling |
For Disparity calculation for image-based rendering, Indexing Code G06T 2207/20228 should be added.
This place covers:
Coding/compression and decoding/decompression of computer graphics (CG) data, and computer graphics compression methods applied to natural images/video.
Apparatus/devices for coding/compressing and/or decoding/decompressing computer graphics data.
The computer graphics data mentioned include:
- object geometry models
- scene models
- 2D/3D vector graphics
- 3D/4D volumetric models
- CAD models
- contour shape data
- elevation data
- CG related metadata/parameters including depth, colour, texture, motion vectors, scene graph, position, connectivity information and similar.
This group covers compression/coding/decompression/decoding of CG related data and CG related methods applied to natural images or video. Other compression techniques specific to natural images/video that do not use CG related methods are covered by H04N 19/00.
Compression in general is covered by H03M 1/00.
This place does not cover:
Bandwidth or redundancy reduction for static pictures | |
Coding or decoding of static colour picture signals | |
Methods or arrangements for coding, decoding, compressing or decompressing digital video signals |
Attention is drawn to the following places, which may be of interest for search:
Animation | |
Model based coding | |
Model based coding using a 3D model | |
Rendering of computer graphics data | |
Modeling of computer graphics data | |
Re-meshing for manipulation, editing purpose | |
Manipulation of 3D objects | |
Pattern recognition | |
Computer aided design | |
Image or video recognition or understanding | |
Pattern recognition by contour coding | |
Coding or decoding, in general | |
Compression in general | |
Transmission of TV signals | |
Selective content distribution |
In general, consult the manager before using any sub-groups. This is a provisional document which will be replaced in January 2012, after the reorganization of G06T 9/00 is completed.
- for classification, the main group G06T 9/00 is always assigned until the reorganization is completed.
- the sub-groups G06T 9/004, G06T 9/005, G06T 9/007, G06T 9/008 are no longer used; their content, which is not related to computer graphics data compression/coding, will be transferred to the corresponding classes defined in the group definition statements below.
In this place, the following terms or expressions are used with the meaning indicated:
4D volumetric models | Sequences of volumetric images over time |
MPEG | Moving Picture Experts Group |
SNHC | Synthetic/Natural Hybrid Coding |
BIFS | Binary Format for Scene |
VRML | Virtual Reality Modeling Language |
SVG | Scalable Vector Graphics |
NN | Neural Networks |
TV | Television |
In patent documents, the following abbreviations are often used:
CG | Computer graphics |
3D | Three dimensional |
4D | Four dimensional |
CAD | Computer aided design |
In patent documents, the following words/expressions are often used as synonyms:
- "Compression" and "Coding"
- "Decompression" and "Decoding"
- "Scene graph" and "Scene model"
- "Scene description graph" and "Scene graph"
- "Metadata" and "Parameter"
- "Contour coding" and "Shape coding"
- "Elevation data" and "Height data"
- "Object geometry models" and "Object models"
- "Natural image" and "Raster/Bitmap image"
- "Vector graphics" and "Scalable Vector Graphics"
This place covers:
Means or steps for the compression/coding of wire frame models, e.g. polygon meshes.
Documents concerning mesh compression/coding by
- face merging
- incremental decimation
- simplification by remeshing for data reduction purposes
are classified here.
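Incremental decimation as mentioned above can be sketched as a single edge-collapse step. This is a toy illustration under simplifying assumptions (the shortest edge is collapsed to its midpoint); production decimators drive the collapse order with error metrics such as quadrics.

```python
import math

def collapse_shortest_edge(vertices, triangles):
    """One step of incremental decimation: collapse the shortest edge.

    `vertices` is a list of (x, y, z) tuples, `triangles` a list of
    (i, j, k) index triples.  The endpoints of the shortest edge are
    merged at the edge midpoint; triangles that become degenerate
    (two identical indices) are dropped.
    """
    # Collect the unique edges of the triangle mesh
    edges = set()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))

    # Pick the geometrically shortest edge
    def length(edge):
        return math.dist(vertices[edge[0]], vertices[edge[1]])
    u, v = min(edges, key=length)

    # Merge vertex v into vertex u at the edge midpoint
    vertices = list(vertices)
    vertices[u] = tuple((a + b) / 2 for a, b in zip(vertices[u], vertices[v]))
    new_triangles = []
    for tri in triangles:
        tri = tuple(u if i == v else i for i in tri)
        if len(set(tri)) == 3:          # drop degenerate triangles
            new_triangles.append(tri)
    return vertices, new_triangles

# Two triangles sharing the short edge (1, 2); collapsing it leaves one
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.9, 0.05, 0.0), (0.0, 1.0, 0.0)]
triangles = [(0, 1, 2), (0, 2, 3)]
new_vertices, new_triangles = collapse_shortest_edge(vertices, triangles)
```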
This place does not cover:
Animation | |
Rendering of computer graphics data | |
Re-meshing for manipulation, editing | |
Manipulation of 3D objects |
Documents classified in both G06T 9/001 and H04N 19/20, or in G06T 9/001, G06T 15/00, G06T 17/00 and H04N 19/20, are transferred to G06T 9/001.
Documents concerning re-meshing for manipulation, editing and similar, i.e. all means not having a data reduction purpose, are classified in G06T 17/205.
In patent documents, the following words/expressions are often used as synonyms:
- "wireframe" and "polygon mesh"
This place covers:
Means or steps for the compression/coding of computer graphics data and natural image/video data using neural networks (NN).
The data concerned by the compression/coding in this group include:
- computer graphics data
- natural image/video data.
In this place, the following terms or expressions are used with the meaning indicated:
NN | Neural Networks |
This place covers:
This group is no longer used; its content, which is not related to computer graphics data compression/coding, is transferred to H04N 19/105, H04N 19/103 or H04N 19/107.
Attention is drawn to the following places, which may be of interest for search:
Coding or prediction mode selection | |
Predictor | |
Intracode mode selection |
This place covers:
This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/13 or H04N 19/91.
Attention is drawn to the following places, which may be of interest for search:
Variable length coding (VLC) or entropy coding |
In this place, the following terms or expressions are used with the meaning indicated:
VLC | Variable length coding |
This place covers:
This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/60.
Attention is drawn to the following places, which may be of interest for search:
Transform coding |
In this place, the following terms or expressions are used with the meaning indicated:
DCT | Discrete cosine transform |
This place covers:
This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/94.
Attention is drawn to the following places, which may be of interest for search:
Vector coding |
In patent documents, the following words/expressions are often used as synonyms:
- "vector coding" and "vector quantization"
This place covers:
Means or steps for the compression/coding of computer graphics data using contour/shape coding method, e.g. by detection of edges.
Documents classified in both G06T 9/20 and H04N 19/20 are transferred to G06T 9/20.
The data concerned by the compression/coding in this sub-group include:
- computer graphics data, e.g. vector graphics data
- natural image/video data.
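A classic contour/shape coding technique in the sense of this group is the Freeman chain code: a closed contour is stored as a start point plus one direction digit per boundary step. The sketch below (illustrative code, 4-connected variant) shows encoding and the inverse decoding.

```python
# Map a unit step to a 4-connected Freeman chain-code digit:
# 0 = right, 1 = up, 2 = left, 3 = down
STEP_TO_CODE = {(1, 0): 0, (0, 1): 1, (-1, 0): 2, (0, -1): 3}

def chain_code(contour):
    """Encode a closed 4-connected contour (list of (x, y) points) as a
    Freeman chain code: the start point plus one digit per unit step.
    The digit stream is much cheaper to store than raw coordinates."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(STEP_TO_CODE[(x1 - x0, y1 - y0)])
    return contour[0], codes

def decode(start, codes):
    """Inverse mapping: rebuild the contour from start point and codes."""
    code_to_step = {v: k for k, v in STEP_TO_CODE.items()}
    points, (x, y) = [start], start
    for c in codes[:-1]:               # the last step returns to the start
        dx, dy = code_to_step[c]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# Unit square traced counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
start, codes = chain_code(square)      # codes == [0, 1, 2, 3]
```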
In this place, the following terms or expressions are used with the meaning indicated:
SVG | Scalable Vector Graphics |
In patent documents, the following words/expressions are often used as synonyms:
- "contour coding" and "shape coding"
- "vector graphics" and "scalable vector graphics"
This place covers:
Means or steps for the compression/coding of computer graphics data by using a tree hierarchy, e.g. quadtree, octree, and similar.
The documents concerning compression/coding of:
- computer graphics object models, scene models and related metadata, e.g. depth data,
are classified here.
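Tree-hierarchy coding can be illustrated with a minimal quadtree encoder for a square binary image: uniform blocks are stored as a single value, mixed blocks recurse into their four quadrants. This is only a sketch; real codecs additionally entropy-code the resulting tree.

```python
def quadtree_encode(img):
    """Encode a square binary image (2^n x 2^n, nested lists) as a
    quadtree.  A uniform block is stored as its single value; a mixed
    block is stored as a 4-tuple of its NW, NE, SW, SE sub-encodings."""
    values = {v for row in img for v in row}
    if len(values) == 1:               # uniform block: store one value
        return values.pop()
    h = len(img) // 2
    nw = [row[:h] for row in img[:h]]
    ne = [row[h:] for row in img[:h]]
    sw = [row[:h] for row in img[h:]]
    se = [row[h:] for row in img[h:]]
    return tuple(quadtree_encode(q) for q in (nw, ne, sw, se))

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
tree = quadtree_encode(img)
# NW, NE and SW quadrants are uniform; SE is mixed and subdivides further
```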
This place does not cover:
Modelling by tree structure | |
Natural image/video tree coding |
Attention is drawn to the following places, which may be of interest for search:
Tree description | |
Tree coding |
In this place, the following terms or expressions are used with the meaning indicated:
Bintree or binary tree | tree structure in which each node has at most two child nodes |
Quadtree or quad tree | tree structure in which each node has at most four child nodes |
K-tree | tree structure in which each node has at most K child nodes |
Hextree | tree structure in which each node has at most six child nodes |
Volume octree | tree structure in which each voxel is subdivided into at most 8 subvoxels |
Surface octree | Volume octree with incorporated surface information |
Multi tree | directed acyclic graph in which the set of nodes reachable from any node forms a tree |
In patent documents, the following words/expressions are often used as synonyms:
- "scene graph", "scene description graph" and "scene model"
This place covers:
- Documents dealing with generating a 2D image or texture in general. To a large extent, but not exclusively, G06T 11/00 covers image generation "from a description to a bit-mapped image" in general.
- Software packages, systems
- Caricaturing, Identikit
- Fusion of images with different objects, e.g. fusion of real and virtual images, labelling of 2D images
- Clipping of 2D images
- 2D and 3D reconstruction from projections, e.g. for computed tomography.
- Device-independent techniques, i.e. not documents which are specially adapted for I/O devices such as printers, scanners or displays.
- Note:
General idea for G06T 11/00:
- For generating an image,
- first select a colour (G06T 11/001),
- then draw a line (G06T 11/203),
- fill a rectangle, circle or any other closed shape (G06T 11/40),
- edit your work (G06T 11/60).
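The four steps above can be sketched on a hypothetical toy raster (illustrative code only; palette indices stand in for real colours, and the canvas layout is an assumption, not part of the classification scheme):

```python
# Tiny raster "canvas" walking through the four steps named above:
# select a colour, draw a line, fill a closed shape, then edit.
W, H = 8, 6
canvas = [[0] * W for _ in range(H)]    # 0 = background

colour = 3                              # 1. select a colour (palette index)

for x in range(1, 7):                   # 2. draw a horizontal line
    canvas[1][x] = colour

for y in range(3, 5):                   # 3. fill a rectangle
    for x in range(2, 5):
        canvas[y][x] = colour

canvas[1][4] = 0                        # 4. edit: erase one line pixel
```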
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Image processing specially adapted for radiation diagnosis | |
Map generation for navigation systems | |
Producing output data for printers | |
Controller for display circuits, e.g. for LCDs, Plasma, OLEDs | |
Image generation for scanner, fax-machines, copy machines | |
Studio circuits for video generation, mixing, special effects, blue/green screens |
Attention is drawn to the following places, which may be of interest for search:
Generating of panoramic or mosaic images | |
Generating high dynamic range images (HDR) | |
Non-photorealistic rendering in 3D | |
Input arrangements or combined input and output interaction between user and computer (user interfaces) | |
Video editing |
This place covers:
Texture generation
- Textures; endless, periodic pattern
- Texture synthesis, procedural textures
- Neural style transfers
- Brush strokes
- Fractals; Julia sets; Koch curves
Colour generation, changing of selected colours
- Colour palettes, schemes; colour LUT; CLUT
- False colours
- Simulation of watercolour, oil paint, airbrush
Illustrative examples:
This place does not cover:
Inpainting |
Attention is drawn to the following places, which may be of interest for search:
Texture mapping | |
Colour palettes, CLUTs for displays | |
Colour space manipulation |
In patent documents, the following abbreviations are often used:
LUT | look-up table |
CLUT | colour look-up table |
This place covers:
- Reconstruction from tomographic projections, i.e. measurements of an unknown object function (e.g. density of matter, activity distribution) using penetrating radiation or electromagnetic waves, described by radiation transport equations, e.g. integration along lines (= Radon transform), e.g. for refraction tomography, CT, SPECT, PET, tomosynthesis, optical tomography.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Impedance measuring for diagnostic purposes | |
Apparatus for radiation diagnosis | |
Radiation diagnosis devices using data or image processing specially adapted for radiation diagnosis | |
Diagnostic device using ultrasound |
Attention is drawn to the following places, which may be of interest for search:
Image enhancement in general | |
Image analysis, incl. biomedical image inspection, image registration, segmentation, analysis of motion, analysis of geometric attributes | |
Depth or Shape recovery, from multiple images | |
Analysis of materials using tomography, e.g. CT | |
NMR | |
Measuring and detection of X-radiation | |
ICT specially adapted for processing medical images, e.g. editing |
In this group, it is desirable to add the indexing codes of groups G06T 2211/404 - G06T 2211/464.
The following list of symbols from the series G06T 2211/404 - G06T 2211/464 should be allocated to documents in G06T 11/003 whenever relevant:
- G06T 2211/404 Angiography - Angiographic reconstruction includes all the reconstruction methods concerning vessels, tree structures etc.
- G06T 2211/408 Dual energy - Reconstruction from dual or multi energy acquisition, polychromatic X-rays, photon-counting CT
- G06T 2211/412 Dynamic - Dynamic reconstruction, i.e. moving objects are involved or motion compensation is required (e. g.: heart, lung movement, etc...)
- G06T 2211/416 Exact reconstruction - Exact or quasi-exact reconstruction algorithms (in contrast to approximate algorithms)
- G06T 2211/421 Filtered Back Projection based methods (the projection data can be handled sequentially, view-by-view)
- G06T 2211/424 Iterative - Iterative methods including all the methods using iterations independent of the reconstruction method per-se (e.g. maximum likelihood (ML) or maximum a posteriori (MAP) estimation, regularisation, compressed sensing)
- G06T 2211/428 Real-time - Real time reconstruction, e.g. fluoroscopy, intra-operative CT
- G06T 2211/432 Truncation - All or part of the data from the detectors are spatially truncated, or incomplete projection data is used.
- G06T 2211/436 Limited angle - limited-angle or few view acquisition, tomosynthesis
- G06T 2211/441 AI-based methods, e.g. deep learning or convolutional artificial neural networks
- G06T 2211/444 Low dose acquisition, reduction of radiation dose
- G06T 2211/448 Involving metal artefacts, streaking artefacts, beam hardening, photon starvation
- G06T 2211/452 Involving suppression of scattered radiation or scatter correction
- G06T 2211/456 Optical coherence tomography [OCT]
- G06T 2211/461 Phase contrast imaging or dark field imaging
- G06T 2211/464 Dual or multimodal imaging, i.e. combining two or more imaging modalities, e.g. PET-CT, PET-MRI
In patent documents, the following abbreviations are often used:
CT | Computed Tomography |
NMR | Nuclear Magnetic Resonance |
MRI | Magnetic Resonance Imaging |
SPECT | Single-Photon-Emission Computed Tomography |
PET | Positron Emission Tomography |
This place covers:
Specific pre-processing for tomographic reconstruction
- Calibration
- Source positioning
- Synchronisation
- Scouts
- Rebinning
- Scatter correction
- Attenuation correction
- Metal artefact reduction (MAR)
Example: Scatter and beam hardening correction in CT applications
This place covers:
- Inverse problem, transformation from projection-space into object-space
- Fourier methods
- Algebraic methods
- Back-projection
- Statistical Methods, e.g. maximum likelihood
- Compressed sensing, sparsity
- AI-based methods, e. g. neural networks
Example: Reconstruction method for cone-beam CT
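The back-projection step listed above can be sketched for a toy parallel-beam setup with only two orthogonal views (an illustrative sketch of unfiltered back-projection; real FBP uses many view angles plus ramp filtering, and iterative methods refine the estimate repeatedly):

```python
def project(img, axis):
    """Parallel-beam projection of a square image: sum along rows
    (axis=0, a 0-degree view) or along columns (axis=1, 90 degrees)."""
    n = len(img)
    if axis == 0:
        return [sum(img[y][x] for x in range(n)) for y in range(n)]
    return [sum(img[y][x] for y in range(n)) for x in range(n)]

def back_project(p_rows, p_cols):
    """Unfiltered back-projection from two orthogonal views: smear each
    projection value back along its ray and accumulate.  With only two
    views this merely localises a single bright voxel."""
    n = len(p_rows)
    return [[p_rows[y] + p_cols[x] for x in range(n)] for y in range(n)]

# A single bright pixel reconstructs at the maximum of the back-projection
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
bp = back_project(project(img, 0), project(img, 1))
```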
This place covers:
- Specific post-processing after tomographic reconstruction
- Processing which relies essentially on unique properties of tomographic images, e.g. projection geometry or interactions of radiation with matter
- Voxelisation
- Artefact correction (e.g. scatter, metal, cone-beam)
Example: Method for post-reconstructive correction of images of a computer tomograph
This place covers:
- Rendering, scan conversion of vectors, lines, ellipses, circles
- Offset curves, contour curves
- Wide, thick lines or strokes
- Splines, B-splines, NURBS; Bézier, algebraic, parametric, polynomial, cubic curves
- Approximation of curves or polygons
- Antialiasing of lines and curves, e.g. using supersampling, subpixel or area weighting
- Font rendering, e.g. scalable, outline, contour, edge fonts
- Sketching; freehand curve drawing
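Scan conversion of a line as covered here is classically done with Bresenham's midpoint algorithm, which selects the covered grid pixels using integer arithmetic only. A compact sketch:

```python
def bresenham(x0, y0, x1, y1):
    """Scan-convert a line segment into the grid pixels it covers,
    using only integer arithmetic (Bresenham/midpoint algorithm,
    generalised to all octants via the error term)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # step in y
            err += dx
            y0 += sy
    return pixels

line = bresenham(0, 0, 4, 2)           # 5 pixels from (0,0) to (4,2)
```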
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Vehicle instruments | |
Printer fonts |
Attention is drawn to the following places, which may be of interest for search:
Vector coding | |
Filling a planar surface by adding surface attributes | |
Entering handwritten data | |
Font handling; Temporal or kinetic typography | |
Feature extraction by contour coding | |
Display character generators |
This place covers:
Illustrative examples:
- Diagram, graph layout; directed graph; flow graph; flowchart
- Venn diagram; nested tree-map
- Pie, tile, column, bar, business charts
- 2D and 3D Visualization of data; fluid flows; vector fields; scattered data
- Sketched diagrams or graphs
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Navigational instruments, e.g. for aircrafts | |
ICT specially adapted for bioinformatics-related data visualisation, e.g. displaying of maps or networks | |
ICT specially adapted for medical reports, e.g. generation or transmission thereof |
Attention is drawn to the following places, which may be of interest for search:
Animation of fluid flows, 2D character animation | |
Input devices, GUIs | |
GUI programs, e.g. file browsers | |
Menu systems, graphical querying | |
Administration, e.g. office automation or reservations; resource or project management | |
Finance, e.g. banking, investment or tax processing; Insurance, e.g. risk analysis or pensions | |
Network visualisation or monitoring |
This place covers:
- Polygon scan conversion; rasterisation
- Scan-line algorithms, fragment processing
- Antialiasing, supersampling, subpixel or coverage masks
- Tile-based rendering
- Filling of a 2D shape, e.g. polygon, circle, ellipse, region, area
- Interior/exterior determination; edge lists or edge flags
- Colour blends, gradient fills, seed filling, e.g. for vector graphics
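Polygon scan conversion with an even-odd scan-line rule can be sketched as follows (a minimal illustration; production rasterisers use incremental edge tables, tiles and coverage masks):

```python
def scanline_fill(polygon, width, height):
    """Rasterise a polygon with the even-odd scan-line rule: for each
    scan line, compute where it crosses the polygon edges, sort the
    crossings, and fill pixels between alternating pairs."""
    filled = set()
    n = len(polygon)
    for y in range(height):
        yc = y + 0.5                       # sample at pixel centres
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            if (y0 <= yc) != (y1 <= yc):   # edge crosses the scan line
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(width):
                if left <= x + 0.5 < right:
                    filled.add((x, y))
    return filled

# A 2x2 axis-aligned square covers exactly four pixel centres
pixels = scanline_fill([(1, 1), (3, 1), (3, 3), (1, 3)], 5, 5)
```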
Illustrative example:
Attention is drawn to the following places, which may be of interest for search:
Drawing or scan conversion of lines and fonts | |
3D image rendering (architectures) | |
Control of the frame buffer(s) |
In patent documents, the following words/expressions are often used as synonyms:
- "rasterising", "scan conversion" and "rendering"
This place covers:
- Editing of bitmaps or vector graphics
- Page layout, page composition, e.g. photo-album, collages, business or greeting cards
- Combining small images by editing in order to generate a new (big) one
- Graphical simulations, e. g. for 2D cosmetic or hairstyle
- Electronic or desktop publishing (DTP), Page Description Language (PDL), PostScript, TeX
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Face sketching with eye witnesses | |
PDL specifically for printers | |
ICT specially adapted for processing medical images, e.g. editing |
Attention is drawn to the following places, which may be of interest for search:
Mosaic or panoramic images | |
Image registration | |
Annotating 3D objects with text | |
Input devices, GUIs | |
Formatting, i.e. changing of presentation of documents | |
Form filling or merging of text | |
Document analysis | |
Composing, repositioning or geometrically modifying originals, from scanning |
In patent documents, the following abbreviations are often used:
DTP | Desktop Publishing |
PDL | Page Description Language |
This group is not used for classification. Its subject-matter is covered by G06F 3/00 and subgroups.
This place covers:
Generating and displaying a sequence of images of artwork or model positions in order to create the effect of movement in a scene.
Animation of data representing a 3D or 2D image model or object.
Time-related computation of 2D or 3D images, i.e. the generation of a sequence of 2D or 3D images, is classified in this group.
This group is also given as classification to indicate that animation aspects are present but the invention lies in another group than G06T 13/00.
Documents only dealing with related subject-matter, for example motion capture for animation or navigation in virtual worlds, and merely mentioning animation in passing are not classified in G06T 13/00, i.e. the generation of an animation has to be a substantive part of the document for it to be classified here.
Attention is drawn to the following places, which may be of interest for search:
Geometric image transformations for image warping | |
Motion capture (for animation) | |
3D modelling for computer graphics | |
Manipulation of 3D models for computer graphics | |
Navigation in virtual worlds | |
Video games | |
Computer aided design using simulation | |
Processing, recording or transmission of stereoscopic or multi-view image signals | |
Model based coding of video objects |
Documents on deforming meshes for animation purposes get both classifications: G06T 13/00 or one of its subgroups, and G06T 17/20.
The series G06T 2213/00 of Indexing Codes is reserved for the use of documents classified in G06T 13/00 and subgroups. They should be allocated to documents in G06T 13/00 and subgroups whenever relevant:
Head group of indexing scheme for animation. This symbol should not be allocated to any documents because the group only serves as an internal node in the group hierarchy. | |
Animation description languages: computer languages for the description of an animation. | |
Animation software package: also includes hardware packages for animation. | |
Rule based animation: e.g. rules for behaviour, script, personality. |
Furthermore, Indexing Codes from the series G06T 2200/00 and G06T 2210/00 should be allocated to documents whenever relevant. Specific symbols from these series that are especially relevant for the documents in a certain subgroup are mentioned under the "Specific rules for classification" of the respective subgroups.
In this place, the following terms or expressions are used with the meaning indicated:
Animation system | traditional animation systems are based on key-frames, which are a succession of individual states (the position, orientation, and current shape of objects) specified by an animator or user |
In patent documents, the following words/expressions are often used as synonyms:
- "simulation (of motion)" and "animation"
This place covers:
Subject matter wherein the animated image data presents a three-dimensional image model or object.
Means or steps for the generation of a sequence of 3D images.
Documents in this group concern the generation of an animation of 3D objects in general and articulated 3D objects not representing characters.
Simulations with 3D objects (e.g. bouncing balls) or 2D surfaces in 3D space (e.g. cloth) are classified here.
This place does not cover:
Nominally claimed subject-matter directed to animation with significant user interaction or manipulation |
Attention is drawn to the following places, which may be of interest for search:
Coding of wireframe meshes for animation | |
Simulating properties, behaviour or motion of objects in video games |
For documents concerning both 2D and 3D animation of objects the first place priority rule is applied, i.e. they are classified only in G06T 13/20 or its subgroups.
Documents where cloth moves according to wind effects are classified in both subgroups G06T 13/20 and G06T 13/60.
For specific aspects of documents in this group the following additional Indexing Codes from the series G06T 2210/00 should be allocated to documents in G06T 13/20 and subgroups whenever relevant:
For animation of cloth: G06T 2210/16
For collision of 3D objects: G06T 2210/21
For fluid flows: G06T 2210/24
For animation using particles: G06T 2210/56
In this place, the following terms or expressions are used with the meaning indicated:
CFD | Computational fluid dynamics |
This place covers:
Means or steps for the generation of an animation sequence based on audio data.
The input is audio data, e.g. music or speech, i.e. not written text.
Changes e.g. in motion, colour, shape or position of objects in the animation are generated based on time events in the audio data, e.g. the beat in the music or the change of instrumentation.
This place does not cover:
Electrophonic musical instruments | |
Emotion analysis from speech for face animation or talking heads | |
Lip-synchronization or synthesis of lip shapes (visemes) from speech for face animation or talking heads |
Attention is drawn to the following places, which may be of interest for search:
Animation based on written text | |
Video editing or indexing or timing |
Documents where the audio input animates a 2D object are classified in both subgroups G06T 13/205 and G06T 13/80.
This place covers:
Subject matter wherein the animated object exhibits lifelike motions or behaviours.
Means or steps for the generation of an animation sequence of articulated objects representing virtual characters or for the generation of an animation sequence of "body" parts.
The animated characters herein include, e.g. humans, animals or virtual beings.
Animation of a character normally consists of an articulated skeleton surrounded by an implicitly defined volume or a wireframe surface mesh.
Lifelike motions include walking, running, waving or talking. Lifelike behaviours include showing emotions or reactions to events.
Animation of e.g. faces, lips, eyes, gestures, hair or feathers on a character.
Documents concerning only the synthesising aspect of character animation for tele- or video-conferencing (no image capturing, no data transmission) are also covered here.
This place does not cover:
Interaction of avatars in virtual worlds | |
Interaction of avatars in virtual worlds for business | |
Tele- or Video-conferencing |
Attention is drawn to the following places, which may be of interest for search:
Animation of articulated objects in general, i.e. not exclusively or not with the main application for character animation | |
Garment try-on simulators | |
Computing the motion of game characters with respect to other game characters, virtual objects or elements of a game scene | |
Head tracking input arrangements for interaction between user and computer | |
Eye tracking input arrangements for interaction between user and computer | |
Emotion analysis from speech for face animation or talking heads | |
Lip-synchronization or synthesis of lip shapes (visemes) from speech for face animation or talking heads |
- Documents where the characters are only 2D are classified in both subgroups G06T 13/40 and G06T 13/80.
- Documents where the hair on a character is moved by wind effects are classified in both subgroups G06T 13/40 and G06T 13/60.
- Documents where the animation data for the character results from motion capture of real characters are classified in both subgroups G06T 7/00 and G06T 13/40.
In this place, the following terms or expressions are used with the meaning indicated:
Avatar | graphical representation of the user or the user's character |
(inverse) kinematics | calculates the motions necessary to achieve a desired position of the character |
Mocap | motion capture |
Motion retargeting | transferring the motion from one character to another, different one |
Skeleton | tree structure composed of several joints to facilitate modelling the motion of the character |
Skinning | technique to deform the skin from the deformation of the skeleton |
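The skinning technique defined above can be sketched in 2D as linear blend skinning (a toy version with hypothetical joint transforms; production skinning blends 4x4 joint matrices over a full mesh):

```python
def skin_vertex(rest_pos, joints, weights):
    """Linear blend skinning in 2D: each joint contributes its own
    transformed copy of the rest-pose vertex, blended by a weight.
    `joints` is a list of (rotation_matrix, translation) pairs, with
    the 2x2 matrix flattened as (a, b, c, d); `weights` sums to 1."""
    x, y = rest_pos
    out_x = out_y = 0.0
    for ((a, b, c, d), (tx, ty)), w in zip(joints, weights):
        # Apply the joint transform [[a, b], [c, d]] plus (tx, ty)
        out_x += w * (a * x + b * y + tx)
        out_y += w * (c * x + d * y + ty)
    return (out_x, out_y)

identity = ((1.0, 0.0, 0.0, 1.0), (0.0, 0.0))
shifted  = ((1.0, 0.0, 0.0, 1.0), (2.0, 0.0))   # translate +2 in x

# A vertex weighted half-and-half between the two joints moves by +1
p = skin_vertex((1.0, 0.0), [identity, shifted], [0.5, 0.5])
```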
In patent documents, the following words/expressions are often used as synonyms:
- "Avatar" and "character"
This place covers:
Subject-matter wherein the animated images are associated with natural phenomena.
Means or steps for the generation of a simulation of natural elements or phenomena.
Documents concerning:
- the simulation of rain, water, foam, water waves, clouds, fog, snow, fireworks, explosions or
- wind effects on grass, plants, flags or hair or
- growing processes of plants or beings or
- destruction processes
are classified here.
This place does not cover:
Physical forces (other than wind) acting on 3D objects, e.g. simulation of a flying bullet or bouncing of a ball | |
The simulation of behavioural effects of characters, e.g. the flee behaviour of sea anemones |
Attention is drawn to the following places, which may be of interest for search:
Simulation of fluid flows in general (3D flows) | |
Simulation of fluid flows in general (2D flows) | |
Computer aided design using simulation |
Documents where the hair on a character is moved by wind effects are classified in both subgroups G06T 13/40 and G06T 13/60.
Documents where cloth moves according to wind effects are classified in both subgroups G06T 13/20 and G06T 13/60.
For specific aspects of documents in this group, the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:
For fluid flows: G06T 2210/24
For animation using particles, e.g. fireworks, dust: G06T 2210/56
For weathering effects like e.g. aging, corrosion: G06T 2210/64
In this place, the following terms or expressions are used with the meaning indicated:
Weathering | aging process of material by exposure to weather, e.g. wind, water, certain temperatures |
This place covers:
- Subject matter wherein the animated image data is a 2D image object.
- Means or steps for time related computation of a sequence of 2D images, e.g. a small moveable 2D graphic pattern on a display, often used in video game animation.
- Generation of 2D animated cartoons.
- Animation of 2D text, 2D letters.
- Change over in slide shows, leafing through digital photo albums.
- General aspects of 2D morphing or keyframe interpolation.
- All documents exclusively dealing with the animation of 2D images, i.e. no 3D animation.
- Generation of 2D motion blur.
Attention is drawn to the following places, which may be of interest for search:
Geometric image transformations for image warping | |
Video editing or indexing or timing |
- Documents where the animated 2D object is a character, i.e. 2D character animation, are classified in both subgroups G06T 13/40 and G06T 13/80.
- Documents where the motion blur concerns only the background image are classified in both subgroups G06T 13/20 and G06T 13/80.
- Documents where the audio input animates a 2D object are classified in both subgroups G06T 13/205 and G06T 13/80.
- For documents concerning both 2D and 3D animation of objects with similar algorithms the first place priority rule is applied, i.e. they are classified only in G06T 13/20 or its subgroups, not in G06T 13/80.
- Documents concerning morphing or warping are additionally classified with the Indexing Code G06T 2210/44.
In this place, the following terms or expressions are used with the meaning indicated:
Keyframe interpolation | generation of a smooth transition between a starting and an ending keyframe |
Morphing | continuous transformation between images (shape and colour) |
Sprite | 2D image or animation that is integrated into a larger 2D scene |
Warping | geometric transformation of the 2D object shape |
In patent documents, the following words/expressions are often used as synonyms:
- "Keyframe interpolation" and "inbetweening"
- "Morphing" and "warping"
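Keyframe interpolation (inbetweening) as defined above can be sketched as a linear blend of keyframe attributes (illustrative code; real animation systems typically use easing functions or spline curves instead of plain linear blending):

```python
def interpolate_keyframes(key0, key1, t):
    """Inbetweening: linearly interpolate every attribute between two
    keyframes at parameter t in [0, 1].  Keyframes are dicts of
    numeric attributes (position, rotation angle, ...)."""
    return {name: (1 - t) * key0[name] + t * key1[name] for name in key0}

start = {"x": 0.0, "y": 0.0, "angle": 0.0}
end   = {"x": 10.0, "y": 4.0, "angle": 90.0}

mid = interpolate_keyframes(start, end, 0.5)
# halfway between the keyframes: x=5.0, y=2.0, angle=45.0
```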
This place covers:
Means or steps for generating a displayable monoscopic image from a 3D model or 3D data set.
The 3D model is a description of three-dimensional objects in a strictly defined language or data structure.
A 3D data set may include voxel data.
Included in this group are input data sets of 3D coordinates or higher dimensionality.
This group covers the geometry subsystem of the graphics rendering pipeline, i.e. modeling transformation, lighting, viewing transformation, clipping, mapping to viewport.
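The geometry subsystem described above (viewing transformation, projection, mapping to viewport) can be sketched for a single vertex. This is a toy pinhole-camera model with illustrative parameter names; lighting and clipping are omitted.

```python
def geometry_stage(vertex, camera_z, width, height):
    """Toy geometry subsystem: transform a world-space vertex into the
    space of a pinhole camera on the z-axis, perspective-project it
    onto the z = 1 image plane, then map the normalised result to
    viewport pixel coordinates."""
    x, y, z = vertex
    # Viewing transformation: move the world into camera space
    zc = z - camera_z
    # Perspective projection onto the plane z = 1
    xp, yp = x / zc, y / zc
    # Viewport mapping: [-1, 1] normalised coordinates -> pixels
    px = (xp + 1) * 0.5 * width
    py = (1 - (yp + 1) * 0.5) * height   # flip y: screen origin top-left
    return (px, py)

# A vertex on the optical axis lands in the viewport centre
centre = geometry_stage((0.0, 0.0, 2.0), camera_z=0.0, width=640, height=480)
```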
This place does not cover:
Rasterization | |
Visualization of models without surface characteristics or attributes | |
Manipulation and visualization of 3D models for computer graphics | |
Image signal generator |
Attention is drawn to the following places, which may be of interest for search:
Video games |
The boundaries between G06T 15/00 (in particular G06T 15/08 and G06T 15/10) on the one hand, and G06T 3/06 and subgroups on the other hand, are not yet completely determined. Thus, double classification should be considered.
Architectural elements are in general classified in G06T 15/005. However, if the architectural element is only related to a certain part or function within the graphics pipeline (e.g. texture mapping or ray tracing) the document is classified in the respective subgroup (e.g. G06T 15/04 for texture mapping) and additionally the Indexing Code G06T 2200/28 is assigned.
The series G06T 2215/00 of Indexing Codes is reserved for the use of documents classified in G06T 15/00 and subgroups. They should be allocated to documents in G06T 15/00 and subgroups whenever relevant:
Indexing scheme for image rendering: SHOULD BE EMPTY! | |
curved planar reformation of 3D line structures: CPR of tubular structures (e.g. bronchia, arteries, colon, vertebrae), deployment of line structures in 3D to a 2D plane | |
gnomonic or central projection: projection from a center of an object, e.g. a ball, to the surrounding surface, related to VTV (virtual television) | |
shadow map, environment map: generation and use of shadow maps, soft shadows, environment maps | |
using real world measurements to influence rendering: e.g. shadow based on actual light, viewport based on viewer's pose, texturing with real-time output from camera |
In this place, the following terms or expressions are used with the meaning indicated:
OpenGL | Open Graphics Library: standard specification defining an application programming interface (API) for writing applications that produce 2D and 3D computer-graphics |
Direct3D | standard specification defining an API for writing graphic applications; is part of DirectX |
Graphics pipeline | rendering pipeline |
In patent documents, the following words/expressions are often used as synonyms:
- "rasterization" and "rendering"
This place covers:
Functional or operational structure of an image rendering computer system.
Documents in this group focus largely on the way in which the central processing unit (CPU) interacts internally with the different units (e.g. the GPU) and accesses memories.
Relevant information is the selection and interconnection of hardware components or functional units in 3D rendering systems.
Hardware and software shader units.
This subgroup is assigned as the classification if the document covers elements of the whole pipeline architecture or if the architectural element covers multiple functions of the graphics pipeline.
This place does not cover:
Architectures for general purpose image data processing | |
Memory management for general purpose image data processing | |
Program control in graphics processors | |
Use of graphics processors for other purposes than rendering | |
Graphics controllers, e.g. control of visual indicators or display of a graphic pattern |
In this place, the following terms or expressions are used with the meaning indicated:
GPU | graphics processing unit |
Shader unit | instruction sets (in software or hardware) to calculate rendering effects on the graphics hardware |
In patent documents, the following words/expressions are often used as synonyms:
- "shader unit" and "hardware shader"
This place covers:
Means or steps for rendering a scene in a style intended to look like a painting or drawing.
Illustrative examples of non-photorealistic rendering include cartoons, sketches, paintings or drawings.
This place does not cover:
Generation of texture or colour, e.g. brush strokes |
In patent documents, the following words/expressions are often used as synonyms:
- "Cartoon-style rendering", "Freehand-style rendering", "Handmade-style rendering", "Ink rendering", "Painterly rendering", "Pen rendering", "Pencil rendering", "Silhouette rendering", "Sketchy rendering", "Toon-Style rendering" and "non-photorealistic rendering"
This place covers:
Means or steps for applying or mapping surface detail or colour pattern to a computer-generated graphic, geometry or 3D-model.
Texture mapping used for the generation of a surface image in final format or form is classified herein.
MIP maps, bump mapping, displacement mapping, environment mapping, shadow maps.
This place does not cover:
Generation of texture |
Documents dealing with shadow maps are classified in both subgroups G06T 15/04 and G06T 15/60.
Documents dealing with environment mapping are classified in both subgroups G06T 15/04 and G06T 15/506.
Documents concerning environment maps or shadow maps are additionally classified with the Indexing Code G06T 2215/12.
In this place, the following terms or expressions are used with the meaning indicated:
Texel | texture element or texture pixel |
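As an illustration of the texel concept defined above, the following minimal sketch shows bilinear filtering of texels at a normalized texture coordinate; the function name and the greyscale nested-list representation are assumptions for illustration only, not part of the scheme:

```python
def sample_bilinear(texture, u, v):
    """Bilinearly sample a texture (2D nested list of grey values)
    at normalized coordinates (u, v) in [0, 1]."""
    h, w = len(texture), len(texture[0])
    # Map to texel space; clamp so the 2x2 neighbourhood stays in range.
    x = min(max(u * (w - 1), 0.0), w - 1)
    y = min(max(v * (h - 1), 0.0), h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate horizontally on both rows, then vertically between them.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

A MIP map stores prefiltered versions of the texture at successively halved resolutions; the same sampling routine would then be applied at the level matching the on-screen texel density.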
This place covers:
Means or steps for creating an image by tracing rays from a viewpoint through each pixel to a visible point on an object.
Ray casting for hidden part removal is classified in both subgroups G06T 15/06 and G06T 15/40.
Generation of a photon map via photon tracing is classified in both subgroups G06T 15/06 and G06T 15/506.
In this place, the following terms or expressions are used with the meaning indicated:
Ray casting | non-recursive variant of ray tracing |
In patent documents, the following words/expressions are often used as synonyms:
- "ray tracing" and "ray casting (especially in early patent documents)"
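The non-recursive variant (ray casting) can be sketched for a single sphere primitive as follows; the function name and argument conventions are illustrative assumptions, not part of the scheme:

```python
import math

def cast_ray(origin, direction, center, radius):
    """Non-recursive ray casting against a sphere: return the distance
    t >= 0 along the ray to the nearest visible intersection, or None.
    Solves |o + t*d - c|^2 = r^2 for t; `direction` is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t >= 0 else None
```

Recursive ray tracing would, at the intersection point, spawn further reflection, refraction and shadow rays; ray casting stops at this first hit.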
This place covers:
Means or steps for displaying a two-dimensional representation of three-dimensional volume data sets.
Volume data sets are typically voxels or 3D data sets consisting of groups of 2D slice images acquired by e.g. CT, MRT.
Illustrative examples of volume rendering techniques are Direct Volume Rendering Techniques (e.g. splatting, shear warp), Maximum Intensity Projection (MIP), Minimum Intensity Projection, Curved Planar Reformation (CPR), Multiplanar Reformatting (MPR), Curved Multiplanar Reformatting (CMPR).
Technical details of the projection or mapping technique used for volume rendering.
This place does not cover:
Definition of the position of the projection plane, surface or curve for volume rendering |
Attention is drawn to the following places, which may be of interest for search:
Volumetric displays for the representation of 3D data sets |
Documents concerning curved planar reformation of tubular structures are additionally classified with the symbol G06T 2215/06.
In this place, the following terms or expressions are used with the meaning indicated:
CMPR | Curved Multi-Planar Reformatting |
CPR | Curved Planar Reformation |
MIP | Maximum (or Minimum) Intensity Projection |
MPR | Multi-Planar Reformatting |
In patent documents, the following words/expressions are often used as synonyms:
- "curved Planar Reformatting", "curved Multiplanar Reformatting", "curved Multiplanar Reformation", "deployment" and "Curved Planar Reformation"
This place covers:
Means or steps for changing the visualization of a graphical object due to view transformations.
Generation of views, multiple views.
Visualization of a graphical object through projection, e.g. parallel projections, oblique projections, gnomonic projections
Mapping of the 3D graphical object on a subspace for visualization, e.g. on (a part of) a plane or on a surface in 3D space (e.g. a bent virtual screen)
This place does not cover:
Visualization of volume data sets | |
Perspective projections | |
Changes in the visualization related to lighting effects | |
Changes in the visualization due to geometric transformations of the object (rotation, translation etc.) | |
Stereoscopic imaging or 3D displays |
Attention is drawn to the following places, which may be of interest for search:
Geometric transformations in the plane of the image, i.e. from 2D to 2D |
The boundaries between G06T 15/10 on the one hand, and G06T 3/08 on the other hand are not yet completely determined. Thus, double classification should be considered.
Documents concerning gnomonic or central projections are additionally classified with the Indexing Code G06T 2215/08.
This place covers:
Means or steps for presenting a 3D-object on a screen such that objects closer to the viewpoint appear larger than objects farther from the viewpoint.
Perspective projections of graphical objects.
Subject matter related to details of viewpoint determination or computation with claimed or disclosed rendering aspects.
This place does not cover:
View determination or computation without rendering | |
Changing the viewpoint for navigation without details of view generation | |
Transformation of image signals corresponding to virtual viewpoints |
Attention is drawn to the following places, which may be of interest for search:
Changing parameters of virtual cameras in video games | |
Navigational Instruments, e.g. visual route guidance with on-board computers using 3D or perspective road maps | |
Interaction techniques, e.g. control of the viewpoint to navigate in a 3D environment | |
TV systems, e.g. alteration of picture orientation, perspective, position etc. | |
Stereoscopic images |
In this place, the following terms or expressions are used with the meaning indicated:
Multiple views | rendering a graphical object seen from different viewpoints |
View generation | visual rendering of geometric properties of a graphical object seen from a certain viewpoint |
Viewpoint alteration | change of a viewpoint (of a virtual camera) |
Virtual camera | display of a view of a 3D virtual world |
Virtual Studio | technological tools for simulating a physical television or movie studio, the image of the virtual camera is rendered in real-time from the same perspective as the real camera in 3D space |
This place covers:
Means or steps for rendering a 3D-object or scene using a set of two-dimensional images of it.
Generation of a new view of a graphics object exclusively from 2D images of the object without prior generation of a 3D model.
Rendering using billboards.
Pixel based rendering or point based rendering of 3D objects which are not volume data.
Depth image-based rendering.
This place does not cover:
From multiple images | |
Determining parameters from multiple pictures | |
Splatting of volume data | |
Rendering of a 3D model generated from 2D images of it |
In this place, the following terms or expressions are used with the meaning indicated:
IBR | image-based rendering |
Billboard | textured rectangles that are used as simplified version of 3D models for rendering |
This place covers:
Means or steps for eliminating those portions of graphics primitives that extend beyond a predetermined region.
The predetermined region may include a viewing volume or any subset of the view volume of any shape.
The shape of the graphics primitives that partly extend beyond the predetermined region is modified.
This place does not cover:
Cropping of 2D images |
Documents where a bounding box or shape is defined or used are additionally classified with the Indexing Code G06T 2210/12.
In this place, the following terms or expressions are used with the meaning indicated:
Bounding box or bounding shape | minimal box or convex polygon surrounding the graphic object |
Viewport | rectangular area on the screen for displaying the rendered graphical object |
In patent documents, the following words/expressions are often used as synonyms:
- "viewing volume", "view volume" and "view frustum"
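The modification of primitives that partly extend beyond the predetermined region can be sketched, for a line segment against an axis-aligned view volume, with a Liang-Barsky style parametric clip; function names are illustrative assumptions:

```python
def _lerp(a, b, t):
    """Point on the segment a->b at parameter t."""
    return [x + t * (y - x) for x, y in zip(a, b)]

def clip_segment(p0, p1, vmin, vmax):
    """Clip the segment p0->p1 against the axis-aligned volume
    [vmin, vmax] (any dimension).  Returns the clipped endpoints,
    or None if the segment lies entirely outside the volume."""
    t0, t1 = 0.0, 1.0  # parametric extent of the visible part
    for axis in range(len(p0)):
        d = p1[axis] - p0[axis]
        # Each slab contributes two half-space constraints p*t <= q.
        for p, q in ((-d, p0[axis] - vmin[axis]),
                     (d, vmax[axis] - p0[axis])):
            if p == 0:
                if q < 0:
                    return None  # parallel to and outside this slab
            else:
                t = q / p
                if p < 0:
                    t0 = max(t0, t)  # entering the slab
                else:
                    t1 = min(t1, t)  # leaving the slab
        if t0 > t1:
            return None
    return _lerp(p0, p1, t0), _lerp(p0, p1, t1)
```

Clipping a polygon against a view frustum follows the same principle, applied edge by edge against each frustum plane.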
This place covers:
Means or steps for determining which surfaces or part of surfaces of a graphic object are visible from a certain viewpoint and optionally removing them.
Hidden surface or line removal.
Culling, e.g. frustum culling, backface culling, frontface culling, occlusion culling. Culling removes graphics objects or scene graph nodes that are completely falling outside the view frustum. This is usually performed before clipping.
In this place, the following terms or expressions are used with the meaning indicated:
VSD | visible surface determination |
This place covers:
Means or steps for determining which surfaces or parts of surfaces of a graphic object are visible from a certain viewpoint and optionally removing them using Z-Buffer information.
In patent documents, the following words/expressions are often used as synonyms:
- "Z-Buffer" and "Depth-Buffer"
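The Z-Buffer (depth-buffer) principle referred to above can be sketched as follows; the fragment tuple layout and the smaller-is-closer depth convention are illustrative assumptions:

```python
def zbuffer_resolve(width, height, fragments):
    """Resolve visibility with a depth (Z) buffer.  `fragments` is an
    iterable of (x, y, z, colour) tuples; smaller z means closer to
    the viewer.  Returns (colour buffer, depth buffer) after
    hidden-surface removal."""
    INF = float('inf')
    depth = [[INF] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:  # fragment is closer than what is stored
            depth[y][x] = z
            colour[y][x] = c
    return colour, depth
```

Fragments may arrive in any order; only the closest one per pixel survives, which is why the technique needs no depth sorting of the primitives.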
This place covers:
Means or steps for determining intensity or colour on a surface of an object based on the interaction of light with the object, considering the surface properties or orientation.
This place covers:
Means or steps for computing an image or pixel value from several (source) images or pixel values taking into account their weighting factors.
Weighting factors are usually opacity or transparency associated values.
Compositing.
Vertex or geometry blending.
This place does not cover:
Video editing or indexing or timing |
In this place, the following terms or expressions are used with the meaning indicated:
Alpha channel or alpha transparency channel | a portion of each pixel's data that is reserved for transparency information |
Alpha compositing | combining an image with a background to create the appearance of partial or full transparency |
Matte | contains the coverage information, e.g. the shape of the object to be drawn |
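The alpha compositing described above can be sketched with the "over" operator; the function name and the non-premultiplied colour convention are illustrative assumptions:

```python
def over(src_rgb, src_a, dst_rgb, dst_a):
    """Composite a source pixel over a destination pixel
    ('over' operator, non-premultiplied colours, alpha in [0, 1])."""
    out_a = src_a + dst_a * (1.0 - src_a)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # fully transparent result
    out_rgb = tuple(
        (s * src_a + d * dst_a * (1.0 - src_a)) / out_a
        for s, d in zip(src_rgb, dst_rgb)
    )
    return out_rgb, out_a
```

With premultiplied alpha the division by `out_a` disappears, which is why renderers often store colours premultiplied.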
This place covers:
Means or steps for computing the amount of energy absorbed, reflected, diffracted or transmitted by an object (or element) to be 3D rendered.
Illumination models usually include composition, direction or geometry of the light source, surface orientation and/or surface properties of the object.
Local illumination models only take into account light arriving straight from the light source.
Global illumination models take into account light arriving after interaction with another object in the scene.
Direct light sources, indirect light sources, multiple light sources, physically based illumination models.
Generation of a photon map via photon tracing is classified in both subgroups G06T 15/06 and G06T 15/506.
In this place, the following terms or expressions are used with the meaning indicated:
BRDF | bidirectional reflectance distribution function |
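A local illumination model of the kind described above, taking into account light direction, surface orientation and surface properties, can be sketched as follows; the Phong-style term weights and the function name are illustrative assumptions:

```python
def local_illumination(normal, light_dir, view_dir,
                       ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
    """Local illumination: intensity at a surface point as the sum of
    ambient, Lambertian diffuse and Phong-style specular terms.
    All direction vectors are assumed normalized and pointing away
    from the surface point."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(normal, light_dir), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    reflect = [2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir)]
    r_dot_v = max(dot(reflect, view_dir), 0.0)
    return ambient + diffuse * n_dot_l + specular * (r_dot_v ** shininess)
```

A global illumination model would add terms for light arriving indirectly after interaction with other objects in the scene.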
This place covers:
Means or steps for rendering graphic objects by computing the balance of substantially all light energy coming toward and going away from every point on a surface.
In radiosity, the balance of light energy is usually independent of the viewpoint.
This place does not cover:
Subject matter directed to illumination models that only consider viewpoint dependent vectors |
This place covers:
Means or steps for determination and generation of a region of darkness on an object where light is at least partially blocked by another graphical object.
The blocking object herein might be a semitransparent object.
Shadow computation normally refers to computation of shadow caused by one object onto another object.
Concave objects, where the shadow caused by one portion of the object falls onto another portion of the same object, are classified herein; e.g. an "L"-shaped object can cast a shadow from its vertical portion onto its horizontal portion.
Documents concerning the calculation of the position of the light source from the shadow are classified in both subgroups G06T 15/50 and G06T 15/60.
Documents concerning shadow maps are classified in both subgroups G06T 15/04 and G06T 15/60 and are additionally classified with the Indexing Code G06T 2215/12.
This place covers:
Means or steps for assigning colour or intensity alterations or gradations in a particular area of a graphical object's surface based on its relationship with light.
The relationship with light herein includes the light vector, which consists of angle and distance, and may even include ambient light.
Surfaces may include polygons or curved surfaces or patches.
Interpolation of colour or shade based on vertex data or other pixels on the surface is classified herein.
Shading caused by the object blocking light on the back side of the same object with respect to a light source is classified herein.
This place does not cover:
Shader units |
In this place, the following terms or expressions are used with the meaning indicated:
Scanline interpolation | Linear interpolation of values along each surface edge, followed by interpolation of values in the interior of each surface from the left edge to the right edge, i.e. along a scanline |
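The interior step of the scanline interpolation defined above can be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
def scanline_shade(x_left, x_right, i_left, i_right, xs):
    """Scanline interpolation: given the intensities i_left/i_right
    already interpolated along the left and right surface edges at
    x positions x_left/x_right, linearly interpolate the intensity
    for each pixel x in xs along the scanline."""
    span = x_right - x_left
    out = []
    for x in xs:
        t = (x - x_left) / span if span else 0.0
        out.append(i_left + t * (i_right - i_left))
    return out
```

Repeating this for every scanline of a polygon, with the edge intensities themselves interpolated from the vertex values, yields Gouraud-style smooth shading.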
This place covers:
Means or steps for interpolating surface normals from the vertices of a graphical object when rasterizing a surface, thereby calculating specular reflections on the graphical object.
This place covers:
Means or steps for producing a smooth variation of surface intensity over a surface by bilinearly interpolating the color or intensities from the vertices of a graphical object.
This place covers:
Means or steps for generating a description of a 3D model or scene.
The 3D model description is usually generated from point clouds, 2D images, mathematical definitions for the description of curves, surfaces or volumes or data from different sensors.
Marching Cubes, sampled distance fields.
Image data format conversions, e.g. converting polar coordinates to rectangular coordinates or IGES to combinatorial geometry descriptions.
This place does not cover:
Depth or shape recovery | |
Manipulating 3D models or images for computer graphics | |
Route guidance using 3D or perspective road maps including 3D objects and buildings | |
Generation of 3D objects with NC-machines | |
CAM (Computer aided manufacturing) | |
CAD (Computer aided design) in general |
Attention is drawn to the following places, which may be of interest for search:
Methods for drafting or marking-out cutting-out patterns for cloth | |
Collision detection for path planning of manipulators | |
Collision detection for programme-controlled systems | |
Image signal generators |
Documents concerning image data format conversion are additionally classified with the Indexing Code G06T 2210/32 - image data format.
In this place, the following terms or expressions are used with the meaning indicated:
IGES | Initial Graphics Exchange Specification |
This place covers:
Means or steps for generating a hierarchical tree-based description of a 3D model or scene.
Documents concerning scene graphs are additionally classified with the Indexing Code G06T 2210/61 - scene description
In patent documents, the following words/expressions are often used as synonyms:
- "Bintree or binary tree" and "tree structure in which each node has at most two child nodes"
- "Quadtree or quad tree" and "tree structure in which each node has at most four child nodes"
- "K-tree" and "tree structure in which each node has at most K child nodes"
- "Hextree" and "tree structure in which each node has at most six child nodes"
- "Volume octree" and "tree structure in which each voxel is subdivided into at most 8 subvoxels"
- "Surface octree" and "Volume octree with incorporated surface information"
- "Multi tree" and "directed acyclic graph in which the set of nodes reachable from any node forms a tree"
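The volume octree described in the synonym list above can be sketched as a recursive subdivision of a voxel grid; the nested-list voxel representation and the tuple-based node encoding are illustrative assumptions:

```python
def build_octree(voxels, x, y, z, size):
    """Build a volume octree over a cubic voxel grid (nested lists,
    voxels[z][y][x] in {0, 1}; size a power of two).  A uniform cube
    becomes a leaf ('leaf', value); otherwise the cube is subdivided
    into 8 subvoxels ('node', [children...])."""
    vals = {voxels[z + k][y + j][x + i]
            for i in range(size) for j in range(size) for k in range(size)}
    if len(vals) == 1:
        return ('leaf', vals.pop())  # homogeneous region: stop subdividing
    h = size // 2
    children = [build_octree(voxels, x + i * h, y + j * h, z + k * h, h)
                for k in (0, 1) for j in (0, 1) for i in (0, 1)]
    return ('node', children)
```

A quadtree follows the same scheme in 2D with 4 children per node; a surface octree would additionally store surface information in the leaves.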
This place covers:
Means or steps for generating 3D models which relate to geographic data.
The geographic data is usually obtained from different sensors, e.g. LIDAR, stereo photogrammetry from aerial surveys, radar, infrared cameras, GPS, satellite photography and maps e.g. topographic maps, road maps, development plans.
Digital Elevation Models (DEM), contour maps, digital cartography.
Superimposing or overlaying of registered geographic data from different sensors.
Editing of maps, e.g. modelling of roofs or generation of 3D models for buildings displayed on a map.
Map revision, map updating.
Calculation of visibility fields for geographic areas.
Geographical fractal modeling.
This place does not cover:
Determination of transform parameters for the alignment of images, i.e. image registration | |
Navigation in a road network, GPS for navigation | |
Navigational Instruments, e.g. visual route guidance using 3D or perspective road maps (including 3D objects and buildings) |
Attention is drawn to the following places, which may be of interest for search:
Geometric image transformations for image registration |
This subgroup is an application oriented group. Therefore, whenever possible, documents classified herein should also be classified in a function oriented group.
In this place, the following terms or expressions are used with the meaning indicated:
GIS | Geographic Information Systems |
AMS | Automated Mapping System |
In patent documents the following expressions are often used as synonyms:
Chorography | description of a landscape |
Choropleth map | thematic map |
This place covers:
Means or steps for generating 3D models using boundary or volumetric representations of solid primitive objects.
Incremental feature generation, feature modification or modelling, and feature-based design are classified here.
Solid modelling via sheet modelling or via sweeping or extrusion of contours, areas or volumes, e.g. the generation of sweep objects or generalized cylinders.
Modelling of solids using volumetric representations, an "alternating sum of volumes" process, volume or convex decomposition or boundary representations.
Generation of 3D objects from 2D line drawings.
For specific aspects of documents in this group the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:
For convex hull for 3D objects: G06T 2210/12
For collision detection or intersection of 3D objects: G06T 2210/21
In this place, the following terms or expressions are used with the meaning indicated:
B-rep or BREP | boundary representation |
Alternating sum of volumes (ASV) process | a convex decomposition method for volumetric objects |
In patent documents, the following words/expressions are often used as synonyms:
- "sweep object" and "generalized cylinder"
This place covers:
Means or steps for the generation or modification of polygonal surface descriptions of 3D models or parts thereof.
Meshes, grids, tessellations, tessellated surface patches, triangulations, tilings are classified here.
Delaunay triangulation, Voronoi diagrams.
Concatenation of tessellated surface patches, T-junctions.
Meshes for finite element modelling.
This place does not cover:
Compression using wireframes | |
Computer-aided design using finite element methods |
Attention is drawn to the following places, which may be of interest for search:
Seismic models | |
Geologic models |
For specific aspects of documents in this group the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:
For modelling of cloth: G06T 2210/16
For collision detection or intersection of 3D objects: G06T 2210/21
In this place, the following terms or expressions are used with the meaning indicated:
FEM | Finite element modelling |
TIN | Triangulated irregular network |
T-junction | a spot where two polygons meet along the edge of another polygon |
This place covers:
Means or steps for modifying the structure of a mesh by inserting or deleting mesh vertices.
Generation of meshes with different level of detail from a source mesh.
Refinement or simplification of meshes, honeycomb scheme.
The refinement or coarsening may be local or global.
Documents concerning the generation of meshes with different levels of detail are additionally classified with the Indexing Code G06T 2210/36.
This place covers:
Means or steps for generating a meshfree surface description.
Polynomial surface descriptions, e.g. NURBS, Bézier surfaces, B-spline surfaces, Coons patches, Tensor product patches, without mesh generation or visualization based on tessellations.
Analytical surface descriptions.
Free-form surfaces.
In this place, the following terms or expressions are used with the meaning indicated:
NURBS | Non-Uniform Rational B-Spline |
This place covers:
Means or steps for changing 3D models, for adding information or for changing the visualization via a user interface.
View determination or computation without rendering details, geometric transformations of the whole 3D object to change the viewpoint.
Manipulating 3D models by multiple users in a collaborative environment.
Annotating or labelling of 3D models with text, markers
Dimensioning and tolerancing of 3D models, e.g. display of dimension information for each part
Display of 3D models as an exploded view drawing.
Unfolding or flattening of 3D models or graphs.
Positioning or defining a cut plane or a curved surface in a 3D volume data set, e.g. for projection in volume rendering.
Manipulating 3D data while displaying or updating several views at the same time, e.g. top, front, and side view or sagittal, coronal, and axial view for medical applications.
Virtual try-on or virtual 3D design systems, e.g. virtual dressing or fitting-rooms, virtual mannequins, virtual interior or garden design, architectural design, virtual car configurators.
For documents in this group the function of manipulating 3D objects prevails, not the details of how it is achieved. Therefore, the documents are usually general and do not contain specific technical details; e.g. documents concerning the change of the viewpoint via a GUI are classified here, whereas documents with mathematical details on the change of the viewpoint and the frustum are classified in G06T 15/20.
This place does not cover:
CAD-CAM (Computer Aided Design and Manufacturing) | |
Generation of 3D objects with NC-machines | |
Interaction techniques for graphical user interfaces |
Attention is drawn to the following places, which may be of interest for search:
2D cosmetic or hairstyle simulations | |
Video games | |
Computer-aided design | |
Transformation of image signals corresponding to virtual viewpoints |
The boundaries between G06T 19/00 on the one hand, and G06T 3/06 and subgroups and G06T 3/08 on the other hand are not yet completely determined. Thus, double classification should be considered.
The Indexing Code series G06T 2219/00 and below is reserved for documents classified in G06T 19/00 and subgroups. The codes should be allocated to documents in G06T 19/00 whenever relevant:
Indexing scheme for manipulating 3D models or images for computer graphics: SHOULD BE EMPTY! | |
annotating, labelling: annotating or labelling of 3D models or 3D images with text or markers | |
cut plane or projection plane definition: positioning or defining a cut plane or a curved surface in a 3D volume data set, e.g. for projection in volume rendering | |
dimensioning, tolerancing: dimensioning or tolerancing of 3D models, e.g. display of dimension information for each part of the model | |
exploded view: displaying 3D models as an exploded view drawing | |
flattening: unfolding or flattening of 3D models or graphs in a 2D plane | |
multi-user, collaborative environment: collaborative environments, multi-user environments | |
multiple view windows (top-side-front-sagittal-orthogonal): manipulating 3D data while displaying or updating several views at the same time, e.g. sagittal, axial, and coronal view or top, side, and front view |
The Indexing Code series G06T 2219/20 and below is reserved exclusively for documents classified in G06T 19/20. To each document classified in G06T 19/20 at least one of the symbols from this series should be allocated:
Indexing scheme for editing of 3D models: SHOULD BE EMPTY! | |
aligning objects, relative positioning of parts: aligning graphical objects or relative positioning of parts of a 3D model | |
assembling, disassembling: assembling and disassembling of parts of a 3D model | |
colour coding, editing, changing, or manipulating: colour modifications, e.g. colour coding, use of pseudo-colour, highlighting object parts in a different colour | |
rotation, translation, scaling: Euclidean transformations of the object or parts thereof, i.e. rotation, translation/dragging/shifting, reflection/mirroring, or size changes of a 3D object or parts thereof | |
shape modification: shape modifications of a 3D object, e.g. adding or deleting parts of the object, shearing, free-form deformations | |
style variation: modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style (e.g. bold lines, dotted lines), displaying more details of an object or of parts thereof in a separate window |
Furthermore, symbols from the Indexing Code series G06T 2200/00 and below as well as G06T 2210/00 and below should be allocated to documents in G06T 19/00 and subgroups whenever relevant.
For the documents in the group G06T 19/00 the following additional symbols from the Indexing Code series G06T 2210/00 and below are especially relevant and should be allocated whenever possible:
For architectural design: G06T 2210/04
For bandwidth reduction: G06T 2210/08
Convex hull for 3D objects: G06T 2210/12
For virtual dressing rooms: G06T 2210/16
For collision detection of 3D objects: G06T 2210/21
For medical applications concerning e.g. heart, lung, brain, tumours: G06T 2210/41
This place covers:
Means or steps for generating a sequence of images of a virtual movement (e.g. flight, walk, sail) through a 3D space or scene.
Navigation path or flight path determination.
Virtual navigation within human or animal bodies or organs, e.g. virtual medical endoscopy of the colon, of the ventricular system, of the vascular system, of the bronchial tree, or within 3D objects, e.g. virtual inspection of pipeline tubes.
Walk- or flight-through a virtual museum, a virtual building, a virtual landscape etc.
This place does not cover:
Navigational instruments, e.g. visual route guidance using 3D or perspective road maps (including 3D objects and buildings) | |
Interaction techniques for GUIs, e.g. control of the viewpoint to navigate in a 3D environment |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
ICT specially adapted for processing medical images, e.g. editing |
Attention is drawn to the following places, which may be of interest for search:
Segmentation; Edge detection | |
Analysis of geometric attributes | |
3D animation | |
Centreline of tubular or elongated structure | |
Virtual racing games |
In this place, the following terms or expressions are used with the meaning indicated:
Virtual angioscopy | virtual endoscopy of the vascular system |
Virtual bronchoscopy | virtual endoscopy of the bronchial tree |
Virtual colonoscopy | virtual endoscopy of the colon |
Virtual ventriculoscopy | virtual endoscopy of the ventricular system |
In patent documents, the following words/expressions are often used as synonyms:
- "virtual fly through navigation", "virtual navigation", "virtual flight", "virtual fly-through" and "virtual walk-through"
This place covers:
Means or steps for generating 3D mixed reality, i.e. displaying 3D virtual model data together with 2D or 3D real-world image data or for displaying 2D virtual model data together with 3D real-world image data, e.g. real volume data.
3D mixed reality encompasses 3D augmented reality and 3D augmented virtuality.
This place does not cover:
Object pose determination, tracking or camera calibration for mixed reality | |
Mixed reality by combining 2D virtual models or text with 2D real image data |
Attention is drawn to the following places, which may be of interest for search:
Head-up displays, head mounted displays | |
With head-mounted left-right displays | |
Volumetric display, i.e. systems where the image is distributed throughout a volume |
This place covers:
Means or steps for changing the visual appearance of the 3D object or parts thereof or for changing the position of the 3D object or parts thereof in the visualization environment.
Shape modifications of the 3D object, e.g. adding or deleting parts of the 3D object, shearing, free-form deformations.
Colour modifications, e.g. colour coding, use of pseudo-colour, highlighting object parts in a different colour.
Modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style (e.g. stroke width and pattern), displaying more details of the object or of parts thereof in a separate window.
Shifting objects or parts thereof, aligning objects, rotating parts of the object or model, Euclidean transformations, size changes of the object or parts thereof.
Assembling and disassembling of object parts, connecting or mating different 3D parts.
This place does not cover:
Geometric transformations of the whole 3D object to change the viewpoint but without rendering details |
Attention is drawn to the following places, which may be of interest for search:
Geometric image transforms in the image plane | |
Colour changes in 2D images | |
Editing of 2D images | |
Time-related zooming on 3D objects | |
Time-related zooming on 2D images |
For the documents in the group G06T 19/20 the following additional symbols from the Indexing Code series G06T 2219/20 and below are especially relevant. To each document classified in G06T 19/20 at least one of the following symbols should be allocated:
For aligning objects, relative positioning of parts: G06T 2219/2004
For assembling, disassembling: G06T 2219/2008
For colour coding, editing, changing, or manipulating, pseudo-colours, highlighting: G06T 2219/2012
For rotation, translation, scaling: G06T 2219/2016
For shape modifications, adding or deleting parts, shearing, free-form deformations: G06T 2219/2021
For modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style: G06T 2219/2024
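The allocation rule above can be summarised as a simple lookup. The following sketch is purely illustrative (the dictionary keys are informal labels of my own, not official CPC titles; only the Indexing Code values are taken from the list above):

```python
# Hypothetical helper, not part of any official CPC tooling.
# Maps informal manipulation-operation labels to the G06T 2219/20xx
# Indexing Codes listed above for documents classified in G06T 19/20.
MANIPULATION_CODES = {
    "align/position parts":             "G06T 2219/2004",
    "assemble/disassemble":             "G06T 2219/2008",
    "colour coding/editing/highlight":  "G06T 2219/2012",
    "rotate/translate/scale":           "G06T 2219/2016",
    "shape modification/deformation":   "G06T 2219/2021",
    "display style change":             "G06T 2219/2024",
}

def codes_for(operations):
    """Each G06T 19/20 document should receive at least one of these codes."""
    return sorted(MANIPULATION_CODES[op] for op in operations)

print(codes_for(["rotate/translate/scale", "assemble/disassemble"]))
# -> ['G06T 2219/2008', 'G06T 2219/2016']
```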
In this place, the following terms or expressions are used with the meaning indicated:
DDM | Direct deformation method |
This place covers:
Indexing Codes that relate to
- the modality with which the processed image was acquired
- special algorithmic details, also in the sense of further breakdown of groups
- the imaged subject or the context of the image processing
Whenever classifying in G06T 5/00 or G06T 7/00, additional information should be classified using one or more of the Indexing Codes from the range of G06T 2207/00. The use of these Indexing Codes is obligatory.
For Image acquisition modality, see Indexing Code G06T 2207/10.
For Special algorithmic details, see Indexing Code G06T 2207/20.
For Subject of image; Context of image processing, see Indexing Code G06T 2207/30.
For example, the Indexing Codes would be used to classify that a model-based segmentation (G06T 7/12 and G06T 7/149) using an active shape model (G06T 2207/20124) is performed on a CT image (G06T 2207/10081) of the heart (G06T 2207/30048). They would likewise be used to classify that extrinsic camera parameters (G06T 7/80) are determined for an infrared camera (G06T 2207/10048) mounted on a car and facing the exterior of the car (G06T 2207/30252), wherein multiresolution image processing is used (G06T 2207/20016).
As a basic principle, the Indexing Codes from G06T 2207/00 are applicable only in connection with G06T 5/00 and G06T 7/00.
However, not all Indexing Codes are applicable over the whole range of G06T 5/00 and G06T 7/00. The following restrictions apply:
- The Indexing Codes in the range G06T 2207/20116 - G06T 2207/20168 are applicable only together with G06T 7/10 and subgroups.
- The Indexing Codes in the range G06T 2207/20182 - G06T 2207/20204 are applicable only together with G06T 5/00 and subgroups.
- The Indexing Code G06T 2207/20228 is applicable only together with G06T 7/97.
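The basic principle and the three restrictions above can be expressed as a small validation routine. This is a hedged, illustrative sketch only (the function and data structures are hypothetical and not part of any official CPC tooling; the symbol strings are the real CPC symbols stated above, and subgroup membership is approximated by simple string prefixes):

```python
def _num(code):
    # "G06T 2207/20116" -> 20116 (numeric part after the slash)
    return int(code.split("/")[1])

def check_indexing_codes(invention, indexing):
    """Return descriptions of violated allocation restrictions
    for G06T 2207/* Indexing Codes, given the invention-information
    symbols allocated to the same document."""
    violations = []
    # Basic principle: 2207 codes only together with G06T 5/00 or G06T 7/00
    has_5_or_7 = any(s.startswith(("G06T 5/", "G06T 7/")) for s in invention)
    # Simplification: subgroups of G06T 7/10 all start with "G06T 7/1"
    has_7_10 = any(s.startswith("G06T 7/1") for s in invention)
    has_5 = any(s.startswith("G06T 5/") for s in invention)
    for code in sorted(indexing):
        if not code.startswith("G06T 2207/"):
            continue
        if not has_5_or_7:
            violations.append(code + " requires G06T 5/00 or G06T 7/00")
        if 20116 <= _num(code) <= 20168 and not has_7_10:
            violations.append(code + " requires G06T 7/10 or a subgroup")
        if 20182 <= _num(code) <= 20204 and not has_5:
            violations.append(code + " requires G06T 5/00 or a subgroup")
        if code == "G06T 2207/20228" and "G06T 7/97" not in invention:
            violations.append(code + " requires G06T 7/97")
    return violations

# The worked example above: active-shape-model segmentation (G06T 2207/20124)
# on a CT image (G06T 2207/10081) of the heart (G06T 2207/30048)
print(check_indexing_codes(
    {"G06T 7/12", "G06T 7/149"},
    {"G06T 2207/20124", "G06T 2207/10081", "G06T 2207/30048"}))
# -> []  (a valid combination)
```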
The following Indexing Codes are only used as nodes to build the classification hierarchy and should not contain any documents, i.e. only their subgroups are used for classification:
Moreover, the following Indexing Code is considered redundant in the context of image processing and is, thus, not used for classification:
This place covers:
Stereo images - image acquisition by two cameras, or by a single camera that is displaced, to acquire at least one stereo image pair | |
Color image - image acquisition by color or multichannel camera; only to be used when the color aspect is also of some importance in the processing | |
Range image; Depth image; 3D point clouds - range image, depth image, surface image, i.e. 2D image providing depth information; 3D point clouds | |
Satellite or aerial image; Remote sensing - satellite or aerial imaging; space-based; remote sensing; Fernerkundung (German expression) | |
Multispectral image; Hyperspectral image - multispectral or hyperspectral radiometers in satellite or aerial imaging | |
Endoscopic image - image acquisition by endoscopic instrument, e.g. ultrasound catheter, colonoscope, video endoscope, capsule/pill endoscope | |
Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities - image acquisition by hybrid tomographic scanner, i.e. by system that combines different tomographic modalities | |
Digital tomosynthesis [DTS] - image from digital tomosynthesis [DTS], i.e. limited angle reconstruction based on radiographies | |
Digitally reconstructed radiograph [DRR] - DRR reconstructed from 3D tomographic data | |
Scintigraphy - image acquisition by scintigraphy or gamma camera | |
Varying exposure - acquisition of multiple images with varying exposure parameters | |
Varying focus - modification of focus during acquisition of single image or of multiple images | |
Varying illumination - acquisition of multiple images with varying illumination conditions |
This place covers:
Globally adaptive - processing of whole image with the same parameters, e.g. the same filter weights, but parameters may vary from image to image | |
Locally adaptive - processing of the image in a locally differing manner; also covers limiting the processing to a ROI | |
Training; Learning - training or learning, e.g. of background for motion analysis or of model or atlas for segmentation | |
Interactive definition of curve of interest - involving interactive definition of a non-closed curve of interest; for closed curves, see G06T 2207/20104 | |
Interactive definition of region of interest [ROI] - involving interactive definition of ROI; setting of closed curve or box | |
Image cropping - cutting out, cropping, i.e. defining automatically a ROI of simple shape, e.g. rectangular, circular, usually for limiting the further processing to the ROI; this place does not cover manual definition of the ROI: G06T 2207/20104 | |
Automatic seed setting - automatic setting of seed, e.g. based on statistics of a region of interest, usually for subsequent region-growing or for edge-growing/following; this place does not cover manual seed-setting: G06T 2207/20101 | |
Salient point detection; Corner detection - detection of salient points, e.g. corners, T-junctions, end points; this place does not cover automatic seed setting: G06T 2207/20156; salient points for pattern recognition: G06F 18/00 | |
Motion blur correction - correcting motion blur in still image or video | |
High dynamic range [HDR] image processing - High Dynamic Range Imaging [HDR or HDRI] from a series of conventional images of lower dynamic range | |
Image averaging - averaging of multiple images | |
Image fusion; Image merging - image fusion, i.e. merging of images of same subject | |
Image subtraction - subtraction of images of same subject, e.g. temporal subtraction, subtraction of images with varying illumination conditions or for masking out certain pre-segmented image parts |
This place covers:
Catheter; Guide wire - subject of image: catheter, endoscope or guide wire when imaged in biomedical image | |
Implant; Prosthesis - subject of image: implant or prosthesis; also non-synthetic transplants | |
Mammography; Breast - subject of image: mammography; breast, usage not limited to x-ray image | |
Plethysmography - measurement of possibly periodic volume/size/position changes, e.g. due to blood flow | |
Blood vessel; Artery; Vein; Vascular - subject of image: vascular structures, blood vessel, artery, vein, angiography | |
Masonry; Concrete - inspection of concrete or masonry in buildings, dams, bridges, etc. | |
Printing quality - inspection of printed product | |
Solder - inspection of solder, electrical contacts | |
Workpiece; Machine component - inspection of workpiece, e.g. machine component; Werkstück (German expression) | |
Centreline of tubular or elongated structure - determining the centreline of a tubular or elongated structure, e.g. of a lumen, vessel, colon, pipe | |
Document - enhancement or analysis of document image; this place does not cover document recognition: G06F 18/00, G06V | |
Earth observation - earth observation with image from remote sensing | |
Infrastructure - observation of infrastructure, e.g. urban infrastructure, roads, railway, water channel, power transmission line | |
Vegetation; Agriculture - observation of vegetation areas, e.g. agriculture | |
Weather; Meteorology - weather, meteorology, climate | |
Marker - subject of image: artificial marker or symbol in image, e.g. used for calibration, registration or tracking | |
Military - military application, e.g. target tracking | |
Redeye defect - redeye defect detection and correction | |
Surveillance - application in video surveillance | |
Traffic on road, railway or crossing - subject of image: traffic on road, railway, crossing, square | |
Trajectory - determination of trajectory, track, trace | |
Camera pose - determination of camera pose, as opposed to the determination of the pose of image content | |
Vehicle exterior or interior - imaging with a camera placed on a vehicle, e.g. car, train, plane, boat or mobile robot | |
Vehicle exterior; Vicinity of vehicle - subject of image: exterior of a vehicle; imaging from a vehicle | |
Lane; Road marking - subject of image: lane, road marking, railroad, pathway | |
Obstacle - subject of image: obstacle, e.g. pedestrian, other vehicle | |
Parking - imaging from a vehicle, e.g. for parking aid | |
Vehicle interior - subject of image: interior of a vehicle |