CPC Definition - Subclass G06T

Last Updated Version: 2024.01
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
Definition statement

This place covers:

  • Processor architectures or memory management for general purpose image data processing
  • Geometric image transformations
  • Image enhancement or restoration
  • Image analysis
  • Image coding
  • Two-dimensional image generation
  • Animation
  • Three-dimensional image rendering
  • Three-dimensional modelling for computer graphics
  • Manipulating three-dimensional models or images for computer graphics
Relationships with other classification places

G06T is the functional place for image data processing or generation. Image data processing or generation specially adapted for a particular application is classified in the relevant subclass. Documents which merely mention the general use of image processing or generation, without giving details of the underlying processing, are classified in the application place. Where the essential technical characteristics of an invention relate both to the image processing or generation and to its particular use or special adaptation, classification is made in both G06T and the application place.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Apparatus for radiation diagnosis

A61B 6/00

Aspects of games using an electronically generated display having two or more dimensions

A63F 13/00

Measuring, by optical means, length, thickness or similar linear dimensions, angles, areas, irregularities of surfaces or contours

G01B 11/00

Reading or recognising printed or written characters or recognising patterns, e.g. fingerprints

G06F 18/00, G06V

Coding, decoding or code conversion

H03M 13/00

Pictorial communication, television systems

H04N 1/00 - H04N 21/00

Special rules of classification

Symbols under G06T 1/00 - G06T 19/20 may only be allocated as invention information.

Whenever possible, additional information should be classified using one or more of the Indexing Codes from the range of G06T.

The indexing codes under G06T 2200/00 - G06T 2219/2024 may only be allocated to documents to which a symbol under G06T 1/00 - G06T 19/20 is allocated as invention information as well.

The following symbols from the series G06T 2200/00 are for allocation to documents within the whole range of G06T except G06T 9/00:

G06T 2200/00

Indexing scheme for image data processing or generation, in general - Not used for classification

G06T 2200/04

involving 3D image data - processing of 3D image data, i.e. voxels; relevant for G06T 3/00, G06T 5/00, G06T 7/00 or G06T 11/00

G06T 2200/08

involving all processing steps from image acquisition to 3D model generation - complete systems from acquisition to modelling

G06T 2200/12

involving antialiasing - dejagging, staircase effect

G06T 2200/16

involving adaptation to the client's capabilities - adapting the colour or resolution of an image to the client's capabilities

G06T 2200/21

involving computational photography

G06T 2200/24

involving graphical user interfaces [GUIs]

G06T 2200/28

involving image processing hardware - relevant for groups not directly related to hardware; not used in G06T 1/20, G06T 1/60, G06T 15/005

G06T 2200/32

involving image mosaicing - image mosaicing, panoramic images

G06T 2200/36

Review paper; Tutorial; Survey - basic documents describing the state of the art.

There are further series of symbols for G06T whose use is reserved to particular main groups or ranges of main groups and whose full list and description are given in the FCRs of the respective main groups:

G06T 2210/00 for G06T 11/00 - G06T 19/00 only; see list below

Symbols from the series G06T 2210/00 for allocation in the range of G06T 11/00 - G06T 19/00 only:

G06T 2210/00

Indexing scheme for image generation or computer graphics - Not used for classification

G06T 2210/04

architectural design, interior design - interior/garden/facade design, architectural layout plans

G06T 2210/08

bandwidth reduction

G06T 2210/12

bounding box - convex hull for polygons or 3D objects

G06T 2210/16

cloth - animation, rendering or modeling of cloth/garment/textile, virtual dressing rooms

G06T 2210/21

collision detection, intersection - intersection/collision detection of 3D objects

G06T 2210/22

cropping - cropping of image borders

G06T 2210/24

fluid dynamics - animation, rendering or modelling of fluid flows

G06T 2210/28

force feedback - virtual force

G06T 2210/32

image data format - conversion between different image or graphics formats

G06T 2210/36

level of detail - level of detail, also for textures (e.g. mip-mapping)

G06T 2210/41

medical - medical applications concerning e.g. heart, lung, brain, tumors

G06T 2210/44

morphing - morphing or warping

G06T 2210/52

parallel processing

G06T 2210/56

particle system, point based geometry or rendering - rendering and animation of particle systems (e.g. fireworks, dust, clouds), point clouds, splatting

G06T 2210/61

scene description - scene graphs, scene description languages, e.g. VRML

G06T 2210/62

semi-transparency - screen-door effect, change of transparency values

G06T 2210/64

weathering - weathering effects like e.g. aging, corrosion

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

2D

Two-dimensional

3D

Three-dimensional

4D

Four-dimensional, i.e. 3D plus time

CAD

Computer-Aided Design (in computer graphics); Computer-Aided Detection (in image analysis)

MR

Magnetic Resonance (in image analysis); Mixed Reality (in computer graphics)

Stereo

Treatment of the images of exactly two cameras in a pairwise manner

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

ANN

Artificial Neural Network

AR

Augmented Reality

CT

Computed Tomography

DCE-MRI

Dynamic Contrast-Enhanced Magnetic Resonance Imaging

DCT

Discrete Cosine Transform

DRR

Digitally Reconstructed Radiograph

DTS

Digital Tomosynthesis

GUI

Graphical User Interface

IC

Integrated Circuit

ICP

Iterative Closest Point

LCD

Liquid Crystal Display

MRF

Markov Random Field

MRI

Magnetic Resonance Imaging

PCB

Printed Circuit Board

RGB

Red, Green, Blue

ROI

Region of Interest

SLAM

Simultaneous Localisation And Mapping

SNR

Signal-to-Noise Ratio

SPECT

Single Photon Emission Computed Tomography

US

Ultrasound

VOI

Volume of Interest

VR

Virtual Reality

General purpose image data processing
Definition statement

This place covers:

General purpose image data processing systems and methods.

Special rules of classification

The IPC group G06T 1/40 is not used. The corresponding documents are classified in G06T 1/20.

{Image acquisition}
Definition statement

This place covers:

Capturing or storing images from or to memory

References
Limiting references

This place does not cover:

Scanning, transmission or reproduction of documents or the like

H04N 1/00

Television cameras

H04N 23/00

{Image feed-back for automatic industrial control, e.g. robot with camera (robots B25J 19/023)}
Definition statement

This place covers:

  • Machine vision or tool control
  • Image feedback for robot navigation or walking
  • 3D vision systems.
References
Limiting references

This place does not cover:

Vision controlled manipulators

B25J 9/1697

Accessories fitted to manipulators including video camera means

B25J 19/023

Control of position, course, altitude or attitude of land, water, air or space vehicles using means capturing signals occurring naturally from the environment for determining position or orientation

G05D 1/243

{Image watermarking}
Definition statement

This place covers:

  • Image watermarking in general.
  • Applications or software packages for watermarking.

Illustrative example - Hiding a digital image (message) into another digital image (carrier) (US6094483 - UNIV NEW YORK STATE RES FOUND):

media0.png
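
As a purely illustrative sketch (not the scheme of the cited patent), the following Python/NumPy fragment hides a binary message image in the least-significant bits of a greyscale carrier and recovers it again; the function names are hypothetical:

    import numpy as np

    def embed_lsb(carrier: np.ndarray, message: np.ndarray) -> np.ndarray:
        """carrier: uint8 greyscale image; message: binary image of the same shape."""
        return (carrier & 0xFE) | (message.astype(np.uint8) & 1)

    def extract_lsb(watermarked: np.ndarray) -> np.ndarray:
        return watermarked & 1

    carrier = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    message = np.random.rand(64, 64) > 0.5
    stego = embed_lsb(carrier, message)
    assert np.array_equal(extract_lsb(stego), message.astype(np.uint8))

Such plain LSB embedding is imperceptible but fragile; the subgroups below distinguish adaptive, fragile, robust and transform-domain variants.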

References
Limiting references

This place does not cover:

Testing specially adapted to determine the identity or genuineness of paper currency or similar valuable papers

G07D 7/1205

Audio watermarking

G10L 19/018

Arrangements for secret or secure communication using encryption of data

H04L 9/06

Arrangements for secret or secure communication using electronic signatures

H04L 9/3247

Informative references

Attention is drawn to the following places, which may be of interest for search:

Security arrangements for protecting computers or computer systems against unauthorised activity

G06F 21/00

Circuits for prevention of unauthorised reproduction or copying

G11B 20/00086

Scanning, transmission or reproduction of documents involving image watermarking

H04N 1/32144

{Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking}
Definition statement

This place covers:

  • Adaptations based on Human Visual System [HVS].
  • Perceptual masking.
  • Preservation of image quality; Distortion minimization.
  • Methods to measure quality of watermarked images.
  • Measuring the balance between quality and robustness, i.e., fixed robustness, adapting quality, or vice versa.

Illustrative example - Changing a portion of an image based on an embedding strength map (EP1170938 - HITACHI LTD):

media1.jpg

{Output size adaptive watermarking}
Definition statement

This place covers:

  • Embedding without modifying the size of input.
  • Embedding or modifying the watermark directly in a coded image or video stream, without decoding first.
{Fragile watermarking, e.g. so as to detect tampering}
Definition statement

This place covers:

  • Birthday attacks.
  • Forgery.

Illustrative example - Changing pixels at selected positions according to a replacement table (WO2011021114 - NDS LIMITED):

media2.png

{Robust watermarking, e.g. average attack or collusion attack resistant}
Definition statement

This place covers:

  • Resistance; Resistance to attacks or distortions; Distortion compensation.
  • Strength.
  • Collusion attacks; Average attacks; Averaging.
  • Reliable detection, e.g. with reduced likelihood of false positive/negative.

Illustrative example - Watermarking an image using the difference of average intensity of two adjacent blocks (EP1927948 - FUJITSU LTD):

media3.png

{Compression invariant watermarking}
Definition statement

This place covers:

Watermarking techniques for JPEG or MPEG or for a wavelet transformed image.

Illustrative example - Embedding a watermark in the DC component region of a wavelet-transformed image (US2004047489 - KOREA ELECTRONICS TELECOMM):

media4.png

{Geometric transform invariant watermarking, e.g. affine transform invariant}
Definition statement

This place covers:

  • Robust against resizing or rotation or cropping, etc.
  • Determining the rescaling factor or rotation angle by using the watermarks so as to compensate the image, i.e. as a calibration signal.
  • Desynchronization attacks.

Illustrative example - Combining a reference mark with an identification mark and embedding them in image textures to detect the applied transformations (GB2378602 - CENTRAL RESEARCH LAB LTD):

media5.png

{using multiple or alternating watermarks}
Definition statement

This place covers:

  • Many, possibly different, watermarks on the same image, e.g. for copy or distribution control.
  • Same watermark repeated on different parts of the image.

Illustrative example - Encoding payload in relative positions and/or polarities of multiple embedded watermarks (WO0111563 - KONINKL PHILIPS ELECTRONICS NV):

media6.png

{using multiple thresholds}
Definition statement

This place covers:

Using thresholds to define ranges of detection probability or ranges of robustness.

Illustrative example - Multiple thresholds for reducing false detection likelihood

(EP1271401 - SONY UK LTD):

media7.png

{Time domain based watermarking, e.g. watermarks spread over several images}
Definition statement

This place covers:

Watermarks spread over several images or frames or a sequence.

Illustrative example - Alternating watermark patterns (e.g. by translation, mirror, rotation) to improve the reliability of scale factor measurement (WO2005109338 - KONINKL PHILIPS ELECTRONICS NV):

media8.png

{Payload characteristic determination in a watermarking scheme, e.g. number of bits to be embedded}
Definition statement

This place covers:

Illustrative example - Calculating the capacity of the DCT coefficients of a digital image file and selecting the ones suited for embedding, thereby providing robustness (US6724913 - HSU WEN-HSING):

media9.png

Processor architectures; Processor configuration, e.g. pipelining
Definition statement

This place covers:

  • Graphics accelerators; Graphic processing units (GPUs).
  • Graphics pipelines.
  • Parallel or massively parallel data bus specially adapted for image data processing.
  • Architecture or signal processor specially adapted for image data processing.
  • VLSI or SIMD or fine-grained machines specially adapted for image data processing.
  • Multiprocessor or multicomputer or multi-core specially adapted for image data processing.

Illustrative example - Ring architecture for image data processing:

media10.jpg

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Architectures of general purpose stored program computers

G06F 15/76

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Pipelining

the use of a sequence (pipeline) of image processing stages for execution of instructions in a series of units, arranged so that several units can be used for simultaneously processing appropriate parts of several instructions.

Multiprocessor

processor arrangements comprising a computer system consisting of two or more processors for the simultaneous execution of two or more programs or sequences of instructions.

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

GPU

Graphics Processing Unit

Memory management
Definition statement

This place covers:

  • Address generation or addressing circuit or BitBlt for image data processing.
  • 3D or virtual or cache memory specially adapted for image data processing.
  • Frame or screen or image memory specially adapted for image data processing.

Illustrative example - Cache memory for image processing (EP0589724 - QUANTEL LTD):

media11.jpg

References
Limiting references

This place does not cover:

Accessing, addressing or allocating within memory systems or architectures

G06F 12/00

Ping-pong buffers

G09G 5/399

Arrangements for selecting an address in a digital store

G11C 8/00

Digital stores characterised by the use of particular electric or magnetic storage elements

G11C 11/00

Geometric image transformations in the plane of the image
Definition statement

This place covers:

Geometric image transformations in the plane of the image.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric transformations for image enhancements

G06T 5/00

Image animations

G06T 13/00

Geometric effects for 3D image rendering

G06T 15/10

Perspective computation for 3D image rendering

G06T 15/20

Geographic models

G06T 17/05

Matrix or vector computation

G06F 17/16

Conversion of standard for television systems

H04N 7/01

Affine transformations (for image registration G06T 3/147; for image mosaicing G06T 3/4038)
Definition statement

This place covers:

Scaling and rotation.

References
Limiting references

This place does not cover:

Affine transformations for image registration, e.g. elastic snapping

G06T 3/147

Affine transformations for image mosaicing

G06T 3/4038

Context-preserving transformations, e.g. by using an importance map (panospheric to cylindrical image transformations G06T 3/12)
Definition statement

This place covers:

  • Selective warping according to an importance map; Smart image reduction.
  • Seam carving; Liquid resizing; Image retargeting.

media12.jpg

References
Limiting references

This place does not cover:

Panospheric to cylindrical image transformation

G06T 3/12

Fisheye or wide-angle transformations
Definition statement

This place covers:

Establishing a lens effect for a region of interest.

Illustrative example:

media13.jpg

Detail-in-context presentations (fisheye or wide-angle transformations G06T 3/047)
Definition statement

This place covers:

  • Side or corner panels; Perspective wall.
  • Document lens.

Illustrative example:

media14.jpg

References
Limiting references

This place does not cover:

Fisheye, wide-angle transformation

G06T 3/047

Topological mapping of higher dimensional structures onto lower dimensional surfaces
Definition statement

This place covers:

Flattening the scanned image of a bound book.

Illustrative example:

media15.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Panospheric to cylindrical image transformation

G06T 3/12

Texture mapping

G06T 15/04

Manipulating 3D models or images for computer graphics

G06T 19/00

Reshaping or unfolding 3D tree structures onto 2D planes
Definition statement

This place covers:

Curved planar reformation [CPR].

Transforming surfaces of revolution to planar images, e.g. cylindrical surfaces to planar images
Definition statement

This place covers:

Mapping a surface of revolution to a plane, e.g. mapping a pot or a can to a plane.

Illustrative example:

media16.png

Projecting images onto non-planar surfaces, e.g. geodetic screens
Definition statement

This place covers:

  • Geometric image transformation for projecting an image on a multi-projectors system or on a geodetic screen; Dome imaging.
  • Geometric image transformation for projecting an image through multi-planar displays.
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Texture mapping

G06T 15/04

Selection of transformation methods according to the characteristics of the input images
Definition statement

This place covers:

  • Selection of the interpolation method depending on the scale factor.
  • Selection of the interpolation method depending on media type or image appearance characteristics.

Illustrative example:

media17.png

Panospheric to cylindrical image transformations
Definition statement

This place covers:

  • Omnidirectional or hyperboloidal to cylindrical image transformation or mapping; Catadioptric transformation, e.g. images from surveillance cameras.
  • Panospheric image transformation or mapping by using the output of a multiple cameras system.

Illustrative example:

media18.png

Transformations for image registration, e.g. adjusting or mapping for alignment of images
Definition statement

This place covers:

Geometric image transformation for

  • iterative image registration
  • spline-based image registration
  • mutual-information-based registration
  • phase correlation or FFT-based methods
  • using fiducial points, e.g. landmarks
  • maximised mutual information-based methods
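
For the phase-correlation approach listed above, a minimal NumPy sketch (assuming a pure translation between two equally sized images; names are illustrative) estimates the shift from the peak of the inverse FFT of the normalised cross-power spectrum:

    import numpy as np

    def phase_correlation_shift(img_a, img_b):
        """Estimate the integer translation mapping img_b onto img_a."""
        Fa = np.fft.fft2(img_a)
        Fb = np.fft.fft2(img_b)
        cross_power = Fa * np.conj(Fb)
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks beyond half the image size correspond to negative shifts.
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
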
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Determination of transform parameters for the alignment of images, i.e. image registration

G06T 7/30

using elastic snapping
Definition statement

This place covers:

  • Elastic mapping or snapping or matching; Deformable mapping.
  • Diffeomorphic representations of deformations to control the image registration process.

Illustrative example:

media19.png

media20.png

Spatio-temporal transformations, e.g. video cubism
Definition statement

This place covers:

  • Video cubism; Video cube.
  • Dynamic panoramic video.
  • Stylized video cubes.
Image warping, e.g. rearranging pixels individually
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Image animation

G06T 13/00

Scaling of whole images or parts thereof, e.g. expanding or contracting
Definition statement

This place covers:

  • Resampling; Resolution conversion.
  • Zooming or expanding or magnifying or enlarging or upscaling.
  • Shrinking or reducing or compressing or downscaling.
  • Pyramidal partitions; Storing sub-sampled copies.
  • Area based or weighted interpolation; Scaling by surface fitting, e.g. piecewise polynomial surfaces or B-splines or Beta-splines.
  • Two-steps image scaling, e.g. by stretching.
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Polynomial surface description for image modeling

G06T 17/30

Scanning, transmission or reproduction of documents involving modification of image resolution

H04N 1/393; H04N 1/3875; H04N 1/40068

Studio circuits for television systems involving alteration of picture size or orientation

H04N 5/2628

Frame rate conversion; De-interlacing

H04N 7/01

based on interpolation, e.g. bilinear interpolation (image demosaicing G06T 3/4015; edge-driven or edge-based scaling G06T 3/403)
Definition statement

This place covers:

  • Linear or bi-linear or tetrahedral or cubic image interpolation.
  • Adaptive interpolation, e.g. the coefficients of the interpolation depend on the pattern of the local structure.

Illustrative example - Third order spline interpolation:

media21.png
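
A minimal NumPy sketch of plain (non-adaptive) bilinear interpolation for resampling a greyscale image to an arbitrary size; variable names are illustrative only:

    import numpy as np

    def scale_bilinear(img, out_h, out_w):
        in_h, in_w = img.shape
        ys = np.linspace(0, in_h - 1, out_h)            # output grid in input coordinates
        xs = np.linspace(0, in_w - 1, out_w)
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
        wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
        top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
        bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
        return (1 - wy) * top + wy * bottom             # blend the four neighbours

Adaptive variants make the interpolation weights depend on the local structure, e.g. on edge direction.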

References
Limiting references

This place does not cover:

Demosaicing, e.g. colour filter array [CFA], Bayer pattern

G06T 3/4015

Edge-driven scaling

G06T 3/403

Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
Definition statement

This place covers:

  • CFA demosaicing or demosaicking or interpolating.
  • Bayer pattern.
  • Colour-separated images, i.e. one colour in each image quadrant.

Illustrative examples - Image demosaicing:

media22.png

  • Colour-separated image:

media23.png
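
A minimal sketch of bilinear demosaicing for an RGGB Bayer layout (the layout is an assumption; other colour filter arrays only change the masks), using SciPy's 2-D convolution:

    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear_rggb(raw):
        """raw: single-channel mosaic, R at (0,0), G at (0,1)/(1,0), B at (1,1)."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # green has density 1/2
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue have density 1/4
        r = convolve2d(raw * r_mask, k_rb, mode='same')
        g = convolve2d(raw * g_mask, k_g, mode='same')
        b = convolve2d(raw * b_mask, k_rb, mode='same')
        return np.dstack([r, g, b])                                # H x W x 3 estimate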

based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
Definition statement

This place covers:

  • Pixel or row deletion or removal.
  • Pixel or row insertion or duplication or replication.
  • Decimating FIR filters.
  • Array indexes or tables, e.g. LUT.

Illustrative example - Decimating by using two arrays of indexes:

media24.png

Edge-driven scaling; Edge-based scaling
Definition statement

This place covers:

  • Edge adaptive or directed or dependent or following or preserving interpolation; Edge preservation.
  • Edge map injecting or projecting or combining or superimposing.

Illustrative example - Correcting for abnormalities next to boundaries:

media25.png

Image mosaicing, e.g. composing plane images from plane sub-images
Definition statement

This place covers:

  • Image mosaicing or mosaicking.
  • Panorama views.
  • Mosaic of video sequences; Salient video still; Video collage or synopsis.

Illustrative example - Image mosaicing for microscopy applications:

media26.jpg

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Image processing arrangements associated with discharge tubes with provision for introducing objects or material to be exposed to the discharge

H01J 37/222

using neural networks
Definition statement

This place covers:

  • Using neural networks specially adapted for image interpolation.
  • Using neural networks specially adapted for interpolation coefficient selection.

Illustrative example - Using a neural network to select the coefficients of a polynomial interpolation:

media27.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Computing arrangements using neural networks

G06N 3/02

based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Definition statement

This place covers:

  • Super resolution by fitting the pixel intensity to a mathematical function.
  • Super resolution from image sequences; Images or frames addition or coaddition or combination.
  • Super resolution by iteratively applying constraints, e.g. energy reduction, on the transform domain and inverse transforming.

Illustrative example - fitting a mathematical function and resampling:

media28.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Image enhancement or restoration using two or more images, e.g. averaging or subtraction

G06T 5/50

by injecting details from different spectral ranges
Definition statement

This place covers:

Multisensor or multiband images fusion.

by subpixel displacements
Definition statement

This place covers:

Illustrative example of subject matter covered in this group - Displaying sub-frames at spatially offset positions:

media29.png

using the original low-resolution images to iteratively correct the high-resolution images
Definition statement

This place covers:

Illustrative example of subject matter classified in this group - Iterative correction of the high-resolution image:

media30.png
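
A minimal sketch of the iterative back-projection idea; for brevity it assumes a single low-resolution input and a simple box-average imaging model, whereas practical schemes use several sub-pixel-shifted frames and the true point-spread function:

    import numpy as np

    def downsample(img, f):
        """Box-average downsampling by an integer factor f (assumed imaging model)."""
        h, w = img.shape
        return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

    def upsample(err, f):
        return np.kron(err, np.ones((f, f)))             # spread each value over an f x f block

    def iterative_back_projection(lr, f=2, iterations=20, step=1.0):
        hr = upsample(lr, f)                             # initial high-resolution estimate
        for _ in range(iterations):
            simulated_lr = downsample(hr, f)             # re-apply the imaging model
            hr += step * upsample(lr - simulated_lr, f)  # back-project the residual
        return hr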

in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
Definition statement

This place covers:

  • DCT coefficients decimation or insertion for image scaling.
  • Zero padding DCT coefficients for image scaling.
  • Downscaling by selecting a specific Wavelet subband.

Illustrative example - Enlargement / reduction through DCT interpolation / decimation:

media31.png
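
A minimal sketch of scaling in the DCT domain by zero-padding (enlargement) or truncating (reduction) the coefficient block; the amplitude factor assumes the orthonormal DCT provided by scipy.fft:

    import numpy as np
    from scipy.fft import dctn, idctn

    def scale_dct(img, out_h, out_w):
        in_h, in_w = img.shape
        coeffs = dctn(img, norm='ortho')
        out = np.zeros((out_h, out_w))
        h, w = min(in_h, out_h), min(in_w, out_w)
        out[:h, :w] = coeffs[:h, :w]                     # zero-pad or truncate coefficients
        gain = np.sqrt((out_h * out_w) / (in_h * in_w))  # keep the signal amplitude
        return idctn(out * gain, norm='ortho')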

Image resolution transcoding, e.g. by using client-server architectures
Definition statement

This place covers:

Adapting the image resolution to the client's capabilities.

Illustrative example - A processing unit coupled downstream of a video cross-point switcher generates additionally scaled video streams by applying further scaling to an initially scaled video stream:

media32.jpg

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Server components or server architectures for processing of video elementary streams by altering the spatial resolution.

H04N 21/234363; H04N 21/2356; H04N 21/4356; H04N 21/440263

Rotation of whole images or parts thereof
Definition statement

This place covers:

  • Transpose or continuous write-transpose-read.
  • Mirror.
  • Run-length [RL] rotation.
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Scanning, transmission or reproduction of documents involving image rotation.

H04N 1/3877

Studio circuits for television systems involving alteration of picture size or orientation.

H04N 5/2628

by block rotation, e.g. by recursive reversal or rotation
Definition statement

This place covers:

Illustrative example - Rotation by recursive reversing:

media33.png

by memory addressing or mapping
Definition statement

This place covers:

Illustrative example - Continuous read-transpose-write:

media34.png

by skew deformation, e.g. two-pass or three-pass rotation
Definition statement

This place covers:

  • Shift processing
  • Rotation by shearing.

Illustrative example - Image rotation by two-pass de-skewing:

media35.png
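
The identity below (a standard shear decomposition, stated for reference rather than taken from any cited document) shows how a rotation by θ splits into two horizontal shears and one vertical shear, so that each pass only shifts whole rows or columns:

    \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    =
    \begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} 1 & 0 \\ \sin\theta & 1 \end{pmatrix}
    \begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}

Two-pass variants instead perform one horizontal and one vertical 1-D resampling pass, each combining a shear with a scale.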

Image enhancement or restoration
Definition statement

This place covers:

Image enhancement or restoration:

  • using non-spatial domain filtering
  • using local operators
  • using morphological operators, i.e. erosion or dilatation
  • using histogram techniques
  • using two or more images, e.g. averaging or subtraction
  • using machine learning, e.g. neural networks
  • Denoising; Smoothing
  • Deblurring; Sharpening
  • Unsharp masking
  • Retouching; Inpainting; Scratch removal
  • Geometric correction
  • Dynamic range modification of images or parts thereof
Relationships with other classification places

G06T 5/00 is the function place for image enhancement or restoration. Image enhancement or restoration specially adapted for a particular application is classified in the relevant application field, e.g. G06V or H04N.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Neural networks

G06N 3/02

Image preprocessing for image or video recognition or understanding

G06V 10/20

Image processing adapted to be used in scanners, printers, photocopying machines, displays or similar devices, including composing, repositioning or otherwise modifying originals

H04N 1/387

Picture signal circuits adapted to be used in scanners, printers, photocopying machines, displays or similar devices

H04N 1/40

Processing of colour picture signals in scanners, printers, photocopying machines, displays or similar devices

H04N 1/56

Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors

H04N 23/70

Camera processing pipelines in cameras or camera modules comprising electronic image sensors

H04N 23/80

Computational photography systems, e.g. light-field imaging systems

H04N 23/95

Noise processing, e.g. detecting, correcting, reducing or removing noise in circuitry of solid-state image sensors [SSIS]

H04N 25/60

Special rules of classification

This group focuses on image processing algorithms. Although such algorithms sometimes need to consider characteristics of the underlying image acquisition apparatus, inventions directed to the image acquisition apparatus per se are outside the scope of this group.

Whenever possible, additional information should be classified using one or more of the indexing codes from the ranges of G06T 2200/00 (see definitions re. G06T) or G06T 2207/00 (see definitions re. G06T 2207/00).

The classification symbol G06T 5/00 should be allocated to documents concerning:

  • Interactive / multiple choice image processing, e.g. choosing outputs from multiple enhancement algorithms
  • Image restoration based on properties or models of the human vision system [HVS]
Synonyms and Keywords

In patent documents, the following abbreviations are often used:

HDR

High dynamic range

HDRI

High dynamic range imaging

HMM

Hidden Markov model

PSF

Point spread function

SDR

Standard dynamic range

using non-spatial domain filtering
Definition statement

This place covers:

All transform domain-based enhancement methods, e.g. using:

  • Fourier transform, discrete Fourier transform [DFT] or fast Fourier transform [FFT] (add the indexing code G06T 2207/20056)
  • Hadamard transform
  • discrete cosine transform [DCT] (add the indexing code G06T 2207/20052)
  • Wavelet transform, discrete wavelet transform [DWT] (add the indexing code G06T 2207/20064)

Illustrative example:

media51.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine

H04N 5/253

Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors

H04N 23/70

Camera processing pipelines in cameras or camera modules comprising electronic image sensors

H04N 23/80

Noise processing, e.g. detecting, correcting, reducing, or removing noise in circuitry of solid-state image sensors [SSIS]

H04N 25/60

using local operators
Definition statement

This place covers:

  • Convolution with a mask or kernel in the spatial domain
  • High-pass filter, low-pass filter
  • Gauss filter, Laplace filter
  • Averaging filter, mean filter, blurring filter
  • Differential filters (e.g. Sobel operator)
  • Median filter (add the indexing code G06T 2207/20032)
  • Bilateral filter (add the indexing code G06T 2207/20028)
  • Minimum, maximum or rank filtering
  • Wiener filter
  • Phase-locked loops, detectors, mixers
  • Recursive filter
  • Distance transforms
  • Local image processing architectures

Illustrative example:

media52.png
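
A minimal sketch of spatial filtering with local operators, here a 3x3 averaging (mean) kernel and the Sobel gradient magnitude, using scipy.ndimage; the kernel sizes and border handling are illustrative choices:

    import numpy as np
    from scipy.ndimage import convolve

    def box_blur(img):
        kernel = np.ones((3, 3)) / 9.0                   # 3x3 mean filter
        return convolve(img.astype(float), kernel, mode='reflect')

    def sobel_magnitude(img):
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        gx = convolve(img.astype(float), kx, mode='reflect')    # horizontal gradient
        gy = convolve(img.astype(float), kx.T, mode='reflect')  # vertical gradient
        return np.hypot(gx, gy)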

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine

H04N 5/253

Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors

H04N 23/70

Camera processing pipelines in cameras or camera modules comprising electronic image sensors

H04N 23/80

Noise processing, e.g. detecting, correcting, reducing, or removing noise in circuitry of solid-state image sensors [SSIS]

H04N 25/60

Informative references

Attention is drawn to the following places, which may be of interest for search:

Applying local operators during image preprocessing for image or video recognition or understanding

G06V 10/36

Erosion or dilatation, e.g. thinning
Definition statement

This place covers:

All morphology-based operations for image enhancement, e.g.:

  • Thickening, thinning
  • Opening, closing
  • Erosion, dilation
  • Structuring elements
  • Skeletons
  • Geodesic transforms

Illustrative examples:

1.

media53.png

2.

media54.png
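
A minimal sketch of binary erosion and dilation with a 3x3 structuring element; opening and closing are simply the two operations chained (the structuring element is an illustrative choice):

    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    structure = np.ones((3, 3), dtype=bool)              # 3x3 structuring element

    def opening(mask):
        return binary_dilation(binary_erosion(mask, structure), structure)

    def closing(mask):
        return binary_erosion(binary_dilation(mask, structure), structure)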

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation or edge detection involving morphological operators

G06T 7/155

Smoothing or thinning of patterns during image preprocessing for image or video recognition or understanding

G06V 10/34

using histogram techniques
Definition statement

This place covers:

All histogram-based image enhancement methods

Illustrative example:

media55.png
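
A minimal NumPy sketch of one common histogram-based method, global histogram equalisation of an 8-bit greyscale image (many refinements exist, e.g. adaptive or contrast-limited variants):

    import numpy as np

    def equalise_histogram(img):
        """img: uint8 greyscale image with at least two grey levels."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[np.nonzero(cdf)[0][0]]             # count of the lowest occupied level
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
        return lut[img]                                  # map every pixel through the LUT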

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors

H04N 23/70

Camera processing pipelines in cameras or camera modules comprising electronic image sensors

H04N 23/80

Informative references

Attention is drawn to the following places, which may be of interest for search:

Dynamic range modification

G06T 5/90

Histogram techniques adapted to be used in scanners, printers, photocopying machines, displays or similar devices

H04N 1/4074

Equalising the characteristics of different image components, e.g. their average brightness or colour balance, in stereoscopic or multi-view video systems

H04N 13/133

using two or more images, e.g. averaging or subtraction
Definition statement

This place covers:

  • Image averaging (add the indexing code G06T 2207/20216)
  • Image fusion, image merging (add the indexing code G06T 2207/20221)
  • Image subtraction (add the indexing code G06T 2207/20224)
  • Enhancing the final image by combining multiple, e.g. degraded, images while maintaining the same number of pixels (for an increased number of pixels see G06T 3/40)
  • Full-field focus from multiple depth-of-field images, e.g. from confocal microscopy
  • Processing of synthetic aperture radar [SAR] images
  • Energy subtraction
  • Bright field, dark field processing
  • Angiography image processing
  • High dynamic range [HDR] image processing (add the indexing code G06T 2207/20208)
  • Multispectral image processing
  • Computational photography, e.g. coded aperture imaging (add the indexing code G06T 2200/21)

Illustrative example:

media56.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Circuitry for compensating brightness variation in the scene in cameras or camera modules comprising electronic image sensors

H04N 23/70

Camera processing pipelines in cameras or camera modules comprising electronic image sensors

H04N 23/80

Informative references

Attention is drawn to the following places, which may be of interest for search:

Scaling of whole images or parts thereof based on super-resolution

G06T 3/4053

Unsharp masking

G06T 5/75

Radar or analogous systems, specially adapted for mapping or imaging using synthetic aperture techniques

G01S 13/90

Spatial compounding in short-range sonar imaging systems

G01S 15/8995

Confocal scanning microscopes

G02B 21/0024

Computational photography systems, e.g. light-field imaging systems

H04N 23/95

using machine learning, e.g. neural networks
Definition statement

This place covers:

All machine learning-based image enhancement methods, e.g. using:

  • artificial neural networks [ANN] (add the indexing code G06T 2207/20084), convolutional neural networks [CNN], generative adversarial networks [GAN], deep learning
  • decision trees
  • support-vector machines
  • regression analysis
  • Bayesian networks
  • Gaussian processes
  • genetic algorithms

Illustrative example:

media209.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Neural networks

G06N 3/02

Learning methods

G06N 3/08

Machine learning

G06N 20/00

Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

G06V 10/82

Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks

G06V 10/84

Denoising; Smoothing
Definition statement

This place covers:

  • Removing noise from images
  • Temporal denoising, spatio-temporal noise filtering (add the indexing code G06T 2207/20182)
  • Removing pattern noise from images
  • Image smoothing
  • Image blurring, adding motion blur to images, adding blur to images
  • Edge-adaptive smoothing (add the indexing code G06T 2207/20192)
  • Smoothing of depth map in stereo or range images
  • Antialiasing by image filtering
  • Denoising or smoothing using singular value decomposition [SVD]

Illustrative example:

media210.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Camera processing pipelines for suppressing or minimising disturbance in the image signal generation

H04N 23/81

Noise processing in circuitry of solid-state image sensors [SSIS], e.g. detecting, correcting, reducing or removing noise

H04N 25/60

Informative references

Attention is drawn to the following places, which may be of interest for search:

Antialiasing during drawing of lines

G06T 11/20

Antialiasing during filling a planar surface by adding surface attributes, e.g. colour or texture

G06T 11/40

Noise filtering in image pre-processing for image or video recognition or understanding

G06V 10/30

Noise or error suppression in colour picture communication systems

H04N 1/58

Processing image signals for flicker reduction in stereoscopic or multi-view video systems

H04N 13/144

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/70 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Deblurring; Sharpening
Definition statement

This place covers:

  • Deblurring
  • Removing motion blur from images (add the indexing code G06T 2207/20201)
  • Point-spread function [PSF] model of blurring
  • Deconvolution
  • Modulation transfer function [MTF]
  • Sharpening, crispening
  • Edge enhancement, edge boosting (add the indexing code G06T 2207/20192)

Illustrative examples:

1.

media211.png

2.

media212.png
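
A minimal sketch of deblurring by Wiener deconvolution in the frequency domain, assuming the point-spread function is known; the constant K stands in for the (usually unknown) noise-to-signal power ratio:

    import numpy as np

    def wiener_deconvolve(blurred, psf, K=0.01):
        """blurred: 2-D image; psf: 2-D kernel, zero-padded to the image size by fft2."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F_est = np.conj(H) / (np.abs(H) ** 2 + K) * G    # Wiener filter in the Fourier domain
        restored = np.real(np.fft.ifft2(F_est))
        # If the PSF was centred in its own array, the result is circularly
        # shifted by half the PSF size and can be realigned with np.roll.
        return restored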

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Vibration or motion blur correction for stable pick-up of the scene in cameras or camera modules comprising electronic image sensors

H04N 23/682

Informative references

Attention is drawn to the following places, which may be of interest for search:

Edge-driven scaling

G06T 3/403

Edge or detail enhancement for scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission

H04N 1/4092

Edge or detail enhancement in colour picture communication systems

H04N 1/58

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/73 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Unsharp masking
Definition statement

This place covers:

  • Unsharp masking
  • Adding or subtracting a processed version of an image to or from the image

Illustrative example:

media213.png
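
A minimal sketch of classical unsharp masking: a low-pass (blurred) copy is subtracted from the image and the difference is added back with a gain; sigma, amount and the 8-bit clipping range are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=2.0, amount=1.0):
        blurred = gaussian_filter(img.astype(float), sigma)   # low-pass copy
        detail = img - blurred                                # high-frequency residual
        return np.clip(img + amount * detail, 0, 255)         # assumes an 8-bit image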

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/75 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Retouching; Inpainting; Scratch removal
Definition statement

This place covers:

  • Concealing defective pixels in images
  • Scratch removal
  • Inpainting by image filtering or by replacing patches within an image using a generated image or texture patch, or a patch retrieved from another source, e.g. image databases or the Internet
  • Correcting redeye defects (add the indexing code G06T 2207/30216)

Illustrative examples:

media214.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Scratch removal adapted to be used in scanners, printers, photocopying machines, displays or similar devices

H04N 1/4097

Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine

H04N 5/253

Noise processing, e.g. detecting, correcting, reducing or removing noise in circuitry of solid-state image sensors [SSIS]

H04N 25/60

Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation or edge detection in image analysis

G06T 7/10

Analysis of geometric attributes in image analysis

G06T 7/60

Determining position or orientation of objects or cameras in image analysis

G06T 7/70

Determination of colour characteristics in image analysis

G06T 7/90

Texture generation as such

G06T 11/001

Recognition of eye characteristics

G06V 40/18

Retouching monochrome or colour images adapted to be used in scanners, printers, photocopying machines, displays or similar devices

H04N 1/40093, H04N 1/62

Redeye correction adapted to be used in scanners, printers, photocopying machines, displays or similar devices

H04N 1/624

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/77 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Geometric correction
Definition statement

This place covers:

  • Correcting lens distortions or aberrations
  • Correcting pincushion, barrel, trapezoidal or fish-eye distortions
  • Calibrating parameters of lens distortion
  • Reference grids, coordinate mapping

Illustrative example:

media215.png
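
As a reference formula (the widely used Brown-Conrady radial model, not tied to any cited document), distorted normalised coordinates (x_d, y_d) relate to the undistorted ones (x_u, y_u) through calibrated coefficients k_1, k_2:

    x_d = x_u \,(1 + k_1 r^2 + k_2 r^4), \qquad
    y_d = y_u \,(1 + k_1 r^2 + k_2 r^4), \qquad
    r^2 = x_u^2 + y_u^2

Correction inverts this mapping, typically by building a coordinate map from the corrected output grid back into the distorted image and resampling.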

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transformations in the plane of the image

G06T 3/00

Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

G06T 7/80

Normalisation of the pattern dimension during image preprocessing for image or video recognition

G06V 10/32

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/80 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Dynamic range modification of images or parts thereof
Definition statement

This place covers:

  • Contrast enhancement based on a combination of local and global properties

Illustrative examples:

1.

media216.png

2.

media217.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

H04N 23/741

Bracketing, i.e. taking a series of images with varying exposure conditions

H04N 23/743

Informative references

Attention is drawn to the following places, which may be of interest for search:

Equalising the characteristics of different image components of stereoscopic or multi-view image signals, e.g. their average brightness or colour balance

H04N 13/133

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/90 should additionally be classified in groups G06T 5/10 - G06T 5/60.

based on global image properties
Definition statement

This place covers:

  • Global contrast enhancement or tone mapping to increase the dynamic range of an image, based on properties of the whole image, e.g. global statistics or histograms
  • Contrast stretching, brightness equalisation
  • Gamma and gradation correction in general
  • Tone mapping for high dynamic range [HDR] imaging (add the indexing code G06T 2207/20208)
  • Intensity mapping, e.g. using lookup tables [LUT]

Illustrative example:

media218.png
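
A minimal sketch of global dynamic-range modification via a lookup table: linear contrast stretching between two percentiles followed by a gamma curve (the percentiles and gamma value are illustrative parameters):

    import numpy as np

    def stretch_and_gamma(img, low_pct=1, high_pct=99, gamma=0.8):
        """img: uint8 greyscale image; returns a globally tone-mapped uint8 image."""
        lo, hi = np.percentile(img, [low_pct, high_pct])
        levels = np.arange(256, dtype=float)
        stretched = np.clip((levels - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        lut = np.round(255 * stretched ** gamma).astype(np.uint8)   # global intensity LUT
        return lut[img]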

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Picture signal circuitry for controlling amplitude response in television systems

H04N 5/20

Gamma control in television systems

H04N 5/202

Circuitry for compensating brightness variation in the scene

H04N 23/70

Camera processing pipelines comprising electronic image sensors

H04N 23/80

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/92 should additionally be classified in groups G06T 5/10 - G06T 5/60.

based on local image properties, e.g. for local contrast enhancement
Definition statement

This place covers:

  • Local contrast enhancement, e.g. locally adaptive filtering
  • Retinex processing

Illustrative examples:

1.

media219.png

2.

media220.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Unsharp masking

G06T 5/75

Special rules of classification

Whenever possible or appropriate, documents classified in group G06T 5/94 should additionally be classified in groups G06T 5/10 - G06T 5/60.

Image analysis
Definition statement

This place covers:

  • Analysis of motion, i.e. determining motion of an image subject, or of the camera having acquired the images; Tracking; Change detection; e.g. by block matching, feature-based methods, gradient-based methods, hierarchical or stochastic approaches, motion estimation from a sequence of stereo images.
  • Analysis of texture, i.e. analysis of colour or intensity features which represent a perceived image texture, e.g. based on statistical or structural descriptions.
  • Analysis of geometric attributes, e.g. area, perimeter, diameter, volume, convexity, concavity, centre of gravity, moments or symmetry.
  • Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration; Calibration of stereo cameras, e.g. determining the transformation between left and right camera coordinate systems
  • Computational analysis of images to determine information, e.g. parameters or characteristics, therefrom
  • Inspection or detection on images, e.g. flaw detection; industrial image inspection, e.g. using a design-rule based approach or an image reference; industrial image inspection checking presence/absence; biomedical image inspection.
  • Segmentation, i.e. partitioning an image into regions, or edge detection, i.e. detection of edge features in an image, e.g. involving probabilistic or graph-based approaches, deformable models, morphological operators, transform domain-based approaches or the use of more than two images.
  • Motion-based segmentation.
  • Determination of transform parameters for the alignment of images, i.e. image registration, e.g. by correlation-, feature- or transform domain-based or statistical approaches.
  • Depth or shape recovery, i.e. determination of scene depth parameters by consideration of image characteristics; Depth or shape recovery from shading, specularities, texture, perspective effects, e.g. vanishing points, or line drawings; Depth or shape recovery from multiple images involving amongst others contours, focus, motion, multiple light sources, photometric stereo or stereo images.
  • Determining the position or orientation of objects, e.g. by feature- or transform domain-based or statistical approaches.
  • Determination of image colour characteristics.
Relationships with other classification places

G06T 7/00 covers the details of image analysis algorithms, insofar as it deals with the related image processing algorithms per se. Documents which merely mention the general use of image analysis, without details of the underlying image analysis algorithms, are classified in the application place. Where the image analysis is functionally linked and restricted to specific image acquisition or display hardware or processes, it is classified in the application place; otherwise, it is classified in G06T 7/00. Where the essential technical characteristics relate both to the image analysis detail and to its particular use or special adaptation, classification is made in both G06T 7/00 and the application place.

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Computed tomography

A61B 6/03

Signal processing for Nuclear Magnetic Resonance (NMR) imaging systems

G01R 33/54

ICT specially adapted for processing medical images, e.g. editing

G16H 30/40

Scanning, transmission or reproduction of documents or the like

H04N 1/00

Stereoscopic television systems

H04N 13/00

Methods or arrangements for coding, decoding, compressing or decompressing digital video signals

H04N 19/00

Transforming light or analogous information into electric information using solid-state image sensors

H04N 25/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Image Acquisition

G06T 1/0007

Processor architectures; Processor configuration, e.g. pipelining

G06T 1/20

Processing seismic data

G01V 1/28

Methods or arrangements for reading or recognising printed or written characters or for recognising patterns

G06F 18/00, G06V 10/00

Bioinformatics

G16B

Medical informatics

G16H

Special rules of classification

Where the essential technical characteristics of the invention relate both to the image analysis detail and to its particular use or special adaptation, classification is made in both G06T 7/00 and the relevant application place in other subclasses.

G06T 7/00 focuses on image processing algorithms. Although such algorithms sometimes need to take into account characteristics of the underlying image acquisition apparatus, inventions directed to the image acquisition apparatus per se are outside the scope of this group.

Additional information should be classified using one or more of the Indexing Codes from the ranges of G06T 2200/00 or G06T 2207/00. Their use is obligatory.

The classification symbol G06T 7/00 is allocated to documents concerning:

  • Architectures of image analysis systems, if not provided for elsewhere
  • Extraction of MPEG7 descriptors, if not provided for elsewhere
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Stereo

Treatment of two images, e.g. from two cameras or a single camera that is displaced, in a pairwise manner

Feature

a significant image region or pixel with certain characteristics, for example a feature point, landmark, edge, corner or blob, typically determined by image operators.

Image analysis

the extraction of information from images through the use of image processing techniques acting upon image data, such as intensity, colour, motion and spatial frequency characteristics.

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

AAM

Active appearance model

ASM

Active shape model

HMM

Hidden Markov Model

LBP

Local Binary Pattern

LPE

ligne de partage des eaux (French expression for watershed segmentation)

RANSAC

Random Sample Consensus

CAD

Computer-Aided Detection

SLAM

Simultaneous Localization and Mapping

{Inspection of images, e.g. flaw detection}
Definition statement

This place covers:

  • Quality, conformity control
  • Defects, abnormality, incompleteness
  • Acceptability determination
  • User interface for automated visual inspection
  • Database-to-object inspection
  • Image quality inspection
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Determining position or orientation of objects

G06T 7/70

Validation, performance evaluation or active pattern learning techniques

G06F 18/217

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Clustering techniques for pattern recognition

G06F 18/23

Classification techniques for pattern recognition

G06F 18/24

Image or video pattern matching

G06V 10/74

Pattern recognition or machine learning using clustering within arrangements for image or video recognition or understanding

G06V 10/762, G06V 10/764

Detection or correction of errors in pattern recognition

G06V 10/98

Evaluation of the quality of the acquired pattern in pattern recognition

G06V 10/993

Special rules of classification

In relation to the remaining, function-oriented groups of G06T 7/00, this subgroup is an application-oriented group. Therefore, documents classified herein should also be classified in a function-oriented group under G06T 7/00, if they contain a considerable contribution on the respective function.

For image quality inspection, the indexing code G06T 2207/30168 (Image quality inspection) should be added.

{Industrial image inspection}
Definition statement

This place covers:

  • Quality, conformity control in industrial context
  • Defects, abnormality in industrial context
  • Acceptability determination in industrial context
  • User interfaces for automated visual inspection in industrial context
  • "Teaching" (macros for inspection algorithms)
  • Database-to-object inspection in industrial context
  • Printing quality
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Investigating the presence of flaws or contamination on materials

G01N 21/88

Informative references

Attention is drawn to the following places, which may be of interest for search:

Contactless testing using optical radiation for printed circuits

G01R 31/309

Contactless testing using optical radiation for individual semiconductor devices

G01R 31/311

Photolithography mask inspection

G03F 7/7065

Component placement (in PCB manufacturing)

H05K 3/0008

Special rules of classification

When classifying in this group, the use of the indexing scheme G06T 2207/30108 - G06T 2207/30164 is mandatory for additional information related to industrial image inspection.

For user interfaces for automated visual inspection in industrial context, Indexing code G06T 2200/24 (involving graphical user interfaces [GUIs]) should be added.

{using a design-rule based approach}
Definition statement

This place covers:

Verifying geometric design rules or known geometric parameters, e.g. width or spacing of structures, repetitive patterns

Illustrative example:

media189.png

{checking presence/absence}
Definition statement

This place covers:

  • Detecting the absence of an item that should be there
  • Detecting incompleteness

Illustrative examples:

media190.png

{using an image reference approach}
Definition statement

This place covers:

  • Industrial image inspection where an image is compared to a reference image, standard image, ground truth image, gold standard: either by image comparison at image level, e.g. by image correlation, or by comparison of parameters extracted from the images
  • Reference images originating from an image acquisition apparatus or derived from computer-aided design data

Illustrative examples:

media191.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Determining representative reference patterns or generating dictionaries

G06F 18/28

Determining representative reference patterns or generating dictionaries for image or video recognition or understanding

G06V 10/772

{Biomedical image inspection}
Definition statement

This place covers:

Defects, abnormality in biomedical context

Computer-aided detection [CAD]

Detecting, measuring, scoring, grading of

  • Disease, pathology, lesions
  • Cancer, tumor, tumour, malignancy, nodule
  • Emphysema
  • Microcalcifications
  • Polyps
  • Scar, non-viable tissue
  • Osteoporosis, fracture risk prediction, Arthritis
  • Alzheimer disease
  • Scoring wrinkles, ageing
  • Tissue abnormalities in microscopic images, e.g. inflammation, deformations
  • Grading of living plants

Illustrative examples:

media60.png

Characterising skin imperfections

media194.png

Evaluating spine balance

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Apparatus for radiation diagnostics

A61B 6/00

Diagnosis using ultrasound

A61B 8/00

Signal processing for Nuclear Magnetic Resonance (NMR) imaging systems

G01R 33/54

Ultrasound imaging

G01S 7/52017, G01S 15/8906

ICT specially adapted for processing medical images, e.g. editing

G16H 30/40

Informative references

Attention is drawn to the following places, which may be of interest for search:

Recognising microscopic objects

G06V 20/69

Bioinformatics

G16B

Medical informatics

G16H

Special rules of classification

When classifying in this group, the use of the indexing scheme G06T 2207/30004 - G06T 2207/30104 is mandatory for additional information related to biomedical image processing.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Biomedical

biological or medical

{using an image reference approach}
Definition statement

This place covers:

  • Comparison to a reference image, standard image, atlas...
  • Reference image taken from different patient or patients, or reference image taken from spatially different anatomical regions of the same patient, e.g. comparison of left and right body parts.

Illustrative examples:

Superposition of a perfusion image and the brain atlas images in contour representation

media63.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Determining representative reference patterns or generating dictionaries

G06F 18/28

Determining representative reference patterns or generating dictionaries for image or video recognition or understanding

G06V 10/772

{involving temporal comparison}
Definition statement

This place covers:

  • Follow-up studies, comparison of images from different points of time, temporal difference images, temporal subtraction images, biomedical change detection.
  • Reference image taken from the same patient and the same anatomical region.
  • Subtraction angiography for abnormality detection.
  • Assessment of dynamic contrast enhancement, wash-in/wash-out for abnormality detection.
  • Plethysmography based on image analysis

Illustrative example:

Floating image, reference image and temporal subtraction image

media64.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of motion, e.g. change detection in general

G06T 7/20

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Temporal feature extraction for image or video recognition or understanding

G06V 10/62

Image or video pattern matching

G06V 10/74

Special rules of classification

For plethysmography based on image analysis, Indexing Code G06T 2207/30076 should be added.

Segmentation; Edge detection (motion-based segmentation G06T 7/215)
Definition statement

This place covers:

  • Segmentation, i.e. partitioning an image into regions
  • Edge detection, i.e. detection of edge features in an image
References
Limiting references

This place does not cover:

Motion-based segmentation

G06T 7/215

Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Separation of touching or overlapping patterns for pattern recognition, e.g. character segmentation for optical character recognition (OCR)

G06V 10/26

Extraction of image features/characteristics for pattern recognition

G06V 10/40

Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition

G06V 10/44

Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of texture

G06T 7/40

Determination of colour characteristics

G06T 7/90

Clustering techniques in pattern recognition

G06F 18/23

Classification techniques in pattern recognition

G06F 18/24

Feature extraction related to colour, for pattern recognition

G06V 10/56

Pattern recognition or machine learning using clustering within arrangements for image or video recognition or understanding

G06V 10/762, G06V 10/764

Region-based segmentation
Definition statement

This place covers:

Methods evaluating properties or features of image regions to determine the segmentation result, e.g.:

  • Thresholding, fixed threshold binarisation, multiple and histogram-derived thresholds
  • Region growing, splitting and merging
  • Colour-based segmentation
  • Texture-based segmentation
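
By way of illustration only, a minimal sketch (assuming a NumPy grey-level array as input; all other names are placeholders) of fixed-threshold binarisation and a histogram-derived (Otsu-style) threshold:

    import numpy as np

    def otsu_threshold(image):
        """Histogram-derived threshold: maximise the between-class variance."""
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            m0 = (np.arange(t) * p[:t]).sum() / w0
            m1 = (np.arange(t, 256) * p[t:]).sum() / w1
            var_between = w0 * w1 * (m0 - m1) ** 2
            if var_between > best_var:
                best_t, best_var = t, var_between
        return best_t

    image = np.random.randint(0, 256, (64, 64))        # placeholder grey-level image
    fixed_mask = image > 128                           # fixed-threshold binarisation
    otsu_mask = image > otsu_threshold(image)          # histogram-derived threshold
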
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition

G06V 10/28

Extraction of features or characteristics of the image related to colour, for pattern recognition

G06V 10/56

Cutting or merging image elements, e.g. region growing, watershed, clustering-based techniques, for pattern recognition

G06V 40/23

Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of texture

G06T 7/40

Determination of colour characteristics

G06T 7/90

Edge-based segmentation
Definition statement

This place covers:

Methods evaluating (closed) contours, edges or outlines of image portions to determine the segmentation result, e.g.:

  • Contour-based segmentation
  • Detection of straight edge-lines (e.g. buildings or roads from aerial images) which partition an image into regions
  • Finding and linking edge candidate points or segments (edgels)

Illustrative example:

media132.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition

G06V 10/44

Extraction of features or characteristics of the image by coding the contour of a pattern, for pattern recognition

G06V 10/46

Edge detection
Definition statement

This place covers:

In contrast to G06T 7/12, this group covers documents pertaining purely to edge-detection without partitioning an image into regions, e.g.:

  • Derivative methods (first-order or gradient, second order, e.g. Laplacian)
  • Zero crossing
  • Corner detection

Illustrative example:

media133.png
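
By way of illustration only, a minimal sketch of a first-order (gradient) derivative method using Sobel-style kernels; the image content and the edge-map threshold are placeholder choices:

    import numpy as np

    def filter2d(img, kernel):
        """Naive sliding-window filtering (kernel applied without flipping), 'valid' output."""
        kh, kw = kernel.shape
        out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
        return out

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    image = np.random.rand(32, 32)            # placeholder grey-level image
    gx = filter2d(image, sobel_x)             # horizontal intensity changes
    gy = filter2d(image, sobel_y)             # vertical intensity changes
    magnitude = np.hypot(gx, gy)              # first-order gradient magnitude
    edges = magnitude > magnitude.mean()      # crude edge map, no partitioning into regions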

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections, for pattern recognition

G06V 10/44

Extraction of features or characteristics of the image by coding the contour of a pattern, for pattern recognition

G06V 10/46

involving thresholding
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition

G06V 10/28

involving probabilistic approaches, e.g. Markov random field [MRF] modelling
Definition statement

This place covers:

  • Statistical/Probabilistic methods for segmentation

Illustrative example:

media134.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Classification techniques based on a parametric (probabilistic) model, for pattern recognition

G06F 18/2415

Markov models or related models, Markov random fields or networks embedding Markov models for pattern recognition

G06F 18/295

Detecting partial patterns or configurations by analysing connectivity relationship of elements of the pattern, for pattern recognition

G06V 10/457

Pattern recognition or machine learning using classification within arrangements for image or video recognition or understanding

G06V 10/764

involving deformable models, e.g. active contour models
Definition statement

This place covers:

  • Model-based segmentation (in particular when applied to biomedical images)
  • Methods based on active shape models
  • Methods based on active appearance models
  • Methods based on active contours, active surfaces, snakes or deformable templates

Illustrative example:

media99.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Pattern recognition techniques involving a deformation of the sample or reference pattern or elastic matching

G06V 10/754

Matching of contours based on a local optimisation criterion, e.g. snakes or active contours, for pattern recognition

G06V 10/755

Matching based on shape statistics, e.g. active shape models, for pattern recognition

G06V 10/7553

Matching based on statistics of image patches, e.g. active appearance models, for pattern recognition

G06V 10/7557

Special rules of classification

For Active shape model [ASM], Indexing Code G06T 2207/20124 should be added.

For Active appearance model [AAM], Indexing Code G06T 2207/20121 should be added.

For Active contour; Active surface; Snakes, Indexing Code G06T 2207/20116 should be added.

involving morphological operators
Definition statement

This place covers:

  • Morphological methods
  • Watersheds
  • Toboggan-based methods

Illustrative examples:

media136.png

Figure 1. The 1D profile I(x), representing the intensity of a dark object of interest on a light background, forms three basins which correspond to the local minima Min1, Min2 and Min3 of the intensity of the segmented region. The three basins give rise to two watershed lines LPE1 and LPE2, which divide the segmented region into three sub-regions SR1, SR2 and SR3.

media137.png

Figure 2. Toboggan-based object segmentation
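
As a rough sketch of the one-dimensional situation of Figure 1 (assuming a NumPy intensity profile; the flooding process is simplified here to a local-extrema analysis):

    import numpy as np

    def watershed_lines_1d(profile):
        """Find basin minima and the watershed lines (ridges) separating neighbouring basins."""
        p = np.asarray(profile, dtype=float)
        interior = np.arange(1, len(p) - 1)
        minima = interior[(p[1:-1] < p[:-2]) & (p[1:-1] < p[2:])]    # basin bottoms
        ridges = [a + int(np.argmax(p[a:b + 1])) for a, b in zip(minima[:-1], minima[1:])]
        return minima, np.array(ridges, dtype=int)

    # A profile with three basins yields two watershed lines, as in Figure 1
    x = np.linspace(0, 2 * np.pi, 300)
    profile = np.cos(3 * x) + 0.05 * x
    basins, lines = watershed_lines_1d(profile)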

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Smoothing or thinning the pattern, e.g. by morphological operators, for pattern recognition

G06V 10/34

Combinations of pre-processing functions using a local operator, for pattern recognition

G06V 10/36

Cutting or merging image elements, e.g. region growing, watershed, for pattern recognition

G06V 40/23

Special rules of classification

For Morphological image processing, an Indexing Code from the range of G06T 2207/20036 - G06T 2207/20044 should be added.

involving graph-based methods
Definition statement

This place covers:

  • Graph-cut methods

Illustrative example:

media102.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Feature extraction by graphical representation, e.g. directed attributed graphs, for pattern recognition

G06V 10/426

Informative references

Attention is drawn to the following places, which may be of interest for search:

Hierarchical clustering techniques, for pattern recognition

G06F 18/231

Non-hierarchical partitioning techniques based on graph theory, for pattern recognition

G06F 18/2323

Graph matching, for pattern recognition

G06V 30/1988

involving transform domain methods
Definition statement

This place covers:

  • Fourier-, FFT-, Wavelet-based methods
  • Gabor-, Laplace-transform-based methods
  • Discrete cosine transform [DCT]-based methods
  • Walsh-Hadamard transform [WHT]-based methods
  • Hough transform

Illustrative example:

media139.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition

G06V 10/431

Detecting partial patterns using transforms (e.g. Hough transform), for pattern recognition

G06V 10/48

Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition

G06V 10/52

Special rules of classification

For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.

involving the use of two or more images
Definition statement

This place covers:

  • Using information from multiple images to determine segmentation result
  • Segmentation based on several images taken under varying illumination, focus, exposure, etc.
  • Segmentation of a video frame involving several image frames of the video sequence, e.g. neighbouring frames
  • Temporal and spatio-temporal segmentation, if not based on motion information
  • Segmentation using several (neighbouring) slices of a tomographic data set (CT, MRI, PET, etc.), propagation of segmentation results between neighbouring slices
  • Hierarchical segmentation methods (including wavelet-based schemes), if final segmentation result is derived from (partial) results at different resolution levels
  • Multispectral image segmentation using information from different spectral bands (beyond the visible spectrum)

Illustrative example:

media140.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Motion-based segmentation

G06T 7/215

involving edge growing; involving edge linking
Definition statement

This place covers:

Image segmentation or edge detection methods based on

  • edge growing
  • edge linking
  • edge following
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Detecting partial patterns by analysis of the connectivity relationships of elements of the pattern, e.g. by edge linking, connected component or neighbouring slice analysis, for pattern recognition

G06V 10/457

involving region growing; involving region merging; involving connected component labelling
Definition statement

This place covers:

Image segmentation methods based on

  • region growing; region merging
  • split-and-merge
  • connected component labelling

Illustrative example:

media141.png

Figure 1. Region growing method that accumulates costs along a pixel path; as soon as the accumulated cost between neighbouring pixels (91, 92) exceeds a threshold, the growing is stopped.

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Detecting partial patterns by analysis of the connectivity relationships of elements of the pattern, e.g. by edge linking, connected component or neighbouring slice analysis, for pattern recognition

G06V 10/457

Segmentation of touching or overlapping patterns, cutting or merging image elements, e.g. region growing, watersheds, for pattern recognition

G06V 40/23

involving foreground-background segmentation
Definition statement

This place covers:

Image segmentation or edge detection methods based on a separation of foreground, i.e. relevant parts, and background, i.e. non-relevant parts of an image.

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Quantising the analogue image signal, e.g. histogram thresholding for discrimination between background and foreground patterns, for pattern recognition

G06V 10/28

Analysis of motion (motion estimation for coding, decoding, compressing or decompressing digital video signals H04N 19/43, H04N 19/51)
Definition statement

This place covers:

  • Image analysis algorithms for determining the motion of an image subject, or of the camera having acquired the images; determination of scene movement between image frames, e.g. change detection
  • Tracking
  • Motion capture
  • Determining camera ego-motion (add Indexing Code G06T 2207/30244: Camera pose)
  • Medical motion analysis, e.g. of the left ventricle of the heart (add Indexing Code G06T 2207/30048: Heart; Cardiac)
  • Trajectory representation (add Indexing Code G06T 2207/30241: Trajectory)
  • Stabilisation of video sequences (see also G06T 7/30)
References
Limiting references

This place does not cover:

Motion estimation for coding, decoding, compressing or decompressing digital video signals

H04N 19/43, H04N 19/51

Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Scene recognition

G06V 20/00

Recognising video content

G06V 20/52

Recognising scenes under surveillance

G06V 20/52

Recognising scenes perceived from a vehicle

G06V 20/56

Recognising scenes inside a vehicle

G06V 20/59

Gesture recognition

G06V 40/20

Burglar, theft or intruder alarms using cameras and image comparison

G08B 13/196

Informative references

Attention is drawn to the following places, which may be of interest for search:

Determination of transform parameters for the alignment of images, i.e. image registration

G06T 7/30

Depth or shape recovery from motion

G06T 7/579

Determining position or orientation of objects

G06T 7/70

Video games

A63F 13/00

Target following using TV type tracking systems

G01S 3/7864

Light barriers

G01V 8/20

Data indexing of video sequences

G06F 16/50

Surveillance systems using closed-circuit television systems (CCTV)

H04N 7/18

Special rules of classification

For camera pose, Indexing Code G06T 2207/30244 should be added. For heart or cardiac images, Indexing Code G06T 2207/30048 should be added. For trajectory details, Indexing Code G06T 2207/30241 should be added. For sports video or sports images, Indexing Code G06T 2207/30221 should be added.

for motion estimation over a hierarchy of resolutions (multi-resolution motion estimation or hierarchical motion estimation for coding, decoding, compressing or decompressing digital video signals H04N 19/53)
Definition statement

This place covers:

Illustrative example:

media116.png

References
Limiting references

This place does not cover:

Multi-resolution motion estimation or hierarchical motion estimation for coding, decoding, compressing or decompressing digital video signals

H04N 19/53

Motion-based segmentation
Definition statement

This place covers:

  • Figure-ground segmentation by detection of moving object(s) from dense motion representation
  • Partitioning an image into regions of homogeneous 2D (apparent) motion
  • Based on analysis of motion vector field or motion flow
  • Grouping from optical flow

Illustrative example:

media143.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Retrieval of video data using motion, e.g. object motion

G06F 16/786

Segmenting video sequences, e.g. scene change analysis

G06V 20/49

Scene change analysis

H04N 5/147

Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation; Edge detection

G06T 7/10

using block-matching
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Movement estimation for television pictures

H04N 5/144

Predictive coding in television systems using temporal prediction with motion detection

H04N 19/503

Informative references

Attention is drawn to the following places, which may be of interest for search:

Image coding using predictors

G06T 9/004

Use of motion vectors for image compression, coding using predictors, video coding

H04N 19/52

using full search
Definition statement

This place covers:

Full, exhaustive, brute force search

Illustrative example:

media144.png

Figure 1. A motion vector between the m-th frame (1) and the (m+n)-th frame (2) is detected. First, the image data of the m-th frame 1 is divided into a plurality of first blocks 11, and the first blocks 11 are extracted sequentially. A second block 12 of the same size and shape as the extracted first block 11 is extracted from the image data of the (m+n)-th frame 2. The absolute difference value of the corresponding pixels of the extracted first block 11 and the extracted second block 12 is computed for every pixel.
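
As a sketch of the exhaustive search described above (assuming NumPy arrays for the two frames; block size and search range are placeholder values):

    import numpy as np

    def full_search_motion_vector(block, next_frame, top, left, search_range=7):
        """Exhaustive block matching: minimise the sum of absolute differences (SAD)."""
        bh, bw = block.shape
        h, w = next_frame.shape
        best_sad, best_mv = np.inf, (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + bh > h or x + bw > w:
                    continue
                candidate = next_frame[y:y + bh, x:x + bw]
                sad = np.abs(block.astype(int) - candidate.astype(int)).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

    frame_m = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # m-th frame
    frame_mn = np.roll(frame_m, shift=(2, 3), axis=(0, 1))          # (m+n)-th frame, shifted content
    mv = full_search_motion_vector(frame_m[16:24, 16:24], frame_mn, 16, 16)   # ≈ (2, 3)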

using non-full search, e.g. three-step search
Definition statement

This place covers:

  • Non-full, layered structure, fast, adaptive, efficient search
  • Three-Step, New Three-Step, Four-Step Search
  • Simple and Efficient Search
  • Binary Search
  • Spiral Search
  • Two-Dimensional Logarithmic Search
  • Cross Search Algorithm
  • Adaptive Rood Pattern Search
  • Orthogonal Search
  • One-at-a-Time Algorithm
  • Diamond Search
  • Hierarchical search
  • Spatial dependency check

Illustrative example of a hierarchical search:

media145.png

Special rules of classification

For Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform, Indexing Code G06T 2207/20016 should be added.

using feature-based methods, e.g. the tracking of corners or segments
Definition statement

This place covers:

  • Feature points, e.g. determined by image operators; also matching of point descriptors, feature vectors; significant segments, blobs
  • Feature, landmark, marker, fiducial, edge, corner, etc.

Illustrative example:

media146.png

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Feature

a significant image region or pixel with certain characteristics

{involving reference images or patches}
Definition statement

This place covers:

  • Involving correlation of "true to reality" image patches, templates, regions of interest
  • Correlation used for 1) finding features in each image or for 2) finding regions of interest from one image in the other images

Illustrative example:

media147.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Face recognition using comparisons between temporally consecutive images

G06V 40/167

Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of motion using block-matching (where blocks are arbitrarily defined by a grid, not as a significant image region)

G06T 7/223

Image matching for pattern recognition or image matching in general

G06V 40/167

{involving models}
Definition statement

This place covers:

  • Involving matching of intermediary 2D or 3D models extracted from each image before motion analysis, e.g. skeletons, stick models, ellipses, geometric models of all kinds, polygon models, active appearance and shape models, as opposed to reference images or patches
  • Model matching used for 1) finding features in each image or for 2) finding structure of interest from one image in the other images

Illustrative example:

media148.png

For each frame of a captured video sequence, a basic human body model 800 for diving competitions is superimposed on the frame and adjusted to provide an accurate representation of the diver's positioning in that frame, the sequence of adjusted models describing the entire motion sequence of the diver.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Matching of contours in general or matching of contours for pattern recognition

G06V 10/752

Syntactic or structural pattern recognition, e.g. symbolic string recognition

G06V 30/1983

involving subtraction of images
Definition statement

This place covers:

  • Subtraction of previous image
  • Subtraction of background image, background maintenance, background models therefor
  • Also involving ratio or more general comparison of corresponding pixels in successive frames

Illustrative example:

media149.png
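
A minimal sketch of background subtraction with a running-average background model (assuming grey-level frames as NumPy arrays; the learning rate and threshold are illustrative values):

    import numpy as np

    class RunningAverageBackground:
        """Maintain a background model and flag pixels that differ from it."""

        def __init__(self, first_frame, alpha=0.05, threshold=25):
            self.background = first_frame.astype(float)
            self.alpha = alpha            # background update (learning) rate
            self.threshold = threshold    # foreground decision threshold

        def apply(self, frame):
            diff = np.abs(frame.astype(float) - self.background)
            foreground = diff > self.threshold
            # Background maintenance: update the model only where no change was detected
            self.background[~foreground] = (
                (1 - self.alpha) * self.background[~foreground]
                + self.alpha * frame[~foreground]
            )
            return foreground

    frames = np.random.randint(0, 256, (10, 48, 64), dtype=np.uint8)   # placeholder sequence
    model = RunningAverageBackground(frames[0])
    masks = [model.apply(f) for f in frames[1:]]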

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Burglar, theft or intruder alarms using cameras and image comparison

G08B 13/196

Informative references

Attention is drawn to the following places, which may be of interest for search:

Change detection in biomedical image inspection

G06T 7/0014

using transform domain methods, e.g. Fourier domain methods
Definition statement

This place covers:

  • Fourier, DCT, Wavelet, Gabor, etc.
  • Using phase correlation

Illustrative examples:

media150.png

Figure 1.

media113.png

Figure 2.
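
A brief sketch of translation estimation by phase correlation (NumPy FFT; the example assumes two frames related by a pure circular shift):

    import numpy as np

    def phase_correlation(frame_a, frame_b):
        """Estimate the shift of frame_a relative to frame_b from the phase of their spectra."""
        cross_power = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
        cross_power /= np.abs(cross_power) + 1e-12       # keep the phase only
        correlation = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        h, w = frame_a.shape
        if dy > h // 2:                                  # map peak location to a signed shift
            dy -= h
        if dx > w // 2:
            dx -= w
        return dy, dx

    a = np.random.rand(64, 64)
    b = np.roll(a, shift=(5, -3), axis=(0, 1))           # a shifted by (+5, -3)
    shift = phase_correlation(b, a)                      # returns (5, -3)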

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition

G06V 10/431

Detecting partial patterns using Hough transform for pattern recognition

G06V 10/48

Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition

G06V 10/52

Special rules of classification

For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.

using gradient-based methods
Definition statement

This place covers:

Optic (optical) flow involving the calculation of spatial and temporal gradients

Illustrative example:

media152.png
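
As an illustration, a single-window Lucas-Kanade style estimate from the spatial and temporal gradients (NumPy; the small synthetic blob and its sub-pixel displacement are placeholder data):

    import numpy as np

    def flow_from_gradients(prev, curr):
        """Solve the optic flow constraint Ix*u + Iy*v + It = 0 in a least-squares sense."""
        ix = np.gradient(prev, axis=1)        # spatial gradient in x
        iy = np.gradient(prev, axis=0)        # spatial gradient in y
        it = curr - prev                      # temporal gradient
        a = np.stack([ix.ravel(), iy.ravel()], axis=1)
        b = -it.ravel()
        (u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
        return u, v                           # one flow vector for the whole window

    y, x = np.mgrid[0:15, 0:15]
    prev = np.exp(-((x - 7.0) ** 2 + (y - 7.0) ** 2) / 8.0)   # smooth blob
    curr = np.exp(-((x - 7.5) ** 2 + (y - 7.0) ** 2) / 8.0)   # blob moved 0.5 px to the right
    u, v = flow_from_gradients(prev, curr)                    # u ≈ +0.5, v ≈ 0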

involving stochastic approaches, e.g. using Kalman filters
Definition statement

This place covers:

  • Bayesian methods
  • Hidden Markov models [HMM]
  • Particle filtering

Illustrative examples:

media153.png

Figure 1.

media154.png

Figure 2. Kalman filter-based tracking of 3D heart model
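
A compact sketch of a constant-velocity Kalman filter for tracking a 2D position (NumPy; the noise covariances and the simulated measurements are arbitrary illustration values):

    import numpy as np

    # State [x, y, vx, vy], measurement [x, y], constant-velocity motion model
    F = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)    # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # measurement model
    Q = 0.01 * np.eye(4)                         # process noise covariance
    R = 1.0 * np.eye(2)                          # measurement noise covariance

    def kalman_step(x, P, z):
        """One predict/update cycle for a new measurement z = [x, y]."""
        x_pred = F @ x                           # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                 # update
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.zeros(4), np.eye(4)
    for t in range(10):                          # track a target drifting diagonally
        z = np.array([t, 0.5 * t]) + 0.3 * np.random.randn(2)
        x, P = kalman_step(x, P, z)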

Special rules of classification

Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/20.

using a sequence of stereo image pairs
Definition statement

This place covers:

Illustrative example:

media155.png

Multi-camera tracking
Definition statement

This place covers:

  • Algorithms for camera networks
  • Interaction, cooperation between trackers
  • Multi-view tracking, multi-camera tracking
  • The cameras view the same scene (cooperation, e.g. by voting, fusion)
  • The cameras view different scenes (cooperation, e.g. by handover, tracklet joining, trajectory joining)

Illustrative example:

media119.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Classification of unknown faces, i.e. recognising the same non-enrolled faces, e.g. recognising the unknown faces across different face tracks

G06V 40/173

Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of motion using a sequence of stereo pairs, e.g. cooperative motion analysis from a single stereo camera pair or motion analysis from at least three views, wherein at least one pair of views is processed as stereo pair

G06T 7/285

Special rules of classification

Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/20.

In particular, in the case of motion analysis from multiple monocular views with subsequent merging or joining of analysis results, details about the respective analysis algorithm per view should be classified in the subgroups of G06T 7/20 as well.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Multi-camera

Treatment of multiple image sequences, not in a pairwise manner

Stereo

Treatment of two images, e.g. from two cameras or a single camera that is displaced, in a pairwise manner

Determination of transform parameters for the alignment of images, i.e. image registration
Definition statement

This place covers:

Image analysis algorithms for determining geometric transformations required to register (i.e. align) separate images. The process involves the estimation of transform parameters. Registration means determining the alignment of images or finding their relative position.

  • Registration of image subparts for the construction of image mosaics
  • Multi-modal, cross-modal, across-modal registration of medical image data sets
  • Registration with a medical atlas
  • Registration of pre-operative and intra-operative medical image data sets
  • Registration for change detection in biomedical or remote sensing images (for change detection see also G06T 7/20)
  • Registration of models
  • Registration of a model with an image
  • Registration of range data, point clouds (ICP algorithm)
  • 2D/2D, 2D/3D, 3D/3D registration
  • Interactive registration
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Segmentation involving deformable models

G06T 7/149

Recognising three-dimensional objects, e.g. range data matching for pattern recognition

G06V 20/64

Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transformation in the plane of the image for image registration

G06T 3/14

Analysis of motion

G06T 7/20

Combining images from different aspect angles, e.g. spatial compounding

G01S 15/8995

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Image or video pattern matching

G06V 10/74

Comparing pixel values or logical combinations thereof, e.g. template matching

G06V 10/751

Special rules of classification

For registration of medical image data, an Indexing Code from the range of G06T 2207/30004 - G06T 2207/30104 (Biomedical image processing) should be added.

For involving image mosaicing, Indexing Code G06T 2200/32 should be added.

For Interactive image processing based on input by user, an Indexing Code from the range of G06T 2207/20092 - G06T 2207/20108 should be added.

Synonyms and Keywords

In patent documents, the following words/expressions are often used with the meaning indicated:

Recalage (French)

Registration (English)

using correlation-based methods
Definition statement

This place covers:

  • Global correlation
  • Block-matching like correlation, if not for motion analysis

Illustrative example:

media157.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Analysis of motion using block-matching

G06T 7/223

using feature-based methods
Definition statement

This place covers:

  • Feature points, e.g. determined by image operators; also matching of point descriptors, feature vectors; significant segments, blobs
  • Feature, landmark, marker, fiducial, edge, corner, etc.

Illustrative example:

media158.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Extraction of features or characteristics of the image, for pattern recognition

G06V 10/40

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Feature

significant image region or pixel with certain characteristics

{involving reference images or patches}
Definition statement

This place covers:

Involving correlation with "true to reality" image patches, templates, regions of interest; correlation used for 1) finding features in each image, or for 2) finding regions of interest from one image in the other image.

Illustrative example:

media159.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Image registration using correlation of complete images or block-matching-like registration (where blocks are arbitrarily defined by a grid, not as a significant image region, region of interest)

G06T 7/32

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Image or video pattern matching

G06V 10/74

{involving models}
Definition statement

This place covers:

  • Involving matching of intermediary 2D or 3D models extracted from each image before registration, e.g. geometric models of all kinds, polygon models, active appearance and shape models, as opposed to reference images or patches
  • Corresponding models are adapted to each image to be registered, respectively, transform parameters between the images are determined from a comparison/matching of the adapted models
  • Model matching used for 1) finding features in each image, or for 2) finding structure of interest from one image in the other image

Illustrative example:

media160.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Matching of contours

G06V 10/752

Syntactic or structural pattern recognition, e.g. symbolic string recognition

G06V 30/1983

using statistical methods
Definition statement

This place covers:

  • Involving probabilistic feature points, statistical features or reference images / patches, statistical models, statistical matching
  • Approaches based on mutual information
  • RANSAC
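
A rough sketch of a mutual-information similarity measure between two images, as used by registration approaches based on mutual information (NumPy; the joint-histogram bin count is an illustrative choice):

    import numpy as np

    def mutual_information(image_a, image_b, bins=32):
        """Mutual information of the joint grey-level distribution of two images."""
        joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    # A registration criterion would maximise this measure over candidate transform parameters
    a = np.random.rand(64, 64)
    b = np.roll(a, 3, axis=0)                      # misaligned copy of a
    mi_aligned = mutual_information(a, a)
    mi_misaligned = mutual_information(a, b)       # lower than mi_aligned
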
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Matching configurations of points or features for pattern recognition, e.g. using RANSAC

G06V 10/757

Image matching by comparing statistics of regions for pattern recognition

G06V 10/758

Special rules of classification

Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/30.

using transform domain methods
Definition statement

This place covers:

Fourier, DCT, Wavelet, Gabor, etc.

Illustrative example:

media71.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition

G06V 10/431

Detecting partial patterns using transforms (e.g. Hough transform) for pattern recognition

G06V 10/48

Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition

G06V 10/52

Special rules of classification

For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.

Registration of image sequences
Definition statement

This place covers:

  • Aligning one image sequence or image set to the other, i.e. finding spatially or temporally corresponding frames between one image sequence and the other (inter-sequence alignment), as opposed to spatial alignment of image frames within a single image sequence (intra-sequence alignment)
  • Temporal alignment = alignment along the t-axis, e.g. alignment of two video sequences
  • Spatial alignment = alignment along the z-axis, e.g. alignment of two stacks of CT slices
  • Additionally, spatially aligning the temporally or spatially corresponding frames in the x-y-plane (intra-sequence alignment) is possible
  • Source sequences can be of any orientation

Illustrative examples:

media162.png

Figure 1. Spatial alignment

media163.png

Figure 2.

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Matching video sequences for pattern recognition

G06V 20/48

Document matching for pattern recognition

G06V 30/418

Special rules of classification

Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/30.

Analysis of texture (depth or shape recovery from texture G06T 7/529)
Definition statement

This place covers:

Analysis of the spatial arrangement of image colour or intensity characteristics representative of a perceived image texture.

References
Limiting references

This place does not cover:

Depth or shape recovery from texture

G06T 7/529

Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation; Edge detection

G06T 7/10

Depth or shape recovery from shading

G06T 7/507

Filling a planar surface by adding texture in 2D image generation

G06T 11/40

Texture mapping in 3D image rendering

G06T 15/04

based on statistical description of texture
Definition statement

This place covers:

Analysis of texture using:

  • First-order statistics
  • Global histogram-based measures: mean, variance, skewness, kurtosis, energy, entropy
  • Autocorrelation
  • Run-length based algorithms
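
For illustration, the first-order, global histogram-based measures listed above, computed from a 256-bin grey-level histogram (NumPy; the input array is a placeholder):

    import numpy as np

    def first_order_texture_stats(image, bins=256):
        """Mean, variance, skewness, kurtosis, energy and entropy of the grey-level histogram."""
        hist, edges = np.histogram(image, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        levels = (edges[:-1] + edges[1:]) / 2
        mean = np.sum(levels * p)
        var = np.sum((levels - mean) ** 2 * p)
        std = np.sqrt(var)
        skew = np.sum(((levels - mean) / std) ** 3 * p) if std > 0 else 0.0
        kurt = np.sum(((levels - mean) / std) ** 4 * p) if std > 0 else 0.0
        energy = np.sum(p ** 2)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return {"mean": mean, "variance": var, "skewness": skew,
                "kurtosis": kurt, "energy": energy, "entropy": entropy}

    texture = np.random.randint(0, 256, (64, 64))   # placeholder texture patch
    stats = first_order_texture_stats(texture)
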
using transform domain methods
Definition statement

This place covers:

Fourier, DCT, Wavelet, Gabor, etc.

Illustrative example:

media164.png

Texture-based image retrieval method using a Gabor filter in the frequency domain, wherein the frequency domain representation is divided according to a predetermined layout for extracting texture descriptors of respective feature channels.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Feature extraction by deriving mathematical or geometrical properties, frequency domain transformations, for pattern recognition

G06V 10/431

Detecting partial patterns using transforms (e.g. Hough transform), for pattern recognition

G06V 10/48

Feature extraction by deriving mathematical or geometrical properties, scale-space transformation, e.g. wavelet transform, for pattern recognition

G06V 10/52

Special rules of classification

For Transform domain processing, an Indexing Code from the range of G06T 2207/20052 - G06T 2207/20064 should be added.

using image operators, e.g. filters, edge density metrics or local histograms
Definition statement

This place covers:

  • Laws texture energy measure
  • Texture analysis using edge operators
  • Texture analysis using difference of Gaussians
  • Texture analysis using local linear transforms
  • Local Binary Pattern [LBP]
  • Grey level difference method
  • Local rank order correlation
using co-occurrence matrix computation
Definition statement

This place covers:

  • Second-order statistics
  • Generalised co-occurrence matrix
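
A small sketch of a grey-level co-occurrence matrix for a single displacement, from which second-order statistics such as contrast can be derived (NumPy; the quantisation to a few grey levels is an illustrative simplification):

    import numpy as np

    def cooccurrence_matrix(image, dy=0, dx=1, levels=8):
        """Count grey-level pairs for pixel pairs separated by the offset (dy, dx)."""
        q = (image.astype(float) / image.max() * (levels - 1)).astype(int)   # quantise
        glcm = np.zeros((levels, levels), dtype=int)
        h, w = q.shape
        for y in range(h - dy):
            for x in range(w - dx):
                glcm[q[y, x], q[y + dy, x + dx]] += 1
        return glcm

    image = np.random.randint(0, 256, (32, 32))     # placeholder texture patch
    glcm = cooccurrence_matrix(image)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)             # a typical second-order texture measure
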
using random fields
Definition statement

This place covers:

  • Markov Random Fields, Gaussian Random Fields, Gibbs Random Fields
  • Autoregressive Model
using fractals
Definition statement

This place covers:

  • fractal texture analysis methods
  • fractal dimension
  • box counting methods
based on structural texture description, e.g. using primitives or placement rules
Definition statement

This place covers:

  • Shape chain grammars, graph grammars
  • Grouping of primitives in hierarchical textures

Illustrative example:

media165.png

Figure 1 (top) and 2 (bottom). Method for finding periodic structures in a layer of an integrated circuit that have identical optical properties. Fig. 2 illustrates a geometric hierarchy of the periodic elements in the cell layer of Fig. 1.

Depth or shape recovery
Definition statement

This place covers:

  • Image analysis algorithms for determining scene depth parameters from image characteristics.
  • Shape from X
  • Depth map determination
  • Disparity calculation for shape recovery
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Picture taking arrangements specially adapted for photogrammetry or photographic surveying

G01C 11/02

LIDAR systems for mapping or imaging

G01S 17/89

from shading (G06T 7/586 takes precedence)
Definition statement

This place covers:

Shape from shading or shadows

Illustrative example:

media166.png

References
Limiting references

This place does not cover:

Depth or shape recovery from multiple light sources, e.g. photometric stereo

G06T 7/586

from specularities
Definition statement

This place covers:

Illustrative example:

media167.png

from laser ranging, e.g. using interferometry; from the projection of structured light
Definition statement

This place covers:

Illustrative example:

media127.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Image acquisition and arrangements for measuring contours or curvatures of an object by projecting a pattern thereupon

G01B 11/25

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Structured

characterises the illumination

from texture
Definition statement

This place covers:

  • shape from texture
  • shape from blur in a single image

Illustrative example:

media168.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Depth or shape recovery from focus

G06T 7/571

from perspective effects, e.g. by using vanishing points
Definition statement

This place covers:

Illustrative example:

media83.png

from line drawings
Definition statement

This place covers:

  • shape from line drawings
  • shape from contours in a single image

Illustrative example:

media170.png

from multiple images
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Volumetric display with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes

H04N 13/388

Informative references

Attention is drawn to the following places, which may be of interest for search:

Determining parameters from multiple pictures, e.g. disparity calculation as such

G06T 7/97

Special rules of classification

For documents concerning trilinear computations or the trifocal tensor, Indexing Code G06T 2207/20088 (Trinocular vision calculations; trifocal tensor) should be added.

from light fields, e.g. from plenoptic cameras
Definition statement

This place covers:

Depth reconstruction using, or based on, light field representations, i.e. 5D plenoptic function, 4D light field, lumigraph, ray space; such light field representations may originate, e.g. from plenoptic cameras, light field cameras, cameras with a lenslet array or integral imaging.

Illustrative example:

media171.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Depth using trinocular vision calculations/trifocal tensor

G06T 7/55, G06T 2207/20088

Depth from focus

G06T 7/571

Depth from motion

G06T 7/579

Depth from multiple light sources

G06T 7/586

Depth from stereo images

G06T 7/593

from contours
Definition statement

This place covers:

  • Shape from contours
  • Shape from silhouettes
  • Shape from visual hulls

Illustrative example:

media172.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Depth or shape recovery from line drawings, e.g. shape from contours involving one image only

G06T 7/543

from focus
Definition statement

This place covers:

  • Shape from focus
  • Shape from defocus of multiple images

Illustrative example:

media173.png

Figure 1

media174.png

Figure 2

Figures 1 and 2. Input image sequence and resulting depth map

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Shape from texture, e.g. shape from blur in a single image

G06T 7/529

Systems for automatic generation of focusing signals

G02B 7/28

Focusing aids for cameras; Autofocus systems for cameras

G03B 13/00

from motion
Definition statement

This place covers:

  • Shape from motion, structure from motion
  • Extracting the shape of a scene from the spatial and temporal changes occurring in an image sequence (camera or scene moves)
  • Simultaneous Localisation and Mapping [SLAM]

Illustrative examples:

Figure 1

media175.png

media176.png

Figure 2. Shape from motion reconstruction

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Determining position or orientation of objects or cameras

G06T 7/70

Special rules of classification

For Camera pose, Indexing Code G06T 2207/30244 should be added.

from multiple light sources, e.g. photometric stereo
Definition statement

This place covers:

Algorithms for the determination of scene depth parameters from multiple images for which more than one source of illumination has been used. Typically, different illumination sources are used when capturing each of the multiple images to produce different images of the same scene under the different lighting conditions. The different images are used to determine depth and shape parameters in the scene.

  • Different illumination intensities, e.g. ambient and flash
  • Different directions of illumination

Illustrative example:

media128.png
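
A minimal sketch of classical photometric stereo under a Lambertian assumption: with at least three known illumination directions, the surface normal at each pixel follows from a least-squares solve (NumPy; the light directions and images are placeholder data):

    import numpy as np

    # Rows of L are the (unit) illumination directions used for the individual images
    L = np.array([[0.0, 0.0, 1.0],
                  [0.5, 0.0, 0.866],
                  [0.0, 0.5, 0.866]])

    h, w = 16, 16
    images = np.random.rand(3, h, w)               # one intensity image per light source

    # Lambertian model: I_k = L_k . (albedo * n); solve for g = albedo * n at every pixel
    I = images.reshape(3, -1)                      # (num_lights, num_pixels)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, num_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-8)).reshape(3, h, w)   # unit surface normals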

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Photometric stereo

a technique for estimating the normal vectors at different points on an object's surface by observing the object under different lighting conditions.

from stereo images
Definition statement

This place covers:

Shape from stereo images or sequences of stereo images

Illustrative example:

media129.png

media130.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Stereoscopic or multiview image generation wherein the generated image signals comprise depth maps or disparity maps

H04N 13/271

Informative references

Attention is drawn to the following places, which may be of interest for search:

Depth or shape recovery from multiple images using trilinear computations / the trifocal tensor

G06T 7/55 and G06T 2207/20088

Depth or shape recovery from multiple images using the quadrifocal tensor

G06T 7/55

{from three or more stereo images}
Definition statement

This place covers:

Multi-baseline stereo (special case only where:

  • each view is always treated together with the same reference view, and
  • the lengths of the respective baselines differ from each other)

Illustrative example:

media177.png

Analysis of geometric attributes
Definition statement

This place covers:

  • Analysis of image subjects to determine geometric attributes thereof, e.g. area, centre of mass, perimeter, diameter or volume.
  • Ellipse detection
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Extraction of image features for pattern recognition by deriving geometrical properties of the whole image

G06V 10/42

Informative references

Attention is drawn to the following places, which may be of interest for search:

Measuring arrangements characterised by the use of optical means

G01B 11/00

of area, perimeter, diameter or volume
Definition statement

This place covers:

Illustrative example:

media178.png

of convexity or concavity
Definition statement

This place covers:

Convexity, concavity, curvature, circularity, sphericity, roundness

Illustrative examples:

Figure 1

media123.png

Figure 2

media180.png

of image moments or centre of gravity
Definition statement

This place covers:

Following centres of gravity of sections along an elongated or tubular structure

Illustrative example:

media181.png
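
A short sketch of the zeroth- and first-order image moments and the resulting centre of gravity of a binary region (NumPy; the region is a placeholder):

    import numpy as np

    def centre_of_gravity(mask):
        """Centre of gravity (centroid) from the raw image moments m00, m10 and m01."""
        y, x = np.indices(mask.shape)
        m00 = mask.sum()
        m10 = (x * mask).sum()
        m01 = (y * mask).sum()
        return m01 / m00, m10 / m00               # (row, column) of the centroid

    mask = np.zeros((20, 20), dtype=int)
    mask[5:15, 8:12] = 1                          # a simple elongated region
    cy, cx = centre_of_gravity(mask)              # ≈ (9.5, 9.5)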

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Computation of moments, for pattern recognition

G06V 10/435

of symmetry
Definition statement

This place covers:

  • Determination of lines of symmetry, midlines
  • Measurement of symmetry and asymmetry

Illustrative example:

media182.png

Determining position or orientation of objects or cameras (camera calibration G06T 7/80)
Definition statement

This place covers:

  • Image processing algorithms for determining the position or orientation of an image subject, or of the camera having acquired the image
  • Position or orientation of the camera
  • Estimation of position, pose, posture, attitude in 2D and 3D
  • Gaze direction, head pose
  • Bin picking
References
Limiting references

This place does not cover:

Camera calibration

G06T 7/80

Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Orientation detection before recognition

G06V 10/242

Acquiring or recognising human faces, facial parts, facial sketches, facial expressions, eyes

G06V 40/16, G06V 40/18

Informative references

Attention is drawn to the following places, which may be of interest for search:

Image feed-back for automatic industrial control

G06T 1/0014

Analysis of motion

G06T 7/20

Measuring position in terms of linear or angular dimensions

G01B

Locating or presence-detecting by the use of the reflection or reradiation of radio or other waves

G01S

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Image or video pattern matching

G06V 10/74

Mask, wafer positioning, alignment

H01L 21/681

Studio circuitry, e.g. for position determination of a camera in a television studio

H04N 5/222

Aligning or positioning of tools relative to the circuit board for manufacturing printed circuits

H05K 3/0008

Special rules of classification

For camera pose, Indexing Code G06T 2207/30244 should be added. For workpiece; machine component, Indexing Code G06T 2207/30164 should be added.

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Repérage" (in French documents), "location", and "locating"
using feature-based methods
Definition statement

This place covers:

  • Feature points, e.g. determined by image operators; also point descriptors, feature vectors; significant segments, blobs
  • Feature, landmark, marker, fiducial, edge, corner, etc.

Illustrative example:

media183.png

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Feature

significant image region or pixel with certain characteristics.

{involving reference images or patches}
Definition statement

This place covers:

Involving correlation with "true to reality" reference images, templates of various poses; for "directly" determining pose; correlation with "true to reality" templates of landmarks, markers, fiducials; for finding features in the image.

Illustrative examples:

Figure 1

media184.png

Figure 2

media185.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Pattern matching criteria, e.g. proximity measures

G06F 18/22

Image or video pattern matching

G06V 10/74

{involving models}
Definition statement

This place covers:

  • Involving matching to a 2D or 3D model, e.g. geometric models of all kinds, polygon models, active appearance and shape models, also abstract models of landmarks, markers, fiducials with spatial extent, as opposed to reference images or patches
  • Matching of a graphical, e.g. polygon model, may involve intermediate rendering of the model
  • Model matching used for 1) finding features in each image, or for 2) "directly" determining pose of structure of interest

Illustrative example:

media186.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation involving deformable models

G06T 7/149

Analysis of motion involving models

G06T 7/251

Matching of contours

G06V 10/752

Syntactic or structural pattern recognition, e.g. symbolic string recognition

G06V 30/1983

using statistical methods
Definition statement

This place covers:

  • Involving probabilistic feature points, statistical models, statistics of positions
  • Features, reference images, patches or method itself can be statistical
  • RANSAC

Illustrative example:

media187.png

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation or edge detection involving probabilistic approaches

G06T 7/143

Analysis of motion involving a stochastic approach

G06T 7/277

Image matching by comparing statistics of regions for pattern recognition

G06V 10/758

Special rules of classification

Whenever possible, documents classified herein should also be classified in one of the other subgroups of G06T 7/70.

Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definition statement

This place covers:

The use of methods/algorithms to analyse camera images for the determination of intrinsic parameters defining the camera's properties, or for the determination of extrinsic parameters defining the camera's position and orientation. Camera calibration enables pixel positions in a captured 2D image to be mapped to real-world 3D coordinates of the subject represented in the image.

Illustrative example:

media131.png
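
As a sketch of the mapping that calibration recovers, the pinhole model below projects a 3D world point through the extrinsic parameters (R, t) and the intrinsic matrix K (NumPy; all parameter values are placeholders):

    import numpy as np

    # Intrinsic parameters: focal lengths and principal point, in pixel units
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Extrinsic parameters: rotation R and translation t (world -> camera coordinates)
    R = np.eye(3)
    t = np.array([0.0, 0.0, 5.0])

    def project(point_world):
        """Project a 3D world point to 2D pixel coordinates with the pinhole model."""
        point_cam = R @ point_world + t           # world -> camera coordinates
        uvw = K @ point_cam                       # homogeneous image coordinates
        return uvw[:2] / uvw[2]                   # perspective division -> (u, v)

    pixel = project(np.array([0.1, -0.2, 1.0]))   # maps a world point to pixel coordinates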

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric correction, e.g. of lens distortion

G06T 5/80

Determining position or orientation of objects, e.g. of the camera, without calibration context

G06T 7/70, G06T 2207/30244

Calibration patterns

G01B 21/042, G01C 15/02

Systems for automatic generation of focusing signals

G02B 7/28

Focusing aids for cameras; Autofocus systems for cameras

G03B 13/00

Colour balance, e.g. colour cast correction

H04N 1/6077

Calibration of stereoscopic cameras

H04N 13/246

Picture signal generators using solid state devices, e.g. correction of chromatic aberrations

H04N 23/10

Suppressing or minimising disturbance in picture signal generation

H04N 23/81

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Intrinsic parameters

The geometric and optical characteristics of a camera, including effective focal length, a scale factor and the image centre or "principal point".

Extrinsic parameters

The three-dimensional position and orientation of the camera in real-world coordinates.

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Camera calibration", "Geometric camera calibration", and "Camera re-sectioning".
{Stereo camera calibration}
Definition statement

This place covers:

Camera calibration for stereoscopic cameras, e.g. for determining the transformation between left camera coordinate system and right camera coordinate system

Illustrative example:

media188.png

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Calibration aspects relating to the control of a stereoscopic camera

H04N 13/246

Determination of colour characteristics
Definition statement

This place covers:

  • Determining colour characteristics by image analysis
  • Redeye detection
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Colour image segmentation

G06T 7/10

Acquiring or recognising eyes, e.g. iris verification

G06V 40/18

Retouching, i.e. modification of isolated colours only or in isolated picture areas only

H04N 1/62

Informative references

Attention is drawn to the following places, which may be of interest for search:

Correcting redeye defects by retouching or inpainting

G06T 5/77

Special rules of classification

For redeye defect, Indexing Code G06T 2207/30216 should be added.

{Determining parameters from multiple pictures (depth or shape recovery from multiple images G06T 7/55; stereo camera calibration G06T 7/85)}
Definition statement

This place covers:

  • Disparity, correspondence, stereopsis, if not provided for elsewhere
  • Disparity calculation for the production of 3D images from 2D images without intermediate modelling
References
Limiting references

This place does not cover:

Depth or shape recovery from multiple images

G06T 7/55

Stereo camera calibration

G06T 7/85

Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Industrial image inspection using an image reference approach

G06T 7/001

Biomedical image inspection using an image reference approach

G06T 7/0014

Segmentation involving the use of two or more images

G06T 7/174

Computing motion using a sequence of stereo image pairs

G06T 7/285

Determination of transform parameters for the alignment of images, i.e. image registration

G06T 7/30

Informative references

Attention is drawn to the following places, which may be of interest for search:

Image-based rendering

G06T 15/205

3D from 2D images with intermediate modelling

G06T 17/20

Special rules of classification

For Disparity calculation for image-based rendering, Indexing Code G06T 2207/20228 should be added.

Image coding (bandwidth or redundancy reduction for static pictures H04N 1/41; coding or decoding of static colour picture signals H04N 1/64; methods or arrangements for coding, decoding, compressing or decompressing digital video signals H04N 19/00)
Definition statement

This place covers:

Coding/compression and decoding/decompression of computer graphics (CG) data, and computer graphics compression methods applied to natural images/video.

Apparatus/devices for coding/compressing and/or decoding/decompressing computer graphics data.

The computer graphics data mentioned includes:

  • object geometry models
  • scene models
  • 2D/3D vector graphics
  • 3D/4D volumetric models
  • CAD models
  • contour shape data
  • elevation data
  • CG related metadata/parameters including depth, colour, texture, motion vectors, scene graph, position, connectivity information and similar.
Relationships with other classification places

This group covers compression/coding/decompression/decoding of CG-related data and CG-related methods applied to natural images or video. Other compression techniques specific to natural images/video that do not use CG-related methods are covered by H04N 19/00.

Compression in general is covered by H03M 1/00.

References
Limiting references

This place does not cover:

Bandwidth or redundancy reduction for static pictures

H04N 1/41

Coding or decoding of static colour picture signals

H04N 1/64

Methods or arrangements for coding, decoding, compressing or decompressing digital video signals

H04N 19/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Animation

G06T 13/00

Model based coding

G06T 15/00, G06T 17/00

Model based coding using a 3D model

G06T 15/00, G06T 17/00

Rendering of computer graphics data

G06T 15/00

Modeling of computer graphics data

G06T 17/00

Re-meshing for manipulation, editing purpose

G06T 17/205

Manipulation of 3D objects

G06T 19/00

Pattern recognition

G06F 18/00

Computer aided design

G06F 30/00

Image or video recognition or understanding

G06V

Pattern recognition by contour coding

G06V 10/46

Coding or decoding, in general

H03M

Compression in general

H03M 1/00

Transmission of TV signals

H04N 7/24

Selective content distribution

H04N 21/00

Special rules of classification

In general, consult the person responsible for the scheme before using any sub-groups. This is a provisional document which will be replaced in January 2012, after completion of the reorganization in G06T 9/00.

  • For classification, the main group G06T 9/00 is always assigned until the reorganization is completed.
  • The Indexing Code series of symbols is reserved for documents classified in G06T 9/00 and subgroups. They should be allocated to documents in G06T 9/00 and subgroups whenever relevant.
  • The sub-groups G06T 9/004, G06T 9/005, G06T 9/007 and G06T 9/008 are no longer used; their content, which is not related to computer graphics data compression/coding, will be transferred to the corresponding classes given in the group definition statements below.
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

4D volumetric models

Sequences of volumetric images over time

MPEG

Moving Picture Experts Group

SNHC

Synthetic/Natural Hybrid Coding

BIFS

Binary Format for Scene

VRML

Virtual Reality Modeling Language

SVG

Scalable Vector Graphics

NN

Neural Networks

TV

Television

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

CG

Computer graphics

3D

Three dimensional

4D

Four dimensional

CAD

Computer aided design

In patent documents, the following words/expressions are often used as synonyms:

  • "Compression" and "Coding"
  • "Decompression" and "Decoding"
  • "Scene graph" and "Scene model"
  • "Scene description graph" and "Scene graph"
  • "Metadata" and "Parameter"
  • "Contour coding" and "Shape coding"
  • "Elevation data" and "Height data"
  • "Object geometry models" and "Object models"
  • "Natural image" and "Raster/Bitmap image"
  • "Vector graphics" and "Scalable Vector Graphics"
{Model-based coding, e.g. wire frame}
Definition statement

This place covers:

Means or steps for the compression/coding of wire frame models, e.g. polygon meshes.

Documents concerning mesh compression/coding by

  • face merging
  • incremental decimation
  • simplification by remeshing for data reduction purposes

are classified here.
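
As a much-simplified, hedged illustration of geometry compression of the kind covered here, the following sketch quantises vertex coordinates onto an integer grid and delta-encodes them; it is not the scheme of any particular document, and the bit depth and vertex data are arbitrary assumptions.

```python
# Simplified mesh-geometry coding sketch: quantise vertex coordinates and store the
# first vertex plus per-vertex deltas; the small integer residuals are what an
# entropy coder would subsequently compress.
import numpy as np

def quantise_and_delta(vertices, bits=10):
    v = np.asarray(vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    scale = (2**bits - 1) / np.where(hi > lo, hi - lo, 1.0)
    q = np.round((v - lo) * scale).astype(int)   # quantised integer coordinates
    deltas = np.diff(q, axis=0, prepend=0)       # first vertex, then per-vertex deltas
    return deltas, lo, scale

def reconstruct(deltas, lo, scale):
    q = np.cumsum(deltas, axis=0)
    return q / scale + lo                        # dequantised (lossy) vertices

verts = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.2], [0.12, 0.05, 0.21]]
d, lo, s = quantise_and_delta(verts)
print(d)                      # mostly small integers, cheap to entropy-code
print(reconstruct(d, lo, s))  # approximately the original vertices
```
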
References
Limiting references

This place does not cover:

Animation

G06T 13/00

Rendering of computer graphics data

G06T 15/00

Re-meshing for manipulation, editing

G06T 17/205

Manipulation of 3D objects

G06T 19/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Model based coding

G06T 9/001, H04N 19/20

Model based coding using a 3D model

G06T 9/001, G06T 15/00, G06T 17/00, H04N 19/20

Special rules of classification

Documents classified in G06T 9/001, H04N 19/20 and G06T 9/001, G06T 15/00, G06T 17/00, H04N 19/20 are transferred to G06T 9/001.

Documents concerning re-meshing for manipulation, editing and similar, i.e. all means not having data reduction purpose are classified in G06T 17/205.

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "wireframe" and "polygon mesh"
{using neural networks}
Definition statement

This place covers:

Means or steps for the compression/coding of computer graphics data and natural image/video data using neural networks (NN).

Special rules of classification

The compression/coding data concerned in this group includes:

  • computer graphics data
  • natural image/video data.
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

NN

Neural Networks

{Predictors, e.g. intraframe, interframe coding}
Definition statement

This place covers:

This group is no longer used; its content, which is not related to computer graphics data compression/coding, is transferred to H04N 19/105, H04N 19/103 or H04N 19/107.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Coding or prediction mode selection

H04N 19/103

Predictor

H04N 19/105

Intracode mode selection

H04N 19/107

{Statistical coding, e.g. Huffman, run length coding}
Definition statement

This place covers:

This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/13 or H04N 19/91.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Variable length coding (VLC) or entropy coding

H04N 19/13, H04N 19/91

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

VLC

Variable length coding

{Transform coding, e.g. discrete cosine transform}
Definition statement

This place covers:

This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/60.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Transform coding

H04N 19/60

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

DCT

Discrete cosine transform

{Vector quantisation}
Definition statement

This place covers:

This group is no longer used; its content, which is not related to computer graphics data compression/coding, will be transferred to H04N 19/94.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Vector coding

H04N 19/94

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "vector coding" and "vector quantization"
Contour coding, e.g. using detection of edges
Definition statement

This place covers:

Means or steps for the compression/coding of computer graphics data using a contour/shape coding method, e.g. by detection of edges.
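
One textbook form of contour/shape coding is the Freeman chain code; the following minimal sketch (4-connected, with arbitrary example data) encodes a closed contour as a start point plus one direction symbol per step, which is only an illustration of the principle.

```python
# Freeman chain-code sketch: a contour given as a list of pixel positions is encoded
# as a start point plus one direction symbol per step (4-connected here), which is
# far more compact than storing every coordinate.
DIRS = {(1, 0): 0, (0, 1): 1, (-1, 0): 2, (0, -1): 3}   # right, up, left, down
STEPS = {v: k for k, v in DIRS.items()}

def chain_encode(contour):
    start = contour[0]
    code = [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(contour, contour[1:])]
    return start, code

def chain_decode(start, code):
    pts = [start]
    for c in code:
        dx, dy = STEPS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

# A unit square traversed counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
start, code = chain_encode(square)
print(code)                       # [0, 1, 2, 3]
print(chain_decode(start, code))  # reproduces the contour
```
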

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Shape coding for video objects

G06T 9/20, H04N 19/20

Special rules of classification

Documents classified in G06T 9/20, H04N 19/20 are transferred to G06T 9/20.

The compression/coding data concerned in this sub-group includes:

  • computer graphics data, e.g. vector graphics data
  • natural image/video data.
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

SVG

Scalable Vector Graphics

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "contour coding" and "shape coding"
  • "vector graphics" and "scalable vector graphics"
Tree coding, e.g. quadtree, octree
Definition statement

This place covers:

Means or steps for the compression/coding of computer graphics data by using a tree hierarchy, e.g. quadtree, octree, and similar.

The documents concerning compression/coding of:

  • computer graphics object models, scene models and related metadata, e.g. depth data,

are classified here.
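
The quadtree principle can be illustrated by the following minimal sketch for a square binary image whose side length is a power of two (an illustrative assumption): uniform blocks become leaves, non-uniform blocks are split into four quadrants. It is an illustration of the hierarchy only, not a complete coder.

```python
# Quadtree coding sketch: a node is a leaf when its region is uniform and is otherwise
# split into four quadrants; large uniform areas therefore cost a single symbol.
import numpy as np

def quadtree(img):
    vals = np.unique(img)
    if vals.size == 1:                       # uniform block -> leaf
        return int(vals[0])
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    return [quadtree(img[:h2, :w2]),         # top-left quadrant
            quadtree(img[:h2, w2:]),         # top-right quadrant
            quadtree(img[h2:, :w2]),         # bottom-left quadrant
            quadtree(img[h2:, w2:])]         # bottom-right quadrant

img = np.zeros((4, 4), dtype=int)
img[2:, 2:] = 1                              # one quadrant filled
print(quadtree(img))                         # [0, 0, 0, 1]
```
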

References
Limiting references

This place does not cover:

Modelling by tree structure

G06T 17/005

Natural image/video tree coding

H04N 19/96

Informative references

Attention is drawn to the following places, which may be of interest for search:

Tree description

G06T 17/005

Tree coding

H04N 19/96

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Bintree or binary tree

tree structure in which each node has at most two child nodes

Quadtree or quad tree

tree structure in which each node has at most four child nodes

K-tree

tree structure in which each node has at most K child nodes

Hextree

tree structure in which each node has at most six child nodes

Volume octree

tree structure in which each voxel is subdivided into at most 8 subvoxels

Surface octree

Volume octree with incorporated surface information

Multi tree

directed acyclic graph in which the set of nodes reachable from any node forms a tree

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "scene graph", "scene description graph" and "scene model"
2D [Two Dimensional] image generation
Definition statement

This place covers:

  • Documents dealing with generating a 2D image or texture in general. To a large extent, but not exclusively, G06T 11/00 covers image generation "from a description to a bit-mapped image" in general.
  • Software packages, systems
  • Caricaturing, Identikit
  • Fusion of images with different objects, e.g. fusion of real and virtual images, labelling of 2D images
  • Clipping of 2D images
  • 2D and 3D reconstruction from projections, e.g. for computed tomography.
  • Device-independent techniques, i.e. not documents which are specially adapted to I/O devices such as printers, scanners or displays.
  • Note: General idea for G06T 11/00 (figure).

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Image processing specially adapted for radiation diagnosis

A61B 6/52

Map generation for navigation systems

G01C 21/00

Producing output data for printers

G06K 15/02

Controller for display circuits, e.g. for LCDs, Plasma, OLEDs

G09G 5/00

Image generation for scanner, fax-machines, copy machines

H04N 1/00

Studio circuits for video generation, mixing, special effects, blue/green screens

H04N 5/262

Informative references

Attention is drawn to the following places, which may be of interest for search:

Generating of panoramic or mosaic images

G06T 3/4038

Generating high dynamic range images (HDR)

G06T 5/92

Non-photorealistic rendering in 3D

G06T 15/02

Input arrangements or combined input and output interaction between user and computer (user interfaces)

G06F 3/01

Video editing

G11B 27/00

{Texturing; Colouring; Generation of texture or colour (inpainting G06T 5/77)}
Definition statement

This place covers:

Texture generation

  • Textures; endless, periodic pattern
  • Texture synthesis, procedural textures
  • Neural style transfers
  • Brush strokes
  • Fractals; Julia sets; Koch curves

Colour generation, changing of selected colours

  • Colour palettes, schemes; colour LUT; CLUT
  • False colours
  • Simulation of watercolour, oil paint, airbrush

Illustrative examples:

media199.png

media200.png
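
As a small illustration of texture and colour generation of the kind listed above, the following sketch produces a periodic checkerboard pattern and maps its index values through a two-entry colour look-up table (CLUT); the colours, sizes and cell spacing are arbitrary assumptions.

```python
# Procedural texture sketch: generate an endless, periodic checkerboard pattern and
# map its two index values to RGB colours through a small colour look-up table (CLUT).
import numpy as np

def checkerboard(height, width, cell=8):
    yy, xx = np.mgrid[0:height, 0:width]
    return ((xx // cell + yy // cell) % 2).astype(np.uint8)   # 0/1 index image

clut = np.array([[200, 200, 200],       # index 0 -> light grey
                 [ 30,  60, 120]],      # index 1 -> blue
                dtype=np.uint8)

indices = checkerboard(64, 64)
texture = clut[indices]                 # palette lookup gives an RGB texture
print(texture.shape)                    # (64, 64, 3)
```
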

References
Limiting references

This place does not cover:

Inpainting

G06T 5/77

Informative references

Attention is drawn to the following places, which may be of interest for search:

Texture mapping

G06T 15/04

Colour palettes, CLUTs for displays

G09G 5/00

Colour space manipulation

H04N 1/60

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

LUT

look-up table

CLUT

colour look-up table

{Reconstruction from projections, e.g. tomography}
Definition statement

This place covers:

  • Reconstruction from tomographic projections, i.e. measurements of an unknown object function (e.g. density of matter, activity distribution) using penetrating radiation or electromagnetic waves, described by radiation transport equations, e.g. integration along lines (= Radon transform), e.g. for refraction tomography, CT, SPECT, PET, tomosynthesis or optical tomography.
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Impedance measuring for diagnostic purposes

A61B 5/053

Apparatus for radiation diagnosis

A61B 6/00

Radiation diagnosis devices using data or image processing specially adapted for radiation diagnosis

A61B 6/52

Diagnostic device using ultrasound

A61B 8/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Image enhancement in general

G06T 5/00

Image analysis, incl. biomedical image inspection, image registration, segmentation, analysis of motion, analysis of geometric attributes

G06T 7/00

Depth or Shape recovery, from multiple images

G06T 7/55

Analysis of materials using tomography, e.g. CT

G01N 23/046

NMR

G01R 33/4824

Measuring and detection of X-radiation

G01T 1/00

ICT specially adapted for processing medical images, e.g. editing

G16H 30/40

Special rules of classification

In this group, it is desirable to add the indexing codes of groups G06T 2211/404 - G06T 2211/464.

The following list of symbols from the series G06T 2211/404 - G06T 2211/464 should be allocated to documents in G06T 11/003 whenever relevant:

  • G06T 2211/404 Angiography - Angiographic reconstruction includes all the reconstruction methods concerning vessels, tree structures etc.
  • G06T 2211/408 Dual energy - Reconstruction from dual or multi energy acquisition, polychromatic X-rays, photon-counting CT
  • G06T 2211/412 Dynamic - Dynamic reconstruction, i.e. moving objects are involved or motion compensation is required (e.g. heart or lung movement)
  • G06T 2211/416 Exact reconstruction - Exact or quasi-exact reconstruction algorithms (in contrast to approximate algorithms)
  • G06T 2211/421 Filtered Back Projection based methods (the projection data can be handled sequentially, view-by-view)
  • G06T 2211/424 Iterative - Iterative methods including all the methods using iterations independent of the reconstruction method per-se (e.g. maximum likelihood (ML) or maximum a posteriori (MAP) estimation, regularisation, compressed sensing)
  • G06T 2211/428 Real-time - Real time reconstruction, e.g. fluoroscopy, intra-operative CT
  • G06T 2211/432 Truncation - All or part of the data from the detectors are spatially truncated, or incomplete projection data is used.
  • G06T 2211/436 Limited angle - limited-angle or few view acquisition, tomosynthesis
  • G06T 2211/441 AI-based methods, e.g. deep learning or convolutional artificial neural networks
  • G06T 2211/444 Low dose acquisition, reduction of radiation dose
  • G06T 2211/448 Involving metal artefacts, streaking artefacts, beam hardening, photon starvation
  • G06T 2211/452 Involving suppression of scattered radiation or scatter correction
  • G06T 2211/456 Optical coherence tomography [OCT]
  • G06T 2211/461 Phase contrast imaging or dark field imaging
  • G06T 2211/464 Dual or multimodal imaging, i.e. combining two or more imaging modalities, e.g. PET-CT, PET-MRI
Synonyms and Keywords

In patent documents, the following abbreviations are often used:

CT

Computed Tomography

NMR

Nuclear Magnetic Resonance

MRI

Magnetic Resonance Imaging

SPECT

Single-Photon-Emission Computed Tomography

PET

Positron Emission Tomography

{Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating}
Definition statement

This place covers:

Specific pre-processing for tomographic reconstruction

  • Calibration
  • Source positioning
  • Synchronisation
  • Scouts
  • Rebinning
  • Scatter correction
  • Attenuation correction
  • Metal artefact reduction (MAR)

Example: Scatter and beam hardening correction in CT applications

media195.png

{Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods}
Definition statement

This place covers:

  • Inverse problem, transformation from projection-space into object-space
  • Fourier methods
  • Algebraic methods
  • Back-projection
  • Statistical Methods, e.g. maximum likelihood
  • Compressed sensing, sparsity
  • AI-based methods, e. g. neural networks

Example: Reconstruction method for cone-beam CT

media196.png

media197.png
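
As a heavily simplified, hedged illustration of the back-projection idea mentioned above for parallel-beam geometry, the following sketch smears each 1D projection back across the image; a practical filtered back-projection would additionally ramp-filter each projection, and the geometry handling here (nearest-detector lookup, arbitrary sizes) is only approximate.

```python
# Unfiltered back-projection sketch (parallel-beam): each 1D projection is "smeared"
# back across the image along its acquisition angle and the contributions are summed.
import numpy as np

def backproject(sinogram, angles_deg, size):
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - centre, ys - centre
    for proj, angle in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate of every image pixel for this view
        s = xs * np.cos(angle) + ys * np.sin(angle) + centre
        idx = np.clip(np.round(s).astype(int), 0, size - 1)
        recon += proj[idx]                      # smear the projection across the image
    return recon / len(angles_deg)

# Toy example: constant projections reconstruct a roughly constant image
sino = np.ones((180, 64))
print(backproject(sino, np.arange(180), 64)[32, 32])   # 1.0
```
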

{Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction}
Definition statement

This place covers:

  • Specific post-processing after tomographic reconstruction
  • Processing which relies essentially on unique properties of tomographic images, e.g. projection geometry or interactions of radiation with matter
  • Voxelisation
  • Artefact correction (e.g. scatter, metal, cone-beam)

Example: Method for post-reconstruction correction of images of a computed tomograph

media198.png

{Drawing of straight lines or curves}
Definition statement

This place covers:

  • Rendering, scan conversion of vectors, lines, ellipses, circles
  • Offset curves, contour curves
  • Wide, thick lines or strokes
  • Splines, B-splines, NURBS; Bézier, algebraic, parametric, polynomial, cubic curves
  • Approximation of curves or polygons
  • Antialiasing of lines and curves, e.g. using supersampling, subpixel or area weighting
  • Font rendering, e.g. scalable, outline, contour, edge fonts
  • Sketching; freehand curve drawing

media201.png

media202.png
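
One classic scan-conversion technique for straight lines is Bresenham's algorithm, sketched below purely for illustration; the endpoints in the example are arbitrary.

```python
# Bresenham line sketch: rasterise the straight line from (x0, y0) to (x1, y1) into
# integer pixel positions using only integer error accumulation (no floating point).
def bresenham(x0, y0, x1, y1):
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:          # step in x
            err += dy
            x0 += sx
        if e2 <= dx:          # step in y
            err += dx
            y0 += sy
    return pixels

print(bresenham(0, 0, 5, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```
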

References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Vehicle instruments

B60K 35/00

Printer fonts

G06K 15/02

Informative references

Attention is drawn to the following places, which may be of interest for search:

Vector coding

G06T 9/20

Filling a planar surface by adding surface attributes

G06T 11/40

Entering handwritten data

G06F 3/04883

Font handling; Temporal or kinetic typography

G06F 40/109

Feature extraction by contour coding

G06V 10/469

Display character generators

G09G 5/24

{Drawing of charts or graphs}
Definition statement

This place covers:

Illustrative examples:

media203.png

media204.jpg

  • Diagram, graph layout; directed graph; flow graph; flowchart
  • Venn diagram; nested tree-map
  • Pie, tile, column, bar, business charts
  • 2D and 3D Visualization of data; fluid flows; vector fields; scattered data
  • Sketched diagrams or graphs
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Navigational instruments, e.g. for aircrafts

G01C 21/00, G01C 23/00

ICT specially adapted for bioinformatics-related data visualisation, e.g. displaying of maps or networks

G16B 45/00

ICT specially adapted for medical reports, e.g. generation or transmission thereof

G16H 15/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Animation of fluid flows, 2D character animation

G06T 13/60, G06T 13/80

Input devices, GUIs

G06F 3/048

GUI programs, e.g. file browsers

G06F 9/451

Menu systems, graphical querying

G06F 16/54, G06F 16/532

Administration, e.g. office automation or reservations; resource or project management

G06Q 10/00

Finance, e.g. banking, investment or tax processing; Insurance, e.g. risk analysis or pensions

G06Q 40/00

Network visualisation or monitoring

H04L 41/06

Filling a planar surface by adding surface attributes, e.g. colour or texture
Definition statement

This place covers:

  • Polygon scan conversion; rasterisation
  • Scan-line algorithms, fragment processing
  • Antialiasing, supersampling, subpixel or coverage masks
  • Tile-based rendering
  • Filling of a 2D shape, e.g. polygon, circle, ellipse, region, area
  • Interior/exterior determination; edge lists or edge flags
  • Colour blends, gradient fills, seed filling, e.g. for vector graphics

Illustrative example:

media205.png

media206.jpg
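
The seed filling mentioned above can be illustrated by the following minimal 4-connected flood fill; practical fillers use scan-line spans or tiles for efficiency, and the example image and colours are arbitrary.

```python
# Seed (flood) fill sketch: starting from a seed pixel, replace the connected region
# of the seed's original colour with a new colour using an explicit stack.
def seed_fill(image, seed, new_colour):
    h, w = len(image), len(image[0])
    x0, y0 = seed
    old = image[y0][x0]
    if old == new_colour:
        return image
    stack = [(x0, y0)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < w and 0 <= y < h and image[y][x] == old:
            image[y][x] = new_colour
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return image

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
for row in seed_fill(img, (0, 0), 7):
    print(row)    # the 0-region connected to the seed becomes 7
```
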

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Drawing or scan conversion of lines and fonts

G06T 11/203

3D image rendering (architectures)

G06T 15/00 (G06T 15/005)

Control of the frame buffer(s)

G09G 5/39

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "rasterising", "scan conversion" and "rendering"
Editing figures and text; Combining figures or text
Definition statement

This place covers:

  • Editing of bitmaps or vector graphics
  • Page layout, page composition, e.g. photo-album, collages, business or greeting cards
  • Combining small images by editing in order to generate a new (big) one
  • Graphical simulations, e.g. for 2D cosmetics or hairstyles
  • Electronic or desktop publishing (DTP), Page Description Language (PDL), PostScript, TeX
References
Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

Face sketching with eye witnesses

A61B 5/117

PDL specifically for printers

G06K 15/02

ICT specially adapted for processing medical images, e.g. editing

G16H 30/40

Informative references

Attention is drawn to the following places, which may be of interest for search:

Mosaic or panoramic images

G06T 3/40

Image registration

G06T 7/30

Annotating 3D objects with text

G06T 19/00

Input devices, GUIs

G06F 3/048

Formatting, i.e. changing of presentation of documents

G06F 40/103

Form filling or merging of text

G06F 40/174

Document analysis

G06V 30/41

Composing, repositioning or geometrically modifying originals, from scanning

H04N 1/387

Synonyms and Keywords

In patent documents, the following abbreviations are often used:

DTP

Desktop Publishing

PDL

Page Description Language

Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard
Special rules of classification

This group is not used for classification. Its subject-matter is covered by G06F 3/00 and subgroups.

Animation
Definition statement

This place covers:

Generating and displaying a sequence of images of artwork or model positions in order to create the effect of movement in a scene.

Animation of data representing a 3D or 2D image model or object.

Time-related computation of 2D or 3D images, i.e. the generation of a sequence of 2D or 3D images, is classified in this group.

This group is also given as classification to indicate that animation aspects are present but the invention lies in another group than G06T 13/00.

Documents only dealing with related subject-matter, for example motion capture for animation or navigation in virtual worlds, and merely mentioning animation in passing are not classified in G06T 13/00, i.e. the generation of an animation has to be a substantive part of the document for it to be classified here.

References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transformations for image warping

G06T 3/18

Motion capture (for animation)

G06T 7/20

3D modelling for computer graphics

G06T 17/00

Manipulation of 3D models for computer graphics

G06T 19/00

Navigation in virtual worlds

G06T 19/003

Video games

A63F 13/00

Computer aided design using simulation

G06F 30/3308

Processing, recording or transmission of stereoscopic or multi-view image signals

H04N 13/10

Model based coding of video objects

H04N 19/20

Special rules of classification

Deforming meshes for animation purposes get both classifications: G06T 13/00 or one of its subgroups and G06T 17/20.

The series G06T 2213/00 of Indexing Codes is reserved for the use of documents classified in G06T 13/00 and subgroups. They should be allocated to documents in G06T 13/00 and subgroups whenever relevant:

G06T 2213/00

Head group of indexing scheme for animation. This symbol should not be allocated to any documents because the group only serves as an internal node in the group hierarchy.

G06T 2213/04

Animation description languages: computer languages for the description of an animation.

G06T 2213/08

Animation software package: also includes hardware packages for animation.

G06T 2213/12

Rule based animation: e.g. rules for behaviour, script, personality.

Furthermore, Indexing Codes from the series G06T 2200/00 and G06T 2210/00 should be allocated to documents whenever relevant. Specific symbols from these series that are especially relevant for the documents in a certain subgroup are mentioned under the "Specific rules for classification" of the respective subgroups.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Animation system

traditional animation systems are based on key-frames, which are a succession of individual states (the position, orientation, and current shape of objects) specified by an animator or user
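
The key-frame principle described above can be illustrated by the following minimal sketch, which linearly interpolates a 2D position between the two surrounding keyframes; production systems typically use spline interpolation and quaternions for orientations, and the keyframe data here is an arbitrary assumption.

```python
# Key-frame interpolation sketch: the state (here a 2D position) at time t is obtained
# by linearly blending the two keyframes surrounding t.
def interpolate(keyframes, t):
    keyframes = sorted(keyframes)                    # (time, (x, y)) pairs
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)                 # normalised time in the segment
            return tuple(a + u * (b - a) for a, b in zip(p0, p1))
    # clamp outside the keyframed range
    return keyframes[0][1] if t < keyframes[0][0] else keyframes[-1][1]

keys = [(0.0, (0.0, 0.0)), (1.0, (10.0, 0.0)), (2.0, (10.0, 5.0))]
print(interpolate(keys, 0.5))    # (5.0, 0.0)
print(interpolate(keys, 1.5))    # (10.0, 2.5)
```
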

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "simulation (of motion)" and "animation"
3D [Three Dimensional] animation
Definition statement

This place covers:

Subject matter wherein the animated image data presents a three-dimensional image model or object.

Means or steps for the generation of a sequence of 3D images.

Documents in this group concern the generation of an animation of 3D objects in general and articulated 3D objects not representing characters.

Simulations with 3D objects (e.g. bouncing balls) or 2D surfaces in 3D space (e.g. cloth) are classified here.

References
Limiting references

This place does not cover:

Nominally claimed subject-matter directed to animation with significant user interaction or manipulation

G06T 19/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Coding of wireframe meshes for animation

G06T 9/00

Simulating properties, behaviour or motion of objects in video games

A63F 2300/64

Special rules of classification

For documents concerning both 2D and 3D animation of objects the first place priority rule is applied, i.e. they are classified only in G06T 13/20 or its subgroups.

Documents where cloth moves according to wind effects are classified in both subgroups G06T 13/20 and G06T 13/60.

For specific aspects of documents in this group the following additional Indexing Codes from the series G06T 2210/00 should be allocated to documents in G06T 13/20 and subgroups whenever relevant:

For animation of cloth: G06T 2210/16

For collision of 3D objects: G06T 2210/21

For fluid flows: G06T 2210/24

For animation using particles: G06T 2210/56

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

CFD

Computational fluid dynamics

{driven by audio data}
Definition statement

This place covers:

Means or steps for the generation of an animation sequence based on audio data.

The input is audio data, e.g. music, speech data, i.e. no written text.

Changes e.g. in motion, colour, shape or position of objects in the animation are generated based on time events in the audio data, e.g. the beat in the music or the change of instrumentation.

References
Limiting references

This place does not cover:

Electrophonic musical instruments

G10H

Emotion analysis from speech for face animation or talking heads

G10L 17/26

Lip-synchronization or synthesis of lip shapes (visemes) from speech for face animation or talking heads

G10L 21/10

Informative references

Attention is drawn to the following places, which may be of interest for search:

Animation based on written text

G06F 40/00

Video editing or indexing or timing

G11B 27/00

Special rules of classification

Documents where the audio input animates a 2D object are classified in both subgroups G06T 13/205 and G06T 13/80.

of characters, e.g. humans, animals or virtual beings
Definition statement

This place covers:

Subject matter wherein the animated object exhibits lifelike motions or behaviours.

Means or steps for the generation of an animation sequence of articulated objects representing virtual characters or for the generation of an animation sequence of "body" parts.

The animated characters herein include, e.g. humans, animals or virtual beings.

Animation of a character normally consists of an articulated skeleton surrounded by an implicitly defined volume or a wireframe surface mesh.

Lifelike motions include walking, running, waving or talking. Lifelike behaviours include showing emotions or reactions to events.

Animation of e.g. faces, lips, eyes, gestures, hair or feathers on a character.

Documents concerning only the synthesizing aspect of character animations for Tele- or Videoconferencing (no image capturing, no data transmission) are classified here.

References
Limiting references

This place does not cover:

Interaction of avatars in virtual worlds

A63F 13/00

Interaction of avatars in virtual worlds for business

G06Q 30/00

Tele- or Video-conferencing

H04N 7/14

Informative references

Attention is drawn to the following places, which may be of interest for search:

Animation of articulated objects in general, i.e. not exclusively or not mainly applied to character animation

G06T 13/20

Garment try-on simulators

G06T 19/00, G06T 2210/16

Computing the motion of game characters with respect to other game characters, virtual objects or elements of a game scene

A63F 2300/6607

Head tracking input arrangements for interaction between user and computer

G06F 3/012

Eye tracking input arrangements for interaction between user and computer

G06F 3/013

Emotion analysis from speech for face animation or talking heads

G10L 17/26

Lip-synchronization or synthesis of lip shapes (visemes) from speech for face animation or talking heads

G10L 21/10

Special rules of classification
  • Documents where the characters are only 2D are classified in both subgroups G06T 13/40 and G06T 13/80.
  • Documents where the hair on a character is moved by wind effects are classified in both subgroups G06T 13/40 and G06T 13/60.
  • Documents where the animation data for the character results from motion capture of real characters are classified in both subgroups G06T 7/00 and G06T 13/40.
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Avatar

graphical representation of the user or the user's character

(inverse) kinematics

calculates the motions necessary to achieve a desired position of the character

Mocap

motion capture

Motion retargeting

transferring the motion from one character to another, different one

Skeleton

tree structure composed of several joints to facilitate modelling the motion of the character

Skinning

technique to deform the skin from the deformation of the skeleton

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Avatar" and "character"
of natural phenomena, e.g. rain, snow, water or plants
Definition statement

This place covers:

Subject-matter wherein the animated images are associated with natural phenomena.

Means or steps for the generation of a simulation of natural elements or phenomena.

Documents concerning:

  • the simulation of rain, water, foam, water waves, clouds, fog, snow, fireworks, explosions or
  • wind effects on grass, plants, flags or hair or
  • growing processes of plants or beings or
  • destruction processes

are classified here.

References
Limiting references

This place does not cover:

Physical forces (other than wind) acting on 3D objects, e.g. simulation of a flying bullet or bouncing of a ball

G06T 13/20

The simulation of behavioural effects of characters, e.g. the flee behaviour of sea anemones

G06T 13/40

Informative references

Attention is drawn to the following places, which may be of interest for search:

Simulation of fluid flows in general (3D flows)

G06T 13/20

Simulation of fluid flows in general (2D flows)

G06T 13/80

Computer aided design using simulation

G06F 30/3308

Special rules of classification

Documents where the hair on a character is moved by wind effects are classified in both subgroups G06T 13/40 and G06T 13/60.

Documents where cloth moves according to wind effects are classified in both subgroups G06T 13/20 and G06T 13/60.

For specific aspects of documents in this group the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:

For fluid flows: G06T 2210/24

For animation using particles, e.g. fireworks, dust: G06T 2210/56

For weathering effects like e.g. aging, corrosion: G06T 2210/64

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Weathering

aging process of material by exposure to weather, e.g. wind, water, certain temperatures

2D [Two Dimensional] animation, e.g. using sprites
Definition statement

This place covers:

  • Subject matter wherein the animated image data is a 2D image object.
  • Means or steps for time related computation of a sequence of 2D images, e.g. a small moveable 2D graphic pattern on a display, often used in video game animation.
  • Generation of 2D animated cartoons.
  • Animation of 2D text, 2D letters.
  • Change over in slide shows, leafing through digital photo albums.
  • General aspects of 2D morphing or keyframe interpolation.
  • All documents exclusively dealing with the animation of 2D images, i.e. no 3D animation.
  • Generation of 2D motion blur.
References
Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transformations for image warping

G06T 3/18

Video editing or indexing or timing

G11B 27/00

Special rules of classification
  • Documents where the animated 2D object is a character, i.e. 2D character animation, are classified in both subgroups G06T 13/40 and G06T 13/80.
  • Documents where the motion blur concerns only the background image are classified in both subgroups G06T 13/20 and G06T 13/80.
  • Documents where the audio input animates a 2D object are classified in both subgroups G06T 13/205 and G06T 13/80.
  • For documents concerning both 2D and 3D animation of objects with similar algorithms the first place priority rule is applied, i.e. they are classified only in G06T 13/20 or its subgroups, not in G06T 13/80.
  • Documents concerning morphing or warping are additionally classified with the Indexing Code G06T 2210/44.
Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Keyframe interpolation

generation of a smooth transition between a starting and an ending keyframe

Morphing

continuous transformation between images (shape and colour)

Sprite

2D image or animation that is integrated into a larger 2D scene

Warping

geometric transformation of the 2D object shape

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Keyframe interpolation" and "inbetweening"
  • "Morphing" and "warping"
3D [Three Dimensional] image rendering
Definition statement

This place covers:

Means or steps for generating a displayable monoscopic image from a 3D model or 3D data set.

The 3D model is a description of three-dimensional objects in a strictly defined language or data structure.

A 3D data set may include voxel data.

Included in this group are input data sets of 3D coordinates or higher.

This group covers the geometry subsystem of the graphics rendering pipeline, i.e. modeling transformation, lighting, viewing transformation, clipping, mapping to viewport.
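
As an illustration of the geometry-subsystem stages listed above, the following sketch transforms one vertex through a model-view-projection matrix, applies the perspective divide and maps the result to viewport coordinates; the matrix values, field of view and viewport size are arbitrary assumptions, not a reference implementation.

```python
# Geometry-pipeline sketch for one vertex: object space -> clip space via a combined
# model-view-projection matrix, then perspective divide and viewport mapping.
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    f = 1.0 / np.tan(np.deg2rad(fov_y_deg) / 2.0)
    return np.array([[f / aspect, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

model_view = np.eye(4)
model_view[2, 3] = -5.0                  # place the object 5 units in front of the camera
mvp = perspective(60.0, 4 / 3, 0.1, 100.0) @ model_view

v = np.array([1.0, 0.5, 0.0, 1.0])       # homogeneous object-space vertex
clip = mvp @ v
ndc = clip[:3] / clip[3]                 # perspective divide -> normalised device coords
width, height = 640, 480                 # viewport mapping to pixel coordinates
x = (ndc[0] + 1) * 0.5 * width
y = (1 - ndc[1]) * 0.5 * height
print(x, y, ndc[2])
```
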

References
Limiting references

This place does not cover:

Rasterization

G06T 11/40

Visualization of models without surface characteristics or attributes

G06T 17/00

Manipulation and visualization of 3D models for computer graphics

G06T 19/00

Image signal generator

H04N 13/20

Informative references

Attention is drawn to the following places, which may be of interest for search:

Video games

A63F 13/00

Special rules of classification

The boundaries between G06T 15/00 (in particular G06T 15/08 and G06T 15/10) on the one hand, and G06T 3/06 and subgroups on the other hand are not yet completely determined. Thus, double classification should be considered.

Architectural elements are in general classified in G06T 15/005. However, if the architectural element is only related to a certain part or function within the graphics pipeline (e.g. texture mapping or ray tracing) the document is classified in the respective subgroup (e.g. G06T 15/04 for texture mapping) and additionally the Indexing Code G06T 2200/28 is assigned.

The series G06T 2215/00 of Indexing Codes is reserved for the use of documents classified in G06T 15/00 and subgroups. They should be allocated to documents in G06T 15/00 and subgroups whenever relevant:

G06T 2215/00

Indexing scheme for image rendering. This symbol should not be allocated to any documents because the group only serves as an internal node in the group hierarchy.

G06T 2215/06

curved planar reformation of 3D line structures: CPR of tubular structures (e.g. bronchia, arteries, colon, vertebrae), deployment of line structures in 3D to a 2D plane

G06T 2215/08

gnomonic or central projection: projection from a center of an object, e.g. a ball, to the surrounding surface, related to VTV (virtual television)

G06T 2215/12

shadow map, environment map: generation and use of shadow maps, soft shadows, environment maps

G06T 2215/16

using real world measurements to influence rendering: e.g. shadow based on actual light, viewport based on viewer's pose, texturing with real-time output from camera

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

OpenGL

Open Graphics Library: standard specification defining an application programming interface (API) for writing applications that produce 2D and 3D computer-graphics

Direct3D

standard specification defining an API for writing graphic applications; is part of DirectX

Graphics pipeline

rendering pipeline

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "rasterization" and "rendering"
{General purpose rendering architectures}
Definition statement

This place covers:

Functional or operational structure of an image rendering computer system.

Documents in this group focus largely on the way in which the central processing unit (CPU) interacts with the other units (e.g. the GPU) and accesses memories.

Relevant information is the selection and interconnection of hardware components or functional units in 3D rendering systems.

Hardware and software shader units.

This subgroup is given as classification if the document covers elements of the whole pipeline architecture or if the architectural element covers multiple functions of the graphics pipeline.

References
Limiting references

This place does not cover:

Architectures for general purpose image data processing

G06T 1/20

Memory management for general purpose image data processing

G06T 1/60

Program control in graphics processors

G06F 9/44

Use of graphics processors for other purposes than rendering

G06F 9/44

Graphics controllers, e.g. control of visual indicators or display of a graphic pattern

G09G 5/363

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

GPU

graphics processing unit

Shader unit

instruction sets (in software or hardware) to calculate rendering effects on the graphics hardware

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "shader unit" and "hardware shader"
Non-photorealistic rendering
Definition statement

This place covers:

Means or steps for rendering a scene in a style intended to look like a painting or drawing.

Illustrative examples of non-photorealistic rendering may include, e.g. cartoons, sketches, paintings or drawings.

References
Limiting references

This place does not cover:

Generation of texture or colour, e.g. brush strokes

G06T 11/001

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Cartoon-style rendering", "Freehand-style rendering", "Handmade-style rendering", "Ink rendering", "Painterly rendering", "Pen rendering", "Pencil rendering", "Silhouette rendering", "Sketchy rendering", "Toon-Style rendering" and "non-photorealistic rendering"
Texture mapping
Definition statement

This place covers:

Means or steps for applying or mapping surface detail or colour pattern to a computer-generated graphic, geometry or 3D-model.

Texture mapping used for the generation of a surface image in final format or form is classified herein.

MIP maps, bump mapping, displacement mapping, environment mapping, shadow maps.
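
As a small illustration of texture mapping, the following sketch samples a texture with bilinear filtering at a continuous (u, v) coordinate; MIP mapping would additionally select between prefiltered texture levels. The texture values are arbitrary.

```python
# Texture-mapping sketch: sample a texture at continuous (u, v) in [0, 1] using
# bilinear filtering of the four nearest texels.
import numpy as np

def sample_bilinear(texture, u, v):
    h, w = texture.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

tex = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(sample_bilinear(tex, 0.5, 0.5))   # 0.5, the average of the four texels
```
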

References
Limiting references

This place does not cover:

Generation of texture

G06T 11/001

Special rules of classification

Documents dealing with shadow maps are classified in both subgroups G06T 15/04 and G06T 15/60.

Documents dealing with environment mapping are classified in both subgroups G06T 15/04 and G06T 15/506.

Documents concerning environment maps or shadow maps are additionally classified with the Indexing Code G06T 2215/12.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Texel

texture element or texture pixel

Ray-tracing
Definition statement

This place covers:

Means or steps for creating an image by tracing rays from a viewpoint through each pixel to a visible point on an object.
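
The core geometric operation of such ray tracing can be illustrated by the following minimal ray-sphere intersection sketch; the scene values are arbitrary, and a full ray tracer would recurse for reflections and refractions.

```python
# Ray-tracing sketch: intersect a primary ray (origin o, unit direction d) with a
# sphere of centre c and radius r by solving |o + t*d - c|^2 = r^2 and keeping the
# nearest positive root.
import numpy as np

def ray_sphere(o, d, c, r):
    oc = o - c
    b = 2.0 * np.dot(d, oc)
    cc = np.dot(oc, oc) - r * r
    disc = b * b - 4.0 * cc          # quadratic discriminant (a == 1 for unit d)
    if disc < 0:
        return None                  # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0   # nearest intersection
    return t if t > 0 else None

o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])        # looking down +z
c = np.array([0.0, 0.0, 5.0])
print(ray_sphere(o, d, c, 1.0))      # 4.0: hit point one radius in front of the centre
```
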

Special rules of classification

Ray casting for hidden part removal is classified in both subgroups G06T 15/06 and G06T 15/40.

Generation of a photon map via photon tracing is classified in both subgroups G06T 15/06 and G06T 15/506.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Ray casting

non-recursive variant of ray tracing

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "ray tracing" and "ray casting (especially in early patent documents)"
Volume rendering
Definition statement

This place covers:

Means or steps for displaying a two-dimensional representation of three-dimensional volume data sets.

Volume data sets are typically voxels or 3D data sets consisting of groups of 2D slice images acquired by e.g. CT, MRT.

Illustrative examples of volume rendering techniques are Direct Volume Rendering Techniques (e.g. splatting, shear warp), Maximum Intensity Projection (MIP), Minimum Intensity Projection, Curved Planar Reformation (CPR), Multiplanar Reformatting (MPR), Curved Multiplanar Reformatting (CMPR).

Technical details of the projection or mapping technique used for volume rendering.
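
Maximum Intensity Projection, one of the techniques listed above, can be illustrated by the following minimal sketch for an axis-aligned viewing direction; the volume data is an arbitrary example.

```python
# Volume-rendering sketch: Maximum Intensity Projection (MIP) keeps, for each ray,
# the largest voxel value encountered; with axis-aligned viewing this reduces to a
# maximum along one axis of the volume.
import numpy as np

volume = np.zeros((4, 8, 8))          # (depth, height, width) voxel data
volume[2, 3, 3] = 1.0                 # a single bright voxel
mip = volume.max(axis=0)              # project along the depth axis
print(mip[3, 3], mip.shape)           # 1.0 (8, 8)
```
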

References
Limiting references

This place does not cover:

Definition of the position of the projection plane, surface or curve for volume rendering

G06T 19/00, G06T 2219/008

Informative references

Attention is drawn to the following places, which may be of interest for search:

Volumetric displays for the representation of 3D data sets

H04N 13/388

Special rules of classification

Documents concerning curved planar reformation of tubular structures are additionally classified with the symbol G06T 2215/06.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

CMPR

Curved Multi-Planar Reformatting

CPR

Curved Planar Reformation

MIP

Maximum (or Minimum) Intensity Projection

MPR

Multi-Planar Reformatting

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "curved Planar Reformatting", "curved Multiplanar Reformatting", "curved Multiplanar Reformation", "deployment" and "Curved Planar Reformation"
Geometric effects
Definition statement

This place covers:

Means or steps for changing the visualization of a graphical object due to view transformations.

Generation of views, multiple views.

Visualization of a graphical object through projection, e.g. parallel projections, oblique projections, gnomonic projections

Mapping of the 3D graphical object on a subspace for visualization, e.g. on (a part of) a plane or on a surface in 3D space (e.g. a bent virtual screen)

References
Limiting references

This place does not cover:

Visualization of volume data sets

G06T 15/08

Perspective projections

G06T 15/20

Changes in the visualization related to lighting effects

G06T 15/50

Changes in the visualization due to geometric transformations of the object (rotation, translation etc.)

G06T 19/00

Stereoscopic imaging or 3D displays

H04N 13/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric transformations in the plane of the image, i.e. from 2D to 2D

G06T 3/00

Special rules of classification

The boundaries between G06T 15/10 on the one hand, and G06T 3/08 on the other hand are not yet completely determined. Thus, double classification should be considered.

Documents concerning gnomonic or central projections are additionally classified with the Indexing Code G06T 2215/08.

Perspective computation
Definition statement

This place covers:

Means or steps for presenting a 3D-object on a screen such that objects closer to the viewpoint appear larger than if farther from the viewpoint.

Perspective projections of graphical objects.

Subject matter related to details of viewpoint determination or computation with claimed or disclosed rendering aspects.

References
Limiting references

This place does not cover:

View determination or computation without rendering

G06T 19/00

Changing the viewpoint for navigation without details of view generation

G06T 19/003

Transformation of image signals corresponding to virtual viewpoints

H04N 13/111

Informative references

Attention is drawn to the following places, which may be of interest for search:

Changing parameters of virtual cameras in video games

A63F 2300/66

Navigational Instruments, e.g. visual route guidance with on-board computers using 3D or perspective road maps

G01C 21/3635

Interaction techniques, e.g. control of the viewpoint to navigate in a 3D environment

G06F 3/04815

TV systems, e.g. alteration of picture orientation, perspective, position etc.

H04N 5/2628

Stereoscopic images

H04N 13/00

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Multiple views

rendering a graphical object seen from different viewpoints

View generation

visual rendering of geometric properties of a graphical object seen from a certain viewpoint

Viewpoint alteration

change of a viewpoint (of a virtual camera)

Virtual camera

display of a view of a 3D virtual world

Virtual Studio

technological tools for simulating a physical television or movie studio; the image of the virtual camera is rendered in real time from the same perspective as the real camera in 3D space

{Image-based rendering}
Definition statement

This place covers:

Means or steps for rendering a 3D-object or scene using a set of two-dimensional images of it.

Generation of a new view of a graphics object exclusively from 2D images of the object without prior generation of a 3D model.

Rendering using billboards.

Pixel based rendering or point based rendering of 3D objects which are not volume data.

Depth image-based rendering.

References
Limiting references

This place does not cover:

From multiple images

G06T 7/55

Determining parameters from multiple pictures

G06T 7/97

Splatting of volume data

G06T 15/08

Rendering of a 3D model generated from 2D images of it

G06T 17/00

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

IBR

image-based rendering

Billboard

textured rectangles that are used as simplified versions of 3D models for rendering

Clipping
Definition statement

This place covers:

Means or steps for eliminating those portions of graphics primitives that extend beyond a predetermined region.

The predetermined region may include a viewing volume or any subset of the view volume of any shape.

The shape of the graphics primitives that partly extend beyond the predetermined region is modified.

References
Limiting references

This place does not cover:

Cropping of 2D images

G06T 11/60

Special rules of classification

Documents where a bounding box or shape is defined or used are additionally classified with the Indexing Code G06T 2210/12.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Bounding box or bounding shape

minimal box or convex polygon surrounding the graphic object

Viewport

rectangular area on the screen for displaying the rendered graphical object

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "viewing volume", "view volume" and "view frustum"
Hidden part removal
Definition statement

This place covers:

Means or steps for determining which surfaces or part of surfaces of a graphic object are visible from a certain viewpoint and optionally removing them.

Hidden surface or line removal.

Culling, e.g. frustum culling, backface culling, frontface culling, occlusion culling. Culling removes graphics objects or scene graph nodes that fall completely outside the view frustum. This is usually performed before clipping.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

VSD

visible surface determination

{using Z-buffer}
Definition statement

This place covers:

Means or steps for determining which surfaces or parts of surfaces of a graphic object are visible from a certain viewpoint and optionally removing them using Z-Buffer information.
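
The Z-buffer principle can be illustrated by the following minimal sketch: a fragment is written only if its depth is smaller than the depth already stored for that pixel. The framebuffer size, depths and colour values are arbitrary assumptions.

```python
# Z-buffer sketch: every candidate fragment carries a depth value; it is written only
# if it is closer to the viewer than the depth already stored for that pixel.
import numpy as np

width, height = 4, 4
colour = np.zeros((height, width))
zbuf = np.full((height, width), np.inf)     # initialise to "infinitely far"

def write_fragment(x, y, depth, value):
    if depth < zbuf[y, x]:                  # depth test
        zbuf[y, x] = depth
        colour[y, x] = value

write_fragment(1, 1, depth=5.0, value=0.3)  # far fragment
write_fragment(1, 1, depth=2.0, value=0.9)  # nearer fragment overwrites it
write_fragment(1, 1, depth=4.0, value=0.1)  # hidden fragment is rejected
print(colour[1, 1], zbuf[1, 1])             # 0.9 2.0
```
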

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Z-Buffer" and "Depth-Buffer"
Lighting effects
Definition statement

This place covers:

Means or steps for determining intensity or colour on a surface of an object based on interaction of light with the object, considering surface properties or its orientation.

{Blending, e.g. for anti-aliasing}
Definition statement

This place covers:

Means or steps for computing an image or pixel value from several (source) images or pixel values, taking into account their weighting factors.

Weighting factors are usually opacity or transparency associated values.

Compositing.

Vertex or geometry blending.
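
In the simplest case, the weighting described above corresponds to the classic "over" compositing operator, sketched below with arbitrary colour and opacity values purely for illustration.

```python
# Blending sketch: the "over" operator composites a source pixel with opacity
# src_alpha over a destination pixel; the weighting factor is the source opacity.
def over(src_rgb, src_alpha, dst_rgb):
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

foreground = (1.0, 0.0, 0.0)                # red source
background = (0.0, 0.0, 1.0)                # blue destination
print(over(foreground, 0.25, background))   # (0.25, 0.0, 0.75)
```
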

References
Limiting references

This place does not cover:

Video editing or indexing or timing

G11B 27/00

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Alpha channel or alpha transparency channel

a portion of each pixel's data that is reserved for transparency information

Alpha compositing

combining an image with a background to create the appearance of partial or full transparency

Matte

contains the coverage information, e.g. the shape of the object to be drawn

{Illumination models}
Definition statement

This place covers:

Means or steps for computing the amount of energy absorbed, reflected, diffracted or transmitted by an object (or element) to be 3D rendered.

Illumination models usually include composition, direction or geometry of the light source, surface orientation and/or surface properties of the object.

Local illumination models only take into account light arriving straight from the light source.

Global illumination models take into account light arriving after interaction with another object in the scene.

Direct light sources, indirect light sources, multiple light sources, physically based illumination models.
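
A minimal illustrative sketch of a local illumination model (a Phong-style reflection model for a single point light); all vectors are assumed to be unit length and the parameter names are illustrative:

  import numpy as np

  def local_intensity(n, l, v, ka, kd, ks, shininess, ambient, light):
      """n: surface normal, l: direction to the light, v: direction to the viewer."""
      n_dot_l = max(np.dot(n, l), 0.0)
      r = 2.0 * np.dot(n, l) * n - l                     # mirrored light direction
      specular = ks * max(np.dot(r, v), 0.0) ** shininess if n_dot_l > 0 else 0.0
      return ka * ambient + (kd * n_dot_l + specular) * light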

Special rules of classification

Generation of a photon map via photon tracing is classified in both subgroups G06T 15/06 and G06T 15/506.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

BRDF

bidirectional reflectance distribution function

Radiosity
Definition statement

This place covers:

Means or steps for rendering graphic objects by computing the balance of substantially all light energy coming toward and going away from every point on a surface.

In radiosity, the balance of light energy is usually independent of the viewpoint.
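
A minimal illustrative sketch of iteratively solving the radiosity system B_i = E_i + rho_i * sum_j F_ij * B_j, assuming the emission vector E, reflectance vector rho and form-factor matrix F are given:

  import numpy as np

  def solve_radiosity(E, rho, F, iterations=50):
      """E: emission per patch, rho: reflectance per patch, F: form-factor matrix."""
      B = E.copy()                               # start from the emitted energy
      for _ in range(iterations):
          B = E + rho * (F @ B)                  # gather light from all patches
      return B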

References
Limiting references

This place does not cover:

Subject matter directed to illumination models that only consider viewpoint dependent vectors

G06T 15/506

Shadow generation
Definition statement

This place covers:

Means or steps for determination and generation of a region of darkness on an object where light is at least partially blocked by another graphical object.

The blocking object herein might be a semitransparent object.

Shadow computation normally refers to computation of shadow caused by one object onto another object.

Self-shadowing of concave objects, where the shadow cast by one portion of the object falls onto another portion of the same object, is classified herein, e.g. an "L"-shaped object can cast a shadow from its vertical portion onto its horizontal portion.
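
A minimal illustrative sketch of a shadow-map test, assuming the surface point has already been projected into light space and the shadow map stores depths as seen from the light:

  def in_shadow(point_light_space, shadow_map, bias=1e-3):
      """point_light_space: (x, y, depth) already projected into light space."""
      x, y, depth = point_light_space
      return depth - bias > shadow_map[int(y), int(x)]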

Special rules of classification

Documents concerning the calculation of the position of the light source from the shadow are classified in both subgroups G06T 15/50 and G06T 15/60.

Documents concerning shadow maps are classified in both subgroups G06T 15/04 and G06T 15/60 and are additionally classified with the Indexing Code G06T 2215/12.

Shading
Definition statement

This place covers:

Means or steps for assigning colour or intensity alterations or gradations in a particular area of a graphical object's surface based on its relationship with light.

The relationship with light herein includes the light vector, which consists of angle and distance, and may also include ambient light.

Surfaces may include polygons or curved surfaces or patches.

Interpolation of colour or shade based on vertex data or other pixels on the surface is classified herein.

Shading caused by the object blocking light on the back side of the same object with respect to a light source is classified herein.

References
Limiting references

This place does not cover:

Shader units

G06T 15/005

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Scanline interpolation

linear interpolation of values along each surface edge and interpolation of values in the interior of each surface from the left edge to the right edge, i.e. along a scanline

Phong shading
Definition statement

This place covers:

Means or steps for interpolating surface normals from the vertices of a graphical object when rasterizing a surface, thereby calculating specular reflections on the graphical object.
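
A minimal illustrative sketch of the per-pixel normal interpolation underlying Phong shading, assuming barycentric weights computed during rasterisation:

  import numpy as np

  def interpolated_normal(bary, n0, n1, n2):
      """bary: barycentric weights of the pixel inside the triangle."""
      w0, w1, w2 = bary
      n = w0 * np.asarray(n0) + w1 * np.asarray(n1) + w2 * np.asarray(n2)
      return n / np.linalg.norm(n)               # re-normalise before lighting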

Gouraud shading
Definition statement

This place covers:

Means or steps for producing a smooth variation of surface intensity over a surface by bilinearly interpolating the colours or intensities from the vertices of a graphical object.
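
A minimal illustrative sketch of Gouraud shading, assuming the lighting model has already been evaluated once per vertex and barycentric weights are available per pixel:

  def gouraud_colour(bary, c0, c1, c2):
      """Interpolate per-vertex colours c0, c1, c2 with barycentric weights."""
      w0, w1, w2 = bary
      return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))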

Three dimensional [3D] modelling, e.g. data description of 3D objects
Definition statement

This place covers:

Means or steps for generating a description of a 3D model or scene.

The 3D model description is usually generated from point clouds, 2D images, mathematical definitions for the description of curves, surfaces or volumes or data from different sensors.

Marching Cubes, sampled distance fields.

Image data format conversions, e.g. converting polar coordinates to rectangular coordinates or IGES to combinatorial geometry descriptions.
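
A minimal illustrative sketch of one such format conversion, spherical (polar) to rectangular coordinates:

  import math

  def spherical_to_cartesian(r, theta, phi):
      """theta: polar angle from the z-axis, phi: azimuth in the x-y plane."""
      return (r * math.sin(theta) * math.cos(phi),
              r * math.sin(theta) * math.sin(phi),
              r * math.cos(theta))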

References
Limiting references

This place does not cover:

Depth or shape recovery

G06T 7/50

Manipulating 3D models or images for computer graphics

G06T 19/00

Route guidance using 3D or perspective road maps including 3D objects and buildings

G01C 21/3635

Generation of 3D objects with NC-machines

G05B 19/4099

CAM (Computer aided manufacturing)

G05D 3/00

CAD (Computer aided design) in general

G06F 30/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Methods for drafting or marking-out cutting-out patterns for cloth

A41H 3/007

Collision detection for path planning of manipulators

B25J 9/1666

Collision detection for programme-controlled systems

G05B 19/4061

Image signal generators

H04N 13/268

Special rules of classification

Documents concerning image data format conversion are additionally classified with the Indexing Code G06T 2210/32 - image data format.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

IGES

Initial Graphics Exchange Specification

{Tree description, e.g. octree, quadtree}
Definition statement

This place covers:

Means or steps for generating a hierarchical tree-based description of a 3D model or scene.
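
A minimal illustrative sketch of an octree node for such a hierarchical description; the payload and the subdivision criterion are illustrative assumptions:

  class OctreeNode:
      def __init__(self, centre, half_size):
          self.centre = centre            # (x, y, z) centre of this cube
          self.half_size = half_size      # half of the cube's edge length
          self.children = None            # None for a leaf, else 8 child nodes
          self.points = []                # illustrative payload stored at leaves

      def subdivide(self):
          cx, cy, cz = self.centre
          h = self.half_size / 2.0
          self.children = [
              OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
              for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
          ]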

Special rules of classification

Documents concerning scene graphs are additionally classified with the Indexing Code G06T 2210/61 - scene description.

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "Bintree or binary tree" and "tree structure in which each node has at most two child nodes"
  • "Quadtree or quad tree" and "tree structure in which each node has at most four child nodes"
  • "K-tree" and "tree structure in which each node has at most K child nodes"
  • "Hextree" and "tree structure in which each node has at most six child nodes"
  • "Volume octree" and "tree structure in which each voxel is subdivided into at most 8 subvoxels"
  • "Surface octree" and "Volume octree with incorporated surface information"
  • "Multi tree" and "directed acyclic graph in which the set of nodes reachable from any node forms a tree"
Geographic models
Definition statement

This place covers:

Means or steps for generating 3D models which relate to geographic data.

The geographic data is usually obtained from different sensors, e.g. LIDAR, stereo photogrammetry from aerial surveys, radar, infrared cameras, GPS, satellite photography and maps e.g. topographic maps, road maps, development plans.

Digital Elevation Models (DEM), contour maps, digital cartography.

Superimposing or overlaying of registered geographic data from different sensors.

Editing of maps, e.g. modelling of roofs or generation of 3D models for buildings displayed on a map.

Map revision, map updating.

Calculation of visibility fields for geographic areas.

Geographical fractal modelling.
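
A minimal illustrative sketch of a line-of-sight test on a digital elevation model, as used when computing visibility fields; the sampling scheme, observer height and DEM layout are illustrative assumptions:

  def visible(dem, observer, target, observer_height=1.7, samples=100):
      """dem: 2D grid of terrain heights; observer/target: (row, col) cells."""
      (r0, c0), (r1, c1) = observer, target
      h0 = dem[r0][c0] + observer_height
      h1 = dem[r1][c1]
      for i in range(1, samples):
          t = i / samples
          r, c = int(r0 + t * (r1 - r0)), int(c0 + t * (c1 - c0))
          if dem[r][c] > h0 + t * (h1 - h0):     # terrain blocks the sight line
              return False
      return True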

References
Limiting references

This place does not cover:

Determination of transform parameters for the alignment of images, i.e. image registration

G06T 7/30

Navigation in a road network, GPS for navigation

G01C 21/26

Navigational instruments, e.g. visual route guidance using 3D or perspective road maps (including 3D objects and buildings)

G01C 21/3635

Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transformations for image registration

G06T 3/14

Special rules of classification

This subgroup is an application oriented group. Therefore, whenever possible, documents classified herein should also be classified in a function oriented group.

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

GIS

Geographic Information Systems

AMS

Automated Mapping System

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

Chorography

description of a landscape

Choropleth map

thematic map

Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
Definition statement

This place covers:

Means or steps for generating 3D models using boundary or volumetric representations of solid primitive objects.

Incremental feature generation, feature modification or modelling, and feature-based design are classified here.

Solid modelling via sheet modelling or via sweeping or extrusion of contours, areas or volumes, e.g. the generation of sweep objects or generalized cylinders.

Modelling of solids using volumetric representations, an "alternating sum of volumes" process, volume or convex decomposition or boundary representations.

Generation of 3D objects from 2D line drawings.
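
A minimal illustrative sketch of constructive solid geometry by point-membership classification, combining implicit solid primitives with boolean operators (primitives and operator names are illustrative):

  def sphere(centre, radius):
      return lambda p: sum((a - b) ** 2 for a, b in zip(p, centre)) <= radius ** 2

  def box(lo, hi):
      return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))

  union        = lambda a, b: (lambda p: a(p) or b(p))
  intersection = lambda a, b: (lambda p: a(p) and b(p))
  difference   = lambda a, b: (lambda p: a(p) and not b(p))

  # e.g. a cube with a spherical hole drilled out of its centre
  solid = difference(box((0, 0, 0), (2, 2, 2)), sphere((1, 1, 1), 0.8))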

Special rules of classification

For specific aspects of documents in this group, the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:

For convex hull for 3D objects: G06T 2210/12

For collision detection or intersection of 3D objects: G06T 2210/21

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

B-rep or BREP

boundary representation

Alternating sum of volumes (ASV) process

a convex decomposition method for volumetric objects

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "sweep object" and "generalized cylinder"
Finite element generation, e.g. wire-frame surface description, {tesselation}
Definition statement

This place covers:

Means or steps for the generation or modification of polygonal surface descriptions of 3D models or parts thereof.

Meshes, grids, tessellations, tessellated surface patches, triangulations, tilings are classified here.

Delaunay triangulation, Voronoi diagrams.

Concatenation of tessellated surface patches, T-junctions.

Meshes for finite element modelling.
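
A minimal illustrative sketch of tessellating a regular grid of vertices into a triangle mesh, two triangles per grid cell; row-major vertex indexing is an illustrative assumption:

  def grid_triangles(rows, cols):
      """Triangles as index triples into a row-major array of rows x cols vertices."""
      tris = []
      for r in range(rows - 1):
          for c in range(cols - 1):
              v00 = r * cols + c
              v01, v10, v11 = v00 + 1, v00 + cols, v00 + cols + 1
              tris.append((v00, v10, v01))
              tris.append((v01, v10, v11))
      return tris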

References
Limiting references

This place does not cover:

Compression using wireframes

G06T 9/00

Computer-aided design using finite element methods

G06F 30/23

Informative references

Attention is drawn to the following places, which may be of interest for search:

Seismic models

G01V 1/282

Geologic models

G09B 23/40

Special rules of classification

For specific aspects of documents in this group, the following additional Indexing Codes from the series G06T 2210/00 should be allocated whenever relevant:

For modelling of cloth: G06T 2210/16

For collision detection or intersection of 3D objects: G06T 2210/21

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

FEM

Finite element modelling

TIN

Triangulated irregular network

T-junction

a spot where two polygons meet along the edge of another polygon

{Re-meshing}
Definition statement

This place covers:

Means or steps for modifying the structure of a mesh by inserting or deleting mesh vertices.

Generation of meshes with different level of detail from a source mesh.

Refinement or simplification of meshes, honeycomb scheme.

The refinement or coarsening may be performed locally or globally.
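
A minimal illustrative sketch of one global refinement step in which every triangle is split into four by inserting edge midpoints; the mesh representation is an illustrative assumption:

  def refine(vertices, triangles):
      """vertices: list of (x, y, z); triangles: list of vertex-index triples."""
      vertices = list(vertices)
      midpoint_of = {}

      def midpoint(i, j):
          key = (min(i, j), max(i, j))
          if key not in midpoint_of:
              vi, vj = vertices[i], vertices[j]
              vertices.append(tuple((a + b) / 2.0 for a, b in zip(vi, vj)))
              midpoint_of[key] = len(vertices) - 1
          return midpoint_of[key]

      new_triangles = []
      for a, b, c in triangles:
          ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
          new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
      return vertices, new_triangles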

Special rules of classification

Documents concerning the generation of meshes with different levels of detail are additionally classified with the Indexing Code G06T 2210/36.

Polynomial surface description
Definition statement

This place covers:

Means or steps for generating a meshfree surface description.

Polynomial surface descriptions, e.g. NURBS, Bézier surfaces, B-spline surfaces, Coons patches, Tensor product patches, without mesh generation or visualization based on tessellations.

Analytical surface descriptions.

Free-form surfaces.
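
A minimal illustrative sketch of evaluating a point on a bicubic Bézier (tensor-product) patch from a 4x4 grid of control points using Bernstein polynomials:

  def bernstein3(t):
      return [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3]

  def bezier_patch_point(control, u, v):
      """control: 4x4 grid of (x, y, z) control points; u, v in [0, 1]."""
      bu, bv = bernstein3(u), bernstein3(v)
      point = [0.0, 0.0, 0.0]
      for i in range(4):
          for j in range(4):
              w = bu[i] * bv[j]
              for k in range(3):
                  point[k] += w * control[i][j][k]
      return tuple(point)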

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

NURBS

Non-Uniform Rational B-Spline

Manipulating 3D models or images for computer graphics
Definition statement

This place covers:

Means or steps for changing 3D models, for adding information or for changing the visualization via a user interface.

View determination or computation without rendering details, geometric transformations of the whole 3D object to change the viewpoint.

Manipulating 3D models by multiple users in a collaborative environment.

Annotating or labelling of 3D models with text or markers.

Dimensioning and tolerancing of 3D models, e.g. display of dimension information for each part.

Display of 3D models as an exploded view drawing.

Unfolding or flattening of 3D models or graphs.

Positioning or defining a cut plane or a curved surface in a 3D volume data set, e.g. for projection in volume rendering.

Manipulating 3D data while displaying or updating several views at the same time, e.g. top, front, and side view or sagittal, coronal, and axial view for medical applications.

Virtual try-on or virtual 3D design systems, e.g. virtual dressing or fitting-rooms, virtual mannequins, virtual interior or garden design, architectural design, virtual car configurators.

For documents in this group the function of manipulating 3D objects prevails, not the details of how it is achieved. Therefore, the documents are usually general and do not contain specific technical details, e.g. documents concerning the change of the viewpoint via a GUI are classified here, whereas documents with mathematical details on the change of the viewpoint and the frustum are classified in G06T 15/20.

References
Limiting references

This place does not cover:

CAD-CAM (Computer Aided Design and Manufacturing)

G05B 19/4097

Generation of 3D objects with NC-machines

G05B 19/4099

Interaction techniques for graphical user interfaces

G06F 3/048

Informative references

Attention is drawn to the following places, which may be of interest for search:

2D cosmetic or hairstyle simulations

G06T 11/60

Video games

A63F 13/00

Computer-aided design

G06F 30/00

Transformation of image signals corresponding to virtual viewpoints

H04N 13/111

Special rules of classification

The boundary between G06T 19/00 on the one hand, and G06T 3/06 and subgroups and G06T 3/08 on the other hand, is not yet completely determined. Thus, double classification should be considered.

The Indexing Code series G06T 2219/00 and below is reserved for documents classified in G06T 19/00 and subgroups. These codes should be allocated to documents in G06T 19/00 whenever relevant:

G06T 2219/00

Indexing scheme for manipulating 3D models or images for computer graphics: SHOULD BE EMPTY!

G06T 2219/004

annotating, labelling: annotating or labelling of 3D models or 3D images with text or markers

G06T 2219/008

cut plane or projection plane definition: positioning or defining a cut plane or a curved surface in a 3D volume data set, e.g. for projection in volume rendering

G06T 2219/012

dimensioning, tolerancing: dimensioning or tolerancing of 3D models, e.g. display of dimension information for each part of the model

G06T 2219/016

exploded view: displaying 3D models as an exploded view drawing

G06T 2219/021

flattening: unfolding or flattening of 3D models or graphs in a 2D plane

G06T 2219/024

multi-user, collaborative environment: collaborative environments, multi-user environments

G06T 2219/028

multiple view windows (top-side-front-sagittal-orthogonal): manipulating 3D data while displaying or updating several views at the same time, e.g. sagittal, axial, and coronal view or top, side, and front view

The Indexing Code series G06T 2219/20 and below is reserved exclusively for documents classified in G06T 19/20. To each document classified in G06T 19/20 at least one of the symbols from this series should be allocated:

G06T 2219/20

Indexing scheme for editing of 3D models: SHOULD BE EMPTY!

G06T 2219/2004

aligning objects, relative positioning of parts: aligning graphical objects or relative positioning of parts of a 3D model

G06T 2219/2008

assembling, disassembling: assembling and disassembling of parts of a 3D model

G06T 2219/2012

colour coding, editing, changing, or manipulating: colour modifications, e.g. colour coding, use of pseudo-colour, highlighting object parts in a different colour

G06T 2219/2016

rotation, translation, scaling: Euclidean transformations of the object or parts thereof, i.e. rotation, translation/dragging/shifting, reflection/mirroring, or size changes of a 3D object or parts thereof

G06T 2219/2021

shape modification: shape modifications of a 3D object, e.g. adding or deleting parts of the object, shearing, free-form deformations

G06T 2219/2024

style variation: modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style (e.g. bold lines, dotted lines), displaying more details of an object or of parts thereof in a separate window

Furthermore, symbols from the Indexing Code series G06T 2200/00 and below as well as G06T 2210/00 and below should be allocated to documents in G06T 19/00 and subgroups whenever relevant.

For the documents in the group G06T 19/00 the following additional symbols from the Indexing Code series G06T 2210/00 and below are especially relevant and should be allocated whenever possible:

For architectural design: G06T 2210/04

For bandwidth reduction: G06T 2210/08

Convex hull for 3D objects: G06T 2210/12

For virtual dressing rooms: G06T 2210/16

For collision detection of 3D objects: G06T 2210/21

For medical applications concerning e.g. heart, lung, brain, tumours: G06T 2210/41

{Navigation within 3D models or images}
Definition statement

This place covers:

Means or steps for generating a sequence of images of a virtual movement (e.g. flight, walk, sail) through a 3D space or scene.

Navigation path or flight path determination.

Virtual navigation within human or animal bodies or organs, e.g. virtual medical endoscopy of the colon, of the ventricular system, of the vascular system, of the bronchial tree, or within 3D objects, e.g. virtual inspection of pipeline tubes.

Walk- or flight-through a virtual museum, a virtual building, a virtual landscape etc.
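
A minimal illustrative sketch of generating a virtual fly-through as a sequence of viewpoints linearly interpolated between key positions along a path; the keyframe format and frame count are illustrative assumptions:

  def fly_through(key_points, frames_per_segment=30):
      """key_points: list of (x, y, z) viewpoints along the desired path."""
      views = []
      for (x0, y0, z0), (x1, y1, z1) in zip(key_points, key_points[1:]):
          for f in range(frames_per_segment):
              t = f / frames_per_segment
              views.append((x0 + t * (x1 - x0),
                            y0 + t * (y1 - y0),
                            z0 + t * (z1 - z0)))
      views.append(key_points[-1])
      return views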

References
Limiting references

This place does not cover:

Navigational instruments, e.g. visual route guidance using 3D or perspective road maps (including 3D objects and buildings)

G01C 21/3635, G01C 21/3638

Interaction techniques for GUIs, e.g. control of the viewpoint to navigate in a 3D environment

G06F 3/04815

Application-oriented references

Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:

ICT specially adapted for processing medical images, e.g. editing

G16H 30/40

Informative references

Attention is drawn to the following places, which may be of interest for search:

Segmentation; Edge detection

G06T 7/10

Analysis of geometric attributes

G06T 7/60

3D animation

G06T 13/20

Centreline of tubular or elongated structure

G06T 2207/30172

Virtual racing games

A63F 13/803

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

Virtual angioscopy

virtual endoscopy of the vascular system

Virtual bronchoscopy

virtual endoscopy of the bronchial tree

Virtual colonoscopy

virtual endoscopy of the colon

Virtual ventriculoscopy

virtual endoscopy of the ventricular system

Synonyms and Keywords

In patent documents, the following words/expressions are often used as synonyms:

  • "virtual fly through navigation", "virtual navigation", "virtual flight", "virtual fly-through" and "virtual walk-through"
{Mixed reality (object pose determination, tracking or camera calibration for mixed reality G06T 7/00)}
Definition statement

This place covers:

Means or steps for generating 3D mixed reality, i.e. displaying 3D virtual model data together with 2D or 3D real-world image data, or displaying 2D virtual model data together with 3D real-world image data, e.g. real volume data.

3D mixed reality encompasses 3D augmented reality and 3D augmented virtuality.

References
Limiting references

This place does not cover:

Object pose determination, tracking or camera calibration for mixed reality

G06T 7/00

Mixed reality by combining 2D virtual models or text with 2D real image data

G06T 11/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Head-up displays, head mounted displays

G02B 27/01

With head-mounted left-right displays

H04N 13/344

Volumetric display, i.e. systems where the image is distributed through a volume

H04N 13/388

Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Definition statement

This place covers:

Means or steps for changing the visual appearance of the 3D object or parts thereof or for changing the position of the 3D object or parts thereof in the visualization environment.

Shape modifications of the 3D object, e.g. adding or deleting parts of the 3D object, shearing, free-form deformations.

Colour modifications, e.g. colour coding, use of pseudo-colour, highlighting object parts in a different colour.

Modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style (e.g. stroke width and pattern), displaying more details of the object or of parts thereof in a separate window.

Shifting objects or parts thereof, aligning objects, rotating parts of the object or model, Euclidean transformations, size changes of the object or parts thereof.

Assembling and disassembling of object parts, connecting or mating different 3D parts.
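
A minimal illustrative sketch of a combined rotation, scaling and translation applied to all vertices of a 3D model; the axis of rotation and the parameter names are illustrative assumptions:

  import numpy as np

  def transform_vertices(vertices, angle_z, scale, translation):
      """Rotate about the z-axis, then scale uniformly, then translate."""
      c, s = np.cos(angle_z), np.sin(angle_z)
      R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      return scale * (np.asarray(vertices, dtype=float) @ R.T) + np.asarray(translation)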

References
Limiting references

This place does not cover:

Geometric transformations of the whole 3D object to change the viewpoint but without rendering details

G06T 19/00

Informative references

Attention is drawn to the following places, which may be of interest for search:

Geometric image transforms in the image plane

G06T 3/00

Colour changes in 2D images

G06T 11/001

Editing of 2D images

G06T 11/60

Time-related zooming on 3D objects

G06T 13/20

Time-related zooming on 2D images

G06T 13/80

Special rules of classification

For documents in the group G06T 19/20 the following additional symbols from the Indexing Code series G06T 2219/20 and below are especially relevant. To each document classified in G06T 19/20 at least one of the following symbols should be allocated:

For aligning objects, relative positioning of parts: G06T 2219/2004

For assembling, disassembling: G06T 2219/2008

For colour coding, editing, changing, or manipulating, pseudo-colours, highlighting: G06T 2219/2012

For rotation, translation, scaling: G06T 2219/2016

For shape modifications, adding or deleting parts, shearing, free form deformations: G06T 2219/2021

For modifications of the display style, e.g. changes of patterns for surfaces, change of line drawing style: G06T 2219/2024

Glossary of terms

In this place, the following terms or expressions are used with the meaning indicated:

DDM

Direct deformation method

Indexing scheme for image analysis or image enhancement
Definition statement

This place covers:

Indexing Codes that relate to

  • the modality with which the processed image was acquired
  • special algorithmic details, also in the sense of further breakdown of groups
  • the imaged subject or the context of the image processing
Special rules of classification

Whenever classifying in G06T 5/00 and G06T 7/00, additional information should be classified using one or more of the Indexing Codes from the range of G06T 2207/00. The use of the Indexing Codes is obligatory.

For Image acquisition modality, see Indexing Code G06T 2207/10.

For Special algorithmic details, see Indexing Code G06T 2207/20.

For Subject of image; Context of image processing, see Indexing Code G06T 2207/30.

For example, the Indexing Codes would be used to classify that a model-based segmentation (G06T 7/12 and G06T 7/149) using an active shape model (G06T 2207/20124) is done on a CT image (G06T 2207/10081) of the heart (G06T 2207/30048), or to classify that extrinsic camera parameters (G06T 7/80) are determined for an infrared camera (G06T 2207/10048) mounted on a car facing to the exterior of the car (G06T 2207/30252), wherein multiresolution image processing is used (G06T 2207/20016).

As a basic principle, the Indexing Codes from G06T 2207/00 are applicable only in connection with G06T 5/00 and G06T 7/00.

However, not all Indexing Codes are applicable over the whole range of G06T 5/00 and G06T 7/00. The following restrictions apply:

The following Indexing Codes are only used as nodes to build the classification hierarchy and should not contain any documents, i.e. only their subgroups are used for classification:

Moreover, the following Indexing Code is considered redundant in the context of image processing and is, thus, not used for classification:

Image acquisition modality
Definition statement

This place covers:

G06T 2207/10012

Stereo images - image acquisition by two cameras or by a single camera that is displaced to acquire at least one stereo image pair

G06T 2207/10024

Color image - image acquisition by color or multichannel camera; only to be used when color aspect is of some importance also in the processing

G06T 2207/10028

Range image; Depth image; 3D point clouds - range image, depth image, surface image, i.e. 2D image providing depth information; 3D point clouds

G06T 2207/10032

Satellite or aerial image; Remote sensing - satellite or aerial imaging; space-based; remote sensing; Fernerkundung (German expression)

G06T 2207/10036

Multispectral image; Hyperspectral image - multispectral or hyperspectral radiometers in satellite or aerial imaging

G06T 2207/10068

Endoscopic image - image acquisition by endoscopic instrument, e.g. ultrasound catheter, colonoscope, video endoscope, capsule/pill endoscope

G06T 2207/10084

Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities - image acquisition by hybrid tomographic scanner, i.e. by system that combines different tomographic modalities

G06T 2207/10112

Digital tomosynthesis [DTS] - image from digital tomosynthesis [DTS], i.e. limited angle reconstruction based on radiographies

G06T 2207/10124

Digitally reconstructed radiograph [DRR] - DRR reconstructed from 3D tomographic data

G06T 2207/10128

Scintigraphy - image acquisition by scintigraphy or gamma camera

G06T 2207/10144

Varying exposure - acquisition of multiple images with varying exposure parameters

G06T 2207/10148

Varying focus - modification of focus during acquisition of single image or of multiple images

G06T 2207/10152

Varying illumination - acquisition of multiple images with varying illumination conditions

Special algorithmic details
Definition statement

This place covers:

G06T 2207/20008

Globally adaptive - processing of whole image with the same parameters, e.g. the same filter weights, but parameters may vary from image to image

G06T 2207/20012

Locally adaptive - processing of the image in a locally differing manner; also covers limiting the processing to a ROI

G06T 2207/20081

Training; Learning - training or learning, e.g. of background for motion analysis or of model or atlas for segmentation

G06T 2207/20096

Interactive definition of curve of interest - involving interactive definition of non-closed curve of interest; closed curve, see G06T 2207/20104

G06T 2207/20104

Interactive definition of region of interest [ROI] - involving interactive definition of ROI; setting of closed curve or box

G06T 2207/20132

Image cropping - cutting out, cropping, i.e. defining automatically a ROI of simple shape, e.g. rectangular, circular, usually for limiting the further processing to the ROI; this place does not cover manual definition of the ROI: G06T 2207/20104

G06T 2207/20156

Automatic seed setting - automatic setting of seed, e.g. based on statistics of a region of interest, usually for subsequent region-growing or for edge-growing/following; this place does not cover manual seed-setting: G06T 2207/20101

G06T 2207/20164

Salient point detection; Corner detection - detection of salient points, e.g. corners, T-junctions, end points; this place does not cover automatic seed setting: G06T 2207/20156; salient points for pattern recognition: G06F 18/00

G06T 2207/20201

Motion blur correction - correcting motion blur in still image or video

G06T 2207/20208

High dynamic range [HDR] image processing - High Dynamic Range Imaging [HDR or HDRI] from a series of conventional images of lower dynamic range

G06T 2207/20216

Image averaging - averaging of multiple images

G06T 2207/20221

Image fusion; Image merging - image fusion, i.e. merging of images of same subject

G06T 2207/20224

Image subtraction - subtraction of images of same subject, e.g. temporal subtraction, subtraction of images with varying illumination conditions or for masking out certain pre-segmented image parts

Subject of image; Context of image processing
Definition statement

This place covers:

G06T 2207/30021

Catheter; Guide wire - subject of image: catheter, endoscope or guide wire when imaged in biomedical image

G06T 2207/30052

Implant; Prosthesis - subject of image: implant or prosthesis; also non-synthetical transplants

G06T 2207/30068

Mammography; Breast - subject of image: mammography; breast, usage not limited to x-ray image

G06T 2207/30076

Plethysmography - measurement of possibly periodic volume/size/position changes, e.g. due to blood flow

G06T 2207/30101

Blood vessel; Artery; Vein; Vascular - subject of image: vascular structures, blood vessel, artery, vein, angiography

G06T 2207/30132

Masonry; Concrete - inspection of concrete or masonry in buildings, dams, bridges, etc.

G06T 2207/30144

Printing quality - inspection of printed product

G06T 2207/30152

Solder - inspection of solder, electrical contacts

G06T 2207/30164

Workpiece; Machine component - inspection of workpiece, e.g. machine component; Werkstück (German expression)

G06T 2207/30172

Centreline of tubular or elongated structure - determining the centreline of a tubular or elongated structure, e.g. of a lumen, vessel, colon, pipe

G06T 2207/30176

Document - enhancement or analysis of document image; this place does not cover document recognition: G06F 18/00, G06V

G06T 2207/30181

Earth observation - earth observation with image from remote sensing

G06T 2207/30184

Infrastructure - observation of infrastructure, e.g. urban infrastructure, roads, railway, water channel, power transmission line

G06T 2207/30188

Vegetation; Agriculture - observation of vegetation areas, e.g. agriculture

G06T 2207/30192

Weather; Meteorology - weather, meteorology, climate

G06T 2207/30204

Marker - subject of image: artificial marker or symbol in image, e.g. used for calibration, registration or tracking

G06T 2207/30212

Military - military application, e.g. target tracking

G06T 2207/30216

Redeye defect - redeye defect detection and correction

G06T 2207/30232

Surveillance - application in video surveillance

G06T 2207/30236

Traffic on road, railway or crossing - subject of image: traffic on road, railway, crossing, square

G06T 2207/30241

Trajectory - determination of trajectory, track, trace

G06T 2207/30244

Camera pose - determination of camera pose, as opposed to the determination of the pose of image content

G06T 2207/30248

Vehicle exterior or interior - imaging with camera placed on a vehicle, car, train, plane, boat or mobile robot

G06T 2207/30252

Vehicle exterior; Vicinity of vehicle - subject of image: exterior of a vehicle; imaging from a vehicle

G06T 2207/30256

Lane; Road marking - subject of image: lane, road marking, railroad, pathway

G06T 2207/30261

Obstacle - subject of image: obstacle, e.g. pedestrian, other vehicle

G06T 2207/30264

Parking - imaging from a vehicle, e.g. for parking aid

G06T 2207/30268

Vehicle interior - subject of image: interior of a vehicle