CPC Definition - Subclass G10L
This place covers:
- processing of speech or voice signals in general (G10L 25/00);
- production of synthetic speech signals (G10L 13/00);
- recognition of speech (G10L 15/00);
- lyrics recognition from a singing voice (G10L 15/00);
- speaker identification, authentication or verification (G10L 17/00);
- singer recognition from a singing voice (G10L 17/00);
- analysis of speech signals for bandwidth compression or extension, bit-rate or redundancy reduction (G10L 19/00);
- coding/decoding of audio signals for compression and expansion using analysis-synthesis, source filter models or psycho-acoustic analysis (G10L 19/00);
- modification of speech signals, speech enhancement, source separation (G10L 21/00);
- noise filtering or echo cancellation in an audio signal (G10L 21/00);
- speech or voice analysis techniques specially adapted to analyse or modify audio signals not necessarily including speech or voice; these are also covered in the subgroups G10L 21/00 and G10L 25/00.
This place does not cover:
Speech or voice prosthesis | |
Sound input or sound output arrangements for computers | |
Handling natural language data | |
General pattern recognition | |
Coding or synthesis of audio signals in musical instruments | |
Karaoke or singing voice processing | |
Sound production | |
Devices for the storage of speech signals | |
Amplifiers | |
Gain or frequency control | |
Broadcasting | |
Secret communication | |
Encoding of compressed speech signals for transmission or storage | |
Spatial sound recording | |
Spatial sound reproduction | |
Mere application of speech or voice analysis techniques | application place |
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Information retrieval of audio data | |
Broadcasting arrangements of audio | |
Name dialling controlled by voice recognition | |
Automatic arrangements for answering calls |
Examples of places in relation to which this place is residual:
Acoustics not otherwise provided for |
Attention is drawn to the following places, which may be of interest for search:
Measurement of sound waves in general | |
Sound input/output for computers | |
Image data processing | |
Teaching or communicating with the blind, deaf or mute | |
Electronic musical instruments | |
Information storage, e.g. sound storage | |
Electronic circuits for sound generation | |
Electronic filters | |
Coding, decoding or code conversion, error protection in general | |
Telephonic communication | |
Switching systems | |
Microphone arrangements, hearing aids, public address systems | |
Spatial sound reproduction |
In this place, the following terms or expressions are used with the meaning indicated:
Speech | definite vocal sounds that form words to express thoughts and ideas |
Voice | sounds generated by the vocal cords, or synthetic versions thereof |
Audio | of or relating to humanly audible sound |
This place covers:
- synthesis of speech from text, concatenation of smaller speech units, grapheme-to-phoneme conversion;
- modification of the voice for speech synthesis: gender, age, pitch, prosody, stress;
- hardware or software implementation details of a speech synthesis system.
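The grapheme-to-phoneme conversion and unit concatenation mentioned above can be sketched as a toy lexicon lookup with a letter-to-sound fallback. The mini-lexicon, phoneme symbols and function names below are hypothetical, for illustration only; real TTS front-ends use far larger lexicons and context-dependent rules.

```python
# Toy sketch of dictionary-based grapheme-to-phoneme conversion with a
# letter-to-sound fallback, as used in concatenative text-to-speech.
# The lexicon and letter-to-sound table are made up for illustration.

LEXICON = {"speech": ["S", "P", "IY", "CH"], "the": ["DH", "AH"]}
LETTER_TO_SOUND = {"a": "AE", "b": "B", "c": "K", "d": "D", "e": "EH",
                   "g": "G", "o": "OW", "t": "T"}

def grapheme_to_phoneme(word):
    """Look the word up in the lexicon; fall back to letter-to-sound rules."""
    if word in LEXICON:
        return LEXICON[word]
    return [LETTER_TO_SOUND.get(ch, ch.upper()) for ch in word]

def synthesize(text):
    """Concatenate per-word phoneme sequences into one unit sequence."""
    units = []
    for word in text.lower().split():
        units.extend(grapheme_to_phoneme(word))
    return units

print(synthesize("the speech"))  # ['DH', 'AH', 'S', 'P', 'IY', 'CH']
```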
Attention is drawn to the following places, which may be of interest for search:
Excitation coding of a speech signal | |
Processing or translation of natural language |
In patent documents, the following abbreviations are often used:
HMM | Hidden Markov Model |
TTS | Text To Speech |
This place covers:
Concepts used for speech synthesis can be linked to an emotion to be conveyed (US2010329505), to a communication goal driving a dialogue (US2010241420), to image-to-speech conversion (US2010231752), or to native-sounding speech (US2004030554).
This place does not cover:
Processing or translation of natural language |
This place covers:
- recognition of text or phonemes from a spoken audio signal;
- spoken dialogue interfaces, human-machine spoken interfaces;
- topic detection in a dialogue, semantic analysis, keyword detection, spoken command and control;
- context-dependent speech recognition (location, environment, age, gender, etc.);
- parameter extraction, acoustic models, word models, grammars, language models for speech recognition;
- recognition of speech in a noisy environment;
- recognition of speech using visual cues;
- feedback of the recognition results, disambiguation of speech recognition results;
- dedicated hardware or software implementations, parallel and distributed processing of speech recognition engines.
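The acoustic and language models listed above are classically combined by Viterbi decoding over a hidden Markov model (HMM, see the abbreviations below). A minimal sketch, with a made-up two-state silence/speech model and illustrative probabilities rather than any real acoustic model:

```python
# Minimal Viterbi decoding over a discrete two-state HMM ("sil"/"speech"),
# the dynamic programme at the heart of classical HMM-based recognition.
# All probabilities here are invented for illustration.
import math

states = ["sil", "speech"]
log_init = {"sil": math.log(0.8), "speech": math.log(0.2)}
log_trans = {("sil", "sil"): math.log(0.7), ("sil", "speech"): math.log(0.3),
             ("speech", "sil"): math.log(0.2), ("speech", "speech"): math.log(0.8)}
log_emit = {("sil", "quiet"): math.log(0.9), ("sil", "loud"): math.log(0.1),
            ("speech", "quiet"): math.log(0.2), ("speech", "loud"): math.log(0.8)}

def viterbi(obs):
    """Return the most likely hidden state sequence for the observations."""
    v = {s: log_init[s] + log_emit[(s, obs[0])] for s in states}
    backpointers = []
    for o in obs[1:]:
        new_v, bp = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[p] + log_trans[(p, s)])
            new_v[s] = v[prev] + log_trans[(prev, s)] + log_emit[(s, o)]
            bp[s] = prev
        backpointers.append(bp)
        v = new_v
    best = max(states, key=lambda s: v[s])
    path = [best]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return path[::-1]

print(viterbi(["quiet", "loud", "loud", "quiet"]))
# ['sil', 'speech', 'speech', 'sil']
```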
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Spoken command and control of surgical instruments | |
Speech input in video games | |
Voice control for systems within a vehicle | |
Speech input for vehicle navigation systems | |
Sound input arrangements for computers | |
Teaching how to speak | |
Name dialling controlled by voice recognition | |
Speech interaction details in automatic or semi-automatic exchange systems for interactive information services |
Attention is drawn to the following places, which may be of interest for search:
Information retrieval of audio data | |
Complex mathematical functions | |
Handling natural language data | |
Pattern recognition | |
Face recognition, lip reading without acoustical input | |
Educational appliances | |
Signal processing for recording |
In patent documents, the following abbreviations are often used:
ANN | Artificial neural network |
ASR | Automatic speech recognition |
CSR | Continuous speech recognition |
GMM | Gaussian mixture model |
HMM | Hidden Markov model |
IVR | Interactive voice response |
MLP | Multi-layer perceptron |
VLSR | Very large speech recognition |
This place covers:
- recognition or identification of a speaker;
- verification or authentication of a speaker;
- feature extraction, dialogue, prompts, passwords for identification;
- identification in noisy conditions;
- multimodal identification including voice;
- impostor detection.
Attention is drawn to the following places, which may be of interest for search:
Information retrieval of audio data | |
Complex mathematical functions | |
Security arrangements, restricting access by authenticating users, using biometric data | |
Pattern recognition | |
Individual entry or exit registers, access control with identity check using personal physical data | |
Secret secure communication including means for verifying the identity or authority of a user |
In this place, the following terms or expressions are used with the meaning indicated:
Speaker verification, or authentication | refers to verifying that the identity claimed by a user is genuine; otherwise the user is an impostor. Speaker recognition, or identification, aims at determining who the user is within a closed (finite) set of users; otherwise the user is unknown. |
A goat, sheep | often refers to a person whose voice is easy to counterfeit |
A wolf, predator | often refers to a person who can easily counterfeit someone else's voice, or who is often identified as someone else |
An impostor | is someone actively trying to counterfeit someone else's identity |
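The distinction between closed-set identification (argmax over enrolled speakers) and verification (threshold on the claimed speaker's score) can be sketched as follows. Cosine similarity on fixed-length voice embeddings stands in for a real scoring back-end (e.g. GMM or neural embeddings); the embeddings and threshold are made up for illustration.

```python
# Sketch contrasting speaker identification (closed set: pick the best
# enrolled match) with speaker verification (accept/reject one claimed
# identity). All embeddings and the threshold are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

enrolled = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.3]}

def identify(embedding):
    """Closed set: return the enrolled speaker with the highest score."""
    return max(enrolled, key=lambda name: cosine(embedding, enrolled[name]))

def verify(embedding, claimed, threshold=0.8):
    """Accept the claimed identity only if its score clears the threshold."""
    return cosine(embedding, enrolled[claimed]) >= threshold

probe = [0.85, 0.15, 0.25]
print(identify(probe))         # alice
print(verify(probe, "alice"))  # True (genuine)
print(verify(probe, "bob"))    # False (impostor attempt)
```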
In patent documents, the following abbreviations are often used:
ANN | Artificial neural network |
ASR | Automatic speech recognition |
GMM | Gaussian mixture model |
HMM | Hidden Markov model |
IVR | Interactive voice response |
MLP | Multi-layer perceptron |
This place covers:
Techniques for the reduction of data from audio sources, i.e. compression of audio. These techniques are applied to reduce the quantity of information to be stored or transmitted, but are independent of the end application, medium or transmission channel, i.e. they exploit only the properties of the source signal itself or of the final receiver exposed to this signal (the listener).
Two main types of source can be distinguished:
"Speech only" encompasses signals produced by human speakers; historically this was to be understood as mono-channel, single-speaker "telephone quality" speech with a narrow bandwidth limited to a maximum of 4 kHz. Encoding of speech-only sources primarily aims at reducing the bit rate while still providing fair intelligibility of the spoken content, though not always fidelity to the original.
"Audio signal" is broader and comprises speech as well as background information, e.g. a music source having multiple channels. Encoding of audio deals primarily with transparent, i.e. "high fidelity", reproduction of the original signal.
The compression techniques can also be distinguished as being:
lossless or lossy, i.e. according to whether a perfect reconstruction of the source is possible or only a perceptually acceptable approximation can be achieved.
The techniques classified in this subclass are based either on modelling the production of the signal (voice) or the perception of it (general audio).
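The source-filter idea behind speech coding can be sketched as follows: each frame is modelled as an all-pole (LPC) filter driven by an excitation, so a coder can transmit a few filter coefficients plus a compact excitation description instead of raw samples. This is a minimal sketch of the autocorrelation method only, with a synthetic test signal; it contains no real codec details.

```python
# Sketch of LPC analysis (autocorrelation method): estimate the all-pole
# "vocal tract" filter of a frame by solving the Toeplitz normal equations.
# The 2-pole test filter and frame length are illustrative only.
import numpy as np

def lpc(frame, order):
    """Return LPC predictor coefficients a[1..order]."""
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(0)
excitation = rng.standard_normal(400)
# Synthesize a frame with a known stable 2-pole filter, then re-estimate it:
# s[n] = 1.3 s[n-1] - 0.4 s[n-2] + e[n]
frame = np.zeros(400)
for n in range(400):
    frame[n] = excitation[n]
    if n >= 1:
        frame[n] += 1.3 * frame[n - 1]
    if n >= 2:
        frame[n] += -0.4 * frame[n - 2]

a = lpc(frame, 2)
print(np.round(a, 2))  # close to [1.3, -0.4]
```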
This place does not cover:
Coding of signals within electronic musical instruments |
Attention is drawn to the following places, which may be of interest for search:
Complex mathematical functions | |
Signal processing not specific to the method of recording or reproducing | |
Editing; Indexing; Addressing; Timing or synchronizing; Monitoring; | |
Compression | |
Detecting, preventing errors in received information | |
Quality monitoring in automatic or semi-automatic exchanges | |
Quality control of voice transmission between switching centres | |
Simultaneous speech and data transmission | |
Transmission of audio and video in television systems | |
Stereophonic arrangements | |
Stereophonic systems | |
Wireless communication networks |
In this place, the following terms or expressions are used with the meaning indicated:
audio signal | is meant to include speech, music, silence or background signal, or any combinations thereof, unless explicitly specified |
In patent documents, the following abbreviations are often used:
CELP | Code Excited Linear Prediction |
CTX | Continuous transmission |
DTX | Discontinuous transmission |
HVXC | Harmonic Vector eXcitation Coding |
LPC | linear prediction coding |
MBE | Multiband Excitation |
MELP | Mixed Excitation Linear Prediction |
MOS | mean opinion score |
MPEG | Moving Picture Experts Group |
MPEG1 audio | Standard ISO/IEC 11172-3 |
MPEG2 audio | Standard ISO/IEC 13818-3 |
MPEG4 audio | Standard ISO/IEC 14496-3 |
MP3 | MPEG 1 Layer III |
PCM | pulse code modulation |
PWI | Prototype Waveform Interpolation |
SBR | Spectral Band Replication |
In patent documents, the following words/expressions are often used as synonyms:
- " perceptual" and "psychoacoustic"
This place covers:
Coding of a signal with rate adaptation, e.g. adapted to voiced speech, unvoiced speech, transitions and noise/silence portions.
Coding of a signal with a core encoder providing a minimum level of quality, and extension layers that improve the quality at the cost of a higher bit rate. This includes parameter-based bandwidth extension (e.g. SBR) or channel extension.
This group is in opposition to G10L 21/038, in which the bandwidth extension is artificial, i.e. based only on the narrowband encoded signal.
Attention is drawn to the following places, which may be of interest for search:
Artificial bandwidth extension, i.e. based only on the narrowband encoded signal | |
Spatial sound recording | |
Spatial sound reproduction |
This place covers:
This subgroup deals with speech or voice modification applications, but also receives applications of speech or voice analysis techniques specially adapted to analyse or modify audio signals that do not necessarily include speech or voice but are not music signals (G10H).
- bandwidth extension of an audio signal;
- improvement of the intelligibility of a coded speech signal;
- removal of noise from an audio signal;
- removal of echo from an audio signal;
- separation of audio sources;
- pitch or speed modification of an audio signal;
- voice morphing;
- visualisation of audio signals (e.g. sonagrams);
- lip or face movement synchronisation with speech (e.g. phoneme-viseme alignment);
- face animation synchronisation with the emotion contained in the voice or speech signal.
This place does not cover:
Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals, e.g. for compression or expansion, source-filter models or psychoacoustic analysis |
Attention is drawn to the following places, which may be of interest for search:
Direction finder | |
Complex mathematical functions | |
Animation based on audio data, talking heads | |
Signal processing not specific to the method of recording or reproducing | |
Signal processing not specific to the method of recording or reproducing, for reducing noise | |
Editing; Indexing; Addressing; Timing or synchronizing; Monitoring; | |
Gain control in amplifiers | |
Reducing echo effect or singing in line transmissions systems | |
Reducing noise or bandwidth in transmission systems not characterised by the medium used for transmission | |
Hearing aids | |
Public address systems |
In this place, the following terms or expressions are used with the meaning indicated:
Viseme | a visual representation of the mouth, lips, tongue and teeth corresponding to a phoneme |
In patent documents, the following abbreviations are often used:
BSS | blind source separation |
LDA | linear discriminant analysis |
NB | narrowband |
PCA | principal component analysis |
SBR | Spectral Band Replication |
WB | wideband |
This place covers:
Visemes are selected to match the corresponding speech segment, or the speech segments are adapted or chosen to match the viseme. This symbol also encompasses coarticulation effects as used in facial character animation or talking heads.
Attention is drawn to the following places, which may be of interest for search:
Facial character animation per se |
This place covers:
Bandwidth extension taking place at the receiving side, e.g. generation of artificial low- or high-frequency components or regeneration of spectral holes, based only on the narrowband encoded signal. This is in opposition to G10L 19/24, wherein parameters are computed during the encoding step to enable bandwidth extension at the decoding step.
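One classical receiver-side technique of this kind is spectral folding: upsampling by zero insertion mirrors the available narrowband spectrum into the missing high band, with no side parameters from the encoder. The sketch below is purely illustrative; real systems additionally shape the folded band and estimate its energy envelope.

```python
# Sketch of artificial bandwidth extension by spectral folding: zero
# insertion doubles the sampling rate and creates a spectral image of the
# narrow band above it, which serves as the artificial high band.
import numpy as np

def spectral_folding(narrowband, factor=2):
    """Upsample by zero insertion; the spectral image is the new high band."""
    wideband = np.zeros(len(narrowband) * factor)
    wideband[::factor] = narrowband
    return wideband

fs = 8000
t = np.arange(256) / fs
nb = np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone in the narrow band
wb = spectral_folding(nb)          # now sampled at 16 kHz

spec = np.abs(np.fft.rfft(wb))
peaks = np.argsort(spec)[-2:] * 16000 / len(wb)
print(sorted(peaks))  # the original 1 kHz tone plus its mirrored image at 7 kHz
```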
Attention is drawn to the following places, which may be of interest for search:
Parameter-based bandwidth extension (e.g. SBR) |
This place covers:
- processing of speech or voice signals in general, in particular detection of a speech signal, end-point detection in noise, extraction of pitch, measurement of voicing, emotional state, voice pathology or other speech- or voice-related parameters;
- speech or voice analysis techniques specially adapted to analyse audio signals not necessarily including speech or voice, such as audio scene segmentation, jingle detection, separation from music or noise, or detection of particular sounds.
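Two of the analysis tasks listed above, detection of a speech signal and extraction of pitch, can be sketched with an energy threshold and an autocorrelation peak search. The thresholds, frame sizes and frequency range below are illustrative choices, not taken from any standard.

```python
# Sketch of energy-based speech detection and autocorrelation pitch
# extraction. Parameters are illustrative only.
import numpy as np

def is_speech(frame, energy_threshold=0.01):
    """Crude voice-activity decision: mean energy above a threshold."""
    return np.mean(frame ** 2) > energy_threshold

def pitch_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate pitch from the autocorrelation peak in a plausible lag range."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(1024) / fs
voiced = np.sin(2 * np.pi * 200 * t)  # 200 Hz "voiced" tone
silence = np.zeros(1024)

print(is_speech(voiced), is_speech(silence))  # True False
print(round(pitch_autocorr(voiced, fs)))      # 200
```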
This place does not cover:
Muting amplifier when no signal is present |
Attention is drawn to the following places, which may be of interest for search:
Comfort noise | |
Karaoke or singing voice processing, parameter extraction for musical signal categorisation, electronic musical instruments | |
Gain or frequency control | |
DTX communication | |
Switching of direction of transmission by voice in loud-speaking telephone systems | |
Multiplex systems |
In this place, the following terms or expressions are used with the meaning indicated:
audio signal | of or relating to humanly audible sound; e.g. it comprises any combination of background noise or silence, voice or speech, and music |