CPC Definition - Subclass G06N
This place covers:
Computing systems where the computation is not based on a traditional mathematical model of a computer
This place covers:
Computing systems where the computation is based on biological models (brains, intelligence, consciousness, genetic reproduction) or is using physical material of biological origin (biomolecules, DNA, biological neurons, etc.) to perform the computation. The computation can be digital, analogue or chemical in nature.
Classification in this group or its subgroups is expected only if the invention concerns the development of a computer. DNA and protein biomaterials as such should be classified in the relevant groups of (bio)chemistry.
Attention is drawn to the following places, which may be of interest for search:
Computer systems using knowledge based models | |
Probabilistic networks | |
Computer systems using fuzzy logic | |
Machine Learning | |
Analogue computers simulating functional aspects of living beings | |
Memories whose operation depends upon chemical change | |
Bioinformatics |
In patent documents, the following words/expressions are often used as synonyms:
- "biocomputers", "biological computers", "nanocomputers", "neural networks" and "artificial life"
This place covers:
Computers using actual physical material of biochemical origin or material as used in carbon-based living systems, i.e. biomolecules, proteins, cells or other biochemicals to perform computation.
This place does not cover:
Computers using real biological neurons integrated on chips | |
Computers using DNA |
Attention is drawn to the following places, which may be of interest for search:
Computation based on Inorganic chemicals |
In patent documents, the following words/expressions are often used as synonyms:
- "biocomputers", "wetware", "biochemical computers", "biochips" and "living computers"
This place covers:
Artificial or synthetic life forms that are based on models of or are inspired by natural life forms but are actually implemented or controlled by computing arrangements.
Typical examples of artificial life (Alife) models: agent-based models, multi-agent systems, cellular automata, collective behaviours, self-organised systems, swarm intelligence.
Attention is drawn to the following places, which may be of interest for search:
Biological life forms that are created involving biological genetic engineering, e.g. clones |
In this place, the following terms or expressions are used with the meaning indicated:
Alife | Artificial life |
In patent documents, the following words/expressions are often used as synonyms:
- "Alife", "artificial life", "synthetic life" and "virtual creatures"
This place covers:
Software simulations on computing arrangements of systems exhibiting behaviours normally ascribed to life forms.
Typical examples of Alife based on simulated virtual life forms: ant colony optimisation (ACO), ant clustering, Ant-Miner, artificial bee colonies (ABC), artificial immune systems (AIS), firefly algorithms, particle swarm clustering, particle swarm classification, autonomous agents or bots, intelligent agents or bots, learning agents or bots, smart agents or bots, metaverse, virtual reality, virtual world, virtual society, virtual creatures.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Computer-aided design [CAD] for design optimisation, verification or simulation | |
ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics |
Attention is drawn to the following places, which may be of interest for search:
Computer games | |
Information retrieval | |
Computer-aided design [CAD] | |
Collaborative systems - Groupware | |
Image processing for animations | |
Protocols for games, networked simulations or virtual reality |
In patent documents, the following words/expressions are often used as synonyms:
- "metaverse", "virtual reality", "virtual world", "virtual society", "social simulations", "particle swarm", "ant colony", "artificial immune systems"
This place covers:
Computing arrangements emulating/simulating existing biological life forms, mainly implemented as physical robots in the form of animals (pets) or humans (humanoids or androids). These physical entities can be standalone or work in groups/swarms (e.g. a RoboCup team of robotic football players).
Typical examples of Alife based on physical entities: humanoids, androids, robotic pets, autonomous robots, intelligent robots, learning robots, smart robots, behaviour-based robotics.
This group does not cover purely mechanical devices: there should always be some computer involved. The entity should act like, or at least be designed to look like, an animal or a human.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Programme-controlled manipulators | |
Control of industrial robots | |
Total factory control |
Attention is drawn to the following places, which may be of interest for search:
Toys or dolls | |
Industrial robots or mechanical grippers |
In patent documents, the following words/expressions are often used as synonyms:
- "humanoid", "android", "robot", "robot pet" and "behaviour-based robots"
This place covers:
Computation simulating or emulating the functioning of biological brains, mainly implemented in non-biological material, i.e. electronic or optical material. The implementation can be in digital electronic, analogue electronic or biological technology.
Applications of whatever sort just using neural networks with no description of the neural network itself are to be classified in the relevant application field only.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Adaptive control systems | |
Pattern recognition | |
Image processing using neural networks | |
Speech recognition using artificial neural networks |
In patent documents, the following words/expressions are often used as synonyms:
- "neural network", "neuronal network", "neuromimetic network", "artificial brain" and "perceptron"
This place covers:
The specific architecture or layout of the neural network, how the neurons are interconnected. For the different architectures see the titles of the different subgroups.
In patent documents, the following words/expressions are often used as synonyms:
- "architecture", "topology", "layout" and "interconnection pattern"
This place covers:
Adaptive Resonance Theory (ART).
Adaptive Resonance Theory was a short-lived neural network method developed by Grossberg and Carpenter. This subgroup contains only documents on ART by Grossberg and Carpenter (obsolete technology).
This place covers:
Neural networks using some form of chaos or fractal technology or methods.
Attention is drawn to the following places, which may be of interest for search:
Chaos models per se |
In patent documents, the following words/expressions are often used as synonyms:
- "fractal transform function", "fractal growth", "chaotic neural network" and "Mandelbrot"
This place covers:
Combinations of neural networks and knowledge-based models, in particular knowledge representation and reasoning (KRR) models such as expert systems. This place contains documents where knowledge-based models and neural networks work together on the same level and also where knowledge-based models are used to represent, approximate, construct, augment, support, explain or control a neural network.
Typical examples of such neural network models: rule-based networks, graph networks, hybrid networks, surrogate networks, response surface networks, physics-augmented networks, neural ordinary differential equations, neural tensor networks, symbolic networks, neuro-symbolic systems.
Where the knowledge-based models are within neural networks, classification should be made in group G06N 3/042 only.
In patent documents, the following words/expressions are often used as synonyms:
- "rule-based neural network" and "knowledge-based neural network"
This place covers:
Combinations of neural networks and fuzzy logic or inference. This place contains documents where fuzzy logic/inference and neural networks work together on the same level and also where fuzzy logic/inference is used to represent, approximate, construct, augment, support, explain or control a neural network.
Where the fuzzy-based models are within neural networks, classification should be made in group G06N 3/043 only.
Attention is drawn to the following places, which may be of interest for search:
Fuzzy inferencing | |
Computing arrangements using fuzzy logic |
In this place, the following terms or expressions are used with the meaning indicated:
ANFIS | Adaptive neuro-fuzzy inference systems |
In patent documents, the following words/expressions are often used as synonyms:
- "Adaptive neuro-fuzzy inference system (ANFIS)" and "Neuro-fuzzy inference system"
This place covers:
Neural networks where connections between nodes can create a cycle, allowing output from some nodes to affect subsequent input to the same nodes, and exhibiting temporal dynamic behaviours.
Typical examples of such neural network models: associative memories, feedback networks, Elman networks, reservoir computing, echo state networks (ESN), liquid state machines (LSM), Boltzmann machines.
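As an illustrative sketch (not part of the classification text), the cycle described above can be shown with a minimal single-unit Elman-style recurrent cell; all names and weight values here are hypothetical:

```python
import math

def elman_step(x, h_prev, w_in, w_rec, b):
    """One step of a single-unit Elman cell: the previous hidden
    state feeds back into the same unit, giving temporal dynamics."""
    return math.tanh(w_in * x + w_rec * h_prev + b)

# The same input repeated over time yields different outputs,
# because each output depends on the accumulated hidden state.
h = 0.0
outputs = []
for x in [1.0, 1.0, 1.0]:
    h = elman_step(x, h, w_in=0.5, w_rec=0.8, b=0.0)
    outputs.append(h)
```

The feedback weight `w_rec` is what creates the cycle: setting it to zero would reduce the cell to a memoryless feedforward unit.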
In patent documents, the following words/expressions are often used as synonyms:
- "feedback network" and "recurrent neural network"
- "Hopfield nets" and "associative networks"
This place covers:
Architectures wherein multiple neural networks are connected in parallel or in series, including modular architectures wherein components/modules may be represented as neural networks. The neural networks can cooperate on the same level or one neural network can represent, approximate, construct, augment, support, explain or control another neural network.
Parallel neural networks can also be used for fault tolerance when connected to a voting system.
Several neural networks can also be trained in different ways or with different training examples and then combined in parallel to increase the reliability or accuracy.
Typical examples of such neural network models: multiple neural networks, hierarchical networks, pyramidal networks, modular networks, neural network ensembles, stacked networks, cascaded networks, mixture of expert (MoE) networks, hierarchical temporal memories (HTM), cortical learning algorithms (CLA), dueling networks, adversarial networks, Siamese networks, triplet networks, latent space models, network embeddings, memory-augmented neural networks (MANN), memory networks, neural Turing machines (NTM), differentiable neural computers (DNC), networks with attention mechanisms, transformers, bidirectional encoder representations from transformers (BERT), generative pre-trained transformers (GPT-2, GPT-3), distributed neural networks.
Attention is drawn to the following places, which may be of interest for search:
Ensemble learning |
In patent documents, the following words/expressions are often used as synonyms:
- "multiple neural networks" and "parallel neural networks"
- "hierarchical neural networks" and "ensemble neural networks"
This place covers:
Architectures learning representations for datasets and comprising two main components: an encoder transforming inputs into a latent space and a decoder transforming this intermediate representation to the outputs. Additional components further process data or embeddings in the input, latent or output spaces. Auto-encoders learn to reconstruct the original representation.
Typical examples of such neural network models: stochastic autoencoders (SAE), denoising autoencoders (DAE), contractive autoencoders (CAE), variational autoencoders (VAE), ladder networks, convolution-deconvolution networks.
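The encoder/decoder split described above can be sketched with a minimal linear one-dimensional latent space (an assumption made for illustration; real auto-encoders learn the weights by training):

```python
def encoder(x, w):
    # Project a 2-D input onto a 1-D latent code.
    return w[0] * x[0] + w[1] * x[1]

def decoder(z, w):
    # Map the latent code back to a 2-D reconstruction.
    return [w[0] * z, w[1] * z]

# With unit-norm weights along the data direction, reconstruction is
# exact for inputs lying on that direction (a PCA-like special case).
w = [0.6, 0.8]          # unit vector: 0.36 + 0.64 = 1
x = [3.0, 4.0]          # lies along w (x = 5 * w)
z = encoder(x, w)       # latent code
x_hat = decoder(z, w)   # reconstruction of x
```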
This place covers:
The neocognitron is a unique and specific neural network architecture characterised by its name.
It is a hierarchical multilayered neural network and a natural extension of cascading models.
In the neocognitron, multiple types of cells, such as S-cells and C-cells, are used to perform recognition tasks.
This subgroup contains documents only if the type of neural network is specifically called a neocognitron.
This place covers:
Convolutional neural networks, or convolution neural networks, are a specialised type of artificial neural networks that use a mathematical operation called convolution in place of general matrix multiplication in at least one of their layers.
Typical examples or features of such neural network models: pooling layers, convolution layers, feature maps, dilated convolutions, residual networks (ResNet), dense networks (DenseNet), U-Net.
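The convolution operation that replaces general matrix multiplication can be sketched in one dimension (illustrative only; CNN libraries implement the same idea for 2-D images with many kernels):

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation form, as used in CNNs):
    slide the kernel over the signal and take dot products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [1, 0, -1] kernel responds to the local slope of the input,
# producing a feature map smaller than the input (no padding).
feature_map = conv1d([1, 2, 3, 4, 5], [1, 0, -1])
```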
This place covers:
Neural networks having as special feature that the neurons individually, or the weights connecting the neurons, or the architecture as a whole, have a probabilistic, stochastic or statistical aspect.
Typical examples of such neural network models: Bayesian neural networks, Boltzmann machines, probabilistic RAM (pRAM).
Attention is drawn to the following places, which may be of interest for search:
Chaotic determination of the weights | |
Neural networks based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS] | |
Probabilistic graphical models, e.g. probabilistic networks |
In patent documents, the following words/expressions are often used as synonyms:
- "probabilistic neural network" and "PNN"
- "statistical neuron function" and "stochastic neuron function"
- "p-RAM" and "probabilistic RAM"
This place covers:
Neural networks having as special feature that they generate candidates from an existing distribution of samples, e.g. generate virtual image, text, code or sound data examples.
Typical examples of such neural network models: generative adversarial networks (GAN), Boltzmann machines, Helmholtz machines, energy-based models, spin-based models, networks based on Ising or Pott models.
This place covers:
Function that converts a weighted sum of input data into an output signal. In artificial neural networks, the size of the weighted sum from the previous layer determines whether the neuron is active or not.
Typical examples of activation functions: sigmoid, logistic, hyperbolic tangent (tanh), step function, Heaviside, thresholding, softmax, maxout, rectified linear unit (ReLU), piecewise linear activation functions, radial basis functions (RBF), Gaussian error linear unit (GELU), exponential linear unit (ELU), ridge functions, fold functions.
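A few of the activation functions listed above, written out directly (a sketch for illustration, not part of the classification text):

```python
import math

def sigmoid(x):
    # Logistic function: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise.
    return max(0.0, x)

def softmax(xs):
    # Converts a vector of scores into a probability distribution.
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```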
In patent documents, the following words/expressions are often used as synonyms:
- "sigmoid" and "logistic function"
- "non-linear activation function" and "non-linear transfer function"
- "approximated activation functions" and "piecewise linear activation function"
This place covers:
Neurons or neural networks having a temporal aspect, e.g. spiking neurons or neural networks where the time-like dynamics are a specific aspect of the invention. This can be in digital, but is often in analogue technology. These neurons are meant to be a more realistic simulation of real biological neurons.
Typical examples or features of such neural network models: integrate-and-fire (IF) neurons, resonate-and-fire neurons, FitzHugh-Nagumo model, Hodgkin-Huxley model, Izhikevich models, conductance-based models, compartmental models, multi-compartment models, tempotron, time delay networks, address event representations (AER), neuromorphic behaviours, spike-timing-dependent plasticity (STDP), bursting, firing, spiking, population coding, rate coding, temporal coding.
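The integrate-and-fire behaviour mentioned above can be sketched as a leaky integrate-and-fire neuron; the threshold and leak values are arbitrary illustrative choices:

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    each time step, integrates the input, and emits a spike (then
    resets) when it crosses the threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input accumulates until a spike is emitted.
spike_train = lif_run([0.4, 0.4, 0.4, 0.4, 0.4])
```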
In patent documents, the following words/expressions are often used as synonyms:
- "spiking", "timelike", "temporal" and "dynamical"
This place covers:
Neural networks having as special feature that connections between the nodes do not form a cycle. This network model is structured with only forward connections through which data is transmitted in one direction.
This group covers only neural networks with feedforward structures that are not provided for by other groups under G06N 3/04.
This place covers:
The technology used to physically construct the neurons or the neural network: digital electronics, analogue electronics, biochemical elements or optical elements.
This head group should contain no documents; all documents should fall in one of its lower subgroups.
In patent documents, the following words/expressions are often used as synonyms:
- "hardware", "technology", "implementation" and "physical"
This place covers:
Using real biological neurons from a living being implemented on a substrate. These neurons can be externally activated and read out. The interconnections can be fixed or they can be allowed to grow and evolve.
Attention is drawn to the following places, which may be of interest for search:
Biomolecular computers |
In patent documents, the following words/expressions are often used as synonyms:
- "neurochip", "biochip" and "wetware"
This place covers:
Neurons or interconnections implemented in dedicated digital electronics.
Attention is drawn to the following places, which may be of interest for search:
Neurons implemented using standard electronic digital computers |
In patent documents, the following words/expressions are often used as synonyms:
- "electronic neuron", "digital", "numeric", "neuromorphic" and "synaptronic"
This place covers:
Neurons or interconnections implemented using analogue electronics, including mixed-signal or hybrid analogue-digital electronics.
Typical examples of such neural network realisations: neuromorphic chips, neuromorphic circuits, neuromorphic systems, neurons or synapses implemented with memristors, with memristive systems, or with non-volatile memories (NVM) typically arranged in arrays, e.g. in crossbar (XB) arrays.
In patent documents, the following words/expressions are often used as synonyms:
- "analogue" and "analog"
This place covers:
Neurons or interconnections implemented in dedicated optical components.
This place covers:
Neurons or neural networks using electro-optical, acousto-optical or opto-electronic components.
Attention is drawn to the following places, which may be of interest for search:
Hybrid optical computers in general |
In patent documents, the following words/expressions are often used as synonyms:
- "electro-optical", "acousto-optical" and "opto-electronic"
This place covers:
Means and methods of training or learning the neural networks. For specific training methods or algorithms see the different subgroups.
Where the machine learning relates to learning methods within neural networks, classification should be made in group G06N 3/08 only.
In patent documents, the following words/expressions are often used as synonyms:
- "training or learning neural network", "evolving or adapting neural network" and "optimizing neural network"
This place covers:
During the learning or training process of the neural network, not only the weights of the synapses but also the architecture of the neural network is changed, even if only temporarily. This can involve adding/deleting/silencing neurons or adding/deleting/silencing connections between the neurons.
When during the training process it becomes clear that the size/capacity of the neural network is not sufficient, additional neurons or connections can be added to the network after which the training can resume. When it is found that certain neurons are not used or have no influence, they can be removed (pruning). Such modifications can be part of a search for optimal architectures (neural architecture search). Neurons or connections can also be silenced for regularisation (dropout/dropconnect) and improving the network's generalisation.
This place covers:
Training method whereby the synapses of the neurons are adapted depending on the difference between the actual output of the neural network and the desired output. This difference is used to adapt the weights of the synapses with a mathematical method that back-propagates the error from the higher layers to the lower layers of the neural network. It is mainly used in multilayer neural networks.
Typical examples of such learning or training methods: feedback alignment, automatic differentiation, backprop, error-backpropagation, backward propagation of errors based on gradient ascent or gradient descent (e.g. stochastic or minibatch gradient descent, Adagrad, Adam, RMSprop).
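The back-propagation of the output error through the layers can be sketched for a minimal 1-1-1 network with a tanh hidden unit, trained by gradient descent on a single example (all values are illustrative assumptions):

```python
import math

w1, w2 = 0.5, 0.5          # input-to-hidden and hidden-to-output weights
x, target, lr = 1.0, 0.8, 0.5

def forward(w1, w2, x):
    h = math.tanh(w1 * x)   # hidden activation
    return h, w2 * h        # linear output

losses = []
for _ in range(50):
    h, y = forward(w1, w2, x)
    err = y - target
    losses.append(0.5 * err * err)
    # Backward pass: propagate the error from the output layer
    # back to the input layer via the chain rule.
    grad_w2 = err * h
    grad_w1 = err * w2 * (1 - h * h) * x   # tanh'(a) = 1 - tanh(a)^2
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1
```

The squared error shrinks at every pass, which is the behaviour the group describes: the output/target difference drives the weight adaptation in all layers.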
In patent documents, the following words/expressions are often used as synonyms:
- "backprop" and "backpropagation"
This place covers:
The use of evolutionary algorithms for creating an optimally functioning neural network, such as evolutionary programming, genetic algorithms, genetic programming, evolution strategies, etc.
Attention is drawn to the following places, which may be of interest for search:
Evolutionary algorithms, e.g. genetic algorithms or genetic programming |
In patent documents, the following words/expressions are often used as synonyms:
- "evolutionary", "Darwinistic", "genetic algorithm", "evolutionary programming", "genetic programming" and "evolution strategies"
This place covers:
Learning or training without direct supervision from unlabelled data. Neural networks are created, and then it is observed how they function in the real world. As a result of global functioning, the neural network is further adapted. No sets of ground truth data are necessary, and input data are, for example, clustered.
Typical examples of non-supervised or unsupervised methods: competitive learning, self-organising maps (SOM), self-organising feature maps (SOFM), Kohonen maps, topological maps, neural gas, neural network clustering, anomaly detection, contrastive divergence algorithms, expectation-maximisation (EM), spike-timing-dependent plasticity (STDP), variational inference, wake-sleep algorithms, Hebbian learning, Hebb's rule, Oja's rule, BCM rule.
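The competitive-learning idea listed above can be sketched with two prototype values that compete for unlabelled one-dimensional inputs (an online k-means flavour; the data and learning rate are illustrative assumptions):

```python
import random

random.seed(0)
# Unlabelled data drawn from two clusters around -1 and +1.
data = ([(-1.0 + random.uniform(-0.1, 0.1),) for _ in range(50)]
        + [(1.0 + random.uniform(-0.1, 0.1),) for _ in range(50)])
random.shuffle(data)

protos = [-0.5, 0.5]       # initial prototypes
lr = 0.2
for (x,) in data:
    # The closest prototype "wins" and moves toward the input;
    # no ground-truth labels are used anywhere.
    winner = min(range(2), key=lambda k: abs(x - protos[k]))
    protos[winner] += lr * (x - protos[winner])
```

After one pass the prototypes have drifted to the two cluster centres, i.e. the input data have been clustered without supervision.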
In patent documents, the following words/expressions are often used as synonyms:
- "non-supervised neural network" and "unsupervised neural network"
This place covers:
Learning or training with partial or artificial supervisory signals, e.g. using a training set with a limited number of ground truth labelled data, or by generating pseudo-labels from an unlabelled dataset.
Typical examples of such learning or training methods: barely-supervised learning, co-training, pseudo-labelling, data augmentation, learning with noisy labels, consistency regularisations, FixMatch, MixMatch.
This place covers:
Using a labelled data set to train or learn neural network models. These datasets are designed to "supervise" the training or learning of models into classifying, regressing or generally predicting data or outcomes accurately.
Typical examples of such learning or training methods: empirical risk minimisation (ERM), structural risk minimisation (SRM), MixUp, instance-based learning, neural network classifiers, neural network regressors, learning vector quantisation, training-validation-test frameworks.
This place covers:
Learning or training relying on human interactions in general (human-in-the-loop), such as active learning techniques querying a user/oracle/teacher to label selected data. The various strategies for interacting/querying humans often aim at minimising the cost of manual labelling, or maximising the accuracy of the predictions.
This place covers:
Techniques that enable an agent to learn a policy in an interactive environment by trial and error using feedback from its actions and experiences. The policy optimises a reward/value/utility function, or other reinforcement signals. Reinforcement learning is often modelled as a Markov decision process (MDP). Neural networks may, e.g., be used to represent the policy, or approximate reinforcement signals.
Typical examples or features of such learning or training methods: policy gradient, policy optimisation, policy search, reinforcement learning agents, multi-agent systems, actor-critic, advantage functions, reward functions, utility functions, value functions, Q-values, deep Q-networks (DQN), Q-learning, imitation learning, temporal difference (TD) learning, multi-armed bandit (MAB), A3C algorithms, DDPG algorithms, Dyna algorithms, PPO algorithms, SARSA algorithms.
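The trial-and-error loop described above can be sketched with tabular Q-learning on a toy corridor environment (states 0..4, reward only at the rightmost state; environment and constants are illustrative assumptions):

```python
import random

random.seed(1)
n_states, alpha, gamma, eps = 5, 0.5, 0.9, 0.2
# Q-table: one value per (state, action); actions: 0 = left, 1 = right.
q = [[0.0, 0.0] for _ in range(n_states)]

for _ in range(500):                       # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda a: q[s][a])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update toward reward + discounted future value.
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
        s = s_next

# The learned greedy policy: best action in each non-terminal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(n_states - 1)]
```

The reward signal alone, fed back through the TD update, is enough to make the greedy policy move right in every state.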
This place covers:
Learning method which aims to trick machine learning models by providing deceptive/distorted/noisy inputs. This includes both the generation and detection of adversarial examples, which are inputs specially created to deceive classifiers, as well as techniques for improving the neural network's robustness to adversarial/poisoning attacks.
Typical examples or features of such learning or training methods: Wasserstein losses, earth mover's distances, adversarial regularisations, fast gradient sign methods (FGSM), projected gradient descent (PGD), Carlini & Wagner algorithms, black-box or white-box attacks, Byzantine attacks, data poisoning, model extraction, model reverse engineering, model stealing.
This place covers:
Techniques that store knowledge gained while solving a given problem or task, and reuse the learned model on another problem or task.
Typical examples or features of such learning or training methods: catastrophic forgetting mitigation, continual learning, incremental learning, lifelong learning, knowledge distillation, teacher-student learning, domain adaptations, knowledge transfers, zero-shot learning, one-shot learning, few-shot learning, multitask learning, common representations, joint representations, shared representations.
This place covers:
Techniques wherein features of the neural network model itself or its learning/training enable a distributed or parallel implementation.
Typical examples or features of such learning or training methods: decentralised learning, collaborative learning, federated averaging (FedAvg), parallel gradient ascent or descent, Downpour stochastic gradient descent (D-SGD), subnet training, DistBelief, data parallelism, model parallelism, parameter server, model replicas, data shards, cloud-based learning, client/server-based learning, edge machine learning, MapReduce for machine learning.
This place covers:
Process of finding the right combination of hyperparameter values to achieve maximum performance on the data in a reasonable amount of time.
Learning algorithms that learn from other learning algorithms. For example, meta-data associated with learning techniques are input to another (meta-)learner in order to improve their performance or even induce/learn the (meta-)learning itself.
Typical examples or features of such learning or training methods: automated machine learning (AutoML), neural architecture search (NAS), Bayesian optimisation, algorithm selection, end-to-end learning.
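The hyperparameter-search process can be sketched as an exhaustive grid search over a hypothetical validation-error surface (the error function and parameter names below are invented for illustration):

```python
import itertools

def validation_error(lr, depth):
    # Hypothetical error surface with its optimum at lr=0.1, depth=3;
    # in practice this would be a model trained and evaluated per combo.
    return (lr - 0.1) ** 2 + (depth - 3) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}

# Evaluate every combination and keep the one with the lowest error.
best = min(itertools.product(grid["lr"], grid["depth"]),
           key=lambda combo: validation_error(*combo))
```

More sophisticated methods from the list above (e.g. Bayesian optimisation or NAS) replace the exhaustive loop with a model-guided search, but the objective — minimise validation error over hyperparameter combinations — is the same.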
This place covers:
Neural networks not implemented in specific special-purpose electronics but simulated by a program on a standard general-purpose digital computer.
Attention is drawn to the following places, which may be of interest for search:
Computer simulations in general |
In patent documents, the following words/expressions are often used as synonyms:
- "purely-software neural network", "neural network program" and "simulation of neural networks"
This place covers:
Specific software for specifying or creating neural networks to be simulated on a general-purpose digital computer, including specific graphical user interfaces for this purpose.
Attention is drawn to the following places, which may be of interest for search:
General graphical user interfaces | |
Programs for computer-aided design |
This place covers:
Computation based on the principles of biological genetic processing (mutation, recombination, reproduction, selection of the fittest).
Attention is drawn to the following places, which may be of interest for search:
Genetic algorithms for training neural networks |
In patent documents, the following words/expressions are often used as synonyms:
- "evolutionary programming", "Darwinistic programming", "genetic programming" and "evolution strategies"
This place covers:
Information processing using DNA, whereby a computational problem (e.g. optimisation) is represented with, or encoded on DNA molecules which are manipulated in such a way that at least one DNA molecule is produced that represents a solution to the problem.
Attention is drawn to the following places, which may be of interest for search:
Biological genetic engineering in general | |
Computer memory using DNA |
In patent documents, the following words/expressions are often used as synonyms:
- "DNA computer" and "DNA chips"
This place covers:
Software simulations using the principles of evolution as exhibited in real biological systems. For example, genetic algorithms (GA) involve creating a number of possible solutions (chromosomes or individuals), testing the different solutions by evaluating a fitness, performance or score function (representing an optimisation problem such as classification, clustering or regression), selecting the best-performing ones, creating a new set of possible solutions from these using reproduction and mutation, and reiterating until an optimal or sufficiently well-performing solution is found.
Typical examples of evolutionary algorithms (EA): gene expression programming (GEP), evolutionary programming (EP), memetic algorithms (MA), evolution strategies (ES), covariance matrix adaptation evolutionary strategies (CMA-ES), Darwinistic programming, differential evolution (DE), estimation of distribution algorithms (EDA), probabilistic model-building genetic algorithms (PMBGA), co-evolution, learning classifier systems (LCS), niche-based EA, island-based EA, diffusion grid EA, cellular EA, parallel EA, distributed EA, fine-grained EA, coarse-grained EA, multi-objective EA (MOEA), non-dominated sorting GA (NSGA).
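The create/evaluate/select/reproduce/mutate loop described above can be sketched with a minimal genetic algorithm maximising the number of 1-bits in a string ("one-max"; population sizes and rates are illustrative assumptions):

```python
import random

random.seed(0)
N_BITS, POP, GENS = 20, 30, 60

def fitness(ind):
    # Score function: count of 1-bits; the optimum is all ones.
    return sum(ind)

# Create an initial population of candidate solutions (chromosomes).
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # selection of the fittest
    children = []
    while len(children) < POP - len(parents):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, N_BITS)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        i = random.randrange(N_BITS)          # point mutation (prob. 0.2)
        child[i] ^= random.random() < 0.2
        children.append(child)
    pop = parents + children                  # next generation

best = max(pop, key=fitness)
```

Keeping the parents in the next generation (elitism) guarantees the best fitness never decreases from one iteration to the next.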
Classification in this group is not expected when evolutionary algorithms are used in training neural networks. Applications of whatever sort just using evolutionary algorithms with no description of the evolutionary algorithm itself are to be classified in the relevant application field only.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Evolutionary algorithms used in training of neural networks |
In patent documents, the following words/expressions are often used as synonyms:
- "evolutionary programming", "Darwinistic programming", "genetic programming", "evolution strategies", "differential evolution", "estimation of distribution algorithm", "gene expression programming", "memetic algorithm", "co-evolution", "learning classifier systems", "cellular genetic algorithm", "parallel, distributed, fine-grained or coarse-grained genetic algorithm"
This place covers:
Computer systems using knowledge bases or creating knowledge bases.
In particular, specific subjects are classified in the subgroups as follows:
Attention is drawn to the following places, which may be of interest for search:
Information retrieval; Database structures therefor; File system structures therefor |
In this place, the following terms or expressions are used with the meaning indicated:
knowledge base | set of representations of facts about the system to be controlled and its environment |
knowledge-based agent | a software module that uses a knowledge base to implement control decisions |
In patent documents, the following words/expressions are often used as synonyms:
- "knowledge base", "knowledge model", "knowledge graph", "semantic network", and "reasoning model"
This place covers:
Techniques for searching or exploring the solution space of an optimisation problem, such as dynamic programming, branch-and-bound, breadth-first search, depth-first search, shortest path algorithms, techniques based on tree or graph representations (e.g. tree- or graph-traversal, Monte Carlo tree search), first-order logic (e.g. automatic theorem proving), heuristics or models based on empirical knowledge. The optimisation problem is typically defined by one or more objectives or constraints. The outcome may represent an optimal solution, or an indication that the problem can be solved or that its solutions can be verified. Such techniques are normally used when classic methods fail to find an exact solution in a short time.
Typical examples of such techniques: annealing techniques, Monte Carlo search techniques, adaptive search techniques, exploration-exploitation techniques, constraint solvers, constraint optimisations, empirical optimisations, replica methods, predicate logic, iterative dichotomiser 3 (ID3), C4.5 algorithms, classification and regression trees (CART), decision trees, isolation or random forests, good old-fashioned artificial intelligence (GOFAI) techniques.
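As an illustration only, a minimal sketch of branch-and-bound, one of the solution-space exploration techniques named above, applied to the 0/1 knapsack problem; the function name and problem instance are illustrative assumptions:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack: explore include/exclude decisions,
    pruning branches whose optimistic bound cannot beat the incumbent."""
    n = len(values)
    # Sort items by value density so the fractional relaxation bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # Optimistic bound: fill the remaining capacity fractionally.
        while i < n and w[i] <= cap:
            cap -= w[i]; val += v[i]; i += 1
        if i < n:
            val += v[i] * cap / w[i]
        return val

    def branch(i, cap, val):
        nonlocal best
        if i == n:
            best = max(best, val)
            return
        if bound(i, cap, val) <= best:
            return                          # prune: cannot improve on incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])   # take item i
        branch(i + 1, cap, val)                      # skip item i

    branch(0, capacity, 0)
    return best
```

The pruning step is what distinguishes such techniques from exhaustive enumeration: whole subtrees of the solution space are discarded once their bound shows they cannot contain the optimum.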
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Video games | |
Complex mathematical operations for solving equations | |
Computer-aided design [CAD] for design optimisation, verification or simulation | |
Forecasting or optimisation specifically adapted for administration or management purposes | |
ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics |
In patent documents, the following words/expressions are often used as synonyms:
- "dynamic search" and "adaptive search"
- "branch-and-bound" and "decision trees"
- "constraint solver" and "constraint optimization"
- "empirical optimization" and "sample average approximation"
This place covers:
Automatic theorem proving; constraint satisfaction; probability consistency check in a decision problem.
In patent documents, the following words/expressions are often used as synonyms:
- "logical consistency" and "automatic proving" and "formula checker"
- "verification" and "determination of probability" and "formula converter"
This place covers:
Knowledge-based models based on specific symbolic or knowledge representations, knowledge engineering, or knowledge acquisition. Typical examples of representations: knowledge bases, knowledge graphs, knowledge repositories, knowledge corpus, predicates, ontologies, taxonomies, semantic networks and other graph-based representations of knowledge.
Where the knowledge-based models are within neural networks, classification should be made in group G06N 3/042 only.
Attention is drawn to the following places, which may be of interest for search:
Indexing in information and retrieval |
In patent documents, the following words/expressions are often used as synonyms:
- "formalisation of a problem", "formalism for knowledge representation", "expressivity", "semantics of a formalism", "elicitation of knowledge action", "rules, ontologies, frames, logics", "description logic", "semantic web", "declarative", "formula converter", "knowledge graph", and "semantic network"
This place covers:
Knowledge-based models based on symbolic or knowledge representations including rules (causal, logic, propositional, temporal, if-then-else or antecedent-consequent), or knowledge engineering/acquisition by extracting rules. For example, rule extraction, rule induction, rule elicitation, rule maintenance, rule engines, Apriori algorithm, frequent pattern or itemset mining, association rules.
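For illustration only, a minimal sketch of Apriori-style frequent itemset mining, one of the rule-extraction techniques listed above; the example transactions and support threshold are illustrative assumptions:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style level-wise search: a k-itemset can only be frequent
    if all of its (k-1)-subsets are frequent (the Apriori property)."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in sorted(items)]
    result = {}
    while level:
        frequent = [s for s in level if support(s) >= min_support]
        result.update({s: support(s) for s in frequent})
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets,
        # then prune candidates with an infrequent subset.
        candidates = {a | b for a, b in combinations(frequent, 2)
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if all(c - {i} in result for i in c)]
    return result

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"milk", "butter", "bread"}, {"milk"}]
found = frequent_itemsets(baskets, min_support=2)
```

Frequent itemsets such as these are the raw material from which association rules (antecedent-consequent pairs) are derived.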
This place covers:
Knowledge systems using frames as knowledge representation, including attributes and slots.
Rule systems for specific applications are classified in the field of application, unless the invention is still about the rules formalism and/or the extraction and maintenance process itself.
In patent documents, the following words/expressions are often used as synonyms:
- "rules extraction", "elicitation", "knowledge discovery", "rules engine", "rules maintenance", "rules consistency" and "rules priority"
This place covers:
Symbolic inference methods and devices. Programs with symbolic reasoning capabilities using knowledge. Inference systems.
Attention is drawn to the following places, which may be of interest for search:
Adaptive control |
In patent documents, the following words/expressions are often used as synonyms:
- "inference", "reasoning", "expert system", "instantiation, explanation, recommendation", "aid to diagnosis", "pattern matching", "case-based reasoning", "deduction", "analogy", "abnormal condition detection", "problem solving, planning" and "question answering"
This place covers:
A kind of logical inference that refers to the process of arriving at an explanatory hypothesis.
Abduction seeks the most probable explanation for a fact, given the available premises.
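As an illustration only, a minimal sketch of probabilistic abduction — selecting the hypothesis with the highest posterior for an observed fact via Bayes' rule (up to the normalising constant); the hypotheses and probabilities are illustrative assumptions:

```python
def abduce(priors, likelihoods, observation):
    """Return the hypothesis maximising P(h) * P(observation | h),
    i.e. the most probable explanation for the observed fact."""
    scores = {h: priors[h] * likelihoods[h].get(observation, 0.0) for h in priors}
    return max(scores, key=scores.get)

priors = {"flu": 0.1, "cold": 0.3}
likelihoods = {"flu": {"fever": 0.9}, "cold": {"fever": 0.2}}
# flu scores 0.1 * 0.9 = 0.09; cold scores 0.3 * 0.2 = 0.06.
```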
Attention is drawn to the following places, which may be of interest for search:
Empirical guesses or heuristics |
In patent documents, the following words/expressions are often used as synonyms:
- "hypothetical reasoning", "explanatory hypothesis", "disambiguation", "reasonable guess" and "most possible explanation"
This place covers:
An inference mechanism that works backwards from the conclusion.
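Purely as an illustration, a minimal sketch of backward chaining over if-then rules: to prove a goal, find a rule concluding it and recursively prove that rule's premises. The rule base is an illustrative assumption (cycle detection is omitted for brevity):

```python
def backward_chain(goal, rules, facts):
    """Goal-driven inference: prove `goal` from `facts` by working
    backwards through (premises, conclusion) rules. Assumes an acyclic
    rule base; a production system would also track visited goals."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, rules, facts)
                                      for p in premises):
            return True
    return False

rules = [(["rain"], "wet_ground"), (["wet_ground"], "slippery")]
```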
Attention is drawn to the following places, which may be of interest for search:
Automatic theorem proving |
Game-theory based applications are classified in their field of application when possible.
In patent documents, the following words/expressions are often used as synonyms:
- "backwards chaining, backwards reasoning, backwards induction", "retrograde analysis", "goal, hypothesis, goal driven", "conclusion, premises", "consequent, antecedent", "game theory", "modus ponens" and "depth-first strategy"
This place covers:
Expert systems implemented in distributed programming units or multiple interacting intelligent autonomous components, for example, multi-agent systems.
In patent documents, the following words/expressions are often used as synonyms:
- "multi-agents", "cognitive agent", "autonomous", "decentralization", "self-steering", "software agents" and "swarm"
This place covers:
Inference or reasoning model that provides or supports explanations or interpretations of the inferences or reasoning to the user in the context of diagnostic or decision support.
In patent documents, the following words/expressions are often used as synonyms:
- "explanation", "anomaly", "decision", "diagnostic", "fault", "abnormal" and "alarm"
This place covers:
Inference or reasoning that starts with the available data and makes inferences to derive more data. The inferences are performed forwards towards a goal by repetitive application of the modus ponens.
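As an illustration only, a minimal sketch of forward chaining by repeated application of the modus ponens, as described above; the rule base is an illustrative assumption:

```python
def forward_chain(rules, facts):
    """Data-driven inference: whenever all premises of a rule hold,
    assert its conclusion; repeat until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)       # modus ponens fires
                changed = True
    return facts

rules = [(["rain"], "wet_ground"), (["wet_ground"], "slippery")]
```

Production systems such as the Rete algorithm mentioned below avoid re-testing every rule on every pass, but the fixed-point behaviour is the same.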
In patent documents, the following words/expressions are often used as synonyms:
- "modus ponens", "iterations", "if-then clause", "data driven" and "Rete algorithm"
This place covers:
Transformation of exact inputs into fuzzy inputs with membership functions (fuzzification). The fuzzified inputs are processed in a fuzzy inference machine with fuzzy if-then rules. Depending on the degrees of membership, several rules are fired in parallel. The consequents of each rule are aggregated into fuzzy outputs, which may or may not be de-fuzzified.
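For illustration only, a minimal sketch of the fuzzification, parallel rule firing and de-fuzzification steps described above, using singleton consequents and a weighted-average de-fuzzifier; the membership functions and rule consequents are illustrative assumptions:

```python
def triangular(a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def fuzzy_fan_speed(temp):
    """Fuzzify the temperature, fire two rules in parallel, then
    de-fuzzify the aggregated output by weighted average."""
    warm = triangular(15, 25, 35)(temp)     # fuzzification
    hot = triangular(25, 35, 45)(temp)
    # Rules: IF warm THEN speed = 40; IF hot THEN speed = 90.
    weights = [(warm, 40.0), (hot, 90.0)]
    total = sum(w for w, _ in weights)
    return sum(w * s for w, s in weights) / total if total else 0.0
```

At 30 degrees both rules fire with degree 0.5, so the de-fuzzified output lies midway between the two consequents.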
Where the fuzzy-based models are within neural networks, classification should be made in group G06N 3/043 only.
Attention is drawn to the following places, which may be of interest for search:
Computing arrangements using fuzzy logic |
In patent documents, the following words/expressions are often used as synonyms:
- "membership function", "fuzzification, fuzzy rules, fuzzy expert system", "parallel rules evaluation" and "degree of membership"
This place covers:
Computer systems based on mathematical models that cannot be classified in their application field.
Attention is drawn to the following places, which may be of interest for search:
Neural networks | |
Complex mathematical operations |
When other types of Machine Learning are involved, also classify in G06N 20/00.
In patent documents, the following words/expressions are often used as synonyms:
- "probabilities", "statistics", "stochastic", "chaos", "non-linear function", "fuzzy logic", "formalism", "applied mathematics" and "systems simulation"
This place covers:
Probabilistic graphical model (PGM) is a probabilistic model for which a graph expresses the conditional dependence structure between random variables, such as belief networks, Bayesian networks, Markov models, Markov decision process (MDP), conditional random fields (CRF), Markov chain Monte Carlo (MCMC). Typical examples of graphical models: structured probabilistic models, Bayes networks, directed acyclic graph models, belief propagation, influence diagrams, latent Dirichlet allocation (LDA), Bayes classifiers, Bayesian optimisation, Ising models, Potts models, spin-glass models, Markov chains, Markov networks, Markov random fields (MRF).
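As an illustration only, a minimal two-node Bayesian network (Rain → WetGrass) with inference by enumeration; the probability tables are illustrative assumptions:

```python
def joint(rain, wet):
    """Joint probability of the tiny network Rain -> WetGrass:
    P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)."""
    p_rain = {True: 0.2, False: 0.8}
    p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                        False: {True: 0.2, False: 0.8}}
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def posterior_rain_given_wet():
    """P(Rain | WetGrass) by Bayes' rule, enumerating the joint."""
    num = joint(True, True)
    den = joint(True, True) + joint(False, True)
    return num / den
```

The graph structure is what makes the model a PGM: the joint factorises into a prior and a conditional table, and inference exploits that factorisation.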
Classification in this group is not expected when probabilistic graphical models are used in neural networks (e.g. Boltzmann machines).
Applications of whatever sort just using Bayesian or Markov models with no description of the Bayesian or Markov model itself are to be classified in the relevant application field.
Learning of unknown parameters of the network should also be classified in G06N 20/00.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Video games | |
Digital data processing | |
Information retrieval | |
Pattern recognition | |
Classification of content in image-based pattern recognition | |
Speech recognition |
Attention is drawn to the following places, which may be of interest for search:
Recurrent networks, e.g. Hopfield networks | |
Neural networks having a probabilistic aspect | |
Generative networks |
In patent documents, the following words/expressions are often used as synonyms:
- "Bayesian network" and "Bayes network" and "belief network" and "generalised Bayesian network"
- "directed acyclic graphical model" and "DAG" and "probabilistic graphical model" and "probability node"
- "beliefs propagation" and "influence diagram" and "conditional dependencies" and "probability function" and "probability density function" and "Bayes theorem"
- "Markov model" and "Markov chain" and "Markov network" and "Markov random field" and "Markov decision process" and "conditional random fields"
This place covers:
Computer systems based on fuzzy logic.
Classification in this group is not expected when fuzzy logic is used in combination with neural networks, nor when fuzzy logic is used in fuzzy inferencing.
Applications of whatever sort just using fuzzy logic with no description of the fuzzy logic itself are to be classified in the relevant application field.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Adaptive control systems |
In patent documents, the following words/expressions are often used as synonyms:
- "fuzzy logic" and "tuning parameters"
This place covers:
Physical realizations of computer systems based on mathematical models.
In patent documents, the following words/expressions are often used as synonyms:
- "analogue" and "implementation"
This place covers:
Fuzzy systems simulated on general purpose computers.
Examples of places where the subject matter of this place is covered when specially adapted, used for a particular purpose, or incorporated in a larger system:
Simulation in game playing | |
Computer aided design (CAD) | |
Simulation for the purpose of optimisation | |
Telecom applications using simulation | |
Computer aided chemistry components design | |
Network architectures or network communication protocols for network security | |
Network arrangements, protocols or services for supporting real-time applications in data packet communication | |
Network arrangements or protocols for supporting network services or applications |
This place covers:
Computer-based systems using chaos or non-linear models.
Classification in this group is not expected when chaos models or non-linear models are used in neural networks.
Attention is drawn to the following places, which may be of interest for search:
Neural networks using chaos or fractal principles |
In patent documents, the following words/expressions are often used as synonyms:
- "chaos theory", "non-linear", "stochastic" and "fractal"
This place covers:
Computation performed by a combination of atomic or subatomic particles where the interactions are no longer described by macroscopic physics but by the theory of quantum mechanics.
Attention is drawn to the following places, which may be of interest for search:
Manufacture or treatment of nanostructures | |
Nanotechnology for information processing, storage or transmission, e.g. quantum computing or single electron logic | |
Optical computing devices for processing non-digital data | |
Photonic quantum communication | |
Quantum cryptography | |
Devices using superconductivity |
In this place, the following terms or expressions are used with the meaning indicated:
quantum-mechanical phenomena | covers the quantum phenomena of superposition, coherence, decoherence, entanglement, nonlocality and teleportation |
In patent documents, the following words/expressions are often used as synonyms:
- "quantum computer", "qubit", "quantum bit", "superconducting bits", "Josephson junction" and "SQUID"
This place covers:
Models or logical architectures, as opposed to the hardware architectures covered by group G06N 10/40, of quantum computing, independent of whether or not a physical realisation is also disclosed. In particular, general logical/physical models of quantum computing, e.g. related to quantum circuit, are classified in group G06N 10/20.
The physical realisations of a specific model (see examples below) are classified in both G06N 10/20 and G06N 10/40.
A "quantum circuit" is a sequence of quantum logic gates, e.g. quantum gate array, quantum register or quantum random access memory. It should be noted that these are terms of art representing quantum models and should not be confused with physical circuit versions, e.g. electrical circuitry, in general. Quantum circuits are typically obtained via "quantum circuit synthesis", "quantum circuit decomposition" or "quantum compilers" (also not to be confused with "classical" compilers).
Typical examples of quantum gates: Clifford gates, controlled gates, e.g. cX, cY, cZ, CNOT, Hadamard gate, Pauli-X/Y/Z gates, SWAP gate, T gate, i.e. pi/8, Toffoli gate, i.e. CCNOT, Deutsch gate, Ising XX/YY/ZZ coupling gates, phase shift gates.
Other typical models of quantum computing: adiabatic quantum computation [AQC], topological quantum computing, quantum simulations, e.g. universal quantum simulator, quantum state machines, quantum cellular automata, quantum Turing machines [QTM].
Models wherein the units of quantum information are based on d-level quantum systems (qudits), e.g. using qutrits (d=3).
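Purely as an illustration of the quantum circuit model described above, a minimal state-vector simulation of a two-gate circuit (a Hadamard followed by a CNOT) that prepares a Bell state from |00⟩; the gate matrices are standard, the variable names are illustrative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)                                   # identity on the idle qubit
CNOT = np.array([[1, 0, 0, 0],                  # controlled-NOT: qubit 0
                 [0, 1, 0, 0],                  # controls qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # |00>
state = np.kron(H, I) @ state                    # Hadamard on qubit 0
state = CNOT @ state                             # entangling gate
# state is now the Bell state (|00> + |11>) / sqrt(2)
```

A claim on such a logical gate sequence (the model) belongs in G06N 10/20; a claim on, say, a superconducting circuit realising the gates belongs in G06N 10/40.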
This place covers:
Physical realisations or hardware architectures, as opposed to the logical architectures covered by group G06N 10/20, for quantum computing, independent of whether or not a model of quantum computing is also disclosed. The execution of models of quantum computing on a specific physical realisation (see examples below) is classified in both G06N 10/20 and G06N 10/40.
Physical realisations typically fall in one of the following categories: superconducting quantum computers, e.g. based on charge qubits, flux qubits, phase qubits, Transmon, Xmon, trapped ion/atom quantum computers, e.g. based on Paul ion trap, optical lattices, spin-based quantum computers, e.g. based on quantum dots, NMR, NMRQC, nitrogen-vacancy centres, fullerenes, Kane or Loss-DiVincenzo quantum computers, based on quantum optics, e.g. linear optical quantum computers.
Examples of quantum components and qubit manipulations: qubit coupling, control or readout, storing quantum states, quantum processor, quantum bus, quantum memory, quantum network (for computations), quantum repeater (for computations).
Attention is drawn to the following places, which may be of interest for search:
Nanotechnology for information processing, storage or transmission, e.g. quantum computing or single electron logic | |
Superconducting quantum bits per se |
This place covers:
All quantum algorithms, including but not limited to quantum optimisation (see examples below). In particular, quantum computing algorithms for specific problems, e.g. NP problems, are classified in group G06N 10/60. Algorithms based on quantum optimisation also include the so-called "hybrid quantum-classical algorithms". The physical realisations of a specific algorithm (see examples below) are classified in both G06N 10/40 and G06N 10/60.
Quantum algorithms typically fall in one of the following categories:
- based on amplitude amplification, e.g. Grover's algorithm;
- based on Fourier or Hadamard transforms, e.g. Shor's algorithm, Simon's algorithm, Deutsch-Josza algorithm, quantum phase estimation algorithm [QPEA] or quantum eigenvalue estimation algorithm;
- quantum optimisation, e.g. quantum annealing, Ising machines, variational quantum eigen-solver [VQE], quantum alternating operator ansatz [QAOA], quantum approximate optimisation algorithm, including hybrid quantum-classical algorithms, e.g. quantum machine learning, machine learning based quantum algorithms;
- quantum walks.
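For illustration only, a minimal classical simulation of one amplitude-amplification step of Grover's algorithm on two qubits; for N = 4 a single iteration already drives the marked state's probability to 1. The function name is an illustrative assumption:

```python
import numpy as np

def grover_2qubit(marked):
    """One Grover iteration on a 2-qubit (N = 4) search space:
    oracle phase flip followed by inversion about the mean."""
    n = 4
    state = np.full(n, 1 / np.sqrt(n))   # uniform superposition (Hadamards)
    state[marked] *= -1                   # oracle: flip phase of marked state
    state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return state
```

For larger search spaces roughly (pi/4) * sqrt(N) such iterations are needed, which is the source of the quadratic speed-up over classical search.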
This place covers:
Arrangements to achieve fault-tolerant quantum computations. Typical solutions rely on the introduction of ancillary, i.e. additional or auxiliary, qubits, such as stabiliser codes, but this place also covers ancilla-free solutions, i.e. solutions where no additional qubits are necessary. Other examples: bit flip codes, sign flip codes, Shor code, topological codes, e.g. surface codes, planar codes, toric codes.
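As an illustration only, a classical analogue of the three-qubit bit-flip repetition code: the logical bit is copied into three physical bits and a majority vote corrects any single bit-flip error (the quantum version measures syndromes rather than the data qubits themselves, which is omitted here):

```python
def encode(bit):
    """Repetition code: copy the logical bit into three physical bits."""
    return [bit, bit, bit]

def correct(bits):
    """Majority vote recovers the logical bit despite one bit-flip error."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1            # introduce a single bit-flip error
recovered = correct(codeword)
```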
Arrangements for assessing the quality of quantum computers, whether characterised by a metrics or figure of merits, e.g. quantum fidelity, quantum volume, quantum purity, error rate, or by its calculation or measurement, e.g. randomized benchmarking [RB], cross-entropy benchmarking [CEB], random circuit sampling [RCS].
This place covers:
All arrangements for quantum programming, such as quantum instruction sets, quantum software development kits, or quantum programming languages. Typical examples: Quil, Qiskit, or QCL.
Platforms for simulating or accessing the quantum computers, such as cloud-based quantum computing. Typical examples: IBM Q Experience, Quantum Inspire, Azure Quantum, Amazon Braket, Rigetti Quantum Cloud Services, Quantum Playground.
This place covers:
Methods or apparatus giving a machine (in its broadest sense) the ability of adapting or evolving according to experience gained by the machine. A machine in its broadest sense is understood as either an "abstract machine" or a physical one (i.e. a computer).
Where the machine learning relates to learning methods within neural networks, classification should be made in group G06N 3/08 only.
Attention is drawn to the following places, which may be of interest for search:
Computing arrangements using neural networks | |
Computing arrangements using knowledge-based models | |
Computing arrangements using fuzzy logic | |
Adaptive control systems | |
Image processing using neural networks | |
Image or video recognition or understanding using machine learning | |
Speech recognition using artificial neural networks |
This place covers:
Machine learning processes where multiple learners (i.e. learning algorithms) are trained to solve the same problem, to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
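Purely as an illustration, a minimal majority-vote ensemble combining several weak learners into one prediction, as described above; the toy threshold "learners" are illustrative assumptions:

```python
from collections import Counter

def majority_vote(learners, x):
    """Ensemble prediction: each learner votes, the most common label wins."""
    votes = [predict(x) for predict in learners]
    return Counter(votes).most_common(1)[0][0]

# Three toy 'learners' that threshold the input at different points.
learners = [lambda x: int(x > 2), lambda x: int(x > 4), lambda x: int(x > 6)]
```

Bagging, boosting and stacking differ in how the constituent learners are trained and weighted, but all combine multiple learners along these lines.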
This place covers:
This group is residual to the whole of the subclass, i.e. it covers subject matter which falls under the scope of G06N and which is not covered by its groups.
Therefore this main group should be rarely used or not used for classification.
Whenever a new computing technology is identified, which is not covered by the other main groups of G06N, it is recommended to create a new subgroup here for that new subject.
This place covers:
Systems where the computational elements are implemented on the molecular level using inorganic molecules, e.g. molecular switches.
Classification in this group is not expected when computational elements implement quantum computers.
This place does not cover:
Computing based on biomolecules |
Attention is drawn to the following places, which may be of interest for search:
Quantum computers |