US 10,275,902 C1 (12,951st)
Devices, methods and systems for biometric user recognition utilizing neural networks
Gary R. Bradski, Palo Alto, CA (US)
Filed by MAGIC LEAP, INC., Dania Beach, FL (US)
Assigned to MAGIC LEAP, INC., Dania Beach, FL (US)
Reexamination Request No. 90/019,366, Jan. 3, 2024.
Reexamination Certificate for Patent 10,275,902, issued Apr. 30, 2019, Appl. No. 15/150,042, May 9, 2016.
Claims priority of provisional application 62/159,593, filed on May 11, 2015.
Ex Parte Reexamination Certificate issued on Jun. 27, 2025.
Int. Cl. G06T 7/60 (2017.01); G06K 9/00 (2022.01); G06K 9/46 (2006.01); G06T 7/62 (2017.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 40/18 (2022.01)
CPC G06V 10/82 (2022.01) [G06T 7/62 (2017.01); G06V 10/454 (2022.01); G06V 40/18 (2022.01); G06V 40/197 (2022.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 9-11 are cancelled.
Claim 8 is determined to be patentable as amended.
New claims 12-19 are added and determined to be patentable.
Claims 1-7 were not reexamined.
8. A method of identifying a user of a system, comprising:
analyzing image data [pertaining to a user who is to be identified];
generating shape data [of the user] based on the image data;
analyzing the shape data;
generating general category data [of the user] based on the shape data[, wherein the general category data comprises a plurality of candidate users for the user];
generating narrow category data [from the general category data at least] by comparing [performing a comparison of] the general category data with a characteristic and [of the user and at least by reducing one or more candidate users, that are confused with the image data of the user, from the general category data, based at least in part upon a result of the comparison];
generating a classification decision based on the narrow category data, wherein the characteristic is selected from the group consisting of eyebrow shape and eye shape[; and
prior to generating the narrow category data yet after the general category data has been generated,
tracking, by using a first layer of a network, movement of an eye corresponding to the characteristic; and
modifying, by using the first layer of the network, the image data based at least in part upon a variance that is caused by movement of the eye].
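Amended claim 8 describes a coarse-to-fine identification flow: image data yields shape data, shape data yields a general category (a pool of candidate users), and the pool is narrowed by comparing an eye- or eyebrow-shape characteristic before a classification decision is made. The following is a minimal sketch of that flow only; the function names, the gallery layout, the distance metric, and the thresholds are illustrative assumptions, not anything the patent specifies:

```python
import numpy as np

def generate_general_category(shape_data, gallery, radius=2.0):
    """General category data: every enrolled user whose stored shape vector
    lies within `radius` of the probe's shape data (a pool of candidates)."""
    dists = {uid: float(np.linalg.norm(shape_data - vec)) for uid, vec in gallery.items()}
    return [uid for uid, d in sorted(dists.items(), key=lambda kv: kv[1]) if d <= radius]

def narrow_by_characteristic(candidates, probe_eye_shape, eye_shapes, tol=0.5):
    """Narrow category data: drop candidates whose eye-shape characteristic
    disagrees with the probe's (the claim's reduction of candidate users
    that are confused with the image data of the user)."""
    return [uid for uid in candidates if abs(eye_shapes[uid] - probe_eye_shape) <= tol]

def classify(narrow_candidates):
    """Classification decision: a single surviving candidate, else undecided."""
    return narrow_candidates[0] if len(narrow_candidates) == 1 else None

# Illustrative enrollment data (hypothetical).
gallery = {"a": np.array([0.0, 0.0]), "b": np.array([0.5, 0.0]), "c": np.array([5.0, 5.0])}
eye_shapes = {"a": 1.0, "b": 2.0, "c": 1.0}

candidates = generate_general_category(np.array([0.1, 0.0]), gallery)
decision = classify(narrow_by_characteristic(candidates, 1.1, eye_shapes))
```

Here users "a" and "b" both fall into the general category, and the eye-shape comparison removes "b", leaving a single identification.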
[12. The method of claim 8, wherein modifying the image data comprises removing, from the image data, artifacts that are caused by the movement of the eye.]
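Claim 12 narrows the image-data modification of claim 8 to removing artifacts caused by eye movement. One simple way to sketch this is a variance gate over a burst of frames: frames that deviate strongly from the median frame are treated as motion-smeared and excluded. The variance rule and threshold are illustrative assumptions, not the patent's method:

```python
import numpy as np

def remove_motion_artifacts(frames, var_threshold=0.05):
    """Keep only frames that deviate little from the median frame; frames
    smeared by eye movement show a large per-frame variance against the
    median and are dropped before averaging."""
    frames = np.asarray(frames, dtype=float)
    median = np.median(frames, axis=0)
    per_frame_var = ((frames - median) ** 2).mean(axis=(1, 2))
    return frames[per_frame_var <= var_threshold].mean(axis=0)

still = np.zeros((2, 2))
smeared = np.ones((2, 2))   # stands in for a motion-blurred frame
cleaned = remove_motion_artifacts([still, still, smeared])
```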
[13. The method of claim 8, further comprising:
identifying, using one or more second layers following the first layer in the network and preceding a classifier that generates the classification decision, one or more confusing images that the first layer and a plurality of base layers preceding the first layer confuse with the image data;
respectively allocating a node in the one or more second layers to a unique confusing image of the one or more confusing images;
processing the one or more confusing images by respectively determining, at the node, whether the image data distinguishes from the unique confusing image.]
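Claim 13 allocates, in the second layers, one node per confusing image, each node deciding whether the probe image data is distinguishable from its assigned image. A minimal sketch of that one-node-per-image structure follows; the template-distance rule stands in for whatever learned computation the nodes would actually perform, and the margin is an illustrative choice:

```python
import numpy as np

class ConfusionCheckLayer:
    """Second-layer sketch: one node (here, one stored template) is allocated
    per confusing image; each node reports whether the probe is far enough
    from its assigned image to be considered distinguishable."""

    def __init__(self, confusing_images, margin=1.0):
        self.nodes = [np.asarray(img, dtype=float) for img in confusing_images]
        self.margin = margin

    def distinguishes(self, probe):
        probe = np.asarray(probe, dtype=float)
        return [bool(np.linalg.norm(probe - t) > self.margin) for t in self.nodes]

layer = ConfusionCheckLayer([[0.0, 0.0], [3.0, 0.0]])
flags = layer.distinguishes([0.1, 0.0])   # one verdict per confusing image
```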
[14. The method of claim 13, further comprising:
recognizing, at a first subset of base layers of the plurality of base layers, one or more basic geometric shapes from the image data;
receiving, at a second subset of base layers that follows the first subset in the plurality of base layers, an output of the first subset of base layers; and
recognizing, at the second subset of base layers that follows the first subset in the plurality of base layers, one or more objects from the image data by processing the output from the first subset.]
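Claim 14 describes a hierarchy within the base layers: a first subset responds to basic geometric shapes, and a second subset consumes only that output to recognize objects. The toy functions below sketch that two-stage hand-off; the edge statistics and the labeling cutoff are illustrative assumptions rather than the patent's layers:

```python
import numpy as np

def first_subset(image):
    """First subset of base layers: crude responses to basic geometric
    structure (horizontal and vertical intensity edges)."""
    image = np.asarray(image, dtype=float)
    horiz = float(np.abs(np.diff(image, axis=0)).sum())
    vert = float(np.abs(np.diff(image, axis=1)).sum())
    return horiz, vert

def second_subset(edge_response):
    """Second subset: receives only the first subset's output and assigns a
    coarse label from it."""
    horiz, vert = edge_response
    return "patterned" if horiz + vert > 1.0 else "uniform"

checker = np.indices((4, 4)).sum(axis=0) % 2   # checkerboard test pattern
label = second_subset(first_subset(checker))
```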
[15. The method of claim 13, further comprising:
adding, by the one or more second layers following the first layer in the network and preceding the classifier that generates the classification decision, one or more new nodes to the one or more second layers;
determining, by the one or more new nodes, whether the image data is distinguishable from the one or more confusing images at least by processing a unique confusing image of the one or more confusing images by a corresponding node of the one or more new nodes.]
[16. The method of claim 13, further comprising:
reducing the general category data into the narrow category data at least by:
receiving an output from the first layer; and
processing, at the one or more second layers, the output to reduce a total number of confusing images that the first layer and a plurality of base layers preceding the first layer confuse with the image data.]
[17. The method of claim 8, further comprising:
modifying, by using the first layer of the network, the general category data based at least in part upon the movement of the eye.]
[18. The method of claim 8, further comprising:
identifying a portion of the image data that corresponds to one or more imperfect images of the eye, wherein the one or more imperfect images are due to distortions caused by one or more extreme angles at which the one or more imperfect images of the eye are captured; and
rendering the image data and the general category data resilient to at least the one or more imperfect images by processing the portion of the image data at the first layer.]
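Claim 18 makes the pipeline resilient to imperfect eye images captured at extreme angles. One plausible sketch is quality-weighted pooling, where distorted views contribute less to the pooled representation; the weighting scheme is an assumption, and the quality scores would come from an upstream estimator that is not specified here:

```python
import numpy as np

def resilient_pool(patches, quality):
    """Pool eye-image feature patches with weights proportional to an
    estimated capture quality, so distorted extreme-angle views contribute
    less to the result."""
    patches = np.asarray(patches, dtype=float)
    w = np.asarray(quality, dtype=float)
    w = w / w.sum()
    return (patches * w[:, None]).sum(axis=0)

frontal = [1.0, 1.0]
extreme_angle = [9.0, 9.0]   # heavily distorted view
pooled = resilient_pool([frontal, extreme_angle], quality=[0.9, 0.1])
```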
[19. The method of claim 8, wherein an identity of the user is recognized by using the general category data alone.]