US 11,182,614 C1 (13,145th)
Methods and apparatuses for determining and/or evaluating localizing maps of image display devices
Divya Sharma, San Jose, CA (US); Ali Shahrokni, San Jose, CA (US); Anush Mohan, Mountain View, CA (US); Prateek Singhal, Mountain View, CA (US); Xuan Zhao, San Jose, CA (US); Sergiu Sima, Plantation, FL (US); and Benjamin Langmann, Plantation, FL (US)
Filed by MAGIC LEAP, INC., Plantation, FL (US)
Assigned to MAGIC LEAP, INC., Plantation, FL (US)
Reexamination Request No. 90/019,657, Sep. 12, 2024.
Reexamination Certificate for Patent 11,182,614, issued Nov. 23, 2021, Appl. No. 16/520,582, Jul. 24, 2019.
Claims priority of provisional application 62/702,829, filed on Jul. 24, 2018.
Ex Parte Reexamination Certificate issued on Jan. 20, 2026.
Int. Cl. G06V 20/20 (2022.01); G06F 1/16 (2006.01); G06F 3/01 (2006.01); G06T 15/00 (2011.01); G06T 19/00 (2011.01)
CPC G06F 3/011 (2013.01) [G06F 1/163 (2013.01); G06T 15/005 (2013.01); G06T 19/006 (2013.01); G06V 20/20 (2022.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1, 13, 14, 17-19, 22-29, 31 and 32 are determined to be patentable as amended.
Claims 2-12, 15, 16, 20, 21 and 30, dependent on an amended claim, are determined to be patentable.
New claims 33-39 are added and determined to be patentable.
1. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric [ ;
wherein the processing unit is configured to determine the map based on multiple images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images] .
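The amended claim distinguishes the map-quality metric from the per-image scores it is derived from: the metric is a separate data item computed over the scores. A minimal illustrative sketch of that structure (not the patented implementation; the scoring factors and the use of a mean are hypothetical):

```python
# Sketch of claim 1's amended structure: per-image scores feed a
# map-quality metric that is stored as its own, separate data item.

def score_image(image_quality: float, coverage: float) -> float:
    # Hypothetical per-image score combining two toy factors.
    return image_quality * coverage

def map_metric(scores: list[float]) -> float:
    # The metric is derived from the scores but is a different,
    # standalone value: here, simply their mean.
    return sum(scores) / len(scores) if scores else 0.0

scores = [score_image(q, c) for q, c in [(0.9, 0.8), (0.7, 0.5), (0.95, 0.9)]]
metric = map_metric(scores)  # a single number, distinct from `scores`
```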
13. The apparatus of claim 1, wherein the processing unit is configured to perform an optimization based on [ the ] images from the camera system, three-dimensional reference points, and a relative orientation between cameras of the camera system.
14. The apparatus of claim 1, wherein the processing unit is configured to determine a score for an image [ one of the scores for one of the images ] obtained from the camera system.
17. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric; and
wherein the metric is for one of a plurality of cells, and each of the cells represents a three dimensional space of a portion of the environment [ ; and
wherein the processing unit is configured to determine the map based on multiple images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images] .
18. The apparatus of claim 17, wherein the camera system is configured to obtain multiple images, and [ wherein the metric is for one of the plurality of cells, and ] wherein the processing unit is configured to determine the metric for [ the ] one of the plurality of cells by:
identifying a subset of the images that belong to a same range of viewing directions;
determining [ the ] respective scores for the images in the subset of the images; and
summing the scores to obtain a total score.
19. The apparatus of claim 18, wherein the processing unit is also configured to determine an average score [ the metric for the one of the plurality of cells ] by dividing the total score by a number of the images in the subset of the images [ to obtain an average score] .
22. The apparatus of claim 18, wherein the processing unit is configured to determine each of the respective scores by determining a number of reference point(s) that is detected in the corresponding one of the images in the subset of images.
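Claims 18, 19, and 22 together spell out one concrete metric computation for a cell: group images by viewing-direction range, score each by the number of reference points it detects, sum the scores, and divide by the image count. A sketch under those steps (the 30-degree bin width and the data schema are hypothetical):

```python
def direction_bin(yaw_deg: float, bin_width_deg: float = 30.0) -> int:
    # Images whose viewing directions fall in the same range share a bin
    # (claim 18: "a same range of viewing directions").
    return int(yaw_deg % 360 // bin_width_deg)

def cell_metric(images: list[dict], target_bin: int) -> float:
    # images: [{"yaw": float, "ref_points": int}, ...]  (hypothetical schema;
    # "ref_points" is the per-image score of claim 22).
    subset = [im for im in images if direction_bin(im["yaw"]) == target_bin]
    if not subset:
        return 0.0
    total = sum(im["ref_points"] for im in subset)  # claim 18: sum the scores
    return total / len(subset)                      # claim 19: average score

images = [
    {"yaw": 10.0, "ref_points": 40},
    {"yaw": 20.0, "ref_points": 20},
    {"yaw": 200.0, "ref_points": 5},
]
m = cell_metric(images, target_bin=0)  # (40 + 20) / 2 = 30.0
```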
23. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric; and
wherein the processing unit is configured to determine the metric by:
obtaining a plurality of images from the camera system; and
determining co-visibility values [ respectively for the images] , wherein each of the co-visibility values indicates a number of reference points detected in a corresponding one of the plurality of images [ ; and
wherein the processing unit is configured to determine the map based on the images from the camera system, and wherein the metric is based on the co-visibility values respectively for the images and is a measure of a quality of the map, the metric being a separate data item and different from the co-visibility values] .
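Claim 23 bases the metric on co-visibility values, each counting the reference points detected in one image. One way to picture that, with the thresholded "fraction of well-constrained images" aggregation being purely a hypothetical proxy of this sketch:

```python
def co_visibility(detected: set[int], map_reference_points: set[int]) -> int:
    # Co-visibility value for one image: how many of the map's
    # reference points that image detects (claim 23).
    return len(detected & map_reference_points)

def metric_from_covisibility(values: list[int], needed: int = 20) -> float:
    # Separate data item derived from the co-visibility values: here, the
    # fraction of images that detect "enough" reference points.
    return sum(v >= needed for v in values) / len(values)

ref = set(range(100))
values = [co_visibility(s, ref) for s in ({1, 2, 3}, set(range(50)), set(range(30)))]
quality = metric_from_covisibility(values)  # 2 of 3 images see >= 20 points
```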
24. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric; and
wherein the processing unit is configured to perform a sanitization to remove or to disregard data that would otherwise provide an undesirable contribution for the map if the data is used to determine the map [ ; and
wherein the processing unit is configured to determine the map based on multiple images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images] .
25. The apparatus of claim 24, wherein the data comprises an image from the camera system, and wherein the processing unit is configured to perform the sanitization by (1) removing or disregarding the image [ one of the images] , (2) disregarding an identification of a reference point in the image [ one of the images] , and/or (3) disregarding a ray or a line that is associated with the image [ one of the images] .
26. The apparatus of claim 24, wherein the processing unit is configured to perform a bundle adjustment to adjust one or more rays associated with one or more [ of the ] images from the camera system, wherein the processing unit is configured to perform the bundle adjustment after performing the sanitization to remove the data.
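Claims 24 through 26 fix an ordering: sanitization removes data that would contribute badly to the map, and bundle adjustment runs afterward on what survives. A toy one-dimensional stand-in for that ordering (real bundle adjustment jointly optimizes camera poses and 3-D points; the median-residual filter and mean re-estimate here are illustrative only):

```python
from statistics import median

def sanitize(observations: list[float], threshold: float = 2.0) -> list[float]:
    # Remove observations far from a robust estimate; these would
    # otherwise provide an undesirable contribution to the map (claim 24).
    est = median(observations)
    return [o for o in observations if abs(o - est) <= threshold]

def bundle_adjust(observations: list[float]) -> float:
    # Stand-in for bundle adjustment: re-estimate the point from the
    # surviving observations only (claim 26: adjust after sanitizing).
    return sum(observations) / len(observations)

obs = [10.0, 10.2, 9.8, 25.0]  # 25.0 is an outlier "ray"
clean = sanitize(obs)          # outlier removed before adjustment
point = bundle_adjust(clean)
```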
27. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric;
[ wherein the processing unit is configured to determine the map based on multiple images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images;]
wherein the processing unit is configured to determine a score for an image [ one of the scores for one of the images ] obtained from the camera system;
wherein the processing unit is configured to perform data sanitization based on the score [ one of the scores] ; and
wherein the processing unit is configured to remove a constraint of the image [ one of the images] , or to remove the image [ one of the images] , when performing the data sanitization.
28. An apparatus configured to be worn on a head of a user, comprising:
a screen configured to present graphics to the user;
a camera system configured to view an environment in which the user is located; and
a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment;
wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric; and
wherein the processing unit is configured to determine the map by:
determining multiple map segments; and
connecting the map segments [ ; and
wherein the processing unit is configured to determine the map based on images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images] .
29. The apparatus of claim 28, wherein the processing unit is configured to determine a first map segment of the map segments by obtaining images from the camera system, and linking the images, wherein the images are generated in sequence by the camera system.
31. The apparatus of claim 30, wherein the processing unit is configured to start the second map segment when the score [ of the additional image ] indicates that the [ additional ] image has a degree of constraint with respect to the first map segment that is below a threshold.
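Claims 28 through 31 describe building the map from segments: link images in sequence, start a new segment when an additional image is too weakly constrained against the current one, then connect the segments. A sketch of that control flow (the shared-reference-point measure of constraint and the threshold value are hypothetical):

```python
def segment_points(segment: list[dict]) -> set[int]:
    # All reference points observed anywhere in a segment.
    pts: set[int] = set()
    for im in segment:
        pts |= im["points"]
    return pts

def degree_of_constraint(segment: list[dict], image: dict) -> int:
    # Hypothetical measure: reference points the new image shares
    # with the current segment (cf. claim 31).
    return len(segment_points(segment) & image["points"])

def build_segments(images: list[dict], threshold: int = 3) -> list[list[dict]]:
    segments, current = [], [images[0]]
    for im in images[1:]:
        if degree_of_constraint(current, im) < threshold:
            segments.append(current)  # close the current segment
            current = [im]            # start a new segment (claim 31)
        else:
            current.append(im)        # link into the sequence (claim 29)
    segments.append(current)
    return segments                   # segments to be connected (claim 28)

images = [
    {"points": {1, 2, 3, 4}},
    {"points": {2, 3, 4, 5}},
    {"points": {9, 10, 11}},
]
segments = build_segments(images)  # -> two segments: [img0, img1], [img2]
```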
32. A method performed by an apparatus that is configured to be worn on a head of a user, the apparatus having a screen configured to present graphics to the user, a camera system configured to view an environment in which the user is located, and a processing unit, the method comprising:
obtaining, by the processing unit, output(s) [ images ] from the camera system;
determining a map by the processing unit based at least in part on the output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; and
obtaining, by the processing unit, a metric indicating a likelihood of success to localize the user using the map, wherein the act of obtaining comprises computing the metric or receiving the metric by the processing unit [ ;
wherein the map is determined by the processing unit based on the images from the camera system, and wherein the metric is based on respective scores of the images and is a measure of a quality of the map, the metric being a separate data item and different from the respective scores of the images] .
[ 33. The apparatus of claim 1, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 34. The apparatus of claim 17, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 35. The apparatus of claim 23, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 36. The apparatus of claim 24, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 37. The apparatus of claim 27, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 38. The apparatus of claim 28, wherein the apparatus is configured to inform the user of the metric, and wherein the apparatus is configured to receive an input from the user to initiate a map generation session after the apparatus informs the user of the metric.]
[ 39. The method of claim 32, further comprising informing the user of the metric, and receiving an input from the user to initiate a map generation session after the user is informed of the metric.]