US 11,704,806 B2
Scalable three-dimensional object recognition in a cross reality system
Siddharth Choudhary, San Jose, CA (US); Divya Ramnath, Sunnyvale, CA (US); Shiyu Dong, Santa Clara, CA (US); Siddharth Mahendran, Mountain View, CA (US); Arumugam Kalai Kannan, Sunnyvale, CA (US); Prateek Singhal, Mountain View, CA (US); Khushi Gupta, Mountain View, CA (US); Nitesh Sekhar, Mountain View, CA (US); and Manushree Gangwar, San Francisco, CA (US)
Assigned to Magic Leap, Inc., Plantation, FL (US)
Filed by Magic Leap, Inc., Plantation, FL (US)
Filed on Jan. 12, 2022, as Appl. No. 17/574,305.
Application 17/574,305 is a division of application No. 16/899,878, filed on Jun. 12, 2020, granted, now 11,257,300.
Claims priority of provisional application 63/024,291, filed on May 13, 2020.
Claims priority of provisional application 63/006,408, filed on Apr. 7, 2020.
Claims priority of provisional application 62/968,023, filed on Jan. 30, 2020.
Claims priority of provisional application 62/861,784, filed on Jun. 14, 2019.
Prior Publication US 2022/0139057 A1, May 5, 2022
Int. Cl. G06T 19/20 (2011.01); G06T 7/11 (2017.01); G06T 7/50 (2017.01); G06V 20/00 (2022.01); G06V 10/764 (2022.01)
CPC G06T 7/11 (2017.01) [G06T 7/50 (2017.01); G06T 19/20 (2013.01); G06V 10/764 (2022.01); G06V 20/00 (2022.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01)] 29 Claims
OG exemplary drawing
 
1. A computer-implemented method, the method comprising:
maintaining object data specifying objects that have been recognized in a scene in an environment;
receiving a stream of input images of the scene;
for each of a plurality of input images in the stream of input images:
providing the input image as input to an object recognition system;
receiving, as output from the object recognition system, a recognition output that identifies a respective bounding box in the input image for each of one or more objects that have been recognized in the input image;
providing data identifying the bounding boxes as input to a three-dimensional (3-D) bounding box generation system that determines, from the object data and the bounding boxes, a respective 3-D bounding box for each of one or more of the objects that have been recognized in the input image,
wherein the 3-D bounding box generation system performs operations comprising
generating a current 3-D object mask of a first object that has been recognized in the input image, and
performing fusion between the current 3-D object mask of the first object and respective object data specifying a previously recognized object associated with the first object; and
receiving, as output from the 3-D bounding box generation system, data specifying one or more 3-D bounding boxes for one or more of the objects recognized in the input image based on the fusion performed by the 3-D bounding box generation system; and
providing, as output, data specifying the one or more 3-D bounding boxes.
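The pipeline recited in claim 1 — maintaining object data for a scene, generating a current 3-D object mask per recognized object, fusing it with the stored data for the associated previously recognized object, and emitting a 3-D bounding box over the fused result — can be sketched in highly simplified form. All names here (`SceneObjectStore`, `fuse_and_box`, `process_frame`), the voxel-set mask representation, and the union-based fusion are illustrative assumptions for exposition only, not the patented implementation; the neural 2-D object recognition system and the depth-based lifting of 2-D boxes to 3-D masks are out of scope and treated as given inputs.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

Voxel = Tuple[int, int, int]   # discretized 3-D point in the scene
Box3D = Tuple[Voxel, Voxel]    # (min corner, max corner) of an axis-aligned box


@dataclass
class SceneObjectStore:
    """Maintains object data specifying objects already recognized in the scene
    (here: one fused 3-D mask per object identifier)."""
    masks: Dict[str, Set[Voxel]] = field(default_factory=dict)

    def fuse_and_box(self, object_id: str, current_mask: Set[Voxel]) -> Box3D:
        """Fuse the current 3-D object mask with the stored mask of the
        associated previously recognized object (simplistic set union stands
        in for real mask fusion), persist the fused mask, and return an
        axis-aligned 3-D bounding box over it."""
        fused = self.masks.get(object_id, set()) | current_mask
        self.masks[object_id] = fused
        xs, ys, zs = zip(*fused)
        return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))


def process_frame(store: SceneObjectStore,
                  detections: Dict[str, Set[Voxel]]) -> Dict[str, Box3D]:
    """Process one input image from the stream. `detections` maps each
    recognized object's identifier to its current 3-D object mask (assumed
    already lifted from the 2-D bounding box using depth, which is not
    modeled here). Returns the 3-D bounding boxes based on the fusion."""
    return {oid: store.fuse_and_box(oid, mask) for oid, mask in detections.items()}
```

Under this sketch, repeated observations of the same object grow its fused mask across frames, so the 3-D bounding box can only expand or stay fixed — e.g., seeing a new voxel `(2, 0, 0)` for an object previously bounded by `((0,0,0), (1,1,1))` widens the box along x.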