US 11,717,971 B2
Method and computing system for performing motion planning based on image information generated by a camera
Xutao Ye, Tokyo (JP); Puttichai Lertkultanon, Tokyo (JP); and Rosen Nikolaev Diankov, Tokyo (JP)
Assigned to MUJIN, INC., Tokyo (JP)
Filed by MUJIN, INC., Tokyo (JP)
Filed on Jul. 26, 2021, as Appl. No. 17/385,349.
Application 17/385,349 is a continuation of application No. 17/084,272, filed on Oct. 29, 2020, granted, now Pat. No. 11,103,998.
Claims priority of provisional application 62/946,973, filed on Dec. 12, 2019.
Prior Publication US 2021/0347051 A1, Nov. 11, 2021
Int. Cl. G06F 7/00 (2006.01); B25J 9/16 (2006.01); B25J 19/02 (2006.01); G06T 7/73 (2017.01); B25J 13/08 (2006.01); B25J 15/00 (2006.01); B65G 59/02 (2006.01); G05B 19/4155 (2006.01); G06T 7/60 (2017.01); G06F 18/2413 (2023.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01)
CPC B25J 9/1697 (2013.01) [B25J 9/1612 (2013.01); B25J 9/1653 (2013.01); B25J 9/1664 (2013.01); B25J 9/1669 (2013.01); B25J 9/1671 (2013.01); B25J 13/08 (2013.01); B25J 15/0061 (2013.01); B25J 19/023 (2013.01); B65G 59/02 (2013.01); G05B 19/4155 (2013.01); G06F 18/2413 (2023.01); G06T 7/60 (2013.01); G06T 7/74 (2017.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01); G05B 2219/40269 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20164 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computing system comprising:
a communication interface configured to communicate with: (i) a robot having an end effector apparatus, and (ii) a camera mounted on the end effector apparatus and having a camera field of view;
at least one processing circuit configured, when an object is or has been in the camera field of view, to:
determine a first estimate of an object structure associated with the object;
identify, based on the first estimate of the object structure, a corner of the object structure;
determine a camera pose which, when adopted by the camera, causes the camera to be pointed at the corner of the object structure such that the camera field of view encompasses the corner and at least a portion of an outer surface of the object structure;
receive image information for representing the object structure, wherein the image information is generated by the camera while the camera is in the camera pose;
determine a second estimate of the object structure based on the image information; and
generate a motion plan based on at least the second estimate of the object structure, wherein the motion plan is for causing robot interaction between the robot and the object.
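The sequence recited in claim 1 can be read as a two-pass scanning pipeline: a coarse structure estimate locates a corner, the camera is re-posed to view that corner up close, and the refined estimate drives motion planning. The sketch below is a hypothetical, simplified illustration of that flow, not the patented implementation; the bounding-box representation, the corner-selection heuristic, the fixed standoff distance, and the two-waypoint plan are all illustrative assumptions.

```python
# Hypothetical sketch of the claimed pipeline (not the patented method).
# Object structure is approximated here as an axis-aligned bounding box.
import numpy as np

def first_estimate(point_cloud):
    """First estimate of the object structure: a coarse bounding box
    from an initial scan (illustrative simplification)."""
    return point_cloud.min(axis=0), point_cloud.max(axis=0)

def identify_corner(bbox_min, bbox_max):
    """Pick one outer corner of the estimated structure; here, the top
    corner farthest from the robot-base origin (assumed heuristic)."""
    corners = np.array([[x, y, bbox_max[2]]
                        for x in (bbox_min[0], bbox_max[0])
                        for y in (bbox_min[1], bbox_max[1])])
    return corners[np.argmax(np.linalg.norm(corners, axis=1))]

def camera_pose_toward(corner, standoff=0.3):
    """Camera pose that points the camera at the corner so the field of
    view covers the corner and part of an outer surface: position the
    camera at a standoff along the corner's outward diagonal, aimed back
    at the corner."""
    direction = corner / np.linalg.norm(corner)
    position = corner + standoff * direction
    look_at = (corner - position) / np.linalg.norm(corner - position)
    return position, look_at  # pose = position + unit viewing direction

def second_estimate(bbox, corner_scan):
    """Refine the structure estimate with close-up image information
    captured from the new camera pose (merged as extra points)."""
    lo = np.minimum(bbox[0], corner_scan.min(axis=0))
    hi = np.maximum(bbox[1], corner_scan.max(axis=0))
    return lo, hi

def motion_plan(bbox_min, bbox_max, clearance=0.05):
    """Trivial plan for robot interaction: approach above the refined
    structure's top face, then descend to it."""
    top_center = np.array([(bbox_min[0] + bbox_max[0]) / 2,
                           (bbox_min[1] + bbox_max[1]) / 2,
                           bbox_max[2]])
    return [top_center + np.array([0.0, 0.0, clearance]), top_center]
```

The key design point the claim language suggests is that the second, corner-focused scan is what the motion plan is generated from; the first estimate exists only to decide where to point the end-effector-mounted camera.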
 
17. A non-transitory computer-readable medium having instructions that, when executed by at least one processing circuit of a computing system, wherein the computing system is configured to communicate with: (i) a robot having an end effector apparatus, and (ii) a camera mounted on the end effector apparatus and having a camera field of view, cause the at least one processing circuit to:
determine a first estimate of an object structure associated with an object that is or has been in the camera field of view;
identify, based on the first estimate of the object structure, a corner of the object structure;
determine a camera pose which, when adopted by the camera, causes the camera to be pointed at the corner of the object structure such that the camera field of view encompasses the corner and at least a portion of an outer surface of the object structure;
receive image information for representing the object structure, wherein the image information is generated by the camera while the camera is in the camera pose;
determine a second estimate of the object structure based on the image information; and
generate a motion plan based on at least the second estimate of the object structure, wherein the motion plan is for causing robot interaction between the robot and the object.