US 11,704,819 B2
Apparatus and method for aligning 3-dimensional data
Min Ho Chang, Seoul (KR); Han Sol Kim, Seoul (KR); and Keonhwa Jung, Seoul (KR)
Assigned to MEDIT CORP., Seoul (KR); and Korea University Research and Business Foundation, Seoul (KR)
Filed by MEDIT CORP., Seoul (KR); and Korea University Research and Business Foundation, Seoul (KR)
Filed on Sep. 25, 2020, as Appl. No. 17/033,674.
Claims priority of application No. 10-2019-0119053 (KR), filed on Sep. 26, 2019.
Prior Publication US 2021/0097703 A1, Apr. 1, 2021
Int. Cl. G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06T 7/11 (2017.01)
CPC G06T 7/33 (2017.01) [G06T 7/11 (2017.01); G06T 7/73 (2017.01); G06T 2200/04 (2013.01); G06T 2207/10081 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/30036 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A three-dimensional data alignment apparatus comprising:
a three-dimensional data alignment unit for aligning locations of first three-dimensional data and second three-dimensional data expressed in different data forms with regard to a target to be measured,
wherein the first three-dimensional data to be aligned with the second three-dimensional data are three-dimensional data acquired in a voxel form with regard to the target to be measured and include an edge region in which the surface of the target to be measured exists,
wherein the second three-dimensional data are three-dimensional data acquired in a surface form with regard to the target to be measured and are different from surfaces extracted by segmentation of the first three-dimensional data,
wherein the three-dimensional data alignment unit is configured to:
extract one or more vertices from the second three-dimensional data,
determine sampling locations at a predetermined interval in a normal direction perpendicular to the surface at each vertex extracted from the second three-dimensional data, the first three-dimensional data being the volume data before segmentation,
extract first voxel values of first voxels located around each vertex from the first three-dimensional data, based on a location of each vertex extracted from the second three-dimensional data,
calculate intensity values at the sampling locations within the edge region based on the first voxel values,
determine corresponding points between the first three-dimensional data and the second three-dimensional data based on the first voxel values extracted from the first three-dimensional data, and
calculate location conversion information minimizing a location error between the first three-dimensional data and the second three-dimensional data based on the corresponding points.
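
To make the claimed flow concrete, the following Python/NumPy sketch shows one possible realization of the steps recited in claim 1. It is an illustration only, not the patented implementation: the function names (trilinear_sample, correspondences, rigid_fit, align), the use of trilinear interpolation for the intensity values, the choice of the strongest intensity change along each normal as the corresponding point, and the Kabsch/SVD rigid fit are all assumptions made for this sketch, and the surface vertices and normals are assumed to be expressed already in the volume's voxel coordinate system.

import numpy as np

def trilinear_sample(volume, pts):
    """Trilinearly interpolate intensities of the voxel array `volume` at
    continuous coordinates `pts` (N x 3, in voxel index units)."""
    pts = np.asarray(pts, dtype=float)
    base = np.floor(pts).astype(int)
    frac = pts - base
    # Clamp so the eight surrounding voxels stay inside the grid.
    base = np.clip(base, 0, np.array(volume.shape) - 2)
    x0, y0, z0 = base[:, 0], base[:, 1], base[:, 2]
    fx, fy, fz = frac[:, 0], frac[:, 1], frac[:, 2]
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, fx, 1 - fx)
                     * np.where(dy, fy, 1 - fy)
                     * np.where(dz, fz, 1 - fz))
                c = c + w * volume[x0 + dx, y0 + dy, z0 + dz]
    return c

def correspondences(volume, vertices, normals, interval=0.5, n_steps=10):
    """For each surface vertex, sample the volume along the vertex normal at a
    predetermined interval and take the sampling location with the strongest
    intensity change (an edge) as the corresponding point in the voxel data.
    Assumption: vertices and normals are given in voxel coordinates."""
    offsets = np.arange(-n_steps, n_steps + 1) * interval                            # (S,)
    samples = vertices[:, None, :] + offsets[None, :, None] * normals[:, None, :]    # (N, S, 3)
    intensities = trilinear_sample(volume, samples.reshape(-1, 3)).reshape(len(vertices), -1)
    grad = np.abs(np.gradient(intensities, axis=1))    # intensity change along each normal
    best = np.argmax(grad, axis=1)                     # strongest edge per vertex
    return samples[np.arange(len(vertices)), best]     # (N, 3) corresponding points

def rigid_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping the
    points `src` onto `dst`, computed with the Kabsch/SVD method."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def align(volume, vertices, normals, iterations=10):
    """Iteratively refine the surface-to-volume registration by alternating
    correspondence search and rigid fitting."""
    R_total, t_total = np.eye(3), np.zeros(3)
    v, n = vertices.copy(), normals.copy()
    for _ in range(iterations):
        target = correspondences(volume, v, n)
        R, t = rigid_fit(v, target)
        v = v @ R.T + t
        n = n @ R.T
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

In this sketch, the accumulated rotation R_total and translation t_total returned by align would play the role of the "location conversion information minimizing a location error" recited in the claim; the iteration count, sampling interval, and edge criterion are tunable assumptions.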