US 11,703,323 B2
Multi-channel depth estimation using census transforms
Michael Hall, Bellevue, WA (US); Xinqiao Liu, Medina, WA (US); Zhaoming Zhu, Redmond, WA (US); Rajesh Lachhmandas Chhabria, San Jose, CA (US); Huixuan Tang, Redmond, WA (US); and Shuochen Su, Seattle, WA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by META PLATFORMS TECHNOLOGIES, LLC, Menlo Park, CA (US)
Filed on Apr. 14, 2021, as Appl. No. 17/230,109.
Application 17/230,109 is a continuation of application No. 16/417,872, filed on May 21, 2019, now Pat. No. 11,010,911.
Claims priority of provisional application 62/674,430, filed on May 21, 2018.
Prior Publication US 2022/0028103 A1, Jan. 27, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/00 (2017.01); G01B 11/25 (2006.01); G06T 7/521 (2017.01); G06T 7/529 (2017.01); G02B 27/01 (2006.01); G06T 7/11 (2017.01); G06T 7/174 (2017.01); G01B 11/22 (2006.01); G06T 7/55 (2017.01); G06T 7/90 (2017.01); H04N 13/106 (2018.01); H04N 13/204 (2018.01); G06T 7/593 (2017.01); H04N 13/128 (2018.01); H04N 13/271 (2018.01); H04N 13/239 (2018.01); H04N 23/56 (2023.01); G06V 10/22 (2022.01); G06V 10/145 (2022.01); H04N 13/00 (2018.01)
CPC G01B 11/2513 (2013.01) [G01B 11/22 (2013.01); G02B 27/0172 (2013.01); G06T 7/11 (2017.01); G06T 7/174 (2017.01); G06T 7/521 (2017.01); G06T 7/529 (2017.01); G06T 7/55 (2017.01); G06T 7/593 (2017.01); G06T 7/90 (2017.01); G06T 7/97 (2017.01); G06V 10/145 (2022.01); G06V 10/22 (2022.01); H04N 13/106 (2018.05); H04N 13/128 (2018.05); H04N 13/204 (2018.05); H04N 13/239 (2018.05); H04N 13/271 (2018.05); H04N 23/56 (2023.01); G02B 2027/014 (2013.01); G02B 2027/0138 (2013.01); G02B 2027/0178 (2013.01); G06T 2207/10028 (2013.01); H04N 2013/0081 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
capturing, via a first camera, a first image that includes a plurality of light channels;
capturing, via a second camera, a second image that includes the plurality of light channels;
selecting a scan direction from a plurality of scan directions in association with the first image and the second image;
along each of a plurality of scanlines in the selected scan direction, comparing pixels from the first image to pixels from the second image based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image;
determining a stereo correspondence between the pixels in the first image and the pixels in the second image based on the comparing; and
generating a depth map based on the stereo correspondence.