US 11,758,111 B2
3D lidar system using a dichroic mirror for autonomous driving vehicles
Yaoming Shen, Milpitas, CA (US); and Yang Han, San Jose, CA (US)
Assigned to BAIDU USA LLC, Sunnyvale, CA (US)
Filed by Baidu USA LLC, Sunnyvale, CA (US)
Filed on Oct. 27, 2017, as Appl. No. 15/796,546.
Prior Publication US 2019/0132572 A1, May 2, 2019
Int. Cl. H04N 13/271 (2018.01); G01S 7/481 (2006.01); G01S 17/86 (2020.01); G01S 17/931 (2020.01); G06V 20/40 (2022.01); G05D 1/02 (2020.01); G02B 7/04 (2021.01); G02B 26/10 (2006.01); G02B 27/14 (2006.01); H04N 13/00 (2018.01)
CPC H04N 13/271 (2018.05) [G01S 7/4811 (2013.01); G01S 17/86 (2020.01); G01S 17/931 (2020.01); G05D 1/0251 (2013.01); G06V 20/41 (2022.01); G02B 7/04 (2013.01); G02B 26/10 (2013.01); G02B 27/141 (2013.01); H04N 2013/0081 (2013.01); H04N 2013/0092 (2013.01); H04N 2213/001 (2013.01); H04N 2213/003 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A three-dimensional (3D) light detection and ranging (LIDAR) device of an autonomous driving vehicle, the LIDAR device comprising:
a light source to emit a light beam to sense a physical range associated with a target;
a light detector to receive at least a portion of the light beam reflected from the target;
a first camera;
a dichroic mirror situated between the target and the light detector, the dichroic mirror configured to direct light to both the light detector and the first camera, wherein the dichroic mirror directs the light beam reflected from the target to the light detector to generate a first image, wherein the dichroic mirror further directs optical light reflected from the target to the first camera to generate a second image, wherein the optical light and the light beam are from different sources, and wherein the light beam from the light source is reflected by the dichroic mirror onto the target;
a second camera situated relative to the first camera to form a stereo camera pair, the second camera to generate a third image, wherein a disparity between the second image and the third image is perceived by applying a stereo segmentation algorithm to the second and the third images; and
an image processing logic coupled to the light detector and the first camera to combine the first image and the second image to generate a 3D image, wherein the 3D image is utilized to perceive a driving environment surrounding the autonomous driving vehicle, and wherein the 3D image is generated by mapping each pixel of the first image onto one or more pixels of the second image, wherein a pixel density count of the first image is different from a pixel density count of the second image, wherein one pixel of the first image representing an object perceived in the first image is mapped to multiple pixels of the second image, and wherein the 3D image is generated by applying a semantic segmentation algorithm to the second image to classify objects perceived in the second image and mapping one or more pixels of the first image indirectly onto one or more pixels of the second image based on the perceived objects,
wherein the image processing logic generates a depth for each pixel by applying an average function to depth values produced from a stereo depth image and a LIDAR depth image, wherein the stereo depth image is an RGB image containing three channels of 2D color information and a fourth channel of distance depth, and wherein the LIDAR depth image contains distance or depth information without color information.
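The claim's final limitation (a per-pixel average of depth values from a dense stereo depth image and a coarser LIDAR depth image, with each LIDAR pixel mapped onto multiple camera pixels) can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation: the function name, the array shapes, and the nearest-neighbor upsampling used here as a stand-in for the claim's segmentation-guided pixel mapping are all assumptions.

```python
import numpy as np

def fuse_depths(stereo_depth, lidar_depth):
    """Average a dense stereo depth map with a coarser LIDAR depth map.

    stereo_depth: (H, W) array, the depth channel of the stereo RGB-D image.
    lidar_depth:  (h, w) array with a lower pixel density count; each
                  LIDAR pixel covers a block of multiple camera pixels.
    Returns an (H, W) fused depth map.
    """
    H, W = stereo_depth.shape
    h, w = lidar_depth.shape
    # Nearest-neighbor upsample: assign each camera pixel the value of
    # the LIDAR pixel whose footprint contains it (illustrative stand-in
    # for the claim's object-based indirect pixel mapping).
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    lidar_up = lidar_depth[np.ix_(rows, cols)]
    # Per-pixel average of the two depth estimates, as in the claim.
    return (stereo_depth + lidar_up) / 2.0
```

For example, fusing a uniform 4×4 stereo depth map with a 2×2 LIDAR depth map spreads each of the four LIDAR values over a 2×2 block of camera pixels before averaging.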