US 9,811,091 B2
Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
Dmitri A. Dolgov, Los Altos, CA (US); and Christopher Paul Urmson, Mountain View, CA (US)
Assigned to Waymo LLC, Mountain View, CA (US)
Filed by Waymo LLC, Mountain View, CA (US)
Filed on Apr. 25, 2016, as Appl. No. 15/137,120.
Application 15/137,120 is a continuation of application No. 13/749,793, filed on Jan. 25, 2013, granted, now Pat. No. 9,367,065.
Prior Publication US 2016/0266581 A1, Sep. 15, 2016
This patent is subject to a terminal disclaimer.
Int. Cl. G05D 1/02 (2006.01); B60W 30/18 (2012.01); B60W 50/00 (2006.01)
CPC G05D 1/0274 (2013.01) [G05D 1/0248 (2013.01); G05D 1/0257 (2013.01); G05D 1/0276 (2013.01); B60W 30/18154 (2013.01); B60W 2050/0095 (2013.01); B60W 2550/12 (2013.01); G05D 2201/0213 (2013.01)] 20 Claims
1. A method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
aggregating, by one or more processors, the plurality of 3D models to generate a comprehensive model, wherein the comprehensive model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive model with map information corresponding to environmental data for the vehicle's environment obtained at a previous point in time, using probability data of the map information indicating a probability of detecting objects at various locations in the map information from various possible locations of the vehicle, to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.
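The claimed steps can be sketched, in highly simplified form, as a 2D occupancy-grid combination. This is an illustrative assumption only, not the patented implementation: the sensors' 3D field-of-view models are reduced to sets of grid cells, the aggregation is a set union, and the map's probability data becomes a per-cell detection probability with an assumed threshold. All names (`aggregate_fields_of_view`, `combine_with_map`, `CellState`, the 0.5 threshold) are hypothetical.

```python
from enum import Enum


class CellState(Enum):
    """Annotation for each portion of the environment in the combined model."""
    OCCUPIED = "occupied"        # first portion: occupied by an object
    FREE = "free"                # second portion: unoccupied by an object
    UNOBSERVED = "unobserved"    # third portion: unobserved by any sensor


def aggregate_fields_of_view(sensor_fovs):
    """Aggregate per-sensor field-of-view models (here: sets of grid cells)
    into one comprehensive model covering all sensors."""
    aggregated = set()
    for fov in sensor_fovs:
        aggregated |= fov
    return aggregated


def combine_with_map(aggregated_fov, detections, detection_prob, threshold=0.5):
    """Combine the comprehensive sensor model with map probability data.

    detection_prob maps each map cell to the probability of detecting an
    object there from the vehicle's location.  A cell outside the aggregated
    field of view, or with too low a detection probability, is annotated
    UNOBSERVED; otherwise it is OCCUPIED or FREE depending on detections.
    """
    combined = {}
    for cell, prob in detection_prob.items():
        if cell not in aggregated_fov or prob < threshold:
            combined[cell] = CellState.UNOBSERVED
        elif cell in detections:
            combined[cell] = CellState.OCCUPIED
        else:
            combined[cell] = CellState.FREE
    return combined


if __name__ == "__main__":
    # Two sensors with partial, overlapping coverage of a 2x2 grid.
    sensor_fovs = [{(0, 0), (0, 1)}, {(1, 0)}]
    fov = aggregate_fields_of_view(sensor_fovs)
    # Map probability data: cell (1, 0) is poorly observable from here,
    # and cell (1, 1) is outside every sensor's field of view.
    detection_prob = {(0, 0): 0.9, (0, 1): 0.8, (1, 0): 0.4, (1, 1): 0.9}
    model = combine_with_map(fov, detections={(0, 1)}, detection_prob=detection_prob)
    for cell in sorted(model):
        print(cell, model[cell].value)
```

A downstream planner could then treat `UNOBSERVED` cells conservatively (e.g., slow down near blind spots) while maneuvering through `FREE` cells, which is the spirit of the final "using the combined model to maneuver the vehicle" step.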