CPC G01S 5/163 (2013.01) [G06T 7/579 (2017.01); H04N 23/80 (2023.01); H04N 23/951 (2023.01)]

AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:

Claims 5, 6 and 18 are cancelled.

Claims 1 and 9 are determined to be patentable as amended.

Claims 2-4, 7-8, 10-17 and 19-20, dependent on an amended claim, are determined to be patentable.

New claims 21-30 are added and determined to be patentable.
1. An imaging system comprising:
an image capture device comprising a lens and an image sensor, the lens configured to direct light from an environment surrounding the image capture device to the image sensor, the image sensor configured to:
sequentially capture a first plurality of image segments of an image based on the light from the environment, the image representing a field of view (FOV) of the image capture device, the FOV comprising a portion of the environment and including a plurality of sparse points, and
sequentially capture a second plurality of image segments, the second plurality of image segments captured after the first plurality of image segments and forming at least another portion of the image;
non-transitory data storage configured to sequentially receive the first and second plurality of image segments from the image sensor and store instructions for estimating at least one of a position and orientation of the image capture device within the environment; and at least one hardware processor operably coupled to the non-transitory data storage and configured by the instructions to:
identify a first group of sparse points based in part on a corresponding subset of the first plurality of image segments, the first group of sparse points identified as the first plurality of image segments are received at the non-transitory data storage,
determine at least one of the position and orientation of the image capture device within the environment based on the first group of sparse points,
identify a second group of sparse points based in part on a corresponding subset of the second plurality of image segments, the second group of sparse points identified as the second plurality of image segments are received at the non-transitory data storage, and
update the at least one of the position and orientation of the image capture device within the environment based on [ a sliding set of sparse points comprising sparse points selected from the second group of sparse points and from the first group of sparse points,
wherein the hardware processor is configured to update the at least one of the position and orientation of the image capture device based on the sliding set of sparse points that comprises a predetermined number of sparse points greater than a number of sparse points in the second group of sparse points, and wherein the sliding set of sparse points includes all of the sparse points from the second group of sparse points and a remainder of the predetermined number of sparse points from the first group of sparse points] .
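The selection rule recited in amended claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and list-based representation are assumptions. The sliding set holds a predetermined number of sparse points: all points from the second (newest) group, with the remainder drawn from the most recently identified points of the first group.

```python
def build_sliding_set(first_group, second_group, predetermined_number):
    """Illustrative sketch of the claimed sliding set: all sparse points
    from the second (newest) group, plus enough of the most recently
    identified points of the first group to reach the predetermined number.
    Groups are lists ordered oldest to newest."""
    if len(second_group) >= predetermined_number:
        # Degenerate case: the newest group alone fills the set.
        return list(second_group[-predetermined_number:])
    # Claimed case: predetermined number exceeds the second group's size,
    # so a remainder is taken from the end (newest points) of the first group.
    remainder = predetermined_number - len(second_group)
    return list(first_group[-remainder:]) + list(second_group)
```

For example, with a first group of four points, a second group of two, and a predetermined number of five, the set contains both second-group points and the three newest first-group points.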
9. A head mounted display (HMD) configured to be worn on the head of a user, the HMD comprising:
a frame;
a display supported by the frame and disposed forward of an eye of the user;
an outward facing image capture device disposed on the frame and comprising a lens and an image sensor, the lens configured to direct light from an environment surrounding the HMD to the image sensor and the image sensor configured to sequentially capture a plurality of image segments of an image based on the light from the environment, the image representing a field of view (FOV) of the outward facing image capture device, the FOV comprising a portion of an environment and including a plurality of sparse points, wherein each sparse point is identifiable in part based on a corresponding subset of the plurality of image segments;
non-transitory data storage configured to sequentially receive the plurality of image segments from the image sensor and store instructions for estimating at least one of a position and orientation of the HMD within the environment; and
at least one hardware processor operably coupled to the non-transitory data storage and configured by the instructions to:
sequentially identify one or more sparse points of the plurality of sparse points when each subset of image segments corresponding to the one or more sparse points is received at the non-transitory data storage, and
estimate the at least one of the position and orientation of the HMD within the environment based on [ a sliding set comprising a predetermined number of most recently identified sparse points selected from ] the identified sparse points,
[ wherein each individual estimation is based on a different set of sparse points relative to a set of sparse points used for a preceding estimation] .
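The sequential flow recited in amended claim 9 can be sketched as below. This is an illustrative sketch only; `track_pose`, `point_for_segments`, and `estimate_pose` are hypothetical names, not from the patent. As image segments arrive, a sparse point is identified once its corresponding subset of segments is complete, and the pose is re-estimated from a sliding window of the most recently identified points, so each estimation uses a different set than the preceding one.

```python
from collections import deque

def track_pose(segment_stream, point_for_segments, window_size, estimate_pose):
    """Illustrative sketch: identify sparse points as segments arrive and
    re-estimate pose from a sliding window of the newest points."""
    sliding = deque(maxlen=window_size)  # retains only the most recent points
    estimates = []
    buffered = []
    for segment in segment_stream:
        buffered.append(segment)
        # point_for_segments returns a sparse point once the point's
        # segment subset is complete in the buffer, else None.
        point = point_for_segments(buffered)
        if point is not None:
            sliding.append(point)
            # Each estimation sees a window differing from the previous one.
            estimates.append(estimate_pose(list(sliding)))
    return estimates
```

The `deque` with `maxlen` automatically discards the oldest point when a new one arrives, matching the "sliding set" behavior.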
[ 21. The imaging system of claim 1,
wherein the image sensor is further configured to sequentially capture a third plurality of image segments captured after the second plurality of image segments and forming at least a further portion of the image, and
wherein the hardware processor is further configured to:
identify a third group of sparse points based in part on the third plurality of image segments, and
update the at least one of the position or orientation of the image capture device within the environment based at least in part on an updated sliding set of sparse points comprising at least the most recently identified sparse points selected first from the third group and from the second group.]
[ 22. The imaging system of claim 21, wherein the hardware processor is further configured to update the at least one of the position or orientation of the image capture device within the environment based at least in part on the updated sliding set of sparse points comprising the predetermined number of most recently identified sparse points selected first from the third group, from the second group, and from the first group.]
[ 23. An imaging system comprising:
an image capture device comprising a lens and an image sensor, the lens configured to direct light from an environment surrounding the image capture device to the image sensor, the image sensor configured to:
sequentially capture a first plurality of image segments of an image based on the light from the environment, the image representing a field of view (FOV) of the image capture device, the FOV comprising a portion of the environment and including a plurality of sparse points, and
sequentially capture a second plurality of image segments, the second plurality of image segments captured after the first plurality of image segments and forming at least another portion of the image;
non-transitory data storage configured to sequentially receive the first and second plurality of image segments from the image sensor and store instructions for estimating at least one of a position and orientation of the image capture device within the environment; and at least one hardware processor operably coupled to the non-transitory data storage and configured by the instructions to:
identify a first group of sparse points based in part on a corresponding subset of the first plurality of image segments, the first group of sparse points identified as the first plurality of image segments are received at the non-transitory data storage,
determine at least one of the position and orientation of the image capture device within the environment based on the first group of sparse points,
identify a second group of sparse points based in part on a corresponding subset of the second plurality of image segments, the second group of sparse points identified as the second plurality of image segments are received at the non-transitory data storage, and
update the at least one of the position and orientation of the image capture device within the environment based on a sliding set of sparse points comprising sparse points selected from the second group of sparse points and from the first group of sparse points,
wherein the hardware processor is configured to update the at least one of the position and orientation of the image capture device based on the sliding set of sparse points that comprises a predetermined number of sparse points greater than a number of sparse points in the second group of sparse points, and wherein the sliding set of sparse points includes all of the sparse points from the second group of sparse points and a remainder of the predetermined number of sparse points from the first group of sparse points,
wherein the image sensor is further configured to sequentially capture a third plurality of image segments captured after the second plurality of image segments and forming at least a further portion of the image, and
wherein the hardware processor is further configured to:
identify a third group of sparse points based in part on the third plurality of image segments, and
update the at least one of the position or orientation of the image capture device within the environment based at least in part on an updated sliding set of sparse points comprising at least the most recently identified sparse points selected first from the third group and from the second group,
wherein the updated sliding set of sparse points is formed with a predetermined number of sparse points that is greater than a number of sparse points in the third group such that the hardware processor is further configured to update the at least one of the position or orientation of the image capture device within the environment based at least in part on the updated sliding set of sparse points comprising all of the sparse points from the third group, at least a portion of the sparse points of the second group, and, if the third group of sparse points and the second group of sparse points collectively number less than the predetermined number of sparse points in the sliding set of sparse points, at least a portion of the sparse points of the first group.]
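Claim 23's updated sliding set generalizes the selection to three groups: fill to the predetermined number newest-group-first (third, then second, then, only if those together fall short, the first). A minimal sketch, with illustrative names only:

```python
def sliding_set_from_groups(groups, predetermined_number):
    """Illustrative sketch of the updated sliding set: groups is ordered
    oldest to newest; points are selected newest-group-first until the
    predetermined number is reached."""
    selected = []
    for group in reversed(groups):  # walk groups newest-first
        take = predetermined_number - len(selected)
        if take <= 0:
            break
        # Within a group, prefer its most recently identified points.
        selected.extend(group[-take:])
    return selected
```

With three groups of sizes 3, 2, and 1 and a predetermined number of 4, the set takes the single third-group point, both second-group points, and one first-group point, as the claim describes.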
[ 24. The imaging system of claim 1, wherein the hardware processor is configured to update the at least one of the position and orientation of the image capture device according to a sliding integration method with the sliding set of sparse points.]
[ 25. The imaging system of claim 1, wherein the hardware processor is configured to exclude sparse points that are not within the predetermined number of most recently identified sparse points when updating the at least one of the position and orientation of the image capture device.]
[ 26. The imaging system of claim 1, wherein the sliding set of sparse points is a rolling set of sparse points.]
[ 27. The HMD of claim 9,
wherein the image sensor is configured to, when capturing the portion of the environment including the plurality of sparse points: sequentially capture a first plurality of image segments and sequentially capture a second plurality of image segments, the second plurality of image segments captured after the first plurality of image segments;
wherein the at least one hardware processor is further configured to, when sequentially identifying the one or more sparse points, identify a first group of the sparse points based in part on a corresponding subset of the first plurality of image segments and identify a second group of sparse points based in part on a corresponding subset of the second plurality of image segments; and
wherein the at least one hardware processor is further configured to, when estimating the at least one of the position and orientation of the HMD within the environment, estimate the at least one of the position and orientation of the HMD within the environment based on the sliding set that comprises sparse points selected from the second group of sparse points and from the first group of sparse points.]
[ 28. The HMD of claim 27, wherein the hardware processor is configured to update the at least one of the position and orientation of the HMD based on the sliding set of sparse points that comprises a predetermined number of sparse points greater than a number of sparse points in the second group of sparse points, and wherein the sliding set of sparse points includes all of the sparse points from the second group of sparse points and a remainder of the predetermined number of sparse points from the first group of sparse points.]
[ 29. The HMD of claim 27,
wherein the image sensor is further configured to sequentially capture a third plurality of image segments captured after the second plurality of image segments and forming at least a further portion of the image, and
wherein the hardware processor is further configured to:
identify a third group of sparse points based in part on the third plurality of image segments, and
wherein the at least one hardware processor is further configured to, when estimating the at least one of the position and orientation of the HMD within the environment, estimate the at least one of the position and orientation of the HMD within the environment based on the sliding set that comprises a predetermined number of the sparse points such that the hardware processor is further configured to estimate the at least one of the position or orientation of the HMD within the environment based at least in part on the sliding set of sparse points comprising all of the sparse points from the third group, at least a portion of the sparse points of the second group, and, if the third group of sparse points and the second group of sparse points collectively number less than the predetermined number of sparse points in the sliding set of sparse points, at least a portion of the sparse points of the first group.]
[ 30. The HMD of claim 9, wherein the hardware processor is configured to update the at least one of the position and orientation of the HMD according to a sliding integration method with the sliding set of sparse points.]