CPC H04S 7/303 (2013.01) [G02B 27/0101 (2013.01); G02B 27/0172 (2013.01); G06T 19/006 (2013.01); H04R 1/403 (2013.01); H04R 3/12 (2013.01); G02B 2027/0138 (2013.01); H04R 2430/20 (2013.01); H04R 2499/15 (2013.01); H04S 2400/11 (2013.01); H04S 2420/01 (2013.01)]

AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1-3, 9-12 and 18-20 are determined to be patentable as amended.
Claims 4-8 and 13-17, dependent on an amended claim, are determined to be patentable.
New claims 21-36 are added and determined to be patentable.
1. A method of presenting audio signals in a mixed reality environment, the method comprising:
identifying a first ear listener position in the mixed reality environment;
identifying a second ear listener position in the mixed reality environment [ , wherein the second ear listener position is distinct from the first ear listener position] ;
identifying a first virtual sound source in the mixed reality environment;
identifying a first object in the mixed reality environment;
determining a first [ virtual ] audio signal in the mixed reality environment, wherein the first [ virtual ] audio signal [ propagates along a first vector that ] originates at the first virtual sound source and intersects the first ear listener position;
determining a second [ virtual ] audio signal in the mixed reality environment, wherein the second [ virtual ] audio signal [ propagates along a second vector that is distinct from the first vector and ] originates at the first virtual sound source;
[ in response to a determination that the first object is intersected by the second virtual audio signal, ] determining a third [ virtual ] audio signal based on the second [ virtual ] audio signal and the first object, [ wherein the third virtual audio signal propagates along a third vector distinct from the first vector, originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position] ;
presenting, via a first speaker to a first ear of a user, [ an analog audio signal representative of ] the first [ virtual ] audio signal; and
presenting, via a second speaker to a second ear of the user, [ an analog audio signal representative of ] the third [ virtual ] audio signal.
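By way of illustration only: no code appears in the certificate, and the sketch below is not the patented implementation. It shows one way the per-ear geometry recited in claim 1 could be modeled, casting a segment from the virtual sound source to each ear listener position and using a spherical proxy for the first object; the vector math, the sphere proxy, and the 0.5 occlusion gain are all assumptions.

```python
# Minimal sketch, assuming NumPy arrays for positions and a mono float
# source signal; the spherical occluder and the 0.5 gain are invented.
import numpy as np

def segment_hits_sphere(origin, target, center, radius):
    """True if the segment from origin to target intersects the sphere."""
    d = target - origin
    f = origin - center
    a = np.dot(d, d)
    b = 2.0 * np.dot(f, d)
    c = np.dot(f, f) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # the segment's line misses the sphere entirely
    t1 = (-b - np.sqrt(disc)) / (2.0 * a)
    t2 = (-b + np.sqrt(disc)) / (2.0 * a)
    return 0.0 <= t1 <= 1.0 or 0.0 <= t2 <= 1.0

def render_binaural(signal, source, ears, obj_center, obj_radius):
    """ears: (first_ear_position, second_ear_position). Returns per-ear signals."""
    out = []
    for ear in ears:
        s = signal.copy()  # unobstructed path: the "first"/"second" signal
        if segment_hits_sphere(source, ear, obj_center, obj_radius):
            # Occluded path: a crude stand-in for the "third" signal, which
            # the claim describes as propagating along a distinct vector.
            s = 0.5 * s
        out.append(s)
    return out  # index 0 -> first speaker, index 1 -> second speaker
```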
2. The method of claim 1, wherein determining the third [ virtual ] audio signal from the second [ virtual ] audio signal comprises applying a low-pass filter to the second [ virtual ] audio signal, the low-pass filter having a parameter based on the first object.
3. The method of claim 1, wherein determining the third [ virtual ] audio signal from the second [ virtual ] audio signal comprises applying an attenuation to the second [ virtual ] audio signal, a strength of the attenuation based on [ a size of ] the first object.
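Again purely illustrative: claims 2 and 3 recite a low-pass filter parameterized by the first object and an attenuation whose strength depends on the object's size. A one-pole filter is a common way to realize such occlusion coloration; the size-to-cutoff and size-to-gain mappings below are assumptions, not taken from the patent.

```python
# Minimal sketch of size-dependent occlusion, assuming a float NumPy
# signal; the constants and mappings are invented for illustration.
import numpy as np

def occlude(signal, sample_rate, object_size_m):
    cutoff_hz = max(200.0, 4000.0 / (1.0 + object_size_m))  # bigger object, darker sound
    gain = 1.0 / (1.0 + object_size_m)                      # bigger object, stronger attenuation
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.empty(len(signal))
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)  # one-pole low-pass (exponential smoothing)
        out[i] = gain * y
    return out
```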
9. The method of claim 1, further comprising:
identifying a second virtual object [ in the mixed reality environment] , wherein [ in response to a determination that the second virtual object is intersected by ] the first [ virtual ] audio signal [ , determining a fourth virtual audio signal based on the first virtual audio signal and the second virtual object; and
presenting, via the first speaker to the first ear, an analog audio signal representative of the fourth virtual audio signal] .
10. A system comprising:
a wearable head device comprising:
a display for displaying a mixed reality environment to a user, the display comprising a transmissive eyepiece through which a real environment is visible;
a first speaker configured to present audio signals to a first ear of the user; and
a second speaker configured to present audio signals to a second ear of the user; and
one or more processors configured to perform:
identifying a first ear listener position in the mixed reality environment;
identifying a second ear listener position in the mixed reality environment [ , wherein the second ear listener position is distinct from the first ear listener position] ;
identifying a first virtual sound source in the mixed reality environment;
identifying a first object in the mixed reality environment;
determining a first [ virtual ] audio signal in the mixed reality environment, wherein the first [ virtual ] audio signal [ propagates along a first vector that ] originates at the first virtual sound source and intersects the first ear listener position;
determining a second [ virtual ] audio signal in the mixed reality environment, wherein the second [ virtual ] audio signal [ propagates along a second vector that is distinct from the first vector and ] originates at the first virtual sound source;
[ in response to a determination that the first object is intersected by the second virtual audio signal, ] determining a third [ virtual ] audio signal based on the second [ virtual ] audio signal and the first object, [ wherein the third virtual audio signal propagates along a third vector distinct from the first vector, originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position] ;
presenting, via the first speaker to the first ear of the user, [ an analog audio signal representative of ] the first [ virtual ] audio signal; and
presenting, via the second speaker to the second ear of the user, [ an analog audio signal representative of ] the third [ virtual ] audio signal.
11. The system of claim 10, wherein determining the third [ virtual ] audio signal from the second [ virtual ] audio signal comprises applying a low-pass filter to the second [ virtual ] audio signal, the low-pass filter having a parameter based on the first object.
12. The system of claim 10, wherein determining the third [ virtual ] audio signal from the second [ virtual ] audio signal comprises applying an attenuation to the second [ virtual ] audio signal, a strength of the attenuation based on [ a size of ] the first object.
18. The system of claim 10, the one or more processors further configured to perform:
identifying a second virtual object [ in the mixed reality environment] , wherein [ in response to a determination that the second virtual object is intersected by ] the first [ virtual ] audio signal [ , determining a fourth virtual audio signal based on the first virtual audio signal and the second virtual object; and
presenting, via the first speaker to the first ear, an analog audio signal representative of the fourth virtual audio signal] .
19. The method of claim 1, further comprising:
determining a fourth [ virtual ] audio signal in the mixed reality environment, wherein the fourth [ virtual ] audio [ signal propagates along a fourth vector, distinct from the second vector, that ] originates at the first virtual sound source and intersects the second ear listener position without intersecting the first object; and
presenting, via the second speaker to the second ear of the user, [ an analog audio signal representative of ] the fourth [ virtual ] audio signal.
20. The system of claim 10, the one or more processors further configured to perform:
determining a fourth [ virtual ] audio signal in the mixed reality environment, wherein the fourth [ virtual ] audio [ signal propagates along a fourth vector, distinct from the second vector, that ] originates at the first virtual sound source and intersects the second ear listener position without intersecting the first object; and
presenting, via the second speaker to the second ear of the user, [ an analog audio signal representative of ] the fourth [ virtual ] audio signal.
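Claims 19 and 20 add a fourth signal that reaches the second ear directly, without intersecting the first object, alongside the occluded third signal. The claims recite presenting both signals but do not prescribe how they are combined; one plausible (and purely assumed) rendering is to sum the two components at the second speaker:

```python
# Assumed mixing of the direct ("fourth") and occluded ("third")
# components at the second ear; the combination rule is an assumption.
import numpy as np

def second_ear_mix(third_signal, fourth_signal):
    out = third_signal + fourth_signal
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out  # simple peak normalization to avoid clipping
```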
[ 21. The method of claim 1, wherein:
determining that the first vector intersects the first ear listener position without intersecting the first object;
determining the third virtual audio signal from the second virtual audio signal comprises filtering or attenuating the second virtual audio signal based on the first object; and
the first virtual audio signal is not filtered or attenuated based on the first object.]
[ 22. The method of claim 1, wherein the first ear listener position and the second ear listener position are defined relative to a coordinate system corresponding to the user.]
[ 23. The method of claim 1, wherein the first ear listener position and the second ear listener position are determined based at least in part on simultaneous localization and mapping (SLAM).]
[ 24. The method of claim 1, wherein the first ear listener position and the second ear listener position are determined based at least in part on visual odometry.]
[ 25. The method of claim 1, wherein the first ear listener position and the second ear listener position are determined based at least in part on an output of an inertial measurement unit (IMU) of a wearable head device.]
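Claims 22 through 25 tie the two listener positions to a user-relative coordinate system and to head-tracking sources (SLAM, visual odometry, an IMU). A common pattern, sketched below under assumed ear offsets, is to derive both ear positions from a single tracked head pose; nothing here is taken from the patent itself.

```python
# Minimal sketch: a head pose (position plus 3x3 world-frame rotation,
# obtained from SLAM, visual odometry, or an IMU) and fixed, assumed ear
# offsets in the user's head frame yield the two ear listener positions.
import numpy as np

HEAD_TO_FIRST_EAR = np.array([-0.09, 0.0, 0.0])   # meters; assumed values
HEAD_TO_SECOND_EAR = np.array([0.09, 0.0, 0.0])

def ear_listener_positions(head_position, head_rotation):
    first = head_position + head_rotation @ HEAD_TO_FIRST_EAR
    second = head_position + head_rotation @ HEAD_TO_SECOND_EAR
    return first, second
```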
[ 26. The method of claim 1, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with the first ear, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with the second ear.]
[ 27. The method of claim 1, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with the first speaker, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with the second speaker.]
[ 28. The method of claim 1, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with a first temple arm, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with a second temple arm.]
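Claims 26 through 28 instead associate a separate sensor with each ear, speaker, or temple arm. Under that reading, each listener position can be derived from the nearer sensor's own pose rather than from one shared head pose; the sketch and its sensor-to-ear offset are assumptions.

```python
# Minimal sketch for per-temple-arm sensors: each ear position comes from
# that side's sensor pose plus an assumed sensor-to-ear offset.
import numpy as np

SENSOR_TO_EAR = np.array([0.0, -0.02, -0.03])  # meters; assumed offset

def ear_from_sensor(sensor_position, sensor_rotation):
    """sensor_rotation: 3x3 world-frame rotation of that temple arm's sensor."""
    return sensor_position + sensor_rotation @ SENSOR_TO_EAR

# One call per side: the first ear from the first sensor's pose, the
# second ear from the second sensor's pose.
```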
[ 29. The system of claim 10, wherein:
determining that the first vector intersects the first ear listener position without intersecting the first object;
determining the third virtual audio signal from the second virtual audio signal comprises filtering or attenuating the second virtual audio signal based on the first object; and
the first virtual audio signal is not filtered or attenuated based on the first object.]
[ 30. The system of claim 10, wherein the first ear listener position and the second ear listener position are defined relative to a coordinate system corresponding to the user.]
[ 31. The system of claim 10, wherein the first ear listener position and the second ear listener position are determined based at least in part on simultaneous localization and mapping (SLAM).]
[ 32. The system of claim 10, wherein the first ear listener position and the second ear listener position are determined based at least in part on visual odometry.]
[ 33. The system of claim 10, wherein the first ear listener position and the second ear listener position are determined based at least in part on an output of an inertial measurement unit (IMU) of a wearable head device.]
[ 34. The system of claim 10, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with the first ear, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with the second ear.]
[ 35. The system of claim 10, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with the first speaker, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with the second speaker.]
[ 36. The system of claim 10, wherein the first ear listener position is determined based on a first sensor of a wearable head device which first sensor is associated with a first temple arm, and wherein the second ear listener position is determined based on a second sensor of a wearable head device which second sensor is associated with a second temple arm.]