US 11,757,947 B2
Asymmetric collaborative virtual environments
Rebecca A. Little, Mesa, AZ (US); Bryan R. Nussbaum, Bloomington, IL (US); An Ho, Phoenix, AZ (US); Nathan C. Summers, Mesa, AZ (US); Tyler Reeves, Mesa, AZ (US); and Jacob Simonson, Tempe, AZ (US)
Assigned to State Farm Mutual Automobile Insurance Company, Bloomington, IL (US)
Filed by State Farm Mutual Automobile Insurance Company, Bloomington, IL (US)
Filed on Sep. 28, 2022, as Appl. No. 17/955,007.
Application 17/955,007 is a continuation of application No. 17/308,757, filed on May 5, 2021, granted, now 11,489,884.
Application 17/308,757 is a continuation of application No. 16/397,407, filed on Apr. 29, 2019, granted, now 11,032,328, issued on Jun. 8, 2021.
Prior Publication US 2023/0031290 A1, Feb. 2, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 65/1069 (2022.01); H04L 65/1083 (2022.01); H04L 65/403 (2022.01); H04L 65/401 (2022.01); H04L 67/131 (2022.01); H04L 12/18 (2006.01)
CPC H04L 65/1069 (2013.01) [H04L 12/1827 (2013.01); H04L 65/1083 (2013.01); H04L 65/401 (2022.05); H04L 65/403 (2013.01); H04L 67/131 (2022.05)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for virtual collaboration, comprising:
providing, with a processor, a high-fidelity virtual environment representing a physical location to a first interface device, wherein the high-fidelity virtual environment includes data tools for data capture or presentation;
providing, with the processor, a low-fidelity virtual environment representing the physical location to a second interface device, wherein the physical location includes a structure;
generating a first model of the structure;
generating a second model of the structure, wherein the second model renders the structure in a lower image resolution than the first model;
storing the first model and the second model;
receiving, with the processor, user interaction data from the first interface device and associated with one of the data tools, wherein the user interaction data indicates one or more of a portion of the physical location or a viewing perspective relative to the physical location;
generating, with the processor and based on the user interaction data, a high-fidelity response corresponding to the high-fidelity virtual environment and a low-fidelity response corresponding to the low-fidelity virtual environment; and
synchronizing, by the processor, the high-fidelity virtual environment and the low-fidelity virtual environment by:
providing the high-fidelity response to the first interface device, and
providing the low-fidelity response to the second interface device, wherein the synchronizing causes the high-fidelity virtual environment and the low-fidelity virtual environment to be implemented on respective devices simultaneously.
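The claimed method describes a server-side flow: one structure is stored as two models at different image resolutions, each model backs a virtual environment delivered to a different device, and a single stream of user interaction data from the high-fidelity device drives fidelity-matched updates to both environments. The Python sketch below is only an illustrative reading of that flow under stated assumptions; the class and method names (CollaborationServer, InteractionData, load_environment, apply_update, and so on) are hypothetical and do not come from the patent specification, and the device objects are assumed to expose simple load/update hooks.

```python
"""Illustrative sketch of the claim-1 flow. All names are hypothetical."""

from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class InteractionData:
    """Interaction received from the first (high-fidelity) interface device."""
    tool_id: str                                   # which data-capture/presentation tool was used
    location_portion: Optional[str] = None         # portion of the physical location indicated
    viewing_perspective: Optional[Tuple[float, ...]] = None  # perspective relative to the location


@dataclass
class StructureModels:
    """Two stored renderings of the same structure at the physical location."""
    high_res_model: bytes   # first model of the structure
    low_res_model: bytes    # second model, rendered at a lower image resolution


class CollaborationServer:
    """Hypothetical processor-side coordinator for the two virtual environments."""

    def __init__(self, models: StructureModels) -> None:
        self.models = models
        self.devices: Dict[str, object] = {}  # "first" -> high-fidelity, "second" -> low-fidelity

    def provide_environments(self, first_device, second_device) -> None:
        # Provide the high-fidelity environment (with data tools) to the first device
        # and the low-fidelity environment to the second device. The claim only
        # recites data tools for the high-fidelity environment.
        self.devices["first"] = first_device
        self.devices["second"] = second_device
        first_device.load_environment(self.models.high_res_model, tools_enabled=True)
        second_device.load_environment(self.models.low_res_model, tools_enabled=False)

    def handle_interaction(self, interaction: InteractionData) -> None:
        # From one interaction, generate a high-fidelity response and a low-fidelity
        # response, then push each to its device so both environments update
        # simultaneously (the synchronizing step).
        high_response = self._render_response(interaction, self.models.high_res_model)
        low_response = self._render_response(interaction, self.models.low_res_model)
        self.devices["first"].apply_update(high_response)
        self.devices["second"].apply_update(low_response)

    def _render_response(self, interaction: InteractionData, model: bytes) -> dict:
        # Placeholder: project the indicated portion and viewing perspective onto
        # the given model at that model's resolution.
        return {
            "model": model,
            "portion": interaction.location_portion,
            "perspective": interaction.viewing_perspective,
            "tool": interaction.tool_id,
        }
```

On this reading, the asymmetry lives entirely in the stored models and the per-device responses: the server consumes one interaction stream but renders it twice, once against each model, which is one way the two environments could remain synchronized on both devices at the same time as the claim requires.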