US 11,720,854 B2
Inventory management through image and data integration
Eric M. Lee, Oakland, MI (US); Thomas Hennel, South Bend, IN (US); and Thomas Higginbotham, Fort Wayne, IN (US)
Filed by TRUEVIEW LOGISTICS TECHNOLOGY LLC, Oakland, MI (US)
Filed on Nov. 15, 2021, as Appl. No. 17/454,853.
Application 17/454,853 is a continuation-in-part of application No. 16/229,396, filed on Dec. 21, 2018, granted, now Pat. No. 11,177,036.
Claims priority of provisional application 62/609,584, filed on Dec. 22, 2017.
Prior Publication US 2022/0139539 A1, May 5, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06Q 10/087 (2023.01); G06K 7/10 (2006.01); G06K 7/14 (2006.01); G06Q 30/0601 (2023.01); G16H 40/20 (2018.01); G06V 20/20 (2022.01); G06V 10/10 (2022.01); G06V 40/20 (2022.01)
CPC G06Q 10/087 (2013.01) [G06K 7/10297 (2013.01); G06K 7/1413 (2013.01); G06Q 30/0633 (2013.01); G06V 10/17 (2022.01); G06V 20/20 (2022.01); G16H 40/20 (2018.01); G06V 40/20 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method, the method comprising:
obtaining, by one or more processors, a signal of decodable indicia;
decoding, by the one or more processors, the signal of decodable indicia to access decoded data, wherein the decoded data comprises information identifying an object, wherein the object comprises a plurality of items;
based on the information identifying the object, obtaining, by the one or more processors, from a memory, a visual representation of a portion of the object, wherein the visual representation is divided into a plurality of regions and each region represents an item of the plurality of items;
based on identifying the object, obtaining, by the one or more processors, data comprising descriptive text characterizing the portion of the object, wherein the descriptive text comprises quantitative data related to the portion of the object;
displaying, by the one or more processors, the visual representation as a three-dimensional image and the descriptive text, via a device, wherein the device is selected from the group consisting of: an augmented reality device and a virtual reality device, wherein the three-dimensional image comprises a virtual projection in three-dimensional space in a range of view of a user utilizing the device, wherein the device comprises a user interface;
obtaining, by the one or more processors, via the user interface, a designation of at least one of the plurality of regions in the visual representation;
based on obtaining the designation, executing an action, wherein the action changes a quantitative or a qualitative element of the descriptive text for an item represented by the at least one of the plurality of regions; and
updating, by the one or more processors, concurrently with the executing, the descriptive text in the visual representation to reflect the changed quantitative or qualitative element.
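For readers tracing the claimed steps, the following is a minimal, non-authoritative sketch of how the method of claim 1 might be organized in software. It is not the patented implementation: every identifier (ObjectRecord, Region, decode_indicia, display, on_region_designated, STORE) is hypothetical, the in-memory dictionary stands in for "a memory," and the AR/VR projection is stubbed with console output.

```python
"""Illustrative sketch of the method of claim 1 (hypothetical names throughout)."""
from dataclasses import dataclass, field


@dataclass
class Region:
    """One region of the visual representation; represents a single item."""
    item_id: str
    quantity: int          # quantitative element of the descriptive text
    condition: str = "OK"  # qualitative element of the descriptive text


@dataclass
class ObjectRecord:
    """Object identified by the decoded indicia (e.g., a container of items)."""
    object_id: str
    regions: list = field(default_factory=list)

    def descriptive_text(self) -> str:
        return "; ".join(
            f"{r.item_id}: qty={r.quantity}, cond={r.condition}" for r in self.regions
        )


# Hypothetical in-memory store standing in for the claim's "memory".
STORE = {
    "BIN-0001": ObjectRecord(
        "BIN-0001",
        [Region("widget-A", 12), Region("widget-B", 7)],
    )
}


def decode_indicia(signal: bytes) -> str:
    """Decode the signal of decodable indicia into an object identifier.
    A real system would decode a barcode/RFID payload; here we assume UTF-8 text."""
    return signal.decode("utf-8")


def display(record: ObjectRecord) -> None:
    """Stand-in for projecting the three-dimensional representation and its
    descriptive text through an AR/VR device; here we simply print to the console."""
    print(f"[{record.object_id}] {record.descriptive_text()}")


def on_region_designated(record: ObjectRecord, region_index: int, action) -> None:
    """Apply an action to the designated region, then immediately refresh the
    display; a simplified stand-in for executing the action and updating the
    descriptive text 'concurrently with the executing'."""
    action(record.regions[region_index])
    display(record)  # updated text reflects the changed quantitative/qualitative element


if __name__ == "__main__":
    object_id = decode_indicia(b"BIN-0001")   # obtain and decode the indicia
    record = STORE[object_id]                 # obtain visual representation and data
    display(record)                           # display representation and descriptive text
    # The user designates region 0 via the device UI; the action decrements quantity.
    on_region_designated(record, 0, lambda r: setattr(r, "quantity", r.quantity - 1))
```

The sketch keeps the per-region quantitative and qualitative elements on the Region objects so that a single designated region can be acted on and the composite descriptive text regenerated in one pass, which is one straightforward way to mirror the claim's final two steps.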