CPC G06F 3/017 (2013.01) [G06F 21/316 (2013.01); G06F 21/35 (2013.01); G07C 9/00309 (2013.01); G07C 9/20 (2020.01); H04W 4/025 (2013.01); G07C 2009/00769 (2013.01)] | 10 Claims |
1. A gesture-based access control system for use by a user, the system comprising:
a local access assembly adapted to operate between an access state and a no-access state, the local access assembly including a controller to effect actuation between the access state and the no-access state and a signal receiver;
a mobile device in possession of the user and including an Inertial Measurement Unit (IMU) sensor system configured to detect a motion associated at least in-part with a preprogrammed gesture indicative of an intent of the user of the mobile device to gain access, and at least one of an environment detection system and an internal activity notification module configured to at least determine a location of the mobile device with respect to the user;
one or more electronic storage mediums configured to store an application and a preprogrammed gesture; and
one or more processors configured to receive information from the IMU sensor system and the at least one of the environment detection system and the internal activity notification module, execute the application to associate the information with the preprogrammed gesture, and, depending upon the association, output a command signal to the controller of the local access assembly via the signal receiver to effect actuation from the no-access state to the access state;
wherein the local access assembly is separate from the mobile device;
wherein the local access assembly is constructed to provide access through a door;
wherein the application includes a motion module configured to receive and categorize motion information from the IMU sensor system, an environment module configured to receive and categorize environment information from the environment detection system, and the internal activity notification module configured to determine a usage of the mobile device;
wherein the application includes a selection module and a plurality of mode modules, the selection module being configured to receive categorized information from the motion module, the environment module, and the internal activity notification module, and based on the categorized information select one of the plurality of mode modules, the selected mode module being configured to determine, based on the categorized information, whether the preprogrammed gesture was performed;
wherein each one of the mode modules includes an algorithm and preprogrammed motion scenario data tailored to the categorized information from the motion module, the environment module, and the internal activity notification module.
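The selection-module architecture recited above (categorized motion, environment, and activity information used to pick one of several mode modules, each holding its own algorithm and preprogrammed motion scenario data) can be illustrated with a minimal sketch. All class and field names below are illustrative assumptions, not terms from the claim, and the matching algorithm is a toy mean-absolute-deviation comparison standing in for whatever algorithm an implementation would actually use:

```python
from dataclasses import dataclass
from typing import Dict, Sequence

# Hypothetical categorized inputs; names are illustrative only.
@dataclass
class CategorizedInfo:
    motion_category: str             # from the motion module, e.g. "wrist_rotation"
    environment: str                 # from the environment module, e.g. "pocket"
    device_in_use: bool              # from the internal activity notification module
    motion_samples: Sequence[float]  # IMU motion magnitudes to match

class ModeModule:
    """One mode module: an algorithm plus preprogrammed motion scenario data."""
    def __init__(self, scenario: Sequence[float], tolerance: float):
        self.scenario = scenario    # preprogrammed motion scenario data
        self.tolerance = tolerance  # tailored to the expected noise level

    def gesture_performed(self, info: CategorizedInfo) -> bool:
        # Toy algorithm: mean absolute deviation against the scenario data.
        if len(info.motion_samples) != len(self.scenario):
            return False
        dev = sum(abs(a - b) for a, b in zip(info.motion_samples, self.scenario))
        return dev / len(self.scenario) <= self.tolerance

class SelectionModule:
    """Selects one mode module based on the categorized environment/activity."""
    def __init__(self, modes: Dict[str, ModeModule]):
        self.modes = modes

    def evaluate(self, info: CategorizedInfo) -> bool:
        key = "in_use" if info.device_in_use else info.environment
        mode = self.modes.get(key)
        return mode is not None and mode.gesture_performed(info)

# Usage: a "pocket" mode tolerates noisier motion than an "in_hand" mode.
selector = SelectionModule({
    "pocket": ModeModule(scenario=[0.0, 1.0, 0.0], tolerance=0.5),
    "in_hand": ModeModule(scenario=[0.0, 1.0, 0.0], tolerance=0.1),
})
info = CategorizedInfo("wrist_rotation", "pocket", False, [0.1, 0.9, 0.2])
assert selector.evaluate(info)  # within the looser pocket-mode tolerance
```

The design point the sketch captures is that the same raw motion is judged differently per mode: the selection module routes to a mode module whose scenario data and tolerance are tailored to the detected context, rather than applying one global threshold.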