US 11,681,803 B2
Malware identification using multiple artificial neural networks
Xu Yang, Burnaby (CA)
Assigned to Fortinet, Inc., Sunnyvale, CA (US)
Filed by Fortinet, Inc., Sunnyvale, CA (US)
Filed on Sep. 30, 2020, as Appl. No. 17/039,758.
Application 17/039,758 is a continuation of application No. 16/053,479, filed on Aug. 2, 2018.
Prior Publication US 2021/0019402 A1, Jan. 21, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 21/56 (2013.01); G06N 3/084 (2023.01); H04L 9/40 (2022.01); G06F 18/21 (2023.01); G06N 3/045 (2023.01)
CPC G06F 21/566 (2013.01) [G06F 18/2185 (2023.01); G06F 21/56 (2013.01); G06N 3/045 (2023.01); G06N 3/084 (2013.01); H04L 63/1416 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method of training a deep neural network model for classification of malware performed by one or more processors of one or more computer systems, the method comprising:
for each training sample of a plurality of training samples, including malware samples and benign samples in the form of executable files, performing a supervised learning process, including:
generating a plurality of code blocks of assembly language instructions by disassembling machine language instructions contained within the training sample;
extracting dynamic features corresponding to each of the plurality of code blocks by executing each of the plurality of code blocks within a virtual environment;
for each code block of the plurality of code blocks:
feeding the code block into a first neural network; and
feeding the corresponding dynamic features for the code block into a second neural network;
updating weights and biases of the first neural network and weights and biases of the second neural network based on whether the training sample was a malware sample or a benign sample; and
after processing a predetermined or configurable number of the plurality of training samples, causing the first neural network and the second neural network to criticize each other and to unify their respective weights and biases by exchanging their respective weights and biases and adjusting their respective weights and biases.
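
The sketches below illustrate one plausible reading of the claimed steps in Python. All architectures, feature choices, names, and hyperparameters are assumptions for illustration only, not details disclosed in the claim. First, generating code blocks of assembly language instructions by disassembling a sample's machine code; the sketch uses the Capstone disassembler and splits blocks at control-transfer instructions, which is an assumed block boundary the claim does not itself specify:

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def disassemble_into_blocks(machine_code: bytes, base_addr: int = 0x1000):
    """Disassemble raw machine code and split it into code blocks.

    Splitting at jumps/calls/returns is an assumption; the claim only
    requires generating blocks of assembly instructions by disassembly.
    """
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    blocks, current = [], []
    for insn in md.disasm(machine_code, base_addr):
        current.append(f"{insn.mnemonic} {insn.op_str}".strip())
        # end the current block at any control-transfer instruction
        if insn.mnemonic.startswith("j") or insn.mnemonic in ("call", "ret"):
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks
```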
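Next, extracting dynamic features by executing each code block within a virtual environment. The claim does not name a sandbox; as a stand-in, this sketch emulates a block with the Unicorn CPU emulator and derives a deliberately toy two-element feature vector (instructions executed, distinct addresses touched). A production system would log richer behavior:

```python
from unicorn import Uc, UC_ARCH_X86, UC_MODE_64, UC_HOOK_CODE, UcError

BASE = 0x1000  # arbitrary load address for the emulated block

def extract_dynamic_features(block_bytes: bytes):
    """Execute one code block in an emulator and summarize its behavior."""
    mu = Uc(UC_ARCH_X86, UC_MODE_64)
    mu.mem_map(BASE, 2 * 1024 * 1024)  # page-aligned scratch region
    mu.mem_write(BASE, block_bytes)
    executed = []
    mu.hook_add(UC_HOOK_CODE, lambda uc, addr, size, user: executed.append(addr))
    try:
        # bound the run so a misbehaving block cannot hang training
        mu.emu_start(BASE, BASE + len(block_bytes), timeout=10_000, count=1_000)
    except UcError:
        pass  # a faulting block still yields a (partial) feature vector
    return [float(len(executed)), float(len(set(executed)))]
```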
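Then, the per-block supervised step: the assembly code block is fed into a first neural network, its dynamic features into a second, and the weights and biases of both are updated by backpropagation against the sample's malware/benign label (consistent with the G06N 3/084 classification). The token hashing, the network shapes, and the summing of the two logits are all assumptions; the claim does not specify how the two networks' outputs are combined:

```python
import torch
import torch.nn as nn

VOCAB = 4096  # assumed size of a hashed assembly-token vocabulary

class CodeBlockNet(nn.Module):
    """First network: consumes token ids of one assembly code block."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB, dim)
        self.head = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, token_ids):            # token_ids: (1, seq_len)
        return self.head(self.embed(token_ids))

class DynamicFeatureNet(nn.Module):
    """Second network: consumes the block's dynamic feature vector."""
    def __init__(self, n_feats: int = 2):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(n_feats, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats):                # feats: (1, n_feats)
        return self.head(feats)

def tokenize(block):
    """Hash each instruction string into the assumed vocabulary."""
    return torch.tensor([[hash(ins) % VOCAB for ins in block]])

net1, net2 = CodeBlockNet(), DynamicFeatureNet()
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_on_block(block, dyn_feats, is_malware: bool):
    """One supervised update for one code block of one training sample."""
    label = torch.tensor([[1.0 if is_malware else 0.0]])
    logit = net1(tokenize(block)) + net2(torch.tensor([dyn_feats]))
    loss = loss_fn(logit, label)
    opt.zero_grad()
    loss.backward()                          # backpropagation through both nets
    opt.step()
    return loss.item()
```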
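Finally, the periodic "criticize and unify" step. The claim leaves the exchange-and-adjust mechanics open, so the sketch below simply blends the shape-compatible weights and biases of the two networks' classifier heads; this parameter averaging is an assumed stand-in for whatever mutual-criticism adjustment the specification describes. A training loop would call unify_networks(net1, net2) after every UNIFY_EVERY samples, matching the claim's "predetermined or configurable number" trigger:

```python
UNIFY_EVERY = 100  # assumed predetermined/configurable sample count

@torch.no_grad()
def unify_networks(a: nn.Module, b: nn.Module, alpha: float = 0.5):
    """Exchange and blend weights/biases of shape-compatible layers."""
    for p_a, p_b in zip(a.head.parameters(), b.head.parameters()):
        if p_a.shape != p_b.shape:
            continue  # only structurally identical layers can exchange weights
        blended = alpha * p_a + (1.0 - alpha) * p_b
        p_a.copy_(blended)
        p_b.copy_(blended)
```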