US 11,809,954 B2
Method of performing learning of deep neural network and apparatus thereof
Sungho Kang, Suwon-si (KR); Hyungdal Kwon, Suwon-si (KR); Cheon Lee, Suwon-si (KR); and Yunjae Lim, Suwon-si (KR)
Assigned to SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
Filed by SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
Filed on Feb. 21, 2019, as Appl. No. 16/281,737.
Claims priority of application No. 10-2018-0020005 (KR), filed on Feb. 20, 2018.
Prior Publication US 2019/0258932 A1, Aug. 22, 2019
Int. Cl. G06N 3/08 (2023.01); G06N 3/04 (2023.01); G06F 7/58 (2006.01); G06N 3/082 (2023.01)
CPC G06N 3/082 (2013.01) [G06F 7/58 (2013.01); G06N 3/04 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An encoding apparatus comprising:
a memory storing a random number sequence generated by a random number generator; and
at least one processor configured to:
receive dropout information of a deep neural network, the dropout information indicating a ratio between connected edges and disconnected edges of a plurality of first edges included in a first layer of the deep neural network,
generate an edge sequence indicating connection or disconnection of a plurality of second edges included in a second layer of the deep neural network by modifying the random number sequence based on a comparison between the dropout information and a target ratio value, and
output the edge sequence such that the connection or the disconnection of the plurality of second edges included in the second layer of the deep neural network is reconfigured,
wherein the at least one processor is further configured to modify the random number sequence such that a ratio value of a number of bits having a value of 0 and 1 in an entirety of the deep neural network is the target ratio value.
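The claim describes generating a dropout mask for a layer's edges from a stored random number sequence, then modifying that sequence so the overall proportion of connected (1) to disconnected (0) edges matches a target ratio. The sketch below is an illustrative reading of that idea, not the patented implementation; the helper name generate_edge_sequence, the seed parameter, and the bit-flipping adjustment strategy are assumptions for demonstration only.

```python
# Minimal sketch (assumed, not the claimed implementation): build an edge
# sequence of 0/1 bits from a pseudorandom source, then modify it until the
# ratio of 1s (connected edges) to total bits matches a target keep ratio.
import random


def generate_edge_sequence(length, target_keep_ratio, seed=0):
    """Return a list of 0/1 bits whose fraction of 1s equals target_keep_ratio.

    Hypothetical helper illustrating the ratio-adjustment idea in the claim.
    """
    rng = random.Random(seed)

    # Step 1: raw random bit sequence (standing in for the sequence stored in memory).
    bits = [rng.randint(0, 1) for _ in range(length)]

    # Step 2: compare the observed count of connected (1) edges with the target.
    target_ones = round(target_keep_ratio * length)
    ones = sum(bits)

    # Step 3: modify the sequence by flipping randomly chosen bits until the
    # counts of 0s and 1s reach the target ratio.
    while ones != target_ones:
        i = rng.randrange(length)
        if ones < target_ones and bits[i] == 0:
            bits[i] = 1
            ones += 1
        elif ones > target_ones and bits[i] == 1:
            bits[i] = 0
            ones -= 1
    return bits


# Example: keep 70% of edges (30% dropout) across a layer with 1000 edges.
mask = generate_edge_sequence(1000, target_keep_ratio=0.7)
assert abs(sum(mask) / len(mask) - 0.7) < 1e-9
```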