US 11,756,161 B2
Method and system for generating multi-task learning-type generative adversarial network for low-dose PET reconstruction
Zhanli Hu, Guangdong (CN); Hairong Zheng, Guangdong (CN); Na Zhang, Guangdong (CN); Xin Liu, Guangdong (CN); Dong Liang, Guangdong (CN); Yongfeng Yang, Guangdong (CN); and Hanyu Sun, Guangdong (CN)
Assigned to SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY, Guangdong (CN)
Filed by SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY, Guangdong (CN)
Filed on Jun. 7, 2021, as Appl. No. 17/340,117.
Application 17/340,117 is a continuation of application No. PCT/CN2020/135332, filed on Dec. 10, 2020.
Prior Publication US 2022/0188978 A1, Jun. 16, 2022
Int. Cl. G06T 5/00 (2006.01); A61B 6/03 (2006.01); G06N 3/08 (2023.01); G06N 3/045 (2023.01)
CPC G06T 5/00 (2013.01) [A61B 6/037 (2013.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/10104 (2013.01); G06T 2207/20081 (2013.01)] 8 Claims
OG exemplary drawing
 
1. A method for generating a multi-task learning-type generative adversarial network for low-dose positron emission tomography (PET) reconstruction, comprising steps of:
providing an encoder and a decoder, and connecting layers of the encoder with layers of the decoder by skip connections to provide a U-Net type picture generator;
generating a group of generative adversarial networks by matching a plurality of picture generators with a plurality of discriminators in a one-to-one manner, wherein the plurality of picture generators use an input modality as a conditional input and use generation of desired PET images as a learning objective, the plurality of discriminators use an input modality of a corresponding picture generator, a tag image corresponding to the input modality, and an output result as an input, and, in each group of the generative adversarial networks, the input modality comprises at least a low-dose PET image and a magnetic resonance (MR) image of a same picture object;
allowing the plurality of generative adversarial networks in the group of the generative adversarial networks to learn in parallel and allowing the picture generators of all the generative adversarial networks to share shallow information, to provide a first multi-task learning-type generative adversarial network;
evaluating parameters of the first multi-task learning-type generative adversarial network by using a standard-dose PET picture as a tag picture corresponding to the input modality and using an L1-type loss function and a cross-entropy loss function, and designing a joint loss function for improving image quality according to the output results of the picture generators, the tag picture, and the output results of the discriminators; and
training the first multi-task learning-type generative adversarial network according to the joint loss function in combination with an optimizer to provide a second multi-task learning-type generative adversarial network.
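The first step of the claim specifies the generator architecture: an encoder and a decoder whose layers are joined by skip connections to form a U-Net type picture generator. The PyTorch sketch below is a minimal, assumed realization of that structure; the depth, channel widths (`base`), kernel sizes, and activations are illustrative choices rather than details taken from the patent.

```python
# Minimal sketch (assumed details) of the U-Net type picture generator:
# an encoder and a decoder whose layers are joined by skip connections.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; an assumed encoder/decoder unit.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class UNetGenerator(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, base=32):
        super().__init__()
        # Encoder path.
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        # Decoder path; channels double after concatenating the skips.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_channels, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections: encoder features are concatenated with
        # the corresponding decoder features.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```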
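The second and third steps pair each picture generator with a discriminator and let the GANs in the group learn in parallel while the generators share shallow information across modalities (at least a low-dose PET image and an MR image of the same object). The sketch below reuses `conv_block` and `UNetGenerator` from the previous sketch; the conditional patch discriminator and the shared shallow stem placed in front of two otherwise separate generators are assumed realizations of the one-to-one matching and of "sharing shallow information", not details recited in the patent.

```python
# Sketch (assumed details) of the group of generative adversarial networks:
# generators paired one-to-one with discriminators, conditioned on different
# modalities of the same object, with shared shallow layers.
import torch
import torch.nn as nn


class PatchDiscriminator(nn.Module):
    # Conditional discriminator: it sees the input modality concatenated with
    # either the tag (standard-dose) image or the generator output, which is
    # one assumed reading of the inputs listed in the claim.
    def __init__(self, in_channels=2, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, condition, image):
        return self.net(torch.cat([condition, image], dim=1))


class MultiTaskGenerators(nn.Module):
    # Two task-specific U-Net generators behind a shared shallow stem, so the
    # low-dose PET task and the MR task exchange shallow information.
    def __init__(self, base=32):
        super().__init__()
        self.shared_shallow = conv_block(1, base)           # shared shallow layers
        self.pet_branch = UNetGenerator(in_channels=base)   # low-dose PET task
        self.mr_branch = UNetGenerator(in_channels=base)    # MR task

    def forward(self, low_dose_pet, mr_image):
        pet_out = self.pet_branch(self.shared_shallow(low_dose_pet))
        mr_out = self.mr_branch(self.shared_shallow(mr_image))
        return pet_out, mr_out
```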
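The last two steps evaluate the network against a standard-dose PET tag picture with an L1-type loss and a cross-entropy loss, combine them into a joint loss for image quality, and train the network with an optimizer. The sketch below shows one assumed combination of the two terms and a single training step for one modality; the weighting factor `lambda_l1`, the Adam optimizer and its hyperparameters, the function names, and the random tensors standing in for real image pairs are all illustrative assumptions. It reuses `UNetGenerator` and `PatchDiscriminator` from the sketches above.

```python
# Sketch (assumed details) of the joint loss and one training step.
import torch
import torch.nn as nn

l1_loss = nn.L1Loss()
bce_loss = nn.BCEWithLogitsLoss()  # cross-entropy on real/fake logits


def generator_joint_loss(disc_fake_logits, fake_pet, tag_pet, lambda_l1=100.0):
    # Adversarial term: the generator tries to make the discriminator
    # classify its output as real (target = 1).
    adv = bce_loss(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Fidelity term: L1 distance to the standard-dose PET tag picture.
    fid = l1_loss(fake_pet, tag_pet)
    return adv + lambda_l1 * fid


def discriminator_loss(disc_real_logits, disc_fake_logits):
    real = bce_loss(disc_real_logits, torch.ones_like(disc_real_logits))
    fake = bce_loss(disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)


# One illustrative training step for a single low-dose PET / tag picture pair.
generator = UNetGenerator()
discriminator = PatchDiscriminator()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

low_dose = torch.randn(1, 1, 64, 64)   # stand-in for a low-dose PET slice
tag = torch.randn(1, 1, 64, 64)        # stand-in for the standard-dose tag

# Discriminator update: real = (condition, tag), fake = (condition, output).
fake = generator(low_dose).detach()
d_loss = discriminator_loss(discriminator(low_dose, tag),
                            discriminator(low_dose, fake))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator update with the joint (cross-entropy + L1) loss.
fake = generator(low_dose)
g_loss = generator_joint_loss(discriminator(low_dose, fake), fake, tag)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```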