US 11,706,423 B2
Inter-prediction method and apparatus for same
Hui Yong Kim, Daejeon (KR); Gwang Hoon Park, Seongnam-si (KR); Kyung Yong Kim, Suwon-si (KR); Sung Chang Lim, Daejeon (KR); Jin Ho Lee, Daejeon (KR); Jin Soo Choi, Daejeon (KR); and Jin Woong Kim, Daejeon (KR)
Assigned to UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY, Yongin-si (KR)
Filed by Electronics and Telecommunications Research Institute, Daejeon (KR); and University-Industry Cooperation Group of Kyung Hee University, Yongin-si (KR)
Filed on Jul. 5, 2022, as Appl. No. 17/857,765.
Application 17/857,765 is a continuation of application No. 17/198,720, filed on Mar. 11, 2021, granted, now 11,412,231.
Application 17/198,720 is a continuation of application No. 16/696,319, filed on Nov. 26, 2019, granted, now 10,986,348, issued on Apr. 20, 2021.
Application 16/696,319 is a continuation of application No. 16/107,208, filed on Aug. 21, 2018, granted, now 10,536,704, issued on Jan. 14, 2020.
Application 16/107,208 is a continuation of application No. 15/810,867, filed on Nov. 13, 2017, granted, now 10,085,031, issued on Sep. 25, 2018.
Application 15/810,867 is a continuation of application No. 15/337,309, filed on Oct. 28, 2016, granted, now 9,854,248, issued on Dec. 26, 2017.
Application 15/337,309 is a continuation of application No. 14/127,617, granted, now 9,532,042, issued on Dec. 27, 2016, previously published as PCT/KR2012/004882, filed on Jun. 20, 2012.
Claims priority of application No. 10-2011-0060285 (KR), filed on Jun. 21, 2011; application No. 10-2011-0065714 (KR), filed on Jul. 1, 2011; application No. 10-2011-0066173 (KR), filed on Jul. 4, 2011; application No. 10-2012-0005948 (KR), filed on Jan. 18, 2012; and application No. 10-2012-0066191 (KR), filed on Jun. 20, 2012.
Prior Publication US 2022/0345719 A1, Oct. 27, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 19/159 (2014.01); H04N 19/105 (2014.01); H04N 19/109 (2014.01); H04N 19/117 (2014.01); H04N 19/124 (2014.01); H04N 19/13 (2014.01); H04N 19/139 (2014.01); H04N 19/15 (2014.01); H04N 19/176 (2014.01); H04N 19/52 (2014.01); H04N 19/61 (2014.01)
CPC H04N 19/159 (2014.11) [H04N 19/105 (2014.11); H04N 19/109 (2014.11); H04N 19/117 (2014.11); H04N 19/124 (2014.11); H04N 19/13 (2014.11); H04N 19/139 (2014.11); H04N 19/15 (2014.11); H04N 19/176 (2014.11); H04N 19/52 (2014.11); H04N 19/61 (2014.11)] 11 Claims
OG exemplary drawing
 
1. A method for decoding a video signal, the method comprising:
determining a prediction mode for a current block as an inter prediction mode;
determining a temporal motion information reference picture including temporal motion information of the current block from a reference picture list;
determining a temporal motion information reference block from the temporal motion information reference picture based on a spatial location of the current block;
determining the temporal motion information of the current block from the temporal motion information reference block;
determining a motion candidate list of the current block by using spatial motion information of the current block from neighboring blocks spatially adjacent to the current block and the temporal motion information of the current block from neighboring blocks temporally adjacent to the current block;
determining motion information of the current block based on the motion candidate list and a motion candidate index;
determining a prediction method for the current block between uni-prediction and bi-prediction according to a size of the current block;
generating a prediction block of the current block by predicting the current block based on the prediction method and the motion information; and
reconstructing the current block based on the prediction block of the current block,
wherein the motion candidate index indicates a motion candidate included in the motion candidate list;
wherein the motion information of the current block includes prediction direction information and both of L0 motion information and L1 motion information,
wherein, when the prediction direction information of the current block indicates a bi-prediction method and a size of the current block is smaller than a pre-determined size, the prediction direction information indicating the bi-prediction method is changed to indicate a uni-prediction method and the prediction block of the current block is generated by performing motion compensation based on only the L0 motion information among the L0 motion information and the L1 motion information, and
wherein, when the prediction direction information indicates the bi-prediction method and the size of the current block is equal to or greater than the pre-determined size, the prediction direction information indicating the bi-prediction method is not changed to indicate the uni-prediction method and the prediction block of the current block is generated by performing the motion compensation based on both the L0 motion information and the L1 motion information.
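The size-dependent restriction in the final two wherein clauses can be illustrated with a short sketch. This is not the patented implementation, only a hypothetical model of the described behavior: the threshold value, the use of the block's minimum dimension as "size," and the function and variable names are all assumptions for illustration.

```python
# Hypothetical sketch of claim 1's restriction: when a block coded with
# bi-prediction is smaller than a pre-determined size, the prediction
# direction is changed to uni-prediction and motion compensation uses
# only the L0 motion information.

PREDETERMINED_SIZE = 8  # assumption: threshold in luma samples

def restrict_prediction(direction, block_w, block_h, l0_mv, l1_mv):
    """Return the (possibly changed) direction and the motion info
    actually used for motion compensation."""
    if direction == "bi" and min(block_w, block_h) < PREDETERMINED_SIZE:
        # Small block: bi-prediction is changed to uni-prediction,
        # keeping only the L0 motion information.
        return "uni", [l0_mv]
    if direction == "bi":
        # Block at or above the threshold: both L0 and L1 are used.
        return "bi", [l0_mv, l1_mv]
    return "uni", [l0_mv]
```

Under these assumptions, a 4x8 bi-predicted block would be compensated with L0 motion only, while a 16x16 bi-predicted block keeps both reference lists.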