US 9,813,703 B2
Compressed dynamic image encoding device, compressed dynamic image decoding device, compressed dynamic image encoding method and compressed dynamic image decoding method
Seiji Mochizuki, Kanagawa (JP); Junichi Kimura, Koganei (JP); and Masakazu Ehama, Sagamihara (JP)
Assigned to Renesas Electronics Corporation, Tokyo (JP)
Filed by RENESAS ELECTRONICS CORPORATION, Kawasaki-shi, Kanagawa (JP)
Filed on Sep. 5, 2014, as Appl. No. 14/478,661.
Application 14/478,661 is a division of application No. 13/203,727, granted as Pat. No. 8,958,479, which was previously filed as application No. PCT/JP2009/000969 on Mar. 4, 2009.
Prior Publication US 2014/0376636 A1, Dec. 25, 2014
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 19/56 (2014.01); H04N 19/169 (2014.01); H04N 19/136 (2014.01); H04N 19/583 (2014.01); H04N 19/563 (2014.01); H04N 19/51 (2014.01); H04N 19/61 (2014.01); H04N 19/85 (2014.01)
CPC H04N 19/00733 (2013.01) [H04N 19/136 (2014.11); H04N 19/51 (2014.11); H04N 19/563 (2014.11); H04N 19/61 (2014.11); H04N 19/85 (2014.11)] 14 Claims
OG exemplary drawing
 
1. A compressed dynamic image encoding device
operable to generate a motion vector by searching a reference image read from a frame memory for an image area most similar to an image area of a video input signal to be encoded;
operable to generate a motion-compensated reference image as a predicted image, from the motion vector and the reference image read from the frame memory;
operable to generate a prediction residual, by subtracting the motion-compensated reference image from the video input signal to be encoded;
operable to generate the reference image to be stored in the frame memory, by adding the motion-compensated reference image and the result of orthogonal transform, quantization, inverse quantization, and inverse orthogonal transform processing performed on the prediction residual; and
operable to generate an encoded video output signal by orthogonal transform, quantization, and variable-length encoding processing performed on the prediction residual,
wherein the reference image comprises an on-screen reference image located inside a video display screen, and an off-screen reference image located outside the video display screen,
wherein the off-screen reference image is generated based on the positional relationship of a plurality of similar reference images within the on-screen reference image,
wherein one reference image of the similar reference images is located in close vicinity to a boundary line between the on-screen reference image and the off-screen reference image,
wherein another reference image of the similar reference images is located inside the on-screen reference image, spaced farther from the boundary line than the one reference image,
wherein the off-screen reference image is located across the boundary line, in the closest vicinity to the one reference image,
wherein yet another reference image of the on-screen reference image is located in close vicinity to the another reference image, in a positional relationship analogous to that between the one reference image and the off-screen reference image, and
wherein image information of the off-screen reference image is generated on the basis of image information of the yet another reference image.
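For context, the encoding loop recited in the "operable to" clauses of the claim can be sketched in a few dozen lines. The Python below is a minimal illustration, not the patented implementation: the 16x16 block size, the +/-8 full-search range, the SAD cost, and the uniform scalar quantizer step `q` are all assumptions made for the example, and the variable-length encoder is omitted (the function simply returns the quantized coefficients that stage would consume).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are frequencies, so C @ C.T == I
    and C.T undoes the transform (the claim's 'orthogonal transform')."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def motion_search(cur, ref, by, bx, bs=16, rng=8):
    """Full search over +/-rng pixels: return the motion vector (dy, dx) of
    the reference block most similar (minimum SAD) to the current block."""
    h, w = ref.shape
    best, mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > h or x + bs > w:
                continue
            sad = np.abs(cur[by:by + bs, bx:bx + bs] - ref[y:y + bs, x:x + bs]).sum()
            if best is None or sad < best:
                best, mv = sad, (dy, dx)
    return mv

def encode_block(cur, ref, recon, by, bx, bs=16, q=16):
    """One macroblock through the loop of claim 1: motion search, motion
    compensation (predicted image), prediction residual, transform and
    quantization, then inverse quantization and inverse transform added back
    onto the prediction to rebuild the reference image for frame memory."""
    c = dct_matrix(bs)
    dy, dx = motion_search(cur, ref, by, bx, bs)
    pred = ref[by + dy:by + dy + bs, bx + dx:bx + dx + bs]
    resid = cur[by:by + bs, bx:bx + bs] - pred
    coef_q = np.round(c @ resid @ c.T / q)            # input to variable-length coding
    resid_hat = c.T @ (coef_q * q) @ c                # decoder-side residual
    recon[by:by + bs, bx:bx + bs] = pred + resid_hat  # new reference image
    return (dy, dx), coef_q

# Toy usage: a shifted copy of a random frame stands in for real video.
gen = np.random.default_rng(0)
ref = gen.integers(0, 256, (64, 64)).astype(np.float64)
cur = np.roll(ref, (2, 3), axis=(0, 1))   # content moves down 2, right 3
recon = np.zeros_like(ref)
mv, coef = encode_block(cur, ref, recon, 16, 16)
print(mv)                                 # -> (-2, -3): the block's source in ref
```

Real encoders replace the exhaustive search with fast search strategies, but the dataflow is the loop the claim recites: search, compensate, subtract, transform and quantize, then invert both steps so the encoder stores the same reference image the decoder will reconstruct.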
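The "wherein" clauses describe how reference pixels outside the display screen are synthesized. A conventional codec pads by simply repeating the outermost pixels; the claimed approach instead exploits self-similarity: find a block deeper inside the screen that resembles the boundary block, then copy the texture that lies beside it, at the position analogous to the off-screen block. The sketch below is an assumption-laden illustration, not the patented method: it extends only the right screen edge, restricts the search to the same block row, and assumes frame dimensions are multiples of the block size; the function name and parameters are invented for the example.

```python
import numpy as np

def extend_right_edge(frame, bs=16, search_cols=64):
    """Fill a bs-pixel strip beyond the right screen boundary.
    For each boundary block A (the claimed 'one reference image'), find the
    most similar block B farther inside the screen ('another reference
    image'); the off-screen block, which sits immediately across the
    boundary from A, is copied from the block immediately to the right of B
    ('yet another reference image'), i.e. from the analogous position."""
    h, w = frame.shape
    assert w >= 2 * bs, "frame must be at least two blocks wide"
    ext = np.zeros((h, w + bs), dtype=frame.dtype)
    ext[:, :w] = frame
    for by in range(0, h - bs + 1, bs):
        a = frame[by:by + bs, w - bs:w].astype(np.int64)  # boundary block A
        best, bx = None, None
        # Candidates leave room for a full block to B's right, so the
        # copied block is guaranteed to lie on-screen.
        for x in range(max(0, w - bs - search_cols), w - 2 * bs + 1):
            sad = np.abs(a - frame[by:by + bs, x:x + bs]).sum()
            if best is None or sad < best:
                best, bx = sad, x
        # Copy the block that sits relative to B as the off-screen
        # block sits relative to A (one block width across the boundary).
        ext[by:by + bs, w:w + bs] = frame[by:by + bs, bx + bs:bx + 2 * bs]
    return ext
```

On periodic or textured content this reproduces the structure that pixel repetition would flatten, which is why motion vectors pointing past the screen edge can still find a usable predicted image in the extended reference.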