US 9,807,971 B1
Vision system with automatic teat detection
Mark A. Foresman, Houston, TX (US); and Bradley J. Prevost, Pearland, TX (US)
Assigned to Technologies Holdings Corp., Houston, TX (US)
Filed by Technologies Holdings Corp., Houston, TX (US)
Filed on Aug. 17, 2016, as Appl. No. 15/239,300.
Int. Cl. G06K 9/00 (2006.01); A01J 5/007 (2006.01); A01J 5/017 (2006.01); G06T 7/00 (2017.01); G06K 9/62 (2006.01); H04N 13/02 (2006.01)
CPC A01J 5/007 (2013.01) [A01J 5/017 (2013.01); G06K 9/6202 (2013.01); G06K 9/623 (2013.01); G06T 7/001 (2013.01); G06T 7/0044 (2013.01); H04N 13/0203 (2013.01); H04N 13/0271 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/30204 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A teat detection method comprising:
obtaining, by a processor, a three-dimensional (3D) image of a rearview of a dairy livestock in a stall, wherein:
the dairy livestock is oriented in the 3D image with respect to:
an x-axis corresponding with a horizontal dimension of the 3D image,
a y-axis corresponding with a vertical dimension of the 3D image, and
a z-axis corresponding with a depth dimension into the 3D image; and
each pixel of the 3D image is associated with a depth value along the z-axis;
identifying, by the processor, one or more regions within the 3D image comprising depth values greater than a depth value threshold;
applying, by the processor, a thigh gap detection rule set to the one or more regions to identify a thigh gap region among the one or more regions, wherein the thigh gap region comprises an area between hind legs of the dairy livestock;
demarcating, by the processor, an access region within the thigh gap region, wherein the access region is defined by:
a first vertical edge,
a second vertical edge,
a first upper edge spanning between the first vertical edge and the second vertical edge, and
a first lower edge spanning between the first vertical edge and the second vertical edge;
demarcating, by the processor, a teat detection region, wherein the teat detection region is defined by:
a third vertical edge extending vertically from the first vertical edge of the access region,
a fourth vertical edge extending vertically from the second vertical edge of the access region,
a second upper edge spanning between the third vertical edge and the fourth vertical edge, and
a second lower edge spanning between the third vertical edge and the fourth vertical edge;
partitioning, by the processor, the 3D image within the teat detection region along the z-axis to generate a plurality of image depth planes;
examining, by the processor, each of the plurality of image depth planes, wherein examining each of the image depth planes comprises:
identifying one or more teat candidates within the image depth plane; and
applying a teat detection rule set to the one or more teat candidates to identify one or more teats; and
determining, by the processor, position information for the one or more teats.
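The claimed method walks through a pipeline of image-processing steps: depth thresholding, thigh gap identification, and the demarcation of an access region and a teat detection region. The following is a minimal sketch of those region-finding steps only, assuming the 3D image arrives as a NumPy array of per-pixel depth values and using SciPy's connected-component labeling. The "widest deep region in the lower half of the frame" test and the pixel heights are illustrative stand-ins; the claim does not spell out the thigh gap detection rule set, which is defined in the specification.

```python
import numpy as np
from scipy import ndimage

def demarcate_regions(depth_image, depth_threshold,
                      access_height=40, teat_region_height=120):
    """Illustrative sketch: threshold the rear-view depth image, pick a
    thigh gap region, then stack an access region and a teat detection
    region above it. Region heights are illustrative pixel values, not
    values from the patent. Returns (access_box, teat_box) or None, with
    each box as (x_left, x_right, y_top, y_bottom) in image coordinates
    (y increases downward)."""
    h, w = depth_image.shape

    # Regions whose depth exceeds the threshold: the open space between
    # the hind legs reads as "deeper" than the legs themselves.
    deep_mask = depth_image > depth_threshold
    labels, n_regions = ndimage.label(deep_mask)

    # Stand-in thigh gap detection rule set: take the widest deep region
    # whose pixels sit in the lower half of the frame.
    thigh_gap, best_width = None, 0
    for rid in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == rid)
        width = xs.max() - xs.min() + 1
        if ys.mean() > h / 2 and width > best_width:
            thigh_gap, best_width = (xs.min(), xs.max(), ys.min(), ys.max()), width
    if thigh_gap is None:
        return None

    x_left, x_right, y_top, y_bottom = thigh_gap

    # Access region: a rectangle inside the thigh gap, bounded by first and
    # second vertical edges and upper and lower edges.
    access_box = (x_left, x_right, y_top, min(y_top + access_height, y_bottom))

    # Teat detection region: its third and fourth vertical edges extend
    # vertically from the access region's vertical edges, so it shares the
    # same x bounds and sits directly above the access region's upper edge.
    teat_box = (x_left, x_right, max(y_top - teat_region_height, 0), y_top)

    return access_box, teat_box
```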
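The remaining steps partition the teat detection region along the z-axis into image depth planes, scan each plane for teat candidates, and report position information. The sketch below continues under the same assumptions; the `teat_box` returned by the previous sketch feeds it directly. The size and aspect-ratio test is only a stand-in for the claim's teat detection rule set, and the depth bounds and blob-size limits are hypothetical parameters.

```python
import numpy as np
from scipy import ndimage

def detect_teats(depth_image, teat_box, z_near, z_far, n_planes,
                 min_size=5, max_size=40):
    """Illustrative sketch: partition the teat detection region into depth
    planes along the z-axis, look for teat-sized blobs in each plane, and
    return (x, y, z) position information for accepted candidates."""
    x_left, x_right, y_top, y_bottom = teat_box
    roi = depth_image[y_top:y_bottom, x_left:x_right]

    # Partition the region along the z-axis into a plurality of depth planes.
    edges = np.linspace(z_near, z_far, n_planes + 1)

    teats = []
    for z_lo, z_hi in zip(edges[:-1], edges[1:]):
        plane = (roi >= z_lo) & (roi < z_hi)        # one image depth plane
        labels, n_blobs = ndimage.label(plane)
        for bid in range(1, n_blobs + 1):
            ys, xs = np.nonzero(labels == bid)
            w = xs.max() - xs.min() + 1
            h = ys.max() - ys.min() + 1
            # Stand-in teat detection rule set: accept roughly teat-sized,
            # taller-than-wide blobs. The actual rule set is defined in the
            # specification, not in the claim.
            if min_size <= w <= max_size and min_size <= h <= max_size and h >= w:
                # Position information: x and y from the blob centroid in
                # full-image coordinates, z from the median depth of its pixels.
                teats.append((x_left + xs.mean(), y_top + ys.mean(),
                              float(np.median(roi[ys, xs]))))
    return teats
```

One design note on this sketch: examining each depth plane separately keeps a near teat from being merged with the udder or a leg that lies at a different depth, which is the practical benefit of partitioning along the z-axis rather than thresholding the region once.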