CN-121861654-B - Skeleton and meat distribution visual identification method for beef segmentation

CN 121861654 B

Abstract

The invention relates to the technical field of image analysis and discloses a skeleton and meat distribution visual identification method for beef segmentation. The method comprises: capturing spectral image data of a beef part to be segmented to obtain source image data; performing local binary pattern coding on the source image data to obtain a surface texture feature response map; performing multi-scale wavelet decomposition on the source image data and gradient amplitude analysis on the decomposed images to obtain a gradient energy tensor; performing adaptive threshold segmentation of the meat feature response area and the bone feature response area to obtain an initial distribution marker map; performing perspective projection transformation on the acquisition view angle to obtain a bone space occupancy prior mask; performing spatial position consistency checking on the bone feature response area to obtain a spatial distribution feature map; and performing feature vector cascade concatenation on the surface texture feature response map and the spatial distribution feature map to obtain a part distribution marker map.

Inventors

  • LIU TIEGANG
  • LI ZHONGSHAN
  • DAI DONGWEI

Assignees

  • 陕西伊明食品股份有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-03-18

Claims (10)

  1. A method for visual identification of bone and meat distribution for beef segmentation, the method comprising: Step 1, capturing spectral image data of a beef part to be segmented to obtain source image data of the beef part to be segmented; Step 2, performing local binary pattern coding on the source image data to obtain a surface texture feature response map of the beef part to be segmented; Step 3, performing multi-scale wavelet decomposition on the source image data, and performing gradient amplitude analysis on the decomposed images to obtain a gradient energy tensor of the beef part to be segmented; Step 4, performing adaptive threshold segmentation on the meat feature response area and the bone feature response area of the beef part to be segmented based on a fused gradient energy field constructed from the gradient energy tensor, to obtain an initial distribution marker map of the beef part to be segmented; Step 5, performing perspective projection transformation on the acquisition view angle of the source image data based on a preset standard bone three-dimensional frame to obtain a bone space occupancy prior mask of the beef part to be segmented; Step 6, based on the bone space occupancy prior mask, performing spatial position consistency checking on the bone feature response area in the initial distribution marker map to obtain a spatial distribution feature map of the beef part to be segmented; and Step 7, performing feature vector cascade concatenation on the surface texture feature response map and the spatial distribution feature map, and performing region aggregation on the concatenated multi-dimensional feature tensor to obtain a part distribution marker map of the beef part to be segmented.
  2. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein capturing spectral image data of the beef part to be segmented to obtain source image data of the beef part to be segmented comprises: projecting a structured light field onto the surface of the beef part to be segmented to obtain a multi-band structured light illumination sequence of the beef part to be segmented; based on the multi-band structured light illumination sequence, performing spectral separation on the echo light beams of the beef part to be segmented to obtain single-wavelength light beams of the beef part to be segmented; performing photoelectric conversion on the single-wavelength light beams to obtain analog electrical signals of the beef part to be segmented; performing analog-to-digital conversion on the analog electrical signals to obtain digital image frames of the beef part to be segmented; and performing spatial registration on the digital image frames according to the acquisition time sequence and angle information to obtain the source image data of the beef part to be segmented.
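The capture chain in claim 2 ends with analog-to-digital conversion and registration of the frames by time and angle. A toy numpy sketch of just that tail is shown below; the uniform quantiser, the `bits`/`vmax` parameters, and sorting-by-angle standing in for true spatial registration are all assumptions for illustration, not the claimed hardware pipeline:

```python
import numpy as np

def digitize_and_stack(analog_frames, angles, bits=8, vmax=1.0):
    """Quantise analog frame signals and order them by capture angle.

    Toy sketch of the claim-2 tail: A/D conversion is modelled as a
    uniform quantiser, and "spatial registration" is reduced to sorting
    the frames by acquisition angle and stacking them; real registration
    would also warp each frame into a common coordinate system.
    """
    levels = 2 ** bits - 1
    digital = [np.clip(np.round(f / vmax * levels), 0, levels).astype(np.uint16)
               for f in analog_frames]
    order = np.argsort(angles)          # acquisition-angle ordering
    return np.stack([digital[i] for i in order])
```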
  3. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing local binary pattern encoding on the source image data to obtain the surface texture feature response map of the beef part to be segmented comprises: traversing the source image data pixel by pixel based on a preset neighborhood sampling topology to obtain a central pixel of the beef part to be segmented, and obtaining the gray values within the neighborhood of the central pixel to obtain a neighborhood gray value sequence of the beef part to be segmented; comparing the neighborhood gray value sequence element by element with the gray value of the central pixel to obtain a binary coding sequence of the beef part to be segmented; performing binary weight assignment on the binary coding sequence to obtain a local binary pattern coding value of the beef part to be segmented; arranging the local binary pattern coding values by pixel to construct a local binary pattern coding map of the beef part to be segmented; and performing multi-channel fusion on the local binary pattern coding map to obtain the surface texture feature response map of the beef part to be segmented.
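The per-pixel encoding of claim 3 can be sketched with numpy. The clockwise 8-neighbour sampling topology and the bit weights 1, 2, 4, ..., 128 used below are one conventional choice; the claim only fixes the compare-and-weight scheme, not these particulars:

```python
import numpy as np

def lbp_map(img: np.ndarray) -> np.ndarray:
    """8-neighbour local binary pattern codes for a grayscale image.

    Each neighbour is compared against the centre pixel; the resulting
    bits are packed with binary weights into one code per pixel.
    """
    h, w = img.shape
    padded = np.pad(img.astype(np.int32), 1, mode="edge")
    center = img.astype(np.int32)
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        codes |= (neigh >= center).astype(np.uint8) << np.uint8(bit)
    return codes
```

A flat region yields code 255 (all neighbours equal the centre), while an isolated bright pixel yields 0, which is what makes the codes useful as a texture response.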
  4. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing multi-scale wavelet decomposition on the source image data and performing gradient amplitude analysis on the decomposed images to obtain the gradient energy tensor of the beef part to be segmented comprises: performing wavelet decomposition filtering on the source image data to obtain an original wavelet subband image sequence of the beef part to be segmented; based on the spatial directions of the beef part to be segmented, performing directional separation on the original wavelet subband image sequence to obtain a horizontal high-frequency subband image, a vertical high-frequency subband image, and a diagonal high-frequency subband image of the beef part to be segmented; performing directional gradient analysis within pixel neighborhoods on the horizontal, vertical, and diagonal high-frequency subband images to obtain a high-frequency gradient response map of the beef part to be segmented; accumulating the energy of the high-frequency gradient response map to obtain a single-scale gradient energy map of the beef part to be segmented; and stacking the single-scale gradient energy maps to construct the gradient energy tensor of the beef part to be segmented.
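A minimal stand-in for the claim-4 decomposition, using a single-level 2-D Haar transform in plain numpy instead of a full multi-scale wavelet library (the Haar basis and the square-and-sum energy rule are assumptions; the claim does not name a wavelet). The three high-frequency subbands are squared and summed into one single-scale gradient energy map; repeating this on the low-pass band would build up the claimed tensor scale by scale:

```python
import numpy as np

def haar_energy_map(img: np.ndarray) -> np.ndarray:
    """One level of a 2-D Haar decomposition, then a gradient-energy map.

    The horizontal, vertical, and diagonal detail subbands are squared
    and summed, giving a single-scale energy map at half resolution.
    """
    f = img.astype(np.float64)
    h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2
    f = f[:h, :w]                      # trim to even dimensions
    a = f[0::2, 0::2]; b = f[0::2, 1::2]
    c = f[1::2, 0::2]; d = f[1::2, 1::2]
    lh = (a - b + c - d) / 4.0         # horizontal detail
    hl = (a + b - c - d) / 4.0         # vertical detail
    hh = (a - b - c + d) / 4.0         # diagonal detail
    return lh ** 2 + hl ** 2 + hh ** 2
```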
  5. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing adaptive threshold segmentation on the meat feature response area and the bone feature response area of the beef part to be segmented, based on the fused gradient energy field constructed from the gradient energy tensor, to obtain the initial distribution marker map of the beef part to be segmented comprises: performing spatial coordinate registration on the single-scale gradient energy maps in the gradient energy tensor, and performing scale normalization on the single-scale gradient energy maps to obtain a multi-scale gradient energy map sequence of the beef part to be segmented; extracting the gradient energy response amplitude at each pixel coordinate position in the multi-scale gradient energy map sequence, and accumulating and fusing the gradient energy response amplitudes to construct the fused gradient energy field of the beef part to be segmented; performing distribution histogram statistics on the fused gradient energy field to obtain trough region positions and inflection point mutation positions of the beef part to be segmented; defining an energy amplitude segmentation boundary according to the trough region positions and the inflection point mutation positions to obtain an adaptive segmentation threshold of the beef part to be segmented; based on the adaptive segmentation threshold, judging the category attribute of the fused gradient energy field, assigning pixel points exceeding the adaptive segmentation threshold as bone feature response points of the beef part to be segmented, and assigning pixel points not exceeding the adaptive segmentation threshold as meat feature response points of the beef part to be segmented; and performing spatial attribution filling on the bone feature response points and the meat feature response points to obtain the initial distribution marker map of the beef part to be segmented.
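The valley-based adaptive threshold of claim 5 can be illustrated as follows. The bin count, the 3-tap smoothing, and the lowest-bin-between-the-outermost-peaks rule are assumptions: the claim only requires locating trough and inflection positions of the energy histogram and cutting there:

```python
import numpy as np

def valley_threshold(energy: np.ndarray, bins: int = 64) -> float:
    """Place a segmentation threshold at the deepest interior valley
    of the fused-gradient-energy histogram.

    Pixels above the returned value would be marked as bone response
    points, the rest as meat response points.
    """
    hist, edges = np.histogram(energy.ravel(), bins=bins)
    smooth = np.convolve(hist, np.ones(3) / 3.0, mode="same")
    peaks = [i for i in range(1, bins - 1)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    if len(peaks) < 2:                      # unimodal: fall back to the mean
        return float(energy.mean())
    lo_pk, hi_pk = peaks[0], peaks[-1]
    valley = lo_pk + int(np.argmin(smooth[lo_pk:hi_pk + 1]))
    return float((edges[valley] + edges[valley + 1]) / 2.0)
```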
  6. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing perspective projection transformation on the acquisition view angle of the source image data based on the preset standard bone three-dimensional frame to obtain the bone space occupancy prior mask of the beef part to be segmented comprises: performing spatial coordinate analysis on the preset standard bone three-dimensional frame to obtain a standard bone three-dimensional point cloud frame of the beef part to be segmented; constructing a perspective projection matrix of the beef part to be segmented according to the optical center position parameter and the focal plane azimuth parameter of the source image data; based on the perspective projection matrix, performing spatial coordinate projection on the vertex coordinates in the standard bone three-dimensional point cloud frame to obtain a two-dimensional bone contour point set of the beef part to be segmented; based on the topological connection relations between the vertex coordinates, performing closed filling on adjacent contour points in the two-dimensional bone contour point set to obtain a two-dimensional continuous contour surface of the beef part to be segmented; and mapping the two-dimensional continuous contour surface onto a blank canvas of the same size as the source image data, and performing state assignment on the pixel positions of the two-dimensional continuous contour surface to obtain the bone space occupancy prior mask of the beef part to be segmented.
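The projection-and-fill step of claim 6 can be sketched with a pinhole model. Everything below is a simplification under stated assumptions: the camera sits at the origin looking down +z with focal length `f` (standing in for the claimed optical-centre and focal-plane parameters), and the claimed contour-closing step is reduced to filling the bounding box of the projected vertices rather than the true closed contour:

```python
import numpy as np

def project_prior_mask(points: np.ndarray, f: float,
                       size: tuple) -> np.ndarray:
    """Pinhole-project a standard skeleton point cloud and mark its
    projected footprint on a blank canvas of image size `size`.

    `points` is an (n, 3) array of vertex coordinates with z > 0.
    Returns a boolean occupancy mask.
    """
    h, w = size
    # u = f*x/z + cx, v = f*y/z + cy, with the principal point at centre.
    u = f * points[:, 0] / points[:, 2] + w / 2.0
    v = f * points[:, 1] / points[:, 2] + h / 2.0
    mask = np.zeros((h, w), dtype=bool)
    u0, u1 = int(np.floor(u.min())), int(np.ceil(u.max()))
    v0, v1 = int(np.floor(v.min())), int(np.ceil(v.max()))
    mask[max(v0, 0):min(v1 + 1, h), max(u0, 0):min(u1 + 1, w)] = True
    return mask
```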
  7. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing spatial position consistency checking on the bone feature response area in the initial distribution marker map based on the bone space occupancy prior mask to obtain the spatial distribution feature map of the beef part to be segmented comprises: performing region separation analysis on the connected regions of the bone feature response points in the initial distribution marker map to obtain an initial bone response connected domain set of the beef part to be segmented; performing boundary contour tracking on the bone space occupancy prior mask to obtain a bone space boundary of the beef part to be segmented; based on the bone space boundary, performing spatial overlap comparison on the initial bone response connected domain set to obtain a spatial consistency determination result of the beef part to be segmented; based on the spatial consistency determination result, performing redundancy filtering on the connected domains outside the bone space boundary in the initial distribution marker map to obtain a bone response region of the beef part to be segmented; and performing region fusion on the bone response region and the meat feature response region in the initial distribution marker map to obtain the spatial distribution feature map of the beef part to be segmented.
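The consistency filter of claim 7 amounts to connected-component labelling followed by an overlap test against the prior mask. In the sketch below, 4-connectivity and the `min_overlap` pixel-fraction rule are assumed forms of the claimed spatial overlap comparison:

```python
import numpy as np
from collections import deque

def filter_bone_regions(marks: np.ndarray, prior: np.ndarray,
                        min_overlap: float = 0.5) -> np.ndarray:
    """Drop bone-marked connected regions that disagree with the prior.

    4-connected components of the boolean `marks` array are kept only
    when at least `min_overlap` of their pixels fall inside the boolean
    bone-occupancy `prior` mask.
    """
    h, w = marks.shape
    out = np.zeros_like(marks, dtype=bool)
    seen = np.zeros_like(marks, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if not marks[sy, sx] or seen[sy, sx]:
                continue
            comp, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:                      # BFS flood fill of one component
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and \
                            marks[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            inside = sum(prior[y, x] for y, x in comp)
            if inside / len(comp) >= min_overlap:
                for y, x in comp:
                    out[y, x] = True
    return out
```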
  8. The method for visual identification of bone and meat distribution for beef segmentation according to claim 7, wherein the spatial overlap confidence in the spatial consistency determination result is calculated as follows: $C = \alpha \cdot \frac{S_o}{S_t} + \beta \cdot e^{-d/\sigma} + \gamma \cdot \frac{1}{N} \sum_{i \in \Omega} \frac{E_i}{T}$; in the formula, $C$ is the spatial overlap confidence, $S_o$ is the overlap area of the overlap region between a connected domain in the initial bone response connected domain set and the bone space boundary, $S_t$ is the total area of the interior region of the bone space boundary, $d$ is the shortest Euclidean distance from the centroid of the connected domain to the bone space boundary, $\sigma$ is a preset spatial scale normalization factor, $E_i$ is the gradient energy response amplitude, in the fused gradient energy field, of pixel $i$ in the overlap region, $T$ is the adaptive segmentation threshold of the beef part to be segmented, $N$ is the total number of pixels in the overlap region, $\alpha$ is a preset spatial overlap weight coefficient, $\beta$ is a preset distance decay weight coefficient, $\gamma$ is a preset energy consistency weight coefficient, and $\Omega$ is the overlap region between the connected domain in the initial bone response connected domain set and the bone space boundary.
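Claim 8 lists the ingredients of the confidence score: an area-ratio term, a centroid-distance term, and an energy term over the overlap region, each with its own weight. One plausible weighted combination built only from those listed ingredients is sketched below; the combination form and the default weights are assumptions, since the printed formula is not legible in this copy:

```python
import numpy as np

def overlap_confidence(overlap_area, prior_area, centroid_dist, energies,
                       alpha=0.5, beta=0.3, gamma=0.2, sigma=50.0,
                       threshold=1.0):
    """A plausible spatial overlap confidence for one connected domain.

    area ratio        : overlap_area / prior_area
    distance decay    : exp(-centroid_dist / sigma)
    energy consistency: mean over the overlap pixels of E_i / threshold
    weighted by alpha, beta, gamma respectively.
    """
    area_term = overlap_area / prior_area
    dist_term = np.exp(-centroid_dist / sigma)
    energy_term = float(np.mean(energies)) / threshold
    return alpha * area_term + beta * dist_term + gamma * energy_term
```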
  9. The method for visual identification of bone and meat distribution for beef segmentation according to claim 1, wherein performing feature vector cascade concatenation on the surface texture feature response map and the spatial distribution feature map, and performing region aggregation on the concatenated multi-dimensional feature tensor to obtain the part distribution marker map of the beef part to be segmented comprises: performing rigid registration on the surface texture feature response map and the spatial distribution feature map to obtain a texture map and a spatial map of the beef part to be segmented; based on the local binary pattern coding values of the texture map and the bone and meat attribute identifiers in the spatial map, performing cascade concatenation on the texture map and the spatial map to obtain joint feature descriptors of the beef part to be segmented; mapping the joint feature descriptors onto the original image coordinate matrix of the source image data to construct an initial multi-dimensional feature tensor of the beef part to be segmented; performing neighborhood connectivity labeling on the initial multi-dimensional feature tensor to obtain an initial aggregation connected domain set of the beef part to be segmented; performing regional feature homogeneity verification on the initial aggregation connected domain set to obtain a pure aggregation connected domain set of the beef part to be segmented; and computing the cluster centers of the pure aggregation connected domain set in feature space, and assigning a bone class label, a muscle class label, or a fat class label to each domain in the pure aggregation connected domain set according to the membership distances between its cluster center and preset bone class, muscle class, and fat class prototypes, to obtain the part distribution marker map of the beef part to be segmented.
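The final labelling step of claim 9 is a nearest-prototype assignment in feature space. The two-dimensional feature space and the three prototype vectors below are made-up placeholders; the claim only fixes the label set (bone / muscle / fat) and the membership-distance rule:

```python
import numpy as np

def label_regions(centers: np.ndarray) -> list:
    """Assign each region's feature-space cluster centre to the nearest
    class prototype (Euclidean membership distance)."""
    prototypes = {                               # placeholder prototypes
        "bone":   np.array([0.9, 0.1]),
        "muscle": np.array([0.3, 0.7]),
        "fat":    np.array([0.1, 0.2]),
    }
    labels = []
    for c in centers:
        dists = {k: np.linalg.norm(c - v) for k, v in prototypes.items()}
        labels.append(min(dists, key=dists.get))
    return labels
```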
  10. The method for visual identification of bone and meat distribution for beef segmentation according to claim 9, wherein performing regional feature homogeneity verification on the initial aggregation connected domain set to obtain the pure aggregation connected domain set of the beef part to be segmented comprises: performing mean vector estimation on the joint feature descriptors in the initial aggregation connected domain set to obtain feature distribution statistics of the beef part to be segmented; based on the feature distribution statistics, performing Mahalanobis distance measurement on the initial aggregation connected domain set to obtain feature deviation degrees of the beef part to be segmented; comparing the feature deviation degrees with a preset deviation tolerance threshold, and, based on the comparison result, assigning pixel points whose feature deviation degree exceeds the deviation tolerance threshold as outlier pixel points of the beef part to be segmented; stripping out the spatial positions of the outlier pixel points, and performing spatially weighted interpolation on the blank pixel positions formed after filtering to obtain an optimized pixel set of the beef part to be segmented; and performing topological reconstruction on the connected domains of the optimized pixel set to obtain the pure aggregation connected domain set of the beef part to be segmented.
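The Mahalanobis homogeneity check of claim 10 can be sketched directly; `tol` stands in for the claimed preset deviation tolerance threshold, and the small covariance regularisation is an implementation assumption for near-degenerate regions:

```python
import numpy as np

def mahalanobis_outliers(features: np.ndarray, tol: float = 3.0) -> np.ndarray:
    """Flag pixels whose joint feature descriptor deviates from the
    region mean by more than `tol` in Mahalanobis distance.

    `features` is an (n, k) array of descriptors; returns a boolean
    outlier mask of length n.
    """
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    # Regularise so the inverse exists even for near-degenerate regions.
    cov += 1e-9 * np.eye(cov.shape[0])
    inv = np.linalg.inv(cov)
    diff = features - mean
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv, diff))
    return d > tol
```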

Description

Skeleton and meat distribution visual identification method for beef segmentation

Technical Field

The invention relates to the technical field of image analysis, and in particular to a skeleton and meat distribution visual identification method for beef segmentation.

Background

In the field of visual identification of bone and meat distribution for beef segmentation, the prior art captures spectral image data of a beef part in a single mode that can only acquire basic image information; accurate source image data cannot be obtained through multi-dimensional light field projection and signal conversion, so the basic data used for subsequent feature extraction lacks integrity and accuracy, and it is difficult to truly reflect the actual bone and meat distribution of the beef part. When analysing the image data, the prior art relies only on a single texture feature or gradient feature and does not fuse multiple features; the mining of surface texture and internal gradient energy features of the beef is insufficient, a comprehensive feature response system cannot be formed, and the discrimination between bone and meat features is low. When segmenting bone and meat regions, the prior art adopts a fixed-threshold segmentation mode and cannot adaptively define the segmentation boundary according to the actual gradient energy distribution of the beef part, so misjudgment between the bone feature response area and the meat feature response area easily occurs and the region segmentation accuracy of the initial distribution marker map is insufficient. The prior art also lacks a prior verification step for bone spatial positions: spatial position consistency checking of the identified bone feature regions is not performed against a standard bone three-dimensional frame, so redundant regions exceeding the actual space occupied by bones easily appear in the initial identification results. Furthermore, after feature fusion, sufficient region aggregation and homogeneity verification are not performed, so the final distribution marker map cannot accurately distinguish bone, muscle, and fat classes and can hardly meet the practical requirements of accurate beef segmentation.

Disclosure of Invention

The invention provides a skeleton and meat distribution visual identification method for beef segmentation, which aims to solve the problems described in the background. To achieve the above object, the invention provides a method for visual identification of bone and meat distribution for beef segmentation, comprising: Step 1, capturing spectral image data of a beef part to be segmented to obtain source image data of the beef part to be segmented; Step 2, performing local binary pattern coding on the source image data to obtain a surface texture feature response map of the beef part to be segmented; Step 3, performing multi-scale wavelet decomposition on the source image data, and performing gradient amplitude analysis on the decomposed images to obtain a gradient energy tensor of the beef part to be segmented; Step 4, performing adaptive threshold segmentation on the meat feature response area and the bone feature response area of the beef part to be segmented based on a fused gradient energy field constructed from the gradient energy tensor, to obtain an initial distribution marker map of the beef part to be segmented; Step 5, performing perspective projection transformation on the acquisition view angle of the source image data based on a preset standard bone three-dimensional frame to obtain a bone space occupancy prior mask of the beef part to be segmented; Step 6, based on the bone space occupancy prior mask, performing spatial position consistency checking on the bone feature response area in the initial distribution marker map to obtain a spatial distribution feature map of the beef part to be segmented; and Step 7, performing feature vector cascade concatenation on the surface texture feature response map and the spatial distribution feature map, and performing region aggregation on the concatenated multi-dimensional feature tensor to obtain a part distribution marker map of the beef part to be segmented. In a preferred embodiment, capturing the spectral image data of the beef part to be segmented to obtain the source image data of the beef part to be segmented includes: projecting a structured light field onto the surface of the beef part to be segmented to obtain a multi-band structured light illumination sequence of the beef part to be segmented; based on the multi-band structured light illumination sequence, performing spectral separation on the echo light beams of the beef pa