CN-120976917-B - Food material multidimensional intelligent analysis system based on artificial intelligence

Abstract

The invention relates to the technical field of image enhancement and artificial-intelligence recognition, and in particular to an artificial-intelligence-based multidimensional intelligent analysis system for food materials. The system comprises six modules: a data acquisition module, an image processing module, a characterization generation module, a feature extraction module, a feature fusion module, and an analysis output module. By combining image enhancement with artificial-intelligence recognition, the system comprehensively analyzes multidimensional attributes of food samples, including food type, freshness index, and nutritional composition. It reduces analysis cost, increases analysis efficiency, achieves genuine mutual reinforcement among the analysis tasks, markedly improves the accuracy and generalization capability of the overall system, and provides a sound data basis and analysis recommendations for enterprises evaluating and controlling food.

Inventors

  • PU YA
  • CHEN YUE
  • MA CHUANG

Assignees

  • 江苏华舜智能科技有限公司 (Jiangsu Huashun Intelligent Technology Co., Ltd.)

Dates

Publication Date
2026-05-08
Application Date
2025-07-31

Claims (6)

  1. A food material multidimensional intelligent analysis system based on artificial intelligence, characterized by comprising: acquiring a visual image containing spatial and color information of a food material sample to be analyzed and a hyperspectral data cube containing spectral information of the food material sample over a plurality of spectral bands, and performing preprocessing operations, comprising image enhancement and spatial alignment, on the visual image; converting the visual image into high-dimensional feature vectors, stacking the feature vectors of all pixel positions to construct an observation matrix, centering the observation matrix, and computing the product of the centered observation matrix and its transpose, normalized to obtain a covariance matrix, wherein for each pixel position after spatial alignment, the values of the R, G, and B color channels of the visual image and the values of all spectral band channels of the hyperspectral data cube are spliced into a single feature vector, the feature vectors of all pixel positions are stacked to form the observation matrix, the mean of each column of the observation matrix is computed and subtracted from that column to center the data, and the product of the centered observation matrix and its transpose is normalized to obtain the covariance matrix; inputting the two-dimensional representation image into a multitask intelligent model, performing convolution operations with learnable filters in the network, and retaining the most salient features through a shared feature extraction layer in the model to generate a shared feature representation; processing the shared feature representation through an attention bottleneck fusion mechanism in the model to generate a collaborative feature representation, wherein the fusion mechanism performs cross-attention interaction with the shared feature representation through learnable bottleneck tokens; and predicting and outputting attributes of the food material sample based on the collaborative feature representation by feeding the collaborative feature representation simultaneously into a plurality of parallel, independent prediction heads, wherein one prediction head performs a classification task to output the food material type, a second prediction head performs a regression task to output the freshness index, and a third prediction head performs a regression task to output the nutritional components, thereby realizing multi-task parallel prediction and output.
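The observation-matrix and covariance construction described in claim 1 can be sketched as follows. This is an illustrative reading of the claim, not the patent's implementation; the image size, band count, and random data are assumptions.

```python
import numpy as np

# Assumed dimensions for illustration: a 4x4 image with 3 RGB channels and
# an aligned hyperspectral cube with 8 spectral bands.
H, W, BANDS = 4, 4, 8
rng = np.random.default_rng(0)

rgb = rng.random((H, W, 3))        # visual image: R, G, B channel values
cube = rng.random((H, W, BANDS))   # spatially aligned hyperspectral data cube

# Splice the channel values of each aligned pixel into one feature vector,
# then stack the vectors of all pixel positions into the observation matrix X
# (one row per pixel, one column per data channel).
X = np.concatenate([rgb, cube], axis=-1).reshape(-1, 3 + BANDS)

# Centre the data: subtract each column's mean from that column.
Xc = X - X.mean(axis=0, keepdims=True)

# Covariance matrix: normalised product of the centred matrix and its transpose.
cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)

print(cov.shape)  # (11, 11): one row/column per data channel
```

The resulting matrix holds the pairwise covariances between every colour channel and every spectral band, which is what the characterization generation module later renders as a two-dimensional image.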
  2. The artificial-intelligence-based food material multidimensional intelligent analysis system of claim 1, wherein the acquisition of the visual images and hyperspectral data cubes comprises: capturing the visual images with a standard RGB sensor to record the texture, shape, and surface color information of the food material sample; and sequentially capturing the hyperspectral data cubes with a line-scan or snapshot hyperspectral camera over a plurality of continuous narrow bands covering the visible to near-infrared spectrum to record chemical composition information of the food material sample.
  3. The artificial-intelligence-based food material multidimensional intelligent analysis system of claim 1, wherein the image preprocessing operation comprises: applying an image enhancement algorithm to the visual image and to the hyperspectral data cube respectively; identifying common feature points in the two data sources to compute a transformation matrix; spatially aligning the visual image and the hyperspectral data cube; and performing data augmentation on the aligned data through geometric transformations.
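The alignment step in claim 3 (solving a transformation matrix from common feature points) can be illustrated with a least-squares affine fit. The patent does not specify the transformation model; an affine matrix and the synthetic matched points below are assumptions for the sketch.

```python
import numpy as np

# Four matched feature points: coordinates in the hyperspectral frame (src)
# and the corresponding coordinates in the visual-image frame (dst).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_true = np.array([[1.1, 0.1, 2.0],
                   [-0.2, 0.9, 1.0]])           # synthetic ground-truth affine map
dst = src @ A_true[:, :2].T + A_true[:, 2]      # generate matched points exactly

# Solve dst ≈ [src | 1] @ A.T for the 2x3 affine transformation matrix A
# by linear least squares.
ones = np.ones((src.shape[0], 1))
A_est, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
A_est = A_est.T  # 2x3 matrix mapping source coordinates into the target frame
```

With the transformation matrix in hand, one data source is resampled into the other's coordinate frame so that each pixel position refers to the same physical point on the sample.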
  4. The artificial-intelligence-based food material multidimensional intelligent analysis system of claim 1, wherein converting the covariance matrix into the two-dimensional representation image comprises: regarding each element value of the covariance matrix, after normalization mapping, as a pixel intensity value; mapping the element value at each position of the covariance matrix to the pixel value of the two-dimensional representation image at the corresponding coordinates; and generating a single-channel gray-scale image with the same dimensions as the covariance matrix, such that the gray-scale image is suitable as input to a convolutional neural network.
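The matrix-to-image conversion in claim 4 amounts to a min-max normalization of the covariance elements into a pixel-intensity range. The 8-bit output range below is an assumption; the claim only requires a normalized single-channel gray-scale image.

```python
import numpy as np

def covariance_to_image(cov: np.ndarray) -> np.ndarray:
    """Map each covariance element to a pixel intensity via min-max scaling,
    yielding a single-channel gray-scale image of the same dimensions."""
    lo, hi = cov.min(), cov.max()
    scaled = (cov - lo) / (hi - lo) if hi > lo else np.zeros_like(cov)
    return (scaled * 255).astype(np.uint8)

cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])       # toy 2x2 covariance matrix
img = covariance_to_image(cov)
print(img)  # largest element maps to 255, smallest to 0
```

Because the image has the same dimensions as the covariance matrix, each pixel coordinate (i, j) directly encodes the covariance between channels i and j.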
  5. The artificial-intelligence-based food material multidimensional intelligent analysis system of claim 1, wherein the extraction of the shared feature representation by the multitask intelligent model comprises: inputting the two-dimensional representation image into a shared feature extraction network formed by stacking a plurality of convolution layers, activation function layers, and pooling layers; performing convolution operations on the two-dimensional representation image with learnable filters in the convolution layers to identify correlation patterns encoded in the image; applying a nonlinear activation function after each convolution layer to introduce nonlinearity; and downsampling the feature maps through the pooling layers to reduce spatial dimensions while preserving the most salient features, wherein the final output of the shared feature extraction network is the shared feature representation.
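One conv → ReLU → max-pool stage of the shared feature extraction network in claim 5 can be sketched in plain numpy. The filter values here are fixed for illustration; in the patented system they are learnable parameters, and several such stages are stacked.

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2-D convolution (correlation) of image x with kernel k."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)  # nonlinear activation after the convolution

def max_pool2(x):
    """2x2 max pooling: downsample while keeping the most salient values."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # stand-in representation image
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])       # assumed (not learned) filter
features = max_pool2(relu(conv2d_valid(image, kernel)))
print(features.shape)  # (2, 2): spatially downsampled feature map
```

A full extractor would interleave several such stages and keep multiple filters per layer; the final pooled activations form the shared feature representation.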
  6. The artificial-intelligence-based food material multidimensional intelligent analysis system of claim 1, wherein the processing of the shared feature representation comprises: initializing a small set of fixed-size, learnable latent vectors as bottleneck tokens; using the bottleneck tokens as queries, and linear projections of the shared feature representation as keys and values, in a cross-attention operation; and computing the dot product of the queries and keys, applying a Softmax function, and weighting and summing the values to form the collaborative feature representation.
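The cross-attention bottleneck of claim 6 can be sketched as a single scaled dot-product attention step in numpy. All dimensions and the random weights are illustrative assumptions; in the patented system the tokens and projection matrices are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 16, 32   # shared feature representation: N positions, D channels
T, DK = 4, 8    # small fixed set of bottleneck tokens, query/key width

shared = rng.standard_normal((N, D))     # output of the shared extraction layer
tokens = rng.standard_normal((T, DK))    # learnable latent vectors used as queries
Wk = rng.standard_normal((D, DK))        # linear projection to keys
Wv = rng.standard_normal((D, DK))        # linear projection to values

K, V = shared @ Wk, shared @ Wv
scores = tokens @ K.T / np.sqrt(DK)      # scaled dot product of queries and keys

# Softmax over the feature positions, then a weighted sum of the values.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
collaborative = weights @ V              # collaborative feature representation
print(collaborative.shape)  # (4, 8): one fused vector per bottleneck token
```

Because the token count T is much smaller than the number of feature positions N, the tokens act as an information bottleneck: each task head downstream reads a compact fused summary rather than the full feature map.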

Description

Food material multidimensional intelligent analysis system based on artificial intelligence

Technical Field

The invention relates to the technical field of image enhancement and artificial-intelligence recognition, in particular to an artificial-intelligence-based food material multidimensional intelligent analysis system.

Background

Currently, the identification and analysis of food materials in images using computer vision and artificial-intelligence techniques has become a research hotspot in the art. The prior art generally follows a basic processing procedure: first acquiring an image of the food or dishes to be analyzed through an image acquisition device; then performing a series of preprocessing operations on the acquired original image, such as image enhancement, to improve image quality and highlight key features so as to improve the accuracy of subsequent recognition; and finally inputting the preprocessed image into a pre-trained artificial-intelligence model to recognize information such as the type, quantity, or position of the food contained in the image. Some of these technologies can simultaneously identify and locate multiple different food materials in a single image, or combine the identification result with external data in an attempt to analyze a certain dimension of the food materials, for example evaluating the cost of dishes by matching against real-time market price data.

However, the prior art still has certain drawbacks. Existing solutions usually focus on analysis in a single dimension, such as evaluating only cost or detecting only freshness, and lack a unified framework capable of comprehensive multidimensional evaluation of food materials, so that users cannot obtain complete, well-rounded food material information. Meanwhile, the prior art fails to provide an effective technical architecture for integrating information from different sensor modalities and heterogeneous data sources; the analysis dimensions remain mutually isolated and cannot form a synergistic effect. Therefore, there is a need to systematically integrate multidimensional data and construct an intelligent system that efficiently, cooperatively, and comprehensively analyzes multiple dimensions such as food material type, cost, freshness, and safety, so as to solve the current technical problems. For this reason, a food material multidimensional intelligent analysis system based on artificial intelligence is provided.

Disclosure of Invention

The invention aims to provide an artificial-intelligence-based food material multidimensional intelligent analysis system so as to solve the problems in the background art. To achieve the above purpose, the present invention provides the following technical solution. A food material multidimensional intelligent analysis system based on artificial intelligence comprises: a data acquisition module for acquiring a visual image containing spatial and color information of a food material sample to be analyzed and a hyperspectral data cube containing spectral information of the food material sample over a plurality of spectral bands; an image processing module performing preprocessing operations, including image enhancement and spatial alignment, on the visual image and the data cube; a characterization generation module for constructing an observation matrix from the preprocessed data, calculating pairwise covariances between the data channels from the observation matrix to generate a covariance matrix, and converting the covariance matrix into a two-dimensional characterization image; a feature extraction module that inputs the two-dimensional representation image into a multitask intelligent model and extracts a shared feature representation through a shared feature extraction layer in the model; a feature fusion module for processing the shared feature representation through an attention bottleneck fusion mechanism in the model to generate a collaborative feature representation, the fusion mechanism performing cross-attention interaction with the shared feature representation through a group of learnable bottleneck tokens; and an analysis output module that predicts and outputs multidimensional attributes of the food material sample, including the food material type, the freshness index, and the nutritional components, based on the collaborative feature representation.

Preferably, the process of acquiring the visual image and the hyperspectral data cube comprises: capturing the visual image using a standard RGB sensor, recording the texture, shape, and surface color information of the food material sample; and sequentially capturing the hyperspectral data cube using a line-scan or snapshot hyperspectral camera over a plurality of continuous narrow bands covering the visible to near-infrared spectrum to record chemical composition information of the food material sample.
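The analysis output module's parallel prediction heads can be sketched as three independent linear heads reading the same collaborative feature vector: a softmax classifier for the food type and two regression heads for the freshness index and the nutritional components. The feature width, class count, and random weights below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
feat = rng.standard_normal(32)            # flattened collaborative feature vector
N_CLASSES, N_NUTRIENTS = 5, 3             # assumed output sizes

W_cls = rng.standard_normal((N_CLASSES, 32))    # classification head weights
W_fresh = rng.standard_normal((1, 32))          # freshness regression head
W_nutr = rng.standard_normal((N_NUTRIENTS, 32)) # nutrition regression head

# All heads read the same fused features in parallel and independently.
logits = W_cls @ feat
probs = np.exp(logits - logits.max())
probs /= probs.sum()
food_type = int(probs.argmax())           # classification task: food material type
freshness = (W_fresh @ feat).item()       # regression task: freshness index
nutrients = W_nutr @ feat                 # regression task: nutritional components

print(food_type, nutrients.shape)
```

Because every head is trained against the shared collaborative representation, gradients from the classification and regression tasks all shape the same features, which is the "mutual gain among analysis tasks" the abstract refers to.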