
CN-122027365-A - Cross-domain collaborative computing system based on AI SoC and safety MCU

CN 122027365 A

Abstract

The invention provides a cross-domain collaborative computing system based on an AI SoC and a secure MCU, relating to the technical field of intelligent computing. First, an AI SoC computing unit receives a multi-source perception data stream comprising an image frame sequence and a point cloud frame sequence, and uses a built-in first hardware acceleration engine to perform preliminary cross-source feature analysis, obtaining an image preliminary analysis feature set and a point cloud preliminary analysis feature set, which are packaged into a cross-domain feature data packet to be verified. The packet is then transmitted to the secure MCU control unit over a secure communication channel for source validity verification. After verification passes, a second hardware acceleration engine built into the secure MCU control unit performs cross-domain feature collaborative computation to generate a cross-domain collaborative computation intermediate result. Finally, the intermediate result is returned to the AI SoC computing unit for subsequent high-level computation, yielding a target collaborative computation result. The method and the device improve computation efficiency, ensure data security, and are suitable for scenarios such as autonomous driving.

Inventors

  • HE YUNLONG

Assignees

  • 国芯韵(上海)智能信息科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-14

Claims (10)

  1. A cross-domain collaborative computing method based on an AI SoC and a secure MCU, characterized by comprising the following steps: receiving a multi-source perception data stream to be processed in an AI SoC computing unit, wherein the multi-source perception data stream comprises an image frame sequence unit output by at least one image acquisition device and a point cloud frame sequence unit output by at least one radar detection device, each image frame in the image frame sequence unit carries a corresponding first acquisition timestamp, each point cloud frame in the point cloud frame sequence unit carries a corresponding second acquisition timestamp, and the first and second acquisition timestamps are distributed in a staggered manner on the time axis; invoking a first hardware acceleration engine built into the AI SoC computing unit to perform a preliminary cross-source data feature analysis operation on the multi-source perception data stream, obtaining an image preliminary analysis feature set corresponding to the image frame sequence unit and a point cloud preliminary analysis feature set corresponding to the point cloud frame sequence unit, and packaging the image preliminary analysis feature set and the point cloud preliminary analysis feature set together into a cross-domain feature data packet to be verified; transmitting the cross-domain feature data packet to be verified to the secure MCU control unit through a secure communication channel pre-established between the AI SoC computing unit and the secure MCU control unit, triggering the secure MCU control unit to perform source validity verification processing on the cross-domain feature data packet to be verified based on root key material stored in the secure MCU control unit, and obtaining a source validity verification passing result; after the secure MCU control unit confirms the source validity verification passing result, invoking a second hardware acceleration engine built into the secure MCU control unit to execute a cross-domain feature collaborative computing operation based on the image preliminary analysis feature set and the point cloud preliminary analysis feature set in the cross-domain feature data packet to be verified, generating a cross-domain collaborative computing intermediate result containing the association relationship between the image features and the point cloud features; and returning the cross-domain collaborative computing intermediate result to the AI SoC computing unit through the secure communication channel, and instructing the AI SoC computing unit to execute subsequent high-level computing tasks on the multi-source perception data stream according to the cross-domain collaborative computing intermediate result, so as to obtain a target collaborative computing result corresponding to the multi-source perception data stream.
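The round trip in claim 1 can be sketched in plain Python. The function names, data shapes, and toy "features" below are illustrative assumptions, not the patent's actual hardware interfaces:

```python
# Hypothetical sketch of the claim-1 round trip: the AI SoC extracts
# preliminary features, the secure MCU checks the packet source and fuses
# the features, and the SoC finishes the high-level computation.

def soc_preliminary_analysis(image_frames, point_cloud_frames):
    """Stand-in for the first hardware acceleration engine."""
    image_features = [sum(f) / len(f) for f in image_frames]   # toy texture feature
    cloud_features = [len(pc) for pc in point_cloud_frames]    # toy density feature
    return {"source_id": "SOC-0", "image": image_features, "cloud": cloud_features}

def mcu_verify_and_fuse(packet, trusted_ids=frozenset({"SOC-0"})):
    """Stand-in for source validity verification plus the second engine."""
    if packet["source_id"] not in trusted_ids:
        raise PermissionError("source validity verification failed")
    # Intermediate result: associate image and cloud features positionally.
    return [img * cld for img, cld in zip(packet["image"], packet["cloud"])]

def soc_high_level_compute(intermediate):
    """Stand-in for the subsequent high-level computation on the SoC."""
    return max(intermediate)   # e.g. the strongest cross-domain response

packet = soc_preliminary_analysis([[1, 2, 3], [4, 5, 6]],
                                  [[(0, 0, 0)], [(1, 1, 1), (2, 2, 2)]])
result = soc_high_level_compute(mcu_verify_and_fuse(packet))
```

The point of the split is that the MCU never executes unverified data, and the SoC never executes an intermediate result it did not request.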
  2. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 1, wherein invoking the first hardware acceleration engine built into the AI SoC computing unit to perform the preliminary cross-source data feature analysis operation on the multi-source perception data stream, obtaining the image preliminary analysis feature set corresponding to the image frame sequence unit and the point cloud preliminary analysis feature set corresponding to the point cloud frame sequence unit, and encapsulating the image preliminary analysis feature set and the point cloud preliminary analysis feature set together into the cross-domain feature data packet to be verified, specifically includes: analyzing the multi-source perception data stream, extracting a plurality of continuously arranged image frames from the image frame sequence unit, allocating an image frame storage index in a temporary buffer area inside the AI SoC computing unit for each image frame, extracting a plurality of continuously arranged point cloud frames from the point cloud frame sequence unit, and allocating a point cloud frame storage index in the temporary buffer area inside the AI SoC computing unit for each point cloud frame; invoking a configurable image processing pipeline in the first hardware acceleration engine, performing image edge feature enhancement calculation on the plurality of image frames frame by frame according to a preset first feature extraction parameter to obtain edge-enhanced image data corresponding to each image frame, performing local binary pattern feature mapping calculation on the edge-enhanced image data to generate an image preliminary texture feature vector corresponding to each image frame, and forming the image preliminary analysis feature set from the image preliminary texture feature vectors corresponding to all the image frames; invoking a configurable point cloud processing pipeline in the first
hardware acceleration engine, performing point cloud space rasterization calculation on the plurality of point cloud frames frame by frame according to a preset second characteristic extraction parameter, mapping discrete point clouds in each point cloud frame into three-dimensional space grid units divided in advance, counting point cloud density distribution parameters in each three-dimensional space grid unit, generating a point cloud preliminary density distribution feature map corresponding to each point cloud frame, and aggregating the point cloud preliminary density distribution feature maps corresponding to all the point cloud frames to form the point cloud preliminary analysis feature set; Dividing a feature data encapsulation buffer area in a shared memory area of the AI SoC computing unit, sequentially writing all image preliminary texture feature vectors in the image preliminary analysis feature set into a first continuous storage section of the feature data encapsulation buffer area according to the sequence of the image frame storage indexes, and sequentially writing all point cloud preliminary density distribution feature images in the point cloud preliminary analysis feature set into a second continuous storage section of the feature data encapsulation buffer area according to the sequence of the point cloud frame storage indexes; Adding a data packet encapsulation head to the head of the characteristic data encapsulation buffer zone, wherein the data packet encapsulation head comprises a unique hardware identifier of the AI SoC computation unit, acquisition start time stamp information of the multi-source perception data stream, a total number of the image preliminary texture feature vectors and a data length descriptor of each image preliminary texture feature vector, and a total number of the point cloud preliminary density distribution feature images and a data length descriptor of each point cloud preliminary density distribution feature image; 
and performing cyclic redundancy check calculation on the written data content in the characteristic data encapsulation buffer zone, including the data packet encapsulation head, the first continuous storage section and the second continuous storage section, generating a checksum value, and attaching the checksum value to the tail end of the characteristic data encapsulation buffer zone to complete the encapsulation operation of the cross-domain characteristic data packet to be verified.
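The packet layout described in claim 2 (a header with hardware identifier, start timestamp, counts and per-item length descriptors; an image section and a point cloud section; a CRC checksum at the tail) can be sketched as follows. The field widths and byte order are assumptions, not values from the patent:

```python
import struct
import zlib

def encapsulate(hw_id: bytes, start_ts: int, image_vecs, cloud_maps) -> bytes:
    # Header: 8-byte hardware id, 64-bit start timestamp, two item counts...
    header = struct.pack(">8sQHH", hw_id.ljust(8, b"\0"), start_ts,
                         len(image_vecs), len(cloud_maps))
    # ...followed by a 32-bit length descriptor per item.
    header += b"".join(struct.pack(">I", len(v)) for v in image_vecs)
    header += b"".join(struct.pack(">I", len(m)) for m in cloud_maps)
    # First continuous section: image vectors; second: point cloud maps.
    payload = header + b"".join(image_vecs) + b"".join(cloud_maps)
    # CRC32 over header + both sections, appended at the tail.
    return payload + struct.pack(">I", zlib.crc32(payload))

def verify(packet: bytes) -> bool:
    payload, (crc,) = packet[:-4], struct.unpack(">I", packet[-4:])
    return zlib.crc32(payload) == crc

pkt = encapsulate(b"SOC-0", 1_700_000_000, [b"\x01\x02", b"\x03"], [b"\x10" * 4])
```

A receiver that fails `verify` would discard the packet before any source validity check is attempted.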
  3. The method of claim 1, wherein transmitting the cross-domain feature data packet to be verified to the secure MCU control unit through the secure communication channel pre-established between the AI SoC computing unit and the secure MCU control unit, triggering the secure MCU control unit to perform source validity verification processing on the cross-domain feature data packet to be verified based on the root key material stored therein, and obtaining the source validity verification passing result, comprises: invoking a built-in public key infrastructure coprocessor in the AI SoC computing unit, reading a device certificate of the AI SoC computing unit from a read-only storage area of the AI SoC computing unit, wherein the device certificate comprises a public key of the AI SoC computing unit and a digital signature issued by a root certificate authority, and encrypting the unique hardware identifier in the data packet encapsulation header of the cross-domain feature data packet to be verified by using the public key in the device certificate to generate an encrypted hardware identifier ciphertext; combining the whole cross-domain feature data packet to be verified and the encrypted hardware identifier ciphertext into a transmission data unit, and sending the transmission data unit to the secure MCU control unit through the secure communication channel, wherein the secure communication channel is an encrypted logical link established based on the Secure Sockets Layer protocol or the Transport Layer Security protocol; after receiving the transmission data unit, the secure MCU control unit extracts the preset root key material from an internal tamper-proof secure storage area of the secure MCU control unit, wherein the root key material comprises a public key of the root certificate authority and a private key of the secure MCU control unit, and the public key of the root certificate authority is used to perform digital signature validity verification on the device certificate of the AI SoC computing unit; if the verification fails, the subsequent flow is terminated and a verification failure log is generated; if the digital signature of the device certificate passes the verification, extracting the public key of the AI SoC computing unit from the device certificate, and decrypting the encrypted hardware identifier ciphertext in the transmission data unit by using the extracted public key of the AI SoC computing unit to obtain a decrypted hardware identifier plaintext; comparing the decrypted hardware identifier plaintext byte by byte with the unique hardware identifier parsed from the data packet encapsulation header of the cross-domain feature data packet to be verified in the transmission data unit; if the two are completely consistent, judging that the source of the cross-domain feature data packet to be verified is legitimate, and generating the source validity verification passing result; and the secure MCU control unit sets a data source trusted flag bit corresponding to the current computing task in an internal register according to the source validity verification passing result, and feeds back the state of the data source trusted flag bit to the AI SoC computing unit through the secure communication channel.
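The verification flow of claim 3 can be sketched with the asymmetric operations replaced by HMACs so the example stays self-contained. The control flow mirrors the claim (verify the certificate against the root key material, recover the identifier, compare byte by byte, set the flag), but every key, name, and field is an assumption:

```python
import hashlib
import hmac

ROOT_KEY = b"root-ca-secret"   # assumed stand-in for the root key material

def issue_certificate(device_pubkey: bytes) -> dict:
    """Stand-in for the root certificate authority signing a device key."""
    return {"pubkey": device_pubkey,
            "signature": hmac.new(ROOT_KEY, device_pubkey, hashlib.sha256).digest()}

def mcu_verify_source(cert: dict, protected_id: bytes, header_id: bytes) -> bool:
    # 1. Check the certificate against the root key material; on failure
    #    the claim terminates the flow and logs the failure.
    expected = hmac.new(ROOT_KEY, cert["pubkey"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, cert["signature"]):
        return False
    # 2. Recover the identifier (stand-in for decrypting the ciphertext)
    #    and compare it byte by byte with the header's identifier.
    recovered = hmac.new(cert["pubkey"], header_id, hashlib.sha256).digest()
    return hmac.compare_digest(recovered, protected_id)   # the trusted flag

cert = issue_certificate(b"soc-public-key")
protected = hmac.new(b"soc-public-key", b"SOC-0", hashlib.sha256).digest()
```

`hmac.compare_digest` is used for both comparisons because a naive `==` on secrets can leak timing information.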
  4. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 1, wherein after the secure MCU control unit confirms the source validity verification passing result, invoking the second hardware acceleration engine built into the secure MCU control unit and executing the cross-domain feature collaborative computing operation based on the image preliminary analysis feature set and the point cloud preliminary analysis feature set in the cross-domain feature data packet to be verified, to generate the cross-domain collaborative computing intermediate result including the association relationship between image features and point cloud features, specifically includes: after confirming that the data source trusted flag bit is set, the secure MCU control unit parses the image preliminary analysis feature set and the point cloud preliminary analysis feature set from the cross-domain feature data packet to be verified, distributes each image preliminary texture feature vector in the image preliminary analysis feature set to a first vector processing unit array in the second hardware acceleration engine, and distributes each point cloud preliminary density distribution feature map in the point cloud preliminary analysis feature set to a second vector processing unit array in the second hardware acceleration engine; invoking a programmable collaborative computing core in the second hardware acceleration engine, searching the point cloud preliminary analysis feature set, for each image preliminary texture feature vector, for a point cloud preliminary density distribution feature map with an adjacent timestamp according to a preset cross-domain association mapping rule, wherein timestamp adjacency is determined based on the absolute value of the difference between the first acquisition timestamp and the second acquisition timestamp being smaller than a preset time window threshold, and forming a preliminary
image-point cloud feature pairing group set; For each group of image-point cloud feature pairing in the preliminary image-point cloud feature pairing group set, the programmable cooperative computing core executes feature space alignment computation, maps image feature coordinates associated with the image preliminary texture feature vector to a three-dimensional space grid coordinate system corresponding to the point cloud preliminary density distribution feature map based on a preset projective transformation matrix, and generates an aligned space texture mapping feature tensor according to the mapped coordinates; Performing element-by-element multiplication fusion calculation on the aligned spatial texture mapping feature tensor and the corresponding point cloud preliminary density distribution feature map to obtain local cross-domain fusion feature tensors corresponding to each image-point cloud feature pairing, and stacking and combining all the local cross-domain fusion feature tensors according to a time sequence to form a three-dimensional cross-domain fusion feature tensor sequence; And calling a feature dimension reduction module in the second hardware acceleration engine, carrying out sliding window average pooling calculation on the cross-domain fusion feature tensor sequence along the time dimension, carrying out maximum pooling calculation on each sliding window along the space dimension to obtain a compressed cross-domain collaborative feature map, and serializing the compressed cross-domain collaborative feature map into a one-dimensional feature vector form to serve as an intermediate result of the cross-domain collaborative calculation.
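The cooperative computation of claim 4 (timestamp pairing, spatial alignment, element-wise fusion, then temporal and spatial pooling) can be sketched on tiny 2x2 feature maps. The identity alignment standing in for the projective transformation, the window threshold, and the shapes are all assumptions:

```python
TIME_WINDOW = 0.05  # seconds; assumed pairing threshold

def fuse(image_feats, cloud_feats):
    """image_feats / cloud_feats: lists of (timestamp, 2x2 feature map)."""
    fused = []
    for t_img, img in image_feats:
        # Pair with the nearest point-cloud map inside the time window.
        candidates = [(abs(t_img - t_pc), pc) for t_pc, pc in cloud_feats
                      if abs(t_img - t_pc) < TIME_WINDOW]
        if not candidates:
            continue
        _, pc = min(candidates)
        # Element-wise multiplication fusion (identity alignment assumed).
        fused.append([[img[r][c] * pc[r][c] for c in range(2)] for r in range(2)])
    if not fused:
        return []
    # Temporal average pooling over the stacked slices...
    avg = [[sum(m[r][c] for m in fused) / len(fused) for c in range(2)]
           for r in range(2)]
    # ...then spatial max pooling, serialized as a 1-D intermediate result.
    return [max(max(row) for row in avg)]

image_feats = [(0.00, [[1, 2], [3, 4]]), (0.10, [[2, 2], [2, 2]])]
cloud_feats = [(0.01, [[1, 0], [0, 1]]), (0.11, [[2, 2], [2, 2]])]
intermediate = fuse(image_feats, cloud_feats)
```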
  5. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 1, wherein returning the cross-domain collaborative computing intermediate result to the AI SoC computing unit through the secure communication channel, and instructing the AI SoC computing unit to execute the subsequent high-level computing task on the multi-source perception data stream according to the cross-domain collaborative computing intermediate result, so as to obtain the target collaborative computing result corresponding to the multi-source perception data stream, specifically includes: in the secure MCU control unit, packaging the cross-domain cooperative computing intermediate result into a return data packet, wherein the return data packet comprises a unique secure chip identifier of the secure MCU control unit, a data body of the cross-domain cooperative computing intermediate result, and a digital signature generated by encrypting a hash value of the cross-domain cooperative computing intermediate result with a public key of the AI SoC computing unit; the return data packet is sent to the AI SoC computing unit through the secure communication channel; after receiving the return data packet, the AI SoC computing unit decrypts the digital signature in the return data packet by using its private key to obtain a decrypted hash value, and performs the same hash computation on the received data body of the cross-domain cooperative computing intermediate result to obtain a computed hash value; comparing the decrypted hash value with the computed hash value, and if the two are consistent, confirming that the cross-domain cooperative computing intermediate result has not been tampered with in transit and that its source is a legitimate secure MCU control unit, and releasing the cross-domain cooperative computing intermediate
result to an executable memory area of the AI SoC computing unit; The AI SoC computing unit reads the cross-domain cooperative computing intermediate result from the executable memory area, wherein the cross-domain cooperative computing intermediate result is used as one of input parameters of the subsequent high-level computing task and is combined with the original data of the multi-source perceived data stream locally cached by the AI SoC computing unit or further extracted deep features; Invoking a neural network processing unit in the AI SoC calculation unit, loading a pre-trained target detection and identification neural network model, inputting the cross-domain cooperative calculation intermediate result and the deep features into the target detection and identification neural network model together, and executing forward reasoning calculation, wherein an output layer of the target detection and identification neural network model generates a reasoning result set comprising a target object category label, a boundary frame coordinate of a target object in an image and a position coordinate of the target object in a three-dimensional space; And carrying out association binding on the reasoning result set and timestamp information of a corresponding frame in the multi-source perception data stream to generate a structured target cooperative computing result, storing the target cooperative computing result into a storage device connected with an external memory interface of the AI SoC computing unit, and simultaneously transmitting the target cooperative computing result to a central control unit through a system bus for decision control.
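The return path of claim 5 can be sketched with a plain SHA-256 digest standing in for the asymmetric signature, since the essential step is recomputing the hash of the data body and comparing before release. Field names are assumptions:

```python
import hashlib

def mcu_pack_result(chip_id: bytes, body: bytes) -> dict:
    """Stand-in for the secure MCU packaging the intermediate result."""
    return {"chip_id": chip_id, "body": body,
            "digest": hashlib.sha256(body).digest()}   # signature stand-in

def soc_accept(packet: dict) -> bytes:
    """Recompute the hash; release the body only if it matches."""
    recomputed = hashlib.sha256(packet["body"]).digest()
    if recomputed != packet["digest"]:
        raise ValueError("intermediate result tampered in transit")
    return packet["body"]   # released to the executable memory area

pkt = mcu_pack_result(b"MCU-0", b"\x01\x02\x03")
```

Anything that fails the check never reaches the executable memory area, which is the point of verifying before release rather than after.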
  6. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 2, wherein invoking the configurable image processing pipeline in the first hardware acceleration engine to perform the image edge feature enhancement computation on the plurality of image frames frame by frame according to the preset first feature extraction parameter to obtain the edge-enhanced image data corresponding to each image frame, and performing the local binary pattern feature mapping computation on the edge-enhanced image data to generate the image preliminary texture feature vector corresponding to each image frame, specifically comprises: reading a current image frame to be processed from the temporary buffer area according to the order of the image frame storage indexes, and inputting pixel data of the current image frame to be processed into an input end of the configurable image processing pipeline in the form of a data stream, wherein the configurable image processing pipeline comprises a programmable convolution filter kernel array; the first feature extraction parameter comprises a group of predefined edge detection convolution kernel coefficients; the edge detection convolution kernel coefficients are loaded into a register of the convolution filter kernel array, the convolution filter kernel array is controlled to perform a sliding window convolution operation on the input pixel data of the current image frame to be processed, edge intensity response values corresponding to each pixel position are output, and the edge intensity response values of all pixel positions form an edge intensity map corresponding to the current image frame to be processed; inputting the edge intensity map into a nonlinear activation function unit in the configurable image processing pipeline, the nonlinear activation function unit thresholding each edge intensity response value in the edge intensity map, setting any response value lower than a preset low response threshold value
to be zero, and saturating any response value higher than a preset high response threshold value, to obtain the thresholded edge intensity map as the edge-enhanced image data; inputting the edge-enhanced image data into a local binary pattern feature extraction unit in the configurable image processing pipeline, wherein the local binary pattern feature extraction unit takes each pixel in the edge-enhanced image data as a center, defines a circular neighborhood whose radius is a preset radius value, uniformly samples a preset number of sampling points on the boundary of the circular neighborhood, compares the pixel value of each sampling point with the central pixel value, assigns the corresponding binary bit as one if the pixel value of the sampling point is greater than or equal to the central pixel value, assigns it as zero if the pixel value of the sampling point is less than the central pixel value, and combines the binary bits corresponding to the comparison results of all the sampling points into one binary number in a preset order to serve as the local binary pattern coding value of the central pixel; traversing all pixels in the edge-enhanced image data, calculating a local binary pattern coding value for each pixel, forming a local binary pattern coding map from the local binary pattern coding values of all pixels, dividing the local binary pattern coding map into a plurality of non-overlapping image sub-blocks, and counting the occurrence frequency of all possible local binary pattern coding values in each image sub-block by histogram statistics to form a multidimensional histogram vector; and performing head-to-tail stitching of the multidimensional histogram vectors of all the image sub-blocks in a preset order to form a one-dimensional feature vector, performing vector normalization calculation on the one-dimensional feature vector, dividing the value
of each dimension by the modular length of the feature vector to obtain a normalized one-dimensional feature vector, serving as an image preliminary texture feature vector corresponding to the current image frame to be processed, and repeatedly performing all processing steps from reading the image frame to generating the image preliminary texture feature vector until all the image frames are processed.
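The texture stage of claim 6 can be sketched as follows. The two thresholds are assumed values, and the circular sampling is approximated by the eight immediate neighbours (radius 1, 8 sampling points), with the whole image treated as a single sub-block:

```python
LOW_T, HIGH_T = 10, 200   # assumed low/high response thresholds

def threshold(edge_map):
    """Zero out weak responses, saturate strong ones."""
    return [[0 if v < LOW_T else min(v, HIGH_T) for v in row] for row in edge_map]

def lbp_codes(img):
    """8-neighbour LBP code for every interior pixel, in a fixed order."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = img[r][c]
            bits = [1 if img[r + dr][c + dc] >= center else 0
                    for dr, dc in offsets]
            codes.append(int("".join(map(str, bits)), 2))
    return codes

def histogram_vector(codes, bins=256):
    """Frequency histogram of LBP codes, L2-normalized."""
    hist = [0] * bins
    for code in codes:
        hist[code] += 1
    norm = sum(v * v for v in hist) ** 0.5 or 1.0
    return [v / norm for v in hist]

edge_map = threshold([[5, 50, 250], [40, 30, 60], [20, 220, 90]])
vec = histogram_vector(lbp_codes(edge_map))
```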
  7. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 2, wherein invoking the configurable point cloud processing pipeline in the first hardware acceleration engine to perform the point cloud space rasterization computation on the plurality of point cloud frames frame by frame according to the preset second feature extraction parameter, mapping the discrete point clouds in each point cloud frame into the pre-divided three-dimensional space grid units, counting the point cloud density distribution parameters in each three-dimensional space grid unit, and generating the point cloud preliminary density distribution feature map corresponding to each point cloud frame, specifically comprises: reading a current point cloud frame to be processed from the temporary buffer area according to the order of the point cloud frame storage indexes, wherein the current point cloud frame to be processed comprises a plurality of three-dimensional space points, each three-dimensional space point is represented by its three-dimensional coordinate values, and all the three-dimensional space point data of the current point cloud frame to be processed are input into the configurable point cloud processing pipeline; the second feature extraction parameter comprises partition parameters of the three-dimensional space grid, wherein the partition parameters comprise the grid start coordinate, grid end coordinate, and grid dimension in the X-axis direction, the grid start coordinate, grid end coordinate, and grid dimension in the Y-axis direction, and the grid start coordinate, grid end coordinate, and grid dimension in the Z-axis direction; a three-dimensional grid counter array is constructed in an internal memory of the configurable point cloud processing pipeline according to the partition parameters, and each element of the grid counter array corresponds to one three-dimensional space grid unit; traversing each three-dimensional space
point in the current to-be-processed point cloud frame, calculating indexes of grid units to which the three-dimensional space point belongs in the grid counter array according to three-dimensional coordinate values of the three-dimensional space point and the dividing parameters, finding out corresponding grid counter array elements, increasing count values of the grid counter array elements by 1, and after traversing all the three-dimensional space points, wherein the count value of each element in the grid counter array represents the number of point clouds contained in the corresponding three-dimensional space grid units; Dividing the count value of each element in the grid counter array by the total number of point clouds in the current point cloud frame to be processed to obtain a point cloud quantity ratio of each three-dimensional space grid unit, and organizing all the point cloud quantity ratio into a three-dimensional data cube according to the sequence of the corresponding grid units in the directions of an X axis, a Y axis and a Z axis, wherein each element of the three-dimensional data cube is a point cloud quantity ratio; Performing maximum value clipping processing on the three-dimensional data cube, forcibly setting element values larger than a preset duty ratio upper limit threshold value as the preset duty ratio upper limit threshold value, performing overall linear scaling on the clipped three-dimensional data cube, and linearly mapping values of all elements into a closed interval from zero to one to obtain a normalized three-dimensional data cube; And carrying out maximum pooling projection on the normalized three-dimensional data cube along the Z-axis direction, namely taking the maximum value of the point cloud density relative values of all grid units of the XY plane position along the Z-axis direction as the projection value of the XY plane position on each XY plane position, forming a two-dimensional density projection graph by the 
projection values of all XY plane positions, taking the two-dimensional density projection graph as a point cloud preliminary density distribution characteristic graph corresponding to the point cloud frame to be processed currently, and repeatedly executing all processing steps from reading the point cloud frame to generating the point cloud preliminary density distribution characteristic graph on each of the rest of the plurality of point cloud frames.
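The rasterization of claim 7 can be sketched on a small assumed grid: count points per voxel, convert counts to ratios, clip and rescale to [0, 1], then max-project along the Z axis. The grid extents and the clipping threshold are illustrative assumptions:

```python
GRID = dict(x=(0.0, 2.0, 2), y=(0.0, 2.0, 2), z=(0.0, 2.0, 2))  # start, end, cells
CLIP = 0.5   # assumed upper limit on a voxel's point-count ratio

def voxel_index(v, start, end, n):
    """Map a coordinate to a voxel index, clamping to the grid bounds."""
    i = int((v - start) / (end - start) * n)
    return min(max(i, 0), n - 1)

def density_projection(points):
    nx, ny, nz = GRID["x"][2], GRID["y"][2], GRID["z"][2]
    counts = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
    for x, y, z in points:  # count points per voxel
        counts[voxel_index(x, *GRID["x"])][voxel_index(y, *GRID["y"])] \
              [voxel_index(z, *GRID["z"])] += 1
    total = len(points) or 1
    # ratio -> clip at CLIP -> linearly rescale to [0, 1]
    ratio = [[[min(counts[i][j][k] / total, CLIP) / CLIP for k in range(nz)]
              for j in range(ny)] for i in range(nx)]
    # max-pooling projection along the Z axis gives the 2-D density map
    return [[max(ratio[i][j]) for j in range(ny)] for i in range(nx)]

points = [(0.5, 0.5, 0.5), (0.5, 0.5, 1.5), (1.5, 1.5, 0.5), (1.5, 1.5, 0.6)]
proj = density_projection(points)
```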
  8. The cross-domain collaborative computing method based on an AI SoC and a secure MCU according to claim 4, wherein invoking the programmable collaborative computing core in the second hardware acceleration engine to search the point cloud preliminary analysis feature set, for each image preliminary texture feature vector, for a point cloud preliminary density distribution feature map with an adjacent timestamp according to the preset cross-domain association mapping rule, forming the preliminary image-point cloud feature pairing group set, specifically comprises: parsing the cross-domain feature data packet to be verified in the programmable collaborative computing core, and acquiring the first acquisition timestamp of the original image frame corresponding to each image preliminary texture feature vector in the image preliminary analysis feature set and the second acquisition timestamp of the original point cloud frame corresponding to each point cloud preliminary density distribution feature map in the point cloud preliminary analysis feature set; constructing a first hash mapping table keyed by timestamp, and index-storing all point cloud preliminary density distribution feature maps in the point cloud preliminary analysis feature set according to their second acquisition timestamps; constructing a second hash mapping table keyed by timestamp, and index-storing all image preliminary texture feature vectors in the image preliminary analysis feature set according to their first acquisition timestamps; traversing each image preliminary texture feature vector in the second hash mapping table, reading the first acquisition timestamp of the currently traversed image preliminary texture feature vector, and searching for a point cloud preliminary density distribution feature map with absolute differences of all
second acquisition time stamps and the first acquisition time stamp being smaller than or equal to a preset time window threshold in the first hash mapping table; If one or more point cloud preliminary density distribution feature images meeting the time proximity condition are found, calculating a time proximity score between the current image preliminary texture feature vector and each found point cloud preliminary density distribution feature image, wherein the time proximity score is inversely proportional to the absolute difference value, and selecting one point cloud preliminary density distribution feature image with the highest time proximity score as the best matching object of the current image preliminary texture feature vector; Combining the preliminary texture feature vector of the current image and the corresponding best matching image point cloud preliminary density distribution feature map into an image-point cloud feature pair, distributing a unique pairing identifier for the image-point cloud feature pair, and recording the time offset between a first acquisition time stamp of the preliminary texture feature vector of the image in the image-point cloud feature pair and a second acquisition time stamp of the point cloud preliminary density distribution feature map; And collecting all generated image-point cloud feature pairs to form a preliminary image-point cloud feature pair group set, and simultaneously, storing the time offset of each pair as additional information in association with the pair identifier for use in the subsequent feature space alignment calculation of the programmable cooperative computing core.
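The pairing step of claim 8 can be sketched with Python dictionaries playing the role of the two hash mapping tables. The score form `1 / (1 + |dt|)` is one assumed reading of "inversely proportional to the absolute difference", and the window threshold is illustrative:

```python
WINDOW = 0.05  # assumed time window threshold, in seconds

def pair_features(image_index: dict, cloud_index: dict):
    """image_index / cloud_index map timestamp -> feature payload."""
    pairs = []
    for t_img, img_feat in sorted(image_index.items()):
        # Candidates inside the window, scored by time proximity.
        candidates = [(1.0 / (1.0 + abs(t_img - t_pc)), t_pc, pc_feat)
                      for t_pc, pc_feat in cloud_index.items()
                      if abs(t_img - t_pc) <= WINDOW]
        if candidates:
            _, t_pc, pc_feat = max(candidates)   # best match wins
            pairs.append({"pair_id": len(pairs), "image": img_feat,
                          "cloud": pc_feat, "offset": t_img - t_pc})
    return pairs

pairs = pair_features({0.00: "img0", 0.10: "img1"},
                      {0.01: "pc0", 0.09: "pc1", 0.50: "pc2"})
```

The recorded `offset` is the per-pair time offset that the claim carries forward into the feature space alignment computation.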
  9. The cross-domain collaborative computing method based on AI SoC and safety MCU according to claim 4, wherein performing the element-wise multiplication fusion computation on the aligned spatial texture mapping feature tensor and the corresponding point cloud preliminary density distribution feature map to obtain a local cross-domain fusion feature tensor corresponding to each image-point cloud feature pairing, and stacking and combining all the local cross-domain fusion feature tensors in time order to form a three-dimensional cross-domain fusion feature tensor sequence, specifically comprises: For each image-point cloud feature pairing in the preliminary image-point cloud feature pairing group set, inputting the aligned spatial texture mapping feature tensor into a first multiplier input end of the programmable collaborative computing core, inputting the point cloud preliminary density distribution feature map into a second multiplier input end of the programmable collaborative computing core, and keeping the spatial dimension of the spatial texture mapping feature tensor consistent with the spatial dimension of the point cloud preliminary density distribution feature map; In the programmable collaborative computing core, performing an element-by-element multiplication operation of the spatial texture mapping feature tensor and the point cloud preliminary density distribution feature map at corresponding spatial positions, namely multiplying the value of the spatial texture mapping feature tensor at each coordinate point in a two-dimensional space by the value of the point cloud preliminary density distribution feature map at the same coordinate point to obtain a new value as the fusion feature value of that coordinate point; The fusion feature values of all coordinate points form a two-dimensional fusion feature matrix, and the two-dimensional fusion feature matrix, which preserves the result of weighting the spatial texture information by the point cloud density information, is used as the local cross-domain fusion feature tensor corresponding to the current image-point cloud feature pairing; Traversing all image-point cloud feature pairings and repeatedly executing the element-by-element multiplication fusion calculation on each pairing to obtain a corresponding number of local cross-domain fusion feature tensors, wherein each local cross-domain fusion feature tensor carries timestamp information associated with the corresponding pairing, and the timestamp information may be the first acquisition timestamp, the second acquisition timestamp, or a timestamp obtained by averaging the first acquisition timestamp and the second acquisition timestamp; And according to the timestamp information attached to each local cross-domain fusion feature tensor, ordering all the local cross-domain fusion feature tensors in time sequence, constructing a continuous three-dimensional storage space in an internal storage area of the programmable collaborative computing core, taking the ordered local cross-domain fusion feature tensors as two-dimensional slices in the three-dimensional storage space, and sequentially placing the two-dimensional slices along a third dimension to form the three-dimensional cross-domain fusion feature tensor sequence.
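A minimal NumPy sketch of this fusion-and-stacking step follows. It is illustrative only: the claim targets a programmable collaborative computing core with dedicated multiplier inputs, whereas here the element-wise product and the time-ordered stacking along a third dimension are emulated in software, and all names are assumptions. Each slice carries a single attached timestamp, as the claim permits.

```python
import numpy as np

def fuse_and_stack(pairings):
    """Element-wise multiplication fusion of each aligned texture
    tensor with its paired density map, then time-ordered stacking
    into a three-dimensional fused tensor sequence.

    pairings: list of (timestamp, texture_tensor, density_map) where
              both 2-D arrays share the same spatial dimensions.
    """
    fused = []
    for ts, texture, density in pairings:
        # Spatial dimensions of the two inputs must be consistent
        assert texture.shape == density.shape
        # Density-weighted texture: per-coordinate product forms the
        # local cross-domain fusion feature tensor
        fused.append((ts, texture * density))
    # Order the 2-D slices by their attached timestamps, then place
    # them sequentially along a new third (time) dimension
    fused.sort(key=lambda f: f[0])
    return np.stack([f[1] for f in fused], axis=0)
```

`np.stack` plays the role of the continuous three-dimensional storage space: the sorted two-dimensional fusion matrices become consecutive slices indexed by the new leading axis.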
  10. A cross-domain collaborative computing system based on an AI SoC and a secure MCU, characterized by comprising: a processor; and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to perform the AI SoC and secure MCU-based cross-domain collaborative computing method of any one of claims 1-9 by executing the machine-executable instructions.

Description

Cross-domain collaborative computing system based on AI SoC and safety MCU

Technical Field

The invention relates to the technical field of intelligent computing, and in particular to a cross-domain collaborative computing system based on an AI SoC and a safety MCU.

Background

In the current intelligent computing field, with the continuous development of application scenarios such as automatic driving and intelligent security, the demand for processing multi-source perception data is growing rapidly. Multi-source perception data typically comes from different types of sensors, such as image acquisition devices and radar detection devices. The image acquisition equipment provides rich visual information, presented in the form of an image frame sequence, while the radar detection equipment acquires information such as the distance and speed of a target object, presented in the form of a point cloud frame sequence. However, existing data processing methods face a number of challenges in processing such multi-source perception data. On the one hand, the data acquisition time stamps of the different sensors are staggered on the time axis, which complicates the time synchronization and fusion of the data. On the other hand, in the cross-domain data processing process, the validity and safety of the data source are difficult to ensure effectively, leaving the data vulnerable to malicious attacks and tampering. In addition, conventional computing architectures tend to concentrate the data processing task on a single computing unit, resulting in low computing efficiency and failing to meet application scenarios with high real-time requirements.
Disclosure of Invention

In view of the above-mentioned problems, in a first aspect of the present invention, an embodiment of the present invention provides a cross-domain collaborative computing method based on an AI SoC and a secure MCU, including: Receiving a multi-source sensing data stream to be processed in an AI SoC computing unit, wherein the multi-source sensing data stream comprises an image frame sequence unit output by at least one image acquisition device and a point cloud frame sequence unit output by at least one radar detection device, each image frame in the image frame sequence unit is provided with a corresponding first acquisition time stamp, each point cloud frame in the point cloud frame sequence unit is provided with a corresponding second acquisition time stamp, and the first acquisition time stamps and the second acquisition time stamps are distributed in a staggered manner on the time axis; Invoking a first hardware acceleration engine built in the AI SoC computing unit to perform a cross-source data feature preliminary analysis operation on the multi-source perception data stream to obtain an image preliminary analysis feature set corresponding to the image frame sequence unit and a point cloud preliminary analysis feature set corresponding to the point cloud frame sequence unit, and packaging the image preliminary analysis feature set and the point cloud preliminary analysis feature set together into a cross-domain feature data packet to be verified; Transmitting the cross-domain feature data packet to be verified to the secure MCU control unit through a pre-established secure communication channel between the AI SoC computing unit and the secure MCU control unit, triggering the secure MCU control unit to perform source validity verification processing on the cross-domain feature data packet to be verified based on a root key material stored in the secure MCU control unit, and
obtaining a source validity verification passing result; After the safety MCU control unit obtains the source validity verification passing result, invoking a second hardware acceleration engine built in the safety MCU control unit to execute a cross-domain feature collaborative computing operation based on the image preliminary analysis feature set and the point cloud preliminary analysis feature set in the cross-domain feature data packet to be verified, and generating a cross-domain collaborative computing intermediate result containing the association relation between the image features and the point cloud features; And returning the cross-domain collaborative computing intermediate result to the AI SoC computing unit through the secure communication channel, and instructing the AI SoC computing unit to execute subsequent high-level computing tasks on the multi-source sensing data stream according to the cross-domain collaborative computing intermediate result, so as to obtain a target collaborative computing result corresponding to the multi-source sensing data stream. In still another aspect, an embodiment of the present invention further provides a cross-domain collaborati