CN-122019627-A - Vehicle-mounted interaction system and service prediction method based on cross-scene behavior learning
Abstract
The invention relates to the technical field of automotive artificial intelligence and provides a vehicle-mounted interaction system and a service prediction method based on cross-scene behavior learning. The system comprises a data layer, a cloud knowledge base, a learning layer, an interaction layer and an execution layer. The data layer collects the user's in-vehicle behavior data, constructs a four-dimensional behavior modeling engine, and compares the collected behavior data with the behavior data stored in the cloud knowledge base. The learning layer runs a federated learning framework on the vehicle-mounted edge computing unit, generates a situation model of the user's current comprehensive state, and sends it to the interaction layer. The interaction layer establishes a trust-based service execution grading mechanism, generates a prediction result, sends it to the execution layer, and decides the service execution mode according to the prediction accuracy. The execution layer establishes a prediction service matrix and executes the corresponding service action according to the prediction result. The method breaks through single-dimension association, fuses dynamic situations, yields more accurate predictions, and adopts a progressive trust mechanism.
Inventors
- Diao Shihang
- Wang Shanghua
Assignees
- 润芯微科技(江苏)有限公司
Dates
- Publication Date
- 20260512
- Application Date
- 20260203
Claims (10)
- 1. A vehicle-mounted interaction system based on cross-scene behavior learning, comprising a data layer, a cloud knowledge base, a learning layer, an interaction layer and an execution layer, wherein: the data layer is used for acquiring behavior data of a user in the vehicle, the behavior data comprising movement information, voice information, limb information and environment information; constructing a four-dimensional behavior modeling engine covering a space habit dimension, an emotion state dimension, a decision mode dimension and an environment association dimension according to the behavior data of the user; and comparing the acquired behavior data with the behavior data stored in the cloud knowledge base; the learning layer is used for receiving the behavior data collected by the data layer and the output of the four-dimensional behavior modeling engine, running a federated learning framework on the vehicle-mounted edge computing unit, learning the user's behavior data in combination with the output features of the four-dimensional behavior modeling engine, generating a situation model of the user's current comprehensive state and sending the situation model to the interaction layer; the interaction layer establishes a trust-based service execution grading mechanism, predicts the service the user may need according to the received situation model, generates a prediction result, and sends the prediction result to the execution layer; it calculates the system's prediction accuracy from historical interaction data, determines the service execution mode according to the number of user-system interactions and the prediction accuracy, and sends the execution result, the prediction result and the user's feedback on the execution mode back to the learning layer for updating the situation model and recalculating the accuracy; and the execution layer establishes a prediction service matrix, the prediction service matrix storing the mapping between prediction results and corresponding execution actions, receives the prediction result and the execution mode sent by the interaction layer, queries the prediction service matrix according to the prediction result to execute the corresponding service action, and feeds the execution state back to the interaction layer.
- 2. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 1, wherein the method for constructing the four-dimensional behavior modeling engine comprises the following steps: the space habit dimension collects the user's movement information, namely the selection preference for places and routes, analyzes the spatial data with the ST-DBSCAN algorithm, evaluates the entropy value of each place, preferentially selects places with a high entropy value when executing a service action, mines the user's habitual pattern of spatial movement, and outputs the user's space habit feature vector; the emotion state dimension collects the user's voice information, namely the user's speech and the music played, analyzes the pitch, speech rate, keywords and type of music played, learns the user's emotion state with an LSTM emotion transition model, calculates an emotion index in real time, judges whether the user is in a low, happy, dysphoric or angry state, and outputs an emotion state label and the emotion index; the decision mode dimension collects the user's limb information, namely the user's driving actions, and uses a Bayesian behavior network to construct and dynamically update a decision mode model of the user, so as to understand the behavior logic behind the user's in-vehicle function settings and driving operations, and outputs a decision preference probability distribution under different situations; the environment association dimension integrates environmental information including weather, time and road conditions, analyzes the association between environmental factors and user behavior with a graph neural network, and outputs a weight matrix of the influence of environmental factors on user behavior.
- 3. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 2, wherein the spatio-temporal weight function used in the ST-DBSCAN algorithm is: w = e^(-λ(t_c - t_h)), wherein w is the weight of the historical time data in the current analysis, λ is the attenuation coefficient, t_c is the current time, and t_h is the historical time.
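As a minimal sketch, the exponential decay weight of claim 3 can be computed as follows (the exponential form and the function/parameter names are assumptions reconstructed from the symbol descriptions, since the patent's formula image was not preserved):

```python
import math

def time_weight(t_current: float, t_hist: float, decay: float) -> float:
    """Weight of a historical sample taken at t_hist relative to the
    current time t_current (same unit, e.g. hours); `decay` is the
    attenuation coefficient lambda. Recent samples weigh close to 1,
    older samples decay toward 0."""
    return math.exp(-decay * (t_current - t_hist))
```

With decay = 0.5 per hour, a sample from two hours ago is weighted e^-1 ≈ 0.37, so older trajectory points contribute less to the ST-DBSCAN clustering.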
- 4. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 3, wherein the entropy value of a place in the space habit dimension is calculated as: H(l) = -p_l · log2(p_l), wherein H(l) is the entropy value of place l and p_l is the frequency of occurrence of place l in the user's history trace.
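A per-place entropy term of this kind can be sketched as follows (the per-place form H(l) = -p_l log2 p_l is an assumption reconstructed from the symbol descriptions; the original formula was lost in extraction):

```python
import math

def place_entropy(visits: dict) -> dict:
    """Per-place entropy H(l) = -p_l * log2(p_l), where p_l is the
    relative frequency of place l in the user's history trace.
    `visits` maps place name -> visit count (all counts > 0)."""
    total = sum(visits.values())
    return {place: -(count / total) * math.log2(count / total)
            for place, count in visits.items()}
```

Note that under this form, rarely-visited places can carry a higher entropy value than a dominant habitual place, which matches the claim's policy of preferring high-entropy places when executing a service action.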
- 5. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 2, wherein the user's emotion state is learned with the LSTM emotion transition model and the speech emotion index is calculated in real time as: E = w1·P1 + w2·P2 + w3·P3 + w4·P4, wherein E is the emotion index, graded on a 0-10 scale (1-3 low, 4-6 happy, 7-9 dysphoric, 10 angry); P1, P2, P3 and P4 are the emotion probabilities corresponding to pitch, speech rate, keywords and music type respectively; and w1, w2, w3 and w4 are the feature weights of pitch, speech rate, keywords and music type respectively.
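The weighted fusion of the four emotion probabilities can be sketched as follows (the 10x scaling onto the 0-10 grade and the label thresholds are assumptions based on the claim's grading description):

```python
def emotion_index(probs, weights):
    """Weighted fusion E = sum(w_i * P_i) of the emotion probabilities
    from pitch, speech rate, keywords and music type, scaled onto the
    0-10 grade used by the claim (scaling factor is an assumption)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "feature weights should sum to 1"
    return 10.0 * sum(w * p for w, p in zip(weights, probs))

def emotion_label(index):
    """Map the 0-10 emotion index to the claim's state labels."""
    if index <= 3:
        return "low"
    if index <= 6:
        return "happy"
    if index <= 9:
        return "dysphoric"
    return "angry"
```

For example, four equal-weight probabilities of 0.5 fuse to an index of 5.0, which falls in the "happy" band.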
- 6. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 2, wherein the decision mode model of the user is constructed and dynamically updated with the Bayesian behavior network as: P'(B|A) = (N · P(B|A) + I(B)) / (N + 1), wherein P'(B|A) is the updated conditional probability that decision factor A leads to behavior B; P(B|A) is the conditional probability before the update; N is the historical sample size; and I(B) is an indicator function that equals 1 when the new behavior B occurs and 0 otherwise.
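This running-average update can be sketched directly (the blend of prior and indicator weighted by sample size is the form reconstructed from the symbol descriptions):

```python
def update_conditional(p_old: float, n: int, observed: bool) -> float:
    """Incremental update of P(B|A): blend the prior conditional
    probability, weighted by the historical sample size n, with the
    indicator of the new observation (1 if behavior B occurred)."""
    indicator = 1.0 if observed else 0.0
    return (n * p_old + indicator) / (n + 1)
```

Each new in-vehicle observation nudges the conditional probability toward the observed outcome, with the step size shrinking as the history N grows, so the decision mode model stabilizes over time.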
- 7. The vehicle-mounted interaction system based on cross-scene behavior learning of claim 1, wherein the operation of the federated learning framework comprises: local model update: the vehicle-mounted edge computing unit uses its local data set D_k to update the global model parameters w, i.e. w_k = w - η·∇L(w; D_k), wherein η is the learning rate, L is the loss function, w_k are the updated local model parameters, w are the model parameters, and ∇L(w; D_k) is the gradient of the loss function with respect to the parameters; cloud aggregation: the cloud server collects the local model parameters w_k of each participating device and its local sample number n_k and performs a weighted average, i.e. w_new = Σ_k (n_k / N) · w_k, wherein n_k is the local sample number, N is the total number of samples over all participating devices, w_new are the aggregated new global model parameters, and w_k are the local model parameters updated by the k-th vehicle-mounted edge computing unit; differential privacy protection: before the local model update w_k is uploaded to the cloud, Laplace noise is added, i.e. w̃_k = w_k + Lap(Δf / ε), wherein Δf is the sensitivity of the model update vector and ε is the privacy budget controlling the privacy protection strength.
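The three federated-learning steps of claim 7 follow the standard FedAvg-with-Laplace-noise pattern; a minimal pure-Python sketch (the specific function names and the inverse-CDF Laplace sampler are illustrative, not from the patent):

```python
import math
import random

def local_update(w, grad, lr):
    """One local gradient step on an edge unit: w_k = w - eta * grad."""
    return [wj - lr * gj for wj, gj in zip(w, grad)]

def fedavg(local_params, sample_counts):
    """Cloud aggregation: sample-size-weighted average of local models."""
    total = sum(sample_counts)
    dim = len(local_params[0])
    return [sum(n * p[j] for p, n in zip(local_params, sample_counts)) / total
            for j in range(dim)]

def add_laplace_noise(w, sensitivity, epsilon, rng=random):
    """Differential privacy: add Lap(sensitivity/epsilon) noise to each
    coordinate before upload, via inverse-CDF sampling."""
    b = sensitivity / epsilon
    noisy = []
    for wj in w:
        u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
        noisy.append(wj - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)))
    return noisy
```

A smaller privacy budget ε yields a larger noise scale b and hence stronger protection of the uploaded update, at the cost of aggregation accuracy.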
- 8. The vehicle-mounted interaction system based on cross-scene behavior learning according to claim 1, wherein the service the user may need is predicted from the received situation model and the prediction result is generated through multi-dimensional weighted feature fusion as: S_j = Σ_i w_i · f_i, wherein S_j is the predicted score of service type j, w_i is the weight of the i-th dimension, and f_i is the feature output of the i-th dimension; the service with the highest score is the prediction result.
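The score-and-argmax selection can be sketched as follows (the example services and feature values are hypothetical; the weighted-sum scoring is the form reconstructed from the claim):

```python
def predict_service(feature_scores, dim_weights):
    """Score each candidate service as S_j = sum_i w_i * f_{i,j} over the
    four behavior dimensions and return the highest-scoring service.
    `feature_scores` maps service name -> per-dimension feature values."""
    scores = {
        service: sum(w * f for w, f in zip(dim_weights, feats))
        for service, feats in feature_scores.items()
    }
    best = max(scores, key=scores.get)
    return best, scores
```

The dimension weights let, e.g., the space-habit feature dominate for navigation-type services while the emotion feature dominates for media services.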
- 9. The vehicle-mounted interaction system based on cross-scene behavior learning of claim 1, wherein establishing the trust-based service execution grading mechanism and determining the service execution mode according to the number of user-system interactions and the prediction accuracy comprises: determining the trust score T from the historical interaction count N and the prediction accuracy A counted in a sliding time window as: T = α·f(N) + β·A, wherein α and β are weight coefficients with α + β = 1, and f(N) is an increasing function of the interaction count; if the trust score is below a first threshold, only explicit instructions, i.e. instructions actively entered by the user, are executed; if the trust score lies between the first and a second threshold, the prediction result is offered as an option for the user to confirm and execute, and the prediction record and accuracy A are updated according to the user's selection; if the trust score is not less than the second threshold, the prediction result is executed automatically, the execution result is fed back to the user, and the system updates the prediction record and accuracy A according to whether the user cancels; the prediction accuracy A is calculated as the proportion of successful predictions among all prediction attempts within a set statistical period or sample-number window, wherein a successful prediction is defined as the user confirming execution after the system offers an option, or not cancelling after the system executes automatically, and a failed prediction is defined as the user rejecting the option or cancelling the automatic execution.
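The progressive trust mechanism can be sketched as follows (the saturating form of f(N), the coefficient values and the tier thresholds are illustrative assumptions; the patent leaves the concrete values unspecified in this text):

```python
def trust_score(n_interactions, accuracy, alpha=0.4, beta=0.6, n_sat=100):
    """T = alpha * f(N) + beta * A, with alpha + beta = 1.
    f(N) = N / (N + n_sat) is one increasing, saturating choice for the
    interaction-count term; A is the sliding-window prediction accuracy."""
    f_n = n_interactions / (n_interactions + n_sat)
    return alpha * f_n + beta * accuracy

def execution_mode(t, t1=0.4, t2=0.7):
    """Map the trust score to the claim's three execution tiers
    (thresholds t1 < t2 are illustrative)."""
    if t < t1:
        return "explicit_only"   # execute only user-issued instructions
    if t < t2:
        return "confirm_option"  # offer the prediction for user confirmation
    return "auto_execute"        # execute automatically, user may cancel
```

A new user with few interactions thus starts in the explicit-only tier, and the system only earns automatic execution as both the interaction history and the measured accuracy grow.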
- 10. A service prediction method based on cross-scene behavior learning, comprising: collecting the user's in-vehicle behavior data in real time, the behavior data comprising voice information, limb information and environment information, and constructing a four-dimensional behavior modeling engine of space habit dimension, emotion state dimension, decision mode dimension and environment association dimension according to the behavior data of the user; comparing the collected behavior data with the behavior data stored in a cloud knowledge base and, if they are not identical, encrypting and storing the behavior data in the cloud knowledge base and sending it to a learning layer; running a federated learning framework and learning the user behavior data in combination with the four-dimensional behavior modeling engine to generate a situation model representing the user's current comprehensive state; establishing a trust-based service execution grading mechanism, predicting the service the user may need based on the received situation model, generating a prediction result and sending it to an execution layer; determining the service execution mode according to the number of user-system interactions and the prediction accuracy: if the trust score is below a first threshold, executing only explicit instructions; if the trust score lies between the first and a second threshold, offering the prediction result as an option for the user to confirm and execute, and updating the trust score and prediction accuracy; if the trust score is not less than the second threshold, automatically executing the prediction result and feeding the execution result back to the user; and updating the prediction record, prediction accuracy and trust score according to the user's selection among the offered options or cancellation of the automatically executed result.
Description
Vehicle-mounted interaction system and service prediction method based on cross-scene behavior learning

Technical Field
The invention relates to the technical field of automotive artificial intelligence, in particular to a vehicle-mounted interaction system and a service prediction method based on cross-scene behavior learning.

Background
With the intelligent development of the vehicle cabin experience, users' requirements no longer stop at mechanical properties such as the feeling of acceleration; they pursue a more refined, more comfortable intelligent cabin environment and interaction experience. With the trend toward multi-screen and large-screen cabins, proactive interaction helps reduce distraction, improve driving safety and enhance the user experience. However, existing vehicle-mounted interaction systems suffer from a single-dimension learning limitation: they analyze only single-scene preferences and cannot associate behaviors across scenes (e.g. CN 114567832 A), while traditional recommendation systems ignore situational changes (e.g. the effect of mood or weather on decisions), depend on explicit instruction responses, and lack active prediction capability (e.g. US 2023015334 A1).

Disclosure of Invention
The invention aims to provide a vehicle-mounted interaction system and a service prediction method based on cross-scene behavior learning, which break through single-dimension association, fuse dynamic situations and adopt a progressive trust mechanism.
The invention is realized in such a way that a vehicle-mounted interaction system based on cross-scene behavior learning comprises a data layer, a cloud knowledge base, a learning layer, an interaction layer and an execution layer, wherein: the data layer is used for acquiring behavior data of a user in the vehicle, the behavior data comprising movement information, voice information, limb information and environment information; constructing a four-dimensional behavior modeling engine covering a space habit dimension, an emotion state dimension, a decision mode dimension and an environment association dimension according to the behavior data of the user; and comparing the acquired behavior data with the behavior data stored in the cloud knowledge base; the learning layer is used for receiving the behavior data collected by the data layer and the output of the four-dimensional behavior modeling engine, running a federated learning framework on the vehicle-mounted edge computing unit, learning the user's behavior data in combination with the output features of the four-dimensional behavior modeling engine, generating a situation model of the user's current comprehensive state and sending the situation model to the interaction layer; the interaction layer establishes a trust-based service execution grading mechanism, predicts the service the user may need according to the received situation model, generates a prediction result, and sends the prediction result to the execution layer; it calculates the system's prediction accuracy from historical interaction data, determines the service execution mode according to the number of user-system interactions and the prediction accuracy, and sends the execution result, the prediction result and the user's feedback on the execution mode back to the learning layer for updating the situation model and recalculating the accuracy; and
the execution layer establishes a prediction service matrix, the prediction service matrix storing the mapping between prediction results and corresponding execution actions, receives the prediction result and the execution mode sent by the interaction layer, queries the prediction service matrix according to the prediction result to execute the corresponding service action, and feeds the execution state back to the interaction layer. Preferably, the method for constructing the four-dimensional behavior modeling engine comprises the following steps: the space habit dimension collects the user's movement information, namely the selection preference for places and routes, analyzes the spatial data with the ST-DBSCAN algorithm, evaluates the entropy value of each place, preferentially selects places with a high entropy value when executing a service action, mines the user's habitual pattern of spatial movement, and outputs the user's space habit feature vector; the emotion state dimension collects the user's voice information, namely the user's speech and the music played, analyzes the pitch, speech rate, keywords and type of music played, learns the user's emotion state with an LSTM emotion transition model, calculates an emotion index in real time, judges whether the user is in a low, happy, dysphoric or angry state, and outputs an emotion state label and the emotion index; the decision mode dimension is that the limb information of the user, namely t