CN-121996977-A - Automatic driving personification evaluation method based on real driver behavior modeling

CN121996977A

Abstract

The invention relates to the technical field of automated driving of automobiles, and in particular to an automated driving personification evaluation method based on real driver behavior modeling. The method comprises the following steps: S1, driving scene extraction; S2, driving style clustering, namely identifying and dividing driver behavior styles through a clustering algorithm applied to the driving behavior data collected from real drivers in each driving scene; S3, selecting a car-following and lane-changing behavior model for each driving scene and style category, calibrating model parameters with real driving data of the corresponding style from real drivers, and establishing a driver reference model covering multiple driving scenes and driving styles; S4, constructing an evaluation index system, namely establishing a key performance index set based on the output characteristics of the calibrated driver reference model and the testing requirements of the automated driving system; S5, realizing personification evaluation, namely acquiring measured automated driving data, inputting the measured data into the driver reference model, obtaining the predicted behavior sequence it outputs, and outputting a comprehensive personification evaluation result.

Inventors

  • WU JIAXUAN
  • LI YI
  • CHEN YUANYUAN
  • LI LIKAI
  • HU FEI
  • WANG HAN

Assignees

  • 中国汽车工程研究院股份有限公司
  • 中汽院智能网联科技有限公司
  • 中汽院(江苏)汽车工程研究院有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-01-30

Claims (8)

  1. An automated driving personification evaluation method based on real driver behavior modeling, characterized by comprising the following steps: S1, driving scene extraction, namely acquiring a multi-source data set from real drivers, wherein the multi-source data set at least comprises an aerial photography data set and a vehicle-mounted naturalistic driving data set, and extracting and classifying driving scenes from the multi-source data set, wherein the driving scenes at least comprise stable following, congestion following and lane changing; S2, driving style clustering, namely identifying and dividing driver behavior styles through a clustering algorithm applied to the collected driving behavior data of real drivers in each driving scene, wherein the driving styles comprise aggressive, conservative and ordinary; S3, selecting a car-following and lane-changing behavior model for each driving scene and style category, calibrating model parameters with naturalistic driving data of real drivers of the corresponding style, and establishing a driver reference model covering multiple driving scenes and driving styles; S4, constructing an evaluation index system, namely constructing a key performance index set from the three dimensions of safety, comfort and efficiency, based on the output characteristics of the calibrated driver reference model and the test requirements of the automated driving system; S5, realizing personification evaluation, namely acquiring measured automated driving data, inputting the measured data into the driver reference model to obtain a predicted behavior sequence, performing a consistency test between the measured data distribution and the predicted data distribution by means of the KS test, constructing a hybrid evaluation criterion in combination with threshold criteria on the key performance indexes, and outputting a comprehensive personification evaluation result.
  2. The automated driving personification evaluation method based on real driver behavior modeling according to claim 1, wherein step S2 comprises the following steps: S21, performing unsupervised clustering on the driving behavior data by using the K-means algorithm; S22, for the stable following and congestion following scenes, selecting the maximum of the inverse TTC, the mean of the THW over all moments of each following event, and the acceleration variance as clustering feature variables; S23, for the lane change scene, selecting the maximum lateral acceleration and the lane change segment length LC as clustering feature variables; S24, performing K-means clustering separately on the data set of each driving scene, evaluating clustering quality by computing the silhouette coefficient for each candidate cluster number K so as to quantify the compactness and separation of the clustering result, selecting the K value with the best silhouette coefficient as the optimal number of driving style categories, and completing the division and labelling of the driving styles, wherein the feature variable data are processed with a threshold capping method before clustering to suppress noise interference.
  3. The automated driving personification evaluation method based on real driver behavior modeling according to claim 2, wherein step S3 comprises the following steps: S31, for the stable following and congestion following scenes, selecting the Intelligent Driver Model (IDM) as the calibration object, wherein the acceleration of the IDM model is computed as a = a_max · [1 − (v/v_0)^δ − (s*(v, Δv)/s)²], with s*(v, Δv) = s_0 + max(0, v·T + v·Δv/(2·√(a_max·b))), wherein a is the acceleration, a_max the maximum acceleration, v the current speed of the vehicle, v_0 the desired speed, δ the acceleration exponent, s the actual gap to the preceding vehicle, Δv the speed difference to the preceding vehicle, s*(v, Δv) the required safety gap, s_0 a preset minimum safety gap, T the desired time headway, and b the comfortable deceleration, wherein the parameters to be calibrated are the six parameters a_max, v_0, δ, s_0, T and b; S312, taking the root mean square percentage error (RMSPE) of vehicle speed and gap as the optimization target of the calibration process, and constructing the following objective function: F = √( (1/N) · Σ_{t=1}^{N} [ ((v̂_t − v_t)/v_t)² + ((x̂_t − x_t)/x_t)² ] ), wherein F is the objective function value (the smaller F, the smaller the error), v̂_t is the model output speed at time t, v_t the measured speed at time t, x̂_t the model output position at time t, x_t the measured position at time t, and N the total number of observations; S32, automatic calibration based on a genetic algorithm, namely, with the goal of minimizing the root mean square percentage error between the measured vehicle trajectory data and the IDM simulation output, performing global optimization of the six IDM parameters to be calibrated by a genetic algorithm to obtain the optimal parameter combination for real driving behavior under this scene.
  4. The automated driving personification evaluation method based on real driver behavior modeling according to claim 3, wherein step S3 further comprises the following step: S33, for the lane change scene, selecting the Gipps lane change model as the calibration object, establishing an objective function, and automatically optimizing the model parameters with an optimization algorithm, so as to obtain model parameters reflecting the lane change behavior characteristics of real drivers.
  5. The automated driving personification evaluation method based on real driver behavior modeling according to claim 4, wherein step S4 comprises the following steps: S41, evaluation dimensions and index type construction, namely constructing key index types from the three dimensions of safety, comfort and efficiency, based on the calibrated output characteristics of the driver reference model and the personification test requirements of the automated driving system, wherein the index types of the safety dimension comprise collision likelihood, collision avoidance characteristics and lane keeping characteristics; the index types of the comfort dimension comprise longitudinal stability, lateral stability, stopping smoothness and starting smoothness; and the index types of the efficiency dimension comprise speed and lane change efficiency; S42, definition of specific quantitative indexes and formation of the system, namely refining each index type into key performance evaluation indexes that can be directly measured or calculated, so as to form the personification evaluation index system, wherein the key performance evaluation indexes at least comprise time to collision (TTC), time headway (THW), deceleration rate to avoid crash (DRAC), braking distance (BD), maximum lateral deviation, longitudinal acceleration, longitudinal acceleration change rate, lateral acceleration, lateral acceleration change rate, braking deceleration, starting acceleration, vehicle speed, lane change duration and merging duration.
  6. The automated driving personification evaluation method based on real driver behavior modeling according to claim 5, wherein step S5 comprises the following steps: S51, inputting the automated driving test data acquired in real time into the driver reference model established in S3, and obtaining a behavior prediction sequence under the same initial conditions and scene; S52, performing quantitative consistency analysis between the measured automated driving data sequence and the model-predicted data sequence by the KS test, wherein S52 comprises the following steps: S521, computing the empirical cumulative distribution functions F_n(x) and G_m(x) of the measured data sequence and the model-predicted data sequence, respectively; S522, computing the KS statistic D, defined as the maximum absolute vertical distance between the two empirical cumulative distribution functions: D = max_x | F_n(x) − G_m(x) |, wherein F_n(x) and G_m(x) are the empirical cumulative distribution functions of the two samples, namely the measured automated driving data sequence and the model-predicted data sequence, n and m are the sizes of the two sample sets, and n = m; S523, computing the p value of the hypothesis test from the KS statistic D and the sample sizes, and comparing the p value with a set significance level α.
  7. The automated driving personification evaluation method based on real driver behavior modeling according to claim 6, wherein step S5 further comprises the following steps: S53, fusing the statistical result of the KS test with the key performance index threshold criteria to construct a hybrid evaluation criterion, computing a comprehensive personification score and grading it, wherein S53 comprises the following steps: S531, computing a distribution consistency score based on the p value of the KS test; S532, based on the key performance index set constructed in S4, computing the degree to which each measured index value exceeds its corresponding threshold, and computing a key index score from this exceedance; S533, performing a weighted fusion of the distribution consistency score and the key index score to obtain the comprehensive personification score, and outputting the corresponding personification evaluation grade according to the numerical interval in which the comprehensive personification score falls.
  8. The automated driving personification evaluation method based on real driver behavior modeling according to claim 1, wherein in S1 the rules for extracting driving scenes comprise: (1) the stable following scene, whose extraction conditions are that the gap between the front and rear vehicles is in the range of 7 m to 150 m, the lateral distance difference is less than 0.85 m, the time headway is less than 3 s, and the following behavior lasts for more than 8 s; (2) the congestion following scene, whose extraction conditions are that the gap between the front and rear vehicles is less than 7 m, the lateral distance difference is less than 0.85 m, and the following behavior lasts for more than 8 s; (3) the lane change scene, extracted on the condition that the vehicle performs a lane change, with the rate of change of the lateral distance of the vehicle relative to the lane centre line as the criterion: the lane change start point is where the lateral distance change rate exceeds 0.05 and a continuous change begins, and the lane change end point is where the lateral distance change rate falls below 0.05 and a steady state is restored.
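The style-clustering procedure of claim 2 (K-means on per-scene features, silhouette-based selection of the cluster number K, percentile capping of outliers) can be illustrated with the following sketch. This is an illustrative reading of the claim, not the patent's implementation; all function names (`cap_outliers`, `choose_style_count`, etc.) and the 5th/95th-percentile capping bounds are assumptions of this sketch.

```python
import numpy as np

def cap_outliers(X, lo=5, hi=95):
    """Threshold-cap each feature at assumed percentile bounds to suppress noise (S24)."""
    low = np.percentile(X, lo, axis=0)
    high = np.percentile(X, hi, axis=0)
    return np.clip(X, low, high)

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means (S21): assign points to nearest center, then recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep old center if cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient; higher means tighter, better-separated clusters."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s.mean()

def choose_style_count(X, k_range=(2, 3, 4, 5)):
    """Pick the K with the best silhouette score as the number of driving styles (S24)."""
    X = cap_outliers(X)
    scores = {k: silhouette(X, kmeans(X, k)[0]) for k in k_range}
    return max(scores, key=scores.get), scores
```

In practice the feature matrix `X` would hold, per following event, the claim's features (max inverse TTC, mean THW, acceleration variance), or (max lateral acceleration, lane change segment length) for lane change events.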
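The IDM car-following model and the RMSPE objective from claim 3 can be sketched as follows. The acceleration formula is the standard IDM; the `max(0, ...)` clamp on the desired gap is a common variant and an assumption here, as is combining speed and position errors with equal weight in the objective. The genetic-algorithm search of claim 3 would minimize `rmspe` over the six-parameter vector; the GA itself is omitted.

```python
import math

def idm_acceleration(v, s, dv, params):
    """IDM acceleration. params = (a_max, v0, delta, s0, T, b):
    max acceleration, desired speed, acceleration exponent,
    minimum safety gap, desired time headway, comfortable deceleration.
    v: ego speed, s: gap to the preceding vehicle, dv: speed difference v - v_lead."""
    a_max, v0, delta, s0, T, b = params
    # Desired (safety) gap s*(v, dv); clamped at zero in this sketch.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def rmspe(model_series, true_series):
    """Root mean square percentage error between model output and measurement,
    the calibration objective of claim 3 (applied to both speed and position)."""
    return math.sqrt(
        sum(((m - t) / t) ** 2 for m, t in zip(model_series, true_series))
        / len(true_series)
    )
```

For example, a stationary ego vehicle (`v = 0`) far behind its leader accelerates at nearly `a_max`, since neither the free-flow term nor the gap term constrains it.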
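Several of the safety indexes named in claim 5 have standard definitions that can be computed directly from a gap and two speeds. The following sketch uses those textbook definitions (TTC, THW, DRAC); the function names and the convention of returning infinity when no conflict exists are choices of this sketch, not the patent's.

```python
def ttc(gap, v_ego, v_lead):
    """Time to collision: gap / closing speed; infinite when not closing."""
    closing = v_ego - v_lead
    return gap / closing if closing > 0 else float("inf")

def thw(gap, v_ego):
    """Time headway: gap / ego speed."""
    return gap / v_ego if v_ego > 0 else float("inf")

def drac(gap, v_ego, v_lead):
    """Deceleration rate to avoid crash: (speed difference)^2 / (2 * gap)."""
    dv = v_ego - v_lead
    return dv * dv / (2.0 * gap) if dv > 0 else 0.0
```

For instance, closing on a leader 20 m ahead at 5 m/s gives a TTC of 4 s and a required avoidance deceleration of 0.625 m/s².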
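The two-sample KS consistency check of claim 6 (S521–S523) can be sketched without external dependencies: build both empirical CDFs over the pooled sample, take the maximum vertical distance D, and approximate the p value with the asymptotic Kolmogorov series. The series truncation and the small-λ guard are assumptions of this sketch; a production implementation would typically call a statistics library instead.

```python
import bisect
import math

def ks_statistic(x, y):
    """Two-sample KS statistic D = max_x |F_n(x) - G_m(x)| (S522)."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in sorted(set(xs + ys)):          # evaluate at every pooled sample point
        fn = bisect.bisect_right(xs, v) / len(xs)   # empirical CDF of measured data
        gm = bisect.bisect_right(ys, v) / len(ys)   # empirical CDF of predictions
        d = max(d, abs(fn - gm))
    return d

def ks_pvalue(d, n, m, terms=100):
    """Asymptotic two-sided p value from D and the sample sizes (S523)."""
    lam = d * math.sqrt(n * m / (n + m))
    if lam < 1e-3:                          # series is degenerate near zero
        return 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, terms + 1))
    return max(0.0, min(1.0, p))
```

Identical sequences give D = 0 and p = 1; fully disjoint sequences give D = 1 and a small p, which would be compared against the significance level α.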
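Claim 7 fuses a distribution consistency score with threshold-based key-index scores, but the text above does not reproduce the patent's scoring formulas. The sketch below is therefore only one plausible reading: a linear penalty for exceeding a threshold and a fixed-weight average, with the weight `w`, the linear penalty shape, and all function names being assumptions of this sketch.

```python
def exceedance_score(value, threshold):
    """Hypothetical key-index score (S532): full score within the threshold,
    linearly decreasing with the relative exceedance beyond it.
    Assumes larger measured values are worse (e.g. lateral jerk)."""
    excess = (value - threshold) / threshold
    return max(0.0, 1.0 - max(0.0, excess))

def fuse_scores(consistency_score, kpi_scores, w=0.5):
    """Hypothetical weighted fusion (S533) of the KS-based consistency score
    and the mean key-index score; the weight w is illustrative only."""
    kpi = sum(kpi_scores) / len(kpi_scores)
    return w * consistency_score + (1.0 - w) * kpi
```

The fused score would then be mapped to a personification grade by numeric interval, e.g. bands such as excellent/good/poor, as S533 describes.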
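The scene-extraction rules of claim 8 amount to a small decision function over a following event's gap, lateral offset, time headway, and duration. The thresholds below are copied from the claim text; the function name and the return-label strings are assumptions of this sketch, and the lane change start/end detection (rate-of-change crossing 0.05) is omitted.

```python
def classify_following(gap_m, lateral_offset_m, thw_s, duration_s):
    """Label a following event per claim 8, or return None if no scene matches."""
    # Both following scenes require lateral offset < 0.85 m and duration > 8 s.
    if duration_s <= 8.0 or lateral_offset_m >= 0.85:
        return None
    # Stable following: gap within 7-150 m and time headway < 3 s.
    if 7.0 <= gap_m <= 150.0 and thw_s < 3.0:
        return "stable_following"
    # Congestion following: gap below 7 m.
    if gap_m < 7.0:
        return "congestion_following"
    return None
```

Events that satisfy neither branch (e.g. a large gap with THW above 3 s) are simply not extracted, matching the claim's extraction-condition framing.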

Description

Automatic driving personification evaluation method based on real driver behavior modeling

Technical Field

The invention relates to the technical field of automated driving of automobiles, and in particular to an automated driving personification evaluation method based on real driver behavior modeling.

Background

The intelligent connected vehicle is a new generation of automobile that is equipped with advanced devices such as on-board sensors, controllers and actuators, integrates modern communication and network technologies, realizes intelligent information exchange and sharing between the vehicle and people, other vehicles, roads, the cloud and other entities, possesses environment perception, intelligent decision-making and cooperative control functions, can comprehensively achieve safe, energy-saving, environmentally friendly and comfortable driving, and gradually replaces human operation. Intelligent connected vehicles are an important direction of transformation and upgrading in China's automobile industry, and the key milestone of mass production and road deployment is imminent. The automated driving test and evaluation system, as an important link in assessing the safety performance of automated driving systems, helps researchers understand system problems and determine optimization directions, and therefore plays a vital role in the development of intelligent connected vehicles. With the rapid development of automated driving technology, scientific and objective evaluation of the human-likeness of driving behavior has become a key technical requirement.
At present, the personification evaluation methods commonly used in industry mainly rely on automated driving data obtained from simulation or real-vehicle tests, quantitatively score key performance indexes against preset empirical thresholds or existing research results, and construct a comprehensive scoring system through weight allocation. Although quantitative evaluation is achieved to a certain extent, the evaluation standards are mostly static empirical values that do not reflect the dynamic characteristics of real drivers, so the consistency between the evaluation results and actual human driving experience is insufficient, and it is difficult to evaluate the personification level of an automated driving system comprehensively and accurately. In the prior art, some research has attempted to introduce driver behavior models, but most models are still constructed from theoretical assumptions or simplified scenes, and real driving data from multiple scenes are not fully incorporated, so deviations exist between model output and actual driving behavior, limiting evaluation accuracy and reliability. Therefore, an automated driving personification evaluation method that reflects driving behavior characteristics more truthfully, adapts to different scenes, and offers a highly interpretable evaluation process is needed to remedy the shortcomings of the prior art in behavior modeling authenticity and evaluation consistency.
Disclosure of the Invention

The technical problem solved by the invention is to provide an automated driving personification evaluation method based on real driver behavior modeling, which, by taking a multi-scene driver model and measured ground-truth data as the direct evaluation reference, remarkably improves the accuracy and truthfulness of the evaluation and effectively overcomes the limitations of traditional methods in the dynamics of behavior representation and the objectivity of the reference. The basic scheme provided by the invention is an automated driving personification evaluation method based on real driver behavior modeling, comprising the following steps: S1, driving scene extraction, namely acquiring a multi-source data set from real drivers, wherein the multi-source data set at least comprises an aerial photography data set and a vehicle-mounted naturalistic driving data set, and extracting and classifying driving scenes from the multi-source data set, wherein the driving scenes at least comprise stable following, congestion following and lane changing; S2, driving style clustering, namely identifying and dividing driver behavior styles through a clustering algorithm applied to the collected driving behavior data of real drivers in each driving scene, wherein the driving styles comprise aggressive, conservative and ordinary; S3, selecting a car-following and lane-changing behavior model for each driving scene and style category, calibrating model parameters with naturalistic driving data of real drivers of the corresponding style, and establishing a driver reference model covering multiple driving scenes and driving styles; S4, constructing an evaluation index system, namely constructing a key performance index set from the three dimensions of saf