CN-121973796-A - Intelligent electric vehicle mass estimation method and system considering gradient fusion estimation


Abstract

The invention discloses an intelligent electric vehicle mass estimation method and system considering gradient fusion estimation, belonging to the technical field of intelligent electric commercial vehicle mass estimation. The method comprises the steps of collecting lidar point clouds, clock source signals, and on-board CAN signal data; preprocessing the signals; deciding the estimation mode; rejecting point cloud data and extracting features; performing gradient fusion estimation based on the point clouds and vehicle dynamics; and estimating the vehicle mass based on dynamics. By combining lidar with vehicle dynamics, the invention achieves accurate estimation of the road gradient, a parameter to which mass estimation is sensitive, provides reliable input data for vehicle mass estimation, and effectively improves the stability and accuracy of intelligent electric vehicle mass estimation.

Inventors

  • ZHAO SHIJIE
  • ZHANG JUNZHI
  • HE CHENGKUN

Assignees

  • Tsinghua University (清华大学)

Dates

Publication Date
2026-05-05
Application Date
2026-01-28

Claims (10)

  1. An intelligent electric vehicle mass estimation method considering gradient fusion estimation, characterized by comprising the following steps: acquiring multi-source spatiotemporally synchronized vehicle data, wherein the multi-source spatiotemporally synchronized vehicle data comprise lidar point clouds, clock source signals, and on-board CAN signal data; preprocessing the lidar point clouds and on-board CAN signal data, performing an estimation mode decision, and outputting the filtered data and the estimation mode, wherein a mass estimation mode is triggered if a preset condition is met, and a gradient estimation mode is triggered otherwise; performing point cloud data rejection and feature extraction on the filtered point clouds, and outputting a feature point set screened by incidence angle and occlusion; performing gradient fusion estimation based on the feature point set and the filtered dynamics signals, and outputting a fused gradient obtained by weighting the point-cloud gradient estimate and the dynamics gradient estimate; and, in the mass estimation mode, outputting the vehicle mass estimated by an iterative least squares method based on the fused gradient and the filtered dynamics signals.
  2. The method of claim 1, wherein acquiring multi-source spatiotemporally synchronized vehicle data comprises: acquiring vehicle speed, longitudinal acceleration, drive motor torque, braking torque, steering wheel angle, gear signals, ABS function enable signals, and TCS function enable signals in real time through the on-board CAN bus; acquiring point cloud data through the on-board lidar, the lidar being mounted at a preset position on the vehicle roof without inclination in any direction; and collecting clock source signals from the inertial navigation system via the GPRMC protocol, the whole system recording timestamps with the clock source signals as the time reference.
  3. The method of claim 2, further comprising, after acquiring the multi-source spatiotemporally synchronized vehicle data: transmitting the lidar point clouds to a domain controller through a network port; transmitting the clock source signals of the inertial navigation system to the domain controller through a serial port; and transmitting the on-board CAN bus signals to the domain controller through a CAN interface, the domain controller assigning each point cloud frame and each CAN bus signal a timestamp according to the UTC time acquired from the inertial navigation system.
  4. The method of claim 3, wherein preprocessing the lidar point clouds and on-board CAN signal data comprises: filtering the vehicle speed with a second-order Butterworth low-pass filter to obtain the filtered vehicle speed; filtering the longitudinal acceleration with a Kalman filter to obtain the filtered longitudinal acceleration; filtering the drive motor torque with a sliding-window weighted average filter to obtain the filtered drive motor torque; filtering the braking torque with a second-order filter to obtain the filtered braking torque; and filtering the point cloud data with voxel grid filtering to obtain the filtered point clouds.
  5. The method of claim 4, wherein performing the estimation mode decision comprises: after signal preprocessing is completed, selecting, by means of the timestamp information, the filtered longitudinal acceleration, vehicle speed, gear, steering wheel angle, and ABS/TCS enable signals at the same moment, and performing the trigger decision for mass estimation and gradient estimation; if the gear is a forward gear, the longitudinal acceleration meets a preset condition, the vehicle speed is greater than a preset value, the steering wheel angle meets a preset condition, and the ABS and TCS are in a disabled state, triggering the mass estimation mode, counting with a counter at a preset time interval, executing mass estimation within a preset counting range, and resetting the counter after one mass estimation is completed; if no valid estimate can be completed after the vehicle is powered on, using the unladen vehicle mass as the mass estimation result; and in all other cases, keeping the current mass estimation result and triggering the gradient estimation mode until the mass estimation mode is triggered again.
  6. The method of claim 5, wherein performing point cloud data rejection and feature extraction on the filtered point clouds and outputting the feature point set screened by incidence angle and occlusion comprises: screening the filtered point cloud data by removing points whose beam incidence angle exceeds a preset angle range and removing occluded points that meet preset occlusion criteria; for each point in the same lidar scan row, selecting an equal preset number of adjacent points on each side to form a set, and calculating the local curvature of the point from that set; and dividing the horizontal field of view into several regions, sorting the points in each region by local curvature, and selecting points whose curvature values fall within preset ranking ranges to form an edge feature point set and a planar feature point set, respectively.
  7. The method of claim 6, wherein performing gradient fusion estimation based on the feature point set and the filtered dynamics signals and outputting the fused gradient comprises: constructing a pose optimization equation based on the extracted feature point sets, performing iterative optimization by computing the distance residuals from the feature points of the current frame to the geometric elements formed by the nearest-neighbor feature points of the same type in the previous frame, extracting the pitch angle from the resulting transformation matrix, and obtaining the point-cloud gradient estimate through low-pass filtering with a preset cut-off frequency; establishing a longitudinal dynamics equation based on the filtered dynamics signals and performing dynamics gradient estimation with an iterative least squares method having a forgetting factor within a preset range, to obtain the dynamics gradient estimate; and linearly weighting the point-cloud gradient estimate and the dynamics gradient estimate with preset weights, and outputting the fused gradient result.
  8. The method of claim 7, wherein, in the mass estimation mode, outputting the vehicle mass estimated by the iterative least squares method based on the fused gradient and the filtered dynamics signals comprises: constructing, in the mass estimation mode, a longitudinal-dynamics mass estimation equation comprising driving force, braking force, rolling resistance, air resistance, and the gravity component, based on the output fused gradient result and the filtered dynamics signals; defining a driving torque composite term, a braking torque composite term, and a resistance composite term as intermediate variables of the longitudinal-dynamics mass estimation equation; and estimating the vehicle mass online with the iterative least squares method based on the intermediate variables and outputting the mass estimate, wherein if no valid estimate can be completed after the vehicle is powered on, the unladen vehicle mass is used as the default estimation result.
  9. An intelligent electric vehicle mass estimation system considering gradient fusion estimation, comprising: a multi-source data acquisition module for acquiring multi-source spatiotemporally synchronized vehicle data, the multi-source spatiotemporally synchronized vehicle data comprising lidar point clouds, clock source signals, and on-board CAN signal data; a preprocessing and mode decision module for preprocessing the lidar point clouds and on-board CAN signal data, performing the estimation mode decision, and outputting the filtered data and the estimation mode, wherein a mass estimation mode is triggered if a preset condition is met, and a gradient estimation mode is triggered otherwise; a feature extraction module for performing point cloud data rejection and feature extraction on the filtered point clouds and outputting a feature point set screened by incidence angle and occlusion; a gradient fusion estimation module for performing gradient fusion estimation based on the feature point set and the filtered dynamics signals and outputting a fused gradient obtained by weighting the point-cloud gradient estimate and the dynamics gradient estimate; and a vehicle mass estimation module for outputting, in the mass estimation mode, the vehicle mass estimated by the iterative least squares method based on the fused gradient and the filtered dynamics signals.
  10. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the intelligent electric vehicle mass estimation method considering gradient fusion estimation as claimed in any of claims 1-8.
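The preprocessing chain of claim 4 can be illustrated with a minimal sketch. The sample rate, cut-off frequency, and window weights below are hypothetical tuning choices, not values from the patent; the second-order Butterworth biquad is derived with the standard bilinear transform.

```python
import math

def butter2_lowpass(fc, fs):
    """Second-order Butterworth low-pass coefficients via bilinear transform.

    Returns (b, a) for the difference equation
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
    """
    wc = math.tan(math.pi * fc / fs)          # pre-warped analog cut-off
    k1 = math.sqrt(2.0) * wc
    k2 = wc * wc
    a0 = 1.0 + k1 + k2
    b = (k2 / a0, 2.0 * k2 / a0, k2 / a0)
    a = (2.0 * (k2 - 1.0) / a0, (1.0 - k1 + k2) / a0)
    return b, a

def filter_signal(x, b, a):
    """Apply the biquad to a sequence (direct form I, zero initial state)."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

def sliding_weighted_average(x, weights=(0.5, 0.3, 0.2)):
    """Sliding-window weighted average for the drive motor torque;
    the newest sample gets the largest (hypothetical) weight."""
    out, n = [], len(weights)
    for i in range(len(x)):
        window = x[max(0, i - n + 1):i + 1][::-1]   # newest first
        w = weights[:len(window)]
        out.append(sum(wi * xi for wi, xi in zip(w, window)) / sum(w))
    return out
```

Fed a constant vehicle-speed signal, the low-pass output converges to the same constant, confirming unity DC gain of the coefficients.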
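The trigger logic of claim 5 reduces to a conjunction of gating conditions. The sketch below uses hypothetical thresholds (the patent only says "preset conditions") and a simplified gear encoding.

```python
def decide_mode(gear, accel, speed, steer_angle, abs_on, tcs_on,
                accel_min=0.05, speed_min=2.0, steer_max=10.0):
    """Return 'mass' when all trigger conditions hold, else 'gradient'.

    gear: 'D' for a forward gear (simplified); accel in m/s^2;
    speed in m/s; steer_angle in degrees; abs_on/tcs_on: intervention flags.
    All thresholds are hypothetical placeholders for the patent's
    preset conditions.
    """
    ok = (gear == 'D'
          and accel > accel_min          # longitudinal excitation present
          and speed > speed_min          # vehicle moving fast enough
          and abs(steer_angle) < steer_max   # near straight-line driving
          and not abs_on                 # ABS not intervening
          and not tcs_on)                # TCS not intervening
    return 'mass' if ok else 'gradient'
```

Any failed condition falls through to the gradient estimation mode, matching the claim's "in all other cases" branch.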
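The local-curvature criterion of claim 6 resembles the LOAM-style smoothness measure: sum the offsets from a point to its neighbors on both sides; the norm is near zero on planar segments and large at edges. The neighbor count k and the 2-D synthetic scan row below are hypothetical simplifications of the patent's per-row 3-D computation.

```python
import math

def local_curvature(points, k=3):
    """Curvature proxy per point of one scan row.

    points: list of (x, y) tuples along the row. For each point with a full
    neighborhood, sums (p_j - p_i) over the k neighbors on each side; the
    vector norm of that sum is the curvature value. Returns {index: value}.
    """
    curv = {}
    for i in range(k, len(points) - k):
        sx = sum(points[j][0] - points[i][0]
                 for j in range(i - k, i + k + 1) if j != i)
        sy = sum(points[j][1] - points[i][1]
                 for j in range(i - k, i + k + 1) if j != i)
        curv[i] = math.hypot(sx, sy)
    return curv

def split_features(curv, n_edge=1):
    """Rank by curvature: highest values become edge-feature candidates,
    the rest planar-feature candidates (per-region quotas omitted)."""
    ranked = sorted(curv, key=curv.get, reverse=True)
    return ranked[:n_edge], ranked[n_edge:]
```

On a synthetic row that runs straight and then bends 45 degrees, the curvature peaks exactly at the bend, which is therefore selected as the edge feature.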

Description

Intelligent electric vehicle mass estimation method and system considering gradient fusion estimation

Technical Field

The invention relates to the technical field of intelligent electric commercial vehicle mass estimation, and in particular to an intelligent electric vehicle mass estimation method and system considering gradient fusion estimation.

Background

Real-time estimation of vehicle dynamics model parameters is an important prerequisite for achieving high-precision vehicle dynamics control. As a core parameter characterizing the inertial properties of the vehicle, the estimation accuracy of the vehicle mass directly affects the performance of longitudinal, lateral, and vertical vehicle control. Accurate, real-time vehicle mass estimates can effectively improve the performance of the active safety and energy management systems of an electric vehicle, support optimal energy consumption strategies based on real-time operating-condition reconstruction, and provide support for advanced functions such as redundant fault-tolerant control and predictive driving. At present, the vehicle longitudinal dynamics model is generally converted into a form suited to the least squares method for estimating the vehicle mass, but in practice the road gradient, a sensitive parameter, has a significant influence on mass estimation. Estimating the road gradient with an inertial measurement unit is easily disturbed by the vehicle's longitudinal acceleration, suffers from accumulated error, and loses accuracy sharply in scenarios such as tunnels. Intelligent electric vehicles are equipped with rich perception sensors; fusing lidar with vehicle dynamics can provide an accurate and reliable gradient estimate for mass estimation, thereby improving the stability and accuracy of intelligent electric vehicle mass estimation.
Disclosure of Invention

The invention mainly aims to provide an intelligent electric vehicle mass estimation method considering gradient fusion estimation, which achieves accurate estimation of the road gradient, a parameter to which mass estimation is sensitive, by fusing lidar and vehicle dynamics, thereby providing reliable input data for vehicle mass estimation and effectively improving the stability and accuracy of intelligent electric vehicle mass estimation. Another object of the present invention is to provide an intelligent electric vehicle mass estimation system considering gradient fusion estimation. A third object of the present invention is to propose a non-transitory computer readable storage medium. To achieve the above objectives, an embodiment of the first aspect of the present invention provides an intelligent electric vehicle mass estimation method considering gradient fusion estimation, including: acquiring multi-source spatiotemporally synchronized vehicle data, wherein the data comprise lidar point clouds, clock source signals, and on-board CAN signal data; preprocessing the lidar point clouds and on-board CAN signal data, performing an estimation mode decision, and outputting the filtered data and the estimation mode, wherein a mass estimation mode is triggered if a preset condition is met, and a gradient estimation mode is triggered otherwise; performing point cloud data rejection and feature extraction on the filtered point clouds, and outputting a feature point set screened by incidence angle and occlusion; performing gradient fusion estimation based on the feature point set and the filtered dynamics signals, and outputting a fused gradient obtained by weighting the point-cloud gradient estimate and the dynamics gradient estimate; and, in the mass estimation mode, outputting the vehicle mass estimated by an iterative least squares method based on the fused gradient and the filtered dynamics signals. In one embodiment of the invention, acquiring multi-source spatiotemporally synchronized vehicle data includes: acquiring vehicle speed, longitudinal acceleration, drive motor torque, braking torque, steering wheel angle, gear signals, ABS function enable signals, and TCS function enable signals in real time through the on-board CAN bus; acquiring point cloud data through the on-board lidar, the lidar being mounted at a preset position on the vehicle roof without inclination in any direction; and collecting clock source signals from the inertial navigation system via the GPRMC protocol, the whole system recording timestamps with the clock source signals as the time reference. In one embodiment of the invention, after acquiring the multi-source spatiotemporally synchronized vehicle data, the method further comprises: transmitting the lidar point clouds to a domain control