CN-121982916-A - Control method of vehicle, vehicle and electronic equipment

CN121982916A

Abstract

The embodiments of the application provide a vehicle control method, a vehicle, and an electronic device. The method comprises: in response to a control instruction for the vehicle triggered by a terminal device, acquiring state information of a wireless signal and environment perception information of the environment in which the terminal device is located, wherein the environment perception information represents characteristics of target environmental factors in the environment, and the target environmental factors affect the accuracy of positioning the terminal device; determining the target position of the terminal device based on the state information and the environment perception information; determining a driving route between the position of the vehicle and the target position according to a fusion map of the vehicle, wherein the fusion map is obtained by fusing maps acquired from a plurality of acquisition dimensions; and controlling the vehicle to drive from its position to the target position according to the driving route. The application thereby addresses the technical problem of low flexibility in controlling a vehicle.

Inventors

  • CHENG FULIN
  • FANG MIN
  • ZHOU JIAN
  • LIN HUA

Assignees

  • 奇瑞汽车股份有限公司 (Chery Automobile Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2026-01-26

Claims (10)

  1. A control method of a vehicle, wherein the vehicle and a terminal device are communicatively connected by a wireless signal, the method comprising: in response to a control instruction for the vehicle triggered by the terminal device, acquiring state information of the wireless signal and environment perception information of the environment in which the terminal device is located, wherein the environment perception information represents characteristics of target environmental factors in the environment, and the target environmental factors affect the accuracy of positioning the terminal device; determining a target position of the terminal device based on the state information and the environment perception information; determining a driving route between the position of the vehicle and the target position according to a fusion map of the vehicle, wherein the fusion map is obtained by fusing maps acquired from a plurality of acquisition dimensions; and controlling the vehicle to travel from its position to the target position according to the driving route.
  2. The method of claim 1, wherein determining the target position of the terminal device based on the state information and the environment perception information comprises: determining a first position of the terminal device based on the state information and the fusion map, wherein the first position represents the position of the terminal device as determined from the state information; determining a second position of the terminal device based on the environment perception information, wherein the second position represents the position of the terminal device as determined from the environment perception information; and determining the target position based on the first position and the second position.
  3. The method of claim 2, further comprising: acquiring attitude information of the terminal device, wherein the attitude information represents the motion state of the terminal device; wherein determining the first position of the terminal device based on the state information and the fusion map comprises: locating the wireless signal in the fusion map in combination with the state information and the attitude information to obtain the first position; or wherein the environment perception information comprises environment semantic information and an image, the environment semantic information represents attributes of environmental features, the image is captured of the environmental features, the second position comprises a first sub-position and a second sub-position, and determining the second position of the terminal device based on the environment perception information comprises: determining the first sub-position based on the environment semantic information and the attitude information; and extracting attribute features of target feature points from the image and determining the second sub-position based on the attribute features and the attitude information, wherein the target feature points are feature points in the image whose recognizability exceeds a recognizability threshold, the feature points serving to improve the positioning accuracy of the terminal device.
  4. The method of claim 3, wherein determining the target position based on the first position and the second position comprises: determining observation noise covariances of the first position, the first sub-position, and the second sub-position, respectively, wherein each observation noise covariance represents the degree of difference between that position and the actual position of the terminal device; and determining the target position based on the observation noise covariances corresponding to the first position, the first sub-position, and the second sub-position, respectively.
  5. The method of claim 4, wherein determining the target position based on the observation noise covariances corresponding to the first position, the first sub-position, and the second sub-position, respectively, comprises: in response to exactly one of the observation noise covariances being less than or equal to a covariance threshold, determining the target position based on the position whose observation noise covariance is less than or equal to the covariance threshold; in response to at least two of the observation noise covariances being less than or equal to the covariance threshold, fusing the positions whose observation noise covariances are less than or equal to the covariance threshold to obtain the target position; and in response to none of the observation noise covariances being less than or equal to the covariance threshold, determining the target position based on a selection instruction for a candidate position on the terminal device, wherein the candidate position is a position on the fusion map to which the vehicle can be controlled to travel.
  6. The method of claim 1, wherein controlling the vehicle to travel from the position of the vehicle to the target position according to the driving route comprises: projecting the target position onto the driving route; taking the projected target position on the driving route as an end point and the position of the vehicle as a starting point; and controlling the vehicle to travel from the starting point to the end point; or wherein the vehicle is connected to a cloud and the method further comprises: acquiring the fusion map from the cloud in response to the control instruction; wherein the maps acquired from the plurality of acquisition dimensions comprise at least two of a wireless signal map, a semantic map, and a visual feature map, and the method further comprises: in response to the cloud receiving the wireless signal map, the semantic map, and the visual feature map sent by the vehicle, fusing at least two of the wireless signal map, the semantic map, and the visual feature map at the cloud to obtain the fusion map.
  7. The method of claim 6, wherein, in response to the cloud receiving the wireless signal map, the semantic map, and the visual feature map sent by the vehicle, fusing at least two of the wireless signal map, the semantic map, and the visual feature map at the cloud to obtain the fusion map comprises: in response to the cloud receiving the wireless signal map, the semantic map, and the visual feature map sent by the vehicle, aligning at least two of them at the cloud in the time dimension to obtain a time alignment result; in the spatial dimension, converting the coordinate system of the visual feature map and the coordinate system of the wireless signal map into the coordinate system of the semantic map to obtain a spatial alignment result; and obtaining the fusion map based on the time alignment result and the spatial alignment result.
  8. The method of any one of claims 1 to 7, further comprising: in response to the vehicle travelling, in a manual driving mode, along a driving route in the fusion map, collecting the state information and environment perception data at at least one track point on the driving route; and fusing the collected state information and environment perception data into the fusion map to obtain a target fusion map, wherein the target fusion map replaces the fusion map stored in a cloud connected to the vehicle; or wherein determining the driving route between the position of the vehicle and the target position according to the fusion map of the vehicle comprises: acquiring the target fusion map from the cloud and determining the driving route according to the target fusion map.
  9. A vehicle, comprising: a memory storing an executable program; and a processor for executing the program, wherein the program, when run, performs the method of any one of claims 1 to 8.
  10. An electronic device, comprising: a memory storing an executable program; and a processor for executing the program, wherein the program, when run, performs the method of any one of claims 1 to 8.
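
Claims 2, 4, and 5 gate the candidate position estimates by their observation noise covariances and fuse those that pass the threshold. Below is a minimal Python sketch of that gating logic, under two assumptions not stated in the patent: each covariance is summarised as a single scalar, and accepted estimates are combined by inverse-covariance weighting (the claims do not specify a fusion formula).

```python
import numpy as np

def fuse_positions(estimates, cov_threshold):
    """Gate and fuse candidate position estimates (sketch of claim 5).

    estimates: list of (position, covariance) pairs, where position is a
    2-D ndarray (x, y) and covariance is a scalar summarising the
    expected error of that estimate (an assumption for illustration).

    Returns the fused position, or None when no estimate passes the
    gate; the claim then falls back to a candidate position selected on
    the terminal device.
    """
    accepted = [(p, c) for p, c in estimates if c <= cov_threshold]
    if not accepted:
        return None  # fall back to a selection instruction on the terminal
    if len(accepted) == 1:
        return accepted[0][0]  # exactly one estimate passes: use it directly
    # Inverse-covariance (precision) weighting: lower-noise estimates
    # contribute more to the fused position.
    weights = np.array([1.0 / c for _, c in accepted])
    weights /= weights.sum()
    positions = np.stack([p for p, _ in accepted])
    return weights @ positions
```

With equal covariances the fused position is simply the mean of the accepted estimates; as one covariance shrinks, the fused position moves toward that estimate.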
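
The first branch of claim 6 projects the target position onto the driving route and takes the projection as the end point of the drive. A small sketch of that projection step, assuming the route is represented as a 2-D polyline of waypoints (the patent does not fix a route representation):

```python
def project_onto_route(point, route):
    """Project a target position onto a driving route (sketch of claim 6).

    point: (x, y) target position; route: list of (x, y) waypoints.
    Returns the closest point on any route segment, which then serves
    as the end point of the controlled drive.
    """
    best, best_d2 = None, float("inf")
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len2 = dx * dx + dy * dy
        # Clamp the projection parameter so the result stays on the segment.
        if seg_len2 == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((point[0] - x1) * dx + (point[1] - y1) * dy) / seg_len2))
        px, py = x1 + t * dx, y1 + t * dy
        d2 = (point[0] - px) ** 2 + (point[1] - py) ** 2
        if d2 < best_d2:
            best, best_d2 = (px, py), d2
    return best
```

A target beside the route projects perpendicularly onto the nearest segment; a target beyond the route's end clamps to the final waypoint.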
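
Claim 7's spatial alignment converts the visual feature map and the wireless signal map into the semantic map's coordinate system. A sketch assuming each conversion is a known rigid 2-D transform (rotation plus translation); the patent does not say how the transform between map frames is estimated, so the parameters here are hypothetical calibration values.

```python
import math

def to_semantic_frame(points, theta, tx, ty):
    """Convert map points into the semantic map's coordinate system
    (sketch of claim 7's spatial alignment).

    Applies a rigid 2-D transform: rotate each (x, y) by theta, then
    translate by (tx, ty). theta, tx, ty are assumed known from
    calibration between the two map coordinate systems.
    """
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

Once all maps share the semantic map's frame (and the time alignment result pairs up their samples), fusing them reduces to merging co-located, co-timed observations.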

Description

Control method of vehicle, vehicle and electronic equipment

Technical Field

The embodiments of the application relate to the field of vehicles, and in particular to a vehicle control method, a vehicle, and an electronic device.

Background

At present, an owner usually summons a vehicle remotely either through a fixed calling point or through the Global Positioning System (GPS). In the fixed-calling-point method, the owner must stand at the fixed calling point, and the vehicle travels to it along a fixed route. The GPS-based method relies on good GPS signal quality and is difficult to use in scenes without GPS signals. Controlling the vehicle therefore remains inflexible, and there is currently no good solution to this problem.

Disclosure of Invention

The embodiments of the application provide a vehicle control method, a vehicle, and an electronic device, which at least address the technical problem of low flexibility in controlling a vehicle.
According to one aspect of the embodiments of the application, a control method of a vehicle is provided, wherein the vehicle and a terminal device are communicatively connected by a wireless signal. The method comprises: in response to a control instruction for the vehicle triggered by the terminal device, acquiring state information of the wireless signal and environment perception information of the environment in which the terminal device is located, wherein the environment perception information represents characteristics of target environmental factors in the environment, and the target environmental factors affect the accuracy of positioning the terminal device; determining the target position of the terminal device based on the state information and the environment perception information; determining a driving route between the position of the vehicle and the target position according to a fusion map of the vehicle, wherein the fusion map is obtained by fusing maps acquired from a plurality of acquisition dimensions; and controlling the vehicle to travel from its position to the target position according to the driving route.

Further, determining the target position of the terminal device based on the state information and the environment perception information comprises: determining a first position of the terminal device based on the state information and the fusion map, wherein the first position represents the position of the terminal device as determined from the state information; determining a second position of the terminal device based on the environment perception information, wherein the second position represents the position of the terminal device as determined from the environment perception information; and determining the target position based on the first position and the second position.
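
The control flow described above can be sketched as a thin pipeline. `locate`, `plan_route`, and `drive` are hypothetical callables standing in for the patent's positioning, routing, and motion-control steps; they are not a real vehicle API.

```python
def control_vehicle(state_info, perception_info, vehicle_pos, fusion_map,
                    locate, plan_route, drive):
    """Sketch of the disclosed control flow: position the terminal,
    plan a route on the fusion map, then follow it.

    locate(state_info, perception_info, fusion_map) -> target position
    plan_route(vehicle_pos, target, fusion_map)     -> driving route
    drive(route)                                    -> follow the route
    All three are hypothetical stand-ins for the patent's steps.
    """
    target = locate(state_info, perception_info, fusion_map)
    route = plan_route(vehicle_pos, target, fusion_map)
    drive(route)
    return target, route
```

Keeping the three steps as injected callables mirrors how the claims separate positioning (claims 2 to 5), routing on the fusion map (claims 1 and 8), and motion control (claim 6).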
The method further comprises the steps of obtaining gesture information of the terminal equipment, wherein the gesture information is used for representing the motion state of the terminal equipment, determining a first position where the terminal equipment is located based on the state information and the fusion map, wherein the first position is obtained by combining the state information and the gesture information and locating wireless signals from the fusion map, or the environment sensing information comprises environment semantic information and images, the environment semantic information is used for representing the attribute of the environment characteristic, the images are obtained by collecting the environment characteristic, the second position comprises a first sub-position and a second sub-position, the second position where the terminal equipment is located is determined based on the environment sensing information, the first sub-position is determined based on the environment semantic information and the gesture information, the attribute characteristic of a target characteristic point is extracted from the images, and the second sub-position is determined based on the attribute characteristic and the gesture information, wherein the target characteristic point is a characteristic point in the images, the identification degree of the characteristic point is larger than an identification degree threshold, and the characteristic point is used for improving the locating accuracy of the terminal equipment. Further, determining the target position based on the first position and the second position comprises determining observed noise covariances of the first position, the first sub-position and the second sub-position respectively, wherein the obs