CN-117893978-B - Environment sensing method, device, storage medium and vehicle

CN 117893978 B

Abstract

The disclosure relates to an environment sensing method, device, storage medium, and vehicle. The method comprises: acquiring image information of the current environment; inputting the image information into a pre-trained scene classification model and obtaining the classification result output by the scene classification model, wherein the classification result comprises each of a plurality of scenes and a first confidence corresponding to each scene; determining a perception result of the current environment according to the classification result and the recognition result output by at least one recognition model based on the image information, wherein the recognition result comprises a recognized object and a second confidence corresponding to that object, and each scene corresponds to one pre-trained recognition model; and outputting the perception result if the perception result satisfies a preset condition. The disclosed method can improve the efficiency and accuracy with which a vehicle perceives its surrounding environment.

Inventors

  • LI GUIZENG
  • YANG DONGSHENG
  • WANG HUAN
  • XU CHI

Assignees

  • BYD Company Limited (比亚迪股份有限公司)

Dates

Publication Date
2026-05-05
Application Date
2022-10-09

Claims (10)

  1. An environment sensing method, comprising: acquiring image information of a current environment; inputting the image information into a pre-trained scene classification model and obtaining a classification result output by the scene classification model, wherein the classification result comprises each of a plurality of scenes and a first confidence corresponding to each scene; determining a perception result of the current environment according to the classification result and a recognition result output by at least one recognition model based on the image information, wherein the recognition result comprises a recognized object and a second confidence corresponding to the recognized object, and each scene corresponds to one pre-trained recognition model; and outputting the perception result if the perception result satisfies a preset condition; wherein determining the perception result of the current environment according to the classification result and the recognition result output by the at least one recognition model based on the image information comprises: inputting the image information respectively into the recognition model corresponding to each scene to obtain a recognition result corresponding to each scene, wherein the recognition result comprises a recognized object and a second confidence corresponding to the recognized object; for each recognized object, weighting the second confidence of the recognized object under each scene by the first confidence corresponding to that scene to obtain a weighted second confidence of the recognized object; and determining the second confidence of each recognized object and the weighted second confidence of each recognized object as the perception result.
  2. The method according to claim 1, wherein outputting the perception result if the perception result satisfies a preset condition comprises: if the weighted second confidence is greater than or equal to a second threshold, outputting the recognition result corresponding to the weighted second confidence as the perception result.
  3. The method according to claim 1, wherein, for each recognized object, weighting the second confidence of the recognized object under each scene by the first confidence corresponding to that scene to obtain the weighted second confidence of the recognized object comprises: normalizing the first confidence corresponding to each scene in the classification result to obtain a processed first confidence; and, for each recognized object, weighting the second confidence of the recognized object under each scene by the processed first confidence corresponding to that scene to obtain the weighted second confidence of the recognized object.
  4. The method according to any one of claims 1 to 3, wherein inputting the image information into a pre-trained scene classification model and obtaining the classification result output by the scene classification model comprises: extracting a region of interest from the image information according to a preset rule; and inputting the region of interest into the pre-trained scene classification model and obtaining the classification result output by the scene classification model.
  5. The method according to any one of claims 1 to 3, further comprising: acquiring a plurality of image samples of pre-calibrated recognized objects; performing scene labeling on the plurality of image samples to obtain labeled image samples; and training the scene classification model based on the labeled image samples.
  6. The method according to claim 5, further comprising, before training the scene classification model based on the labeled image samples: determining that the image samples corresponding to each scene among the labeled image samples meet a preset quantity requirement.
  7. The method according to any one of claims 1 to 3, further comprising: training the recognition model corresponding to each scene based on the image samples corresponding to that scene, wherein the plurality of scenes comprise a general scene and a long-tail scene, and the image samples corresponding to the general scene include image samples labeled as the long-tail scene.
  8. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
  9. An environment sensing apparatus, comprising: a memory having a computer program stored thereon; and a processor configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1 to 7.
  10. A vehicle, comprising the environment sensing apparatus according to claim 9.
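Read as an algorithm, claims 1 to 3 describe a scene-weighted fusion of per-scene recognition confidences: normalize the scene (first) confidences, weight each object's per-scene second confidence by the corresponding normalized scene confidence, sum over scenes, and gate the output on the second threshold. A minimal Python sketch; the scene names, object names, confidence values, and threshold below are purely illustrative and are not specified by the patent:

```python
# Sketch of the scene-weighted confidence fusion of claims 1-3.
# Model outputs are stand-in dictionaries; the patent does not specify
# model architectures, data formats, or concrete values.

def normalize(scene_confidences):
    """Normalize the first confidences over all scenes (claim 3)."""
    total = sum(scene_confidences.values())
    return {scene: c / total for scene, c in scene_confidences.items()}

def fuse(scene_confidences, per_scene_results):
    """Weight each object's second confidence by the normalized scene
    confidence and sum over scenes (claim 1)."""
    weights = normalize(scene_confidences)
    fused = {}
    for scene, results in per_scene_results.items():
        for obj, second_conf in results.items():
            fused[obj] = fused.get(obj, 0.0) + weights[scene] * second_conf
    return fused

# Illustrative values only.
scene_confidences = {"general": 0.6, "long_tail": 0.2}
per_scene_results = {
    "general": {"pedestrian": 0.9, "cone": 0.4},
    "long_tail": {"pedestrian": 0.7, "cone": 0.8},
}
weighted = fuse(scene_confidences, per_scene_results)

SECOND_THRESHOLD = 0.5  # the "second threshold" of claim 2; value assumed
perception = {o: c for o, c in weighted.items() if c >= SECOND_THRESHOLD}
```

With the illustrative numbers above, the scene weights normalize to 0.75 and 0.25, so the fused confidence for "pedestrian" is 0.75 × 0.9 + 0.25 × 0.7 = 0.85.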

Description

Environment sensing method, device, storage medium and vehicle

Technical Field

The disclosure relates to the technical field of automatic driving, and in particular to an environment sensing method, an environment sensing device, a storage medium, and a vehicle.

Background

In the related art, vision-based vehicle environment sensing trains a single model on a pre-acquired image data set and uses that model for inference and perception. However, this approach is applicable only to general-purpose scenes; for long-tail scenes with small sample sizes, the environment around the vehicle cannot be accurately perceived.

Disclosure of Invention

To overcome the problems in the related art, the present disclosure provides an environment sensing method, apparatus, storage medium, and vehicle.

According to a first aspect of embodiments of the present disclosure, there is provided an environment sensing method, the method comprising: acquiring image information of a current environment; inputting the image information into a pre-trained scene classification model and obtaining a classification result output by the scene classification model, wherein the classification result comprises each of a plurality of scenes and a first confidence corresponding to each scene; determining a perception result of the current environment according to the classification result and a recognition result output by at least one recognition model based on the image information, wherein the recognition result comprises a recognized object and a second confidence corresponding to the recognized object, and each scene corresponds to one pre-trained recognition model; and outputting the perception result if the perception result satisfies a preset condition.
Optionally, determining the perception result of the current environment according to the classification result and the recognition result output by the at least one recognition model based on the image information comprises: obtaining, according to the classification result, the scene with the largest first confidence among the plurality of scenes as a target scene; and inputting the image information into the recognition model corresponding to the target scene, obtaining the recognition result output by that recognition model, and taking the recognition result as the perception result of the current environment.

Optionally, outputting the perception result if the perception result satisfies a preset condition comprises: if the second confidence in the recognition result is greater than or equal to a first threshold, outputting the recognition result as the perception result.

Optionally, determining the perception result of the current environment according to the classification result and the recognition result output by the at least one recognition model based on the image information comprises: inputting the image information respectively into the recognition model corresponding to each scene to obtain a recognition result corresponding to each scene, wherein the recognition result comprises a recognized object and a second confidence corresponding to the recognized object; for each recognized object, weighting the second confidence of the recognized object under each scene by the first confidence corresponding to that scene to obtain a weighted second confidence of the recognized object; and determining the second confidence of each recognized object and the weighted second confidence of each recognized object as the perception result.
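The first optional embodiment above, which selects the single scene with the largest first confidence and runs only that scene's recognition model, can be sketched as follows. The recognizer interface, scene and object names, confidence values, and threshold are illustrative assumptions, not specified by the patent:

```python
# Sketch of the target-scene embodiment: pick the max-confidence scene,
# run only that scene's recognition model, and gate the output on the
# first threshold. All concrete values below are illustrative.

FIRST_THRESHOLD = 0.5  # the "first threshold" of the disclosure; value assumed

def perceive_target_scene(scene_confidences, recognizers, image):
    # Target scene = scene with the largest first confidence.
    target = max(scene_confidences, key=scene_confidences.get)
    recognized_object, second_confidence = recognizers[target](image)
    # Preset condition: second confidence at or above the first threshold.
    if second_confidence >= FIRST_THRESHOLD:
        return {"scene": target, "object": recognized_object,
                "confidence": second_confidence}
    return None  # perception result withheld

# Stand-in recognition models, one per scene, each returning a
# (recognized object, second confidence) pair.
recognizers = {
    "general": lambda image: ("vehicle", 0.92),
    "long_tail": lambda image: ("road_debris", 0.40),
}
result = perceive_target_scene({"general": 0.8, "long_tail": 0.2},
                               recognizers, image=None)
```

Running a single recognizer keeps inference cost constant regardless of how many scene-specific models exist, at the price of ignoring evidence from lower-confidence scenes; the weighted-fusion embodiment trades the opposite way.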
Optionally, outputting the perception result if the perception result satisfies a preset condition comprises: if the weighted second confidence is greater than or equal to a second threshold, outputting the recognition result corresponding to the weighted second confidence as the perception result.

Optionally, for each recognized object, weighting the second confidence of the recognized object under each scene by the first confidence corresponding to that scene to obtain the weighted second confidence of the recognized object comprises: normalizing the first confidence corresponding to each scene in the classification result to obtain a processed first confidence; and, for each recognized object, weighting the second confidence of the recognized object under each scene by the processed first confidence corresponding to that scene to obtain the weighted second confidence of the recognized object.

Optionally, inputting the image information into a pre-trained scene classification model and obtaining the classification result output by the scene classification model comprises: extracting a region of interest