CN-121994264-A - POI enhanced display method and related equipment
Abstract
An embodiment of the application discloses a POI enhanced display method and related equipment. A vehicle may run a navigation application and open a navigation route in the navigation application. The vehicle can start a camera to capture an image of the area in front of its windshield. Based on the image, the vehicle may then determine that a target scene is present in the driving field of view, the target scene being located on the navigation route. The vehicle projects an identification of the target scene in front of the windshield so that, in the driving field of view, the identification is superimposed on the target scene. The display position of the identification in front of the windshield is determined based on first three-dimensional coordinates, which the vehicle may obtain by measurement with a first sensor, the first sensor comprising one or more of a radar, a depth-of-field sensor, and a laser sensor. The method can thus achieve accurate positioning and enhanced display of the target scene, further improving the user's navigation experience.
Inventors
- QIU JUXIANG
- ZHANG HAIRONG
- LV SHUTIAN
- YE ZONGBO
Assignees
- 华为终端有限公司 (Huawei Device Co., Ltd.)
Dates
- Publication Date
- 20260508
- Application Date
- 20241104
Claims (14)
- 1. A POI enhanced display method, characterized by being applied to a vehicle-mounted device, the method comprising: running a navigation application, and starting a first route in the navigation application; starting a camera, and capturing an image of the area in front of a windshield of the vehicle; and determining, based on the image, that a target scene appears in the driving field of view, wherein the target scene is located on the first route, the vehicle-mounted device projects a first virtual image in front of the windshield of the vehicle, in the driving field of view the first virtual image is superimposed on the target scene, the display position of the first virtual image in front of the windshield is determined based on first three-dimensional coordinates, the first three-dimensional coordinates are determined by the vehicle-mounted device through measurement by a first sensor, and the first sensor comprises one or more of a radar, a depth sensor, and a laser sensor.
- 2. The method of claim 1, wherein the target scene comprises a first entrance of a first building, and wherein, before the vehicle-mounted device projects the first virtual image in front of the windshield of the vehicle, the method further comprises: determining, based on the image captured by the camera, that a plurality of entrances of the first building appear in the driving field of view; and selecting the first entrance from the plurality of entrances.
- 3. The method according to claim 1 or 2, characterized in that the vehicle-mounted device projecting the first virtual image in front of the windshield of the vehicle specifically comprises: the vehicle-mounted device projects the first virtual image onto a virtual image plane in front of the windshield, wherein the virtual image plane is perpendicular to the ground, and the first virtual image on the virtual image plane, the target scene, and the driving viewpoint lie on the same straight line.
- 4. The method according to claim 3, wherein the first three-dimensional coordinates are the three-dimensional coordinates of the target scene in a coordinate system having the driving viewpoint as its origin, and the two-dimensional coordinates of the first virtual image on the virtual image plane are converted from the first three-dimensional coordinates.
- 5. The method according to claim 4, further comprising: determining the ego-vehicle pose by means of a vehicle control sensor, wherein the vehicle control sensor comprises an accelerometer, a gyroscope, a global navigation satellite system, a laser radar, and a camera; and determining the first three-dimensional coordinates according to the ego-vehicle pose and the distance between the target scene and the vehicle as measured by the first sensor.
- 6. The method according to any one of claims 1-5, wherein the determining, based on the image, that a target scene is present in the driving field of view specifically comprises: determining that the target scene appears in the driving field of view based on the image captured by the camera at a time t1; and the vehicle-mounted device projecting the first virtual image in front of the windshield of the vehicle specifically comprises: the vehicle-mounted device projects the first virtual image in front of the windshield of the vehicle at a time t2, wherein the first virtual image at the time t2 is determined by the vehicle-mounted device based on first three-dimensional coordinates determined by the first sensor at the time t2, and the time difference between the time t2 and the time t1 is smaller than a first time threshold.
- 7. The method of any one of claims 1-6, wherein the closer the vehicle-mounted device is to the target scene, the larger the first virtual image superimposed on the target scene; and the farther the vehicle-mounted device is from the target scene, the smaller the first virtual image superimposed on the target scene.
- 8. The method according to any one of claims 1-7, further comprising: determining, based on the image, that the target scene has disappeared from the driving field of view, whereupon the vehicle-mounted device stops projecting the first virtual image in front of the windshield of the vehicle.
- 9. The method of any one of claims 1-8, wherein, before the starting of the camera, the method further comprises: detecting that the distance between the vehicle-mounted device and the target scene is smaller than or equal to a first distance threshold.
- 10. The method of any one of claims 1-9, wherein the first virtual image comprises one or more of a location icon, a name of the target scene, and a contour of the target scene.
- 11. The method of any one of claims 1-10, wherein the target scene comprises one or more of: an entrance of the first building, a road in the first building, and the first building itself.
- 12. An electronic device comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors, the one or more memories to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the method of any of claims 1-11 to be performed.
- 13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when run on an electronic device, causes the method according to any one of claims 1-11 to be performed.
- 14. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
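To make the geometry of claims 3, 4, and 7 concrete, the following sketch (not part of the patent; the function names and axis convention are illustrative assumptions) converts the first three-dimensional coordinates of a target scene, expressed in a coordinate system with the driving viewpoint at the origin, into two-dimensional coordinates on a vertical virtual image plane. Because the projected mark must lie on the straight line joining the viewpoint and the target, similar triangles give the conversion directly:

```python
def project_to_virtual_plane(target_xyz, plane_distance):
    """Map a target point onto a vertical virtual image plane.

    Assumed coordinate frame (not specified in the claims): origin at
    the driving viewpoint, x to the right, y up, z forward along the
    line of sight. The virtual image plane is perpendicular to the
    ground at z = plane_distance. Since viewpoint, projected mark, and
    target are collinear, similar triangles yield the 2D coordinates.
    """
    x, y, z = target_xyz
    if z <= 0:
        raise ValueError("target must be in front of the viewpoint")
    scale = plane_distance / z
    return (x * scale, y * scale)


def mark_size(base_size, plane_distance, target_z):
    """Apparent size of the superimposed mark (cf. claim 7): the same
    similar-triangle ratio makes the mark larger as the target nears."""
    return base_size * plane_distance / target_z
```

For example, a target at (4, 2, 20) metres with a virtual image plane 5 m ahead is marked at (1.0, 0.5) on the plane, and halving the forward distance to the target doubles the mark's apparent size.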
Description
POI enhanced display method and related equipment
Technical Field
The application relates to the technical field of terminals, and in particular to a POI enhanced display method and related equipment.
Background
An augmented reality head-up display (AR HUD) is a technology that projects information such as vehicle status, navigation prompts, and warnings from an assisted-driving system directly into the driver's line of sight in the form of graphics and/or text. Combining the AR HUD with the vehicle navigation system can provide a novel navigation experience: an intuitive, easy-to-understand interface enhances the driver's perception of the surrounding environment, so that the driver can react faster, the risk of distraction caused by checking an instrument panel or central control screen is reduced, and driving safety and convenience are improved. However, because the accuracy of points of interest (point of interest, POI) is limited and deviations arise in the mapping process of a conventional navigation system, the destination is often not positioned accurately enough; that is, the destination located by the navigation system often falls near, rather than on, the actual destination. As a result, the AR HUD marks the navigation destination inaccurately when the vehicle approaches it: the content displayed by the HUD is not actually fitted to the real destination, so the user still has to make a further judgment on his or her own to find the destination entrance.
Disclosure of Invention
The application provides a POI enhanced display method and related equipment. The vehicle can run a navigation application and start a navigation route in the navigation application. The vehicle can start a camera to capture an image of the area in front of the windshield of the vehicle.
Based on the image, the vehicle may then determine that a target scene is present in the driving field of view, the target scene being located on the navigation route. The vehicle projects an identification of the target scene in front of the windshield of the vehicle so that, in the driving field of view, the identification is superimposed on the target scene. The display position of the identification in front of the windshield is determined based on first three-dimensional coordinates, which the vehicle may determine by measurement with a first sensor, the first sensor comprising one or more of a radar, a depth-of-field sensor, and a laser sensor. The method can therefore achieve accurate positioning and enhanced display of the target scene, further improving the user's navigation experience. In a first aspect, the application provides a POI enhanced display method applied to a vehicle-mounted device. The method comprises: running a navigation application and starting a first route in the navigation application; starting a camera and capturing an image of the area in front of a windshield of the vehicle; and determining, based on the image, that a target scene is in the driving field of view, whereupon the vehicle-mounted device projects a first virtual image in front of the windshield of the vehicle, the first virtual image being superimposed on the target scene in the driving field of view, wherein the display position of the first virtual image in front of the windshield is determined based on first three-dimensional coordinates, the first three-dimensional coordinates are measured by the vehicle-mounted device through a first sensor, and the first sensor comprises one or more of a radar, a depth sensor, and a laser sensor.
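The patent does not specify the output format of the first sensor; as a hedged illustration only, assuming a radar- or lidar-style spherical reading (range to the target plus the azimuth and elevation at which it was observed, both hypothetical parameters), such a measurement can be converted to the first three-dimensional coordinates in a frame with the driving viewpoint at the origin:

```python
import math

def measurement_to_coordinates(distance, azimuth_deg, elevation_deg):
    """Convert an assumed first-sensor reading (range plus viewing
    angles) into Cartesian coordinates in a driving-viewpoint frame:
    x right, y up, z forward. Both the spherical measurement model and
    this axis convention are assumptions for illustration.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)  # lateral offset
    y = distance * math.sin(el)                 # height above viewpoint
    z = distance * math.cos(el) * math.cos(az)  # forward distance
    return (x, y, z)
```

For instance, a target measured 20 m straight ahead (zero azimuth and elevation) maps to the coordinates (0, 0, 20), which can then be projected onto the virtual image plane to place the first virtual image.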
During navigation, after determining, based on the image captured by the camera, that the target scene appears in the driving field of view, the vehicle-mounted device (i.e., the head unit) can measure the accurate three-dimensional coordinates (i.e., the first three-dimensional coordinates) of the target scene using first sensors such as a radar, a depth sensor, or a laser sensor. These accurate three-dimensional coordinates can then be used to determine the precise position at which the mark of the target scene (i.e., the first virtual image) is projected in front of the windshield of the vehicle, so that the mark can be accurately superimposed on the target scene, thereby achieving accurate positioning and enhanced display of the target scene and optimizing the user's navigation experience. In combination with the first aspect, in some embodiments, the target scene includes a first entrance of the first building, and the method further includes determining that a plurality of entrances of the first building are present in the driving field of view based on the image captured by