CN-121999336-A - Visual-touch fusion sensing and obstacle avoidance method and system for transparent obstacle of unmanned vehicle
Abstract
The invention belongs to the technical field of unmanned vehicle environment sensing and obstacle avoidance, and discloses a visual-touch fusion sensing and obstacle avoidance method for transparent obstacles of an unmanned vehicle. The method detects the edges of transparent objects in images acquired by a vehicle-mounted look-around camera through an improved Transformer semantic segmentation network, and outputs preliminary position coordinates and credibility scores. It further constructs a haptic verification and enhancement module by deploying flexible multi-mode haptic sensors, integrating triboelectric sensing and visual-tactile sensing, at the edges of the unmanned vehicle's bumper and wheel hubs, which generate proximity haptic features in the non-contact stage. When visual credibility is high, visual data dominates and haptic data provides auxiliary correction; when visual credibility is low, a haptic-signal-dominant mode is triggered, and an obstacle avoidance path is generated and executed. The invention markedly improves the detection accuracy of the unmanned vehicle for transparent obstacles in demanding scenes such as urban roads, indoor venues, rain and night, reduces collision risk, and meets the requirements of unmanned vehicles under complex driving conditions.
Inventors
- XIONG TIFAN
- CHEN SIYUAN
- WANG CHAO
- XIAO WEIGUO
Assignees
- 武汉法博特机器人有限公司 (Wuhan Fabote Robot Co., Ltd.)
Dates
- Publication Date
- 20260508
- Application Date
- 20260119
Claims (10)
- 1. A vision-touch fusion sensing method for a transparent obstacle of an unmanned vehicle, characterized by comprising the following steps: acquiring an environment image from a vehicle-mounted camera, performing visual pre-recognition of the transparent obstacle, and outputting preliminary position coordinates of the transparent obstacle together with a corresponding visual credibility score; and dynamically determining the dominance relationship between visual perception and tactile perception in transparent obstacle perception according to the visual credibility score, wherein: when the visual credibility score falls in the high-credibility interval, the preliminary position coordinates from visual pre-recognition serve as the dominant perception result, and the tactile perception result is used to correct deviations in the position coordinates; when the visual credibility score falls in the low-credibility interval, a tactile-perception-dominant mode is triggered, the tactile perception result serves as the main basis for the position of the transparent obstacle, and the visual result is used only to assist noise suppression; and, when both visual and tactile perception participate in fusion, computing the spatial difference between the visually perceived position coordinates and the tactilely derived position coordinates: when the difference exceeds a preset threshold, the tactilely derived position coordinates are adopted as the final position coordinates of the transparent obstacle; when it does not, the visually perceived and tactilely derived position coordinates are fused to obtain the final position coordinates of the transparent obstacle.
- 2. The method of claim 1, wherein the visual credibility score is obtained by a weighted calculation of the edge definition of the transparent obstacle, the degree of distinction between the transparent obstacle and the environmental background, and the degree of illumination interference during image acquisition, and the score characterizes how usable the visual perception result is for transparent obstacle identification.
- 3. The method of claim 1, wherein the spatial difference threshold is three centimeters, and the tactilely derived position coordinates are used to update the position of the transparent obstacle when the maximum coordinate difference between the visually perceived position coordinates and the tactilely derived position coordinates exceeds three centimeters.
- 4. A method of tactile perception of a transparent obstacle for an unmanned vehicle, the method comprising: when the unmanned vehicle approaches the transparent obstacle without contacting it, entering a non-contact sensing stage, sensing the electrostatic induction signal caused by the transparent obstacle through a tactile sensing unit, and generating a proximity tactile feature whose elements are approach distance and electrostatic intensity; when the unmanned vehicle and the transparent obstacle are in micro-contact, entering a micro-contact sensing stage, acquiring contact state information of the contact area through the tactile sensing unit, and generating a contact tactile feature whose elements are contact area and pressure distribution; and constructing a tactile perception description of the transparent obstacle from the proximity tactile feature of the non-contact stage and the contact tactile feature of the micro-contact stage, the description being used to identify and locate the transparent obstacle for the unmanned vehicle.
- 5. The method of claim 4, wherein the relative distance between the unmanned vehicle and the transparent obstacle in the non-contact sensing stage is in the range of five millimeters to fifty millimeters, and the contact force in the micro-contact sensing stage is in the range of 0.1 N to 1 N.
- 6. The method of claim 4, wherein the tactile sensing unit has a sampling frequency of not less than 100 Hz, a distance detection accuracy of ±1 mm, and an electrostatic signal detection sensitivity at the sub-picocoulomb level.
- 7. An obstacle avoidance decision method for a transparent obstacle of an unmanned vehicle, characterized by comprising the following steps: selecting, from the lidar point cloud data, points whose reflection intensity exceeds a preset intensity threshold as a candidate point set for three-dimensional reconstruction of the transparent obstacle; applying spatial constraints to the candidate point set using the transparent obstacle position obtained by vision-touch fusion sensing, and clustering the constrained point cloud data to remove isolated noise points and form an effective point cloud cluster of the transparent obstacle; and reconstructing a three-dimensional contour model of the transparent obstacle from the effective point cloud cluster and generating an obstacle avoidance path in combination with the real-time driving state of the unmanned vehicle, so as to control the unmanned vehicle to avoid the transparent obstacle.
- 8. The method of claim 7, wherein the reflection intensity threshold is set to 80 for distinguishing transparent obstacle point clouds from those of ordinary objects.
- 9. The method of claim 7, wherein the obstacle avoidance path is planned by minimizing a cost objective constructed from steering angle and travel path length.
- 10. The method of claim 7, wherein the obstacle avoidance path is converted into steering control commands and longitudinal drive control commands for the vehicle and sent to a chassis control system via a vehicle bus to enable autonomous obstacle avoidance travel of the unmanned vehicle.
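As a minimal illustration, the dominance and fusion rule of claims 1-3 can be sketched in Python. Only the 3 cm spatial threshold is taken from claim 3; the credibility-interval bounds (`HIGH_CONF`, `LOW_CONF`) and the weighting scheme are assumptions introduced here for the sketch, not values specified in the patent.

```python
# Sketch of the dynamic visual-tactile dominance rule of claims 1-3.
# Only the 3 cm spatial threshold comes from claim 3; the credibility
# interval bounds and the weights below are illustrative assumptions.

HIGH_CONF = 0.8             # assumed lower bound of the high-credibility interval
LOW_CONF = 0.4              # assumed upper bound of the low-credibility interval
SPATIAL_THRESHOLD_M = 0.03  # 3 cm, per claim 3

def fuse_position(visual_pos, tactile_pos, visual_score):
    """Return the final transparent-obstacle position coordinates (metres)."""
    if visual_score >= HIGH_CONF:
        # Vision-dominant: tactile data only corrects small deviations.
        return [0.8 * v + 0.2 * t for v, t in zip(visual_pos, tactile_pos)]
    if visual_score <= LOW_CONF:
        # Tactile-dominant mode: touch is the main basis for the position.
        return list(tactile_pos)
    # Both modalities participate: compare the per-axis spatial difference.
    if max(abs(v - t) for v, t in zip(visual_pos, tactile_pos)) > SPATIAL_THRESHOLD_M:
        return list(tactile_pos)  # claim 3: adopt the tactile coordinates
    # One possible fusion rule: a credibility-weighted average.
    return [visual_score * v + (1 - visual_score) * t
            for v, t in zip(visual_pos, tactile_pos)]
```

For example, with a 5 cm disagreement between the modalities and a mid-range credibility score, the rule falls back to the tactile coordinates, matching the behavior claim 3 describes.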
Description
Visual-touch fusion sensing and obstacle avoidance method and system for transparent obstacle of unmanned vehicle
Technical Field
The invention belongs to the technical field of unmanned vehicle environment sensing and obstacle avoidance, and particularly relates to a method and a system for vision-touch fusion sensing and obstacle avoidance of transparent obstacles of an unmanned vehicle.
Background
With the evolution of automatic driving technology toward the L4/L5 level, unmanned vehicles place extremely high demands on the accuracy and fault tolerance of complex-environment perception. In the prior art, unmanned vehicles generally fuse multiple sensors such as cameras, LiDAR, millimeter-wave radar and ultrasonic sensors to acquire environmental information. For transparent obstacles, however, existing sensors tend to be unreliable. On the one hand, a transparent object mainly transmits, and rarely reflects, visible light and laser, so the transparent region in an image captured by a camera often blends into the background, making it difficult for an algorithm to judge whether an obstacle lies ahead. For example, when glass appears transparent in the scene image obtained by the vehicle camera, the algorithm may erroneously judge the scene as passable, while the laser beam of a LiDAR or infrared depth sensor may penetrate the glass without being reflected back to the sensor, completely disabling obstacle detection. On the other hand, factors such as ambient light variation and motion blur further degrade the accuracy of purely visual recognition. Pure vision systems are susceptible to direct sunlight or shadow occlusion and can easily confuse objects of similar shape in complex scenes.
For the reasons above, unmanned vehicles frequently miss or misjudge transparent obstacles and may even collide with them directly, creating safety hazards. How to improve the perception reliability of unmanned vehicles for transparent obstacles and realize dependable obstacle avoidance is therefore an important open problem. Some prior studies have attempted to enhance glass detection with scene context information, but it remains difficult to cover all cases in diverse environments. A novel fusion sensing method is therefore needed that combines vision with other sensing information to compensate for the shortcomings of any single sensor.
Disclosure of Invention
To address these technical problems, the invention provides a vision-touch fusion sensing and obstacle avoidance method for transparent obstacles of an unmanned vehicle. The method aims to overcome the missed detection and misjudgment of existing unmanned vehicles that rely on vision and LiDAR to perceive transparent obstacles, as well as the lack of haptic redundancy and the rigidity of fusion mechanisms in extreme scenes. By constructing a full-pipeline technical system of visual pre-recognition, haptic verification and enhancement, dynamic fusion and path planning, and by combining the pre-contact sensing capability of flexible multi-mode haptic sensing with a dynamic-weight fusion algorithm, high-precision detection of and safe avoidance around transparent obstacles under complex working conditions are realized, reducing the collision risk of the unmanned vehicle.
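As a minimal sketch of the two-stage haptic verification (the pre-contact sensing capability mentioned above), the following Python fragment classifies one tactile-unit sample into a sensing phase and builds the corresponding feature. The 5-50 mm proximity band and the 0.1-1 N contact-force band are taken from claim 5; the dictionary-based feature encoding and the mean-pressure proxy are assumptions for illustration, not the patent's actual data format.

```python
# Sketch of the two-stage tactile perception (non-contact / micro-contact).
# The 5-50 mm proximity band and the 0.1-1 N contact-force band come from
# claim 5; the feature encoding below is an illustrative assumption.

def classify_phase(distance_mm, contact_force_n):
    """Classify one tactile-unit sample into a sensing phase."""
    if 0.1 <= contact_force_n <= 1.0:
        return "micro-contact"       # slight touch: 0.1-1 N
    if 5.0 <= distance_mm <= 50.0:
        return "non-contact"         # electrostatic proximity sensing band
    return "idle"

def tactile_feature(distance_mm, electrostatic_pc, contact_force_n, contact_area_mm2):
    """Build the phase-dependent tactile feature described in claim 4."""
    phase = classify_phase(distance_mm, contact_force_n)
    if phase == "non-contact":
        # Proximity feature: approach distance and electrostatic intensity.
        return {"phase": phase,
                "distance_mm": distance_mm,
                "charge_pC": electrostatic_pc}
    if phase == "micro-contact":
        # Contact feature: contact area and a mean-pressure proxy
        # (N / mm^2 = MPa; multiplying by 1000 gives kPa).
        return {"phase": phase,
                "area_mm2": contact_area_mm2,
                "mean_pressure_kPa": contact_force_n / contact_area_mm2 * 1000.0}
    return {"phase": phase}
```

In this sketch a sample at 20 mm with no contact force is labelled non-contact, while a 0.5 N touch over 100 mm² yields a micro-contact feature with a 5 kPa mean pressure.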
The invention discloses a visual-touch fusion sensing and obstacle avoidance method for transparent obstacles of an unmanned vehicle, comprising the following steps: S1, constructing a transparent obstacle visual pre-recognition module that detects the edges of transparent objects in images acquired by the vehicle-mounted look-around camera through an improved Transformer semantic segmentation network and outputs preliminary position coordinates and credibility scores, wherein the improved Transformer semantic segmentation network captures the weak edge information of transparent objects through a multi-scale feature pyramid structure and introduces an edge-aware loss function to optimize the contour segmentation precision of transparent objects; the multi-scale feature pyramid structure comprises 4 feature levels corresponding to 192×192, 96×96, 48×48 and 24×24 image resolutions, with skip connections between the levels to propagate edge features; S2, constructing a haptic verification and enhancement module by deploying a flexible multi-mode haptic sensor, integrating triboelectric sensing and visual-tactile sensing, at the edges of the bumper and wheel hubs of the unmanned vehicle, generating a proximity haptic feature in the non-contact stage and a contact haptic feature in the micro-contact stage, wherein the sampling frequency of the flexible multi-mode haptic sensor is not lower than 100 Hz, the pressure detection range is 0-5 N, the di