CN-121259790-B - Internet of vehicles collaboration method, system and storage medium
Abstract
The invention provides a cooperative method, a cooperative system and a storage medium for the Internet of Vehicles. The method comprises: constructing an environment cooperative image from the vehicle positions and environment acquisition images of all running vehicles; determining an environment blind area image for each running vehicle from the environment cooperative image and its environment acquisition image; determining associated vehicles among the running vehicles according to the vehicle positions; determining associated perception objects for each running vehicle according to its associated vehicles; performing collision risk prediction for the running vehicle and its associated vehicles based on the associated perception objects to obtain a collision risk prediction result; and issuing a collision risk reminder to the running vehicle according to that result. According to the embodiments of the invention, the associated perception objects in the environment blind area image can be effectively determined from the associated vehicles, collision risk prediction can be performed on the running vehicle and its associated vehicles through these objects, the running vehicle can be effectively reminded of collision risk, and driving safety is improved.
Inventors
- ZHU HONG
- WANG YU
- CAI SHUZHOU
- ZHANG XIAODA
- XIAO ZIXIAN
Assignees
- 厦门磁北科技有限公司
Dates
- Publication Date
- 20260512
- Application Date
- 20251203
Claims (6)
- 1. A cooperative method of the Internet of Vehicles, the method comprising: acquiring environment acquisition images of all running vehicles in a target road section, and constructing an environment cooperative image according to the vehicle positions of all running vehicles and the environment acquisition images; determining an environment blind area image for each running vehicle according to the environment cooperative image and the environment acquisition image; for any running vehicle, determining an associated vehicle among the running vehicles according to the vehicle positions, and determining an associated perception object in the environment blind area image corresponding to the running vehicle according to the associated vehicle; and performing collision risk prediction for the running vehicle and the associated vehicle according to the associated perception object to obtain a collision risk prediction result, and issuing a collision risk reminder to the running vehicle according to the collision risk prediction result; wherein, for any running vehicle, determining an associated vehicle among the running vehicles according to the vehicle positions, and determining an associated perception object in the environment blind area image corresponding to the running vehicle according to the associated vehicle, comprises: connecting the vehicle position of the running vehicle with the area boundary of the environment blind area image to obtain a blind area extension area, and determining any running vehicle within the blind area extension area as an associated vehicle; performing entity identification on the environment blind area image corresponding to the running vehicle to obtain entity objects, and acquiring the entity position of a fixed entity among the entity objects; acquiring the running direction of the associated vehicle, and performing position prediction according to the running direction and the entity position of the fixed entity to obtain an associated predicted position, wherein the associated predicted position is the predicted coordinate of the associated vehicle when it travels, along the running direction, to the same horizontal position as the fixed entity; and if the position distance between the associated predicted position and the entity position of the fixed entity is smaller than a first distance threshold, determining the fixed entity as an associated perception object; and wherein performing collision risk prediction for the running vehicle and the associated vehicle according to the associated perception object to obtain a collision risk prediction result comprises: if the associated perception object is a fixed entity, respectively acquiring the distance between the associated vehicle and the associated perception object and the distance between the associated vehicle and the running vehicle to obtain a first association distance and a second association distance; if the first association distance is greater than a second distance threshold, setting the running vehicle and the associated vehicle to a first collision risk level; if the second association distance is greater than a third distance threshold, setting the running vehicle and the associated vehicle to a first collision risk level; and if the first association distance is smaller than or equal to the second distance threshold and the second association distance is smaller than or equal to the third distance threshold, setting the running vehicle and the associated vehicle to a second collision risk level; wherein the collision risk prediction result includes the collision risk level between the running vehicle and each of its associated vehicles.
- 2. The cooperative method of the Internet of Vehicles according to claim 1, wherein after performing entity identification on the environment blind area image corresponding to the running vehicle to obtain entity objects, the method further comprises: acquiring the moving track of a moving entity among the entity objects, and screening the moving entities according to the running direction of the associated vehicle and the moving direction in the moving track to obtain screened entities; performing movement abnormality detection on a screened entity according to its moving track, and if the screened entity fails the movement abnormality detection, determining the screened entity as an associated perception object; if movement interference exists between the entity position of the fixed entity and the moving track of a screened entity, determining the screened entity as an associated perception object; and if movement interference exists between the moving tracks of different screened entities, determining those screened entities as associated perception objects.
- 3. The cooperative method of the Internet of Vehicles according to claim 2, wherein performing collision risk prediction for the running vehicle and the associated vehicle according to the associated perception object to obtain a collision risk prediction result comprises: determining a meeting position according to the running direction of the associated vehicle and the moving direction in the moving track, and respectively acquiring the distance between the associated perception object and the meeting position and the distance between the associated vehicle and the meeting position to obtain a first meeting distance and a second meeting distance; determining a first meeting time according to the moving speed of the associated perception object and the first meeting distance, determining a second meeting time according to the running speed of the associated vehicle and the second meeting distance, and calculating the duration difference between the first meeting time and the second meeting time; if the duration difference is greater than or equal to a first duration threshold, setting the running vehicle and the associated vehicle to a first collision risk level; and if the duration difference is greater than a second duration threshold and smaller than the first duration threshold, setting the running vehicle and the associated vehicle to a second collision risk level.
- 4. The Internet of Vehicles collaboration method of claim 1, wherein determining an environment blind area image for each running vehicle according to the environment cooperative image and the environment acquisition image comprises: comparing the environment cooperative image with the environment acquisition image to obtain a difference image, and performing image segmentation on the difference image according to the vehicle position and running direction of the running vehicle to obtain the environment blind area image.
- 5. A collaboration system for the Internet of Vehicles, the system comprising: an image construction module for acquiring environment acquisition images of all running vehicles in a target road section, and constructing an environment cooperative image according to the vehicle positions of all running vehicles and the environment acquisition images; a blind area determination module for determining an environment blind area image for each running vehicle according to the environment cooperative image and the environment acquisition image; an object perception module for, for any running vehicle, determining an associated vehicle among the running vehicles according to the vehicle positions, and determining an associated perception object in the environment blind area image corresponding to the running vehicle according to the associated vehicle; and a collision reminding module for performing collision risk prediction for the running vehicle and the associated vehicle according to the associated perception object to obtain a collision risk prediction result, and issuing a collision risk reminder to the running vehicle according to the collision risk prediction result; wherein the object perception module is further configured to: connect the vehicle position of the running vehicle with the area boundary of the environment blind area image to obtain a blind area extension area, and determine any running vehicle within the blind area extension area as an associated vehicle; perform entity identification on the environment blind area image corresponding to the running vehicle to obtain entity objects, and acquire the entity position of a fixed entity among the entity objects; acquire the running direction of the associated vehicle, and perform position prediction according to the running direction and the entity position of the fixed entity to obtain an associated predicted position, wherein the associated predicted position is the predicted coordinate of the associated vehicle when it travels, along the running direction, to the same horizontal position as the fixed entity; and, if the position distance between the associated predicted position and the entity position of the fixed entity is smaller than a first distance threshold, determine the fixed entity as an associated perception object; and wherein the collision reminding module is further configured to: if the associated perception object is a fixed entity, respectively acquire the distance between the associated vehicle and the associated perception object and the distance between the associated vehicle and the running vehicle to obtain a first association distance and a second association distance; if the first association distance is greater than a second distance threshold, set the running vehicle and the associated vehicle to a first collision risk level; if the second association distance is greater than a third distance threshold, set the running vehicle and the associated vehicle to a first collision risk level; and if the first association distance is smaller than or equal to the second distance threshold and the second association distance is smaller than or equal to the third distance threshold, set the running vehicle and the associated vehicle to a second collision risk level; wherein the collision risk prediction result includes the collision risk level between the running vehicle and each of its associated vehicles.
- 6. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
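The distance and timing thresholds in claims 1 and 3 amount to a small decision table: widely separated distances or arrival times give the first (lower) collision risk level, and close distances or arrival times give the second (elevated) level. A minimal sketch under the assumption of scalar distances and speeds; the function names, the numeric 1/2 return convention, and the `None` fallback for the case claim 3 leaves unassigned are all illustrative, not from the patent:

```python
def fixed_entity_risk_level(first_assoc_dist, second_assoc_dist,
                            second_dist_thresh, third_dist_thresh):
    """Claim 1: risk level when the associated perception object is a fixed
    entity. first_assoc_dist is associated vehicle to fixed entity;
    second_assoc_dist is associated vehicle to running vehicle."""
    # Either distance exceeding its threshold keeps the pair at the first level.
    if first_assoc_dist > second_dist_thresh or second_assoc_dist > third_dist_thresh:
        return 1
    # Both distances within their thresholds: elevated (second) risk level.
    return 2


def meeting_time_risk_level(first_meet_dist, obj_speed,
                            second_meet_dist, vehicle_speed,
                            first_time_thresh, second_time_thresh):
    """Claim 3: risk level from the gap between the two meeting times.
    Returns None when the gap is at or below the second duration threshold,
    a case the claim does not assign a level to."""
    first_meet_time = first_meet_dist / obj_speed      # perception object
    second_meet_time = second_meet_dist / vehicle_speed  # associated vehicle
    gap = abs(first_meet_time - second_meet_time)      # duration difference
    if gap >= first_time_thresh:
        return 1        # arrivals well separated in time
    if gap > second_time_thresh:
        return 2        # arrivals close in time: elevated risk
    return None
```

Reading both claims together, the first risk level consistently marks the safer configuration, so a reminder policy could escalate only on level 2.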
Description
Internet of vehicles collaboration method, system and storage medium

Technical Field

The invention relates to the technical field of the Internet of Vehicles, and in particular to an Internet of Vehicles collaboration method, an Internet of Vehicles collaboration system, and a storage medium.

Background

As Internet of Vehicles technology has matured, sensor technology, mobile communication technology, big data technology, intelligent computing technology and the like have begun to fuse deeply with the Internet of Vehicles. Driven by market demand, telematics terminal devices for the Internet of Vehicles (terminal devices that provide, over a wireless network, the various kinds of information people need while driving and living) are expected to grow explosively. Unlike a traditional ITS (Intelligent Transport System), the Internet of Vehicles places greater emphasis on interactive communication between vehicles and vehicles, between vehicles and roads, and between vehicles and people; it can be said that the appearance of the Internet of Vehicles has redefined the operation mode of vehicle traffic. In the existing Internet of Vehicles, however, the sensing range of a vehicle's sensors is limited by physical characteristics and environmental interference, and perception blind areas readily appear when occluding objects are present, reducing driving safety.

Disclosure of Invention

The embodiments of the invention aim to provide a collaboration method, a collaboration system and a storage medium for the Internet of Vehicles, so as to solve the problem in the prior art of low driving safety caused by perception blind areas.
An embodiment of the invention is realized as follows: a collaboration method for the Internet of Vehicles comprises the following steps: acquiring environment acquisition images of all running vehicles in a target road section, and constructing an environment cooperative image according to the vehicle positions of all running vehicles and the environment acquisition images; determining an environment blind area image for each running vehicle according to the environment cooperative image and the environment acquisition image; for any running vehicle, determining an associated vehicle among the running vehicles according to the vehicle positions, and determining an associated perception object in the environment blind area image corresponding to the running vehicle according to the associated vehicle; and performing collision risk prediction for the running vehicle and the associated vehicle according to the associated perception object to obtain a collision risk prediction result, and reminding the running vehicle according to the collision risk prediction result.
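Claim 4 specifies the blind-area step above as a difference image between the environment cooperative image and a vehicle's own acquisition image. A minimal pixel-wise sketch with NumPy, assuming the two images have already been registered to a common frame (the registration and the subsequent segmentation by vehicle position and running direction are outside this sketch):

```python
import numpy as np

def blind_area_mask(coop_img, own_img, thresh=0):
    """Mark pixels that differ between the environment cooperative image
    (fused from all vehicles) and one vehicle's own environment acquisition
    image. Both inputs are assumed to be aligned uint8 arrays of identical
    shape; the output is a 0/1 mask of candidate blind-area pixels."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(coop_img.astype(np.int16) - own_img.astype(np.int16))
    # Above-threshold difference marks content the vehicle itself cannot see.
    return (diff > thresh).astype(np.uint8)
```

In practice the mask would be cropped to the sector behind an occluding object relative to the vehicle's position and running direction, which is the image segmentation the claim describes.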
Preferably, for any running vehicle, determining an associated vehicle among the running vehicles according to the vehicle positions, and determining an associated perception object in the environment blind area image corresponding to the running vehicle according to the associated vehicle, comprises: connecting the vehicle position of the running vehicle with the area boundary of the environment blind area image to obtain a blind area extension area, and determining any running vehicle within the blind area extension area as an associated vehicle; performing entity identification on the environment blind area image corresponding to the running vehicle to obtain entity objects, and acquiring the entity position of a fixed entity among the entity objects; acquiring the running direction of the associated vehicle, and performing position prediction according to the running direction and the entity position of the fixed entity to obtain an associated predicted position; and if the position distance between the associated predicted position and the entity position of the fixed entity is smaller than a first distance threshold, determining the fixed entity as an associated perception object.
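The position prediction above can be read as a straight-line extrapolation: advance the associated vehicle along its running direction until it reaches the fixed entity's horizontal position, then compare the predicted point with the entity position. The coordinate conventions here (heading in degrees measured from the y-axis, the y-coordinate taken as the "horizontal position") are assumptions for illustration, not fixed by the patent:

```python
import math

def associated_predicted_position(vehicle_pos, heading_deg, entity_pos):
    """Extrapolate (x0, y0) along the heading until reaching the entity's y
    coordinate. Returns None if the vehicle's path never reaches that line."""
    x0, y0 = vehicle_pos
    _, ey = entity_pos
    dx = math.sin(math.radians(heading_deg))  # unit direction components
    dy = math.cos(math.radians(heading_deg))
    if abs(dy) < 1e-9:
        return None            # moving parallel to the entity's line
    t = (ey - y0) / dy         # travel distance along the heading
    if t < 0:
        return None            # the entity's line lies behind the vehicle
    return (x0 + t * dx, y0 + t * dy)

def is_associated_perception_object(vehicle_pos, heading_deg, entity_pos,
                                    first_dist_thresh):
    """The claimed test: the fixed entity qualifies when the predicted
    position falls within the first distance threshold of its position."""
    pred = associated_predicted_position(vehicle_pos, heading_deg, entity_pos)
    if pred is None:
        return False
    return math.dist(pred, entity_pos) < first_dist_thresh
```

Under this reading, the threshold effectively asks whether the associated vehicle's lane passes close enough to the fixed entity for the entity to matter.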
Preferably, after performing entity identification on the environment blind area image corresponding to the running vehicle to obtain entity objects, the method further comprises: acquiring the moving track of a moving entity among the entity objects, and screening the moving entities according to the running direction of the associated vehicle and the moving direction in the moving track to obtain screened entities; performing movement abnormality detection on a screened entity according to its moving track, and if the screened entity fails the movement abnormality detection, determining the screened entity as an associated perception object; if movement interference exists between the entity position of the fixed entity and the moving track of a screened entity, determining the screened entity as an associated perception object; and if movement interference exists between the moving tracks of different screened entities, determining those screened entities as associated perception objects.
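The screening step above is left abstract in the text; one plausible reading keeps only moving entities whose direction of motion could cross the associated vehicle's running direction, discarding entities moving roughly in parallel with it. A hypothetical sketch under that reading (the entity record layout and the 45° default are assumptions, not from the patent):

```python
def screen_moving_entities(entities, vehicle_heading_deg, angle_thresh_deg=45.0):
    """Keep moving entities whose heading differs enough from the associated
    vehicle's running direction that their paths may intersect. Each entity
    is a dict with a 'heading_deg' key; headings compare modulo 360."""
    screened = []
    for entity in entities:
        # Smallest angular difference between the two headings, in [0, 180].
        diff = abs((entity["heading_deg"] - vehicle_heading_deg + 180.0)
                   % 360.0 - 180.0)
        if diff > angle_thresh_deg:   # roughly parallel motion is filtered out
            screened.append(entity)
    return screened
```

The surviving entities would then go through the movement-abnormality and movement-interference checks the text describes before being promoted to associated perception objects.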