US-12619313-B2 - Orientation assistance system comprising means for acquiring a real or virtual visual environment, non-visual human-machine interface means and means for processing the digital representation of said visual environment
Abstract
An orientation assistance system comprises means for acquiring a real or virtual visual environment, non-visual human-machine interface means, and means for processing the digital representation of the visual environment in order to provide an electrical signal for controlling a haptic interface. The means for processing the digital representation periodically extract at least one pulsed digital activation pattern for a subset of actuators of the haptic interface. The haptic interface consists of a lumbar belt having an active surface of N×M actuators, where N and M are integers greater than or equal to 10. The processing means provide, for each acquisition of the visual environment, a sequence of P activation frames for the actuators, where P is an integer between 2 and 15, preferably between 5 and 10, each of the frames corresponding to the representation of the environment in an incremental depth plane.
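The depth-sweep described in the abstract can be illustrated with a minimal sketch, assuming a Python implementation with illustrative names and parameter values (the grid size, depth range, and function below are hypothetical, not from the patent): a single depth acquisition is sliced into P activation frames, one per incremental depth plane, so the burst sweeps the scene from near to far.

```python
# Hypothetical sketch: slice one depth acquisition into P activation frames,
# one frame per incremental depth plane, for an N x M actuator belt.
N, M, P = 10, 10, 5          # belt resolution and number of depth planes (assumed)
MAX_DEPTH = 5.0              # metres covered by the burst (assumed value)

def frames_from_depth(depth):
    """depth: N x M grid of distances (metres); returns P binary frames.

    Frame k activates the actuators whose measured depth falls in the
    k-th depth slice, so the burst presents the scene near-to-far.
    """
    slice_size = MAX_DEPTH / P
    frames = []
    for k in range(P):
        lo, hi = k * slice_size, (k + 1) * slice_size
        frames.append([[1 if lo <= depth[i][j] < hi else 0
                        for j in range(M)] for i in range(N)])
    return frames

# Example: a single obstacle at 1.2 m straight ahead, open space elsewhere.
depth = [[MAX_DEPTH] * M for _ in range(N)]
depth[5][5] = 1.2
frames = frames_from_depth(depth)
# The obstacle activates an actuator only in the frame covering 1.0-2.0 m.
```

With P frames per acquisition, the user feels when (not just where) an actuator fires within the burst, which encodes distance as timing.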
Inventors
- Rémi du Chalard
- Amaury BUGUET
- Gabrielle DE PUYSEGUR
- Wandrille DU CHALARD
- François Outters
- Zacharie NATAF
Assignees
- ARTHA FRANCE
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-01-18
- Priority Date: 2022-02-01
Claims (14)
- 1. An orientation assistance system, comprising: means for acquiring a real or virtual visual environment; non-visual human-machine interface means comprising a haptic interface; and means for processing the digital representation acquired from a single acquisition of the visual environment to provide an electrical signal for controlling the haptic interface, the means for digital representation processing configured to periodically extract at least one pulsed digital activation pattern for a subset of spikes of the haptic interface, wherein: the haptic interface comprises a lumbar belt with an active surface of N×M spikes, where N and M are integers greater than or equal to 10; and the means for processing provides, for each single acquisition of the visual environment, a sequence of P activation frames for the spikes, where P is an integer between 2 and 15, each frame corresponding to the digital representation of the visual environment in an incremental depth plane, with the sequence of P activation frames being delivered in a burst for the single acquisition.
- 2. The orientation assistance system of claim 1, wherein the spikes are activated by solenoids.
- 3. The orientation assistance system of claim 1, wherein the means for acquiring a real or virtual visual environment comprise a spectacle frame carrying one or two cameras.
- 4. A method of processing a digital representation of a visual environment to control a haptic interface comprising a lumbar belt with an active surface of N×M actuators, N and M being integers greater than or equal to 10, the method comprising: acquiring the visual environment and, for each single acquisition, calculating a sequence of P activation frames for the actuators, where P is an integer between 2 and 15, each of the frames corresponding to the digital representation of the visual environment in an incremental depth plane, with the sequence of P activation frames being delivered in a burst for the single acquisition.
- 5. The method of claim 4, further comprising calculating a digital image of N and M haptic pixels in a direction offset at a level of between 10 and 100 cm from the ground.
- 6. The method of claim 5, further comprising calculating for each digital image a sequence of P consecutive frames corresponding to incremental depth planes.
- 7. The method of claim 5, wherein calculating the digital image of N and M haptic pixels comprises processing involving assigning each haptic pixel a density value corresponding to a highest density value of visual voxels corresponding to a respective haptic pixel.
- 8. The method of claim 5, wherein calculating the digital image of N and M haptic pixels comprises processing involving assigning a non-zero density value to an area of the visual image corresponding to a hole.
- 9. The method of claim 5, wherein calculating the digital image of N and M haptic pixels comprises processing involving assigning a non-zero density value to an area of the visual image corresponding to an obstacle by automatic recognition processing.
- 10. The method of claim 5, wherein calculating the digital image of N and M haptic pixels comprises processing involving eliminating voxels outside a traffic lane of a user prior to calculating the digital image of N and M haptic pixels, the digital image being established from only remaining voxels.
- 11. The method of claim 5, wherein calculating the digital image of N and M haptic pixels comprises processing involving reducing processed voxels as a function of a parameter comprising a speed of movement of a user and/or a speed of movement of objects in a field of view of the visual acquisition means and/or a distance of objects in the field of view of the visual acquisition means, prior to calculating the digital image of N and M haptic pixels, the digital image being established from only remaining voxels.
- 12. The method of claim 4, further comprising transforming distances of objects relative to a camera into distances of the objects relative to a user.
- 13. The method of claim 4, further comprising detecting a change in orientation of a direction of observation of the environment by a user, and processing involving recalculating the digital representation of the visual environment.
- 14. The method of claim 4, further comprising modifying positions of voxels according to respective depths of the voxels.
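The pooling step of claim 7 (each haptic pixel takes the highest density among the visual voxels that map onto it) can be sketched as follows, assuming densities arrive as a higher-resolution 2D grid whose dimensions are multiples of the belt resolution; the function name and grid sizes are illustrative, not from the patent:

```python
# Illustrative sketch of claim 7: each of the N x M haptic pixels is assigned
# the highest density value among the visual voxels that map onto it, so a
# small dense obstacle is never averaged away during downsampling.
N, M = 10, 10  # belt resolution (assumed)

def haptic_image(voxels):
    """voxels: H x W grid of density values, with H, W multiples of N, M."""
    H, W = len(voxels), len(voxels[0])
    bh, bw = H // N, W // M            # voxel block feeding one haptic pixel
    return [[max(voxels[i * bh + di][j * bw + dj]
                 for di in range(bh) for dj in range(bw))
             for j in range(M)] for i in range(N)]

# Example: a 20 x 20 voxel grid with one dense obstacle voxel.
vox = [[0.0] * 20 for _ in range(20)]
vox[3][7] = 0.9
img = haptic_image(vox)
# The obstacle survives max-pooling into haptic pixel (1, 3).
```

Max-pooling rather than averaging matters here: with 2×2 voxel blocks, a single high-density voxel still drives its haptic pixel to full strength instead of being diluted by empty neighbors.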
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a national phase entry under 35 U.S.C. § 371 of International Patent Application PCT/EP2023/051121, filed Jan. 18, 2023, designating the United States of America and published as International Patent Publication WO 2023/147996 A1 on Aug. 10, 2023, which claims the benefit under Article 8 of the Patent Cooperation Treaty to French Patent Application Serial No. FR2200877, filed Feb. 1, 2022.

TECHNICAL FIELD

The present disclosure relates to the field of orientation assistance for visually impaired people or people moving in very low-visibility environments, for example, firefighters moving in a smoke-filled building or military personnel moving in the dark.

BACKGROUND

Various solutions are known, ranging from guide dog assistance to marking the ground with guidance strips, installing audio beacons, or even using canes to detect obstacles. It has also been proposed to use a haptic mode of information transmission, for example, in the form of a connected wristband. Haptic technology uses the sense of touch to convey information. WearWorks offers a smart bracelet called “Wayband” to guide the blind. The user begins by downloading an application on an associated smartphone and entering the desired address. The bracelet, linked to a GPS system, guides the user to their destination. When the user takes a wrong route, the bracelet vibrates; it stops vibrating once the user is back on the right track. Tactile language is sensitive, more intuitive and less intrusive, and relieves hearing, an overtaxed sense for the visually impaired.
French Patent FR3100636B1 discloses an orientation assistance system comprising means for acquiring a real or virtual visual environment, non-visual human-machine interface means, and means for processing the digital representation of the visual environment to provide an electrical signal for controlling an interface consisting of a bracelet having a single haptic zone with a surface area of between 60×60 millimeters and 150×150 millimeters, with an N×M set of active spikes where N is between 5 and 100 and M is between 10 and 100, the digital representation processing means consisting in periodically extracting at least one pulsed digital activation pattern for a subset of spikes of the haptic zone. Active belts have also been proposed to increase the surface area of the haptic zone.

US Patent Application Publication No. US2013201308 relates to a visual blind-guiding method, which comprises the following steps: (1) shooting a black-and-white image, and extracting profile information from the black-and-white image to reduce detail elements and refine the image, so as to obtain an object profile signal; (2) according to ergonomic features, converting the object profile signal into a serial signal, conveying the serial signal to an image feeling instrument, wherein the image feeling instrument converts the serial signal into a mechanical tactile signal to emit feeler pin stimulation. An intermittent picture touch mode is used with respect to the speed of touch for vision. A feeler pin array enables the visually impaired to touch the shape of an object. Optionally, this document proposes to probe position information of the object, and to process the position information to obtain and prompt a distance of the object and a safe avoiding direction.
The position information probed from the object is processed to obtain and prompt the distance of the object and a safe avoiding direction, so that the blind can not only perceive the shape of the object but also know its distance.

US Patent Application Publication No. US2019332175 relates to a wearable electronic haptic vision device configured to be attached to or worn by a user. The wearable electronic haptic vision device is arranged to provide haptic feedback with pressurized air on the user's skin based on objects detected in the user's environment. Information about objects detected in the surroundings is captured using a digital camera, radar and/or sonar and/or a 3D capture device such as a 3D scanner or 3D camera attached to the wearable electronic haptic vision device. The wearable electronic haptic vision device is in the form of a helmet with at least two cameras placed at the user's eye position, or in the form of a t-shirt or other wearable accessory.

Both US Patent Application Publications mentioned above propose to provide the user with a haptic transposition corresponding to the optical image, obtained from a perspective view. This is, of course, an obvious approach, consisting in compensating for the degradation of one of the senses, sight, by restoring the same information perceptible by another sense, touch. The problem is that perception of the environment is not limited to “reading” a flat photographic image, but is the result of a complex process involving interpretation by the brain, capable of providing rich information including depth, even when binocular vision is