EP-4738318-A1 - AUDITORY TRAINING METHOD AND DEVICE
Abstract
An auditory training method, according to one embodiment, may comprise an operation of providing a test sound having a first auditory pattern for at least one characteristic of a sound. The auditory training method may comprise an operation of providing, through a user interface for receiving a visual pattern and based on a user input that is input through the user interface, a first visual object having a first visual pattern corresponding to the user input. A first sound, which has at least one characteristic identified based on a position of the user input defined in the user interface and which is substantially synchronized with a detection time point of the user input, may be provided. The auditory training method may comprise an operation of providing a second visual object having a second visual pattern corresponding to at least one characteristic of the first auditory pattern of the test sound, based on identifying an event for providing the second visual pattern corresponding to the test sound.
Inventors
- PARK, JONG HWA
- PARK, JEONG MI
- LEE, WON WOO
- HA, JI YEON
- LEE, JAE EUN
- KIM, JUNG HOO
- YUM, JOONG BAE
Assignees
- Bell Therapeutics Inc.
Dates
- Publication Date
- 20260506
- Application Date
- 20240726
Claims (20)
- A method for auditory training, the method comprising: providing a test sound having a first auditory pattern for at least one characteristic of a sound; based on a user input that is input through a user interface for receiving a visual pattern, providing, through the user interface, a first visual object having a first visual pattern corresponding to the user input, wherein a first sound, which is substantially synchronized with a detection time point of the user input and has at least one characteristic identified based on a position of the user input defined in the user interface, is provided; and based on identifying an event for providing a second visual pattern corresponding to the test sound, providing a second visual object having the second visual pattern corresponding to at least one characteristic of the first auditory pattern.
- The method of claim 1, wherein providing the first visual object through the user interface comprises: based on detecting at least a part of the user input associated with a first position and a second position of the user interface, providing at least a part of the first visual object associated with the first position, the second position, and at least one intermediate position between the first position and the second position of the user interface.
- The method of claim 2, wherein based on at least a part of the user input being associated with the first position, a first portion of the first sound having at least one characteristic corresponding to the first position is provided; based on at least a part of the user input being associated with each of the at least one intermediate position, at least one intermediate portion of the first sound having at least one characteristic corresponding to each intermediate position is provided; and based on at least a part of the user input being associated with the second position, a second portion of the first sound having at least one characteristic corresponding to the second position is provided.
- The method of claim 2, wherein at least a part of the user input comprises an input for designating the first position and the second position, and/or an input for designating the first position, the at least one intermediate position, and the second position.
- The method of claim 1, wherein the test sound comprises a plurality of portions provided sequentially over time, and each of the plurality of portions has at least one characteristic that changes or remains constant over time according to the first auditory pattern.
- The method of claim 1, further comprising: providing a result of comparing the first visual object and the second visual object.
- The method of claim 6, wherein providing the comparison result comprises: providing information about a user's vulnerable points identified based on the comparison result.
- The method of claim 1, wherein the event for providing the second visual pattern corresponding to the test sound comprises: a selection of an affordance for providing a correct visual object, completion of providing the first visual object, and/or passage of a specified time.
- The method of claim 1, further comprising: after the first visual object is provided and before the second visual object is provided, performing modification of at least a part of the first visual object based on another user input for modifying the first visual object.
- The method of claim 9, wherein performing the modification of at least a part of the first visual object comprises: identifying a deletion command for a first portion of the first visual object; and deleting the first portion associated with the deletion command while maintaining display of the remaining portion of the first visual object excluding the first portion.
- The method of claim 1, wherein at a specific time point, each of at least one characteristic of at least a portion of the test sound has a single value.
- The method of claim 1, wherein at a specific time point, each of at least one characteristic of at least a portion of the test sound has multiple values.
- The method of claim 1, wherein the user interface comprises a plurality of reference objects for association with the user input.
- The method of claim 13, wherein the plurality of reference objects are arranged in a grid, and the user input comprises an input connecting one of a plurality of first reference objects included in one of the columns to one of a plurality of second reference objects included in an adjacent column.
- The method of claim 13, wherein based on two or more of the reference objects being associated with a first temporary user input, a visual object associated with the two or more reference objects is provided as the first visual object; and based on two or more of the reference objects not being associated with a second temporary user input, display of a visual object temporarily provided based on a trajectory of the second temporary user input is discontinued.
- The method of claim 13, wherein the number, density, and/or arrangement of the plurality of reference objects is set based on user selection and/or test difficulty.
- The method of claim 1, wherein the first auditory pattern of the test sound is represented based on at least one quantifiable characteristic, and a lower limit value for the test sound, an upper limit value for the test sound, and/or a difference between the upper and lower limit values is set based on user selection and/or test difficulty.
- The method of claim 1, wherein a sound effect identified based on user selection and/or test difficulty is applied to at least a part of the test sound.
- The method of claim 1, further comprising: providing background sound identified based on user selection and/or test difficulty along with at least a part of the test sound during the provision of the test sound.
- The method of claim 1, further comprising, before providing the test sound: setting at least one different characteristic of a sound other than the at least one characteristic of the first auditory pattern based on user selection and/or test difficulty.
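The grid-based input scheme described in claims 2–3 and 13–16 can be illustrated with a short sketch: a grid row is mapped to a sound characteristic (here, frequency between configurable lower and upper limit values, per claim 16), and an input connecting two reference objects is expanded into the first position, every intermediate position, and the second position (per claims 2–3). This is an assumption for illustration only, not the patent's implementation; all function names, the linear pitch mapping, and the default limits (250 Hz, 4000 Hz) are hypothetical.

```python
# Illustrative sketch only (assumed design, not from the patent text).

def row_to_frequency(row, n_rows, f_low=250.0, f_high=4000.0):
    """Map a grid row (0 = bottom) to a frequency between the configurable
    lower and upper limit values (cf. claim 16). Linear mapping is assumed."""
    if n_rows < 2:
        return f_low
    return f_low + (f_high - f_low) * row / (n_rows - 1)

def interpolate_path(first_pos, second_pos):
    """Expand an input designating a first and a second grid position into
    the first position, each intermediate position, and the second position
    (cf. claims 2-3). Positions are (column, row) tuples; one position is
    produced per column between the two endpoints."""
    (c1, r1), (c2, r2) = first_pos, second_pos
    steps = abs(c2 - c1)
    if steps == 0:
        return [first_pos]
    positions = []
    for i in range(steps + 1):
        c = c1 + i * (1 if c2 > c1 else -1)
        r = round(r1 + (r2 - r1) * i / steps)
        positions.append((c, r))
    return positions
```

Under this sketch, each position returned by `interpolate_path` would drive one portion of the first sound, with `row_to_frequency` supplying that portion's pitch, so a drag from (0, 0) to (4, 8) on a 9-row grid would produce a rising glide from 250 Hz to 4000 Hz.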
Description
[Technical Field]

The present invention relates to a method and apparatus for auditory training.

[Background Art]

As the population ages, the number of individuals with hearing impairments due to presbycusis is increasing. Hearing specialists predict that, with the rise in average life expectancy, the number of individuals with hearing impairments will grow further. Additionally, hearing impairments can occur at any age due to congenital or acquired causes. Accordingly, there is growing interest in assistive devices for individuals with hearing impairments (e.g., hearing aids and cochlear implants). In particular, there is increasing interest in cochlear implants for individuals with severe hearing loss who do not experience significant improvement in hearing even with the use of hearing aids. However, even after cochlear implant surgery, rehabilitation training is essential because the ability to hear sounds does not return immediately. Currently known rehabilitation training methods mainly involve listening repeatedly to recorded sounds (e.g., words, short sentences) and solving problems based on them. These methods are monotonous and inefficient, as they rely primarily on repetitive and uniform exercises, which may lead to user boredom. Meanwhile, even individuals who experience no discomfort in daily life due to hearing issues may seek to improve their auditory functions for various reasons (e.g., enhancing musical abilities, developing talents in infants and young children). Therefore, there is a need for user-friendly and effective auditory training (or auditory rehabilitation training) methods that take into account the individual characteristics of users, including those with hearing loss and/or those seeking to enhance their auditory functions.
Currently known rehabilitation training methods may include repeatedly providing sounds corresponding to words or short sentences with semantic meaning, and evaluating, based on user input corresponding to these sounds, whether the user has correctly perceived them.

[Description of the Invention]

[Problems to be Solved]

According to conventional auditory training methods, sounds corresponding to words or short sentences are provided, enabling individuals with hearing loss to listen to the sounds and check whether their auditory recognition results match the correct answers. However, there are cases where the auditory recognition results of individuals with hearing loss differ from the actual sounds. In such cases, individuals with hearing loss may realize that their recognition results differ from the actual sounds, but this realization alone does little to shorten the rehabilitation period. Individuals with hearing loss must engage in extended training sessions, listening to sounds repeatedly, in order to reduce the discrepancy between the sounds they perceive and the actual sounds. As a result, the process of auditory rehabilitation can be time-consuming. Therefore, there is a demand for technologies that visually present the characteristics of actual sounds along with the actual sounds themselves and/or receive input from individuals with hearing loss regarding the characteristics of the sounds. Methods that provide sounds corresponding to words or short sentences are also limited in their ability to cover a wide range of frequency bands. Many individuals with hearing loss struggle to perceive sounds in specific frequency bands. However, when conventional sounds corresponding to meaningful words or short sentences are provided, it may not be possible to deliver sounds across a wide frequency range, and/or sounds within the frequency bands where the individual with hearing loss has difficulty may not be provided.
Accordingly, there is a need for technologies that provide not only sounds corresponding to words or short sentences but also sounds corresponding to various frequencies and/or targeted frequency bands. The technical problems addressed by the present invention are not limited to the aforementioned issues, and additional technical problems not explicitly mentioned will be readily understood by those skilled in the art from the following descriptions.

[Means to Solve the Problem]

According to one embodiment, a method for auditory training may include: providing a test sound having a first auditory pattern for at least one characteristic of a sound; providing, through a user interface for receiving a visual pattern, a first visual object having a first visual pattern corresponding to a user input that is input through the user interface; and providing a second visual object having a second visual pattern corresponding to at least one characteristic of the first auditory pattern of the test sound, based on identifying an event for providing the second visual pattern corresponding to the test sound. A first sound, which is substantially synchronized with a detection time point of the user input and has at least one characteristic identified based on a position of the user input defined in the user interface, may be provided.
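The comparison step described in claims 6–7, in which the user-drawn first visual object is compared with the correct second visual object to surface a user's vulnerable points, can be sketched as follows. This is an assumed illustration, not the patent's implementation: representing each pattern as one pitch row per time column, and the mismatch tolerance, are hypothetical choices made for this example.

```python
# Illustrative sketch only (assumed representation, not from the patent text).

def compare_patterns(user_rows, answer_rows, tolerance=1):
    """Compare the first visual object (user_rows) against the second visual
    object (answer_rows) column by column. Each list holds one grid row per
    time column. Returns the per-column row deviation and the indices of the
    columns where the deviation exceeds `tolerance` -- a rough proxy for the
    user's vulnerable points (cf. claims 6-7)."""
    mismatches = [abs(u - a) for u, a in zip(user_rows, answer_rows)]
    vulnerable = [i for i, d in enumerate(mismatches) if d > tolerance]
    return mismatches, vulnerable
```

For example, if the user drew rows `[2, 3, 5, 8]` against a correct pattern of `[2, 4, 4, 4]`, the deviations would be `[0, 1, 1, 4]` and only the last column would be flagged, suggesting the user over-perceives pitch rises in that region. In a fuller system, flagged columns could be mapped back to frequency bands to drive the targeted-frequency training the description calls for.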