US-12621603-B2 - Information processing method and information processing apparatus
Abstract
An information processing method obtains first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined space, obtains second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined space, and obtains direction information that indicates a direction of the sound beam to be outputted from the acoustic device; calculates a locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, and the direction information that have been obtained; and generates a sound beam image that shows the locus of the sound beam, based on a result of calculation.
Inventors
- Junya Matsushita
- Yuki Suemitsu
Assignees
- YAMAHA CORPORATION
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2023-03-17
- Priority Date
- 2022-03-18
Claims (20)
- 1 . An information processing method comprising: obtaining first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined space; obtaining second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined space; obtaining direction information that indicates a direction of the sound beam to be outputted from the acoustic device; calculating a locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, and the direction information that have been obtained; generating a sound beam image that shows the locus of the sound beam, based on a result of the calculating; obtaining characteristic information that indicates a degree of sound absorption of the at least one of the ceiling surface, the wall surface, or the floor surface; and varying a visual display of a reflection image showing a reflection of the sound beam reflecting off of at least one of the ceiling surface, the wall surface, or the floor surface, based on the degree of the sound absorption.
- 2 . The information processing method according to claim 1 , further comprising: calculating, based on the first position information, the second position information, and the direction information: a position of the reflection of the sound beam on the at least one of the ceiling surface, the wall surface, or the floor surface, and a locus of the sound beam after the reflection, wherein the sound beam image includes the reflection image that shows the locus of the sound beam after the reflection.
- 3 . The information processing method according to claim 1 , further comprising: obtaining first image data by capturing the at least one of the ceiling surface, the wall surface, or the floor surface; and performing first image processing to recognize the at least one of the ceiling surface, the wall surface, or the floor surface from the first image data, wherein the first position information is obtained based on a result of the first image processing.
- 4 . The information processing method according to claim 1 , further comprising: obtaining second image data by capturing the acoustic device; and performing second image processing to recognize the acoustic device from the second image data, wherein the second position information is obtained based on a result of the second image processing.
- 5 . The information processing method according to claim 1 , further comprising: obtaining camera image data by capturing by a camera; generating a display image from the camera image data; performing processing to superimpose the sound beam image on the display image; and outputting the display image on which the sound beam image is superimposed.
- 6 . The information processing method according to claim 1 , further comprising: obtaining user position information that indicates a user position, wherein the locus of the sound beam to be outputted from the acoustic device is calculated based on the first position information, the second position information, the direction information, and the user position information that have been obtained.
- 7 . The information processing method according to claim 1 , wherein the sound beam image is varied based on at least one of a channel of the sound beam, a volume of the sound beam, or frequency characteristics of the sound beam.
- 8 . The information processing method according to claim 1 , wherein: obtaining the first position information, obtaining the second position information, obtaining the direction information, calculating the locus of the sound beam, and generating the sound beam image are performed by a first apparatus; the method further comprising: obtaining, by a second apparatus, the sound beam image generated by the first apparatus; and displaying, by the second apparatus, the sound beam image on a display.
- 9 . An information processing apparatus comprising: at least one processor configured to: obtain first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined space; obtain second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined space; obtain direction information that indicates a direction of the sound beam to be outputted from the acoustic device; calculate a locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, and the direction information that have been obtained; generate a sound beam image that shows the locus of the sound beam, based on a result of calculation; obtain characteristic information that indicates a degree of sound absorption of the at least one of the ceiling surface, the wall surface, or the floor surface; and vary a visual display of a reflection image showing a reflection of the sound beam reflecting off of at least one of the ceiling surface, the wall surface, or the floor surface, based on the degree of the sound absorption.
- 10 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: calculate, based on the first position information, the second position information, and the direction information: a position of the reflection of the sound beam on the at least one of the ceiling surface, the wall surface, or the floor surface, and a locus of the sound beam after the reflection; and wherein the sound beam image includes the reflection image that shows the locus of the sound beam after the reflection.
- 11 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: obtain first image data by capturing the at least one of the ceiling surface, the wall surface, or the floor surface; perform first image processing to recognize the at least one of the ceiling surface, the wall surface, or the floor surface from the first image data; and obtain the first position information, based on a result of the first image processing.
- 12 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: obtain second image data by capturing the acoustic device; perform second image processing to recognize the acoustic device from the second image data; and obtain the second position information, based on a result of the second image processing.
- 13 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: obtain camera image data by capturing by a camera; generate a display image from the camera image data; perform processing to superimpose the sound beam image on the display image; and output the display image on which the sound beam image is superimposed.
- 14 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: obtain user position information that indicates a user position; and calculate the locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, the direction information, and the user position information that have been obtained.
- 15 . The information processing apparatus according to claim 9 , wherein the at least one processor is further configured to: vary the sound beam image, based on at least one of a channel of the sound beam, a volume of the sound beam, or frequency characteristics of the sound beam.
- 16 . The information processing apparatus according to claim 9 , wherein: a first processor of the at least one processor is configured to obtain the first position information, obtain the second position information, obtain the direction information, calculate the locus of the sound beam, and generate the sound beam image; and a second processor of the at least one processor is configured to: obtain the sound beam image generated by the first processor that is different from the second processor; and display an obtained sound beam image on a display.
- 17 . An information processing method comprising: obtaining first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined real space; obtaining second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined real space; obtaining direction information that indicates a direction in a three-dimensional rectangular coordinate system of the sound beam to be outputted from the acoustic device; calculating a locus of the sound beam to be outputted from the acoustic device and a locus of a reflected sound beam reflected off of the at least one of the ceiling surface, the wall surface, or the floor surface, based on the first position information, the second position information, and the direction information that have been obtained, wherein the calculating matches the three-dimensional rectangular coordinate system with a position of two-dimensional coordinates of a display; and superimposing an image of a sound beam image that shows the locus of the sound beam and the locus of the reflected sound beam onto the predetermined real space by generating a sound beam image on the display, based on a result of the calculating.
- 18 . The information processing method according to claim 17 , comprising: varying a visual display of the reflected sound beam based at least on a degree of sound absorption of the at least one of the ceiling surface, the wall surface, or the floor surface.
- 19 . The information processing method according to claim 17 , wherein: the three-dimensional rectangular coordinate system is a three-dimensional rectangular coordinate system of the real space visible from the display.
- 20 . The information processing method according to claim 17 , comprising: superimposing the image of the sound beam image on the display onto the real space visible through the display.
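Claims 1, 2, and 17 recite calculating the locus of a sound beam from the device's position and beam direction, together with the point and locus of its reflection off a ceiling, wall, or floor surface. The patent does not specify the computation, but the geometry described corresponds to a standard ray-plane intersection followed by specular reflection. The sketch below illustrates that reading; the function names and the plane representation (a point plus a unit normal) are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the locus calculation in claims 1, 2, and 17:
# the sound beam is modeled as a ray from the acoustic device's position
# (second position information) along the beam direction (direction
# information); its reflection off a planar surface (first position
# information) is found by ray-plane intersection, then specular reflection.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def reflect_ray(origin, direction, plane_point, plane_normal):
    """Return (hit_point, reflected_direction) for a ray against a plane,
    or None if the ray is parallel to the plane or the plane lies behind
    the device. plane_normal is assumed to be unit length."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                     # beam parallel to the surface
    t = dot(sub(plane_point, origin), plane_normal) / denom
    if t <= 0:
        return None                     # surface is behind the device
    hit = add(origin, scale(direction, t))
    # specular reflection: d' = d - 2 (d . n) n
    reflected = sub(direction, scale(plane_normal, 2 * dot(direction, plane_normal)))
    return hit, reflected

# Example: beam from a speaker at the origin, aimed up toward a ceiling
# surface at height 2.4 m (normal pointing down along +Z is equivalent here).
hit, refl = reflect_ray((0.0, 0.0, 0.0), (0.0, 0.6, 0.8),
                        (0.0, 0.0, 2.4), (0.0, 0.0, 1.0))
```

The locus of the beam before reflection is the segment from the device position to `hit`; the locus after reflection (the reflection image of claim 2) continues from `hit` along `refl`, and the same step can be repeated against the other room surfaces for further bounces.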
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This Nonprovisional application claims priority under 35 U.S.C. § 119(a) to Japanese Patent Application No. 2022-044126 filed on Mar. 18, 2022, the entire content of which is hereby incorporated by reference.

BACKGROUND

Technical Field

An embodiment of the present disclosure relates to an information processing method and an information processing apparatus.

Background Information

International Publication No. 2021/241421 discloses a sound processing apparatus that obtains an image of an acoustic space. The sound processing apparatus sets a plane and a virtual speaker from the image of the acoustic space. The sound processing apparatus calculates sound pressure distribution from characteristics of the virtual speaker, and generates an image in which the sound pressure distribution is overlapped with the plane.

Japanese Unexamined Patent Application Publication No. 2008-035251 discloses a speaker apparatus and a remote controller. The speaker apparatus measures a position of the remote controller. The speaker apparatus directs a sound beam to the position of the remote controller.

A user cannot visually recognize a direction of the sound beam to be outputted from an acoustic device such as a speaker.

SUMMARY

An embodiment of the present disclosure is directed to provide an information processing method in which a user can visually recognize a direction of a sound beam to be outputted from an acoustic device such as a speaker.

An information processing method according to an embodiment of the present disclosure obtains first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined space, obtains second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined space, and obtains direction information that indicates a direction of the sound beam to be outputted from the acoustic device; calculates a locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, and the direction information that have been obtained; and generates a sound beam image that shows the locus of the sound beam, based on a result of calculation.

According to the information processing method according to an embodiment of the present disclosure, a user can visually recognize a direction of a sound beam to be outputted from a speaker.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing an example of connection between MR goggles 1 and a speaker 2.
FIG. 2 is a block diagram showing an example of a configuration of the MR goggles 1.
FIG. 3 is a block diagram showing an example of a configuration of the speaker 2.
FIG. 4 is a perspective view showing a sound beam B1 outputted in a space Sp.
FIG. 5 is a plan view of the space Sp.
FIG. 6 is a perspective view showing an example of an angle θ and an angle φ of the sound beam B1 in an X′ axis, a Y′ axis, and a Z′ axis with reference to the speaker 2.
FIG. 7 is a diagram showing a functional configuration of a processor 13.
FIG. 8 is a flow chart showing an example of processing of the MR goggles 1.
FIG. 9 is a view showing the sound beam B1 and a sound beam B2 that have been outputted in the space Sp.
FIG. 10 is a view showing an image of the speaker 2, a ceiling surface CS, a wall surface WS, and a floor surface FS that have been captured by a capturing camera different from the MR goggles 1.

DETAILED DESCRIPTION

First Embodiment

Hereinafter, MR (Mixed Reality) goggles 1 that execute an information processing method according to a first embodiment will be described with reference to the drawings. FIG. 1 is a block diagram showing an example of connection between the MR goggles 1 and a speaker 2. FIG. 2 is a block diagram showing an example of a configuration of the MR goggles 1. FIG. 3 is a block diagram showing an example of a configuration of the speaker 2. FIG. 4 is a perspective view showing a sound beam B1 outputted in a space Sp.

The MR goggles 1 are an example of an information processing apparatus. A user wearing the MR goggles 1 can visually recognize an image being displayed on the MR goggles 1 while visually recognizing a real space through the MR goggles 1. As shown in FIG. 1, the MR goggles 1 are connected to the speaker 2 (an example of an acoustic device). Specifically, the MR goggles 1 are connected to the speaker 2 by wireless such as Bluetooth (registered trademark) or Wi-Fi (registered trademark). It is to be noted that the MR goggles 1 do not necessarily need to be connected to the speaker 2 by wireless. The MR goggles 1 may be connected to the speaker 2 by wire. It is to be noted that the MR goggles 1 may be connected to a device (a PC, a smartphone, or the like, for example) other than the speaker 2, in addition to the speaker 2.

As shown in FIG. 2, the MR goggles 1 include a communication interface 10, a
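FIG. 6 describes the direction information as an angle θ and an angle φ in speaker-referenced X′, Y′, and Z′ axes, and claim 1 recites varying the visual display of the reflection image according to the surface's degree of sound absorption. The sketch below illustrates one plausible reading of both: θ is taken as azimuth in the X′-Y′ plane and φ as elevation from that plane (the patent does not define the angle convention), and the reflection image's opacity is faded linearly with the absorption coefficient, which is likewise an assumed rendering rule rather than one stated in the patent.

```python
import math

# Illustrative assumptions only: the angle convention (θ = azimuth,
# φ = elevation) and the linear opacity fade are not specified by the patent.

def beam_direction(theta_deg, phi_deg):
    """Unit direction vector for a beam at azimuth θ and elevation φ,
    in the speaker-referenced X', Y', Z' axes of FIG. 6."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))

def reflection_opacity(absorption_coefficient, base_opacity=1.0):
    """Fade the reflected-beam image as the surface absorbs more sound:
    a fully absorptive surface (coefficient 1.0) draws no reflection."""
    return base_opacity * max(0.0, 1.0 - absorption_coefficient)

d = beam_direction(90.0, 0.0)      # beam along the Y' axis
alpha = reflection_opacity(0.3)    # e.g. a moderately absorptive wall surface
```

Under this reading, the generated sound beam image would draw the pre-reflection locus at full opacity and the reflection image at `alpha`, so harder, more reflective surfaces show a stronger reflected beam than absorptive ones.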