CN-115691550-B - Method for acquiring space direction parameters of sound generating device based on virtual reality
Abstract
In the virtual-reality-based method for acquiring spatial direction parameters of a sound emitting device, parameter analysis is performed only on the reference sound emission direction set, and the remaining data in the reference sound source data need not be processed. The workload of sound source data processing is therefore reduced while the accuracy of the sound source data to be processed is preserved: by cutting down the sound source data fragments to be processed, the data processing workload shrinks accordingly. In addition, because only the sound source data of the reference sound emission direction set is processed, the accuracy of identifying the sound emission direction improves, making the resulting sound emission direction parameters more precise.
Inventors
- WANG HAOHONG
- LI JIANHUA
- YANG WEI
Assignees
- 广州市影擎电子科技有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2022-10-25
Claims (9)
- 1. A method for acquiring spatial direction parameters of a sound emitting device based on virtual reality, characterized by at least comprising the following steps: obtaining, from reference sound source data, a to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal; selecting a reference associated sound source signal associated with a reference mined sound source signal from the at least one to-be-processed sound source signal, based on a first sound source data key description corresponding to the audio of the reference mined sound source signal and a second sound source data key description corresponding to each to-be-processed voiceprint data set; obtaining, from the reference sound source data, a reference sound emission direction set corresponding to the sound emission direction of the reference associated sound source signal, based on the to-be-processed voiceprint data set of the reference associated sound source signal; and performing parameter analysis on the reference sound emission direction set to determine sound emission direction parameters of the reference associated sound source signal; wherein obtaining, from the reference sound source data, the reference sound emission direction set corresponding to the sound emission direction of the reference associated sound source signal based on the to-be-processed voiceprint data set of the reference associated sound source signal comprises: generating second mapping relation data between the to-be-processed voiceprint data set and the reference sound emission direction set by combining first mapping relation data between audio and sound emission direction; and obtaining, from the reference sound source data, the reference sound emission direction set corresponding to the sound emission direction of the reference associated sound source signal by combining the second mapping relation data with the to-be-processed voiceprint data set.
- 2. The method according to claim 1, wherein obtaining, from the reference sound source data, a to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal comprises: performing important information identification on the reference sound source data, and generating reference important nodes corresponding to the audio of each to-be-processed sound source signal in the at least one to-be-processed sound source signal contained in the reference sound source data; and generating, for each to-be-processed sound source signal in the at least one to-be-processed sound source signal, the to-be-processed voiceprint data set corresponding to that signal based on its reference important nodes.
- 3. The method according to claim 1 or 2, wherein selecting the reference associated sound source signal associated with the reference mined sound source signal from the at least one to-be-processed sound source signal, based on the first sound source data key description corresponding to the audio of the reference mined sound source signal and the second sound source data key description corresponding to each to-be-processed voiceprint data set, comprises: associating the first sound source data key description corresponding to the audio of the reference mined sound source signal with the second sound source data key description corresponding to each to-be-processed voiceprint data set; and determining, as the reference associated sound source signal, the to-be-processed sound source signal corresponding to the second sound source data key description associated with the first sound source data key description.
- 4. The method according to claim 3, further comprising, before the to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal is obtained from the reference sound source data: obtaining mined sound source data, and generating, in combination with the mined sound source data, the reference mined sound source signal and the first sound source data key description corresponding to the audio of the reference mined sound source signal in the mined sound source data.
- 5. The method according to claim 4, wherein generating the reference mined sound source signal in combination with the mined sound source data comprises: performing important information identification on the mined sound source data, and generating reference important nodes corresponding to the audio of each original sound source signal in the at least one original sound source signal contained in the mined sound source data; and selecting and determining the reference mined sound source signal from the at least one original sound source signal based on the important node bias degree information of the reference important nodes corresponding to the audio of each original sound source signal.
- 6. The method according to claim 3, further comprising: in response to determining that no second sound source data key description is associated with the first sound source data key description, generating a new reference mined sound source signal based on the important node bias degree information of the reference important nodes corresponding to the audio of each to-be-processed sound source signal.
- 7. The method according to claim 6, wherein performing parameter analysis on the reference sound emission direction set to determine the sound emission direction parameters of the reference associated sound source signal comprises: reading, by combining the reference sound emission direction set, sound source characteristic data corresponding to the reference sound source data from the reference sound source data; acquiring a sound source vibration description and a sound source azimuth description of the sound source characteristic data through a pre-configured artificial intelligence thread, comprising obtaining sound source vibration descriptions corresponding one-to-one to a plurality of mutually different characteristic indications in the sound source characteristic data, wherein the plurality of characteristic indications comprise adjacent first and second characteristic indications, the first characteristic indication being smaller than the second characteristic indication; the sound source vibration description corresponding to the first characteristic indication is obtained by combining the sound source vibration description corresponding to the second characteristic indication with the sound source azimuth description of the sound source vibration description corresponding to the second characteristic indication, and the sound source azimuth description of the sound source vibration description corresponding to the second characteristic indication is obtained through the pre-configured artificial intelligence thread; and performing parameter analysis on the sound source characteristic data based on the sound source vibration descriptions corresponding one-to-one to the mutually different characteristic indications, and determining the sound emission direction parameters of the reference associated sound source signal.
- 8. The method according to claim 7, wherein performing parameter analysis on the sound source characteristic data based on the sound source vibration descriptions corresponding one-to-one to the mutually different characteristic indications comprises: generating, for each characteristic indication among the mutually different characteristic indications, a first sound source analysis result of the sound source characteristic data under that characteristic indication based on the sound source vibration description corresponding to it; generating, by combining the first sound source analysis results of the sound source characteristic data under the respective characteristic indications, the likelihood that each sound source localization in the sound source characteristic data corresponds to the sound emission direction; and performing parameter analysis on the sound source characteristic data by combining the sound source localization likelihood of each sound source localization corresponding to the sound emission direction with a specified mining likelihood vector.
- 9. The method according to claim 8, wherein generating, by combining the first sound source analysis results of the sound source characteristic data under the respective characteristic indications, the likelihood that each sound source localization in the sound source characteristic data corresponds to the sound emission direction comprises: determining said likelihood after performing multi-round splicing processing according to the mutually different characteristic indications, wherein the m-th round of the multi-round splicing processing comprises: generating first bias degree information of the first sound source analysis result under the first characteristic indication; and splicing the first sound source analysis result under the first characteristic indication with the first sound source analysis result under the second characteristic indication through said first bias degree information, to determine a reference sound source analysis result under the second characteristic indication; wherein, after the sound emission direction parameters of the reference associated sound source signal are determined, the method further comprises: configuring, by combining the sound emission direction parameters, the analysis variable of the sound source localization corresponding to the sound emission direction in the reference sound source data as a first reference variable; and configuring the analysis variables of the sound source localizations other than the sound emission direction in the reference sound source data as second reference variables.
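Claims 7 to 9 describe a coarse-to-fine analysis over several characteristic indications, where the analysis result under one indication is spliced into the result under the next through bias degree information. A minimal Python sketch of the splicing rounds, assuming (purely for illustration, since the claims do not specify either) that the bias degree is the mean absolute magnitude of a result and that "splicing" is a weighted element-wise sum:

```python
def first_bias_degree(result):
    # Assumed proxy for the "first bias degree information" of a first
    # sound source analysis result: its mean absolute magnitude.
    return sum(abs(x) for x in result) / len(result)

def splice_rounds(results):
    """Multi-round splicing (claim 9, sketched): each round fuses the
    result under the current characteristic indication into the result
    under the next one, weighted by the assumed bias degree."""
    fused = results[0]
    for nxt in results[1:]:
        w = first_bias_degree(fused)
        fused = [w * a + b for a, b in zip(fused, nxt)]
    return fused
```

The final fused result would then feed the likelihood computation of claim 8; the actual bias degree and splicing operator are left open by the claims.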
Description
Method for acquiring space direction parameters of sound generating device based on virtual reality
Technical Field
The application relates to the technical field of data acquisition, and in particular to a method for acquiring spatial direction parameters of a sound emitting device based on virtual reality.
Background
Virtual reality (VR) technology is a computer simulation technique for creating and experiencing a virtual world: a computer generates a simulated environment into which the user is immersed. It combines data from real life with electronic signals generated by computer technology and various output devices, converting those signals into phenomena that people can perceive; the phenomena may be real, tangible objects or substances invisible to the naked eye, presented through three-dimensional models. In real life, sound travels in different directions, which gives the heard sound a sense of layering. However, because sound propagates along different paths, various interference factors arise during transmission, so the acquired sound emission direction parameters may be inaccurate. A technical solution is therefore needed to address this problem.
Disclosure of Invention
In order to address the above problems in the related art, the application provides a method for acquiring spatial direction parameters of a sound emitting device based on virtual reality.
According to a first aspect, a method for acquiring spatial direction parameters of a sound emitting device based on virtual reality is provided, comprising at least: obtaining, from reference sound source data, a to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal; selecting a reference associated sound source signal associated with a reference mined sound source signal from the at least one to-be-processed sound source signal, based on a first sound source data key description corresponding to the audio of the reference mined sound source signal and a second sound source data key description corresponding to each to-be-processed voiceprint data set; obtaining, from the reference sound source data, a reference sound emission direction set corresponding to the sound emission direction of the reference associated sound source signal based on the to-be-processed voiceprint data set of the reference associated sound source signal; and performing parameter analysis on the reference sound emission direction set to determine the sound emission direction parameters of the reference associated sound source signal.
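The association step of the first aspect (matching the first key description against each second key description) can be sketched as follows. Treating a key description as a numeric vector and using cosine similarity are illustrative assumptions; the text only requires that the descriptions be "associated", without fixing a representation or a similarity measure.

```python
import math

def cosine(a, b):
    # Cosine similarity between two key-description vectors (assumed form).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_reference_associated(first_desc, second_descs):
    """Return the index of the to-be-processed sound source signal whose
    second key description best matches the first key description."""
    sims = [cosine(first_desc, d) for d in second_descs]
    return max(range(len(sims)), key=sims.__getitem__)
```

In a full implementation one would likely also apply an association threshold, since claim 6 covers the case where no second key description is associated with the first.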
In an independent embodiment, obtaining, from the reference sound source data, the to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal includes: performing important information identification on the reference sound source data, generating reference important nodes corresponding to the audio of each to-be-processed sound source signal in the at least one to-be-processed sound source signal included in the reference sound source data, and generating, for each to-be-processed sound source signal in the at least one to-be-processed sound source signal, the to-be-processed voiceprint data set corresponding to that signal based on its reference important nodes. In an independent embodiment, selecting the reference associated sound source signal associated with the reference mined sound source signal from the at least one to-be-processed sound source signal, based on the first sound source data key description corresponding to the audio of the reference mined sound source signal and the second sound source data key description corresponding to each to-be-processed voiceprint data set, comprises: associating the first sound source data key description corresponding to the audio of the reference mined sound source signal with the second sound source data key description corresponding to each to-be-processed voiceprint data set, and determining, as the reference associated sound source signal, the to-be-processed sound source signal corresponding to the second sound source data key description associated with the first sound source data key description.
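The "important information identification" in this step is not specified further. As a minimal, assumed stand-in, one can flag audio samples whose amplitude exceeds a threshold as the "reference important nodes" and collect the flagged samples as the to-be-processed voiceprint data set; real systems would use a far richer detector.

```python
def reference_important_nodes(samples, threshold=0.5):
    # Assumed stand-in for important information identification:
    # indices whose absolute amplitude exceeds the threshold.
    return [i for i, s in enumerate(samples) if abs(s) > threshold]

def to_be_processed_voiceprint_set(samples, threshold=0.5):
    """Build the voiceprint data set for one to-be-processed sound source
    signal from its reference important nodes (illustrative only)."""
    return [samples[i] for i in reference_important_nodes(samples, threshold)]
```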
In an independent embodiment, before the to-be-processed voiceprint data set corresponding to the audio of at least one to-be-processed sound source signal is obtained from the reference sound source data, the method further comprises: obtaining mined sound source data, and generating, in combination with the mined sound source data, the reference mined sound source signal and the first sound source data key description corresponding to the audio of the reference mined sound source signal in the mined sound source data.
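Claim 5 refines this step: the reference mined sound source signal is selected from the original signals by the important node bias degree of each signal's audio. A sketch, assuming (purely for illustration, since the text leaves the measure open) that the bias degree is the count of important nodes and the signal with the largest count is selected:

```python
def node_bias_degree(important_nodes):
    # Assumed bias degree: number of important nodes flagged for the audio.
    return len(important_nodes)

def select_reference_mined_signal(node_lists):
    """Pick the original sound source signal with the largest assumed
    important node bias degree (claim 5, sketched)."""
    return max(range(len(node_lists)),
               key=lambda i: node_bias_degree(node_lists[i]))
```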