CN-122020513-A - Space fusion degree data processing method and system based on sound source localization
Abstract
The invention discloses a spatial fusion degree data processing method and system based on sound source localization, relating to the technical field of data processing. The method comprises: obtaining a target site and a microphone array, and deriving from them a spatial coordinate system, subspaces, and mobile units; obtaining the physical coordinates of the microphone nodes and collecting the sound source coordinates of each mobile unit; obtaining the unit offset distance between physical coordinates and sound source coordinates, averaging these into a group average offset distance, and deriving a stability index from it; obtaining the starting time point of each mobile unit to derive the typical time difference of each subspace, and from that the initial spatial fusion degree of each subspace; obtaining the target spatial fusion degree of each subspace from the stability index and the initial spatial fusion degree; and obtaining the total spatial fusion degree of the target site from the target spatial fusion degrees of the subspaces. The invention has the advantages of accurate quantification, dynamic association, and self-adaptation.
Inventors
- Zhao Luanyu
- Yang Zongde
- Hu Yi
Assignees
- 四川万博智汇云教育科技有限公司
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2025-12-29
Claims (10)
- 1. A method for processing spatial fusion degree data based on sound source localization, comprising: acquiring a target site and a microphone array arranged on the target site, acquiring a spatial coordinate system based on the microphone array, uniformly dividing the target site into a plurality of subspaces, acquiring a plurality of mobile units located in each subspace, acquiring the physical coordinates of each microphone node in the microphone array, and acquiring the sound source coordinates of each mobile unit; acquiring the unit offset distance between the physical coordinates of the microphone node closest to each mobile unit and the sound source coordinates of that mobile unit, averaging the unit offset distances of all mobile units, taking the average value as the group average offset distance, and acquiring a stability index based on the group average offset distance; acquiring the starting time point of each mobile unit, acquiring the typical time difference of each subspace according to the starting time points of the plurality of mobile units in the same subspace, and acquiring the initial spatial fusion degree of each subspace according to its typical time difference; and acquiring the target spatial fusion degree of each subspace according to the stability index and the initial spatial fusion degree of that subspace, and acquiring the total spatial fusion degree of the target site according to the target spatial fusion degrees of the subspaces.
- 2. The method for processing spatial fusion degree data based on sound source localization according to claim 1, wherein acquiring the unit offset distance between the physical coordinates of the microphone node closest to each mobile unit and the sound source coordinates of each mobile unit comprises: acquiring the absolute distance between the physical coordinates of the microphone node closest to the i-th mobile unit and the sound source coordinates of the i-th mobile unit, and defining that absolute distance as the unit offset distance of the i-th mobile unit.
- 3. The sound source localization-based spatial fusion degree data processing method of claim 1, wherein acquiring the stability index based on the group average offset distance comprises: acquiring a standard offset distance, subtracting the group average offset distance from the standard offset distance to obtain an offset difference, and dividing the offset difference by the standard offset distance to obtain a difference proportion; if the difference proportion is greater than zero, defining the difference proportion as the stability index; if the difference proportion is not greater than zero, defining 1 as the stability index.
- 4. The sound source localization-based spatial fusion degree data processing method according to claim 1, wherein acquiring the typical time difference of a subspace according to the starting time points of the plurality of mobile units in the same subspace comprises: acquiring the interval time difference between the starting time points of any two mobile units in the same subspace, and taking the maximum interval time difference as the typical time difference of that subspace.
- 5. The sound source localization-based spatial fusion degree data processing method according to claim 1, wherein acquiring the initial spatial fusion degree of each subspace according to its typical time difference comprises: acquiring the standard time difference of each subspace according to the maximum propagation distance of that subspace; and dividing the typical time difference of each subspace by its standard time difference to obtain the to-be-processed proportion of that subspace, and subtracting the to-be-processed proportion from 1 to obtain the initial spatial fusion degree of that subspace.
- 6. The method for processing spatial fusion degree data based on sound source localization according to claim 1, wherein acquiring the target spatial fusion degree of each subspace according to the stability index and the initial spatial fusion degree comprises: subtracting the stability index from 2 to obtain a correction proportion, and multiplying the correction proportion by the initial spatial fusion degree of each subspace to obtain the target spatial fusion degree of that subspace.
- 7. A spatial fusion degree data processing system based on sound source localization, the system comprising: an acquisition module, configured to acquire a target site and a microphone array arranged on the target site, acquire a spatial coordinate system based on the microphone array, uniformly divide the target site into a plurality of subspaces, acquire a plurality of mobile units located in each subspace, acquire the physical coordinates of each microphone node in the microphone array, and acquire the sound source coordinates of each mobile unit; a first data processing module, configured to acquire the unit offset distance between the physical coordinates of the microphone node closest to each mobile unit and the sound source coordinates of that mobile unit, average the unit offset distances of all mobile units, take the average value as the group average offset distance, and acquire a stability index based on the group average offset distance; a second data processing module, configured to acquire the starting time point of each mobile unit, acquire the typical time difference of each subspace according to the starting time points of the plurality of mobile units in the same subspace, and acquire the initial spatial fusion degree of each subspace according to its typical time difference; and a third data processing module, configured to acquire the target spatial fusion degree of each subspace according to the stability index and the initial spatial fusion degree of that subspace, and acquire the total spatial fusion degree of the target site according to the target spatial fusion degrees of the subspaces.
- 8. The sound source localization-based spatial fusion degree data processing system of claim 7, wherein the first data processing module is further configured to: acquire the absolute distance between the physical coordinates of the microphone node closest to the i-th mobile unit and the sound source coordinates of the i-th mobile unit, and define that absolute distance as the unit offset distance of the i-th mobile unit.
- 9. The sound source localization-based spatial fusion degree data processing system of claim 7, wherein the first data processing module is further configured to: acquire a standard offset distance, subtract the group average offset distance from the standard offset distance to obtain an offset difference, and divide the offset difference by the standard offset distance to obtain a difference proportion; if the difference proportion is greater than zero, define the difference proportion as the stability index; if the difference proportion is not greater than zero, define 1 as the stability index.
- 10. The sound source localization-based spatial fusion degree data processing system of claim 7, wherein the second data processing module is further configured to: acquire the standard time difference of each subspace according to the maximum propagation distance of that subspace; and divide the typical time difference of each subspace by its standard time difference to obtain the to-be-processed proportion of that subspace, and subtract the to-be-processed proportion from 1 to obtain the initial spatial fusion degree of that subspace.
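
The scalar arithmetic spelled out in claims 3–6 (stability index, typical time difference, initial and target fusion degrees) can be sketched in a few lines of Python. This is an illustrative reading of the claims, not code from the patent; all function and variable names are invented for clarity.

```python
from itertools import combinations

def stability_index(group_avg_offset: float, standard_offset: float) -> float:
    """Claim 3: difference proportion = (standard - group average) / standard;
    used as the stability index when positive, otherwise the index is 1."""
    diff_proportion = (standard_offset - group_avg_offset) / standard_offset
    return diff_proportion if diff_proportion > 0 else 1.0

def typical_time_difference(start_times: list[float]) -> float:
    """Claim 4: maximum interval between the starting time points
    of any two mobile units in the same subspace."""
    return max(abs(a - b) for a, b in combinations(start_times, 2))

def initial_fusion(typical_td: float, standard_td: float) -> float:
    """Claim 5: 1 minus (typical time difference / standard time difference)."""
    return 1.0 - typical_td / standard_td

def target_fusion(stability: float, initial: float) -> float:
    """Claim 6: correction proportion (2 - stability index)
    multiplied by the initial spatial fusion degree."""
    return (2.0 - stability) * initial
```

Read this way, a subspace whose performers start in near-perfect unison (typical time difference near zero) keeps an initial fusion degree near 1, and a group whose average positional offset stays below the standard offset yields a correction proportion above 1, raising the target fusion degree.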
Description
Space fusion degree data processing method and system based on sound source localization

Technical Field

The invention relates to the technical field of data processing, and in particular to a method and a system for processing spatial fusion degree data based on sound source localization.

Background

In large-group vocal performance (such as chorus groups and dramas), the spatial fusion degree is the overall sound field coordination formed by the propagation, superposition, and interaction of a plurality of mobile sound sources (such as chorus performers) in a specific physical space. The sound output positions of the mobile sound sources and the structure of the performed work jointly determine the final sound effect, and whether the sound field achieves a coordinated and balanced listening experience. For example, traditional works emphasize symmetry in the formation of the mobile sound sources: in some choral arrangements, men and women are placed in exactly equal halves, with the male voices carrying the accompaniment and the female voices carrying more of the melody; in performances of the Grand Song of the Dong people, the male and female vocal parts diverge and converge; and when instruments such as the bamboo flute or the two-stringed bowed erhu perform with a chorus, the changing formation and placement emphasize the spatial relationship between instrument and chorus group. Therefore, if the actual position of a moving sound source shifts, the final presentation is affected; existing methods rely on the microphone array only to collect sound signals, and so cannot accurately quantify and evaluate the spatial fusion degree of the performing group.
Specifically, the actual sounding position of a moving sound source (the vocal-fold vibration point) is offset by the performer's physical movement, but current methods compute only from the coordinates of fixed microphone nodes or preset performer positions; the action-induced offset of the sound source is not captured, so the spatial sound distribution data are distorted and the accuracy of the fusion analysis suffers. Secondly, existing schemes analyze synchrony through the variance of overall onset times but ignore temporal coordination within subspaces (such as vocal-part areas); if some performers in the same subspace delay their onset, existing methods cannot quantify the degree of local asynchrony, and the spatial fusion evaluation is too coarse. Furthermore, the collective motion stability of the mobile sound sources directly affects the sound effect, and the time differences reflect the synergy of sound arrival; the two jointly determine the spatial fusion degree, but the prior art does not correlate a motion stability index with the subspace time-difference state, so the final spatial fusion degree cannot truly reflect the coordination of the sound field. Finally, the differing contributions of subspaces to the overall fusion (for example, a central region matters more than the edges) are often reduced to uniform weights in the prior art, ignoring the spatial non-uniformity of the sound field energy distribution and deviating the evaluation result from the actual auditory experience.

Disclosure of Invention

Aiming at the technical problems described in the background, the invention provides a spatial fusion degree data processing method and system based on sound source localization.
A spatial fusion degree data processing method based on sound source localization comprises: obtaining a target site and a microphone array arranged on the target site, obtaining a spatial coordinate system based on the microphone array, evenly dividing the target site into a plurality of subspaces, obtaining a plurality of mobile units located in each subspace, obtaining the physical coordinates of the microphone nodes in the microphone array, and obtaining the sound source coordinates of each mobile unit; obtaining the unit offset distance between the physical coordinates of the microphone node closest to each mobile unit and the sound source coordinates of that mobile unit, averaging the unit offset distances of all mobile units as the group average offset distance, and obtaining a stability index based on the group average offset distance; obtaining the starting time point of each mobile unit, obtaining the typical time difference of each subspace according to the starting time points of the mobile units in the same subspace, and obtaining the initial spatial fusion degree of each subspace according to its typical time difference; and obtaining the target spatial fusion degree of each subspace according to the stability index and the initial spatial fusion degree, and obtaining the total spatial fusion degree of the target site according to the target spatial fusion degrees of the subspaces.
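
The geometric step that feeds the stability index, the unit offset distance of claims 1–2 (Euclidean distance from each mobile unit's sound source coordinate to its closest microphone node) and its group average, can be sketched as follows. The coordinate layout (tuples in the shared spatial coordinate system) is an assumption for illustration; none of these names appear in the patent.

```python
import math

def unit_offset(mic_nodes, source):
    """Claim 2: Euclidean distance from a mobile unit's sound source
    coordinate to the physical coordinate of its closest microphone node."""
    nearest = min(mic_nodes, key=lambda node: math.dist(node, source))
    return math.dist(nearest, source)

def group_average_offset(mic_nodes, sources):
    """Claim 1: mean of the unit offset distances over all mobile units,
    taken as the group average offset distance."""
    return sum(unit_offset(mic_nodes, s) for s in sources) / len(sources)
```

A usage example: with microphone nodes at (0, 0, 0) and (10, 0, 0) and sound sources at (1, 0, 0) and (9, 0, 0), each unit offset is 1.0 and the group average offset distance is 1.0, which would then be compared against the standard offset distance to derive the stability index.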