CN-121985256-A - AI sound effect system adaptive to seat position and angle and adjusting method

CN 121985256 A

Abstract

The invention discloses an AI sound effect system that adapts to seat position and angle, comprising a data acquisition layer, an AI algorithm layer, and an execution control layer. The data acquisition layer acquires the seat's geometric parameters and ambient noise information in real time and, from the geometric parameters, calculates the real-time spatial coordinates of the occupant's ears relative to the loudspeakers. The AI algorithm layer comprises a CNN-LSTM hybrid neural network model that receives the real-time spatial coordinates and the ambient noise information as input and outputs sound-effect adjustment parameters: a loudness gain G, channel delays τ, and equalizer (EQ) parameters θ. The execution control layer processes and renders the multi-channel audio signal in real time according to these parameters and outputs the processed signal to the loudspeakers for playback. The invention solves the prior-art problem that the sound effect cannot dynamically adapt in real time to changes in seat position, realizes automatic real-time adaptation of the sound effect, effectively avoids sound-field localization offset and frequency-response distortion, and improves the comfort and personalization of the in-vehicle audio experience.

Inventors

  • YU HAO

Assignees

  • 上海瑞和锋电子科技有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-01-27

Claims (6)

  1. An AI sound effect system adaptive to seat position and angle, comprising: a data acquisition layer for acquiring the seat's geometric parameters and ambient noise information in real time and calculating, from the geometric parameters, the real-time spatial coordinates of the occupant's ears relative to the loudspeakers; an AI algorithm layer comprising a CNN-LSTM hybrid neural network model that receives the real-time spatial coordinates and the ambient noise information as input and outputs sound-effect adjustment parameters, the sound-effect adjustment parameters comprising a loudness gain G, channel delays τ, and equalizer (EQ) parameters θ; and an execution control layer for processing and rendering the multi-channel audio signal in real time according to the sound-effect adjustment parameters and outputting the processed signal to the loudspeakers for playback.
  2. The AI sound effect system of claim 1, wherein the data acquisition layer comprises: an angle sensor for measuring the body-leg angle α of the seat and the upward tilt angle β of the cushion; a displacement sensor for measuring the fore-aft displacement x and the height displacement z of the seat; and a noise sensor for collecting the ambient noise information.
  3. The AI sound effect system of claim 2, wherein the real-time spatial coordinates are calculated as follows: a three-dimensional coordinate system is established with the seat design reference point (R point) as the origin, and the spatial coordinates of the occupant's ears relative to the loudspeakers are calculated as a function of the seat parameters, (x_e, y_e, z_e) = f(x, L, α, β, h, z_R), where x denotes the fore-aft displacement of the seat, L the horizontal distance from the R point to the occupant's ear, α the body-leg angle, β the upward tilt angle of the cushion, h the height of the R point along the Y axis, and z_R the axial coordinate of the R point.
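The explicit coordinate formula of claim 3 is not reproduced in this record (it was a formula image in the original), so the geometry below is a hypothetical reconstruction: it assumes the ear lies at distance L along the backrest, whose recline from vertical follows the body-leg angle α plus the cushion tilt β; the default values of L and h are illustrative only.

```python
import math

def ear_position(x, z, alpha_deg, beta_deg, L=0.55, h=0.62):
    """Estimate the occupant-ear position in the seat coordinate system.

    Assumed relation (not the patent's exact formula): the backrest
    recline from vertical is (alpha + beta - 90) degrees, and the ear
    sits at distance L along the backrest above the R point.
    x, z : fore-aft and vertical seat displacement (m)
    L, h : R-point-to-ear distance and R-point height (assumed, m)
    """
    phi = math.radians(alpha_deg + beta_deg - 90.0)
    x_e = x - L * math.sin(phi)      # ear moves rearward as the seat reclines
    y_e = h + z + L * math.cos(phi)  # ear height above the floor reference
    return x_e, y_e

def ear_to_speaker_distance(ear, speaker):
    # Euclidean distance, later consumed by the gain/delay compensation
    return math.dist(ear, speaker)
```

With an upright seat (α = 90°, β = 0°) the ear sits directly above the R point at height h + L, which is the sanity check one would expect of any such mapping.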
  4. The AI sound effect system of claim 3, wherein the AI algorithm layer performs the following: the spatial coordinates and the ambient noise information are input into a pre-trained CNN-LSTM hybrid neural network model, the loudness gain G, the channel delay τ, and the equalizer EQ parameters θ are output, and the parameter set {G, τ, θ} is used to dynamically adjust the audio signal in real time; the loudness gain G fuses distance attenuation with AI compensation to correct the loudness attenuation caused by distance, using the formula G = 20·log₁₀(d/d₀) + ΔG_AI(N), where d₀ is the reference distance, d is the real-time ear distance, ΔG_AI is the compensation value output by the CNN-LSTM hybrid neural network model, and N is the ambient noise information; the channel delay τ corrects sound-field localization deviation and ensures that sound from each channel reaches the ears synchronously, using the formula τ = (d − d₀)/c + Δτ_AI, where d is the real-time ear distance, d₀ is the reference distance, c is the speed of sound, and Δτ_AI is the channel delay compensation value optimized by the CNN-LSTM hybrid neural network model; and the equalizer EQ parameter θ is a spatial rotation angle parameter supporting three-dimensional sound-field localization calculation.
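The gain and delay relations of claim 4 (whose formula images are not reproduced in this record) are consistent with standard distance compensation, and can be sketched as follows; the AI compensation terms ΔG_AI and Δτ_AI are shown as plain inputs standing in for the model's outputs.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def loudness_gain(d, d0, delta_g_ai=0.0):
    """G = 20*log10(d/d0) + dG_AI (dB): compensates inverse-distance
    attenuation; dG_AI is the model's noise-dependent correction."""
    return 20.0 * math.log10(d / d0) + delta_g_ai

def channel_delay(d, d0, delta_tau_ai=0.0, c=SPEED_OF_SOUND):
    """tau = (d - d0)/c + dtau_AI (s): aligns per-channel arrival
    times at the ear; dtau_AI is the model-optimized residual."""
    return (d - d0) / c + delta_tau_ai
```

Doubling the ear distance, for instance, requires roughly +6 dB of gain before the AI correction, which matches the inverse-distance law the claim describes.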
  5. The AI sound effect system of claim 3, wherein the training of the CNN-LSTM hybrid neural network model employs a multi-objective loss function Loss = w₁‖G_pred − G_true‖² + w₂‖τ_pred − τ_true‖² + w₃‖θ_pred − θ_true‖², where Loss is the multi-objective loss function, w₁, w₂, and w₃ are the weighting coefficients of the loudness gain, the channel delay, and the equalizer parameters respectively, G_pred, τ_pred, and θ_pred are the model-predicted loudness gain, channel delay, and equalizer EQ parameters, G_true, τ_true, and θ_true are the corresponding ground-truth values, and ‖·‖ is the Euclidean norm.
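A minimal sketch of the multi-objective loss of claim 5, assuming the weighted sum of squared Euclidean norms reconstructed above; the tuple layout (G, τ, θ) and the default unit weights are illustrative assumptions, not part of the claim.

```python
def multi_objective_loss(pred, true, w=(1.0, 1.0, 1.0)):
    """Loss = w1*(G_p - G_t)^2 + w2*||tau_p - tau_t||^2
              + w3*||theta_p - theta_t||^2
    pred/true: (G, tau, theta) tuples, where G is a scalar and
    tau, theta are per-channel / per-band lists."""
    def sq_norm(a, b):
        # squared Euclidean distance between two equal-length vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    (g_p, tau_p, th_p), (g_t, tau_t, th_t) = pred, true
    w1, w2, w3 = w
    return (w1 * (g_p - g_t) ** 2
            + w2 * sq_norm(tau_p, tau_t)
            + w3 * sq_norm(th_p, th_t))
```

In training, the three weights let the designer trade off loudness accuracy against delay and EQ accuracy; the claim leaves their values open.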
  6. A method of adjusting the AI sound effect system adaptive to seat position and angle of any of claims 1-5, comprising the steps of: S1, collecting the seat's geometric parameters and ambient noise information in real time; S2, calculating the real-time spatial coordinates of the occupant's ears relative to the loudspeakers from the acquired geometric parameters; S3, inputting the real-time spatial coordinates and the ambient noise information into the pre-trained CNN-LSTM hybrid neural network model; S4, obtaining the loudness gain G, the channel delay τ, and the EQ parameters θ output by the CNN-LSTM hybrid neural network model, and rendering and playing the multi-channel audio signals in real time using the parameter set {G, τ, θ}.
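Steps S1 through S4 form one pass of a closed control loop. A hypothetical skeleton of that loop is shown below; the sensor reader, coordinate mapping, trained CNN-LSTM inference call, and audio renderer are all stand-in callables, since the patent does not specify their interfaces.

```python
def adjustment_cycle(read_sensors, to_ear_coords, model_infer, render):
    """One pass of the S1-S4 adjustment method (hypothetical interfaces).

    read_sensors  : () -> dict of seat geometry + ambient noise
    to_ear_coords : params -> ear coordinates relative to the speakers
    model_infer   : (coords, noise) -> (G, tau, theta)
    render        : applies {G, tau, theta} to the multi-channel audio
    """
    params = read_sensors()                               # S1: acquire geometry + noise
    coords = to_ear_coords(params)                        # S2: ear coordinates
    g, tau, theta = model_infer(coords, params["noise"])  # S3: CNN-LSTM inference
    return render(gain=g, delay=tau, eq=theta)            # S4: real-time rendering
```

In a deployed system this cycle would run continuously so that every seat adjustment is reflected in the rendered sound field without manual mode switching, which is the closed loop the background section says the prior art lacks.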

Description

AI sound effect system adaptive to seat position and angle and adjusting method

Technical Field

The invention relates to the technical field of automotive electronics and intelligent audio, and in particular to an AI sound effect system that self-adapts to seat position and angle, and an adjusting method therefor.

Background

With the continuous evolution of automotive intelligent-cockpit technology, user demand for in-vehicle sound experience has gradually shifted from static, fixed presets to dynamic real-time adaptation. Prior-art systems, however, face multiple obstacles in making this transition. Traditional in-vehicle sound systems are generally managed through preset modes, such as a driver-focused mode or a rear-row optimization mode; such schemes have obvious limitations when the seat slides forward or backward on its rails or when its height or angle is adjusted. When the spatial relationship between the occupant and the loudspeakers changes with parameters such as the seat-back inclination or the headrest height, the sound-field localization shifts significantly, for example the vocal image drifting from the central area toward one side, accompanied by distortion of the frequency-response characteristics, particularly nonlinear attenuation of low-frequency energy.
In addition, existing systems cannot effectively integrate seat pressure-distribution data with the occupant's sitting-posture angles, and therefore cannot accurately identify differences in occupant body type or dynamic posture changes. For example, when an occupant leans forward to work or reclines to rest, the sensitivity with which the ears receive sound changes, yet the system lacks a targeted sound-effect compensation strategy, so a single unified sound scheme cannot satisfy diverse listening demands. Prior-art architectures rely on manual sound-mode switching after seat adjustment is completed, forming a passive response mechanism: no real-time closed loop from data acquisition to feedback execution is constructed, no AI-driven dynamic decision-making is introduced, the adjustment process exhibits obvious delay, and personalized optimization based on the occupant's historical listening preferences is impossible, resulting in a fragmented overall experience. Although some technical schemes attempt to improve sound performance through sound-field calibration, these methods cannot establish an accurate mapping between dynamic seat parameters and sound-effect compensation parameters, and do not integrate an AI real-time decision mechanism, so they cannot systematically solve the combined problems of dynamic adjustment, real-time adaptation, and personalized optimization.
Disclosure of Invention

The invention aims to provide an AI sound effect system and an adjusting method that self-adapt to seat position and angle, with the advantages of realizing automatic real-time adaptation of the sound effect, effectively avoiding sound-field localization offset and frequency-response distortion, and improving the comfort and personalization of the in-vehicle audio experience. The technical aim of the invention is achieved through the following technical scheme: an AI sound effect system that adapts to seat position and angle, comprising: a data acquisition layer for acquiring the seat's geometric parameters and ambient noise information in real time and calculating, from the geometric parameters, the real-time spatial coordinates of the occupant's ears relative to the loudspeakers; an AI algorithm layer comprising a CNN-LSTM hybrid neural network model that receives the real-time spatial coordinates and the ambient noise information as input and outputs sound-effect adjustment parameters, comprising a loudness gain G, channel delays τ, and equalizer (EQ) parameters θ; and an execution control layer for processing and rendering the multi-channel audio signal in real time according to the sound-effect adjustment parameters and outputting the processed signal to the loudspeakers for playback. Further, the data acquisition layer comprises: an angle sensor for measuring the body-leg angle α of the seat and the upward tilt angle β of the cushion; a displacement sensor for measuring the fore-aft displacement x and the height displacement z of the seat; and a noise sensor for collecting the ambient noise information. Further, the real-time spatial coordinates are calculated as follows: establishing a three-dimensional coordin