
CN-122028262-A - Intelligent lamplight control method and system based on sound change

CN122028262A

Abstract

The invention discloses an intelligent lamplight control method and system based on sound change, relating to the technical field of intelligent light control. The method comprises: acquiring an ambient sound signal containing a sound pressure level time sequence in a target sound field region; importing the signal into a voiceprint feature analysis engine, constructing a time pane matrix that stretches with sound intensity, and generating voiceprint energy streamlines; capturing slope mutation events of the streamlines and extracting an acoustic disturbance feature packet; using the feature packet to drive a brightness controller that outputs a light intensity compensation deformation instruction; applying the compensation deformation to the time pane matrix to generate a new voiceprint energy streamline distribution; and identifying steady-state voiceprint channels and mapping them into a target illuminance value instruction and a color temperature offset instruction. The method achieves accurate adaptation of light to sound change and improves the consistency and stability of light control.

Inventors

  • JIANG XIAOFU
  • SHEN SHANGYI
  • SHEN JIAHAO

Assignees

  • 浙江来和科技股份有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-03-13

Claims (10)

  1. An intelligent light control method based on sound change, characterized by comprising the following steps: acquiring an ambient sound signal in a target sound field region, wherein the ambient sound signal comprises a sound pressure level time sequence generated by air vibration; importing the ambient sound signal into a voiceprint feature analysis engine for dynamic slicing, constructing in the voiceprint feature analysis engine a time pane matrix that stretches and contracts with changes in sound intensity, and generating voiceprint energy streamlines among the grids of the time pane matrix according to the sound pressure level time sequence; tracking and capturing slope mutation events of the voiceprint energy streamlines, and when the local slope of a voiceprint energy streamline exceeds a preset slope threshold, extracting the waveform segment of the voiceprint energy streamline and the time pane matrix area to which the waveform segment belongs to form an acoustic disturbance feature packet; taking the acoustic disturbance feature packet as input to drive a brightness controller based on voiceprint deformation feedback, wherein the brightness controller outputs a light intensity compensation deformation instruction according to the spectrum migration mode of the waveform segment in the acoustic disturbance feature packet and the energy density distribution of the time pane matrix area; applying, according to the light intensity compensation deformation instruction, a corresponding compensation deformation to the time pane matrix in the voiceprint feature analysis engine, the time pane matrix after the compensation deformation generating a new voiceprint energy streamline distribution; and identifying a steady-state voiceprint channel in the new voiceprint energy streamline distribution, mapping the steady-state voiceprint channel back to a physical light control space, and converting it into a target illuminance value instruction and a color temperature offset instruction.
  2. The intelligent light control method based on sound change according to claim 1, wherein importing the ambient sound signal into a voiceprint feature analysis engine for dynamic slicing and constructing in the engine a time pane matrix that stretches and contracts with changes in sound intensity comprises: calculating a sound intensity level value and a spectral centroid frequency value at each sampling moment according to the sound pressure level time sequence; constructing a two-dimensional voiceprint field with the sound intensity level value as the horizontal axis coordinate and the spectral centroid frequency value as the vertical axis coordinate, and plotting a voiceprint state point for each sampling moment in the two-dimensional voiceprint field; connecting the voiceprint state points of adjacent sampling moments to form a dynamic voiceprint trajectory line; and automatically generating, based on the spatial density of the dynamic voiceprint trajectory line, a non-uniform rectangular grid in the two-dimensional voiceprint field to cover the trajectory line, wherein the time width of each rectangular grid is positively correlated with the sound intensity level value, and the rectangular grid constitutes the time pane matrix in the voiceprint feature analysis engine, whose grid size and arrangement are adjusted with changes in the voiceprint state points.
  3. The intelligent light control method based on sound change according to claim 2, wherein generating voiceprint energy streamlines among the grids of the time pane matrix according to the sound pressure level time sequence comprises: for each rectangular grid unit of the time pane matrix, calculating the gradient vectors of the sound intensity level values and spectral centroid frequency values corresponding to its four vertices; performing vector synthesis on the gradient vectors of the four vertices in each rectangular grid unit to obtain the voiceprint flow direction vector of that grid unit; and drawing, along the voiceprint flow direction vector field, streamlines on the time pane matrix pointing from high voiceprint density areas to low voiceprint density areas, wherein the streamlines are the voiceprint energy streamlines representing real-time sound energy convergence and diffusion paths.
  4. The intelligent light control method based on sound change according to claim 3, wherein tracking and capturing slope mutation events of the voiceprint energy streamlines comprises: calculating, along each voiceprint energy streamline at a fixed step length, the local slopes of the streamline to form a slope sequence; applying a sliding window mean operation to the slope sequence, and detecting moment points in the slope sequence whose deviation from the mean exceeds a mutation detection threshold, each such moment point being marked as the occurrence moment of a slope mutation event; and tracing back to the occurrence moment, intercepting the voiceprint energy streamline segments within a preset time window before and after the occurrence moment, and simultaneously recording the set of rectangular grid units in the time pane matrix through which the streamline segments pass, wherein the streamline segments and the set of rectangular grid units together form the acoustic disturbance feature packet.
  5. The intelligent light control method based on sound change according to claim 4, wherein driving the brightness controller based on voiceprint deformation feedback with the acoustic disturbance feature packet as input comprises: performing joint time-frequency analysis on the waveform segments in the acoustic disturbance feature packet, and extracting the dominant peak frequency and harmonic attenuation coefficient in their time spectrum as spectrum migration mode features; analyzing the time pane matrix area in the acoustic disturbance feature packet, and calculating the variance of the sound intensity level values and the dispersion of the spectral centroid frequencies of all rectangular grid units in the area as energy density distribution features; and inputting the spectrum migration mode features and the energy density distribution features into a pre-trained deformation decision network, which outputs an instruction to apply an expansion, contraction, or translation deformation to a target area of the time pane matrix, together with the deformation amplitude and direction corresponding to the instruction, so as to form the light intensity compensation deformation instruction.
  6. The intelligent light control method based on sound change according to claim 5, wherein applying the corresponding compensation deformation to the time pane matrix in the voiceprint feature analysis engine according to the light intensity compensation deformation instruction comprises: locating, in the two-dimensional voiceprint field, the target area designated by the light intensity compensation deformation instruction, the target area being identified by a group of rectangular grid units; synchronously transforming the center coordinates of all rectangular grid units in the target area according to the deformation type and deformation amplitude in the instruction; and after the center coordinate transformation, recalculating and updating the physical boundaries and adjacency relations of all affected rectangular grid units to complete construction of the compensated time pane matrix.
  7. The intelligent light control method based on sound change according to claim 6, wherein identifying a steady-state voiceprint channel in the new voiceprint energy streamline distribution comprises: defining the steady-state voiceprint channel as a streamline segment whose slope remains continuously below a steady-state threshold; on the compensated time pane matrix, calculating the gradient vectors of all grid units, synthesizing the voiceprint flow direction vector field, and regenerating voiceprint energy streamlines pointing from high voiceprint density areas to low voiceprint density areas along the flow direction field; monitoring the slope changes of the newly generated voiceprint energy streamlines within a preset observation period, and judging a streamline segment to be a steady-state voiceprint channel if the maximum slope of that segment remains below the steady-state threshold throughout the observation period; and recording the start and end coordinates of all steady-state voiceprint channels in the two-dimensional voiceprint field together with the sequences of grid units they pass through.
  8. The intelligent light control method based on sound change according to claim 7, wherein mapping the steady-state voiceprint channel back to the physical light control space and converting it into a target illuminance value instruction and a color temperature offset instruction comprises: for each steady-state voiceprint channel, calculating its average sound intensity level value and average spectral centroid frequency value in the two-dimensional voiceprint field according to its start and end coordinates; and converting the average sound intensity level value into an illuminance value instruction for the target area through a preset sound-light mapping relation, and converting the average spectral centroid frequency value into a color temperature offset instruction relative to reference white light.
  9. The intelligent light control method based on sound change according to claim 8, further comprising: performing overtone mode stripping on the ambient sound signal to separate a fundamental frequency energy mode and a residual noise oscillation mode; performing waveform fusion on the residual noise oscillation mode and a steady-state light signal defined by the target illuminance value instruction and the color temperature offset instruction, so as to synthesize a light driving waveform signal; and generating a pulse width modulation signal for the lighting equipment according to the light driving waveform signal, thereby realizing intelligent light control based on sound change; wherein performing overtone mode stripping on the ambient sound signal to separate the fundamental frequency energy mode and the residual noise oscillation mode comprises: applying the Hilbert-Huang transform to the sound pressure level time sequence of the ambient sound signal to obtain a series of intrinsic mode function components, screening out the intrinsic mode function components whose instantaneous frequencies fall within the audio fundamental frequency range, reconstructing these components into the fundamental frequency energy mode, and subtracting the fundamental frequency energy mode from the original ambient sound signal, the remaining waveform component being the residual noise oscillation mode; and wherein performing waveform fusion on the steady-state light signal and the residual noise oscillation mode to synthesize the light driving waveform signal comprises: generating, according to the target illuminance value instruction and the color temperature offset instruction, a composite reference waveform of direct current bias superposed with alternating current modulation as the steady-state light signal; convolving and superposing the steady-state light signal and the residual noise oscillation mode in the time domain to obtain an initial light waveform; and performing nonlinear normalization and high-frequency jitter suppression on the initial light waveform so that its value range conforms to the driving specification of the lighting equipment and the waveform is smooth and continuous, the processed waveform being the light driving waveform signal finally used to generate the pulse width modulation signal.
  10. An intelligent light control system based on sound change, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the intelligent light control method based on sound change according to any one of claims 1 to 9.
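The dynamic slicing of claim 2 starts from two per-frame features: a sound intensity level and a spectral centroid frequency, which together form one voiceprint state point. A minimal sketch of that feature extraction is shown below; the function name and the 20 µPa sound pressure reference are illustrative choices, not values taken from the patent.

```python
import numpy as np

def voiceprint_state(frame, sr):
    """Map one audio frame to a 2-D voiceprint state point:
    (sound intensity level in dB, spectral centroid in Hz).
    A sketch of the per-sample features in claim 2."""
    rms = np.sqrt(np.mean(frame ** 2))
    level_db = 20.0 * np.log10(max(rms, 1e-12) / 2e-5)  # dB re 20 µPa
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / max(np.sum(spectrum), 1e-12))
    return level_db, centroid

# A pure 1 kHz tone should place its spectral centroid near 1 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
level, centroid = voiceprint_state(tone, sr)
```

Plotting these state points with intensity on the horizontal axis and centroid on the vertical axis gives the two-dimensional voiceprint field of the claim.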
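Claim 2 also requires the time width of each grid pane to be positively correlated with the sound intensity level. One simple way to realize that correlation is a linear rescale between a minimum and maximum window length; the sketch below assumes that mapping and the bounds of 20 ms to 200 ms, neither of which is specified in the patent.

```python
import numpy as np

def adaptive_pane_widths(levels_db, min_w=0.02, max_w=0.2):
    """Assign each sampling moment a time-pane width (seconds) positively
    correlated with its sound intensity level, as in claim 2.
    The linear rescale and the bounds are illustrative assumptions."""
    levels = np.asarray(levels_db, dtype=float)
    lo, hi = levels.min(), levels.max()
    if hi - lo < 1e-9:  # flat input: fall back to a mid-sized pane
        return np.full_like(levels, (min_w + max_w) / 2)
    norm = (levels - lo) / (hi - lo)
    return min_w + norm * (max_w - min_w)

widths = adaptive_pane_widths([40, 55, 70, 85])
```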
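The mutation detection of claim 4 (fixed-step local slopes, sliding-window mean, threshold on the deviation) can be sketched directly. The step, window, and threshold values below are illustrative parameters, not taken from the patent.

```python
import numpy as np

def detect_slope_mutations(streamline, step=1, window=5, threshold=2.0):
    """Detect slope-mutation events along a voiceprint energy streamline
    (claim 4): local slopes at a fixed step, a sliding-window mean, and a
    flag wherever the slope deviates from that mean beyond the threshold."""
    y = np.asarray(streamline, dtype=float)
    slopes = (y[step:] - y[:-step]) / step          # slope sequence
    kernel = np.ones(window) / window
    local_mean = np.convolve(slopes, kernel, mode="same")
    deviation = np.abs(slopes - local_mean)
    return np.flatnonzero(deviation > threshold)    # occurrence indices

# A gentle ramp with one sharp jump: the mutation is flagged at the jump.
line = np.concatenate([np.linspace(0, 1, 50), np.linspace(10, 11, 50)])
events = detect_slope_mutations(line)
```

The flagged indices would then anchor the extraction of the streamline segments and grid units that form the acoustic disturbance feature packet.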
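Claim 8 leaves the sound-light mapping as "a preset relation". One plausible instance, sketched below, maps the average intensity level linearly onto an illuminance range and converts the average spectral centroid into a colour-temperature offset proportional to its octave distance from a reference frequency. Every constant here (the dB and lux ranges, the 1 kHz reference, the kelvin-per-octave gain) is an assumption for illustration.

```python
import math

def map_channel_to_light(avg_level_db, avg_centroid_hz,
                         level_range=(40.0, 90.0), lux_range=(50.0, 500.0),
                         centroid_ref_hz=1000.0, kelvin_per_octave=600.0):
    """Map a steady-state voiceprint channel to light commands (claim 8):
    average intensity level -> target illuminance (lux); average spectral
    centroid -> colour-temperature offset from reference white (kelvin)."""
    lo, hi = level_range
    frac = min(max((avg_level_db - lo) / (hi - lo), 0.0), 1.0)
    lux = lux_range[0] + frac * (lux_range[1] - lux_range[0])
    ct_offset = kelvin_per_octave * math.log2(avg_centroid_hz / centroid_ref_hz)
    return lux, ct_offset

lux, ct = map_channel_to_light(65.0, 2000.0)
```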
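The waveform fusion and conditioning of claim 9 can also be sketched in simplified form. The version below substitutes a plain additive superposition for the claim's time-domain convolution-and-superposition, uses clipping as the nonlinear normalization, and a moving average as the high-frequency jitter suppression; all three simplifications are assumptions, not the patent's method.

```python
import numpy as np

def synthesize_drive_waveform(target_level, residual, smooth_window=5):
    """Fuse a steady-state light level with a residual noise oscillation
    mode into a PWM duty-cycle waveform (simplified claim 9 sketch)."""
    wave = target_level + np.asarray(residual, dtype=float)
    wave = np.clip(wave, 0.0, 1.0)                  # normalize to the drive range
    kernel = np.ones(smooth_window) / smooth_window
    return np.convolve(wave, kernel, mode="same")   # suppress high-frequency jitter

rng = np.random.default_rng(0)
duty = synthesize_drive_waveform(0.6, 0.05 * rng.standard_normal(200))
```

Each sample of `duty` would set the duty cycle of one PWM period driving the lighting equipment.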

Description

Intelligent lamplight control method and system based on sound change

Technical Field

The invention belongs to the technical field of intelligent light control, and particularly relates to an intelligent light control method and system based on sound change.

Background

At present, sound-controlled lighting technology is widely applied in various scenes. The prior art collects sound signals in the environment, extracts features of fixed dimensions such as sound pressure level and frequency spectrum, slices the sound signal with a time window of fixed duration, and then directly outputs light control instructions according to the extracted features to realize functions such as switching lights on and off and adjusting brightness. Such techniques rely on a fixed time window to divide the sound signal at uniform intervals, ignoring the dynamic variation of the sound intensity itself. A fixed time window cannot adapt to the dynamics of sounds of different intensities; when the sound intensity changes suddenly, the fixed window can hardly capture the instantaneous changes of the sound features accurately, so the voiceprint feature analysis deviates and the accuracy of the light control instructions suffers. Meanwhile, most existing sound-controlled lighting technologies adopt unidirectional control logic: they generate light control instructions directly from the sound signal, lack a feedback adjustment mechanism, and cannot correct voiceprint analysis deviation according to the actual effect after light control. As a result, the consistency and stability of light adjustment are poor, the adjustment can hardly match the dynamic variation of the sound, and accurate, adaptive intelligent light control cannot be achieved.
Disclosure of Invention

The present invention aims to solve at least one of the technical problems in the prior art. The invention therefore provides an intelligent lamplight control method based on sound change, comprising the following steps: acquiring an ambient sound signal in a target sound field region, wherein the ambient sound signal comprises a sound pressure level time sequence generated by air vibration; importing the ambient sound signal into a voiceprint feature analysis engine for dynamic slicing, constructing in the engine a time pane matrix that stretches and contracts with changes in sound intensity, and generating voiceprint energy streamlines among the grids of the time pane matrix according to the sound pressure level time sequence; tracking and capturing slope mutation events of the voiceprint energy streamlines, and when the local slope of a voiceprint energy streamline exceeds a preset slope threshold, extracting the waveform segment of the streamline and the time pane matrix area to which it belongs to form an acoustic disturbance feature packet; taking the acoustic disturbance feature packet as input to drive a brightness controller based on voiceprint deformation feedback, wherein the brightness controller outputs a light intensity compensation deformation instruction according to the spectrum migration mode of the waveform segment in the feature packet and the energy density distribution of the time pane matrix area; applying, according to the light intensity compensation deformation instruction, a corresponding compensation deformation to the time pane matrix in the voiceprint feature analysis engine, the compensated time pane matrix generating a new voiceprint energy streamline distribution; and identifying a steady-state voiceprint channel in the new voiceprint energy streamline distribution, mapping it back to a physical light control space, and converting it into a target illuminance value instruction and a color temperature offset instruction. Further, importing the ambient sound signal into the voiceprint feature analysis engine for dynamic slicing and constructing in the engine a time pane matrix that stretches and contracts with changes in sound intensity comprises the following steps: calculating a sound intensity level value and a spectral centroid frequency value at each sampling moment according to the sound pressure level time sequence; constructing a two-dimensional voiceprint field with the sound intensity level value as the horizontal axis coordinate and the spectral centroid frequency value as the vertical axis coordinate, and plotting a voiceprint state point for each sampling moment in the two-dimensional voiceprint field; connecting voiceprint state points