CN-122006059-A - Personalized music generation method and system based on electroencephalogram signal analysis
Abstract
The invention relates to the technical field of electroencephalogram (EEG) signal processing, and in particular to a personalized music generation method and system based on EEG signal analysis. The method comprises: collecting multivariate data and performing association fusion on it to obtain association-fusion data; constructing a user psychological-characteristic data set through multi-dimensional data structuring; synchronously extracting core features of the user's current psychological state and long-term historical trend features; building a collaborative mapping model between the dual features and music parameters; fusing the current-state core features with the long-term historical trend features through a dual-feature weighted fusion algorithm and generating initial music through optimization; setting a treatment period, collecting user psychological-state scores before and after the treatment period respectively, and judging the treatment-effect level against preset criteria; and, combining the dual-feature-to-music-parameter collaborative mapping model with the long-term historical trend features, generating optimized personalized music. The method enables accurate generation of personalized music that fits the user's dynamic psychological state.
Inventors
- Chen Hao
- Lv Linyang
- Zhu Dejiang
- Fan Liwen
Assignees
- 杭州皓世天辉科技有限公司 (Hangzhou Haoshi Tianhui Technology Co., Ltd.)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-04-15
Claims (10)
- 1. A personalized music generation method based on electroencephalogram signal analysis, characterized in that the method comprises: collecting multivariate data, wherein the multivariate data comprises original electroencephalogram data, psychological-state evaluation data and historical evaluation data, and performing multi-dimensional verification and association fusion on the multivariate data to obtain association-fusion data; constructing a user psychological-characteristic data set through multi-dimensional data structuring based on the association-fusion data; synchronously extracting, from the user psychological-characteristic data set, core features of the user's current psychological state and long-term historical trend features, and establishing a collaborative mapping model between the dual features and music parameters; fusing the core features of the user's current psychological state with the long-term historical trend features through a dual-feature weighted fusion algorithm with differentiated weight configuration, and generating initial music through optimization; setting a treatment period, collecting user psychological-state scores before and after the treatment period respectively, and judging the treatment-effect level according to the score variation and a preset standard; and, based on the treatment-effect level, combining the dual-feature-to-music-parameter collaborative mapping model with the long-term historical trend features, adjusting the core music parameters to generate optimized personalized music.
- 2. The personalized music generation method based on electroencephalogram signal analysis according to claim 1, wherein the multivariate data is acquired as follows: acquiring brain-wave signals from key monitoring channels of the user under test at a preset sampling frequency to obtain the original electroencephalogram data; generating, based on the original electroencephalogram data, psychological-state assessment data containing depression risk scores, anxiety risk scores and emotion-fluctuation grades through psychological-state quantitative assessment logic; retrieving the user's past evaluation records, adapted music parameters and feedback data from a system history database to obtain the historical evaluation data; performing event association and encapsulation on the original electroencephalogram data, the psychological-state assessment data and the historical evaluation data based on the user identifier to obtain encapsulated data; and, based on the unified user identifier, checking the identity consistency of the encapsulated data, aligning the time-sequence relations of the different data types through acquisition timestamps, and completing the multi-dimensional association check in combination with data-acquisition scene labels to obtain the multivariate data.
- 3. The personalized music generation method based on electroencephalogram signal analysis according to claim 2, wherein the association-fusion data is obtained as follows: performing dimension-difference elimination and time-deviation calibration on the original electroencephalogram data, the psychological-state evaluation data and the historical evaluation data in the multivariate data using a multi-source spatio-temporal data alignment algorithm, to obtain aligned data; screening and removing abnormal samples from the aligned data through an outlier-elimination algorithm, to obtain outlier-removed data; normalizing the outlier-removed data using a normalization method, to obtain normalized data; assigning differentiated weights to the normalized data based on data type, and performing association-fusion calculation on the normalized data using a Bayesian fusion algorithm, to obtain intermediate fusion data; and performing structured encapsulation on the intermediate fusion data to obtain the association-fusion data.
- 4. The personalized music generation method based on electroencephalogram signal analysis according to claim 3, wherein the user psychological-characteristic data set is obtained as follows: constructing a standardized user psychological-characteristic data set based on the association-fusion data; constructing, on that basis, a three-level data-structure framework comprising a time-sequence dimension, a feature dimension and an index dimension, wherein the time-sequence dimension associates evaluation data from different time nodes, the feature dimension is divided into electroencephalogram features, current psychological-state features and historical trend features, and the index dimension specifies the quantitative parameters corresponding to each feature; classifying and filling the association-fusion data into the three-level framework, synchronously checking that the data matches each dimension, and removing mismatched abnormal data; and, after filling and checking, performing format standardization on the data to obtain the user psychological-characteristic data set.
- 5. The personalized music generation method based on electroencephalogram signal analysis according to claim 4, wherein the core features of the user's current psychological state and the long-term historical trend features are obtained as follows: taking the user identifier as the primary key, extracting the corresponding full feature data from the user psychological-characteristic data set to obtain a user feature set; constructing a parallel extraction framework with a dual-branch feature extraction network, and splitting the user feature set into a current-state feature subset and a historical time-sequence feature subset; screening and extracting core features from the current-state feature subset through a principal component analysis algorithm to obtain the current psychological-state core features; performing time-sequence smoothing on the historical time-sequence feature subset, and performing trend analysis and feature extraction with a trend-fitting model to obtain initial historical trend features; setting a feature-screening threshold to remove redundant features from the initial historical trend features, retaining the features with a significant influence on music suitability, to obtain the long-term historical trend features; and performing synchronous association encapsulation of the current psychological-state core features and the long-term historical trend features, completing the synchronous extraction of the dual features.
- 6. The personalized music generation method based on electroencephalogram signal analysis according to claim 5, wherein the dual-feature-to-music-parameter collaborative mapping model is obtained as follows: presetting a music parameter library, wherein the music parameter library comprises music parameters for four core dimensions, namely tempo, mode, dynamics and timbre, together with their corresponding parameter ranges; collecting dual-feature samples under different physiological states and the corresponding adapted music-parameter samples, and constructing a model training data set and a model test data set; building a mapping-model framework with a BP (back-propagation) neural network algorithm, taking the dual features as input variables and the music parameters as output variables; training the mapping-model framework on the model training data set to obtain an initial mapping model; and verifying the accuracy of the initial mapping model on the model test data set, the dual-feature-to-music-parameter collaborative mapping model being established when the adaptation error falls below a preset threshold.
- 7. The personalized music generation method based on electroencephalogram signal analysis according to claim 6, wherein the initial music is obtained as follows: assigning differentiated weighted-fusion weights to the current psychological-state core features and the long-term historical trend features according to the user's psychological-state adaptation priority, to obtain a weight-configuration rule; inputting the two synchronously extracted feature sets into the dual-feature weighted fusion algorithm, and computing, in combination with the weight-configuration rule, a fused feature vector; inputting the fused feature vector into the dual-feature-to-music-parameter collaborative mapping model, which outputs an adapted initial music-parameter combination; performing music generation based on the initial music-parameter combination to obtain the initial music; and recording the feature-fusion logic and the parameter output results, completing the optimized generation of the initial music.
- 8. The personalized music generation method based on electroencephalogram signal analysis according to claim 7, wherein the optimized personalized music is obtained as follows: setting a configurable treatment period, the duration of which is adaptively adjusted based on the severity of the user's psychological state and historical treatment feedback; determining the start and end nodes of the treatment period according to the psychological-state quantitative evaluation logic, and generating the pre-treatment and post-treatment psychological-state scores, respectively, through the multivariate data acquisition and association-fusion processes; calculating the pre/post-treatment score variation and judging the treatment-effect level in combination with a preset standard; extracting updated long-term historical trend features that incorporate the treatment-period data, and, in combination with the dual-feature-to-music-parameter collaborative mapping model, adjusting the music parameters according to the treatment-effect level; and generating the optimized personalized music based on the adjusted parameters.
- 9. A personalized music generation system based on electroencephalogram signal analysis, characterized in that the system is configured to execute the personalized music generation method based on electroencephalogram signal analysis according to any one of claims 1 to 8.
- 10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the personalized music generation method based on electroencephalogram signal analysis according to any one of claims 1 to 8.
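The pipeline of claims 5 through 7 — weighted fusion of the current-state core features with the long-term trend features, followed by a learned mapping onto the four music-parameter dimensions — can be sketched numerically as follows. This is a minimal illustration only: the 8-dimensional feature vectors, the 0.7/0.3 weight split, and the single sigmoid-squashed linear layer standing in for the trained BP neural network are assumptions, not values taken from the claims.

```python
import numpy as np

RNG = np.random.default_rng(0)

def fuse_features(current_core, long_term_trend, w_current=0.7):
    """Dual-feature weighted fusion (claim 7): a convex combination of the
    current-state core features and the long-term historical trend features."""
    current_core = np.asarray(current_core, dtype=float)
    long_term_trend = np.asarray(long_term_trend, dtype=float)
    return w_current * current_core + (1.0 - w_current) * long_term_trend

def map_to_music_params(fused, W, b):
    """Stand-in for the trained dual-feature-to-music-parameter model
    (claim 6 specifies a BP neural network; a linear layer is shown here).
    Outputs are squashed to (0, 1), one value per parameter dimension:
    tempo, mode, dynamics, timbre."""
    z = W @ fused + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid keeps each parameter bounded

# Hypothetical 8-dimensional feature vectors and a random "trained" mapping.
current = RNG.normal(size=8)
trend = RNG.normal(size=8)
fused = fuse_features(current, trend, w_current=0.7)
W, b = RNG.normal(size=(4, 8)), np.zeros(4)
params = map_to_music_params(fused, W, b)  # 4 normalized music parameters
```

In a real implementation each normalized output would then be rescaled into the parameter ranges stored in the preset music parameter library (e.g. a tempo interval in BPM), and the weight `w_current` would come from the weight-configuration rule of claim 7 rather than being fixed.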
Description
Personalized music generation method and system based on electroencephalogram signal analysis

Technical Field

The invention relates to the technical field of electroencephalogram signal processing, in particular to a personalized music generation method and system based on electroencephalogram signal analysis.

Background

Brain-wave music synthesis technology sits at the intersection of electroencephalogram (EEG) signals and music engineering, and has gradually evolved from early brain-wave sonification to practical scenarios such as biofeedback and hypnotic intervention. Its core relies on the correspondence between EEG parameters and musical elements: the EEG period corresponds to note duration, the amplitude to pitch, and the mean energy change to intensity; the scale-free property of EEG signals provides theoretical support for practical deployment. With growing mental-health needs and a pronounced trend toward personalized consumption, users' demand for customized music fitting their dynamic psychological state has increased, yet the prior art still has notable shortcomings.
Conventional schemes mainly adopt a unified frequency-mapping rule and ignore the baseline differences and individual characteristics of different users' EEG signals, so the generated music cannot accurately capture individual emotional fluctuations and the degree of adaptation drops sharply. The acquisition equipment is mostly heavy industrial instrumentation: it is bulky and complex to operate, depends on conductive paste and a professional electromagnetically shielded environment, and requires specialist assistance, so it cannot meet the convenient-acquisition needs of daily scenarios, which limits large-scale adoption of the technology. The data-fusion dimension is also narrow: most schemes rely on single-source EEG data alone and fail to effectively integrate multi-source information such as psychological assessment results and historical adaptation preferences. Feature extraction focuses mostly on transient states and ignores long-term historical trend features, so the description of the user's psychological state is incomplete and the music adaptation lacks consistency. In addition, most schemes operate in a one-shot generation mode: they do not acquire the user's EEG feedback in real time during playback or dynamically optimize the parameters, so they cannot follow real-time fluctuations of the user's emotion, the long-term adaptation effect is limited, and the core requirement of dynamic fitting is difficult to truly satisfy.

Disclosure of Invention

According to the invention, through dual-feature extraction, an AI collaborative mapping model, and a real-time EEG-feedback dynamic adjustment mechanism, accurate generation of personalized music fitting the user's dynamic psychological state is realized.
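The dynamic adjustment mechanism mentioned above is driven, per claim 8, by comparing pre- and post-treatment psychological-state scores against preset standards. A minimal sketch follows; the relative-improvement thresholds, the effect-level names, and the tempo-nudging rule are illustrative assumptions, since the patent does not disclose the actual cut-offs.

```python
def grade_treatment_effect(score_before, score_after,
                           thresholds=(0.10, 0.30)):
    """Grade the treatment effect from pre/post psychological-state scores
    (higher score = more severe state). The relative improvement is compared
    against preset thresholds; these cut-offs are illustrative only."""
    if score_before <= 0:
        raise ValueError("pre-treatment score must be positive")
    improvement = (score_before - score_after) / score_before
    minor, major = thresholds
    if improvement >= major:
        return "significant"
    if improvement >= minor:
        return "moderate"
    return "insufficient"

def adjust_tempo(tempo_bpm, effect_level):
    """Toy core-parameter adjustment: keep the tempo when the effect is
    significant, otherwise nudge it toward a calmer range (floor at 50 BPM)."""
    step = {"significant": 0, "moderate": -4, "insufficient": -8}[effect_level]
    return max(50, tempo_bpm + step)
```

In the full method this grading would feed back into the collaborative mapping model together with the updated long-term trend features, rather than adjusting a single parameter directly.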
The technical scheme provided by the invention is a personalized music generation method based on electroencephalogram signal analysis, comprising the following steps: collecting multivariate data, wherein the multivariate data comprises original electroencephalogram data, psychological-state evaluation data and historical evaluation data, and performing multi-dimensional verification and association fusion on the multivariate data to obtain association-fusion data; constructing a user psychological-characteristic data set through multi-dimensional data structuring based on the association-fusion data; synchronously extracting, from the user psychological-characteristic data set, core features of the user's current psychological state and long-term historical trend features, and establishing a collaborative mapping model between the dual features and music parameters; fusing the core features of the user's current psychological state with the long-term historical trend features through a dual-feature weighted fusion algorithm with differentiated weight configuration, and generating initial music through optimization; setting a treatment period, collecting user psychological-state scores before and after the treatment period respectively, and judging the treatment-effect level according to the score variation and a preset standard; and, based on the treatment-effect level, combining the dual-feature-to-music-parameter collaborative mapping model with the long-term historical trend features, adjusting the core music parameters to generate optimized personalized music. Preferably, the multivariate data is acquired as follows: acquiring brain-wave signals from key monitoring channels of the user under test at a preset sampling frequency to obtain the original electroencephalogram data; generating psychological-state assessment data containing d