CN-121997182-A - Financial public opinion emotion analysis method based on multi-modal dynamic weight fusion
Abstract
The application relates to the field of public opinion analysis, and in particular to a financial public opinion emotion analysis method based on multi-modal dynamic weight fusion. The method comprises: acquiring multi-modal data input in a financial public opinion scene and extracting features from each modality separately; dynamically computing, via an adaptive attention fusion mechanism, the emotion contribution weight of each modality in the current financial scene and performing weighted fusion of the multi-modal input; classifying the fused multi-modal features with an emotion classifier to obtain an emotion polarity and a continuous emotion intensity value; performing anomaly/conflict detection and secondary verification on the different modal inputs; and performing dynamic confidence calibration on the emotion intensity value, which is written into a time-series database in real time. Through the design of a pre-constructed financial scene template library, an anomaly/conflict detection and secondary verification mechanism, a time decay factor, and a behavior-based confidence calibration mechanism, the application comprehensively improves the judgment accuracy and robustness of the financial public opinion emotion analysis method.
Inventors
- XU GUANGXIA
- TANG JIN
Assignees
- Chongqing University of Posts and Telecommunications (重庆邮电大学)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-02-10
Claims (9)
- 1. A financial public opinion emotion analysis method based on multi-modal dynamic weight fusion, characterized by comprising the following steps: S1, acquiring multi-modal data input in a financial public opinion scene and performing standardized preprocessing on it, wherein the multi-modal data input comprises text data, image data, and metadata; S2, extracting features from each modality separately to obtain the feature inputs of the different modalities; S3, dynamically calculating, based on an adaptive attention fusion mechanism, the emotion contribution weights of the different modalities in the current financial scene, and performing weighted fusion of the multi-modal input to obtain multi-modal features; S4, performing anomaly/conflict detection and secondary verification on the different modal inputs, wherein the conflict detection judges inter-modal consistency by computing the cosine similarity between the text emotion score and the image emotion embedding vector, and if the similarity is lower than a preset similarity threshold, a modal conflict is declared and the secondary verification mechanism is triggered (a sketch follows the claims); S5, performing dynamic confidence calibration on the emotion intensity value and writing it into a time-series database in real time.
- 2. The financial public opinion emotion analysis method based on multi-modal dynamic weight fusion of claim 1, wherein obtaining the emotion polarity and continuous emotion intensity values comprises: S31, calculating a semantic relevance score between the text and the image to generate an initial cross-modal attention weight; S32, constructing a financial scene template library and matching the corresponding template according to the content type and platform source in the metadata; S33, assigning decreasing weights to fusion behavior with an exponential decay function and performing weighted fusion of the multi-modal input; S34, processing the multi-modal features with an emotion classifier to obtain the emotion polarity and continuous emotion intensity values and to generate an interpretability basis (S31 and S32 are sketched after the claims).
- 3. The method for analyzing financial public opinion emotion based on multi-modal dynamic weight fusion according to claim 2, wherein the financial scene template library is composed of different financial scenes, S = {s_1, s_2, ..., s_n}, where s_i represents the i-th financial scene; when the financial scene template library is constructed, each piece of data is labeled with its scene type, and a perception weight factor r is set for each financial scene, taking different values in different scenes (an illustrative template library is sketched after the claims).
- 4. The method for analyzing financial public opinion emotion based on multi-modal dynamic weight fusion of claim 3, wherein the financial scenes …
- 5. The method for analyzing financial public opinion emotion based on multi-modal dynamic weight fusion according to claim 2, wherein the exponential decay function assigns decreasing weights to fusion behavior by setting a time sensitivity factor τ with which the multi-modal input is weighted and fused; the formula of the time sensitivity factor is: τ = exp(−λ·(t_c − t_e)); where λ denotes the decay rate, t_e denotes the time at which the event occurred, and t_c denotes the current time (sketched after the claims).
- 6. The method for analyzing financial public opinion emotion based on multi-modal dynamic weight fusion of claim 5, wherein the set time sensitivity factor τ weights and fuses the multi-modal input according to the formula: F = τ·(w_t·f_t + w_v·f_v + w_m·f_m); where F represents the multi-modal features; f_t, f_v, and f_m respectively represent the text semantic embedding vector, the visual semantic features, and the metadata features; and w_t, w_v, and w_m respectively represent the relative fusion weight of the text semantic embedding vector, the relative fusion weight of the visual semantic features, and the relative fusion weight of the metadata features (sketched after the claims).
- 7. The method for analyzing financial public opinion emotion based on multi-modal dynamic weight fusion of claim 6, wherein w_t and w_v are given by: w_t = β·α₀·r_t; w_v = (1 − β)·α₀·r_v; where β represents the adaptive balance coefficient of the system, α₀ represents the initial cross-modal attention weight, and r_t and r_v respectively represent the text perception weight factor and the visual perception weight factor (one plausible reading is sketched after the claims).
- 8. The financial public opinion emotion analysis method based on multi-modal dynamic weight fusion of claim 1, wherein the secondary verification comprises: step 1, calculating the current confidence of each modality; step 2, evaluating the confidence differences between modalities; step 3, executing fusion-strategy recalibration, reducing the weight of a low-confidence modality or temporarily masking contradictory modalities, and outputting a preliminary emotion relying only on the high-confidence modalities; and step 4, generating an interpretability basis that records the reason the secondary verification was triggered, the confidence of each modality, the historical-behavior reference basis, and the like, for use in subsequent audit or manual review (sketched after the claims).
- 9. The financial public opinion emotion analysis method based on multi-modal dynamic weight fusion of claim 1, wherein the dynamic confidence calibration computes a modal fusion entropy H together with the historical prediction accuracy A and applies them to the emotion intensity value s; the formulas for dynamic confidence calibration of the emotion intensity are: H = −Σ_i w_i·log(w_i); s′ = s·(γ·(1 − H) + (1 − γ)·A); where s′ represents the calibrated value of the emotion intensity, γ represents the balance parameter between entropy and accuracy, H represents the modal fusion entropy, A represents the historical prediction accuracy, and w_i represents the relative fusion weights of the different modalities (sketched after the claims).
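A minimal sketch of the S4 conflict check in claim 1, assuming the text emotion score has already been projected into the same embedding space as the image emotion vector; the threshold value and function names are illustrative, not taken from the patent.

```python
import numpy as np

SIM_THRESHOLD = 0.35  # assumed preset similarity threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two emotion embedding vectors."""
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.dot(a, b)) / denom if denom > 0.0 else 0.0

def detect_modal_conflict(text_emb: np.ndarray, image_emb: np.ndarray) -> bool:
    """Claim 1, S4: declare a modal conflict (and trigger secondary
    verification) when text and image emotion embeddings disagree."""
    return cosine_similarity(text_emb, image_emb) < SIM_THRESHOLD
```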
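For S31 of claim 2, a sketch of turning a text-image semantic relevance score into the initial cross-modal attention weight. The patent states only that a relevance score generates the initial weight; the sigmoid mapping and its slope are assumptions.

```python
import numpy as np

def attention_initial_weight(text_emb: np.ndarray, image_emb: np.ndarray) -> float:
    """Map text-image semantic relevance to an initial weight in (0, 1)."""
    denom = float(np.linalg.norm(text_emb) * np.linalg.norm(image_emb)) + 1e-9
    relevance = float(np.dot(text_emb, image_emb)) / denom
    return float(1.0 / (1.0 + np.exp(-5.0 * relevance)))  # sigmoid squashing (assumed)
```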
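An illustrative financial scene template library for claim 3: each data item is labeled with a scene type, and each scene carries its own perception weight factors r. Scene names, factor values, and the metadata matching rule (claim 2, S32) are hypothetical.

```python
SCENE_TEMPLATES = {
    "earnings_report":   {"r_text": 0.6, "r_visual": 0.3, "r_meta": 0.1},
    "market_rumor":      {"r_text": 0.4, "r_visual": 0.4, "r_meta": 0.2},
    "regulatory_notice": {"r_text": 0.7, "r_visual": 0.2, "r_meta": 0.1},
}

def match_template(content_type: str, platform: str) -> dict:
    """Claim 2, S32: pick a scene template from metadata fields."""
    if content_type == "announcement" or platform == "exchange_site":
        return SCENE_TEMPLATES["earnings_report"]
    if platform in {"weibo", "forum"}:
        return SCENE_TEMPLATES["market_rumor"]
    return SCENE_TEMPLATES["regulatory_notice"]
```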
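A sketch of the claim 5 time sensitivity factor under the reconstruction above: an event's fusion weight decays exponentially with its age. The decay rate λ (`lam`) is a tunable assumption.

```python
import math

def time_sensitivity(t_event: float, t_now: float, lam: float = 0.1) -> float:
    """tau = exp(-lam * (t_now - t_event)); older events weigh less."""
    return math.exp(-lam * max(t_now - t_event, 0.0))
```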
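A sketch of the claim 6 fusion step. Whether τ scales the entire weighted sum and whether the three modal feature vectors share one width are not specified in the source; both are assumed here.

```python
import numpy as np

def fuse(f_text, f_visual, f_meta, w_t, w_v, w_m, tau):
    """F = tau * (w_t*f_text + w_v*f_visual + w_m*f_meta)."""
    return tau * (w_t * np.asarray(f_text, dtype=float)
                  + w_v * np.asarray(f_visual, dtype=float)
                  + w_m * np.asarray(f_meta, dtype=float))
```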
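One plausible reading of the garbled claim 7 formulas: the balance coefficient β splits the initial attention weight α₀ between text and vision, each scaled by its scene perception factor, then the weights are normalized so the three modal weights sum to 1. The normalization step and the fixed metadata share are assumptions.

```python
def adaptive_weights(beta: float, alpha0: float, r_t: float, r_v: float,
                     w_meta: float = 0.1):
    """Return (w_t, w_v, w_m) per the claim 7 reconstruction above."""
    raw_t = beta * alpha0 * r_t          # w_t = beta * alpha0 * r_t
    raw_v = (1.0 - beta) * alpha0 * r_v  # w_v = (1 - beta) * alpha0 * r_v
    scale = (1.0 - w_meta) / (raw_t + raw_v + 1e-9)
    return raw_t * scale, raw_v * scale, w_meta
```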
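A sketch of the claim 8 secondary-verification flow. Confidence values are assumed to lie in [0, 1]; the gap threshold and the rule for masking the contradictory modality are illustrative.

```python
def secondary_verification(confidences: dict[str, float],
                           gap_threshold: float = 0.3):
    """Steps 1-4 of claim 8 over per-modality confidences (step 1)."""
    best = max(confidences, key=confidences.get)    # step 2: compare confidences
    worst = min(confidences, key=confidences.get)
    masked = []
    if confidences[best] - confidences[worst] > gap_threshold:
        masked.append(worst)                        # step 3: mask the contradiction
    audit_record = {                                # step 4: interpretability basis
        "trigger": "modal_conflict",
        "confidences": dict(confidences),
        "masked_modalities": masked,
        "relied_on": best,
    }
    return masked, audit_record
```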
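A sketch of the claim 9 calibration under the reconstruction above: fusion entropy over the modal weights (normalized into [0, 1] for practicality), blended with historical accuracy by γ, scales the raw intensity. The convex-combination form is an assumption where the source formula is garbled.

```python
import math

def calibrate(intensity: float, weights: list[float],
              accuracy: float, gamma: float = 0.5) -> float:
    """s' = s * (gamma*(1 - H_norm) + (1 - gamma)*A)."""
    h = -sum(w * math.log(w) for w in weights if w > 0.0)  # fusion entropy H
    h_max = math.log(len(weights)) if len(weights) > 1 else 1.0
    confidence = gamma * (1.0 - h / h_max) + (1.0 - gamma) * accuracy
    return intensity * confidence
```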
Description
Financial public opinion emotion analysis method based on multi-modal dynamic weight fusion

Technical Field

The application relates to the field of public opinion analysis, and in particular to a financial public opinion emotion analysis method based on multi-modal dynamic weight fusion.

Background

With the rapid development of the internet and social media, the speed at which information propagates in the financial market has increased remarkably, and the volume of information has grown just as quickly. Such information includes not only traditional news stories and corporate announcements but also a large amount of multi-modal data from social media platforms, such as user comments, meme images, and charts. In this complex information environment, accurately capturing public emotion is of great significance for risk early warning, investment decision-making, and regulatory compliance. Traditional financial public opinion emotion analysis in the prior art relies mainly on single-modality data processing; across different scenes, and especially when facing content with rich visual elements or unstructured expression, public opinion data often carries noise such as irony, antiphrasis, or multi-modal contradictions, which degrades the judgment results of public opinion analysis. In addition, traditional financial public opinion emotion analysis often ignores the historical evolution trend of public opinion. There is therefore a need for a financial public opinion emotion analysis method that can fuse multi-source information and effectively handle contradictory noise in multi-modal data.

Disclosure of the Invention

In view of the above, the application discloses a financial public opinion emotion analysis method based on multi-modal dynamic weight fusion that addresses the problems in the prior art and comprises the following steps: S1, acquiring multi-modal data input in a financial public opinion scene and performing standardized preprocessing on it; S2, extracting features from each modality separately to obtain the feature inputs of the different modalities; S3, dynamically calculating, based on an adaptive attention fusion mechanism, the emotion contribution weights of the different modalities in the current financial scene, and performing weighted fusion of the multi-modal input to obtain multi-modal features; S4, performing anomaly/conflict detection and secondary verification on the different modal inputs, wherein the conflict detection judges inter-modal consistency by computing the cosine similarity between the text emotion score and the image emotion embedding vector, and if the similarity is lower than a preset similarity threshold, a modal conflict is declared and the secondary verification mechanism is triggered; S5, performing dynamic confidence calibration on the emotion intensity value and writing it into a time-series database in real time (the S5 write path is sketched below).
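A minimal sketch of the S5 real-time write path: appending timestamped, calibrated intensity values to a time-series store. sqlite3 here is a stand-in for whatever time-series database a deployment would actually use; the schema and names are hypothetical.

```python
import sqlite3
import time

conn = sqlite3.connect("sentiment.db")
conn.execute("CREATE TABLE IF NOT EXISTS sentiment"
             " (ts REAL, topic TEXT, intensity REAL)")

def write_point(topic: str, intensity: float) -> None:
    """Write one calibrated emotion intensity sample (claim 1, S5)."""
    conn.execute("INSERT INTO sentiment VALUES (?, ?, ?)",
                 (time.time(), topic, intensity))
    conn.commit()
```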
The beneficial effects of the application include the following. By constructing a financial scene template library in advance and dynamically and adaptively fusing text, image, and metadata modal information according to the financial scene at hand, the method captures complex signals in financial public opinion such as chart trends, emoticons, and ironic context, improving the accuracy and granularity of emotion polarity and intensity recognition; by further combining contextual characteristics such as content type, source platform, and time sensitivity, it automatically adjusts the contribution weight of each modality under different financial scenes, realizing "context-aware" intelligent emotion assessment and enhancing generalization capability and practicality. The anomaly/conflict detection and secondary verification mechanism identifies and handles high-noise situations such as irony, antiphrasis, or multi-modal contradiction by quantifying the consistency of text and image emotion, and performs secondary verification based on historical behavior statistics and confidence differences, effectively reducing the misjudgment rate and improving the robustness of emotion analysis in real, complex environments. Through the fused time decay factor and the behavior-based confidence calibration mechanism, emotion judgment reflects not only the current input but also correlates with the historical evolution trend of public opinion, avoiding the bias of isolated evaluation, supporting dynamic compression or amplification of emotion intensity, and ensuring that the output results admit a more realistic interpretation.

Drawings

FIG. 1 is a schematic flow chart of the financial public opinion emotion analysis method based on multi-modal dynamic weight fusion in embodiment 1 of the present application; FIG. 2 is a schematic diagram of a neural network model in embodiment 2 of the present application; FIG. 3 is a schematic diagram of a dy