CN-121979598-A - Digital aging interface dynamic adaptation method based on multi-source data fusion and related device

CN121979598A

Abstract

The invention provides a digital aging interface dynamic adaptation method based on multi-source data fusion, together with a related device, in the technical field of human-computer interaction. By integrating multi-source data fusion, dynamic user portrait generation, a hybrid decision model, and a real-time feedback mechanism, it addresses the inaccurate adaptation and the lack of flexibility and sustainability of the prior art, and can realize accurate, flexible, and sustainable dynamic adaptation of aging interfaces.

Inventors

  • Luo Ya
  • Mei Jiyuan
  • Zhou Ying
  • Luo Zhun

Assignees

  • Central South University (中南大学)

Dates

Publication Date
2026-05-05
Application Date
2025-12-18

Claims (10)

  1. A digital aging interface dynamic adaptation method based on multi-source data fusion, characterized by comprising the following steps: capturing enterprise public data at regular intervals by deploying distributed crawler clusters, while enabling the device sensor interface to collect user interaction tracks in real time, synchronizing the two types of data through a timestamp alignment module, and inputting them into a data cleaning pipeline; inputting the cleaned data into a multi-modal fusion model, wherein text data is converted into a feature matrix by a TF-IDF vectorizer, behavior data is extracted into time-series features by an LSTM neural network, and the two types of features are weighted and fused by an attention mechanism to generate a dynamic user portrait comprising vision grade, operation preference, and cognitive ability; generating interface parameters from the user portrait through a hybrid decision model combining a rule engine and machine learning, wherein the rule engine presets basic adaptation rules according to the WCAG 2.1 standard; driving the GPU via the rendering engine to generate an adaptive interface according to the interface parameters, capturing user operation precision data through the touch-screen driver layer, counting interface response delay through a front-end performance monitoring module, and feeding the real-time data back to the rule engine to trigger parameter recalibration; and establishing effect evaluation indexes, with a PID controller dynamically adjusting parameter weights to form a continuous cross-module optimization loop.
  2. The method of claim 1, wherein the multi-source heterogeneous data collaborative collection phase specifically comprises: constructing the distributed crawler cluster based on the Scrapy-Redis framework, grabbing at least three data sources simultaneously using an asynchronous I/O model, and deduplicating repeated URLs through a Bloom filter; capturing, via the device sensor interface, the raw coordinates, pressure values, and timestamps of touch events through the operating system's low-level HID API, with a sampling frequency of no less than 100 Hz; and performing, in the timestamp alignment module, network time synchronization via the NTP protocol and time-series matching of the heterogeneous data streams using a sliding-time-window algorithm, with the window size set to 5 minutes.
  3. The method of claim 1, wherein the dynamic user portrait generation phase further comprises: the LSTM neural network has a two-layer bidirectional structure with a hidden-layer dimension of 128, used to extract long-term operation habit features from behavior sequences; the attention mechanism adopts an additive attention model; and the update period of the dynamic user portrait is adjusted according to user activity, with a 1-hour update period for active users and a 24-hour update period for inactive users.
  4. The method of claim 1, wherein the cooperation of the rule engine with the machine learning hybrid decision model comprises: the preset adaptation rules of the rule engine are stored as a decision table and include at least three rule types, namely font size, contrast, and operation timeout; the machine learning model adopts the XGBoost algorithm, with 256-dimensional input features and continuous-valued predictions of the interface parameter set as output; and a confidence arbitration mechanism is set, whereby the output of the machine learning model is adopted when its prediction confidence exceeds a threshold of 0.7, otherwise the system falls back to the output of the rule engine, with the case recorded by an online learning module to optimize the model.
  5. The method of claim 1, wherein the rendering and feedback linkage stage comprises: when the GPU generates the adaptive interface, a multi-level caching strategy is adopted, with static interface elements cached in video memory, dynamic data cached in main memory, and a cache expiry time of 10 minutes; after the captured user operation precision data is smoothed by Kalman filtering, the standard deviation of the operation offset is calculated, and parameter recalibration is triggered when the standard deviation exceeds a threshold of 5 pixels three consecutive times; and the front-end performance monitoring module acquires the first-paint time and the largest-contentful-paint time through the Performance Timing API, triggering rendering optimization when either metric exceeds 150% of its baseline value.
  6. The method of claim 1, wherein the closed-loop optimization phase comprises: the effect evaluation indexes include task completion rate, single-operation duration, and misoperation count, with weights set to 0.5, 0.3, and 0.2 respectively; the parameters of the PID controller are tuned by the Ziegler-Nichols method; and an optimization decision log is established, recording the index changes before and after each parameter adjustment, with retraining of the hybrid decision model triggered automatically when the index improvement is below 1% over 10 consecutive optimizations.
  7. The method of claim 1, further comprising an exception handling mechanism: setting data quality check rules in the data collection stage, and automatically switching to a standby data source when the data loss rate exceeds 20% or the proportion of abnormal values exceeds 10%; implementing a degradation strategy in the interface rendering stage, namely automatically disabling non-essential visual effects when the GPU load remains above 90% and gradually reducing the rendering resolution to 720p; and introducing an abnormal pattern library into the feedback loop, matching the user operation sequence against known abnormal patterns in the library in real time, and, on a successful match, skipping the conventional optimization flow and directly invoking a preset emergency adaptation scheme.
  8. A digital aging interface dynamic adaptation system based on multi-source data fusion, comprising: a data acquisition module for capturing enterprise public data at regular intervals by deploying distributed crawler clusters, while enabling the device sensor interface to collect user interaction tracks in real time, synchronizing the two types of data through a timestamp alignment module, and inputting them into a data cleaning pipeline; a data input module for inputting the cleaned data into a multi-modal fusion model, wherein text data is converted into a feature matrix by a TF-IDF vectorizer, behavior data is extracted into time-series features by an LSTM neural network, and the two types of features are weighted and fused by an attention mechanism to generate a dynamic user portrait comprising vision grade, operation preference, and cognitive ability; a parameter generation module for generating interface parameters from the user portrait through a hybrid decision model combining a rule engine and machine learning, wherein the rule engine presets basic adaptation rules according to the WCAG 2.1 standard; a parameter driving module for driving the GPU via the rendering engine to generate an adaptive interface according to the interface parameters and capturing user operation precision data through the touch-screen driver layer; and an index evaluation module for establishing effect evaluation indexes, with a PID controller dynamically adjusting parameter weights to form a continuous cross-module optimization loop.
  9. A computer device comprising a memory and a processor which, when executing the computer instructions stored in the memory, performs the method of any one of claims 1 to 7.
  10. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 7.
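The rule-engine step of claim 1 maps a user portrait (vision grade, operation preference, cognitive ability) to basic interface parameters. A minimal sketch follows; the field names, scales, and threshold values are illustrative assumptions, not taken from the patent (only the contrast ratios 4.5:1 and 7:1 echo WCAG's AA/AAA criteria):

```python
# Hypothetical sketch of the rule-engine step from claim 1: mapping a
# dynamic user portrait to basic interface parameters. The field names
# and numeric scales below are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class UserPortrait:
    vision_grade: int         # 1 (good) .. 5 (poor), illustrative scale
    operation_preference: str
    cognitive_ability: float  # 0.0 .. 1.0

def base_adaptation_rules(p: UserPortrait) -> dict:
    """Preset adaptation rules for font size, contrast, and timeout (claims 1 and 4)."""
    font_px = 16 + 4 * max(0, p.vision_grade - 1)           # larger fonts for poorer vision
    contrast = 4.5 if p.vision_grade <= 2 else 7.0          # WCAG-style contrast ratios
    timeout_s = 10 + int((1.0 - p.cognitive_ability) * 20)  # longer timeouts for lower cognition
    return {"font_px": font_px, "contrast": contrast, "timeout_s": timeout_s}

print(base_adaptation_rules(UserPortrait(4, "single_tap", 0.4)))
# {'font_px': 28, 'contrast': 7.0, 'timeout_s': 22}
```

In a real system these rules would live in the decision table of claim 4 rather than in code, so accessibility experts can update them without redeployment.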
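Claim 2's sliding-time-window alignment pairs crawler records with sensor events whose timestamps fall within a 5-minute window. A toy sketch under assumed data layouts (tuples of `(timestamp_seconds, payload)`; the real module would also apply NTP-corrected clocks):

```python
# Illustrative sketch of the sliding-time-window alignment in claim 2.
# The (timestamp, payload) tuple layout is an assumption for the example.
WINDOW_S = 300  # 5-minute window, per claim 2

def align(crawl_events, sensor_events, window=WINDOW_S):
    """Match each crawl record to sensor events within +/- window seconds."""
    pairs = []
    for ct, cdata in crawl_events:
        matched = [s for st, s in sensor_events if abs(st - ct) <= window]
        pairs.append((cdata, matched))
    return pairs

crawl = [(1000, "profile_update")]
sensor = [(900, "tap"), (1250, "swipe"), (1400, "tap")]
print(align(crawl, sensor))
# [('profile_update', ['tap', 'swipe'])]
```

The 1400-second tap falls outside the window and is dropped, which is exactly the behaviour that keeps stale sensor data out of the fusion model.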
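Claim 3's additive attention weights the TF-IDF text features against the LSTM behaviour features before fusion. A numpy sketch of the additive (Bahdanau-style) scoring function; the dimensions and the randomly initialized parameters are illustrative stand-ins, not the trained model:

```python
# Numpy sketch of the additive-attention fusion in claim 3:
# score_i = v . tanh(W @ f_i), weights = softmax(scores).
# Dimensions and parameter values are illustrative assumptions.
import numpy as np

def additive_attention(features, W, v):
    """Weight and fuse a list of equal-length feature vectors."""
    scores = np.array([v @ np.tanh(W @ f) for f in features])
    exp = np.exp(scores - scores.max())   # stabilized softmax
    weights = exp / exp.sum()
    fused = sum(w * f for w, f in zip(weights, features))
    return weights, fused

rng = np.random.default_rng(0)
text_feat = rng.normal(size=8)   # stand-in for a TF-IDF feature row
behav_feat = rng.normal(size=8)  # stand-in for LSTM time-series features
W = rng.normal(size=(8, 8))
v = rng.normal(size=8)
weights, fused = additive_attention([text_feat, behav_feat], W, v)
print(weights.sum())  # attention weights sum to 1
```

In the claimed system `W` and `v` would be learned jointly with the two-layer bidirectional LSTM, and the fused vector would feed the portrait fields (vision grade, operation preference, cognitive ability).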
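The confidence arbitration of claim 4 is a small but central piece of the hybrid decision model: ML output above the 0.7 threshold wins, otherwise the rule engine does, and the fallback case is logged for online learning. A sketch (the parameter-dict structure is an assumption):

```python
# Sketch of the confidence-arbitration mechanism in claim 4. The 0.7
# threshold is from the claim; the parameter-dict shape is an assumption.
CONFIDENCE_THRESHOLD = 0.7

def arbitrate(ml_params, ml_confidence, rule_params, fallback_log):
    """Adopt the ML model's output above the threshold, else fall back to
    the rule engine and record the case for the online-learning module."""
    if ml_confidence > CONFIDENCE_THRESHOLD:
        return ml_params
    fallback_log.append((ml_params, ml_confidence))
    return rule_params

log = []
print(arbitrate({"font_px": 26}, 0.85, {"font_px": 24}, log))  # ML wins
print(arbitrate({"font_px": 30}, 0.55, {"font_px": 24}, log))  # rules win, case logged
```

The logged low-confidence cases are exactly the training signal the claim's online learning module needs to close the gap between rules and model.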
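Claim 5 triggers recalibration when the standard deviation of the (Kalman-smoothed) touch offsets exceeds 5 pixels three consecutive times. A sketch of just the trigger logic, assuming offsets arrive pre-smoothed and grouped into measurement windows:

```python
# Sketch of the recalibration trigger in claim 5. The 5 px threshold and
# the three-consecutive-violations rule are from the claim; the grouping
# of offsets into windows is an assumption.
import statistics

THRESHOLD_PX = 5.0
CONSECUTIVE = 3

def needs_recalibration(offset_windows):
    """offset_windows: list of lists of per-touch offsets in pixels,
    already smoothed by the Kalman filter upstream."""
    streak = 0
    for window in offset_windows:
        if statistics.pstdev(window) > THRESHOLD_PX:
            streak += 1
            if streak >= CONSECUTIVE:
                return True
        else:
            streak = 0  # a good window resets the streak
    return False

noisy = [[0, 12, -11, 9], [1, 13, -12, 8], [0, 14, -10, 9]]
steady = [[1, 2, 1, 2]] * 5
print(needs_recalibration(noisy), needs_recalibration(steady))
# True False
```

Requiring three consecutive violations rather than one keeps a single shaky interaction from churning the interface parameters.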
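Claim 6's closed loop combines a weighted effect score (weights 0.5/0.3/0.2 from the claim) with a PID controller. A sketch of both pieces; the normalization of each metric and the PID gains are illustrative assumptions (the claim tunes gains via Ziegler-Nichols, which is not reproduced here):

```python
# Sketch of the effect-evaluation index and PID step from claim 6.
# Weights are from the claim; gains and metric normalization are assumptions.
def effect_score(completion_rate, norm_duration, norm_errors):
    """Higher is better; duration and error count enter inverted, since
    they are penalties (illustrative normalization to [0, 1])."""
    return 0.5 * completion_rate + 0.3 * (1 - norm_duration) + 0.2 * (1 - norm_errors)

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, dt=1.0):
        """One control step: returns the adjustment to the parameter weight."""
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

score = effect_score(0.8, 0.4, 0.1)  # 0.5*0.8 + 0.3*0.6 + 0.2*0.9
pid = PID(kp=0.6, ki=0.1, kd=0.05)   # illustrative gains, not Z-N tuned
print(round(score, 2), pid.step(setpoint=0.9, measured=score))
```

Each control step nudges the parameter weights toward the target score; the claim's decision log would record the score before and after each nudge.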
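The exception-handling thresholds of claim 7 (20% loss rate, 10% anomaly rate, 90% sustained GPU load) reduce to simple guard logic. A sketch; the source names and the resolution labels are illustrative:

```python
# Sketch of the exception-handling thresholds in claim 7. The percentages
# are from the claim; the source and resolution names are illustrative.
LOSS_RATE_MAX = 0.20
ANOMALY_RATE_MAX = 0.10
GPU_LOAD_MAX = 0.90

def pick_data_source(loss_rate, anomaly_rate, primary="live", backup="standby"):
    """Switch to the standby source on poor data quality."""
    if loss_rate > LOSS_RATE_MAX or anomaly_rate > ANOMALY_RATE_MAX:
        return backup
    return primary

def degrade_rendering(gpu_loads, resolution="1080p"):
    """Disable effects and drop to 720p when GPU load is continuously high."""
    if gpu_loads and all(load > GPU_LOAD_MAX for load in gpu_loads):
        return {"effects": "off", "resolution": "720p"}
    return {"effects": "on", "resolution": resolution}

print(pick_data_source(0.25, 0.05))           # loss rate too high -> standby
print(degrade_rendering([0.95, 0.93, 0.97]))  # sustained load -> degrade
```

Note the `all(...)` check: a single load spike does not degrade the interface, only a continuously exceeded threshold does, matching the claim's "continuously more than 90%" wording.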

Description

Digital aging interface dynamic adaptation method based on multi-source data fusion and related device

Technical Field

The application relates to the technical field of human-computer interaction, in particular to a digital aging interface dynamic adaptation method based on multi-source data fusion and a related device.

Background

With the progress of global population aging, digital technology has penetrated widely into all areas of social life, but the elderly still face serious challenges when using digital products. Existing digital interface designs are generally ill-suited to the physiological and cognitive characteristics of the elderly: small text is hard to read because of deteriorating vision, reduced finger dexterity leads to touch errors, and declining cognitive ability makes complex flows hard to follow, so many elderly users experience strong frustration on first contact with digital applications. Current aging-adaptation schemes mostly apply static, uniform adjustments, such as simply enlarging fonts or adding voice prompts, but ignore the high heterogeneity within the elderly population: individuals differ markedly in vision grade, operation habits, digital experience, and the like, so "one-size-fits-all" adaptation cannot meet personalized needs. At the data level, the prior art relies excessively on limited samples from offline questionnaires or laboratory settings and cannot effectively integrate enterprise public data, real-time user interaction tracks, and the multi-source heterogeneous information acquired by device sensors, so user portraits are one-sided and lag behind reality, and adaptation decisions lack a comprehensive data basis.
The evaluation mechanism is also clearly deficient: it relies mainly on subjective user satisfaction ratings, lacks objective quantitative indicators such as task completion efficiency and operation accuracy, and is disconnected from product iteration, so it cannot drive continuous optimization. In technical implementation, existing aging-adaptation features are often deployed in isolation, such as optimizing only interface elements or adding auxiliary functions; the modules lack coordinated linkage and form technical islands. Such systems adapt poorly and cannot adjust interface parameters in real time according to the user's operation feedback, so the initial configuration persists even when the user is struggling; the algorithmic models rely excessively on preset rules, struggle to capture the dynamic evolution of complex behavior patterns, and respond slowly because of rigid update mechanisms; and cross-platform adaptation strategies are inconsistent, forcing elderly users to repeatedly adapt to different operation logic across digital environments, which markedly raises learning costs. Deeper analysis shows that these problems stem from core defects such as a single data collection dimension, weak algorithm fusion capability, a missing real-time feedback loop, and a non-standardized evaluation system, making accurate, flexible, and sustainable aging interface adaptation hard to achieve in the prior art. In view of the above, improvements are needed in the art.

Disclosure of Invention

The application provides a digital aging interface dynamic adaptation method based on multi-source data fusion and a related device, which can realize accurate, flexible, and sustainable dynamic adaptation of aging interfaces.
In a first aspect, the digital aging interface dynamic adaptation method based on multi-source data fusion provided by the application adopts the following technical scheme. A digital aging interface dynamic adaptation method based on multi-source data fusion comprises the following steps: capturing enterprise public data at regular intervals by deploying distributed crawler clusters, while enabling the device sensor interface to collect user interaction tracks in real time, synchronizing the two types of data through a timestamp alignment module, and inputting them into a data cleaning pipeline; inputting the cleaned data into a multi-modal fusion model, wherein text data is converted into a feature matrix by a TF-IDF vectorizer, behavior data is extracted into time-series features by an LSTM neural network, and the two types of features are weighted and fused by an attention mechanism to generate a dynamic user portrait comprising vision grade, operation preference, and cognitive ability; generating interface parameters from the user portrait through a hybrid decision model combining a rule engine and machine learning, wherein the rule engine