CN-122018742-A - Display content and data processing method, device, system and chip

CN 122018742 A

Abstract

Embodiments of this application provide a display content and data processing method, device, system, and chip. The display content processing method comprises: obtaining a first illustration and first interaction metadata, where the first interaction metadata is used to indicate generation of a first enhanced image corresponding to the first illustration; displaying the first illustration on a display screen; obtaining a first gaze duration of a user on the first illustration; and, when the first gaze duration reaches a first preset duration, generating and displaying the first enhanced image according to the first interaction metadata. With this method, when the user shows a gaze intention toward the first illustration, the first enhanced image corresponding to the first illustration can be generated and displayed based on the first interaction metadata, enhancing the display effect of the first illustration.

Inventors

  • SUN YIN

Assignees

  • Huawei Technologies Co., Ltd. (华为技术有限公司)

Dates

Publication Date
2026-05-12
Application Date
2025-08-06

Claims (20)

  1. A display content processing method, applied to an electronic device, the method comprising: acquiring a first illustration and first interaction metadata, wherein the first interaction metadata is used to indicate generation of a first enhanced image corresponding to the first illustration; displaying the first illustration on a display screen; acquiring a first gaze duration of a user on the first illustration; and, when it is determined that the first gaze duration reaches a first preset duration, generating and displaying the first enhanced image according to the first interaction metadata.
  2. The method of claim 1, wherein the first interaction metadata includes interaction object information, the interaction object information including a coordinate range corresponding to an independent interaction object in the first illustration, and the interaction object information indicating the interaction mode supported by the first illustration.
  3. The method of claim 2, wherein, when it is determined that the first gaze duration reaches the first preset duration, the method further comprises: if the coordinate range is a first coordinate range corresponding to the first illustration, determining that the first illustration supports a global interaction mode; and, in the global interaction mode, displaying interaction prompt information corresponding to the first illustration on the display screen to prompt the user to perform an interaction operation on the first illustration.
  4. The method of claim 2, wherein, when it is determined that the first gaze duration reaches the first preset duration, the method further comprises: if the coordinate range is a second coordinate range corresponding to at least one sub-region or at least one main body element in the first illustration, determining that the first illustration supports a local interaction mode; and, in the local interaction mode, displaying interaction prompt information corresponding to the at least one sub-region on the display screen to prompt the user to perform an interaction operation on a first sub-region, the first sub-region being one of the at least one sub-region, or displaying interaction prompt information corresponding to the at least one main body element on the display screen to prompt the user to perform an interaction operation on a first main body element, the first main body element being one of the at least one main body element.
  5. The method of claim 3 or 4, wherein generating and displaying the first enhanced image according to the first interaction metadata comprises: acquiring image enhancement data corresponding to the first illustration according to the user's interaction operation on a target interaction object, the target interaction object being one of the first illustration, a first sub-region in the first illustration, or a first main body element; and generating and displaying the first enhanced image according to the first interaction metadata and the image enhancement data.
  6. The method of claim 5, wherein acquiring the image enhancement data corresponding to the first illustration according to the user's interaction operation on the target interaction object comprises: acquiring a second gaze duration of the user on the target interaction object; and, when the second gaze duration reaches a second preset duration, determining that the user has performed an interaction operation on the target interaction object and acquiring the image enhancement data corresponding to the first illustration.
  7. The method of claim 5, wherein acquiring the image enhancement data corresponding to the first illustration according to the user's interaction operation on the target interaction object comprises: acquiring the image enhancement data corresponding to the first illustration in response to the user's click operation on the target interaction object.
  8. The method of claim 5, wherein acquiring the image enhancement data corresponding to the first illustration according to the user's interaction operation on the target interaction object comprises: acquiring the image enhancement data corresponding to the first illustration in response to the user's interaction operation on the target interaction object through touching a preset interaction component.
  9. The method of claim 8, wherein responding to the user's interaction operation on the target interaction object through touching the preset interaction component comprises: in response to the user's touch operation on the preset interaction component, determining the touch mode corresponding to the touch operation based on a pre-registered touch listening event; and determining, through a first registration-and-callback interface bound to the touch listening event, that the user has performed an interaction operation on the target interaction object according to the touch mode.
  10. The method of any of claims 5-9, wherein the first interaction metadata includes display description information indicating the enhancement modes supported by the first illustration, and wherein generating and displaying the first enhanced image according to the first interaction metadata and the image enhancement data comprises: determining a first enhancement mode supported by the first illustration according to the display description information; and generating and displaying the first enhanced image according to the first enhancement mode and the image enhancement data.
  11. The method of claim 10, wherein generating and displaying the first enhanced image according to the first enhancement mode and the image enhancement data comprises: if the first enhancement mode is a global enhancement mode, generating an enhanced image of the first illustration according to the image enhancement data and taking the enhanced image as the first enhanced image; and displaying the first enhanced image on the display screen.
  12. The method of claim 10, wherein generating and displaying the first enhanced image according to the first enhancement mode and the image enhancement data comprises: if the first enhancement mode is a local enhancement mode, generating an enhanced image of a first sub-region or a first main body element in the first illustration according to the image enhancement data and taking the enhanced image as the first enhanced image; and displaying the first enhanced image on the display screen.
  13. The method of any of claims 5-12, wherein acquiring the image enhancement data corresponding to the first illustration comprises: acquiring the image enhancement data corresponding to the first illustration from a cloud server; or acquiring the image enhancement data corresponding to the first illustration from a local preset storage space, the image enhancement data having been obtained from the cloud server in advance.
  14. The method of any of claims 10-13, further comprising: performing sharpness enhancement processing on the first illustration according to the first enhancement mode to obtain a second enhanced image; and displaying the second enhanced image on the display screen.
  15. The method of claim 14, wherein performing sharpness enhancement processing on the first illustration according to the first enhancement mode to obtain the second enhanced image comprises: if the first enhancement mode is a global enhancement mode, performing sharpness enhancement processing on the first illustration to obtain a corresponding enhanced image and taking the corresponding enhanced image as the second enhanced image.
  16. The method of claim 14, wherein performing sharpness enhancement processing on the first illustration according to the first enhancement mode to obtain the second enhanced image comprises: if the first enhancement mode is a local enhancement mode, performing sharpness enhancement processing on the first sub-region or the first main body element in the first illustration to obtain a corresponding enhanced image and taking the corresponding enhanced image as the second enhanced image.
  17. The method of any of claims 1-16, wherein obtaining the first illustration and the first interaction metadata comprises: obtaining digital content from a cloud server, the digital content comprising structured text data, text metadata, at least one original picture, and picture metadata, wherein the at least one original picture serves as at least one illustration in the text content, the at least one illustration includes the first illustration, and the text metadata or the picture metadata includes the first interaction metadata; typesetting the structured text data and the at least one original picture according to the text metadata and the picture metadata using a typesetting engine to obtain corresponding typesetting information; and displaying the text content and the at least one illustration on a display screen according to the typesetting information.
  18. The method of claim 17, wherein displaying the first illustration on a display screen comprises: determining, according to the typesetting information, a first display area occupied by the first illustration on the display screen; and displaying the first illustration in the first display area.
  19. The method of claim 18, wherein acquiring the first gaze duration of the user on the first illustration comprises: determining the screen coordinates of the user's gaze point on the display screen; and acquiring the first gaze duration of the user on the first illustration according to the screen coordinates and the first display area.
  20. The method of claim 19, wherein determining the screen coordinates of the user's gaze point on the display screen comprises: acquiring eye information of the user; determining the relative position relationship between the user's gaze point and the display screen from the eye information based on a pre-registered gaze point tracking event; and determining, through a second registration-and-callback interface bound to the gaze point tracking event, the screen coordinates of the user's gaze point on the display screen according to the relative position relationship.
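The overall flow of the claims above (gaze duration triggering, then choosing a global or local interaction mode from the coordinate range in the interaction metadata, per claims 1-4) can be sketched in a few lines. This is an illustrative assumption, not part of the patent: the names `InteractionMetadata`, `interaction_mode`, `on_gaze`, and the 2-second threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InteractionMetadata:
    # Hypothetical shape: coordinate range of the independent interaction
    # object and the full bounds of the illustration, both as (x0, y0, x1, y1).
    coord_range: tuple
    illustration_bounds: tuple

def interaction_mode(meta: InteractionMetadata) -> str:
    # Claims 3-4: a coordinate range covering the whole illustration implies
    # a global interaction mode; a smaller range implies a local one.
    return "global" if meta.coord_range == meta.illustration_bounds else "local"

def on_gaze(gaze_duration_s: float, meta: InteractionMetadata,
            preset_duration_s: float = 2.0):
    # Claim 1: act only once the gaze duration reaches the preset duration.
    if gaze_duration_s < preset_duration_s:
        return None  # no gaze intention yet; keep showing the original
    return interaction_mode(meta)

meta = InteractionMetadata(coord_range=(0, 0, 800, 600),
                           illustration_bounds=(0, 0, 800, 600))
print(on_gaze(2.5, meta))  # gaze long enough, range covers the illustration
print(on_gaze(1.0, meta))  # gaze too short, no action
```

In this reading, the prompt-and-enhance steps of claims 5-12 would hang off the returned mode; the sketch stops at mode selection because that is the branch point the claims define.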

Description

Display content and data processing method, device, system and chip

Technical Field

Embodiments of this application relate to the technical field of digital reading, and in particular to a display content and data processing method, device, system, and chip.

Background

Digital reading offers bulk downloading, no carrying burden, and readability at any time, bringing great convenience to readers. However, abstract or obscure text content and static illustrations cannot satisfy users' increasingly diverse reading needs. Some vendors therefore develop companion AR/VR media for electronic books using Augmented Reality (AR) or Virtual Reality (VR) technology. When a user reads an electronic book, the corresponding AR/VR media is loaded and the corresponding AR/VR animation is displayed. This enriches the form of digital reading and reduces comprehension difficulty; for children's books in particular, it adds interest and stimulates children's enthusiasm for reading. However, producing companion AR/VR media for an electronic book requires professional production tools and teams, the production cycle is long and the cost high, and the terminal device must rely on a specific AR/VR engine and hardware chip to load the AR/VR media, so universality is difficult to guarantee in practice. How to ensure the reading experience while reducing production cost and application difficulty has therefore become a technical problem to be solved in this field.

Disclosure of Invention

Embodiments of this application provide a display content and data processing method, device, system, and chip, which address the problems of high production cost of image enhancement data and the high threshold for displaying enhanced images.
In a first aspect, an embodiment of this application provides a display content processing method applied to an electronic device. The method includes: acquiring a first illustration and first interaction metadata, where the first interaction metadata is used to indicate generation of a first enhanced image corresponding to the first illustration; displaying the first illustration on a display screen; acquiring a first gaze duration of a user on the first illustration; and generating and displaying the first enhanced image according to the first interaction metadata when it is determined that the first gaze duration reaches a first preset duration. In this embodiment, once the user's gaze intention toward the first illustration is determined from the gaze point, the first enhanced image corresponding to the first illustration can be generated and displayed directly according to the first interaction metadata; the electronic device can enhance the first illustration without complex processing, which improves the display effect of the first illustration with a low usage threshold and strong universality. In an optional embodiment, the first interaction metadata includes interaction object information, where the interaction object information includes a coordinate range corresponding to an independent interaction object in the first illustration, and the interaction object information is used to indicate the interaction mode supported by the first illustration, the interaction mode being a global interaction mode or a local interaction mode.
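The gaze-duration acquisition described here (and detailed in claims 18-19: screen coordinates of the gaze point checked against the illustration's display area) can be illustrated with a minimal sketch. All names, the sampling scheme, and the data shapes are assumptions for illustration; the patent does not specify an API.

```python
def in_area(point, area):
    # True if an (x, y) gaze point lies inside a display area
    # given as (x0, y0, x1, y1) in screen coordinates.
    x, y = point
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def gaze_duration(samples, area):
    # Accumulate gaze duration over (timestamp_s, (x, y)) samples whose
    # gaze point falls inside the illustration's display area.
    total, prev_t = 0.0, None
    for t, point in samples:
        if prev_t is not None and in_area(point, area):
            total += t - prev_t
        prev_t = t
    return total

area = (100, 100, 500, 400)  # first display area occupied by the illustration
samples = [(0.0, (200, 200)), (0.5, (220, 210)),
           (1.0, (600, 500)), (1.5, (250, 250))]
print(gaze_duration(samples, area))  # 1.0: two of three intervals in-area
```

Comparing the accumulated value against the first preset duration would then trigger the enhanced-image path described above.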
In this embodiment, the independent interaction object in the first illustration and the interaction mode the first illustration supports can be determined from the interaction object information included in the first interaction metadata, and on this basis different interaction operation modes can be offered to the user, which helps improve the user experience. In an optional embodiment, when the first gaze duration reaches the first preset duration, the display content processing method further includes: determining that the first illustration supports a global interaction mode if the coordinate range is a first coordinate range corresponding to the first illustration; and, in the global interaction mode, displaying interaction prompt information corresponding to the first illustration on the display screen to prompt the user to perform an interaction operation on the first illustration. In an optional embodiment, if the first gaze duration reaches the first preset duration, the display content processing method further includes: determining that the first illustration supports a local interaction mode if the coordinate range is a second coordinate range corresponding to at least one sub-region or at least one main body element in the first illustration; displaying interaction prompt in