US-20260127116-A1 - Memory Page Processing Method and Device, Apparatus, and Storage Medium

US 20260127116 A1

Abstract

According to a memory page processing method, after a memory page is allocated to a virtual address that causes a page fault, the allocated memory page is pre-padded with first data, and then the memory page padded with the first data is accessed. Subsequently, if data in the memory page is migrated to a cache, the first data padded in the memory page is also migrated to the cache. If the data in the memory page is evicted from the cache to a memory, the evicted cache data may also include the first data.

Inventors

  • Zheng Li
  • Shunning Jiang
  • Jie Peng

Assignees

  • HUAWEI TECHNOLOGIES CO., LTD.

Dates

Publication Date
May 7, 2026
Application Date
December 18, 2025
Priority Date
June 20, 2023

Claims (20)

  1. A method comprising: allocating, when a first virtual address accessed by an application program causes a page fault, a first memory page to the first virtual address; padding the first memory page with first data to obtain a padded first memory page, wherein the first data is in a data sequence corresponding to a cache compression mode to be used when cache data written by the application program in a cache is migrated to a memory page; and accessing the padded first memory page based on the first virtual address.
  2. The method of claim 1, wherein padding the first memory page comprises padding the first memory page with the first data when the application program enables a memory pre-padding function, and wherein the memory pre-padding function indicates to pad an empty memory page with the first data before the application program accesses the empty memory page.
  3. The method of claim 2, wherein before padding the first memory page, the method further comprises periodically enabling the memory pre-padding function for the application program based on a running status of the application program.
  4. The method of claim 2, wherein before padding the first memory page, the method further comprises enabling, when a type of the application program is a target type, the memory pre-padding function for the application program, and wherein the cache compression mode is applicable to the application program of the target type.
  5. The method of claim 1, wherein accessing the padded first memory page comprises: receiving a first write request of the application program indicating to write second data to the first virtual address; and updating the first data at a first location corresponding to the first virtual address in the first memory page to the second data.
  6. The method of claim 1, further comprising: receiving a second write request of the application program indicating to write task data to a second virtual address; and updating the first data at a second location corresponding to the second virtual address in a plurality of second memory pages to the task data, wherein the plurality of second memory pages is used to store the task data of a same computing task of the application program, and wherein the plurality of second memory pages is padded with the first data.
  7. The method of claim 1, wherein the first data is each piece of data in the data sequence, S pieces of data that appear most frequently in the data sequence, or R consecutive pieces of data in the data sequence, and wherein S and R are greater than 0.
  8. The method of claim 1, wherein the first data is S pieces of data that appear most frequently in a plurality of data sequences or the first data is data shared by the plurality of data sequences, and wherein the data sequences correspond to different cache compression modes.
  9. A memory page processing method comprising: obtaining an allocation notification message, wherein the allocation notification message notifies that a first memory page has been allocated to a first virtual address for a page fault, and wherein the page fault is caused by an application program accessing the first virtual address; and padding the first memory page with first data, wherein the first data is in a data sequence corresponding to a cache compression mode to be used when cache data written by the application program in a cache is migrated to a memory page.
  10. The method of claim 9, wherein padding the first memory page comprises padding the first memory page with the first data when the application program enables a memory pre-padding function, and wherein the memory pre-padding function indicates to pad an empty memory page with the first data before the application program accesses the empty memory page.
  11. The method of claim 10, wherein before padding the first memory page, the method further comprises periodically enabling the memory pre-padding function for the application program based on a running status of the application program.
  12. The method of claim 10, wherein before padding the first memory page, the method further comprises enabling, when a type of the application program is a target type, the memory pre-padding function, and wherein the cache compression mode is applicable to the application program of the target type.
  13. An electronic device comprising: a memory configured to store program code; and one or more processors coupled to the memory and configured to execute the program code to cause the electronic device to: allocate, when a first virtual address accessed by an application program causes a page fault, a first memory page to the first virtual address; pad the first memory page with first data to obtain a padded first memory page, wherein the first data is in a data sequence corresponding to a cache compression mode to be used when cache data written by the application program in a cache is migrated to a memory page; and access the padded first memory page based on the first virtual address.
  14. The electronic device of claim 13, wherein the one or more processors are further configured to execute the program code to cause the electronic device to further pad the first memory page with the first data when the application program enables a memory pre-padding function, and wherein the memory pre-padding function indicates to pad an empty memory page with the first data before the application program accesses the empty memory page.
  15. The electronic device of claim 14, wherein the one or more processors are further configured to execute the program code to cause the electronic device to periodically enable, before padding the first memory page, the memory pre-padding function based on a running status of the application program.
  16. The electronic device of claim 14, wherein the one or more processors are further configured to execute the program code to cause the electronic device to enable, before padding the first memory page and when a type of the application program is a target type, the memory pre-padding function, and wherein the cache compression mode is applicable to the application program of the target type.
  17. The electronic device of claim 13, wherein the one or more processors are further configured to execute the program code to cause the electronic device to further access the padded first memory page by: receiving a first write request of the application program indicating to write second data to the first virtual address; and updating the first data at a first location corresponding to the first virtual address in the first memory page to the second data.
  18. The electronic device of claim 13, wherein the one or more processors are further configured to execute the program code to cause the electronic device to: receive a second write request of the application program indicating to write task data to a second virtual address; and update the first data at a second location corresponding to the second virtual address in a plurality of second memory pages to the task data, wherein the plurality of second memory pages is used to store the task data of a same computing task of the application program.
  19. The electronic device of claim 13, wherein the first data is each piece of data in the data sequence; the first data is S pieces of data that appear most frequently in the data sequence, wherein S is greater than 0; or the first data is R consecutive pieces of data in the data sequence, wherein R is greater than 0.
  20. The electronic device of claim 13, wherein the first data is S pieces of data that appear most frequently in a plurality of data sequences, wherein the data sequences correspond to different cache compression modes, and S is greater than 0; or the first data is data shared by the plurality of data sequences.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This is a continuation of International Patent Application No. PCT/CN2024/079614 filed on Mar. 1, 2024, which claims priority to Chinese Patent Application No. 202310740304.0 filed on Jun. 20, 2023 and Chinese Patent Application No. 202311086477.1 filed on Aug. 25, 2023, all of which are hereby incorporated by reference in their entireties.

TECHNICAL FIELD

The present disclosure relates to the field of communication technologies, and in particular, to a memory page processing method and device, an apparatus, and a storage medium.

BACKGROUND

To alleviate a performance bottleneck caused by a memory wall, a related technology attempts to bridge the performance gap between a central processing unit (CPU) and a memory through optimization at the microarchitecture level. For example, a chip microarchitecture design may reduce the amount of data in off-chip communication by using a cache compression technology, for example, frequent pattern compression (FPC). Specifically, when data in a cache of the CPU is evicted from a cache line to the memory, a memory management unit (MMU) compresses the data in the cache line based on a cache compression mode that matches the data in the cache line, and the compressed data is stored in the memory. However, in addition to data written by an application program, the cache line may further include some random values. As a result, the data in the cache line may not match any data compression mode well. Data that does not match a compression mode cannot be stored in the memory in compressed form when evicted from the cache line, which reduces the compression ratio of cache data evicted from the cache to the memory.
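The effect described above can be illustrated with a simplified, FPC-style word classifier. This is a minimal sketch, not the patented scheme: it assumes a hypothetical pattern table in which a 32-bit word is compressible only if it is zero or a sign-extended 8-bit value, with a 3-bit prefix per word; real FPC uses a larger pattern set. The point it shows is that residual random words in a cache line inflate the compressed size.

```c
#include <stdint.h>
#include <stddef.h>

/* Return the compressed size, in bits, of a cache line of nwords 32-bit
 * words under a toy FPC-style pattern table (illustrative assumption):
 *   - zero word:                 3-bit prefix only
 *   - sign-extended 8-bit value: 3-bit prefix + 8 data bits
 *   - anything else:             3-bit prefix + 32 uncompressed bits */
size_t fpc_compressed_bits(const uint32_t *line, size_t nwords) {
    size_t bits = 0;
    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = line[i];
        if (w == 0)
            bits += 3;
        else if ((int32_t)w >= -128 && (int32_t)w <= 127)
            bits += 3 + 8;
        else
            bits += 3 + 32;       /* unmatched word stored verbatim */
    }
    return bits;
}
```

Under these assumptions, a 16-word (64-byte) line that is entirely zero compresses from 512 bits to 48 bits, while each random residue word costs the full 35 bits, which is why pre-padding unwritten words with pattern data raises the line's overall compression ratio.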
SUMMARY

Embodiments of the present disclosure provide a memory page processing method and device, an apparatus, and a storage medium, to increase a compression ratio of cache data. The technical solutions are as follows.

According to a first aspect, a memory page processing method is provided. The method includes the following steps: when a virtual address (referred to as a first virtual address) accessed by an application program causes a page fault, allocating a memory page (referred to as a first memory page) to the first virtual address; padding the first memory page with data (referred to as first data) in a data sequence corresponding to a cache compression mode, where the cache compression mode is a data compression mode used when data written by the application program in a cache is migrated to the memory page; and then accessing the padded first memory page based on the first virtual address.

According to the method, after the memory page is allocated to the virtual address that causes the page fault, the allocated memory page is pre-padded with the first data, and then the memory page padded with the first data is accessed. Subsequently, if data in the memory page is migrated to the cache, the first data padded in the memory page is also migrated to the cache. If the data in the memory page is evicted from the cache to a memory, the evicted cache data may also include the first data. Because the first data is the data in the data sequence corresponding to the cache compression mode, when the cache data including the first data is evicted back to the memory, a probability that the cache data matches the cache compression mode can be increased. In this way, a compression ratio of the cache data can be increased.
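The allocate-then-pad flow of the first aspect can be sketched as follows. All names here are illustrative assumptions rather than the disclosed implementation: `PAGE_SIZE`, `PATTERN_WORD` (a word drawn from the compression mode's data sequence; zero is chosen because it matches a zero-run pattern), and `alloc_prepadded_page` are hypothetical.

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u                /* illustrative page size */
#define PATTERN_WORD 0x00000000u      /* "first data": word from the cache
                                         compression mode's data sequence */

/* Hypothetical page-fault handler step: allocate a page for the faulting
 * virtual address, then pre-pad every word with pattern data so that no
 * random residue remains in the unwritten portions of the page. */
uint32_t *alloc_prepadded_page(void) {
    uint32_t *page = malloc(PAGE_SIZE);
    if (page == NULL)
        return NULL;
    for (size_t i = 0; i < PAGE_SIZE / sizeof(uint32_t); i++)
        page[i] = PATTERN_WORD;        /* pad with the first data */
    return page;                       /* caller maps it and resumes access */
}
```

When the application later writes to the page, its writes overwrite the pattern words at the corresponding locations, while untouched words keep the pattern and remain compressible on eviction.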
In a possible implementation, a process of padding the first memory page with the first data may be: padding the first memory page with the first data when the application program enables a memory pre-padding function, where the memory pre-padding function indicates to pad an empty memory page with the first data before the application program accesses the empty memory page. Based on the foregoing possible implementation, a memory page allocated to the application program is pre-padded with the first data only when the application program enables the memory pre-padding function, and there is no need to pre-pad, with the first data, a memory page allocated to every running application program, reducing the workload of padding the first data.

In a possible implementation, before the padding the first memory page with the first data when the application program enables the memory pre-padding function, the method further includes the following step: periodically enabling the memory pre-padding function for the application program based on a running status of the application program.

In a possible implementation, before the padding the first memory page with the first data when the application program enables the memory pre-padding function, the method further includes the following step: when a type of the application program is a target type, enabling the memory pre-padding function for the application program.
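The enable gate described in these implementations can be sketched as a per-application flag. The structure, field names, and the `APP_TYPE_TARGET` identifier are hypothetical, and `0x00` stands in for the first data; the sketch only shows that padding happens when, and only when, the function has been enabled for the application.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-application state for the memory pre-padding function. */
struct app_ctx {
    bool prepad_enabled;   /* set periodically, or once based on app type */
    int  app_type;
};

enum { APP_TYPE_TARGET = 1 };   /* illustrative "target type" identifier */

/* Enable the pre-padding function only for applications of the target
 * type, mirroring the type-based gate described above. */
void maybe_enable_prepad(struct app_ctx *app) {
    if (app->app_type == APP_TYPE_TARGET)
        app->prepad_enabled = true;
}

/* Pad a freshly allocated page only when the function is enabled, so
 * other applications incur no padding work. */
void pad_if_enabled(const struct app_ctx *app, uint8_t *page, size_t len) {
    if (app->prepad_enabled)
        memset(page, 0x00, len);   /* 0x00 stands in for the first data */
}
```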