EP-4293522-B1 - SYSTEM AND METHOD FOR CACHING IN STORAGE DEVICES

EP 4293522 B1

Inventors

  • KANNAN, SUDARSUN
  • REN, Yujie
  • PITCHUMANI, REKHA

Dates

Publication Date
2026-05-06
Application Date
2023-06-02

Claims (10)

  1. A method, comprising: opening a first file, by a first thread; reading a first page of data, from the first file, into a page cache in host (100) memory of a host (100); adding, to a first data structure, a first pointer, the first pointer pointing to the first page of data; opening a second file, by a second thread; reading a second page of data, from the second file, into the page cache; and adding, to the first data structure, a second pointer, the second pointer pointing to the second page of data, the method further comprising: moving the first pointer to a second data structure, different from the first data structure.
  2. The method of claim 1, further comprising: modifying, by the host (100), data in the first page of data.
  3. The method of claim 2, further comprising: flushing the first page to the first file; and moving the first pointer to the first data structure.
  4. The method of any one of claims 1 to 3, wherein the first thread is a member of a first control group, and the second thread is a member of the first control group.
  5. The method of claim 4, wherein the page cache is part of a memory budget of the first control group.
  6. The method of any one of claims 1 to 5, wherein the first file is within a first directory, and the second file is within the first directory.
  7. The method of any one of claims 1 to 6, further comprising receiving a request, from the second thread, to group the second file with the first file.
  8. The method of any one of claims 1 to 7, further comprising: opening a third file, by the first thread; reading a third page of data, from the third file, into the page cache; and adding, to a first data structure, a pointer to the third page of data.
  9. The method of claim 8, further comprising receiving a request, from the first thread, to group the third file with the first file.
  10. A system, comprising: a processing circuit (105); and memory, operatively connected to the processing circuit (105) and storing instructions that, when executed by the processing circuit (105), cause the system (115) to perform the method according to any one of claims 1 to 9.
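The mechanism recited in claims 1 to 3 can be read as a per-group page cache in which pointers to cached pages of grouped files live in one shared list (the first data structure), migrate to a second list when a page is modified, and return on flush. The following is a minimal, hypothetical sketch of that scheme; the class and method names (`GroupedPageCache`, `read_page`, etc.) are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the claimed scheme: pages read from grouped
# files share one "clean" list (the first data structure); modifying a
# page moves its pointer to a "dirty" list (the second data structure),
# and flushing moves it back to the clean list.

class Page:
    def __init__(self, file_name, index, data):
        self.file_name = file_name
        self.index = index
        self.data = data
        self.dirty = False

class GroupedPageCache:
    """Per-group page cache with clean and dirty pointer lists."""
    def __init__(self):
        self.clean = []   # first data structure: pointers to clean pages
        self.dirty = []   # second data structure: pointers to dirty pages

    def read_page(self, file_name, index, data):
        # Read a page from a file in the group into the page cache and
        # add its pointer to the clean (first) list.
        page = Page(file_name, index, data)
        self.clean.append(page)
        return page

    def modify_page(self, page, data):
        # Modifying a cached page moves its pointer to the dirty list.
        page.data = data
        if not page.dirty:
            page.dirty = True
            self.clean.remove(page)
            self.dirty.append(page)

    def flush_page(self, page):
        # Flushing writes the page back to its file (elided here) and
        # returns its pointer to the clean (first) list.
        if page.dirty:
            page.dirty = False
            self.dirty.remove(page)
            self.clean.append(page)

# Usage mirroring claims 1-3: files opened by two threads share one group.
cache = GroupedPageCache()
p1 = cache.read_page("file_a", 0, b"first")    # first thread, first file
p2 = cache.read_page("file_b", 0, b"second")   # second thread, second file
cache.modify_page(p1, b"changed")              # host modifies first page
cache.flush_page(p1)                           # flush returns it to clean
```

Note that both threads' pages land in the same first data structure, which is what lets a control group account for (and reclaim from) the group's cached pages as a unit.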

Description

FIELD

One or more aspects of embodiments according to the present disclosure relate to computing systems, and more particularly to a system and method for caching in storage devices.

BACKGROUND

WO 2022/068760 A1 discloses a method and apparatus for memory management, which can be used for fine-grained management of process access patterns in physical hosts, virtual machines, or containers, and of the traffic consumed by cgroups. The method comprises recording, by means of a first record table corresponding to a target buffer page, all cgroups that have performed input/output on target data in the target buffer page, and the number of inputs/outputs of the target data performed by each of those cgroups. The accuracy of per-cgroup traffic statistics can thereby be improved, and a first cgroup is throttled on the basis of the updated first record table, such that the throttling of the first cgroup can be fairer and more accurate.

US 2022/027327 A1 discloses an apparatus that includes a memory including a shared page cache and program instructions for a distributed virtual file system (VFS) for use in performing input/output (I/O) operations. An operating system of the computing system executes a central VFS in a first thread, and executes a first application and the program instructions for the distributed VFS in a second thread. The distributed VFS determines that a first page, including data to which the first application has requested access, is stored in the shared page cache. In response to the determination, the distributed VFS accesses the requested data from the shared page cache without signaling the operating system or the central VFS. The computing system may be implemented in a device including a microkernel operating system.
CN 104 750 621 B discloses a caching method and a control system. The caching method comprises the following steps: organizing the cache as a cache layer in units of pages, the cache layer being divided into a read cache layer and a write cache layer; and, according to the temporal order of read or write commands, dynamically adjusting in real time the ordering of cached pages in the read cache layer and in the write cache layer. By sharing the read cache layer and the write cache layer, and dynamically adjusting the ordering of cached pages in real time according to the temporal order of read or write commands, the cache hit rate is improved, which efficiently addresses the poor aging behavior of traditional cache layers.

When data is read from persistent storage by a host, the host may cache the data in host memory, and, in some circumstances, the host may prefetch data. The extent to which data are cached may affect the cache hit rate (the fraction of cache read accesses that succeed) and the performance of applications running on the host. It is with respect to this general technical environment that aspects of the present disclosure are related.

SUMMARY

Embodiments of the invention are set out in the appended claims.
According to an embodiment of the present disclosure, there is provided a method, including: opening a first file, by a first thread; reading a first page of data, from the first file, into a page cache in host memory of a host; adding, to a first data structure, a first pointer, the first pointer pointing to the first page of data; opening a second file, by a second thread; reading a second page of data, from the second file, into the page cache; and adding, to the first data structure, a second pointer, the second pointer pointing to the second page of data, the method further including moving the first pointer to a second data structure, different from the first data structure.

In some embodiments, the method further includes: modifying, by the host, data in the first page of data.

In some embodiments, the method further includes: flushing the first page to the first file; and moving the first pointer to the first data structure.

In some embodiments, the first thread is a member of a first control group, and the second thread is a member of the first control group.

In some embodiments, the page cache is part of a memory budget of the first control group.

In some embodiments, the first file is within a first directory, and the second file is within the first directory.

In some embodiments, the method further includes receiving a request, from the second thread, to group the second file with the first file.

In some embodiments, the method further includes: opening a third file, by the first thread; reading a third page of data, from the third file, into the page cache; and adding, to the first data structure, a pointer to the third page of data.

In some embodiments, the method further includes receiving a request, from the first thread, to group the third file with the first file.

According to an embodiment of the