KR-20260063314-A - SYSTEM AND METHOD FOR PROVIDING ULTRA-LOW LATENCY XR CONTENT USING MEMORY-BASED CLOUD CACHING
Abstract
The present invention relates to a memory-based cloud caching system for providing ultra-low latency XR content. A memory-based cloud caching system according to one embodiment may include: a computation cache unit for caching intermediate computation data of XR content; a data structure management unit that uses an extensible data structure combining a hash map and a linked list to store and manage data cached in the computation cache unit; a data replacement unit for replacing data based on a Least Recently Used (LRU) algorithm for data management in the data structure management unit; a garbage collector for recording access times of the cached intermediate computation data and performing garbage collection according to a preset usage standard time; and a cache management unit for storing and retrieving DBMS data and high-dimensional data by dynamically adjusting the cache structure according to the type of cache data.
Inventors
- 최병은
- 이정헌
- 이지훈
Assignees
- 주식회사 나눔기술
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-10-30
Claims (12)
- A memory-based cloud caching system for providing ultra-low latency XR content, the system comprising: a computation cache unit that caches intermediate computation data of XR content; a data structure management unit that stores and manages the data cached in the computation cache unit using an extensible data structure combining a hash map and a linked list; a data replacement unit that replaces data based on a Least Recently Used (LRU) algorithm for data management in the data structure management unit; a garbage collector that records access times of the cached intermediate computation data and performs garbage collection according to a preset usage standard time; and a cache management unit that stores and retrieves DBMS data and high-dimensional data by dynamically adjusting the cache structure according to the type of cache data.
- The memory-based cloud caching system of claim 1, wherein the computation cache unit preferentially caches ambient occlusion data among the intermediate computation data of the XR content, compares newly produced data with existing cache data whenever duplicate computation data occurs, and omits the computation if identical data already exists.
- The memory-based cloud caching system of claim 1, wherein the data structure management unit manages data by applying a chaining technique to resolve hash collisions that may occur in the hash table structure.
- The memory-based cloud caching system of claim 1, wherein the data replacement unit stores the usage history of data evicted by the LRU algorithm in a log when the data is replaced.
- The memory-based cloud caching system of claim 1, wherein the garbage collector determines garbage collection targets by jointly considering the access frequency and access time of the cache data, analyzes user access patterns, and preferentially deletes unnecessary cache data when cache memory usage exceeds a preset threshold.
- The memory-based cloud caching system of claim 1, wherein the cache management unit automatically recognizes structural differences between RDBMS data and high-dimensional data, dynamically adjusts the cache structure to match each data format, and automatically operates when a preset memory usage threshold is reached to keep cache data from exceeding the threshold.
- The memory-based cloud caching system of claim 1, wherein the data structure management unit additionally uses a binary search algorithm that can begin searching from a specific index to minimize node access time in the linked list structure.
- The memory-based cloud caching system of claim 1, wherein the cache management unit, in a multi-user environment, separately manages the data space allocated to each user, dynamically allocates cache data based on each user's connection status, monitors the size of the cache data, and automatically expands to new data structures as usage increases.
- The memory-based cloud caching system of claim 1, wherein the data replacement unit dynamically adjusts data replacement priority based on the access frequency of cached data so that the least used data is replaced first.
- The memory-based cloud caching system of claim 1, wherein the computation cache unit synchronizes data changes between the Edge Node and the cloud when an update of cached data is required.
- The memory-based cloud caching system of claim 1, wherein the data replacement unit maintains and manages separate cache data for each user session.
- A method of operating a memory-based cloud caching system, the method comprising: caching intermediate computation data of XR content; storing and managing the cached data using an extensible data structure combining a hash map and a linked list; managing the data structure by replacing data based on a Least Recently Used (LRU) algorithm; recording access times of cache data and performing garbage collection according to a preset usage standard time; and storing and retrieving DBMS data and high-dimensional data by dynamically adjusting the cache structure according to the type of cache data.
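Claims 1, 9, and 12 describe the classic pairing of a hash map (for O(1) key lookup) with a linked list (for recency ordering) under an LRU replacement policy. The sketch below is a minimal, generic illustration of that pattern, not the patented implementation; all class and method names are illustrative.

```python
class _Node:
    """Doubly linked list node holding one cache entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """Hash map + doubly linked list; evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                    # hash map: key -> node (O(1) lookup)
        self.head = _Node(None, None)    # sentinel: most recently used side
        self.tail = _Node(None, None)    # sentinel: least recently used side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None                  # cache miss
        self._unlink(node)               # mark as most recently used
        self._push_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.tail.prev     # least recently used entry
                self._unlink(lru)
                del self.map[lru.key]    # evict from the hash map too
            node = _Node(key, value)
            self.map[key] = node
        self._push_front(node)
```

Because every `get` moves the touched node to the front of the list, the node before the tail sentinel is always the least recently used entry, so eviction is O(1) as well.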
Description
Memory-based cloud caching system and method for providing ultra-low latency XR content

The present invention relates to a memory-based cloud caching system for providing ultra-low latency XR content and, more specifically, to a technology for optimizing data retrieval and system performance by efficiently caching and managing intermediate computation data of XR content.

Recently, XR (Extended Reality) content has been used in various application fields such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), and active technological development is underway to provide high-quality content in real time. Such technology requires a method to efficiently manage network and computing resources so that large-scale data can be processed in real time and delivered to users without delay.

A cloud-based caching system is designed to provide XR content in a cloud environment, using caching technology to reduce data transmission time and latency. Through this, such systems are evolving toward structures that minimize redundant data transmission and deliver data quickly on the client side. In particular, methods that efficiently manage and replace cached data using algorithms such as Least Recently Used (LRU) are widely adopted.

Regarding the optimization of data structures, high-performance caching systems are developing technologies that use data structures such as hash tables and linked lists to optimize data access speed and to resolve collisions effectively when they occur. A data structure flexible enough to process various forms of data is required alongside efficient data management.

Because XR content requires real-time processing of large volumes of data, there are difficulties in managing network latency and maintaining data consistency.
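The collision-resolution approach mentioned above (and claimed via the chaining technique) can be sketched as a hash table whose buckets hold chains of key-value pairs. This is a generic textbook illustration under assumed names, not the system's actual data structure management unit.

```python
class ChainedHashTable:
    """Hash table resolving collisions by chaining: each bucket is a list."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # Same hash value (mod bucket count) -> same chain.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (possibly colliding): extend chain

    def get(self, key):
        for k, v in self._bucket(key):   # walk the chain for this bucket
            if k == key:
                return v
        return None                      # miss
```

With a deliberately small bucket count, multiple keys share a bucket, and lookups degrade gracefully to a short linear scan of the chain rather than failing.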
In particular, if data is not properly managed when it is updated or becomes unused in a caching system, delays that affect the user experience may occur, or the latest data may fail to be reflected.

Memory usage can also be inefficient. When a cache system operates in a cloud environment, memory resources may be used wastefully: if infrequently accessed data occupies memory, or garbage collection is not performed properly, system performance degrades and memory usage becomes excessive.

In addition, there are difficulties in processing various data types. XR content includes diverse forms of data, such as complex high-dimensional data and DBMS data, in addition to general media data. Processing these data consistently requires cache management and data structure adjustments optimized for each data type, but implementing this effectively poses technical difficulties.

FIG. 1 is a diagram illustrating the connection structure between the components and the control unit of a memory-based cloud caching system for providing ultra-low latency XR content according to one embodiment. FIG. 2 is a diagram illustrating the operation of a rendering process and a cache module of an ultra-low latency XR content provision system according to one embodiment. FIG. 3 is a diagram illustrating a hash table structure and a data mapping process according to one embodiment. FIG. 4 is a diagram illustrating a structure for managing fields and values of cache data according to one embodiment. FIG. 5 is a diagram illustrating the operation of cache garbage collection and its linkage with the memory DB according to one embodiment. FIG. 6 is a diagram illustrating the results of analyzing data hit and miss situations in a cache system according to one embodiment over time. FIG. 7 is a flowchart illustrating the operation procedure of a cache system according to one embodiment.
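The garbage-collection behavior described above, in which entries are reclaimed according to a preset usage standard time while access frequency is also considered, can be sketched as follows. This is a hedged illustration under assumed names: each cache entry is modeled as a record carrying a last-access timestamp and an access count, which is one plausible bookkeeping scheme, not the patent's actual one.

```python
import time


def collect_garbage(entries, max_idle_seconds, min_access_count=1, now=None):
    """Evict entries idle longer than the usage standard time.

    `entries` maps key -> {"last_access": epoch seconds, "access_count": int}.
    Entries accessed more than `min_access_count` times are retained even when
    idle, reflecting the joint time/frequency criterion. Returns evicted keys.
    """
    now = time.time() if now is None else now
    evicted = []
    for key, meta in list(entries.items()):       # copy: we mutate while iterating
        idle = now - meta["last_access"]
        if idle > max_idle_seconds and meta["access_count"] <= min_access_count:
            evicted.append(key)
            del entries[key]
    return evicted
```

In a real deployment this routine would run periodically, or be triggered when memory usage crosses a threshold, as the description suggests; here it is invoked explicitly with a fixed `now` for determinism.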
Specific structural or functional descriptions of the embodiments disclosed herein are provided merely to explain embodiments according to the concept of the present invention; such embodiments may be implemented in various forms and are not limited to the embodiments described herein. Because embodiments according to the concept of the present invention may be modified in various ways and take various forms, embodiments are illustrated in the drawings and described in detail in this specification. However, this is not intended to limit embodiments according to the concept of the present invention to the specific forms disclosed, and the invention includes all modifications, equivalents, and substitutions falling within its spirit and scope. Terms such as "first" or "second" may be used to describe various components, but said components should not be limited by said terms. Said terms are used solely to distinguish one component from another; for example, without departing from the scope of rights according to the concept of the present invention, the first componen