KR-20260066934-A - SERVER, METHOD, AND PROGRAM FOR PROVIDING OBJECT MOVEMENT PATH BASED ON IMAGE ANALYSIS
Abstract
Disclosed is a server including: a data receiving unit that receives, from a user terminal, a movement path of an object, a 2D image generated based on the movement path, and an interaction variable; an image processing unit that converts the 2D image into a multidimensional image by adding the interaction variable of the object to each coordinate on the movement path; a feature vector extraction unit that extracts a feature vector from the multidimensional image by considering the interaction variable through a pre-trained learning model; and a result prediction unit that predicts the next movement position of the object in an image space in which the multidimensional image is clustered based on the feature vector.
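The conversion described above — attaching an interaction variable to each coordinate on the movement path to turn the 2D image into a multidimensional image — can be sketched minimally as follows. The patent does not specify the data representation; the function name `build_multidimensional_image`, the `(row, col)` path format, and the extra-channel layout are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def build_multidimensional_image(image_2d, path, interaction_values):
    """Attach one interaction value per path coordinate as an extra channel.

    image_2d           : (H, W) array representing the 2D space
    path               : iterable of (row, col) coordinates on the movement path
    interaction_values : one interaction-variable value per path coordinate
    """
    channel = np.zeros(image_2d.shape, dtype=float)
    for (r, c), value in zip(path, interaction_values):
        channel[r, c] = value
    # Result is (H, W, 2): the original image plus the interaction channel,
    # a "multidimensional image" in the sense of the abstract.
    return np.stack([image_2d.astype(float), channel], axis=-1)
```

A feature extractor (the pre-trained learning model of the abstract) would then consume this stacked array rather than the raw 2D image alone.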
Inventors
- 김경수
Assignees
- KT Corporation (주식회사 케이티)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2024-11-05
Claims (17)
- An image analysis-based object movement path providing server, comprising: a data receiving unit that receives, from a user terminal, a movement path of an object, a 2D image generated based on the movement path, and an interaction variable; an image processing unit that converts the 2D image into a multidimensional image by adding the interaction variable of the object to each coordinate on the movement path; a feature vector extraction unit that extracts a feature vector from the multidimensional image by considering the interaction variable through a pre-trained learning model; and a result prediction unit that predicts a next movement position of the object in an image space in which the multidimensional image is clustered based on the feature vector.
- The server of claim 1, wherein the image processing unit converts the 2D image into the multidimensional image by further considering a movement speed and a dwell time variable of the object in addition to the interaction variable.
- The server of claim 1, wherein the interaction variable comprises at least one of an interaction frequency, an interaction duration, and an interaction sensitivity level for a predetermined interaction behavior of the object.
- The server of claim 3, wherein the interaction variable is calculated by applying either the interaction duration or the interaction sensitivity level as a weight to a value obtained by dividing the interaction frequency by a maximum interaction frequency.
- The server of claim 1, wherein the result prediction unit derives a number of clusters based on a display resolution of the user terminal, and generates the image space by clustering the multidimensional image into the derived number of clusters.
- The server of claim 1, wherein the result prediction unit selects, from among the clustered image spaces, an image space close to a current position of the object, and assigns a feature vector to a position corresponding to the center of the selected image space.
- The server of claim 6, wherein the result prediction unit determines the next movement position based on a probability frequency of the next movement position of the object calculated using the pre-trained learning model.
- The server of claim 1, wherein the result prediction unit displays a direction indicator on the multidimensional image to guide the object to the next movement position.
- A method for providing an image analysis-based object movement path, performed by an image analysis-based object movement path providing server, the method comprising: receiving, from a user terminal, a movement path of an object, a 2D image generated based on the movement path, and an interaction variable; converting the 2D image into a multidimensional image by adding the interaction variable of the object to each coordinate on the movement path; extracting a feature vector from the multidimensional image by considering the interaction variable through a pre-trained learning model; and predicting a next movement position of the object in an image space in which the multidimensional image is clustered based on the feature vector.
- The method of claim 9, wherein in the converting, the 2D image is converted into the multidimensional image by further considering a movement speed and a dwell time variable of the object in addition to the interaction variable.
- The method of claim 9, wherein the interaction variable comprises at least one of an interaction frequency, an interaction duration, and an interaction sensitivity level for a predetermined interaction behavior of the object.
- The method of claim 11, wherein the interaction variable is calculated by applying either the interaction duration or the interaction sensitivity level as a weight to a value obtained by dividing the interaction frequency by a maximum interaction frequency.
- The method of claim 9, wherein in the predicting, a number of clusters is derived based on a display resolution of the user terminal, and the image space is generated by clustering the multidimensional image into the derived number of clusters.
- The method of claim 9, wherein in the predicting, an image space close to a current position of the object is selected from among the clustered image spaces, and a feature vector is assigned to a position corresponding to the center of the selected image space.
- The method of claim 14, wherein in the predicting, the next movement position is determined based on a probability frequency of the next movement position of the object calculated using the pre-trained learning model.
- The method of claim 9, wherein in the predicting, a direction indicator is displayed on the multidimensional image to guide the object to the next movement position.
- A computer program stored on a computer-readable recording medium for providing an image analysis-based object movement path, the computer program comprising a sequence of instructions that, when executed, cause a computer to: receive, from a user terminal, a movement path of an object, a 2D image generated based on the movement path, and an interaction variable; convert the 2D image into a multidimensional image by adding the interaction variable of the object to each coordinate on the movement path; extract a feature vector from the multidimensional image by considering the interaction variable through a pre-trained learning model; and predict a next movement position of the object in an image space in which the multidimensional image is clustered based on the feature vector.
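Two of the dependent claims are concrete enough to sketch: the interaction variable is the interaction frequency divided by the maximum frequency, weighted by either the interaction duration or the sensitivity level, and the number of clusters is derived from the display resolution of the user terminal. A minimal sketch under those claims follows; the claims do not state the actual resolution-to-cluster mapping, so the `pixels_per_cluster` divisor below is a placeholder assumption, and both function names are illustrative.

```python
def interaction_variable(frequency, max_frequency, weight):
    """Normalized interaction frequency weighted by either the
    interaction duration or the interaction sensitivity level."""
    return (frequency / max_frequency) * weight

def cluster_count(display_width, display_height, pixels_per_cluster=160_000):
    """Derive a cluster count from the display resolution. The patent only
    says the count is based on the resolution; the divisor here is an
    assumed mapping, not the claimed one."""
    return max(1, (display_width * display_height) // pixels_per_cluster)
```

For example, an object interacted with 4 times out of a maximum of 8, weighted by a duration of 0.5, yields an interaction variable of 0.25; a 1920x1080 display yields 12 clusters under the assumed divisor.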
Description
Server, method, and computer program for providing object movement path based on image analysis

The present invention relates to an image analysis-based object movement path providing server, method, and program.

Recently, the metaverse, an immersive virtual space created to mimic the real world, has been gaining popularity. The metaverse integrates Web 3.0 technologies such as decentralization, blockchain, and NFTs to provide more realistic digital content. With advancements in technologies such as graphics processing, digital content is expanding from the traditional web environment into new media technologies such as XR and 3D virtual spaces. Accordingly, the W3C identified accessibility issues and solutions that vulnerable groups might face in XR environments, published a draft of the User Requirements for Extended Reality Accessibility in 2020, and continues to develop the document through ongoing communication with user and expert groups. The W3C's User Requirements for Extended Reality Accessibility recommends considering the provision of personalized environments, interaction and targeting, and notifications to improve accessibility for advanced digital content such as the metaverse. Nevertheless, while various studies are being conducted to enhance accessibility in the metaverse, related technologies remain underdeveloped.

Large-scale metaverses both domestically and internationally, such as ZEPETO and Ifland, use game UI/UX to reach users easily. However, for the elderly and the digitally vulnerable, barriers begin with the controls, and the most common issue is not knowing where to go or what to do after entering the space. Examining accessibility methods within existing metaverse spaces, most require the user to perform a direct action to reach the final destination. For instance, borders and alternative text are displayed only after the user hovers the mouse over an object or moves close to it.
Voices, too, are typically perceived only when the user is in close proximity. Because these methods provide guidance only after the user has clicked around and explored various objects, rather than immediately indicating what to look for upon entering the space, users may experience particular difficulty during initial navigation. Since it is very hard to obtain information immediately about where to go, what to do, or what to click upon entry, users frequently leave the space after moving around or clicking a few times.

According to a 2023 survey on accessibility in the metaverse, the technologies for this purpose include the following. First is alternative text. Since various visual objects such as images and avatars exist in the metaverse, this method assigns alternative text to them, for example displaying text when the mouse hovers over an object. However, as of 2023, the rate at which alternative text is actually provided is very low, at 18.7%. Second is technology utilizing non-visual elements such as voice, for example distance-based sound that grows louder as avatars approach. Finally, there is technology that, in addition to alternative text, emphasizes that an object is clickable, such as adding a border to an object that needs to be clicked when the user approaches it.

FIG. 1 is a configuration diagram for explaining an image analysis-based object movement path providing system according to one embodiment of the present invention. FIG. 2 is a configuration diagram for explaining an image analysis-based object movement path providing server according to an embodiment of the present invention. FIG. 3 is a configuration diagram illustrating a user terminal according to an embodiment of the present invention. FIGS. 4 and 5 are drawings illustrating a canvas output through a platform providing a metaverse space according to an embodiment of the present invention. FIG. 6 is a drawing illustrating an example of a canvas output through a platform providing a virtual workspace according to an embodiment of the present invention. FIG. 7 is a drawing illustrating information provided to a user terminal according to an embodiment of the present invention. FIG. 8 is a flowchart illustrating an image analysis-based object movement path providing method performed in an image analysis-based object movement path providing server and a user terminal according to an embodiment of the present invention.

Embodiments of the present invention are described below with reference to the attached drawings so that those skilled in the art can easily implement the invention. However, the present invention may be embodied in various different forms and is not limited to the embodiments described herein. Furthermore, in order to explain the present invention clearly, parts unrelated to the explanation have been omitted from the drawings, and similar parts are denoted by similar reference numerals throughout the specification. Throughout th