EP-4742007-A1 - WEARABLE DEVICE AND METHOD FOR IDENTIFYING LOCATION OF TARGET OBJECT
Abstract
This wearable device comprises a display, one or more cameras, and at least one processor. The at least one processor is configured to identify information about a target object. The at least one processor is configured to identify, in at least one image, a visual object related to an external object corresponding to the target object. The at least one processor is configured to identify whether a first image including the visual object is displayed. The at least one processor is configured to modify the first image to emphasize the visual object on the basis of identifying that the first image including the visual object is displayed. The at least one processor is configured to, on the basis of identifying that a second image distinguished from the first image including the visual object is displayed, overlay an affordance for changing a user's gaze to display the first image onto the second image.
Inventors
- CHOI, SUNGSOO
- KIM, Sungoh
- YEOM, DONGHYUN
- KIM, Beomsu
- LEE, BOYOUNG
- LEE, SANGHUN
- CHO, HYOJIN
Assignees
- Samsung Electronics Co., Ltd.
Dates
- Publication Date
- 2026-05-13
- Application Date
- 2024-05-31
Claims (15)
- A wearable device comprising: a display; one or more cameras; memory including one or more storage mediums storing instructions; and at least one processor comprising processing circuitry, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: identify information on a target object; identify a visual object related to an external object corresponding to the target object in at least one image acquired by the one or more cameras and obtained based on a gaze of a user of the wearable device during a designated time interval; identify whether a first image including the visual object is displayed through the display based on the gaze of the user; based on identifying that the first image including the visual object is displayed through the display, change the first image to emphasize the visual object; and based on identifying that a second image, which is distinct from the first image including the visual object, is displayed through the display, display an affordance for changing the gaze of the user to display the first image by overlapping the second image.
- The wearable device of claim 1, wherein the information on the target object comprises information on a shape and/or a visual pattern of the target object and information on a size of the target object, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: identify one or more candidate visual objects corresponding to the target object based on the information on the shape and/or the visual pattern of the target object; and based on the information on the size of the target object, identify the visual object among the one or more candidate visual objects.
- The wearable device of claim 2, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: identify at least one target image related to the target object; and identify, based on the at least one target image, the information on the shape and/or the visual pattern of the target object.
- The wearable device of claim 3, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to identify, based on a user input, the information on the size of the target object.
- The wearable device of claim 1, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to, based on identifying that the first image including the visual object is displayed through the display, change a display of a remaining area other than an area corresponding to the visual object in the first image.
- The wearable device of claim 5, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to display information on a distance between the external object and the wearable device in association with the visual object.
- The wearable device of claim 1, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: based on obtaining information on a plurality of visual objects included in the at least one image, store the information on the plurality of visual objects in the memory; set the information on the plurality of visual objects and the information on the target object as an input value of a designated model indicated by a plurality of parameters; and based on an output value of the designated model, identify the visual object related to the external object corresponding to the target object in the at least one image.
- The wearable device of claim 1, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: obtain an image of an external environment including the first image and the second image using the one or more cameras; and based on a portion of the image of the external environment, display one of the first image and the second image through the display.
- The wearable device of claim 1, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: request identification of the visual object to an external wearable device connected to the wearable device; receive information on the visual object identified through the external wearable device based on the request; and based on the information on the visual object, display an element for indicating a location of the external object.
- The wearable device of claim 9, wherein the information on the visual object comprises at least one of information on the location of the external object or information on a time at which the visual object was identified.
- The wearable device of claim 10, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: identify a spatial map of an external space including a first location where the wearable device is located and a second location where the external wearable device is located, based on the information on the visual object; and identify a location of the external object as a third location in the spatial map.
- The wearable device of claim 11, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: identify a path from the first location where the wearable device is located to the third location identified as the location of the external object based on the location information of the external object; and based on the identified path, display the element for indicating the location of the external object.
- The wearable device of claim 1, wherein the instructions, when being executed by the at least one processor individually or collectively, cause the wearable device to: obtain a spatial map of the external object based on the at least one image; and based on the spatial map, identify the visual object related to the external object.
- A method performed by a wearable device comprising: identifying information on a target object; identifying a visual object related to an external object corresponding to the target object in at least one image obtained based on a gaze of a user of the wearable device during a designated time interval; identifying whether a first image including the visual object is displayed through a display of the wearable device based on the gaze of the user; based on identifying that the first image including the visual object is displayed through the display, changing the first image to emphasize the visual object; and based on identifying that a second image, which is distinct from the first image including the visual object, is displayed through the display, displaying an affordance for changing the gaze of the user to display the first image by overlapping the second image.
- The method of claim 14, wherein the information on the target object comprises information on a shape and/or a visual pattern of the target object and information on a size of the target object, wherein the method further comprises: identifying one or more candidate visual objects corresponding to the target object based on the information on the shape and/or the visual pattern of the target object, and based on the information on the size of the target object, identifying the visual object among the one or more candidate visual objects.
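The two-stage matching in claims 2 and 15 (filter candidates by shape and/or visual pattern, then select among them by size) can be sketched as follows. The patent discloses no code or algorithm; this is a minimal illustrative Python sketch in which the class `DetectedObject`, the function name `select_visual_object`, and the tolerance `size_tol` are all hypothetical names and parameters, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str    # shape/visual-pattern class assigned by a detector (assumed)
    size_m: float # estimated physical size in meters (assumed)
    bbox: tuple   # (x, y, w, h) in image coordinates

def select_visual_object(detections, target_label, target_size_m, size_tol=0.25):
    """Two-stage match sketched from claims 2 and 15.

    Stage 1: keep candidate visual objects whose class matches the
    target's shape/visual pattern.
    Stage 2: among those candidates, identify the visual object by size.
    """
    candidates = [d for d in detections if d.label == target_label]
    # Reject candidates whose size deviates from the target size by more
    # than the (hypothetical) relative tolerance, then take the closest.
    in_band = [d for d in candidates
               if abs(d.size_m - target_size_m) / target_size_m <= size_tol]
    return min(in_band, key=lambda d: abs(d.size_m - target_size_m), default=None)
```

For example, given two "mug" detections of 0.09 m and 0.30 m and a target size of 0.10 m, stage 1 keeps both mugs and stage 2 returns the 0.09 m one; if no candidate falls within the size band, the function returns `None`, mirroring the case where no visual object corresponds to the target object.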
Description
BACKGROUND

Technical Field

The following description relates to a wearable device and a method for identifying a location of a target object.

Description of Related Art

In order to provide an enhanced user experience, an electronic device providing an augmented reality (AR) service and/or a virtual reality (VR) service can display information generated by a computer in association with an external object within the real world. The electronic device may be a wearable device that may be worn by a user. For example, the electronic device may be AR glasses and/or a head-mounted device (HMD). The above-described information may be provided as related art for the purpose of helping to understand the present disclosure. No claim or determination is made as to whether any of the above-described information may be applied as prior art related to the present disclosure.

SUMMARY

According to an embodiment, a wearable device may include a display, one or more cameras, memory including one or more storage mediums storing instructions, and at least one processor comprising processing circuitry. The instructions, when being executed by the at least one processor individually or collectively, may cause the wearable device to identify information on a target object. The instructions, when being executed by the at least one processor individually or collectively, may cause the wearable device to identify a visual object related to an external object corresponding to the target object in at least one image acquired by the one or more cameras and obtained based on a gaze of a user of the wearable device during a designated time interval. The instructions, when being executed by the at least one processor individually or collectively, may cause the wearable device to identify whether a first image including the visual object is displayed through the display based on the gaze of the user.
The instructions, when being executed by the at least one processor individually or collectively, may cause the wearable device to change the first image to emphasize the visual object, based on identifying that the first image including the visual object is displayed through the display. The instructions, when being executed by the at least one processor individually or collectively, may cause the wearable device to display an affordance for changing the gaze of the user to display the first image by overlapping the second image, based on identifying that a second image, which is distinct from the first image including the visual object, is displayed through the display.

According to an embodiment, a method of a wearable device may comprise identifying information on a target object, and identifying a visual object related to an external object corresponding to the target object in at least one image obtained based on a gaze of a user of the wearable device during a designated time interval. The method may comprise identifying whether a first image including the visual object is displayed through a display of the wearable device based on the gaze of the user. The method may comprise, based on identifying that the first image including the visual object is displayed through the display, changing the first image to emphasize the visual object. The method may comprise, based on identifying that a second image, which is distinct from the first image including the visual object, is displayed through the display, displaying an affordance for changing the gaze of the user to display the first image by overlapping the second image.

According to an embodiment, a method of a wearable device may include identifying information on a target object and requesting identification of a visual object related to the target object to an external wearable device connected to the wearable device.
The method may include receiving information on the visual object identified through the external wearable device based on the request and, based on the information on the visual object, displaying an element for indicating a location of the external object on a display of the wearable device.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an electronic device in a network environment, according to an embodiment.
FIGS. 2A and 2B illustrate an example of a perspective view of a wearable device, according to an embodiment.
FIGS. 3A and 3B illustrate an example of the exterior of a wearable device, according to an embodiment.
FIG. 4 illustrates an example of a block diagram of a wearable device, according to an embodiment.
FIG. 5 illustrates an example of a screen displayed through a display of a wearable device, according to an embodiment.
FIG. 6 illustrates components of a wearable device for identifying an external object corresponding to a target object, according to an embodiment.
FIG. 7 is a flowchart illustrating an operation of a wearable device, according to an embodiment.
FIG. 8A illustrates an example of an operation of a wearable device, according to an embodiment.
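The core display decision in the summary and in claim 1 — emphasize the visual object when it lies within the displayed first image, otherwise overlay an affordance steering the user's gaze toward it — can be sketched as follows. The patent specifies no implementation; the function `update_display`, the rectangle-containment visibility test, and the four-way direction choice are all simplifying assumptions made for illustration.

```python
def update_display(visual_bbox, viewport):
    """Sketch of the claim-1 branch: emphasize the visual object if it is
    inside the displayed image, else return a gaze-direction affordance.

    visual_bbox: (x, y, w, h) of the visual object in world-image coordinates.
    viewport:    (x, y, w, h) of the region currently shown on the display.
    """
    x, y, w, h = visual_bbox
    vx, vy, vw, vh = viewport
    # Visibility here is simple rectangle containment (an assumption); a
    # real device would derive this from the user's tracked gaze.
    visible = (vx <= x and x + w <= vx + vw and
               vy <= y and y + h <= vy + vh)
    if visible:
        # "First image" case: change the image to emphasize the object.
        return {"action": "emphasize", "region": visual_bbox}
    # "Second image" case: point an affordance (e.g. an arrow) from the
    # viewport center toward the object's center.
    cx, cy = x + w / 2, y + h / 2
    dx, dy = cx - (vx + vw / 2), cy - (vy + vh / 2)
    if abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
    else:
        direction = "down" if dy > 0 else "up"
    return {"action": "affordance", "direction": direction}
```

For instance, an object at (10, 10) inside a 100×100 viewport yields the emphasize branch, while an object at x = 200 yields an affordance pointing right, corresponding to the claim's affordance "for changing the gaze of the user to display the first image."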