EP-4741878-A1 - METHOD FOR GENERATING SPATIAL MAP USING CAPTURED IMAGE OF TARGET AREA AND ELECTRONIC DEVICE FOR PERFORMING SAME

EP 4741878 A1

Abstract

A method of generating a spatial map includes obtaining a captured image of a target area in a space, obtaining light detection and ranging (LiDAR) scan data by scanning a depth of the target area with respect to a first height, performing object detection on the captured image, and, according to the performed object detection, generating a spatial map of the target area based on both the LiDAR scan data and the captured image.

Inventors

  • CHOI, Isak
  • BYUN, Dongnam
  • Hwang, Jinyoung

Assignees

  • Samsung Electronics Co., Ltd.

Dates

Publication Date
2026-05-13
Application Date
2024-10-24

Claims (15)

  1. A method comprising: obtaining a captured image of a target area in a space; obtaining light detection and ranging (LiDAR) scan data by scanning a depth of the target area with respect to a first height; performing object detection on the captured image; and according to the performed object detection, generating a spatial map of the target area based on the LiDAR scan data and the captured image.
  2. The method of claim 1, wherein, based on an object being detected from the captured image, the generating the spatial map comprises: obtaining depth values at a plurality of spots along a second height different from the first height in the target area, based on the captured image; and generating the spatial map, based on the obtained depth values.
  3. The method of any one of claims 1 and 2, wherein the obtaining the depth values comprises: obtaining a depth image from the captured image; performing depth calibration for making a scale of the LiDAR scan data identical to a scale of the depth image; determining, as the second height, a height at which the detected object is not detected in the target area; and obtaining depth values at the plurality of spots along the second height from the depth image, based on a result of the depth calibration.
  4. The method of any one of claims 1 to 3, wherein the performing the depth calibration comprises: obtaining absolute depth values at a plurality of spots along the first height from the LiDAR scan data; obtaining relative depth values at the plurality of spots along the first height from the depth image; and obtaining a scale factor for transforming a relative depth value included in the depth image into an absolute depth value, based on the obtained absolute depth values and the obtained relative depth values.
  5. The method of any one of claims 1 to 4, wherein the obtaining the scale factor comprises determining a value of the scale factor such that a difference between the absolute depth values of the plurality of spots along the first height, and a product of the relative depth values of the plurality of spots along the first height and the scale factor is minimal, according to a cost function.
  6. The method of any one of claims 1 to 5, wherein the determining the second height comprises: identifying a range of heights at which the detected object is not detected in the target area, based on the captured image; and determining a height within the identified range as the second height.
  7. The method of any one of claims 1 to 6, wherein the determining the second height comprises: based on the depth image, determining borders of the spatial map of the target area at a plurality of heights different from the first height; and determining, as the second height, a height corresponding to a border having a minimum number of junctions among the determined borders.
  8. The method of any one of claims 1 to 7, wherein the determining the second height comprises: based on the depth image, calculating areas of the spatial map of the target area at a plurality of heights different from the first height; and determining a height corresponding to a maximum area among the calculated areas as the second height.
  9. The method of any one of claims 1 to 8, further comprising: based on an object being detected from the captured image, determining an area at which the detected object is located, by comparing a border of the spatial map determined based only on the LiDAR scan data with a border of the spatial map determined based on the LiDAR scan data and the captured image; and displaying the detected object on the determined area on the spatial map.
  10. A non-transitory computer-readable recording medium having recorded thereon a computer program, which, when executed by a computer, performs the method of any one of claims 1 to 9.
  11. An electronic device 100 for generating a spatial map, the electronic device comprising: memory 130 storing a program for generating a spatial map; and at least one processor 120, wherein, by executing the program stored in the memory 130, the at least one processor 120 is configured to: obtain a captured image of a target area in a space; obtain light detection and ranging (LiDAR) scan data by scanning a depth of the target area with respect to a first height; perform object detection on the captured image; and according to the performed object detection, generate a spatial map of the target area, based on the LiDAR scan data and the captured image.
  12. The electronic device of claim 11, wherein, in the generating the spatial map, the at least one processor 120 is configured to: based on an object being detected from the captured image, obtain depth values at a plurality of spots along a second height different from the first height in the target area, based on the captured image, and generate the spatial map, based on the obtained depth values.
  13. The electronic device of any one of claims 11 and 12, wherein, in the obtaining the depth values, the at least one processor 120 is configured to: obtain a depth image from the captured image, perform depth calibration for making a scale of the LiDAR scan data identical to a scale of the depth image, determine, as the second height, a height at which the detected object is not detected in the target area, and obtain depth values at the plurality of spots along the second height from the depth image, based on a result of the depth calibration.
  14. The electronic device of any one of claims 11 to 13, wherein, in the performing the depth calibration, the at least one processor 120 is configured to: obtain absolute depth values at a plurality of spots along the first height from the LiDAR scan data, obtain relative depth values at the plurality of spots along the first height from the depth image, and obtain a scale factor for transforming a relative depth value included in the depth image into an absolute depth value, based on the obtained absolute depth values and the obtained relative depth values.
  15. The electronic device of any one of claims 11 to 14, wherein, in the obtaining the scale factor, the at least one processor 120 is configured to: determine a value of the scale factor such that a difference between the absolute depth values at the plurality of spots along the first height, and a product of the relative depth values at the plurality of spots along the first height and the scale factor is minimal, according to a cost function.
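The depth calibration of claims 4, 5, 14, and 15 amounts to a one-parameter least-squares fit: choose a scale factor s that minimizes the difference between the absolute LiDAR depths and the product of the relative depths and s, per the cost function. A minimal sketch of that fit follows; it is illustrative only and not taken from the patent — the NumPy representation and function names are assumptions:

```python
import numpy as np

def fit_scale_factor(abs_depths, rel_depths):
    """Scale factor s minimizing sum_i (a_i - s * r_i)^2, where a_i are
    absolute depth values (from LiDAR scan data) and r_i are relative
    depth values (from the monocular depth image) at the same spots
    along the first height.

    Setting the derivative of the cost to zero gives the closed form
    s = sum(a * r) / sum(r * r).
    """
    a = np.asarray(abs_depths, dtype=float)
    r = np.asarray(rel_depths, dtype=float)
    return float(np.dot(a, r) / np.dot(r, r))

def calibrate_depth_image(depth_image, scale):
    """Transform relative depth values in the depth image into absolute
    depth values using the fitted scale factor."""
    return np.asarray(depth_image, dtype=float) * scale
```

With the scale factor fitted at the first height (where LiDAR ground truth exists), the same factor can then be applied to depth-image values at the second height, where no LiDAR data is available.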

Description

TECHNICAL FIELD

The disclosure relates to a method of generating a spatial map, and more particularly, to a method of increasing the accuracy of a spatial map by using a captured image of a target area.

BACKGROUND ART

Electronic devices such as robot cleaners can scan a space by using a built-in light detection and ranging (LiDAR) sensor while moving through the space, and generate a map of the space (spatial map) based on the scan data. However, when the spatial map is generated using only the data obtained by scanning the space with the LiDAR sensor (LiDAR scan data), the generated spatial map may differ from the structure of the actual space. Various objects (e.g., furniture and home appliances) may exist in the space, and an electronic device cannot determine whether a point at which a signal (light) transmitted by the LiDAR sensor is reflected is a wall of the space or an object existing in the space. An electronic device may therefore generate a spatial map in which a point where an object exists is recognized as a wall, so that the border of the spatial map differs from that of the actual space.

DISCLOSURE OF INVENTION

SOLUTION TO PROBLEM

Accordingly, provided is an electronic device configured to generate a more accurate spatial map of a surrounding environment based on LiDAR scan data and an image captured by a camera. Further provided is a method of generating such a spatial map. Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
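Claims 6 to 8 give three heuristics for choosing the second height, i.e., a height at which the detected object does not occlude the walls. The maximum-area criterion of claim 8 can be sketched as follows; the grid representation, cell size, and function names are assumptions for illustration, not details from the patent:

```python
def cross_section_area(free_mask, cell_area):
    """Area of the free-space cross-section at one candidate height.

    free_mask: 2D list of booleans, True where the horizontal slice of
               the map at this height is unoccupied.
    cell_area: area of one grid cell in square metres.
    """
    return sum(row.count(True) for row in free_mask) * cell_area

def select_second_height(masks_by_height, cell_area=0.01):
    """Pick the candidate height whose cross-section encloses the
    largest area: the slice least occluded by objects is the one whose
    border lies closest to the actual walls."""
    return max(
        masks_by_height,
        key=lambda h: cross_section_area(masks_by_height[h], cell_area),
    )
```

The minimum-junctions criterion of claim 7 could be implemented analogously, replacing the area score with a count of corner points along each candidate border and taking the minimum instead of the maximum.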
According to an aspect of the disclosure, a method of generating a spatial map may include obtaining a captured image of a target area in a space, obtaining light detection and ranging (LiDAR) scan data by scanning a depth of the target area with respect to a first height, performing object detection on the captured image, and, according to the performed object detection, generating a spatial map of the target area based on the LiDAR scan data and the captured image.

According to an aspect of the disclosure, an electronic device for generating a spatial map may include at least one memory storing a program for generating a spatial map, and at least one processor, wherein, by executing the program stored in the at least one memory, the at least one processor is configured to: obtain a captured image of a target area in a space, obtain LiDAR scan data by scanning a depth of the target area with respect to a first height, perform object detection on the captured image, and, according to the performed object detection, generate a spatial map of the target area based on the LiDAR scan data and the captured image.

According to an aspect of the disclosure, a non-transitory computer-readable recording medium may have recorded thereon a computer program which, when executed by a computer, performs at least one of the embodiments of the above-described method. According to an embodiment of the disclosure, a computer program is stored in a non-transitory medium to execute at least one of the embodiments of the above-described method.

BRIEF DESCRIPTION OF DRAWINGS

The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a diagram of a system for generating a spatial map, according to an embodiment of the disclosure;
FIG. 2 is a diagram comparing a spatial map generated according to an embodiment of the disclosure with a spatial map generated using only light detection and ranging (LiDAR) scan data in the related art;
FIG. 3 is a block diagram for explaining the structure of a server included in a system for generating a spatial map, according to an embodiment of the disclosure;
FIG. 4 is a block diagram for explaining the structure of a robot cleaner included in a system for generating a spatial map, according to an embodiment of the disclosure;
FIG. 5 is a view for describing objectives and major features of a spatial map generating method according to an embodiment of the disclosure;
FIG. 6 is a flowchart of a method of generating a spatial map, according to an embodiment of the disclosure;
FIG. 7 is a flowchart for explaining sub-operations included in operation 605 of FIG. 6 according to an embodiment of the disclosure;
FIG. 8 is a flowchart for explaining sub-operations included in operation 701 of FIG. 7 according to an embodiment of the disclosure;
FIG. 9 is a flowchart for explaining sub-operations included in operation 802 of FIG. 8 according to an embodiment of the disclosure;
FIG. 10 is a flowchart for explaining sub-operations included in operation 803 of FIG. 8 according to an embodiment of the