US-12620187-B2 - Systems, methods, and user interfaces for generating a three-dimensional virtual representation of an object

US12620187B2

Abstract

Generating a three-dimensional virtual representation of a three-dimensional physical object can be based on capturing or receiving a capture bundle or a set of images. In some examples, generating the virtual representation of the physical object can be facilitated by user interfaces for identifying a physical object and capturing a set of images of the physical object. Generating the virtual representation can include previewing or modifying a set of images. In some examples, generating the virtual representation of the physical object can include generating a first representation of the physical object (e.g., a point cloud) and/or generating a second three-dimensional virtual representation of the physical object (e.g., a mesh reconstruction). In some examples, a visual indication of the progress of the image capture process and/or the generation of the virtual representation of the three-dimensional object can be displayed, such as in a capture user interface.

Inventors

  • Zachary Z. Becker
  • Michelle Chua
  • Thorsten Gernoth
  • Michael P. Johnson
  • Allison W. Dryer

Assignees

  • APPLE INC.

Dates

Publication Date
2026-05-05
Application Date
2023-05-15

Claims (20)

  1. A method, comprising: at an electronic device in communication with a display: while presenting a view of a physical environment, displaying, using the display, a two-dimensional virtual reticle overlaid with the view of the physical environment, the virtual reticle having an area and displayed in a plane of the display; and in accordance with a determination that one or more criteria are satisfied, wherein the one or more criteria include a criterion that is satisfied when the area of the virtual reticle overlays, on the display, at least a portion of a physical object that is within a threshold distance of a center of the virtual reticle: displaying, using the display, an animation that transforms the virtual reticle into a virtual three-dimensional shape around the at least the portion of the physical object, wherein an outline of the virtual three-dimensional shape is automatically resized to enclose the physical object as the electronic device is moved around the physical object based on detecting that portions of the physical object are not enclosed by the virtual three-dimensional shape or that there is more than a threshold distance between an edge of the physical object and a surface of the virtual three-dimensional shape.
  2. The method of claim 1, further comprising: in accordance with a determination that the one or more criteria are not satisfied: providing feedback to a user of the electronic device.
  3. The method of claim 2, wherein the feedback includes a haptic alert, a visual alert, an audible alert, or a combination of these.
  4. The method of claim 1, wherein the virtual reticle includes one or more visual indications of the area of the virtual reticle.
  5. The method of claim 4, wherein the visual indications of the area of the virtual reticle are visual indications of vertices of a virtual two-dimensional shape corresponding to the area of the virtual reticle.
  6. The method of claim 4, wherein the visual indications of the area of the virtual reticle are visual indications of an outline of a virtual two-dimensional shape corresponding to the area of the virtual reticle.
  7. The method of claim 1, wherein the one or more criteria include a criterion that is satisfied when at least a portion of the physical object is overlaid by the center of the virtual reticle.
  8. The method of claim 1, wherein the one or more criteria include a criterion that is satisfied when the physical object is within a threshold distance of the center of the virtual reticle and is entirely within the area of the virtual reticle on the display.
  9. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: while presenting a view of a physical environment, display, using the display, a two-dimensional virtual reticle overlaid with the view of the physical environment, the virtual reticle having an area and displayed in a plane of the display; and in accordance with a determination that one or more criteria are satisfied, wherein the one or more criteria include a criterion that is satisfied when the area of the virtual reticle overlays, on the display, at least a portion of a physical object that is within a threshold distance of a center of the virtual reticle: display, using the display, an animation that transforms the virtual reticle into a virtual three-dimensional shape around the at least the portion of the physical object, wherein an outline of the virtual three-dimensional shape is automatically resized to enclose the physical object as the electronic device is moved around the physical object based on detecting that portions of the physical object are not enclosed by the virtual three-dimensional shape or that there is more than a threshold distance between an edge of the physical object and a surface of the virtual three-dimensional shape.
  10. The non-transitory computer readable storage medium of claim 9, wherein the view of the physical environment is captured by a camera of the electronic device and displayed on the display of the electronic device.
  11. The non-transitory computer readable storage medium of claim 9, wherein the two-dimensional virtual reticle is screen-locked, and wherein the instructions, when executed by the one or more processors of the electronic device, further cause the electronic device to: display a screen-locked targeting affordance in the center of the two-dimensional virtual reticle.
  12. The non-transitory computer readable storage medium of claim 9, wherein displaying the animation includes: visually rotating an outline of a virtual two-dimensional shape corresponding to the area of the virtual reticle such that the outline appears to overlay the plane of a physical surface with which a bottom portion of the physical object is in contact and encloses the bottom portion of the physical object; and adding height to the outline of the virtual two-dimensional shape to transition to displaying an outline of the virtual three-dimensional shape around the at least the portion of the physical object, wherein a height of the virtual three-dimensional shape is based on a height of the physical object.
  13. The non-transitory computer readable storage medium of claim 12, wherein displaying the animation includes: before visually rotating the outline of the virtual two-dimensional shape, displaying an animation visually connecting visual indications of the area of the two-dimensional virtual reticle to form the outline of the virtual two-dimensional shape.
  14. The non-transitory computer readable storage medium of claim 12, wherein visually rotating the outline of the virtual two-dimensional shape includes resizing the outline of the virtual two-dimensional shape based on an area of a bottom portion of the physical object.
  15. An electronic device, comprising: a display; memory; and one or more processors configured to: while presenting a view of a physical environment, display, using the display, a two-dimensional virtual reticle overlaid with the view of the physical environment, the virtual reticle having an area and displayed in a plane of the display; and in accordance with a determination that one or more criteria are satisfied, wherein the one or more criteria include a criterion that is satisfied when the area of the virtual reticle overlays, on the display, at least a portion of a physical object that is within a threshold distance of a center of the virtual reticle: display, using the display, an animation that transforms the virtual reticle into a virtual three-dimensional shape around the at least the portion of the physical object, wherein an outline of the virtual three-dimensional shape is automatically resized to enclose the physical object as the electronic device is moved around the physical object based on detecting that portions of the physical object are not enclosed by the virtual three-dimensional shape or that there is more than a threshold distance between an edge of the physical object and a surface of the virtual three-dimensional shape.
  16. The electronic device of claim 15, wherein the virtual three-dimensional shape is a cuboid.
  17. The electronic device of claim 15, wherein one or more surfaces of the virtual three-dimensional shape are transparent such that the physical object is visible through the one or more surfaces of the virtual three-dimensional shape.
  18. The electronic device of claim 15, wherein the one or more processors are further configured to: display one or more virtual handle affordances on a top portion of the virtual three-dimensional shape; detect an input corresponding to a request to move a first virtual handle affordance of the one or more virtual handle affordances; and in response to detecting the input, resize a height, width, depth, or a combination of these of the virtual three-dimensional shape in accordance with the input.
  19. The electronic device of claim 18, wherein the one or more processors are further configured to: detect that user attention is directed to the first virtual handle affordance; and in response to detecting that the user attention is directed to the first virtual handle affordance, enlarge the first virtual handle affordance.
  20. The electronic device of claim 15, wherein the one or more processors are further configured to: increase a visual prominence of a second virtual handle affordance on a bottom surface of the virtual three-dimensional shape in accordance with detecting that a field of view of the electronic device is moving closer to an elevation of the bottom surface of the three-dimensional shape.
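The criterion recited in claim 1 — the reticle area overlays at least a portion of a physical object that lies within a threshold distance of the reticle's center — can be sketched as a screen-space test. The sketch below is illustrative only; the claims do not prescribe any particular implementation, and all type and function names (`Rect`, `reticle_criterion_satisfied`) are assumptions for this example:

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned screen-space rectangle in pixels (origin at top-left)."""
    x: float
    y: float
    width: float
    height: float

    @property
    def center(self) -> tuple:
        return (self.x + self.width / 2.0, self.y + self.height / 2.0)

    def overlaps(self, other: "Rect") -> bool:
        # True when the two rectangles share any area on the display.
        return (self.x < other.x + other.width and other.x < self.x + self.width
                and self.y < other.y + other.height and other.y < self.y + self.height)

    def distance_to_point(self, px: float, py: float) -> float:
        # Distance from a point to the nearest point of this rectangle
        # (zero when the point lies inside it).
        dx = max(self.x - px, 0.0, px - (self.x + self.width))
        dy = max(self.y - py, 0.0, py - (self.y + self.height))
        return (dx * dx + dy * dy) ** 0.5


def reticle_criterion_satisfied(reticle: Rect, object_bounds: Rect,
                                threshold: float) -> bool:
    """Claim-1-style check: the reticle area overlays at least a portion of
    the detected object, and that portion is within `threshold` pixels of
    the reticle's center."""
    if not reticle.overlaps(object_bounds):
        return False
    cx, cy = reticle.center
    return object_bounds.distance_to_point(cx, cy) <= threshold
```

When the check fails, the flow described in claims 2-3 would instead provide haptic, visual, and/or audible feedback to the user; when it succeeds, the reticle-to-bounding-volume animation of claim 1 would begin.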

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/364,878, filed May 17, 2022, the content of which is incorporated herein by reference in its entirety for all purposes.

FIELD OF THE DISCLOSURE

This relates generally to systems, methods, and user interfaces for capturing and/or receiving images of a physical object and generating a three-dimensional virtual representation of the physical object based on the images.

SUMMARY OF THE DISCLOSURE

This relates generally to systems, methods, and user interfaces for capturing and/or receiving images of a physical object and generating a three-dimensional virtual representation of the physical object based on the images. In some examples, generating a three-dimensional representation of a three-dimensional object can be based on capturing a set of images of the physical object (e.g., using user interfaces for identifying a target physical object and capturing images of the object) and/or on receiving a capture bundle or a set of images of the physical object (e.g., using a user interface for importing a capture bundle or a set of images). In some embodiments, generating the virtual representation of the physical object includes generating one or more point cloud representations of the physical object and/or one or more mesh representations of the object.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example system that can generate a three-dimensional representation of a three-dimensional object according to examples of the disclosure.
FIG. 2 illustrates an example user interface for importing images to generate a three-dimensional representation of a three-dimensional object according to examples of the disclosure.
FIG. 3 illustrates an example preview of images according to examples of the disclosure.
FIG. 4 illustrates an example first point representation of the three-dimensional object according to examples of the disclosure.
FIG. 5 illustrates an example second point representation of the three-dimensional object according to examples of the disclosure.
FIG. 6 illustrates an example third point representation of the three-dimensional object according to examples of the disclosure.
FIG. 7 illustrates an example second representation of the three-dimensional object according to examples of the disclosure.
FIGS. 8-9 illustrate example flowcharts of generating a three-dimensional representation of an object from images or object captures according to examples of the disclosure.
FIGS. 10-29 illustrate example user interfaces for generating a three-dimensional virtual representation of a physical object according to examples of the disclosure.
FIGS. 30-31 illustrate example flowcharts of generating a three-dimensional virtual representation of a physical object according to examples of the disclosure.

DETAILED DESCRIPTION

In the following description of examples, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the disclosed examples.

This relates generally to systems, methods, and user interfaces for generating a three-dimensional virtual representation of a three-dimensional physical object. In some examples, generating the virtual representation of the physical object can be based on capturing a set of images (e.g., using user interfaces for identifying a target physical object and capturing images of the object), receiving a capture bundle, and/or receiving a set of images (e.g., using a user interface for importing a capture bundle or a set of images). In some examples, generating the three-dimensional representation of the three-dimensional object can include previewing and/or modifying a set of images (e.g., using a preview user interface).
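The two-stage generation the summary describes — a set of images reduced to a point cloud (the "first representation"), then reconstructed into a mesh (the "second representation"), with progress reported for each stage — can be sketched as follows. This is a minimal placeholder, not the disclosed implementation: the function name, the progress-callback shape, and the per-image placeholder points are all assumptions for illustration.

```python
from typing import Callable, Iterable


def generate_virtual_representation(
    images: Iterable[bytes],
    on_progress: Callable[[str, float], None],
) -> dict:
    """Illustrative two-stage pipeline: derive a point cloud from captured
    images, then reconstruct a mesh, reporting per-stage progress so a user
    interface (e.g., a progress bar) can track each phase separately."""
    images = list(images)
    # Stage 1: build the first representation (a point cloud). Here each
    # image contributes one placeholder point; a real pipeline would run
    # feature extraction and triangulation across the image set.
    point_cloud = []
    for i, _image in enumerate(images):
        point_cloud.append((float(i), 0.0, 0.0))
        on_progress("point_cloud", (i + 1) / len(images))
    # Stage 2: build the second representation (a mesh reconstruction)
    # from the point cloud (placeholder structure).
    mesh = {"vertices": point_cloud, "faces": []}
    on_progress("mesh", 1.0)
    return mesh
```

A capture user interface could subscribe to `on_progress` to drive the per-stage progress indications the detailed description goes on to discuss.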
In some examples, generating the three-dimensional representation of the three-dimensional object can include generating a first representation of the three-dimensional object (e.g., a point cloud). In some examples, generating the three-dimensional representation of the three-dimensional object can include generating a second three-dimensional representation of the three-dimensional object (e.g., a three-dimensional mesh reconstruction of the three-dimensional object). In some examples, generating the first representation of the three-dimensional object and generating the second representation of the three-dimensional object can include display of progress using progress bars and/or using an indication of progress associated with a plurality of points derived from the images and/or using the point cloud. For example, in some examples, while displaying the first representation of a three-dimensional object, a first visual indication of progress of the generation of the second representation of the three-dimensional o