EP-3980935-B1 - GENERATING AND RENDERING MOTION GRAPHICS EFFECTS BASED ON RECOGNIZED CONTENT IN CAMERA VIEW FINDER

EP3980935B1

Inventors

  • FAABORG, ALEXANDER JAMES
  • MENG, KANKAN
  • KORNGOLD, Joost

Dates

Publication Date
2026-05-06
Application Date
2020-09-17

Claims (15)

  1. A computer-implemented method (500) comprising: receiving (502), by an electronic device (104; 200; 300), a visual scene (100A; 201) within a viewing window depicting a multi-frame real-time visual scene captured by a camera (202) onboard the electronic device (104; 200; 300); identifying (504), by the electronic device (104; 200; 300), a plurality of elements (106, 108, 110, 308) of the visual scene (100A; 201); detecting (506), by the electronic device (104; 200; 300) and based on the plurality of elements (106, 108, 110, 308) identified in the visual scene (100A; 201), at least one graphic indicator (114, 116; 212; 310; 402) associated with at least one of the plurality of elements (106, 108, 110, 308), wherein the at least one graphic indicator (114, 116; 212; 310; 402) is located on a portion of the at least one of the plurality of elements (106, 108, 110, 308); detecting (508), by the electronic device (104; 200; 300), at least one boundary (122; 312; 404) associated with the at least one element (106, 108, 110, 308), wherein the boundary is detected as an edge of the at least one element (106, 108, 110, 308); generating (510), in the viewing window and based on the detection of the at least one graphic indicator (114, 116; 212; 310; 402), Augmented Reality, AR, motion graphics (124; 214; 314; 406, 408) within the detected boundary (122; 312; 404), wherein the detected edge is configured to contain the AR motion graphics (124; 214; 314; 406, 408) to a portion of the visual scene; and in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is available, retrieving (512) the content (216; 316; 410) and visually indicating an AR tracked control (318; 412) on the at least one element (106, 108, 110, 308) within the viewing window.
  2. The method of claim 1, further comprising: in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is unavailable, dissipating the AR motion graphics (124; 214; 314; 406, 408).
  3. The method of claim 1 or 2, wherein the AR motion graphics (124; 214; 314; 406, 408) include animated effects initiated at a location of the at least one graphic indicator (114, 116; 212; 310; 402) and expanded to the boundary (122; 312; 404), the animated effects including moving elements (106, 108, 110, 308) presented within the detected boundary (122; 312; 404).
  4. The method of any of claims 1 to 3, wherein: the AR tracked control (318; 412) is a play button configured to initiate, in the viewing window, an immersive AR experience with the content (216; 316; 410); and receiving, from a user accessing the electronic device (104; 200; 300), input at the play button triggers execution of the immersive AR experience.
  5. The method of any of claims 1 to 4, wherein: the graphic indicator (114, 116; 212; 310; 402) is a logo; and the AR motion graphics (124; 214; 314; 406, 408) include a plurality of animated and non-overlapping shapes presented within the detected boundary (122; 312; 404).
  6. The method of any of claims 1 to 5, wherein the at least one element (106, 108, 110, 308) is a virtual object; and the detected boundary (122; 312; 404) defines a surface of the virtual object; or the detected boundary (122; 312; 404) defines a volume of the virtual object.
  7. The method of any of claims 1 to 6, wherein the retrieved content (216; 316; 410) is based on a geographic location of the electronic device (104; 200; 300).
  8. A computer program product tangibly embodied on a non-transitory computer-readable storage medium and comprising instructions that, when executed by at least one processor of an electronic device, are configured to cause the at least one processor to: receive, by the electronic device (104; 200; 300), a visual scene (100A; 201) within a viewing window depicting a multi-frame real-time visual scene captured by a camera (202) onboard the electronic device (104; 200; 300); identify, by the electronic device (104; 200; 300), a plurality of elements (106, 108, 110, 308) of the visual scene (100A; 201); detect, by the electronic device (104; 200; 300) and based on the plurality of elements (106, 108, 110, 308) identified in the visual scene (100A; 201), at least one graphic indicator (114, 116; 212; 310; 402) associated with at least one of the plurality of elements (106, 108, 110, 308), wherein the at least one graphic indicator (114, 116; 212; 310; 402) is located on a portion of the at least one of the plurality of elements (106, 108, 110, 308); detect, by the electronic device (104; 200; 300), at least one boundary (122; 312; 404) associated with the at least one element (106, 108, 110, 308), wherein the boundary is detected as an edge of the at least one element (106, 108, 110, 308); generate, in the viewing window and based on the detection of the at least one graphic indicator (114, 116; 212; 310; 402), Augmented Reality, AR, motion graphics (124; 214; 314; 406, 408) within the detected boundary (122; 312; 404), wherein the detected edge is configured to contain the AR motion graphics (124; 214; 314; 406, 408) to a portion of the visual scene; and in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is available, retrieve the content (216; 316; 410) and visually indicate an AR tracked control (318; 412) on the at least one element (106, 108, 110, 308) within the viewing window.
  9. The computer program product of claim 8, further comprising: in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is unavailable, dissipating the AR motion graphics (124; 214; 314; 406, 408).
  10. The computer program product of claim 8 or 9, wherein the AR motion graphics (124; 214; 314; 406, 408) include animated effects initiated at a location of the at least one graphic indicator (114, 116; 212; 310; 402) and expanded to the boundary (122; 312; 404), the animated effects including moving elements (106, 108, 110, 308) presented within the detected boundary (122; 312; 404).
  11. The computer program product of any of claims 8 to 10, wherein: the graphic indicator (114, 116; 212; 310; 402) is a logo; and the AR motion graphics (124; 214; 314; 406, 408) include a plurality of animated and non-overlapping shapes presented within the detected boundary (122; 312; 404).
  12. A system comprising: at least one processor (232; 602, 652); memory (234; 604, 664) storing instructions that, when executed by the at least one processor (232; 602, 652), cause the system to perform operations including: receiving a visual scene (100A; 201) within a viewing window depicting a multi-frame real-time visual scene captured by a camera (202) associated with the at least one processor; identifying a plurality of elements (106, 108, 110, 308) of the visual scene (100A; 201); detecting, based on the plurality of elements (106, 108, 110, 308) identified in the visual scene (100A; 201), at least one graphic indicator (114, 116; 212; 310; 402) associated with at least one of the plurality of elements (106, 108, 110, 308), wherein the at least one graphic indicator (114, 116; 212; 310; 402) is located on a portion of the at least one of the plurality of elements (106, 108, 110, 308); detecting at least one boundary (122; 312; 404) associated with the at least one element (106, 108, 110, 308), wherein the boundary is detected as an edge of the at least one element (106, 108, 110, 308); generating, in the viewing window and based on the detection of the at least one graphic indicator (114, 116; 212; 310; 402), Augmented Reality, AR, motion graphics (124; 214; 314; 406, 408) within the detected boundary (122; 312; 404), wherein the detected edge is configured to contain the AR motion graphics (124; 214; 314; 406, 408) to a portion of the visual scene; and in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is available, retrieving the content (216; 316; 410) and visually indicating an AR tracked control (318; 412) on the at least one element (106, 108, 110, 308) within the viewing window.
  13. The system of claim 12, further comprising: in response to determining that content (216; 316; 410) related to the at least one element (106, 108, 110, 308) is unavailable, dissipating the AR motion graphics (124; 214; 314; 406, 408).
  14. The system of claim 12 or 13, wherein the AR motion graphics (124; 214; 314; 406, 408) include animated effects initiated at a location of the at least one graphic indicator (114, 116; 212; 310; 402) and expanded to the boundary (122; 312; 404), the animated effects including moving elements (106, 108, 110, 308) presented within the detected boundary (122; 312; 404).
  15. The system of any of claims 12 to 14, wherein: the graphic indicator (114, 116; 212; 310; 402) is a logo; and the AR motion graphics (124; 214; 314; 406, 408) include a plurality of animated and non-overlapping shapes presented within the detected boundary (122; 312; 404); or wherein the at least one element (106, 108, 110, 308) is a virtual object; and the detected boundary (122; 312; 404) defines a volume of the virtual object.
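The pipeline recited in claims 1, 8, and 12 (identify elements, detect a graphic indicator and a boundary, generate AR motion graphics contained by that boundary, then either surface an AR tracked control or dissipate the graphics) can be illustrated with a minimal sketch. The types and names below (`Element`, `Viewfinder`, `process_scene`, the `"play_button"` control) are hypothetical illustrations inferred from the claim language, not an actual implementation.

```python
# Illustrative sketch only: hypothetical types and logic inferred from the
# claims of EP 3980935 B1; not the actual implementation.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Element:
    name: str
    indicator: Optional[str]   # recognized graphic indicator (e.g. a logo), or None
    boundary: tuple            # detected edge of the element as (x, y, w, h)

@dataclass
class Viewfinder:
    motion_graphics: dict = field(default_factory=dict)
    controls: dict = field(default_factory=dict)

def process_scene(elements, content_lookup, view):
    """Run the claimed pipeline over elements identified in a camera frame."""
    for el in elements:
        if el.indicator is None:
            continue  # no graphic indicator detected on this element
        # Generate AR motion graphics contained within the detected boundary.
        view.motion_graphics[el.name] = el.boundary
        content = content_lookup.get(el.name)
        if content is not None:
            # Related content available: retrieve it and visually indicate
            # an AR tracked control on the element.
            view.controls[el.name] = "play_button"
        else:
            # Related content unavailable: dissipate the motion graphics
            # (claims 2, 9, 13).
            del view.motion_graphics[el.name]
    return view

# Example: one element carries a recognized logo with related content,
# one carries no indicator at all.
view = process_scene(
    [Element("cereal_box", "brand_logo", (10, 10, 80, 120)),
     Element("mug", None, (0, 0, 40, 40))],
    {"cereal_box": "ar_experience"},
    Viewfinder())
```

Note how the boundary is carried along with the motion graphics: in the claims the detected edge is what confines the animated effects to a portion of the scene.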

Description

TECHNICAL FIELD

This document generally relates to approaches for generating motion graphics of elements included in a visual scene of a camera view finder based on recognizing content in the view finder.

BACKGROUND

Electronic devices, such as smartphones and tablets, continue to evolve and provide consumers with new and/or improved functional capabilities. For instance, such devices can capture a visual scene using a camera included in the device. Such devices, using artificial intelligence, computer vision, and/or machine learning, can identify content within a given view and provide (e.g., obtain) information on the identified content. Possibilities exist, however, for additional approaches for providing information relevant to a user for content within a given visual scene.

EP 2 560 145 A2 describes methods and systems for enabling creation of augmented reality content on a user device including a digital imaging part, a display, a user input part, and an augmented reality client, wherein said augmented reality client is configured to provide an augmented reality view on the display of the user device using a live image data stream from the digital imaging part. User input is received from the user input part to augment a target object that is at least partially seen on the display while in the augmented reality view. A graphical user interface is rendered to the display part of the user device, said graphical user interface enabling a user to author augmented reality content for the two-dimensional image.

SUMMARY

The invention is set forth in the independent claims. Specific embodiments are presented in the dependent claims. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions.
One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. Implementations may include any or all of the following aspects. In some implementations, the method also includes dissipating the AR motion graphics in response to determining that content related to the at least one element is unavailable. In some implementations, the AR motion graphics include animated effects initiated at a location of the at least one graphic indicator and expanded to the boundary, the animated effects including moving elements presented within the detected boundary. In some implementations, the AR tracked control is a play button configured to initiate, in the viewing window, an immersive AR experience with the content, and the method may be further configured such that receiving, from a user accessing the electronic device, input at the play button triggers execution of the immersive AR experience. The detected boundary defines an edge of the at least one element, the defined edge configured to contain the AR motion graphics to a portion of the visual scene. In some implementations, the graphic indicator is a logo and the AR motion graphics include a plurality of animated and non-overlapping shapes presented within the detected boundary. In some implementations, the at least one element is a virtual object and the detected boundary defines a surface of the virtual object. In some implementations, the at least one element is a virtual object and the detected boundary defines a volume of the virtual object. In some implementations, the retrieved content is based on a geographic location of the electronic device. Implementations of the described techniques may include hardware, a method or process, and/or computer software on a computer-accessible medium.
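The "initiated at a location of the graphic indicator and expanded to the boundary" animation (claims 3, 10, and 14) can be sketched as a radius that grows from the indicator point until it covers the detected boundary. The linear easing, frame count, and rectangular boundary here are assumptions chosen for illustration, not details taken from the document.

```python
# Hypothetical sketch of the expand-from-indicator animation described in
# claims 3, 10, and 14. Linear easing and a rectangular boundary are assumed.
def expansion_radii(indicator_xy, boundary_rect, frames=5):
    """Grow an effect radius from the indicator location until it reaches
    the farthest corner of the detected boundary, one value per frame."""
    ix, iy = indicator_xy
    x, y, w, h = boundary_rect
    # Distance from the indicator to the farthest boundary corner: once the
    # radius reaches it, the effect fills (and stays contained by) the edge.
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    max_r = max(((cx - ix) ** 2 + (cy - iy) ** 2) ** 0.5 for cx, cy in corners)
    # Linear interpolation from 0 (exclusive) to max_r over the frame count.
    return [max_r * (i + 1) / frames for i in range(frames)]

# Indicator at the boundary's top-left corner of a 3x4 rectangle:
# the farthest corner is 5 units away (3-4-5 triangle).
radii = expansion_radii((0, 0), (0, 0, 3, 4), frames=4)
# → [1.25, 2.5, 3.75, 5.0]
```

A renderer would clip each frame's effect against the detected edge, matching the claim language that the edge contains the AR motion graphics to a portion of the visual scene.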
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

FIGS. 1A-1B depict visual scenes that can be analyzed and modified using the approaches described herein.
FIG. 2 is a block diagram illustrating a system configured to employ the approaches described herein, according to an example implementation.
FIGS. 3A, 3B, 3C, 3D, and 3E are diagrams schematically illustrating a user interface (UI) of an electronic device, according to an example implementation.
FIGS. 4A, 4B, 4C, and 4D are diagrams illustrating a sequence of generating Augmented Reality (AR) motion graphics effects for a visual scene, according to an example implementation.
FIG. 5 is a flow diagram of an example process of implementing a user experience (UX) with animated visual scenes triggered by camera capture of recognized logos, in accordance with implementations described herein.
FIG. 6 is an example of a computing device and a mobile computing device that can be used to implement the techniques desc