
US-12626468-B2 - Hyper-personalized augmented objects


Abstract

A system for presenting hyper-personalized content over objects is provided. The system determines context features based on digital media, sensor data, and a user profile of a user of a mobile device. The system computes a context vector representing a context associated with the digital media and the user based on a correlation of the context features and compares the context vector with a plurality of context vectors of a plurality of multimedia content. The system selects, from the plurality of multimedia content, a multimedia content based on the comparison of the context vector with the plurality of context vectors. The system renders, on a display of the mobile device, an augmented reality presentation in which the selected multimedia content is superimposed on a target object displayed in the digital media. The augmented reality presentation is hyper-personalized to map to the context associated with the user.
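For illustration, the sketch below shows one way the matching step described above could work: encode a handful of context features into a vector and select the stored multimedia content whose context vector is most similar. This is a minimal sketch, not the patented implementation; the feature names, the numeric/one-hot encoding, and the use of cosine similarity are all assumptions, since the patent does not specify an encoding or a comparison metric.

```python
import numpy as np

def encode_context(features: dict) -> np.ndarray:
    """Encode example context features (assumed names) into a vector.

    The patent does not define an encoding; this numeric/one-hot
    scheme is purely illustrative.
    """
    seasons = ["winter", "spring", "summer", "autumn"]
    season_vec = [1.0 if features["season"] == s else 0.0 for s in seasons]
    return np.array([
        features["user_age"] / 100.0,     # demographic feature, scaled
        features["latitude"] / 90.0,      # geo-location of the mobile device
        features["longitude"] / 180.0,
        features["hour_of_day"] / 24.0,   # current-timestamp feature
        *season_vec,                      # current season, one-hot encoded
    ])

def select_content(context_vec: np.ndarray, content_vectors: list) -> int:
    """Return the index of the stored context vector most similar to the
    computed one, using cosine similarity (an assumed metric)."""
    sims = [
        float(np.dot(context_vec, v)
              / (np.linalg.norm(context_vec) * np.linalg.norm(v) + 1e-9))
        for v in content_vectors
    ]
    return int(np.argmax(sims))

# Example: three stored multimedia items, each with a precomputed context vector.
stored = [encode_context({"user_age": a, "latitude": 19.1, "longitude": 72.9,
                          "hour_of_day": h, "season": s})
          for a, h, s in [(25, 9, "summer"), (60, 20, "winter"), (30, 15, "spring")]]

query = encode_context({"user_age": 28, "latitude": 19.1, "longitude": 72.9,
                        "hour_of_day": 10, "season": "summer"})
print("selected multimedia index:", select_content(query, stored))
```

In practice the context vector would fold in many more signals (user profile, sensor data, current affairs), but the select-by-nearest-vector shape of the step stays the same.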

Inventors

  • Rajat Gupta
  • Shourya Agarwal
  • Malhar Patil
  • Prakharkumar Srivastava
  • Kalpit Singh Kushwaha

Assignees

  • FLYING FLAMINGOS INDIA PVT. LTD.

Dates

Publication Date
May 12, 2026
Application Date
Nov. 19, 2022
Priority Date
Nov. 19, 2021

Claims (20)

  1. A system, comprising: a memory configured to store a database, wherein the database includes: a mapping between a plurality of multimedia content and a first target object, and a plurality of context vectors of the plurality of multimedia content, wherein each of the plurality of context vectors corresponds to a numerical representation of a context of a corresponding multimedia content of the plurality of multimedia content; a transceiver configured to: receive a digital media being captured by one or more imaging devices associated with a mobile device, wherein the digital media displays the first target object; and receive sensor data generated by one or more sensors of the mobile device, wherein the sensor data is generated by the one or more sensors during the capture of the digital media; and a processor configured to: determine one or more context features based on one or more of the digital media, the sensor data, and a user profile of a user of the mobile device; compute a context vector representing a context associated with the digital media and the user based on a correlation of the determined one or more context features; compare the computed context vector with the plurality of context vectors; select, from the plurality of multimedia content, a multimedia content based on the comparison of the computed context vector with the plurality of context vectors; and render, on a display of the mobile device, an augmented reality presentation in which the selected multimedia content is superimposed on the first target object displayed in the digital media, wherein the augmented reality presentation is hyper-personalized to map to the context associated with the user.
  2. The system as claimed in claim 1, wherein the processor is configured to: process the digital media to identify the first target object in the digital media; and search the database to retrieve the plurality of multimedia content mapped to the first target object based on the identification of the first target object in the digital media.
  3. The system as claimed in claim 2, wherein the identification of the first target object includes identification of one or more of a shape, a size, a surface curvature, and a three-dimensional model of the first target object.
  4. The system as claimed in claim 3, wherein the processor is further configured to transform the selected multimedia content in accordance with at least one of the shape, the size, and the surface curvature of the first target object, wherein the selected multimedia content is transformed prior to the superimposition on the first target object, and wherein the multimedia content superimposed on the first target object is the transformed multimedia content.
  5. The system as claimed in claim 1, wherein the processor is configured to: identify a second target object in the digital media; select another multimedia content from the database based on the identification of the second target object in the digital media; and update, on the display of the mobile device, the augmented reality presentation to superimpose the selected other multimedia content on the second target object.
  6. The system as claimed in claim 5, wherein the other multimedia content is superimposed on the second target object concurrently with the multimedia content being superimposed on the first target object.
  7. The system as claimed in claim 1, wherein the processor is configured to: detect a trigger action based on one or more of the digital media, the superimposed multimedia content, and the sensor data; select another multimedia content from the plurality of multimedia content based on the detected trigger action; and update, on the display of the mobile device, the augmented reality presentation to superimpose the selected other multimedia content concurrently with the multimedia content on the first target object.
  8. The system as claimed in claim 1, wherein the processor is configured to: detect a trigger action based on the digital media and the sensor data; and manipulate the multimedia content superimposed on the first target object in accordance with the trigger action to create a perception of the multimedia content being altered in response to the trigger action.
  9. The system as claimed in claim 8, wherein the trigger action is one of a gesture made by the user of the mobile device and an environmental input recorded in one of the digital media and the sensor data.
  10. The system as claimed in claim 8, wherein the processor manipulates the multimedia content superimposed on the first target object in accordance with the trigger action after a time delay.
  11. The system as claimed in claim 1, wherein the transceiver is configured to: receive, from the mobile device, a signal indicating that a first gesture is made by the user on a multimedia item being displayed during the augmented reality presentation; and receive, from the mobile device, another signal indicating that a second gesture is made by the user as a follow-up to the first gesture, wherein the other signal includes an image frame captured by the mobile device at the time the second gesture was made by the user.
  12. The system as claimed in claim 11, wherein the processor is configured to: map the multimedia item with the image frame captured by the mobile device at the time the second gesture was made by the user; and store the mapping between the multimedia item and the image frame in the database for subsequent augmented reality presentation.
  13. The system as claimed in claim 1, wherein the processor is configured to authenticate the user of the mobile device prior to rendering the augmented reality presentation based on authentication information received from the mobile device.
  14. The system as claimed in claim 13, wherein the authentication information includes one or more of a faceprint of the user, a fingerprint of the user, an iris scan of the user, a retina scan of the user, a voiceprint of the user, a facial expression code, a secret code, a secret phrase, a public key, and an account identifier-password pair.
  15. The system as claimed in claim 1, wherein the transceiver is configured to stream the selected multimedia content to the mobile device for rendering the augmented reality presentation on the mobile device.
  16. The system as claimed in claim 1, wherein the plurality of multimedia content includes two or more of a video content, an audio-video content, special effects, three-dimensional virtual objects, augmented reality filters, and emoticons.
  17. The system as claimed in claim 1, wherein the digital media is one of a live image, a live video, an image of another image, and a video of another video.
  18. The system as claimed in claim 1, wherein the digital media is one of a two-dimensional (2D) image, a three-dimensional (3D) image, a 2D video, and a 3D video.
  19. The system as claimed in claim 1, wherein the one or more context features include at least one of demographics of the user, a geo-location of the mobile device, a current season, a current timestamp, identity variables of the user, a travel history of the user, event information associated with the user, and current affairs information.
  20. A method, comprising the steps of: storing, by a memory of a system, a database including: a mapping between a plurality of multimedia content and a first target object, and a plurality of context vectors of the plurality of multimedia content, wherein each of the plurality of context vectors corresponds to a numerical representation of a context of a corresponding multimedia content of the plurality of multimedia content; receiving, by a transceiver of the system, a digital media being captured by one or more imaging devices associated with a mobile device, wherein the digital media displays the first target object; receiving, by the transceiver of the system, sensor data generated by one or more sensors of the mobile device, wherein the sensor data is generated by the one or more sensors during the capture of the digital media; determining, by a processor of the system, one or more context features based on one or more of the digital media, the sensor data, and a user profile of a user of the mobile device; computing, by the processor of the system, a context vector representing a context associated with the digital media and the user based on a correlation of the determined one or more context features; comparing, by the processor of the system, the computed context vector with the plurality of context vectors; selecting, by the processor of the system, from the plurality of multimedia content, a multimedia content based on the comparison of the computed context vector with the plurality of context vectors; and rendering, by the processor of the system, on a display of the mobile device, an augmented reality presentation in which the selected multimedia content is superimposed on the first target object displayed in the digital media, wherein the augmented reality presentation is hyper-personalized to map to the context associated with the user.
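Claims 3 and 4 describe identifying the target object's shape, size, and surface curvature and transforming the selected content to fit before superimposing it. The sketch below is a minimal illustration of the planar case using OpenCV's perspective warp, not the claimed implementation: the corner coordinates are assumed to come from an object-identification step that is not shown, and curved surfaces would need a 3D model and mesh-based warping rather than a single homography.

```python
import cv2
import numpy as np

def superimpose(frame: np.ndarray, content: np.ndarray,
                target_corners: np.ndarray) -> np.ndarray:
    """Warp `content` onto the quadrilateral `target_corners` (pixel
    coordinates, ordered TL, TR, BR, BL) detected in `frame`."""
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    # Perspective transform from the content image to the target region,
    # i.e. the "transformation prior to superimposition" of claim 4.
    H = cv2.getPerspectiveTransform(src, target_corners.astype(np.float32))
    warped = cv2.warpPerspective(content, H, (frame.shape[1], frame.shape[0]))

    # Mask out the target region in the frame, then add the warped content.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, target_corners.astype(np.int32), 255)
    frame_bg = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    return cv2.add(frame_bg, cv2.bitwise_and(warped, warped, mask=mask))

# Example with synthetic images; in practice `frame` is a live camera frame
# and the corners come from the target-object identification step.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
content = np.full((200, 300, 3), (0, 128, 255), dtype=np.uint8)
corners = np.array([[200, 120], [440, 140], [430, 360], [190, 340]])
composited = superimpose(frame, content, corners)
cv2.imwrite("augmented_frame.png", composited)
```

Applied per frame, the same warp keeps the selected multimedia content pinned to the target object as the mobile device moves, which is the effect the claimed augmented reality presentation relies on.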

Description

CROSS-REFERENCE TO RELATED APPLICATION

The instant application is a national phase of PCT International Application No. PCT/IN2022/051015 filed Nov. 19, 2022, and claims priority to Indian Patent Application Serial No. 202141053367 filed Nov. 19, 2021, the entire specifications of both of which are expressly incorporated herein by reference.

BACKGROUND

Field of the Disclosure

Various embodiments of the disclosure relate generally to computer vision. More particularly, various embodiments of the present disclosure relate to hyper-personalized augmented objects.

Description of the Related Art

People have long exchanged messages through digital media. A receiver is able to view or listen to the digital media; however, the digital media cannot be recycled to suit different contextual situations of the underlying objects. For example, objects such as greeting cards do not change their message based on the receiver or the event at which the receiver receives them. Additionally, the receiver cannot interact with or change the content being portrayed by the objects. In light of the above, there is a need for a technical solution that enhances the perceived utility of objects for communication and/or entertainment purposes.

SUMMARY

A system for hyper-personalized augmented objects is provided substantially as shown in, and described in connection with, at least one of the figures, as set forth more completely in the claims. These and other features and advantages of the present disclosure may be appreciated from a review of the following detailed description of the present disclosure, along with the accompanying figures in which like reference numerals refer to like parts throughout.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate the various embodiments of systems, methods, and other aspects of the disclosure. It will be apparent to a person skilled in the art that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. In some examples, one element may be designed as multiple elements, or multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Various embodiments of the present disclosure are illustrated by way of example, and not limited by the appended figures, in which like references indicate similar elements:

FIG. 1A is a block diagram that illustrates a system environment for presenting hyper-personalized augmented objects, in accordance with an exemplary embodiment of the present disclosure;

FIG. 1B is a block diagram that illustrates another system environment for presenting hyper-personalized augmented objects, in accordance with an exemplary embodiment of the present disclosure;

FIG. 2 is a diagram that illustrates an interactive GUI rendered on a display of a mobile device, in accordance with an exemplary embodiment of the present disclosure;

FIGS. 3A and 3B are diagrams that collectively illustrate exemplary scenarios for presenting hyper-personalized augmented objects on a mobile device, in accordance with an exemplary embodiment of the present disclosure;

FIGS. 4A and 4B are diagrams that collectively illustrate exemplary scenarios for presenting hyper-personalized augmented objects on a mobile device, in accordance with an exemplary embodiment of the present disclosure;

FIG. 5 is a block diagram that illustrates a system for presenting hyper-personalized augmented objects, in accordance with an exemplary embodiment of the disclosure; and

FIG. 6 is a flow chart that illustrates a method for presenting hyper-personalized augmented objects, in accordance with an exemplary embodiment of the disclosure.

Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description of exemplary embodiments is intended for illustration purposes only and is, therefore, not intended to necessarily limit the scope of the disclosure.

DETAILED DESCRIPTION

The present disclosure is best understood with reference to the detailed figures and description set forth herein. Various embodiments are discussed below with reference to the figures. However, those skilled in the art will readily appreciate that the detailed descriptions given herein with respect to the figures are simply for explanatory purposes, as the methods and systems may extend beyond the described embodiments. In one example, the teachings presented and the needs of a particular application may yield multiple alternate and suitable approaches to implement the functionality of any detail described herein. Therefore, any approach may extend beyond the particular implementation choices in the following embodiments that are described and shown. References to "an embodiment", "another embodiment", "yet another embodiment"