CN-113892095-B - Context-based media curation
Abstract
A system is configured to perform operations including capturing an image at a client device, wherein the image includes a depiction of an object, identifying an object category of the object based on the depiction of the object within the image, accessing media content associated with the object category within a media store, generating a presentation of the media content, and causing the presentation of the media content to be displayed within the image at the client device.
Inventors
- K. Anvaripour
- E. J. Charlton
- T. Chen
- C. N. Mourkogiannis
- K. D. Tang
Assignees
- Snap Inc.
Dates
- Publication Date: 2026-05-08
- Application Date: 2020-03-26
- Priority Date: 2019-03-29
Claims (12)
- 1. A method for curating a collection of media content, comprising: capturing, at a client device, an image using a camera of the client device, the image comprising a real-time depiction of an object in a physical environment; receiving an input context, the input context comprising at least time data indicative of a current time at which the image was captured; identifying an object based on the image; determining a location of the identified object within the captured image based on a subset of a plurality of image features of the image; selecting a category based on the identified object, the category corresponding to one or more media tags; generating a query comprising a set of query terms based at least on the category and the time data in the input context, wherein the time data is included in the query to limit media items to those media items that are tagged with a time attribute corresponding to the current time at which the image was captured; accessing, based on the query, a media store comprising a plurality of media items, each media item of the plurality of media items being associated with a media tag; filtering a set of media items from the plurality of media items of the media store for object category and temporal relevance based on the one or more media tags corresponding to the category and the time data in the input context; generating a set of media content comprising the set of media items arranged according to the input context, the set of media content comprising a ranking of the set of media items based on the image and the time data in the input context; retrieving a media template defining one or more display positions within the captured image relative to the determined location of the identified object based at least on a portion of the plurality of image features and the input context; populating the media template with at least a subset of the ranked set of media items; and rendering the set of media content as an augmented reality overlay on the captured image by rendering the populated media template within the captured image at the one or more display positions defined by the media template such that at least one media item is rendered at a spatial location relative to the identified object, the rendering of the set of media content including a result indicator displaying a number of media items in the set of media content.
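The query-generation, filtering, and ranking steps recited in claim 1 can be illustrated with a minimal sketch. The `MediaItem` schema, tag vocabulary, and overlap-based ranking below are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    """A stored media item carrying descriptive tags (hypothetical schema)."""
    name: str
    tags: set = field(default_factory=set)

def build_query(category, time_of_day):
    """Combine the detected object category with the input context's
    time data into a set of query terms, as claim 1 recites."""
    return {category, time_of_day}

def filter_and_rank(store, query):
    """Keep only items whose tags cover every query term, then rank by
    total tag overlap (a stand-in for the claim's image/time ranking)."""
    matches = [m for m in store if query <= m.tags]
    return sorted(matches, key=lambda m: len(m.tags & query), reverse=True)

store = [
    MediaItem("sunset_filter", {"coffee", "evening"}),
    MediaItem("latte_art_lens", {"coffee", "morning", "cafe"}),
    MediaItem("night_sticker", {"city", "evening"}),
]
results = filter_and_rank(store, build_query("coffee", "morning"))
# only items tagged for both the category and the current time survive
```

In this sketch the time term in the query performs exactly the limiting role the claim describes: an item tagged only "evening" is excluded when the image is captured in the morning.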
- 2. The method of claim 1, wherein the method further comprises: receiving a request to assign one or more media tags to at least a portion of the plurality of media items.
- 3. The method of claim 1, wherein the collection of media content comprises one or more of: a graphical icon; augmented reality media content; a media overlay; and auditory content.
- 4. The method of claim 1, wherein the method further comprises: receiving a selection of media content from a presentation of the collection of media content; and presenting the media content at the client device.
- 5. The method of claim 1, wherein the object is a first object, and generating the query comprises: identifying the first object depicted in the image; identifying a second object depicted in the image; selecting a first category corresponding to the first object and a second category corresponding to the second object; generating the set of query terms of the query based on the first category and the second category; and querying the media store based on the query.
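Claim 5's two-object case can be sketched as a union of category terms; the object names and the `category_map` lookup are hypothetical:

```python
def query_terms_for_objects(detected, category_map):
    """Build the set of query terms from multiple detected objects
    (claim 5): each object contributes its corresponding category.
    `category_map` is a hypothetical object-to-category lookup."""
    return {category_map[obj] for obj in detected if obj in category_map}

category_map = {"espresso_cup": "coffee", "croissant": "pastry"}
terms = query_terms_for_objects(["espresso_cup", "croissant"], category_map)
# terms now contains one category term per recognized object
```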
- 6. A system for curating a collection of media content, comprising: a memory; and at least one hardware processor coupled to the memory, the memory comprising instructions that cause the system to perform operations comprising: capturing, at a client device, an image using a camera of the client device, the image comprising a real-time depiction of an object in a physical environment; receiving an input context, the input context comprising at least time data indicative of a current time at which the image was captured; identifying an object based on the image; determining a location of the identified object within the captured image based on a subset of a plurality of image features of the image; selecting a category based on the identified object, the category corresponding to one or more media tags; generating a query comprising a set of query terms based at least on the category and the time data in the input context, wherein the time data is included in the query to limit media items to those media items that are tagged with a time attribute corresponding to the current time at which the image was captured; accessing, based on the query, a media store comprising a plurality of media items, each media item of the plurality of media items being associated with a media tag; filtering a set of media items from the plurality of media items of the media store for object category and temporal relevance based on the one or more media tags corresponding to the category and the time data in the input context; generating a set of media content comprising the set of media items arranged according to the input context, the set of media content comprising a ranking of the set of media items based on the image and the time data in the input context; retrieving a media template defining one or more display positions within the captured image relative to the determined location of the identified object based at least on a portion of the plurality of image features and the input context; populating the media template with at least a subset of the ranked set of media items; and rendering the set of media content as an augmented reality overlay on the captured image by rendering the populated media template within the captured image at the one or more display positions defined by the media template such that at least one media item is rendered at a spatial location relative to the identified object, the rendering of the set of media content including a result indicator displaying a number of media items in the set of media content.
- 7. The system of claim 6, wherein the instructions cause the system to perform operations further comprising: receiving a request to assign one or more media tags to at least a portion of the plurality of media items.
- 8. The system of claim 6, wherein the collection of media content comprises one or more of: a graphical icon; augmented reality media content; a media overlay; and auditory content.
- 9. The system of claim 6, wherein the instructions cause the system to perform operations further comprising: receiving a selection of media content from a presentation of the collection of media content; and presenting the media content at the client device.
- 10. The system of claim 6, wherein the object is a first object, and generating the query comprises: identifying the first object depicted in the image; identifying a second object depicted in the image; selecting a first category corresponding to the first object and a second category corresponding to the second object; generating the set of query terms of the query based on the first category and the second category; and querying the media store based on the query.
- 11. A machine-readable storage medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: capturing, at a client device, an image using a camera of the client device, the image comprising a real-time depiction of an object in a physical environment; receiving an input context, the input context comprising at least time data indicative of a current time at which the image was captured; identifying an object based on the image; determining a location of the identified object within the captured image based on a subset of a plurality of image features of the image; selecting a category based on the identified object, the category corresponding to one or more media tags; generating a query comprising a set of query terms based at least on the category and the time data in the input context, wherein the time data is included in the query to limit media items to those media items that are tagged with a time attribute corresponding to the current time at which the image was captured; accessing, based on the query, a media store comprising a plurality of media items, each media item of the plurality of media items being associated with a media tag; filtering a set of media items from the plurality of media items of the media store for object category and temporal relevance based on the one or more media tags corresponding to the category and the time data in the input context; generating a set of media content comprising the set of media items arranged according to the input context, the set of media content comprising a ranking of the set of media items based on the image and the time data in the input context; retrieving a media template defining one or more display positions within the captured image relative to the determined location of the identified object based at least on a portion of the plurality of image features and the input context; populating the media template with at least a subset of the ranked set of media items; and rendering the set of media content as an augmented reality overlay on the captured image by rendering the populated media template within the captured image at the one or more display positions defined by the media template such that at least one media item is rendered at a spatial location relative to the identified object, the rendering of the set of media content including a result indicator displaying a number of media items in the set of media content.
- 12. The machine-readable storage medium of claim 11, wherein the set of media content comprises one or more of: a graphical icon; augmented reality media content; a media overlay; and auditory content.
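The template-population and result-indicator steps common to claims 1, 6, and 11 can be sketched as follows; the slot coordinates and dictionary layout are illustrative assumptions, not the patent's data model:

```python
def populate_template(template_slots, ranked_items):
    """Fill a media template's display positions with the top-ranked
    media items and attach a result indicator showing the total count
    of items in the set of media content (claims 1, 6, and 11)."""
    placements = list(zip(template_slots, ranked_items))
    return {
        "placements": placements,              # (position, item) pairs
        "result_indicator": len(ranked_items)  # count shown in the overlay
    }

# hypothetical display positions relative to the identified object
slots = [(0.2, 0.8), (0.5, 0.8)]
overlay = populate_template(slots, ["lens_a", "lens_b", "lens_c"])
```

Note that, as in the claims, only a subset of the ranked items needs to fit the template, while the result indicator still reports the full count.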
Description
Context-based media curation

Priority Claim
The present application claims priority from U.S. patent application Ser. No. 16/370,373, filed March 29, 2019, the entire contents of which are incorporated herein by reference.

Technical Field
Embodiments of the present disclosure relate generally to mobile computing technology and, more particularly, but not by way of limitation, to systems for curating and presenting collections of media content based on user context.

Background
Augmented Reality (AR) is a real-time direct or indirect view of a physical, real-world environment, the elements of which are augmented by computer-generated sensory input.

Drawings
For ease of identifying the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
FIG. 1 is a block diagram illustrating an example messaging system for exchanging data (e.g., messages and associated content) over a network, wherein the messaging system includes a media curation system, according to some embodiments.
FIG. 2 is a block diagram illustrating further details regarding the messaging system, according to an example embodiment.
FIG. 3 is a block diagram illustrating various modules of a media curation system, according to some example embodiments.
FIG. 4 is a flowchart depicting a method of curating a collection of media content based on input received at a client device, in accordance with certain example embodiments.
FIG. 5 is a flowchart depicting a method of generating custom context filters, according to some example embodiments.
FIG. 6 is a flowchart depicting a method of generating custom context filters, according to some example embodiments.
FIG. 7 is an interface flow diagram depicting an interface presented by a media curation system, according to some example embodiments.
FIG. 8 is an interface flow diagram depicting an interface presented by a media curation system, according to some example embodiments.
FIG. 9 is a diagram depicting a collection of media content curated based on input, according to some example embodiments.
FIG. 10 is a flowchart depicting a method of curating a collection of media content based on input including an image and an input context, according to some example embodiments.
FIG. 11 is a block diagram illustrating a representative software architecture that may be used in connection with the various hardware architectures described herein and for implementing the various embodiments.
FIG. 12 is a block diagram illustrating components of a machine capable of reading instructions from a machine-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methods discussed herein, according to some example embodiments.

Detailed Description
As described above, an AR system provides a user with a real-time direct or indirect view of a physical, real-world environment displayed within a Graphical User Interface (GUI), wherein elements of the view are enhanced by computer-generated sensory input. For example, an AR interface may present media content at a location within the displayed view of the real-world environment such that the media content appears to interact with elements in the real-world environment. Similarly, a media overlay or "shot" includes a set of media items that may be presented as an overlay or filter on media content presented at a client device, and that modify or transform that media content in some way.
For example, the AR content of a shot may be used to make complex additions or transformations to media content presented on a client device, such as adding rabbit ears to a person's head in a video or image, adding floating hearts and stars to a video or image, changing the scale of features of a person or object in a video or image, or many other such transformations. Such transformations include real-time modification of an image or video as the client device captures and displays it on the screen, as well as modification of stored content (e.g., video clips in a gallery or store accessible to the client device). Example embodiments described herein relate to a context-based media curation system that determines a user context based on one or more inputs received at a client device and curates a set of media content based on the user context, where the set of media content may include auditory content, video content, images, and AR content, including shots. According to some embodiments, a media curation system is configured to perform operations comprising receiving, at a client device, an input, wherein the input comprises an input context and an image comprising a plurality of image features, identifying a category based on the plurality of image features of the image, generating a query based on the category and the input context, querying a media store based on the query, wherein the media store comprises a set of media items
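The detailed description notes that media content is presented so that it appears to interact with elements of the real-world environment. One way to sketch such placement, assuming a hypothetical bounding box for the identified object, is:

```python
def overlay_anchor(bbox, offset=(0, -20)):
    """Place overlay content relative to a detected object's bounding
    box (x, y, width, height): here, centered horizontally over the box
    and offset a fixed number of pixels above it, so the media appears
    attached to the object. The offset is an illustrative choice, not a
    value taken from the patent."""
    x, y, w, h = bbox
    return (x + w // 2 + offset[0], y + offset[1])

# e.g., an object detected at (100, 200) with size 50x80
anchor = overlay_anchor((100, 200, 50, 80))
# anchor == (125, 180): horizontally centered, 20 px above the box
```

In a full system this anchor would be recomputed every frame from the tracked object position, so the overlay follows the object as the camera moves.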