US-20260126893-A1 - MEDIA GENERATING SYSTEM AND METHOD WITH DYNAMIC CONTENT PERSONALIZATION

US20260126893A1

Abstract

A media generation system and method with dynamic content personalization.

Inventors

  • Ann Greenberg
  • Philippe Piernot

Assignees

  • Sceneplay, Inc.

Dates

Publication Date
2026-05-07
Application Date
2025-12-30

Claims (19)

  1. A media generation system, comprising: a store that contains a marked-up script in a markup language and one or more pieces of personalization data associated with a user, the marked-up script contains a script having a scene with a plurality of shots, wherein each shot has a plurality of splitscenes, wherein each splitscene is a portion of a shot derived from the marked-up script; and a processor, executing instructions stored on a memory, to operate as: an editor that, without user interaction, based on the marked-up script and the one or more pieces of personalization data, automatically selects a subset of the plurality of splitscenes matching the personalization data and automatically edits and combines the selected subset of splitscenes into a personalized combined media presentation for the user.
  2. The system of claim 1, wherein the personalization data comprises a ratings setting, and wherein the editor excludes splitscenes that do not match the ratings setting.
  3. The system of claim 1, wherein the personalization data comprises a privacy setting, and wherein the editor selects splitscenes based on the privacy setting.
  4. The system of claim 1, wherein the personalization data identifies a social network group of the user, and wherein the editor selects splitscenes recorded by members of the social network group.
  5. The system of claim 1, wherein the personalization data comprises user favorites, and wherein the editor selects splitscenes corresponding to the user favorites.
  6. The system of claim 1, wherein the personalized combined media presentation includes a product placement, and wherein the editor selects the product placement by filtering available branded images based on the personalization data.
  7. The system of claim 1, wherein the editor renders the personalized combined media presentation on the fly.
  8. The system of claim 1, wherein the personalization data comprises a location of the user, and wherein the editor selects splitscenes relevant to the location.
  9. The system of claim 1, wherein the personalization data comprises physical attributes of the user, and wherein the editor selects splitscenes matching the physical attributes.
  10. A media generation method, comprising: storing, in a computer system, a marked-up script in a markup language and one or more pieces of personalization data associated with a user, the marked-up script contains a script having a scene with a plurality of shots, wherein each shot has a plurality of splitscenes, wherein each splitscene is a portion of a shot derived from the marked-up script; automatically selecting a subset of the plurality of splitscenes based on the one or more pieces of personalization data by filtering the plurality of splitscenes; and automatically editing and combining, by an editor unit without user interaction and based on the marked-up script, the selected subset of splitscenes into a personalized combined media presentation for the user.
  11. The method of claim 10, wherein the personalization data comprises a ratings setting, and further comprising excluding splitscenes from the selected subset of splitscenes that do not match the ratings setting.
  12. The method of claim 10, wherein the personalization data comprises a privacy setting, and wherein selecting the subset of splitscenes further comprises selecting the subset of splitscenes based on the privacy setting.
  13. The method of claim 10, wherein the personalization data identifies a social network group of the user, and wherein selecting the subset of splitscenes further comprises selecting the subset of splitscenes that are generated by members of the social network group.
  14. The method of claim 10, wherein the personalization data comprises user favorites, and wherein selecting the subset of splitscenes further comprises selecting the subset of splitscenes corresponding to the user favorites.
  15. The method of claim 10, wherein the personalized combined media presentation includes a product placement, and further comprising selecting the product placement by filtering available branded images based on the personalization data.
  16. The method of claim 10, further comprising rendering the personalized combined media presentation on the fly.
  17. The method of claim 10, wherein the personalization data comprises a location of the user, and wherein selecting the subset of splitscenes further comprises selecting the subset of splitscenes relevant to the location.
  18. The method of claim 10, wherein the personalization data comprises physical attributes of the user, and wherein selecting the subset of splitscenes further comprises selecting the subset of splitscenes matching the physical attributes.
  19. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 10.
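
The claims do not disclose a concrete data model, so the following is only a minimal sketch of the select-and-combine step recited in claims 1 and 10, with ratings filtering (claims 2 and 11), location filtering (claims 8 and 17), and favorites ranking (claims 5 and 14). All class, field, and function names here are hypothetical illustrations, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Splitscene:
    # A portion of a shot derived from the marked-up script (fields hypothetical).
    shot_id: int
    rating: str                      # e.g. "G", "PG", "R"
    location: Optional[str] = None   # None means location-neutral
    tags: set = field(default_factory=set)

def select_splitscenes(splitscenes, personalization):
    """Filter splitscenes against the user's personalization data, then rank them."""
    allowed = []
    for s in splitscenes:
        # Claims 2/11: exclude splitscenes that do not match the ratings setting.
        if "ratings" in personalization and s.rating not in personalization["ratings"]:
            continue
        # Claims 8/17: keep only splitscenes relevant to the user's location
        # (location-neutral splitscenes always pass).
        if "location" in personalization and s.location not in (None, personalization["location"]):
            continue
        allowed.append(s)
    # Claims 5/14: rank by overlap with the user's favorites, best first.
    favorites = personalization.get("favorites", set())
    allowed.sort(key=lambda s: len(s.tags & favorites), reverse=True)
    return allowed

def combine(ranked):
    """Keep the best-ranked splitscene per shot, then emit them in shot order (claims 1/10)."""
    by_shot = {}
    for s in ranked:                       # ranked is best-first
        by_shot.setdefault(s.shot_id, s)   # first (= highest-ranked) wins per shot
    return [by_shot[shot_id] for shot_id in sorted(by_shot)]
```

Under this sketch, the "editor" of claim 1 is simply `combine(select_splitscenes(...))` run without user interaction; the real system presumably derives the shot ordering from the marked-up script rather than from an integer `shot_id`.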

Description

RELATED APPLICATIONS

This application is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 18/765,925, filed Jul. 8, 2024, which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 17/188,956, filed Mar. 1, 2021 (issued as U.S. Pat. No. 12,032,810 on Jul. 9, 2024), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 16/445,160, filed Jun. 18, 2019 (now U.S. Pat. No. 10,936,168, issued on Mar. 2, 2021), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 14/660,877, filed Mar. 17, 2015 and titled "SYSTEM AND METHOD FOR DESCRIBING A SCENE FOR A PIECE OF MEDIA" (now U.S. Pat. No. 10,346,001, issued Jul. 9, 2019), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 12/499,686, filed Jul. 8, 2009 and entitled "MEDIA GENERATING SYSTEM AND METHOD" (now U.S. Pat. No. 9,002,177, issued on Apr. 7, 2015), which in turn claims the benefit under 35 USC 119(e) and priority under 35 USC 120 to U.S. Provisional Patent Application Ser. No. 61/079,041, filed on Jul. 8, 2008 and entitled "Media Generating System and Method", the entirety of all of which are incorporated by reference.

FIELD

A media generation system and method are provided; the system and method are particularly applicable to a piece of video media.

BACKGROUND

Movies are typically made by studios for mass distribution to audiences. The tools to generate media (e.g., camcorders, web cameras, etc.) have become progressively cheaper, the cost of production has gone down, and both the tools and production have been adopted by a mass audience. While such tools come with instructions on how to operate the tool, lessons on what kind of content to create have not been forthcoming. There has been a proliferation of User Generated Content (UGC).
Sites such as Youtube (when not showing commercial content created for mostly offline media, or home movies) show that consumers have taken the cheaper tools to heart. UGC, however, is rarely compelling, and most often is amateurish. There are existing media systems and services that are currently employed by users to generate and manipulate entertainment content. For example, there are multi-player games and virtual worlds that are avatar-based, animated settings where users can interact with other animated characters in a spontaneous, non-scripted way (see Second Life at www.secondlife.com). There are also websites that permit a user to generate an avatar-based movie using a game engine (see www.machinima.com). There are also tools that allow a user to record, edit, post, and share media, allowing them to be creators and distributors. There are also video assemblers and sequencers that provide a drag-and-drop way for users to create their own sequences of previously recorded material, such as ways to synchronize their own photos and video to music to create more compelling presentations of their personal media. There are also systems that permit mashups, wherein users can combine found footage or user-generated combinations of media (often in random combinations, or unified by theme or graphics, but not based upon scripts). There are also community stories (Wiki stories), which are stories written by multiple participants in a text-based co-creative effort. There are also web-based solutions for generating simple animated scenarios, wherein users choose settings, time, characters, dialog, and/or music. Finally, there are "Cinema 2.0" efforts, which are more sophisticated efforts at crowd-sourced script generation and video coverage to assemble a linear movie-type experience online, allowing users to bypass high-budget productions.
However, these existing systems and services do not provide a language and platform that allow users to generate content that can be combined with a plurality of other users' content (within a social network) so that the users appear to be in the same scene together. It is desirable for users to see themselves in the story (thus earning their "15 MB of Fame"). Along with their remote peers, users want to interact with a plurality of other users of a social network to create nonlinear narratives. None of the existing systems and methods provide the proper data, technology, and social network that enable an easy-to-use user-generated interactive cinema system, and it is to this end that the present invention is directed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an example of a media generation system; FIG. 2 illustrates more details of the media generation system; FIG. 3 illustrates a method for media data creation for the media generation system; FIGS. 4A-B illustrate two examples of a marked-up script code that may be used by the media generation system; FIG. 5 illustrates an example of a user interface for user d