US-20260126891-A1 - MEDIA GENERATING SYSTEM AND METHOD WITH USER GENERATED CONTENT


Abstract

A media generation system and method for user generated content for social media.

Inventors

  • Ann Greenberg
  • Philippe Piernot

Assignees

  • Sceneplay, Inc.

Dates

Publication Date
2026-05-07
Application Date
2025-12-29

Claims (18)

  1. A media generation system, comprising: a store that contains a marked-up script in a markup language that contains a script having a scene with a plurality of shots, wherein each shot has a plurality of splitscenes, wherein each splitscene is a portion of a shot derived from the marked-up script and involves at least one actor who performs in the shot, one or more actions of the actor, and one or more blocking directions from the script; and a processor, executing instructions stored in a memory, to operate as: a director that, based on the marked-up script, automatically directs a plurality of actors to perform in a plurality of shots specified in the marked-up script and presents, based on the marked-up script, instructions to perform the one or more actions to the actor to generate two or more splitscenes, each splitscene having a timing for the respective shot in that splitscene; and an editor that, without user interaction, based on the marked-up script and the timing for each shot in each splitscene, automatically edits and combines two or more generated splitscenes into a combined media presentation that is a composite layout of the two or more generated splitscenes.
  2. The system of claim 1, further comprising a social network interface that identifies a user who has generated a complementary splitscene for the two or more generated splitscenes.
  3. The system of claim 1, wherein the editor further synchronizes, using signal analysis on audio levels of the generated splitscenes, the two or more generated splitscenes.
  4. The system of claim 1, wherein each generated splitscene has a permission level indicating who can combine the generated splitscene.
  5. The system of claim 1, wherein the director operates on a video-enabled cell phone.
  6. The system of claim 1, wherein the editor dynamically assembles the generated splitscenes that have been generated at different times and places.
  7. The system of claim 1, wherein the marked-up script is derived from metadata of an existing media file.
  8. The system of claim 1, wherein the combined media presentation includes textual overlays comprising user identification data derived from the marked-up script.
  9. The system of claim 1, wherein the composite layout comprises a product placement digitally inserted into the combined media presentation.
  10. A media generation method, comprising: providing, in a computer system, a marked-up script in a markup language that contains a script having a scene with a plurality of shots, wherein each shot has a plurality of splitscenes, wherein each splitscene is a portion of a shot derived from the marked-up script and involves at least one actor who performs in the shot, one or more actions of the actor, and one or more blocking directions from the script; directing, by a director unit executed by the computer system, a plurality of actors to perform in the one or more shots specified in the marked-up script to generate splitscene content comprised of one or more generated splitscenes, each generated splitscene having a timing for the respective shot in the splitscene; presenting, by the director unit executed by the computer system and based on the marked-up script, instructions to perform the one or more actions to the plurality of actors; and automatically editing and combining, by an editor unit executed by the computer system, without user interaction and based on the marked-up script and the timing for each shot in each generated splitscene, the generated splitscenes into a combined media presentation that is a composite layout of the generated splitscenes.
  11. The method of claim 10, further comprising providing a social network interface that identifies a user who has generated a complementary splitscene for the two or more generated splitscenes.
  12. The method of claim 10, further comprising synchronizing, by the editor using signal analysis on audio levels of the generated splitscenes, the two or more generated splitscenes.
  13. The method of claim 10, wherein each generated splitscene has a permission level indicating who can combine the generated splitscene.
  14. The method of claim 10, further comprising dynamically assembling, by the editor, the generated splitscenes that have been generated at different times and places.
  15. The method of claim 10, wherein the marked-up script is derived from metadata of an existing media file.
  16. The method of claim 10, wherein the combined media presentation includes textual overlays comprising user identification data derived from the marked-up script.
  17. The method of claim 10, wherein the composite layout comprises a product placement digitally inserted into the combined media presentation.
  18. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 10.
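Claim 1 recites a "marked-up script in a markup language" without fixing a concrete vocabulary. The sketch below assumes a hypothetical XML schema (the `<script>`, `<scene>`, `<shot>`, and `<splitscene>` element names and their attributes are illustrative, not taken from the specification) and shows how a director unit might extract each splitscene's actor, actions, and blocking directions from such a script.

```python
# Hypothetical marked-up script; the element and attribute names below
# are illustrative only, not a schema defined by this application.
import xml.etree.ElementTree as ET

MARKED_UP_SCRIPT = """\
<script title="Reunion">
  <scene id="1" location="cafe">
    <shot id="1a">
      <splitscene actor="ALICE" blocking="enters from left">
        <action>waves at the camera</action>
        <line>Hey, you made it!</line>
      </splitscene>
      <splitscene actor="BOB" blocking="seated at table">
        <action>raises a coffee cup</action>
        <line>Wouldn't miss it.</line>
      </splitscene>
    </shot>
  </scene>
</script>
"""

def splitscenes(xml_text):
    """Yield (actor, action, blocking) for each splitscene in the script."""
    root = ET.fromstring(xml_text)
    for ss in root.iter("splitscene"):
        yield (ss.get("actor"), ss.findtext("action"), ss.get("blocking"))

for actor, action, blocking in splitscenes(MARKED_UP_SCRIPT):
    print(actor, "|", action, "|", blocking)
```

A director unit could walk these tuples to prompt each actor with their actions and blocking, and an editor unit could use the shot structure to decide which splitscenes belong in the same composite layout.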
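Claims 3 and 12 synchronize splitscenes "using signal analysis on audio levels." A minimal sketch of one way to do that, assuming per-frame audio-level envelopes and a brute-force cross-correlation (an illustrative technique, not the algorithm prescribed by the specification):

```python
# Estimate the offset that aligns two splitscenes by cross-correlating
# their audio-level envelopes (one level value per frame). Illustrative
# only; the application does not specify this particular method.

def best_offset(env_a, env_b, max_lag):
    """Return the lag (in frames) of env_b relative to env_a that
    maximizes the dot product of the overlapping audio levels."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            env_a[i] * env_b[i - lag]
            for i in range(max(0, lag), min(len(env_a), len(env_b) + lag))
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Two level sequences containing the same loud transient (e.g. a clap),
# with the spike in `a` occurring 3 frames later than in `b`.
a = [0, 0, 0, 9, 1, 0, 0, 0, 0, 0]
b = [9, 1, 0, 0, 0, 0, 0]
print(best_offset(a, b, 5))  # prints 3
```

Once the offset is known, the editor can shift one splitscene by that many frames before compositing, so actors recorded at different times and places appear to perform in the same scene together.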

Description

RELATED APPLICATIONS

This application is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 18/765,925, filed Jul. 8, 2024, which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 17/188,956, filed Mar. 1, 2021 (issued as U.S. Pat. No. 12,032,810 on Jul. 9, 2024), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 16/445,160, filed Jun. 18, 2019 (now U.S. Pat. No. 10,936,168, issued on Mar. 2, 2021), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 14/660,877, filed Mar. 17, 2015 and titled "SYSTEM AND METHOD FOR DESCRIBING A SCENE FOR A PIECE OF MEDIA" (now U.S. Pat. No. 10,346,001, issued Jul. 9, 2019), which in turn is a continuation of and claims priority under 35 USC 120 to U.S. patent application Ser. No. 12/499,686, filed Jul. 8, 2009 and entitled "MEDIA GENERATING SYSTEM AND METHOD" (now U.S. Pat. No. 9,002,177, issued on Apr. 7, 2015), which in turn claims the benefit under 35 USC 119(e) and priority under 35 USC 120 to U.S. Provisional Patent Application Ser. No. 61/079,041, filed on Jul. 8, 2008 and entitled "Media Generating System and Method", the entirety of all of which are incorporated by reference.

FIELD

A media generation system and method are provided; the system and method are particularly applicable to a piece of video media.

BACKGROUND

Movies are typically made by studios for mass distribution to audiences. The tools to generate media (e.g., camcorders, web cameras, etc.) have become progressively cheaper, the cost of production has gone down, and both the tools and production have been adopted by a mass audience. While such tools come with instructions on how to operate them, lessons on what kind of content to create have not been forthcoming. There has been a proliferation of User Generated Content (UGC).
Sites such as YouTube (when not showing commercial content created mostly for offline media, or home movies) show that consumers have taken the cheaper tools to heart. UGC, however, is rarely compelling and most often is amateurish. There are existing media systems and services that are currently employed by users to generate and manipulate entertainment content. For example, there are multi-player games and virtual worlds that are avatar-based, animated settings where users can interact with other animated characters in a spontaneous, non-scripted way (see Second Life at www.secondlife.com). There are also websites that permit a user to generate an avatar-based movie using a game engine (see www.machinima.com). There are also tools that allow a user to record, edit, post, and share media, allowing them to be creators and distributors. There are also video assemblers and sequencers that provide a drag-and-drop way for users to create their own sequences of previously recorded material, such as ways to synchronize their own photos and video to music to create more compelling presentations of their personal media. There are also systems that permit mashups, wherein users can combine found footage or user-generated combinations of media (often in random combinations, or unified by theme or graphics, but not based upon scripts). There are also community stories (wiki stories) that are written by multiple participants in a text-based co-creative effort. There are also web-based solutions for generating simple animated scenarios wherein users choose settings, time, characters, dialog, and/or music. Finally, there are "Cinema 2.0" efforts that are more sophisticated attempts at crowd-sourced script generation and video coverage in order to assemble a linear, movie-type experience online that allows users to bypass high-budget productions.
However, these existing systems and services do not provide a language and platform that allow users to generate content that can be combined with a plurality of other users' content (within a social network) so that the users appear to be in the same scene together. It is desirable for users to see themselves in the story (thus earning their "15 MB of Fame"). Along with their remote peers, users want to interact with a plurality of other users of a social network to create nonlinear narratives. None of the existing systems and methods provides the proper data, technology, and social network that enable an easy-to-use, user-generated interactive cinema system, and it is to this end that the present invention is directed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an example of a media generation system; FIG. 2 illustrates more details of the media generation system; FIG. 3 illustrates a method for media data creation for the media generation system; FIGS. 4A-B illustrate two examples of a marked-up script code that may be used by the media generation system; FIG. 5 illustrates an example of a user interface for user d