EP-4736125-A2 - SYSTEMS AND METHODS FOR USE IN FILMING

EP 4736125 A2

Abstract

Example systems and methods for filming are disclosed. In an example, prop spatial data for a prop device and digital asset spatial data for a virtual asset can be received. The digital asset's spatial data can be updated based on the prop data, adjusting its position, movement, and orientation in a virtual space. In some examples, a scene model, a virtual representation of a scene, can be created using depth and color data. A miniaturized diorama of the scene model can be generated. A virtual camera can be configured to provide a perspective view of the diorama, visualized on an output device. In some examples, the scene model can include virtual points corresponding to scene locations, creating a waypoint scene model. Augmented video data can incorporate these virtual points based on the waypoint scene model. The waypoint scene model can also be miniaturized to provide the diorama.
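The prop-to-asset synchronization summarized above can be sketched in a few lines. This is a minimal illustration only, not the patent's implementation; the `SpatialData` fields, the `scale` and `offset` parameters, and the function name are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SpatialData:
    """Hypothetical spatial record: position (x, y, z), orientation
    (Euler angles in degrees), and a velocity vector."""
    position: tuple
    orientation: tuple
    velocity: tuple

def update_asset_from_prop(prop: SpatialData,
                           scale: float = 1.0,
                           offset: tuple = (0.0, 0.0, 0.0)) -> SpatialData:
    """Derive updated digital-asset spatial data from tracked prop data,
    applying an assumed physical-to-virtual scale and placement offset."""
    return SpatialData(
        position=tuple(scale * p + o for p, o in zip(prop.position, offset)),
        orientation=prop.orientation,          # mirror the prop's orientation
        velocity=tuple(scale * v for v in prop.velocity),
    )
```

In this sketch, each new frame of prop tracking data produces a fresh `SpatialData` for the digital asset, so the asset's position, movement, and orientation in the virtual space follow the physical prop.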

Inventors

  • FAYETTE, BRANDON
  • REDDICK, GENE

Assignees

  • FD IP & Licensing LLC

Dates

Publication Date
2026-05-06
Application Date
2024-06-11

Claims (20)

  1. A computer-implemented method comprising: providing a scene model that is a virtual representation of a scene based on depth and color data captured for the scene; creating a miniaturized version of the scene model corresponding to a diorama of the scene; setting a virtual camera with respect to the scene model to provide a perspective view of the diorama; and causing the diorama of the scene to be outputted at the perspective view on an output device.
  2. The computer-implemented method of claim 1, wherein said causing comprises generating augmented video data comprising one or more composited video frames with the diorama.
  3. The computer-implemented method of claim 2, wherein said generating augmented video data comprises compositing one or more video frames provided by a video camera and the diorama to provide the augmented video data.
  4. The computer-implemented method of claim 2, wherein the virtual camera is further set based on camera viewpoint data for the video camera to specify the perspective view of the diorama.
  5. The computer-implemented method of claim 4, wherein the perspective view includes a top-down view, an oblique view, or a slanted view.
  6. The computer-implemented method of claim 4, wherein the camera is a first video camera, the output device is a first output device, the virtual camera is a first virtual camera, and the perspective view is a first perspective view, the method further comprising: setting a second virtual camera in the virtual environment with respect to the scene model to provide a second perspective view of the diorama based on camera viewpoint data for a second video camera; and causing the diorama of the scene at the second perspective view to be rendered on a second output device.
  7. The computer-implemented method of claim 6, wherein the first and second output devices are mobile devices.
  8. The computer-implemented method of claim 2, further comprising inserting one or more digital assets into the scene model representative of digital assets to be used in the scene.
  9. The computer-implemented method of claim 8, wherein the one or more digital assets is a first digital asset, the method further comprising inserting a second digital asset representative of an actor into the scene model, wherein movements of the second digital asset in the scene model are synced to movements of the actor.
  10. The computer-implemented method of claim 1, wherein the diorama is animated to provide a visual representation of the scene.
  11. A computer-implemented method comprising: receiving waypoint instructions identifying virtual points for a digital asset for use in a scene; updating a scene model to include the virtual points at locations in the scene model corresponding to locations in the scene to provide a waypoint scene model; providing augmented video data comprising one or more composited video frames with the virtual points in the scene based on the waypoint scene model; and causing the augmented video data to be rendered on an output device.
  12. The computer-implemented method of claim 11, further comprising receiving animation instructions identifying an animation of a digital asset between neighboring virtual points of the virtual points, wherein the waypoint scene model is provided with data specifying the animation of the digital asset between the neighboring virtual points.
  13. The computer-implemented method of claim 12, wherein said providing comprises generating the augmented video data with composited frames with the virtual points in the scene and the digital asset between the neighboring virtual points.
  14. The computer-implemented method of claim 13, wherein said causing comprises causing the augmented video to be rendered on the output device to provide a visual animation of the digital asset at and/or between the neighboring points based on the waypoint scene model.
  15. The computer-implemented method of claim 11, further comprising: creating a miniaturized version of the waypoint scene model corresponding to a diorama of the scene; and setting a virtual camera with respect to the scene model to provide a perspective view of the diorama.
  16. The computer-implemented method of claim 15, wherein said augmented video data is provided with the one or more composited video frames with the diorama.
  17. The computer-implemented method of claim 16, wherein said providing augmented video data comprises compositing one or more video frames provided by a video camera and the diorama to provide the augmented video data.
  18. The computer-implemented method of claim 17, wherein the virtual camera is further set based on camera viewpoint data for the video camera to specify the perspective view of the diorama.
  19. The computer-implemented method of claim 18, wherein the output device is a portable device and is one of a mobile phone, a tablet, a television (TV) device, and a laptop computer.
  20. The computer-implemented method of claim 19, wherein the diorama is animated to provide a visual representation of the scene.
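The miniaturization and perspective-view steps recited in claims 1 and 15 can be illustrated with a small sketch. This is an assumed implementation for illustration only: the scene model is reduced to a list of 3D points, the diorama is produced by uniform scaling about the centroid, and the virtual camera is simplified to an orthographic top-down projection (one of the views named in claim 5):

```python
def miniaturize(points, scale=0.05):
    """Scale scene-model points about their centroid to form a diorama.
    `points` is a list of (x, y, z) tuples; `scale` < 1 shrinks the scene."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [((x - cx) * scale + cx,
             (y - cy) * scale + cy,
             (z - cz) * scale + cz) for x, y, z in points]

def project_top_down(points):
    """Simplified virtual camera: orthographic top-down view that
    drops the vertical (y) axis, yielding 2D (x, z) coordinates."""
    return [(x, z) for x, y, z in points]
```

A perspective (pinhole) camera, or a second virtual camera as in claim 6, would replace `project_top_down` with a projection driven by the video camera's viewpoint data.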

Description

SYSTEMS AND METHODS FOR USE IN FILMING

RELATED APPLICATIONS

[0001] This application claims the benefit and priority of U.S. Provisional Application No. 63/510,465, titled “SYSTEMS AND METHODS FOR USE IN SCENE PREVISUALIZATION,” filed June 27, 2023, and U.S. Provisional Application No. 63/606,804, titled “SYSTEM AND METHOD FOR DIGITAL PROP TRACKING,” filed December 6, 2023, each of which is incorporated herein by reference in its entirety.

FIELD OF THE DISCLOSURE

[0002] This disclosure relates generally to filmmaking.

BACKGROUND OF THE DISCLOSURE

[0003] Previsualization, often abbreviated as previs, is a process used in filmmaking, animation, and other visual media industries to create a preliminary visual representation of a planned scene or sequence. It involves creating simplified or rough versions of the intended shots using storyboards, 3D computer graphics, or other visual aids. Alternatives to traditional previs techniques have been developed for scene visualization. Previsualization systems have been developed to allow a director to have a “good enough” shot of a scene with digital assets. Such systems enable the director to visualize the scene with animated (or still) digital assets using compositing techniques. Prior to the development of previsualization systems, directors filmed the scene without the digital asset, and a post-production team (or individual) inserted the digital assets to allow the producer to visualize a shot of the scene with the digital asset.

[0004] Compositing is a process or technique of combining visual elements from separate sources into single images, often to create an illusion that all those elements are parts of the same scene. Today, most, though not all, compositing is achieved through digital image manipulation. All compositing involves replacement of selected parts of an image with other material, usually, but not always, from another image.
In a digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. The software then replaces every pixel within the designated color range with a pixel from another image, aligned to appear as part of the original. Visual effects (sometimes abbreviated as VFX) is the process by which imagery is created or manipulated outside the context of a live-action shot in filmmaking and video production. VFX involves the integration of live-action footage (which may include in-camera special effects) and other live-action footage or computer-generated imagery (CGI) (digital or optics, animals or creatures) which look realistic, but would be dangerous, expensive, impractical, time-consuming, or impossible to capture on film.

[0005] Prop tracking in film and television production (or filmmaking) can refer to a process of tracking a movement, position, and/or orientation of a physical prop within a video scene (e.g., film footage) so that digital assets can be accurately aligned with it in post-production. Props are currently tracked using marker-based tracking systems. Marker-based tracking requires placement of one or more markers on the prop, often high-contrast spheres or cubes, at key points on the prop. The one or more markers can be simple geometric shapes or more complex patterns, depending on the marker-based tracking system.

[0006] During filming, one or more cameras capture the movement of these markers along with the action. In post-production, tracking software analyzes the footage frame by frame. The software is designed to recognize the markers and calculate marker positions and motions in three-dimensional (3D) space. This data is then used to create a “skeleton” or framework that matches the prop's movement. CGI elements can be attached to this skeleton, ensuring they move perfectly in sync with the filmed prop.
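The pixel-replacement compositing described in paragraph [0004] (commonly known as chroma keying) can be sketched as follows. This is a simplified toy example, not the disclosed system: images are modeled as nested lists of RGB tuples, and the `tolerance` parameter is an assumption standing in for the "narrowly defined color range":

```python
def chroma_key_composite(foreground, background, key_color, tolerance=30):
    """Composite two same-sized images: any foreground pixel whose RGB
    channels are all within `tolerance` of `key_color` is replaced by
    the aligned background pixel; all other pixels are kept."""
    composited = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg_px, bg_px in zip(fg_row, bg_row):
            if all(abs(c - k) <= tolerance for c, k in zip(fg_px, key_color)):
                row.append(bg_px)   # pixel is within the keyed color range
            else:
                row.append(fg_px)   # pixel is part of the kept subject
        row and composited.append(row)
    return composited
```

A production compositor would operate on image arrays (and typically in a color space better suited to keying than RGB), but the per-pixel replace-within-a-color-range logic is the same.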
The benefits of marker-based tracking include its accuracy and reliability. However, the downside is that the markers must often be removed from the final footage through a process called “painting out,” which is time-consuming.

SUMMARY OF THE DISCLOSURE

[0007] Various details of the present disclosure are hereinafter summarized to provide a basic understanding. This summary is not an extensive overview of the disclosure and is neither intended to identify certain elements of the disclosure nor to delineate the scope thereof. Rather, the primary purpose of this summary is to present some concepts of the disclosure in a simplified form prior to the more detailed description that is presented hereinafter.

[0008] In an example, a computer-implemented method can include receiving prop spatial data for a prop device in a physical space, receiving digital asset spatial data for a digital asset in a virtual space, updating the spatial data for the digital asset based on the prop spatial data, and updating a position, a movement, and/or an orientation of the digital asset in the virtual space based on the updated spatial