EP-4735134-A1 - LIVE VENUE PERFORMANCE CAPTURE AND VISUALIZATION OVER GAME NETWORK

EP 4735134 A1

Abstract

Systems, methods, and apparatuses disclosed herein can incorporate one or more real-world performers that are performing within a venue into interactive content. These systems, methods, and apparatuses can identify one or more joints or ligaments, for example, left shoulders, right knees, among others, of the one or more real-world performers from an image, or a series of images, of the one or more real-world performers. These joints or ligaments can be represented as one or more performer markers. These systems, methods, and apparatuses can generate one or more three-dimensional models of the one or more real-world performers in a three-dimensional space from the one or more performer markers. These systems, methods, and apparatuses can apply the one or more three-dimensional models of the one or more real-world performers to the one or more virtual characters in the three-dimensional space. These systems, methods, and apparatuses can render the one or more virtual characters from the three-dimensional space into a two-dimensional space of the interactive content to integrate the one or more real-world performers into the interactive content.
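The pipeline described in the abstract (detect performer markers from an image, build a three-dimensional model, apply it to a virtual character, then render into the two-dimensional space of the interactive content) can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the marker names, the dictionary-based character pose, and the pinhole-camera projection are all invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical marker representation; the abstract names "left shoulders"
# and "right knees" as example joints identified from the image(s).
@dataclass
class PerformerMarker:
    name: str
    x: float
    y: float
    z: float  # position in the shared three-dimensional space

def apply_to_character(markers, character_pose):
    """Drive a virtual character's joints from performer markers
    (a stand-in for applying the 3-D model to the virtual character)."""
    for m in markers:
        character_pose[m.name] = (m.x, m.y, m.z)
    return character_pose

def render_to_2d(character_pose, focal_length=1.0):
    """Project each 3-D joint into the 2-D space of the interactive
    content using a simple pinhole-camera perspective divide."""
    return {
        name: (focal_length * x / z, focal_length * y / z)
        for name, (x, y, z) in character_pose.items()
        if z > 0  # joints behind the camera are not rendered
    }

markers = [PerformerMarker("left_shoulder", 0.2, 1.5, 4.0),
           PerformerMarker("right_knee", -0.1, 0.5, 4.0)]
pose = apply_to_character(markers, {})
screen = render_to_2d(pose)  # 2-D joint positions for the content
```

In practice the marker-identification step would come from a pose-estimation model operating on the camera video, and the 3-D-to-2-D step from the interactive content's own renderer; the sketch only shows how the stages compose.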

Inventors

  • POYNTER, BENJAMIN

Assignees

  • Sphere Entertainment Group, LLC

Dates

Publication Date
2026-05-06
Application Date
2024-06-25

Claims (20)

  1. A content server for incorporating a real-world performer within a venue into an interactive content, the content server comprising: a memory that stores a video of the real-world performer; and a processor configured to execute instructions stored in the memory, the instructions when executed by the processor, configuring the processor to: capture movement of the real-world performer from the video, generate a three-dimensional model of the real-world performer that emulates the movement of the real-world performer, apply the three-dimensional model of the real-world performer to a virtual character included within the interactive content to cause the virtual character to emulate the movement of the real-world performer, and provide the virtual character to a portable electronic device to be incorporated into the interactive content.
  2. The content server of claim 1, wherein the instructions when executed by the processor, further configure the processor to receive the video from a camera as the real-world performer is performing within the venue.
  3. The content server of claim 1, wherein the instructions when executed by the processor, configure the processor to capture the movement of the real-world performer with a markerless approach.
  4. The content server of claim 1, wherein the instructions when executed by the processor, further configure the processor to: identify a plurality of performer markers of the real-world performer from the video; and generate the three-dimensional model of the real-world performer from the plurality of performer markers.
  5. The content server of claim 1, wherein the instructions when executed by the processor, configure the processor to: generate the three-dimensional model of the real-world performer in a three-dimensional space, and apply the three-dimensional model of the real-world performer to the virtual character in the three-dimensional space.
  6. The content server of claim 5, wherein the instructions when executed by the processor, further configure the processor to render the virtual character from the three-dimensional space to a two-dimensional space of the interactive content.
  7. The content server of claim 6, wherein the instructions when executed by the processor, configure the processor to provide the virtual character in the two-dimensional space.
  8. A method for incorporating a real-world performer within a venue into an interactive content, the method comprising: accessing, by one or more computer systems, a video of the real-world performer; capturing, by the one or more computer systems, movement of the real-world performer within the venue from the video; generating, by the one or more computer systems, a three-dimensional model of the real-world performer that emulates the movement of the real-world performer; applying, by the one or more computer systems, the three-dimensional model of the real-world performer to a virtual character included within the interactive content to cause the virtual character to emulate the movement of the real-world performer; and providing, by the one or more computer systems, the virtual character to a portable electronic device to be incorporated into the interactive content.
  9. The method of claim 8, wherein the accessing comprises receiving the video from a camera as the real-world performer is performing within the venue.
  10. The method of claim 8, wherein the capturing comprises capturing the movement of the real-world performer with a markerless approach.
  11. The method of claim 8, wherein the generating comprises: identifying a plurality of performer markers of the real-world performer from the video; and generating the three-dimensional model of the real-world performer from the plurality of performer markers.
  12. The method of claim 8, wherein the generating comprises generating the three-dimensional model of the real-world performer in a three-dimensional space, and wherein the applying comprises applying the three-dimensional model of the real-world performer to the virtual character in the three-dimensional space.
  13. The method of claim 12, wherein the providing comprises rendering the virtual character from the three-dimensional space to a two-dimensional space of the interactive content.
  14. The method of claim 13, wherein the providing further comprises providing the virtual character in the two-dimensional space.
  15. A venue for incorporating a real-world performer into an interactive content, the venue comprising: a content server configured to: capture movement of the real-world performer within the venue from a video of the real-world performer, and apply a three-dimensional model of the real-world performer to a virtual character included within the interactive content to cause the virtual character to emulate the movement of the real-world performer; and a plurality of electronic devices configured to: receive the virtual character from the content server, execute a software application having the interactive content, and incorporate the virtual character into the interactive content.
  16. The venue of claim 15, further comprising a camera to capture the video as the real-world performer is performing within the venue, and wherein the content server is further configured to receive the video from the camera.
  17. The venue of claim 15, wherein the content server is configured to capture the movement of the real-world performer with a markerless approach.
  18. The venue of claim 15, wherein the content server is further configured to: identify a plurality of performer markers of the real-world performer from the video; and generate the three-dimensional model of the real-world performer from the plurality of performer markers.
  19. The venue of claim 18, wherein the content server is configured to: generate the three-dimensional model of the real-world performer in a three-dimensional space, and apply the three-dimensional model of the real-world performer to the virtual character in the three-dimensional space.
  20. The venue of claim 19, wherein the content server is further configured to: render the virtual character from the three-dimensional space to a two-dimensional space of the interactive content; and provide the virtual character in the two-dimensional space to the plurality of electronic devices.
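Claim 15 divides the work between a content server and a plurality of electronic devices: the server captures movement and retargets it onto the virtual character, while each device receives the character and incorporates it into its locally running interactive content. The following sketch illustrates that split under loudly stated assumptions: the class names, the JSON message shape, and the in-process delivery loop are all invented for illustration; the patent does not specify a wire format or API.

```python
import json

class ElectronicDevice:
    """Hypothetical stand-in for one of the plurality of electronic
    devices executing a software application with the interactive content."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.scene = {}  # local state of the interactive content

    def incorporate(self, message):
        # Incorporate the received virtual character into the content.
        update = json.loads(message)
        self.scene[update["character"]] = update["pose"]

class ContentServer:
    """Hypothetical content server that provides the retargeted
    virtual character to every registered device in the venue."""
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def provide(self, character, pose):
        # Serialize the character update once, deliver to all devices.
        message = json.dumps({"character": character, "pose": pose})
        for device in self.devices:
            device.incorporate(message)

server = ContentServer()
phone = ElectronicDevice("device-1")
server.register(phone)
server.provide("performer_avatar", {"left_shoulder": [0.05, 0.375]})
```

A real deployment would replace the in-process loop with an actual game-network transport, but the responsibility boundary — server-side capture and retargeting, device-side incorporation — matches the structure of the claim.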

Description

LIVE VENUE PERFORMANCE CAPTURE AND VISUALIZATION OVER GAME NETWORK

BACKGROUND

[0001] Live performances, such as concerts or theatre to provide some examples, are typically one-way interactions in which one or more real-world performers present for an audience. In a traditional audience-performer relationship, the interaction between the performer and the audience flows only from the performer to the audience. Even when interactions occur from the audience to the performer, these are typically minimal interactions on the part of the audience. They can include an audience chanting in response to a request by the performer, an audience singing lyrics along with a song being performed, an audience holding lighters or glow sticks to illuminate a venue, an audience clapping in response to a performance, an audience filling out a questionnaire following the show, etc. Audience members who choose to witness a live performance consume but rarely participate, which leaves the value of attending a live performance with more to be desired.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] The present disclosure is described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. In the accompanying drawings:

[0003] FIG. 1 graphically illustrates an exemplary venue for integrating an exemplary performer into an exemplary interactive content in accordance with some exemplary embodiments of the present disclosure;

[0004] FIG. 2 graphically illustrates an exemplary operational control flow that can be implemented within the exemplary venue to integrate the exemplary performer into the exemplary interactive content in accordance with some exemplary embodiments of the present disclosure;

[0005] FIG. 3 graphically illustrates another exemplary operational control flow that can be implemented within the exemplary venue to integrate the exemplary performer into the exemplary interactive content in accordance with some exemplary embodiments of the present disclosure; and

[0006] FIG. 4 illustrates a simplified block diagram of an exemplary computer system that can be implemented within the exemplary model processing system according to some exemplary embodiments of the present disclosure.

[0007] The present disclosure will now be described with reference to the accompanying drawings.

DETAILED DESCRIPTION

[0008] The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. The present disclosure may repeat reference numerals and/or letters in the various examples. This repetition does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It is noted that, in accordance with the standard practice in the industry, features are not drawn to scale. In fact, the dimensions of the features may be arbitrarily increased or reduced for clarity of discussion.

OVERVIEW

[0009] Systems, methods, and apparatuses disclosed herein can incorporate one or more real-world performers that are performing within a venue into interactive content. These systems, methods, and apparatuses can identify one or more joints or ligaments, for example, left shoulders, right knees, among others, of the one or more real-world performers from an image, or a series of images, of the one or more real-world performers. As described in further detail below, these joints or ligaments can be represented as one or more performer markers. These systems, methods, and apparatuses can generate one or more three-dimensional models of the one or more real-world performers in a three-dimensional space from the one or more performer markers. These systems, methods, and apparatuses can apply the one or more three-dimensional models of the one or more real-world performers to the one or more virtual characters in the three-dimensional space. These systems, methods, and apparatuses can render the one or more virtual characters from the three-dimensional space into a two-dimensional space of the interactive content to integrate the one or more real-world performers into the interactive content.

EXEMPLARY VENUE FOR INTEGRATING AN EXEMPLARY PERFORMER INTO AN EXEMPLARY INTERACTIVE CONTENT

[0010] FIG. 1 graphically illustrates an exemplary venue for integrating an exemplary performer into an exemplary interactive content in accordance with some exemplary embodiments of the present disclosure. In the exemplary embodiment illustrated in FIG. 1, a venue