EP-4572870-B1 - SHOW EFFECT SYSTEM FOR AMUSEMENT PARK ATTRACTION SYSTEM

EP 4572870 B1

Inventors

  • ALLINSON, Jacob David
  • TRAYNOR, Mark James
  • JORDAN, Robert Michael
  • WILLIAMS, Thomas Owen
  • CHAN, Eric To
  • CARSILLO, Peter

Dates

Publication Date
2026-05-13
Application Date
2023-08-15

Claims (15)

  1. A system (60) for an attraction, comprising: a display (62) configured to present images (64); and a control system (70) configured to perform operations, the operations comprising: receiving (212) imagery (77) of a real-world environment (101) from a sensor configured to capture imagery; identifying (214) visual characteristics of real-world objects in the imagery; generating (216) image data that includes a virtual object and an additional virtual object that corresponds to the visual characteristics of the real-world objects; and transmitting (218) the image data to the display (62).
  2. The system (60) of claim 1, wherein the virtual object comprises a virtual cloud (104), and wherein the additional virtual object comprises a virtual sun (106).
  3. The system (60) of claim 2, wherein the imagery (77) comprises a first imagery (130), and the control system (70) is configured to perform operations comprising: receiving a second imagery (132) of the real-world environment (101); determining that second visual characteristics of the real-world objects in the second imagery (132) are different from the visual characteristics of the real-world objects in the first imagery (130); and generating updated image data by modifying the image data based on the second visual characteristics.
  4. The system (60) of claim 3, wherein the real-world objects comprise clouds and foliage, wherein the image data includes virtual clouds (104) that correspond to the visual characteristics of the clouds, and wherein the image data includes virtual foliage that corresponds to the visual characteristics of the foliage.
  5. The system (60) of claim 4, wherein the visual characteristics of the clouds comprise a class of the clouds, a population density of the clouds, or both.
  6. The system (60) of claim 1, comprising a ride vehicle (54), wherein the display (62) is configured to present the images to guests in the ride vehicle (54).
  7. The system (60) of claim 1, wherein the control system (70) is further configured to: identify a population density associated with one or more real-world clouds depicted within the imagery (77) of the real-world environment (101); and identify a cloud class of a plurality of cloud classes, wherein the cloud class is associated with the one or more real-world clouds, wherein the generated image data comprises one or more virtual clouds that visually correspond to the one or more real-world clouds based on the population density and the cloud class.
  8. A non-transitory, computer-readable medium comprising instructions that, when executed by processing circuitry (74), cause the processing circuitry (74) to: receive (212) captured imagery (77) of a real-world environment (101) from a sensor configured to capture imagery; identify (214) visual characteristics of real-world objects in the captured imagery (77); generate (216) image data that includes a virtual object and an additional virtual object that corresponds to the visual characteristics of the real-world objects; and output the image data to a display (62) configured to present images (102).
  9. The non-transitory, computer-readable medium of claim 8, wherein the processing circuitry (74) is configured to identify a first class or a first population density of the real-world object in the captured imagery (77), and the virtual object corresponds to the first class or the first population density.
  10. The non-transitory, computer-readable medium of claim 9, wherein the instructions, when executed by the processing circuitry (74), cause the processing circuitry (74) to: receive additional captured imagery; identify a second class or a second population density of the real-world object in the additional captured imagery; generate updated image data having an updated virtual object corresponding to the second class or the second population density and maintaining appearance of the additional virtual object; and output the updated image data.
  11. The non-transitory, computer-readable medium of claim 9, wherein the instructions, when executed by the processing circuitry (74), cause the processing circuitry (74) to: receive training data (272) that associates input imagery with a corresponding class or a corresponding population density; and generate (274) a machine learning model based on the training data.
  12. The non-transitory, computer-readable medium of claim 10, wherein the instructions, when executed by the processing circuitry (74), cause the processing circuitry (74) to: receive (276) additional input imagery; use (278) the machine learning model to identify an additional corresponding class or an additional corresponding population density of the additional input imagery; receive (280) feedback indicative of whether the additional corresponding class or the additional corresponding population density is accurately identified; and update (282) the machine learning model based on the feedback.
  13. The non-transitory, computer-readable medium of claim 11, wherein the instructions, when executed by the processing circuitry (74), cause the processing circuitry (74) to use the machine learning model to identify the first class or the first population density of the real-world object in the captured imagery.
  14. The non-transitory, computer-readable medium of claim 9, wherein the instructions, when executed by the processing circuitry (74), cause the processing circuitry (74) to identify a presence of precipitation in a real-world environment (101); and wherein the image data comprises additional virtual objects corresponding to the precipitation.
  15. A method (210) of operating an attraction comprising: receiving (212) imagery (77) of a real-world environment (101) from a sensor configured to capture imagery; identifying (214) visual characteristics of real-world objects in the imagery (77); generating (216) image data that includes a virtual object and an additional virtual object that corresponds to the visual characteristics of the real-world objects; and transmitting (218) the image data to a display (62).
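The claimed control flow (receiving (212), identifying (214), generating (216), transmitting (218)) can be sketched in code. The sketch below is illustrative only: the cloud class names, the density-to-count mapping, and the helper signatures (`identify_characteristics`, `generate_image_data`, `run_show_effect`) are assumptions, not taken from the patent, and a real implementation of the classifier would likely use the machine learning model of claims 11 to 13.

```python
from dataclasses import dataclass

# Hypothetical cloud classes; the claims recite only "a plurality of
# cloud classes" without enumerating them.
CLOUD_CLASSES = ["cumulus", "stratus", "cirrus"]


@dataclass
class VisualCharacteristics:
    cloud_class: str           # class of the real-world clouds (claim 5)
    population_density: float  # e.g. fraction of sky covered (claim 5)


def identify_characteristics(imagery) -> VisualCharacteristics:
    """Placeholder for the classification of claims 7 and 13; a real
    system might apply the trained machine learning model here."""
    ...


def generate_image_data(chars: VisualCharacteristics) -> dict:
    # Virtual clouds mirror the real clouds' class and population
    # density (claim 7); the virtual sun is an additional virtual
    # object whose appearance is independent of the clouds (claim 2).
    n_virtual = round(chars.population_density * 10)  # assumed mapping
    return {
        "virtual_clouds": [{"class": chars.cloud_class}] * n_virtual,
        "virtual_sun": {"visible": True},
    }


def run_show_effect(sensor, display):
    imagery = sensor.capture()                 # receiving (212)
    chars = identify_characteristics(imagery)  # identifying (214)
    image_data = generate_image_data(chars)    # generating (216)
    display.present(image_data)                # transmitting (218)
```

Claims 3 and 10 extend this loop: on each new frame, the characteristics are re-identified and the image data is regenerated only for the objects whose characteristics changed, while the appearance of the additional virtual object (e.g. the virtual sun) is maintained.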

Description

BACKGROUND

This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

Throughout amusement parks and other entertainment venues, special effects can be used to help immerse guests in the experience of a ride or attraction. Immersive environments may include three-dimensional (3D) props and set pieces, robotic or mechanical elements, and/or display surfaces that present media. In addition, the immersive environment may include audio effects, smoke effects, and/or motion effects. Thus, immersive environments may include a combination of dynamic and static elements.

However, implementation and operation of special effects may be complex. For example, it may be difficult to operate certain elements of the special effects in a desirable manner to create the immersive environment. With the increasing sophistication and complexity of modern ride attractions, and the corresponding increase in expectations among guests, improved and more creative attractions are desirable, including ride attractions having special effects to provide the immersive environment.

US 9 933 624 B1 describes a system for providing virtual reality imagery to a rider of an amusement park attraction. The system includes a headgear piece for securing a viewing screen to the head of the rider. The imagery displayed on the screen is associated with data and imagery from around the user for integration into a virtual reality experience.
BRIEF DESCRIPTION

According to the present invention there is provided a system for an attraction according to Claim 1, a non-transitory, computer-readable medium according to Claim 8, and a method of operating an attraction according to Claim 15. Preferred embodiments of the invention are defined in Claims 2 to 7 and 9 to 14. In the following description, embodiments will be described.

DRAWINGS

These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

FIG. 1 is a schematic diagram of an embodiment of an amusement park system, in accordance with an aspect of the present disclosure;
FIG. 2 is a schematic diagram of an embodiment of an amusement park system, in accordance with an aspect of the present disclosure;
FIG. 3 is a schematic diagram of an embodiment of a show effect system configured to present an image, in accordance with an aspect of the present disclosure;
FIG. 4 is a schematic diagram of an embodiment of a show effect system configured to present an image, in accordance with an aspect of the present disclosure;
FIG. 5 is a schematic diagram of an embodiment of a show effect system configured to present an image, in accordance with an aspect of the present disclosure;
FIG. 6 is a flowchart of a method or process for operating a show effect system to present an image, in accordance with an aspect of the present disclosure;
FIG. 7 is a flowchart of a method or process for operating a show effect system to present an image, in accordance with an aspect of the present disclosure; and
FIG. 8 is a flowchart of a method or process for operating a show effect system to present an image, in accordance with an aspect of the present disclosure.
DETAILED DESCRIPTION

When introducing elements of various embodiments of the present disclosure, the articles "a," "an," and "the" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to "one embodiment" or "an embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

One or more specific embodiments of the present disclosure will be described below. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having