US-20260129266-A1 - System For Synchronization of Captioning to Smart Devices

Abstract

An automatic personal enhancement system that is tailored to each user's special access needs. The system has a multimedia presentation device; an onsite node having a user protocol time code message protocol, the onsite node sending a real-time time-code to the system over a private LAN; a transmission device to transmit information to the onsite node; a fleet management subsystem; a content database server; a smart device; and a web server. The web server has instructions for: identifying a user entering a venue; connecting to the show control subsystem; selecting a language for multimedia presentations from the show control subsystem; tracking the user's smart device with the personal enhancement system for automatically playing multimedia associated with the location in the venue in proximity to the user's smart device; and stopping the tracking and disconnecting the personal enhancement system from the user's smart device after the user exits the venue.

Inventors

  • Darwin Gilmore III
  • Jesse Garrison

Assignees

  • Darwin Gilmore III
  • Jesse Garrison

Dates

Publication Date
2026-05-07
Application Date
2024-04-04

Claims (19)

  1. An automatic personal enhancement system that is tailored to each user's special access needs comprising: a. at least one presentation device, wherein the at least one presentation device is a multimedia device; b. at least one onsite node operably connected to the at least one presentation device, wherein the at least one onsite node comprises a user protocol time code message protocol, and wherein the at least one onsite node sends a real-time time-code to the system over a private local area network; c. at least one transmission device operably connected to the at least one onsite node, wherein the at least one transmission device is configured to transmit information to the onsite node wired, wirelessly, or both wired and wirelessly; d. a fleet management subsystem that is operably connected to the at least one transmission device; e. at least one content database server that is operably connected to the fleet management subsystem; f. at least one smart device operably connected to at least one web server; and g. the at least one web server operably connected to the fleet management subsystem and the at least one content database server, wherein the at least one web server comprises instructions operable on one or more than one processor for: 1) identifying a user entering a venue equipped with the system; 2) connecting to the show control subsystem; 3) selecting a language for multimedia presentations from the show control subsystem; 4) tracking the user's smart device with the personal enhancement system for automatically playing multimedia associated with the location in the venue in proximity to the user's smart device; and 5) stopping the tracking and disconnecting the personal enhancement system from the user's smart device after the user exits the venue.
  2. The system of claim 1, wherein the at least one onsite node comprises instructions operable on one or more than one processor to send a time code to a specific local internet protocol address for the user to access on the one or more than one smart device.
  3. The system of claim 2, wherein the at least one onsite node further comprises instructions to: a. rate-limit and standardize messages and transmit the messages to an application program interface; and b. transmit an industry time code from preexisting onsite nodes to a cloud-based application program interface hosted on at least one presentation device, to synchronize the system across multiple physical locations.
  4. The system of claim 1, wherein the fleet management subsystem comprises instructions operable on one or more than one processor for a secure container-based technology stack that enables a subtext manager to deploy, manage, and scale fleets of internet of things devices.
  5. The system of claim 1, wherein the system further comprises an application program interface for receiving time codes, interfacing with the at least one content database, and outputting multimedia to the user's smart device.
  6. The system of claim 1, wherein the system further comprises instructions operable on one or more than one processor for at least one presentation device-less application program interface to connect users to the system without additional software added to the at least one smart device, and wherein the users connect to the at least one web server by scanning a QR code, a bar code, an RFID tag, or an NFC device, and select a language that opens a streaming connection to the application program interface and the show control subsystem.
  7. The system of claim 1, wherein the at least one onsite node rate-limits and standardizes messages and transmits them to the application program interface on the at least one web server.
  8. The system of claim 1, wherein when the application program interface receives a new time-code, a multimedia file stored in the at least one content database server is propagated to the user's connected smart device, wherein the multimedia file is coincident with the venue location.
  9. The system of claim 1, wherein the system is automatic and operates without an operator and delivers multimedia content enhancements for both impaired and non-impaired users, along with translation services, directly to the user's smart device without the need of downloading an extra application to the smart device.
  10. The system of claim 1, wherein connecting to the show control subsystem is made by the user scanning a code provided by the venue with a user's smart device.
  11. The system of claim 1, wherein the connection to the show control subsystem is wireless.
  12. The system of claim 1, wherein other user assistance options are made available so that any user is able to maximize the personal experience at the venue, where the other user assistance options are selected from the group consisting of closed captioning, volume, and vibration.
  13. The system of claim 1, wherein a direct connection is made between the user's smart device and the user's hearing aids or visual aids.
  14. The system of claim 1, wherein the user can manually activate or deactivate the automatic personalized enhancements.
  15. The system of claim 1, wherein prior to exiting the venue, gifts, tickets, discounts, and other offers can be displayed to the user on the smart device for immediate purchase or redemption at a later date.
  16. The system of claim 1, wherein impaired users that are deaf, hard of hearing, or blind can receive emergency alerts using the show control subsystem presenting auditory, vibratory, and visual alerts.
  17. The system of claim 1, wherein both impaired and non-impaired users are informed of any emergency situation and where to go or how to proceed, as an additional communication method for the venue.
  18. The system of claim 1, wherein the system further comprises instructions for: a. interpreting timing based solely on listening to an incoming audio feed; b. synchronizing a media source with subtitles or other associated content without the need for onsite node programming or networking, where programming is not feasible due to complexity or cost, where security is a concern due to legacy media, and where digital time codes are not available; c. ingesting target media in advance; d. dividing the target media into snippets; e. creating metadata from the snippets; f. uploading the metadata to a data server; and g. listening to incoming audio over an analog input port, the at least one onsite node downloading and comparing the incoming audio to the snippets and metadata; wherein the system regularly samples an incoming audio stream and compares the sample to a library of snippets to develop a confidence rating using a cross-correlation algorithm, and wherein the system outputs a time code of the moment of highest correlation across all of the snippets.
  19. A method for using the automatic personal enhancement system that is tailored to each user's special access needs, the method comprising the steps of: a. entering a venue equipped with the system; b. connecting to the show control subsystem, wherein the connection can be made by scanning a code provided by the venue with a user's smart device; c. selecting a language for multimedia presentations from the show control subsystem; d. tracking the user's smart device with the personal enhancement system for automatically playing multimedia associated with the location in the venue in proximity to the user's smart device; and e. exiting the venue, wherein tracking is stopped and the personal enhancement system disconnects from the user's smart device.
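The five web-server steps recited in claims 1 and 19 describe a session lifecycle: identify the user on entry, connect to the show control subsystem, select a language, play location-associated multimedia while the smart device is tracked, and disconnect on exit. A minimal sketch of that lifecycle follows; the class and method names are invented for illustration and do not appear in the application.

```python
# Illustrative sketch of the claim 1 / claim 19 session lifecycle.
# All names (EnhancementSession, on_location, etc.) are assumptions.
class EnhancementSession:
    def __init__(self, user_id, venue):
        self.user_id = user_id        # step 1: user identified on entry
        self.venue = venue
        self.language = None
        self.connected = False
        self.played = []

    def connect(self, language):
        self.connected = True         # step 2: join the show control subsystem
        self.language = language      # step 3: language selection

    def on_location(self, location, media_by_location):
        # step 4: while tracked, auto-play media tied to the nearby location
        if self.connected and location in media_by_location:
            self.played.append(media_by_location[location])

    def exit_venue(self):
        self.connected = False        # step 5: stop tracking and disconnect
```

Each step maps directly onto one numbered instruction in the claim, which is why a single small state object suffices for the sketch.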
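Claim 3 has the onsite node rate-limit and standardize time-code messages before transmitting them to an application program interface. One common way to realize that behavior is a minimum-interval throttle combined with a fixed message shape; the sketch below assumes that approach, and the class name and field names are illustrative, not from the application.

```python
# Hedged sketch of claim 3's rate-limit-and-standardize step.
import time

class TimeCodeForwarder:
    def __init__(self, min_interval_s=0.5, now=time.monotonic):
        self.min_interval_s = min_interval_s  # at most one message per interval
        self._now = now                       # injectable clock for testing
        self._last_sent = None
        self.outbox = []                      # stand-in for the API transmit call

    def submit(self, raw_time_code, source_node):
        """Standardize and forward a time code, unless rate-limited."""
        t = self._now()
        if self._last_sent is not None and t - self._last_sent < self.min_interval_s:
            return False                      # suppressed by the rate limit
        message = {                           # standardized shape (an assumption)
            "time_code": str(raw_time_code),
            "node": source_node,
            "sent_at": t,
        }
        self.outbox.append(message)
        self._last_sent = t
        return True
```

A real node would replace `outbox.append` with the actual API call; the throttle and normalization logic would be unchanged.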
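Claim 6 describes an app-less entry flow: the user scans a QR code (or bar code, RFID tag, or NFC device), reaches the web server, and selects a language that opens a streaming connection. Assuming the scanned code resolves to a web URL carrying a venue identifier — the URL scheme and the `venue` parameter name are assumptions, not from the application — the server-side handoff might look like:

```python
# Sketch of claim 6's scan-to-connect step; URL format is assumed.
from urllib.parse import urlparse, parse_qs

def session_params(scanned_url, chosen_language):
    """Extract the venue id from a scanned URL and pair it with the
    user's chosen language, ready to open the streaming connection."""
    parts = urlparse(scanned_url)
    venue = parse_qs(parts.query).get("venue", [None])[0]
    if venue is None:
        raise ValueError("scanned code did not carry a venue id")
    return {"venue": venue, "language": chosen_language}
```

Because the entry point is an ordinary URL, the flow needs no software installed on the smart device, which is the point of the "presentation device-less" interface in the claim.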
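Claim 8 states that when the application program interface receives a new time-code, the multimedia file coincident with that moment is propagated to the user's connected smart device. A minimal sketch of the lookup step, assuming a cue table of (start time, file) pairs sorted by start time — the table format is an assumption for illustration:

```python
# Sketch of claim 8's time-code-to-file lookup over a sorted cue table.
def file_for_time_code(cues, time_code_s):
    """cues: list of (start_seconds, filename) sorted by start time.
    Returns the file whose cue most recently started, or None."""
    active = None
    for start_s, filename in cues:
        if start_s <= time_code_s:
            active = filename       # this cue has started; remember it
        else:
            break                   # later cues have not started yet
    return active
```

The propagation step (pushing `active` to each connected device) is omitted; any push mechanism over the existing streaming connection would serve.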
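Claim 18 describes audio-only synchronization: the system regularly samples incoming audio, compares the sample against a library of pre-ingested snippets using a cross-correlation algorithm, and outputs the time code of the moment of highest correlation. The pure-Python sketch below illustrates just the matching step on raw sample lists; the function names and the (time code, samples) library format are assumptions for illustration, not the application's implementation.

```python
# Hedged sketch of claim 18's snippet matching via normalized cross-correlation.
import math

def cross_correlation(sample, snippet):
    """Peak normalized cross-correlation of `sample` slid across `snippet`."""
    n = len(sample)
    s_norm = math.sqrt(sum(x * x for x in sample)) or 1.0
    best = 0.0
    for offset in range(len(snippet) - n + 1):
        window = snippet[offset:offset + n]
        w_norm = math.sqrt(sum(x * x for x in window)) or 1.0
        score = sum(a * b for a, b in zip(sample, window)) / (s_norm * w_norm)
        best = max(best, score)
    return best

def best_time_code(sample, library):
    """library: list of (start_time_code_seconds, snippet_samples).
    Returns (time_code, confidence) for the best-matching snippet."""
    scored = [(cross_correlation(sample, snip), tc) for tc, snip in library]
    confidence, time_code = max(scored)
    return time_code, confidence
```

A production system would use FFT-based correlation or audio fingerprints for speed, but the confidence-rating idea in the claim is the same.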

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This Application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 63/448,673, filed on Apr. 5, 2023, the contents of which are incorporated herein by reference in their entirety. This Application also claims the benefit of PCT Patent Application Serial No. PCT/US 24/23163, filed on Apr. 4, 2024, the contents of which are incorporated herein by reference in their entirety.

FIELD OF THE INVENTION

The present invention is in the technical field of enhancement systems, and more particularly an automatic personal enhancement system that is tailored to each user's special access needs using hardware that a user already possesses and is familiar with, without additional software.

BACKGROUND

Currently, when a user is at an immersive entertainment experience, such as a projection-mapped site activation or museum installation, many individual audience members do not have sufficient accessibility to the attraction. One of the most common barriers is language. There are currently many different concepts about how to handle closed captions in entertainment environments. Many systems require specific hardware placed near the end user, such as a teleprompter or a custom handheld device. There are a few smart device applications that attempt to provide a solution, but the point of entry for the user is more difficult. In addition, many of the application-based solutions do not connect to a show control network in order to eliminate the need for an operator. This increases the cost and complexity of the proposed solution.
Additionally, pre-existing subtitle systems for captioning or other user enhancements require: 1) displaying text over the content, referred to as open enhancement; 2) an investment in specific compatible hardware, such as screens, headsets, etc.; or 3) having the user download an application that they are unfamiliar with and must learn to use before accessing any enhancements. Moreover, many of the current systems require handing out custom hardware to guests. This is wasteful of the resources needed to hand out, recover, and clean the hardware between each use. Also, the hardware solutions are generally not upgradeable, and the hardware ages out quickly with advancements in electronics. This is cost prohibitive for most venues and can be confusing for users. Downloading an application can be both intimidating and frustrating for many users. Therefore, there is a need for an automatic personal enhancement system that is tailored to each user's special access needs using hardware that a user already possesses and is familiar with, without additional software, overcoming the limitations of the prior art.

SUMMARY

The present invention overcomes the limitations of the prior art by providing an automatic personal enhancement system that is tailored to each user's special access needs. The system has at least one multimedia presentation device and at least one onsite node that has a user protocol time code message protocol and that sends a real-time time-code to the system over a private local area network. There is also at least one transmission device operably connected to the at least one onsite node, where the at least one transmission device is configured to transmit information wired, wirelessly, or both wired and wirelessly. A fleet management subsystem is also provided and is operably connected to the at least one transmission device. At least one content database server is also operably connected to the fleet management subsystem.
At least one web server is operably connected to the fleet management subsystem and the at least one content database server. The at least one web server has instructions operable on one or more than one processor for: 1) identifying a user entering a venue equipped with the system; 2) connecting to the show control subsystem; 3) selecting a language for multimedia presentations from the show control subsystem; 4) tracking the user's smart device with the personal enhancement system for automatically playing multimedia associated with the location in the venue in proximity to the user's smart device; and 5) exiting the venue, wherein tracking is stopped and the personal enhancement system disconnects from the user's smart device. At least one user smart device is operably connected to the at least one web server. The system also has an application program interface for receiving time codes, interfacing with the at least one content database, and outputting multimedia to the user's smart device. The system further comprises instructions for at least one presentation device-less application program interface to connect users to the system without additional software added to the at least one smart device, and wherein the users connect to the at least one web server by scanning a QR code, a bar code, an RFID tag, or an NFC device. The user can then select a language that opens a streaming connection to the application program interface and the show control subsystem.