JP-3255721-U - Immersive stage performance smart subtitle device


Abstract

[Problem] To provide a smart subtitling system that enhances emotional engagement on stage, addressing the lack of support for hearing-impaired, dyslexic, and multilingual audiences in theaters and strengthening the interactive integration of subtitles with the performance, thereby improving the audience's artistic experience and emotional participation.

[Solution] A stage emotion-immersive smart subtitling device 100, applicable to theatrical performance venues, includes: a creative-team input side that receives language settings, emotions, and cognitive needs from creative team members; a smart subtitle control center unit, connected to the creative-team input side, that translates according to the subtitle language selected and input by the creative team and processes the subtitle content and emotion annotations; an emotion detection/action recognition module, connected to the smart subtitle control center unit, that determines the performance state from a pre-configured database of performance actions and emotions; a dynamic adaptive font processing module, connected to the emotion detection/action recognition module, that performs dyslexia-friendly font conversion, personalized layout, and dynamic color adjustment based on the expressive needs and content of the work; a low-latency synchronous transmission unit that transmits subtitle information in synchrony with stage lighting, sound effects, and performance actions; and a stage real-time display device, connected to the low-latency synchronous transmission unit, that displays smart subtitles integrated with the performance.

[Selection Diagram] Figure 1

Inventors

  • 黄舒郁
  • 黄奕勳

Assignees

  • 傳郁國際創智文化有限公司

Dates

Publication Date
2026-05-07
Application Date
2025-12-12

Claims (10)

  1. A stage emotion-immersive smart subtitling device applicable to theatrical performance venues, comprising: a creative-team input side that receives language settings and audience cognitive needs from the creative team; a smart subtitle control center unit, connected to the creative-team input side, that translates the performers' language according to the subtitle language selected and input by the creative team, generates subtitle content, and adds emotion annotations; an emotion detection and motion recognition module, connected to the smart subtitle control center unit, that determines the performance state from a pre-configured database of performance actions and emotions; a dynamic adaptive font processing module, connected to the emotion detection and motion recognition module, that performs dyslexia-friendly font conversion, personalized layout, and dynamic color adjustment based on the expressive needs and content of the work; a low-latency synchronous transmission unit that transmits subtitle information in synchrony with stage lighting, sound effects, and performance actions; and a stage real-time display device, connected to the low-latency synchronous transmission unit, that displays smart subtitles integrated with the performance.
  2. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the smart subtitle control center unit provides emotion annotations for subtitles based on the performer's tone, intonation, and emotion.
  3. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the emotion detection and motion recognition module includes a machine learning algorithm trained to recognize motion and audio data during the rehearsal period.
  4. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the dynamic adaptive font processing module uses a font designed for people with dyslexia.
  5. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the dynamic adaptive font processing module adjusts font weight, size, or color contrast according to the semantic importance of the content.
  6. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the low-latency synchronous transmission unit has a data transmission capability with a delay of less than 50 milliseconds and exchanges data via a standard such as 5G, Wi-Fi® 6, or Bluetooth® 5.0 or later.
  7. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the stage real-time display device includes multiple block display modules, can automatically change the subtitle display position according to the performance scene, and employs a transparent OLED or projection-type display module to reduce visual obstruction.
  8. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the smart subtitle control center unit outputs translations of the subtitles in synchrony.
  9. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the subtitle content displayed on the stage real-time display device includes stage-direction information other than dialogue, the stage-direction information including emotional descriptions, environmental sound cues, and physical movement notes.
  10. The stage emotion-immersive smart subtitling device according to claim 1, characterized in that the theatrical performance venues include theater, concerts, dance performances, opera, immersive installation art, and other live performance venues.
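The claims above describe an architecture rather than an implementation. As a rough illustration of how claims 4 through 6 might fit together, the following Python sketch maps an emotion annotation and a semantic-importance score to font styling and checks the 50-millisecond latency budget. All names, data structures, and thresholds here are hypothetical illustrations, not taken from the patent; OpenDyslexic is cited only as one example of a dyslexia-oriented typeface.

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed pipeline; the patent specifies
# behavior (claims 4-6), not this code.

@dataclass
class SubtitleEvent:
    text: str
    emotion: str        # annotation from the smart subtitle control center unit
    importance: float   # semantic importance in [0, 1] (claim 5)

def style_for(event: SubtitleEvent) -> dict:
    """Claim 5: adjust weight, size, or contrast by semantic importance."""
    weight = "bold" if event.importance >= 0.7 else "regular"
    size_pt = 24 + round(12 * event.importance)   # 24-36 pt, illustrative range
    # Claim 4: render in a dyslexia-friendly typeface (example choice).
    return {"font": "OpenDyslexic", "weight": weight,
            "size_pt": size_pt, "emotion_tag": event.emotion}

# Claim 6: the transmission unit must deliver within a 50 ms budget.
LATENCY_BUDGET_MS = 50

def within_budget(sent_ms: float, shown_ms: float) -> bool:
    return (shown_ms - sent_ms) < LATENCY_BUDGET_MS

event = SubtitleEvent("The storm breaks!", emotion="tense", importance=0.9)
print(style_for(event))
print(within_budget(1000.0, 1042.5))
```

A real system would drive these decisions from the rehearsal-trained recognition model of claim 3; the sketch only shows where its outputs (emotion tag, importance score) would plug in.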

Description

This invention relates to smart assistive technology applicable to theatrical performances, and more particularly to a stage-immersive smart subtitling system that interacts with and integrates into live performances, providing a multilingual, barrier-free viewing experience that meets the needs of hearing-impaired, dyslexic, and multilingual or multicultural audiences.

Conventional theater subtitling, common in opera, musicals, and foreign-language plays, provides only text translation and typically uses static subtitles fixed above or beside the stage. For audiences who are hearing-impaired or dyslexic, however, dialogue text alone cannot convey the emotions, tone, and rhythm of the performance, making a complete artistic experience difficult. Furthermore, many existing subtitle systems use standard fonts and layouts without any adaptation for dyslexia, which increases the reading burden and detracts from the theatrical experience. Conventional subtitling systems also lack real-time interaction with stage elements such as lighting, stage design, and performer movements, creating discrepancies between subtitles and the performance that diminish overall immersion and artistic beauty. According to the International Association for Performing Arts and Culture (IPAC), many hearing-impaired audience members cannot grasp the emotions and subtle nuances of a performance even with subtitles, and the participation of multilingual audiences and cultural exchange are not effectively supported.

A stage-oriented, emotionally immersive smart subtitling system is therefore needed to solve the aforementioned problems and improve audience immersion and emotional comprehension. Given that these problems of the prior art have not been effectively resolved or overcome, the inventor files this utility model application to solve them.
Figure 1 is a block diagram of the stage-immersive smart subtitling device according to this utility model. Figure 2 is a diagram showing the interaction and synchronization between the subtitles and the stage according to this utility model.

Embodiments of this utility model are described in detail below with reference to the drawings. The drawings are for illustration and supplement the specification; they do not represent actual proportions or precise arrangements after implementation. It goes without saying, therefore, that the scope of the claims in an actual implementation should not be limited by the proportions or arrangements shown in the attached drawings.

As shown in Figures 1 and 2, an embodiment of this utility model provides a stage-immersive smart subtitling device 100 applicable to theatrical performance venues, comprising: a creative-team input side 110 that receives language settings from the creative team and the cognitive needs of the audience; a smart subtitle control center unit 120, connected to the creative-team input side 110, that performs translation according to the subtitle language selected and input by the creative team and processes the subtitle content and emotion annotations; an emotion detection/action recognition module 130, connected to the smart subtitle control center unit 120, that determines the performance state from a pre-configured database of performance actions and emotions; a dynamic adaptive font processing module 140, connected to the emotion detection/action recognition module 130, that performs dyslexia-friendly font conversion, personalized layout, and dynamic color adjustment based on the expressive needs and content of the work; a low-latency synchronous transmission unit 150 that transmits subtitle information in synchrony with stage lighting, sound effects, and performance actions; and a stage real-time display device 160, connected to the low-latency synchronous transmission unit 150, that displays smart subtitles integrated with the performance.

As shown in Figures 1 and 2, in this embodiment the smart subtitle control center unit 120 provides emotion annotations for subtitles based on the performer's tone, intonation, and emotion. As shown in Figures 1 and 2, in this embodiment the emotion detection and motion recognition module 130 includes a machine learning algorithm trained to recognize motion and voice data during the rehearsal period. As shown in Figures 1 and 2, in this embodiment the dynamic adaptive font processing module 140 uses a font designed for people with dyslexia. As shown in Figures 1 and 2, in the embodiment of