JP-7856907-B2 - Systems and computer programs
Inventors
- 奥山 幹樹
- 川上 智司
- 田中 大将
- 津原 一成
- 北口 里英
- 谷村 駿也
- 己波(保田) 祐子
Assignees
- 株式会社カプコン (Capcom Co., Ltd.)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-10-08
Claims (7)
- A system that causes a computer to perform the following steps: a display step of displaying a predetermined image on a display device; a capturing step of capturing, with a predetermined imaging device, the predetermined image displayed on the display device; a recognition step of performing image recognition processing on the captured predetermined image; a setting step of setting visual information of a user operating a predetermined terminal device; an addition step of performing additional processing using a result of the image recognition processing, the additional processing being processing that generates an auxiliary image by adding new information supporting the predetermined image, based on the visual information and the predetermined image; a superimposition step of displaying the auxiliary image on the terminal device superimposed on the predetermined image; and a determination step of determining whether the predetermined image is within the user's field of view and whether the predetermined image is within the field of view of the imaging device, wherein in the superimposition step, if it is determined that the predetermined image is not within the user's field of view and/or that the predetermined image is not within the field of view of the imaging device, the content of the auxiliary image is changed. The system.
- The aforementioned predetermined image is an image related to a video game; in the image recognition processing, a determination based on the predetermined image is performed in accordance with the progress of the video game; and in the aforementioned addition step, the additional processing is performed based on that determination. The system according to claim 1.
- The aforementioned visual information is information relating to the user's visual acuity (hereinafter referred to as "visual acuity information"), indicating whether or not the user can visually recognize the predetermined image. In the aforementioned addition step, if it is determined based on the visual acuity information that the user cannot see the predetermined image, the auxiliary image is generated by enlarging an image relating to at least part of the information of the captured predetermined image. The system according to claim 1.
- The aforementioned visual information is information indicating, based on predetermined information relating to the user's vision, whether or not the predetermined image can be visually recognized (hereinafter referred to as "visual information"). In the aforementioned addition step, if it is determined based on the visual information that the user cannot see the predetermined image, an image related to the captured predetermined image is generated as the auxiliary image. The system according to claim 1.
- In the aforementioned addition step, if it is determined based on the visual information that the user cannot see the predetermined image, an image relating to the information of the captured predetermined image is generated as the auxiliary image. The system according to claim 4.
- In the aforementioned addition step, if it is determined based on the visual information that the user cannot see the predetermined image, information relating to the predetermined image is received via the Internet, and the auxiliary image is generated based on the received information. The system according to claim 5.
- A computer program that causes a computer to perform the following steps: a display step of displaying a predetermined image on a display device; a capturing step of capturing, with a predetermined imaging device, the predetermined image displayed on the display device; a recognition step of performing image recognition processing on the captured predetermined image; a setting step of setting visual information of a user operating a predetermined terminal device; an addition step of performing additional processing using a result of the image recognition processing, the additional processing being processing that generates an auxiliary image by adding new information supporting the predetermined image, based on the visual information and the predetermined image; a superimposition step of displaying the auxiliary image on the terminal device superimposed on the predetermined image; and a determination step of determining whether the predetermined image is within the user's field of view and whether the predetermined image is within the field of view of the imaging device, wherein in the superimposition step, if it is determined that the predetermined image is not within the user's field of view and/or that the predetermined image is not within the field of view of the imaging device, the content of the auxiliary image is changed. The computer program.
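The sequence of steps recited in the claims above can be sketched as a minimal Python outline. All names, data structures, and the string stand-ins for images below are illustrative assumptions chosen for readability, not anything specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Captured picture of the predetermined image on the display device."""
    content: str         # stand-in for the captured pixel data
    in_user_fov: bool    # is the image within the user's field of view?
    in_camera_fov: bool  # is it within the imaging device's field of view?

def recognize(frame: Frame) -> str:
    """Recognition step: image recognition on the captured image (stubbed)."""
    return frame.content

def make_auxiliary(recognized: str, user_can_see: bool) -> str:
    """Addition step: build the auxiliary image from the recognition result
    and the user's visual information (claims 3-4: enlarge or reproduce the
    captured image when the user cannot see it)."""
    if not user_can_see:
        return f"[enlarged copy of: {recognized}]"
    return f"[supporting info for: {recognized}]"

def superimpose(frame: Frame, auxiliary: str) -> str:
    """Superimposition step: overlay the auxiliary image; per claim 1, its
    content changes when the image leaves either field of view."""
    if not frame.in_user_fov or not frame.in_camera_fov:
        auxiliary = "[changed content: image out of view]"
    return f"{frame.content} + {auxiliary}"

# Usage: image visible to the camera, but the user cannot see it clearly.
frame = Frame("boss attack cue", in_user_fov=True, in_camera_fov=True)
aux = make_auxiliary(recognize(frame), user_can_see=False)
print(superimpose(frame, aux))
```

The two claimed behaviors are visible in the sketch: the auxiliary image depends on both the recognition result and the user's visual information, and its content is changed whenever the predetermined image falls outside either field of view.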
Description
This invention relates to a system and a computer program.

Conventionally, television systems utilizing AR (Augmented Reality) technology are known (see, for example, Patent Documents 1 to 3). Such a system allows a user to capture an image displayed on a television screen (a television image) with the camera of a mobile device, thereby displaying an augmented reality (AR) image related to that television image on the mobile device (AR synthesis). Furthermore, by utilizing the gyro sensor built into the mobile device, this system can estimate the position and orientation of the television even when the camera is not capturing the television image: the television's position and orientation are detected and registered in advance using the mobile device, so they can still be estimated even if the mobile device's position and orientation subsequently change.

Patent Document 1: Japanese Patent Publication No. 5259519
Patent Document 2: Japanese Patent Publication No. 5265468
Patent Document 3: Japanese Patent Publication No. 6010373

[Brief Description of the Drawings] Fig. 1 is a diagram showing the system configuration in this embodiment. Fig. 2 is a diagram showing the configuration of the game device included in the system. Fig. 3(a) is a diagram showing the configuration of the AR terminal device included in the system, and Fig. 3(b) is an external perspective view of the AR terminal device. Figs. 4 to 9 each show an example of an image displayed on the AR terminal device. Fig. 10 is a flowchart showing the AR synthesis process in this embodiment.

[Embodiment] System 1 according to an embodiment of the present invention will be described with reference to Figures 1 to 10. Note that System 1 and the processing procedure described later are examples; the embodiments of the present invention are not limited to these, and both can be modified as appropriate without departing from the gist of the present invention.

<System Description> As shown in Figures 1 to 3, the system 1 of this embodiment comprises a game device 2 (part of the "computer", an example of a distribution device), a monitor 3 (part of the "computer", an example of a display device), an AR terminal device 4 (part of the "computer", an example of a terminal device), and a network 5 connecting these devices. System 1 performs the AR synthesis process based on the functions of the game device 2, the monitor 3, and the AR terminal device 4.
In this embodiment, the game device 2 is, as shown in Figure 2, a stationary game device to which a monitor 3 for displaying game images and a controller 250 for receiving user input are externally connected; the game device 2 is, for example, wirelessly connected to the controller 250 and wired to the monitor 3. As shown in Figures 3(a) and 3(b), the AR terminal device 4 is a head-mounted display, a type of wearable terminal, equipped with an optically transmissive display 430 (for example, a transmissive liquid crystal display, a transmissive organic EL display, or a transmissive inorganic EL display) through which the user can see the space behind the display, a speaker 440, a camera 450 (an example of an imaging device), and setting buttons 460. As shown in Figure 4, the user can directly view the real space RS (Real Space) through the transmissive display 430 (i.e., without relying on an image captured by the camera 450). Furthermore, as shown in Figure 5, the AR terminal device 4 can use its camera 450 to capture a game image GI (an example of a predetermined image) displayed on the monitor 3 installed in the real space RS, and can thereby generate an auxiliary image (hereinafter referred to as "AR image AI") to which new information supporting the game image GI has been added. The AR terminal device 4 can thus display (AR-synthesize) not only the real space RS (including the game image GI) visible through the transmissive display 430, but also the AR image AI that assists the game image GI. To generate the AR image AI, the AR terminal device 4 performs image recognition processing on the game image GI captured by the camera 450; specifically, it grasps the progress of the game by analyzing the captured game image GI.
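As a toy illustration of the recognition step just described (grasping the game's progress from the captured game image GI), a captured frame could be compared against a registered cue pattern. The sum-of-absolute-differences matcher below is a hedged sketch under the assumption of tiny grayscale patches represented as nested lists; the patent does not specify a recognition algorithm, and a real implementation would use a full computer-vision pipeline:

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between two equal-sized grayscale patches."""
    return sum(abs(a - b) for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def detect_cue(captured, template, threshold=10):
    """Crude recognition step: report whether the captured game image
    matches a registered cue (e.g., an enemy's special-attack telegraph)."""
    return sad(captured, template) <= threshold

telegraph = [[200, 200], [50, 50]]   # registered cue pattern
frame_ok  = [[198, 201], [52, 49]]   # captured frame, near-identical
frame_no  = [[10, 10], [10, 10]]     # unrelated frame

print(detect_cue(frame_ok, telegraph))   # → True
print(detect_cue(frame_no, telegraph))   # → False
```

A positive match here would correspond to the system concluding, from the captured game image GI, that a particular in-game event is occurring, which then drives the generation of the AR image AI.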
For example, as shown in Figure 5, if the image recognition process determines that an enemy character is performing a special attack in the game image GI, the AR terminal device 4 performs additional processing to generate an AR image AI consisting of a wave indicating (supplementarily