JP-7855539-B2 - Motion identification device and program
Inventors
- 木下 泰宏 (KINOSHITA Yasuhiro)
Assignees
- 東芝テック株式会社 (Toshiba Tec Corporation)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2023-03-16
Claims (5)
- A motion identification device comprising: an illumination that irradiates a predetermined irradiation area with visible light of a predetermined wavelength whose wavelength composition differs from that of ambient light; a camera that photographs a shooting area including the irradiation area and acquires a captured image; and a processor that identifies, based on the captured image, a movement of a human body captured in the captured image, wherein the processor identifies, from the captured image and based on the wavelength, an object region in which the human body is captured, and identifies the movement of the human body based on an image included in the object region.
- The motion identification device according to claim 1, wherein the processor sets an effective region including the irradiation area in the captured image, and identifies the object region from an image included in the effective region.
- The motion identification device according to claim 1 or 2, wherein the human body is a hand, the device further comprises a sensor that detects the human body, and the processor turns on the illumination when it detects the human body using the sensor.
- A program executed by a processor, the program causing the processor to perform: a function of causing an illumination to irradiate a predetermined irradiation area with visible light of a predetermined wavelength whose wavelength composition differs from that of ambient light; a function of acquiring, from a camera, a captured image including the irradiation area; and a function of identifying, based on the captured image, a movement of a human body captured in the captured image, wherein the program further causes the processor to perform a function of identifying, from the captured image and based on the wavelength, an object region in which the human body is captured, and identifying the movement of the human body based on an image included in the object region.
- The program according to claim 4, further causing the processor to perform a function of setting an effective region including the irradiation area in the captured image, and identifying the object region from an image included in the effective region.
Description
Embodiments of the present invention relate to a motion identification device and a program.

Motion identification devices are known that identify human body movements by photographing a part of the human body, such as a hand. Some such devices identify the movement from an image obtained by photographing a predetermined area that includes the human body. These devices may fail to identify the movement properly if the photographed part of the human body is not captured at an appropriate position. Conventional motion identification devices therefore have the problem that they cannot guide the human body to an appropriate position relative to the camera.

Prior art: Japanese Patent Publication No. 2006-106388.

Figure 1 is a schematic diagram showing an example configuration of the motion identification system according to the embodiment. Figure 2 is a schematic diagram showing the camera and the illumination of the motion identification device according to the embodiment. Figure 3 is a block diagram showing an example configuration of the motion identification device according to the embodiment. Figure 4 shows an example of the effective region according to the embodiment. Figure 5 shows an example of the object region according to the embodiment. Figure 6 is a flowchart showing an example of the operation of the motion identification device according to the embodiment. Figure 7 is a flowchart showing another example of the operation of the motion identification device according to the embodiment.

The embodiments will be described below with reference to the drawings. The motion identification system according to this embodiment identifies the movements of a user's hands (human body) during handwashing. The system photographs the user's hands at a sink or the like and, based on the captured images, identifies the movements of the user's hands.
For example, the motion identification system identifies hand states that can be classified as handwashing movements according to the area being washed (back of the hand, between the fingers, thumb, and so on).

Figure 1 schematically shows an example configuration of the motion identification system 100 according to the embodiment. As shown in Figure 1, the motion identification system 100 consists of a motion identification device 1, a sink 20, and so on.

The sink 20 is a basin where the user P washes their hands. The sink 20 has a recessed structure, and a drain is formed at its bottom. A faucet 21 is installed in the sink 20; it dispenses water for the user P to wash their hands, and the dispensed water is discharged from the drain of the sink 20.

The motion identification device 1 is installed above the sink 20. It identifies the actions of the user P; in this embodiment, it identifies the hand movements of the user P during handwashing. The motion identification device 1 includes a camera 3, a depth sensor 4, and an illumination 6.

The camera 3 is installed above the sink 20, facing downwards. It photographs the area (shooting area) that includes the hands of the user P washing at the sink 20. The camera 3 may also be installed so as to photograph from an oblique angle above; its position and orientation are not limited to a specific configuration. The camera 3 captures a two-dimensional image (captured image) and outputs color information (RGB (Red, Green, Blue)) for each pixel.

The depth sensor 4 is installed above the sink 20, facing downwards. It measures distances within the area that includes the hands of the user P, measuring the distance from each point to the depth sensor 4 itself. Here, the depth sensor 4 functions as a sensor for detecting the position of the user's hands.
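The patent does not disclose an implementation, but the wavelength-based extraction of the object region claimed above can be sketched as a simple per-channel threshold on the RGB captured image: pixels lit by the illumination are dominated by its wavelength band. The function name, the assumption of green illumination, and the threshold values below are all illustrative, not taken from the patent.

```python
import numpy as np

def extract_object_region(image, ratio=1.4, min_value=100):
    """Return a boolean mask of pixels dominated by the illumination wavelength.

    Assumes the illumination emits green visible light, so illuminated pixels
    have a green channel that clearly exceeds red and blue. `ratio` and
    `min_value` are hypothetical tuning parameters.
    """
    r = image[..., 0].astype(np.int32)
    g = image[..., 1].astype(np.int32)
    b = image[..., 2].astype(np.int32)
    return (g >= min_value) & (g > ratio * r) & (g > ratio * b)

# Synthetic 4x4 captured image: the top-left 2x2 block is lit by the
# green illumination, the rest only by ambient light.
image = np.full((4, 4, 3), 60, dtype=np.uint8)
image[:2, :2] = (40, 200, 40)

mask = extract_object_region(image)
print(mask.sum())  # 4 pixels fall in the illuminated object region
```

In practice the dominant channel and thresholds would depend on the actual wavelength of the illumination and the camera's spectral response.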
The depth sensor 4 generates distance information indicating the distance from a predetermined reference plane (or a predetermined reference point) to each point based on the measurement results. For example, the distance information may indicate the coordinates of each point in a predetermined three-dimensional coordinate system.

For example, the depth sensor 4 comprises a light source and a sensor that detects light emitted from the light source and reflected back, and measures distance based on this reflected light (visible or invisible). The depth sensor 4 may employ a Time-of-Flight (ToF) method, which measures the distance to the object based on the time it takes for the emitted light to reflect off the object and return to the depth sensor 4. Alternatively, the depth sensor 4 may calculate the distance from the parallax between the images captured by two cameras (a stereo camera), or may project a dot pattern and measure the distance from the distortion of that pattern. The con
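As a rough illustration of the Time-of-Flight principle mentioned above (not the patent's implementation), the measured distance follows directly from the round-trip time of the emitted light: the light travels to the object and back, so the distance is half the path length. The function below is a hypothetical sketch of that calculation.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the object from the light's round-trip time.

    The emitted pulse travels out and back, so the one-way distance
    is half of speed * time.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 6.67 nanoseconds corresponds to an object
# roughly one metre from the sensor.
d = tof_distance(6.671e-9)
```

Real ToF sensors typically infer this time indirectly, for example from the phase shift of a modulated light signal, rather than timing a single pulse.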