JP-7856620-B2 - Device and method for supporting the determination of causes
Inventors
- 前田 梨帆
- 合田 航
- 長峯 望
Assignees
- 公益財団法人鉄道総合技術研究所
Dates
- Publication Date
- 2026-05-11
- Application Date
- 2023-10-26
Claims (10)
- A cause analysis support device that supports identification of the cause of a judgment error by an AI (Artificial Intelligence) model that determines whether or not a detection target object appears in a captured image, the device comprising: a first inspection means that, based on the error image (the captured image on which the AI model under test made the judgment error), generates modified images obtained by changing the camera settings and re-photographing, and inputs the modified images into the model under test to perform a first inspection that determines whether the judgment succeeds; a second inspection means that performs a second inspection that evaluates deficiencies in the training images, based on the error image and on the training image set used to generate the model under test; and a third inspection means that performs a learning process using the training image set on each of a plurality of types of untrained AI models to generate type-specific AI models, and inputs the error image into each type-specific AI model to perform a third inspection that determines whether the judgment succeeds.
- The cause analysis support device according to claim 1, wherein whether to continue or terminate the inspection is determined based on the result of the first inspection; if the inspection is continued, the second inspection is performed by the second inspection means; whether to continue or terminate the inspection is then determined based on the result of the second inspection; and if the inspection is continued, the third inspection is performed by the third inspection means.
- The cause analysis support device according to claim 1 or 2, further comprising a first presentation means that presents, as an item for improvement, the camera settings associated with the modified image for which the judgment was determined to succeed in the first inspection.
- The cause analysis support device according to claim 1, wherein the second inspection means performs the second inspection by: analyzing, using a feature analysis process that analyzes the features of a captured image, the features of each training image included in the training image set and calculating a feature distribution of the training image set; analyzing the features of the error image using the same feature analysis process; and comparing the feature distribution with the features of the error image to determine features that are missing from the training image set.
- The cause analysis support device according to claim 4, wherein the feature analysis process analyzes the features of the image portion of the captured image in which the detection target object appears, and includes those features in the features of the image being analyzed.
- The cause analysis support device according to claim 4 or 5, further comprising a second presentation means that selects, from the training image set, a training image whose features satisfy a predetermined similarity condition with respect to the missing features determined by the second inspection, and presents the selected training image together with information indicating the missing features.
- The cause analysis support device according to claim 1 or 2, wherein the plurality of types of untrained AI models differ from one another in at least one of the following aspects of the machine learning model: 1) algorithm, 2) structure, 3) number of layers, 4) number of nodes, and 5) number of parameters.
- The cause analysis support device according to claim 1 or 2, further comprising a third presentation means that presents information on the type of AI model for which the judgment was determined to succeed in the third inspection.
- The cause analysis support device according to claim 1 or 2, wherein the captured image is an image of the track on which a vehicle is traveling, and the detection target object is an obstacle on the track.
- A cause analysis support method in which a computer system supports identification of the cause of a judgment error by an AI (Artificial Intelligence) model that determines whether or not a detection target object appears in a captured image, the method comprising: generating, based on the error image (the captured image on which the AI model under test made the judgment error), modified images obtained by changing the camera settings and re-photographing, and inputting the modified images into the model under test to perform a first inspection that determines whether the judgment succeeds; performing a second inspection that evaluates deficiencies in the training images, based on the error image and on the training image set used to generate the model under test; and performing a learning process using the training image set on each of a plurality of types of untrained AI models to generate type-specific AI models, and inputting the error image into each type-specific AI model to perform a third inspection that determines whether the judgment succeeds.
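The three-stage inspection in the claims above can be pictured as a short pipeline: try re-shot images with changed camera settings, then check the training data for missing features, then re-train other model types on the same data. The sketch below is purely illustrative; every name (`support_cause_analysis`, `InspectionReport`, the evaluator callbacks) is a hypothetical placeholder, not an interface from the patent, and the early-exit behavior follows claim 2's continue-or-terminate logic.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical types for the sketch: an "image" is opaque here; a model maps
# an image to True (detection target present) or False (absent).
Image = object
Model = Callable[[Image], bool]

@dataclass
class InspectionReport:
    passed_camera_settings: list = field(default_factory=list)  # first inspection
    missing_features: list = field(default_factory=list)        # second inspection
    successful_model_types: list = field(default_factory=list)  # third inspection

def support_cause_analysis(
    model_under_test: Model,
    error_image: Image,
    ground_truth: bool,
    modified_images: dict,            # camera-setting label -> re-shot image
    missing_feature_check: Callable,  # second-inspection evaluator (assumed given)
    type_specific_models: dict,       # model-type label -> trained Model
) -> InspectionReport:
    report = InspectionReport()

    # First inspection: feed each re-shot image (changed camera settings) into
    # the model under test and record which settings yield a correct judgment.
    for setting, image in modified_images.items():
        if model_under_test(image) == ground_truth:
            report.passed_camera_settings.append(setting)
    if report.passed_camera_settings:
        return report  # cause likely lies in the camera settings; terminate

    # Second inspection: evaluate deficiencies of the training image set
    # relative to the error image (e.g. missing feature-value ranges).
    report.missing_features = missing_feature_check(error_image)
    if report.missing_features:
        return report  # cause likely lies in the training data; terminate

    # Third inspection: run the error image through models of other types
    # trained on the same data, recording which types judge correctly.
    for model_type, model in type_specific_models.items():
        if model(error_image) == ground_truth:
            report.successful_model_types.append(model_type)
    return report
```

Because each stage returns as soon as it finds a candidate cause, a caller can read the report in order: camera settings first, training-data gaps second, model type last.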
Description
This invention relates to a cause analysis support device and related technology that supports identification of the causes of judgment errors made by AI models.

For users of AI (Artificial Intelligence) models, the decision-making process of an AI model is generally a black box: it is difficult to understand what processing is performed and through what steps a decision is reached. Here, "AI model" encompasses a machine learning model together with the pre-processing applied to the data input to that machine learning model; in a narrower, more precise sense, "AI model" can be taken to mean the machine learning model itself. While "AI model" and "machine learning model" can therefore be treated as equivalent in this invention, this specification uses "AI model" in the broader sense.

Although the decision-making process may be a black box for users, businesses that manufacture and sell products incorporating AI models into their systems, or that provide services incorporating AI models, need to identify the cause of any judgment error quickly, since the AI model operates as part of the overall system. There is, however, no universally effective technology for identifying the causes of AI model judgment errors, because how an AI model is used varies with the type of model and the processing performed in each specific technical field. For example, Patent Document 1 (Japanese Patent Publication No. 2022-96379) proposes a method that, when an image is used as a search key in a similar-image search over documents containing screen data, indicates which areas of the image were used to estimate similarity.

Brief description of the drawings:
- Figure 1: Diagram illustrating the AI model under test.
- Figure 2: Explanatory diagram of generating an AI model.
- Figure 3: Explanatory diagram of a judgment error by an AI model.
- Figure 4: Flowchart of the cause analysis support process.
- Figure 5: Example of camera-setting changes made in the first inspection.
- Figure 6: Flowchart of the first inspection process.
- Figure 7: Example of image features analyzed in the second inspection.
- Figure 8: Comparison of the feature distribution of the training images with the features of the error image.
- Figure 9: Flowchart of the second inspection process.
- Figure 10: Flowchart of the third inspection process.
- Figure 11: Example of a functional configuration of the cause analysis support device.

Preferred embodiments of the present invention are described below with reference to the drawings. The applicable forms of the present invention are not limited to the following embodiments, and identical elements are denoted by the same reference numerals throughout the drawings.

The cause analysis support device 1 of this embodiment is a device for supporting analysis of the causes of judgment errors made by AI models. Figure 1 illustrates the AI model that serves as the model under test for the cause analysis support device 1. The AI model 10, the model under test, is used for forward monitoring of trains in railways: it determines whether or not a detection target object 30 appears in an input captured image 20 (the presence or absence of the detection target object 30) and outputs the result. The captured image 20 is an image of the track ahead of a railway vehicle, taken by a camera mounted on the vehicle. The detection target object 30 is an obstacle that affects the operation of the railway vehicle, such as a person, animal, car, or bicycle on or near the track.
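Claim 4 describes the second inspection as comparing the feature distribution of the training image set against the features of the error image to find missing features. A minimal numeric sketch of one way such a comparison could work follows; the k-sigma outlier rule, the dictionary-of-features representation, and the function name are all illustrative assumptions, not details from the patent.

```python
import statistics

def find_missing_features(training_features: list[dict], error_features: dict,
                          k: float = 2.0) -> list[str]:
    """Flag features of the error image that fall outside the training
    distribution. "Outside" is modeled here as more than k standard
    deviations from the training mean (an illustrative choice)."""
    missing = []
    for name, value in error_features.items():
        samples = [f[name] for f in training_features if name in f]
        if len(samples) < 2:
            # Feature absent (or nearly absent) from the training set.
            missing.append(name)
            continue
        mean = statistics.fmean(samples)
        stdev = statistics.stdev(samples)
        if stdev == 0.0:
            if value != mean:
                missing.append(name)
        elif abs(value - mean) > k * stdev:
            missing.append(name)
    return missing
```

For instance, if every training image was shot in daylight, a nighttime error image would produce a brightness value far below the training mean, and "brightness" would be reported as a feature missing from the training image set.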
In Figure 1, the captured image 20a on the left shows a maintenance worker, the detection target object 30, next to the tracks, and the AI model 10 judges that a detection target object is present. The captured image 20b on the right shows no detection target object 30, and the AI model 10 judges that no detection target object is present. As shown in Figure 2, the AI model 10 is a trained AI model generated by performing a learning process on an untrained AI model 12 using a training image set 40, a collection of training images 42 each associated with the presence or absence of the detection target object 30 in that captured image. The training image set 40 can also be described as training data in which each training image 42 is the input data to the AI model and the associated presence or absence of the detection target object 30 is the output data on which the AI model is trained. A judgment error by the AI model 10 refers to a case where, as illustrated in Figure 3, the model judged that no detection target object was present in a captured image 20 (22) that does contain a detection target object 30. The cause analysis support device 1 supports identification of the cause of such a judgment error by the AI model 10, the model under test. Based on the error image 22 (the captured image 20 on which the AI model 10 made the judgment error) and the training image