EP-4738257-A1 - SUBSTRATE PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR SUBSTRATE PROCESSING DEVICE
Abstract
In order to efficiently recognize a posture of a component of a substrate processing apparatus, reference shape information regarding a two-dimensional shape of a 3D model of a target component is acquired, based on design information of the target component, for each of a plurality of first virtual camera positions set on a plurality of virtual surfaces along a virtual spherical surface surrounding the 3D model. For each of the first virtual camera positions, a matching degree between the reference shape information and real shape information related to a two-dimensional shape of an object in a real image is calculated, and the first virtual camera position having the highest matching degree is detected. The virtual surface in which this high-matching-degree virtual camera position is set is divided to generate a plurality of virtual divided surfaces. Based on the design information of the target component, reference shape information related to the two-dimensional shape of the 3D model is then generated for each of a plurality of second virtual camera positions on the plurality of virtual divided surfaces, and for each of the second virtual camera positions, a numerical value indicating the matching degree between the real shape information and the reference shape information is calculated.
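The abstract describes a coarse-to-fine viewpoint search. The sketch below (illustration only, not claim language; all names are hypothetical) replaces the render-and-match step with a stand-in score against a hidden "true" viewing direction so the example runs without a renderer; the surface aggregation is an octahedron, and the refinement uses the three-way split described in claim 10.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Stand-in for the matching degree: the apparatus compares reference shape
# information (a rendered 2D outline of the 3D model) with real shape
# information from the camera image.  Here a hidden "true" viewing
# direction is scored by cosine similarity instead.
TRUE_DIR = normalize((0.3, 0.8, 0.5))

def match_score(cam_dir):
    return sum(a * b for a, b in zip(cam_dir, TRUE_DIR))

def face_camera(tri):
    # One virtual camera position per triangular virtual surface: the
    # face centroid, projected onto the virtual spherical surface.
    c = tuple(sum(p[i] for p in tri) / 3.0 for i in range(3))
    return normalize(c)

def split_three(tri, point):
    # Divide the high-matching-degree virtual surface into three virtual
    # divided surfaces by segments from its vertices to the given point.
    a, b, c = tri
    return [(a, b, point), (b, c, point), (c, a, point)]

# Surface aggregation: an octahedron's 8 triangular faces along the
# virtual spherical surface (any such polyhedron would do).
X, Y, Z = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
FACES = [(sx, sy, sz)
         for sx in (X, (-1.0, 0.0, 0.0))
         for sy in (Y, (0.0, -1.0, 0.0))
         for sz in (Z, (0.0, 0.0, -1.0))]

def coarse_to_fine(faces, rounds=4):
    # First pass: score every first virtual camera position, keep the best.
    best_face = max(faces, key=lambda t: match_score(face_camera(t)))
    best_cam = face_camera(best_face)
    # Refinement passes: subdivide the best face, re-score, repeat.
    for _ in range(rounds):
        children = split_three(best_face, best_cam)
        cand = max(children, key=lambda t: match_score(face_camera(t)))
        if match_score(face_camera(cand)) <= match_score(best_cam):
            break  # incumbent retained: refinement has converged
        best_face, best_cam = cand, face_camera(cand)
    return best_cam
```

After a few rounds the returned camera direction closely approaches the hidden true viewpoint, while only a handful of candidates are scored per round instead of a dense global grid.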
Inventors
- SHIMIZU, SHINJI
Assignees
- SCREEN Holdings Co., Ltd.
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2024-06-05
Claims (11)
- A substrate processing apparatus that processes a substrate, the substrate processing apparatus comprising: a storage unit that stores three-dimensional design information regarding a target component; an imaging unit that obtains a real image capturing said target component by imaging; and a search processing unit that, based on reference shape information regarding a two-dimensional shape of a three-dimensional model in each of a plurality of virtual images that can be obtained by imaging said three-dimensional model of said target component from a plurality of virtual camera positions, and real shape information regarding a two-dimensional shape of an object in said real image, the reference shape information being generated based on said three-dimensional design information, searches for a virtual camera position having a highest matching degree between said reference shape information and said real shape information among said plurality of virtual camera positions, wherein said search processing unit includes: a first shape information acquisition unit that acquires said reference shape information generated for each of a plurality of first virtual camera positions assuming a case where said three-dimensional model is imaged from each of said plurality of first virtual camera positions that is a plurality of virtual camera positions set by virtually setting a surface aggregation including a plurality of virtual surfaces located along a virtual spherical surface surrounding the three-dimensional model centered around a reference point of said three-dimensional model and virtually setting one virtual camera position for each of said plurality of virtual surfaces based on said three-dimensional design information; a first calculation unit that calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said plurality of first virtual camera positions; a first detection unit that 
detects a virtual camera position with high matching degree that is a first virtual camera position having a highest matching degree between said real shape information and said reference shape information among said plurality of first virtual camera positions based on a calculation result by said first calculation unit; a divided surface generation unit that generates a plurality of virtual divided surfaces by dividing a virtual surface with high matching degree that is a virtual surface in which said virtual camera position with high matching degree is virtually set, among said plurality of virtual surfaces; a second shape information generation unit that generates said reference shape information for each of a plurality of second virtual camera positions assuming a case where said three-dimensional model is imaged from each of said plurality of second virtual camera positions that is a plurality of virtual camera positions set by virtually setting one virtual camera position for each of said plurality of virtual divided surfaces based on said three-dimensional design information; and a second calculation unit that calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said plurality of second virtual camera positions.
- The substrate processing apparatus according to claim 1, wherein assuming a case where said three-dimensional model is imaged from each of M1×T1 first virtual camera positions that are M1×T1 virtual camera positions set by virtually setting T1 (T1 is a natural number equal to or greater than 2) surface aggregations having mutually different distances from said reference point and virtually setting one virtual camera position for each of M1 (M1 is a natural number equal to or greater than 2) virtual surfaces in each of said T1 surface aggregations based on said three-dimensional design information, said first shape information acquisition unit acquires said reference shape information generated for each of said M1×T1 first virtual camera positions, said first calculation unit calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said M1×T1 first virtual camera positions, said first detection unit detects said virtual camera position with high matching degree which is a virtual camera position having a highest matching degree between said real shape information and said reference shape information among said M1×T1 first virtual camera positions based on a calculation result by said first calculation unit, said divided surface generation unit divides, according to a same rule, each of T2 (T2 is a natural number equal to or greater than 2) virtual surfaces including said virtual surface with high matching degree, intersecting a line passing through said reference point and said virtual camera position with high matching degree on a side of said virtual camera position with high matching degree with respect to said reference point and having mutually different distances from said reference point among said M1 virtual surfaces in each of said T1 surface aggregations to generate M2 (M2 is a natural number equal to or greater than 2) virtual divided surfaces for each of the T2 
virtual surfaces, thereby generating M2×T2 virtual divided surfaces, said second shape information generation unit generates, assuming a case where said three-dimensional model is imaged from each of M2×T2 second virtual camera positions that are M2×T2 virtual camera positions set by virtually setting one virtual camera position for each of said M2×T2 virtual divided surfaces, said reference shape information for each of said M2×T2 second virtual camera positions based on said three-dimensional design information, and said second calculation unit calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said M2×T2 second virtual camera positions.
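Claim 2's M1×T1 grid of first virtual camera positions (one camera per face, for T1 surface aggregations at mutually different distances from the reference point) amounts to the following enumeration, sketched under the assumption that faces are triangles given as vertex triples (names hypothetical):

```python
import math

def camera_grid(faces, radii):
    # T1 surface aggregations with mutually different distances from the
    # reference point: scaling the same M1 triangular faces to each radius
    # yields M1*T1 first virtual camera positions, so the coarse search
    # covers viewing direction and viewing distance at once.
    cams = []
    for r in radii:                      # T1 distances
        for tri in faces:                # M1 virtual surfaces
            c = tuple(sum(p[i] for p in tri) / 3.0 for i in range(3))
            n = math.sqrt(sum(x * x for x in c))
            cams.append(tuple(r * x / n for x in c))
    return cams
```

Each camera keeps the viewing direction of its face while its distance from the reference point equals the radius of its aggregation.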
- The substrate processing apparatus according to claim 2, wherein said search processing unit includes a second detection unit that detects a virtual camera position having a highest matching degree between said real shape information and said reference shape information among said virtual camera position with high matching degree and said M2×T2 second virtual camera positions.
- The substrate processing apparatus according to claim 3, wherein said search processing unit executes one or more times of n-th unit processing (n is a natural number equal to or greater than 2) after executing first unit processing for said target component, said search processing unit sequentially performs first A processing, first B processing, first C processing, and first D processing in said first unit processing, said first A processing is processing in which said divided surface generation unit divides, according to a same rule, each of first T3 virtual divided surfaces which are a plurality of virtual divided surfaces generated by dividing each of said T2 virtual surfaces and which are T3 (T3 is a natural number equal to or greater than 2) virtual divided surfaces including a virtual divided surface including a first reference virtual camera position which is a virtual camera position detected first by said second detection unit, intersecting a line passing through said reference point and said first reference virtual camera position on a side of said first reference virtual camera position with respect to said reference point and having mutually different distances from said reference point to generate first M3 virtual divided surfaces which are M3 (M3 is a natural number equal to or greater than 2) virtual divided surfaces for each of the first T3 virtual divided surfaces, thereby generating first M3×T3 virtual divided surfaces which are M3×T3 virtual divided surfaces, said first B processing is processing in which said second shape information generation unit generates, assuming a case where said three-dimensional model is imaged from each of first M3×T3 third virtual camera positions that are M3×T3 virtual camera positions set by virtually setting one virtual camera position for each of said first M3×T3 virtual divided surfaces, said reference shape information for each of said first M3×T3 third virtual camera positions based on said 
three-dimensional design information, said first C processing is processing in which said second calculation unit calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said first M3×T3 third virtual camera positions, said first D processing is processing in which said second detection unit detects a second reference virtual camera position that is a virtual camera position having a highest matching degree between said real shape information and said reference shape information among said first reference virtual camera position and said first M3×T3 third virtual camera positions, said search processing unit sequentially performs n-th A processing, n-th B processing, n-th C processing, and n-th D processing in each of said one or more times of n-th unit processing, said n-th A processing is processing in which said divided surface generation unit divides, according to a same rule, each of n-th T3 virtual divided surfaces which are a plurality of virtual divided surfaces generated by dividing each of (n-1)-th T3 virtual divided surfaces and which are T3 virtual divided surfaces including a virtual divided surface including an n-th reference virtual camera position that is a virtual camera position detected n-th by said second detection unit, intersecting a line passing through said reference point and said n-th reference virtual camera position on a side of said n-th reference virtual camera position with respect to said reference point, and having mutually different distances from said reference point to generate n-th M3 virtual divided surfaces that are M3 virtual divided surfaces for each of the n-th T3 virtual divided surfaces, thereby generating n-th M3×T3 virtual divided surfaces that are M3×T3 virtual divided surfaces, said n-th B processing is processing in which said second shape information generation unit generates, assuming a case where said three-dimensional model is 
imaged from each of n-th M3×T3 third virtual camera positions that are M3×T3 virtual camera positions set by virtually setting one virtual camera position for each of said n-th M3×T3 virtual divided surfaces, said reference shape information for each of said n-th M3×T3 third virtual camera positions based on said three-dimensional design information, said n-th C processing is processing in which said second calculation unit calculates a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said n-th M3×T3 third virtual camera positions, and said n-th D processing is processing in which said second detection unit detects a virtual camera position having a highest matching degree between said real shape information and said reference shape information among said n-th reference virtual camera position and said n-th M3×T3 third virtual camera positions.
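The n-th unit processing of claims 4 to 6 is, in essence, a refinement loop with two stopping rules. A hedged skeleton, parameterized over a scoring function and a candidate-proposal function (both placeholders for the A/B/C processing; names hypothetical):

```python
def refine(best_cam, score, propose, max_repeats=2, max_rounds=10):
    """Skeleton of the n-th unit processing of claims 4 to 6.

    score(cam)   -> numerical matching degree (higher is better)
    propose(cam) -> candidate camera positions obtained by subdividing
                    the surfaces around `cam` (A/B processing)
    Each round scores the candidates (C processing) and keeps the best of
    incumbent and candidates (D processing).  The loop ends when the same
    reference position survives `max_repeats` consecutive rounds (the
    first predetermined number of claim 5) or after `max_rounds` rounds
    in total (the second predetermined number of claim 6).
    """
    streak = 0
    for _ in range(max_rounds):
        cand = max(propose(best_cam), key=score)
        if score(cand) > score(best_cam):
            best_cam, streak = cand, 0
        else:
            streak += 1
            if streak >= max_repeats:
                break
    return best_cam
```

A schematic 1-D usage example: `refine(0.0, lambda x: -(x - 3.14) ** 2, lambda x: [x - 0.5, x + 0.5])` walks the candidate toward the score's peak in half-unit steps and stops once no proposal improves on the incumbent twice in a row.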
- The substrate processing apparatus according to claim 4, wherein said search processing unit ends execution of said one or more times of n-th unit processing in response to one reference virtual camera position, from said first reference virtual camera position to said n-th reference virtual camera position, being continuously detected by said second detection unit as a virtual camera position having a highest matching degree between said real shape information and said reference shape information a first predetermined number of times set in advance.
- The substrate processing apparatus according to claim 4, wherein said search processing unit ends execution of said one or more times of n-th unit processing in response to the n-th unit processing in said one or more times of n-th unit processing being executed a second predetermined number of times set in advance.
- The substrate processing apparatus according to any one of claims 4 to 6, further comprising an abnormality detection unit that detects an abnormality of said target component by comparing reality information related to a posture of said target component recognized based on a virtual camera position having a highest matching degree between said real shape information and said reference shape information detected (n+1)-th by said second detection unit in the last n-th D processing in said one or more times of n-th unit processing, with normal information related to a posture of said target component based on said three-dimensional design information in a case where a state of said target component is normal.
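The comparison in claim 7 can be sketched minimally under the simplifying assumption that a posture is reduced to a single direction vector and compared against a fixed angular tolerance (both are assumptions for illustration, not taken from the claim):

```python
import math

def posture_abnormal(recognized_dir, normal_dir, tol_deg=2.0):
    # Claim 7, simplified: the reality information (posture recognized
    # from the best-matching virtual camera position) is compared with the
    # normal information from the design data; a deviation angle beyond
    # the tolerance is reported as an abnormality.
    dot = sum(a * b for a, b in zip(recognized_dir, normal_dir))
    na = math.sqrt(sum(a * a for a in recognized_dir))
    nb = math.sqrt(sum(b * b for b in normal_dir))
    cos = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos)) > tol_deg
```

In the apparatus the posture comparison would be richer (full orientation, possibly position), but the thresholded-deviation structure is the same.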
- The substrate processing apparatus according to any one of claims 2 to 6, wherein said same rule includes a rule of dividing a division target surface to be divided into a plurality of surfaces by a plurality of line segments respectively connecting a center point of the division target surface and all vertices of the division target surface.
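The center-point division rule of claim 8 generalizes to any polygon: segments from the center point to every vertex split an n-gon into n surfaces. A sketch assuming the surface is given as a tuple of 3D vertices (names hypothetical):

```python
def split_at_center(polygon):
    # Claim 8's rule: line segments from the center point of the division
    # target surface to all of its vertices divide an n-gon into n
    # smaller surfaces (triangles).
    k = len(polygon)
    center = tuple(sum(p[i] for p in polygon) / k for i in range(3))
    return [(polygon[j], polygon[(j + 1) % k], center) for j in range(k)]
```

Applied to a triangle this yields three pieces; applied to a quadrilateral, four, and so on.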
- The substrate processing apparatus according to any one of claims 1 to 6, wherein each of said plurality of virtual surfaces is a triangular surface, and said surface aggregation is a polyhedron constituted by a large number of triangular surfaces.
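Claim 9's surface aggregation is a polyhedron of triangular faces approximating the virtual spherical surface. One common way to build such an aggregation is shown below with an octahedron seed; the claim does not prescribe a particular polyhedron, and an icosahedron seed works identically (names hypothetical):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def subdivide_sphere(faces):
    # Refine a surface aggregation: every triangular face becomes four,
    # with the new edge-midpoint vertices pushed back onto the virtual
    # spherical surface, so the polyhedron approaches the sphere.
    out = []
    for a, b, c in faces:
        ab = normalize(tuple((a[i] + b[i]) / 2.0 for i in range(3)))
        bc = normalize(tuple((b[i] + c[i]) / 2.0 for i in range(3)))
        ca = normalize(tuple((c[i] + a[i]) / 2.0 for i in range(3)))
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return out

# Octahedron: the simplest polyhedron of triangular faces on the sphere.
X, Y, Z = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
OCTAHEDRON = [(sx, sy, sz)
              for sx in (X, (-1.0, 0.0, 0.0))
              for sy in (Y, (0.0, -1.0, 0.0))
              for sz in (Z, (0.0, 0.0, -1.0))]
```

Each application of `subdivide_sphere` quadruples the face count (8, 32, 128, ...), giving progressively denser sets of candidate first virtual camera positions.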
- The substrate processing apparatus according to claim 9, wherein said divided surface generation unit divides said virtual surface with high matching degree into three virtual divided surfaces as said plurality of virtual divided surfaces by three line segments connecting each of three vertices of said virtual surface with high matching degree and said virtual camera position with high matching degree.
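The specific three-way split of claim 10 is sketched below together with an area helper one can use to check that the three virtual divided surfaces exactly tile the parent surface when the camera position lies inside it (names hypothetical):

```python
import math

def split_toward_camera(tri, cam_point):
    # Claim 10: three line segments from each vertex of the
    # high-matching-degree virtual surface to the high-matching-degree
    # virtual camera position yield three virtual divided surfaces.
    a, b, c = tri
    return [(a, b, cam_point), (b, c, cam_point), (c, a, cam_point)]

def tri_area(t):
    # Triangle area via the cross product of two edge vectors.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = t
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    cross = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return 0.5 * math.sqrt(sum(k * k for k in cross))
```

For a planar triangle and an interior point, the three children partition the parent, so their areas sum to the parent's area.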
- An information processing method in a substrate processing apparatus that processes a substrate, the information processing method comprising: a real image acquisition step of acquiring, by an arithmetic unit, a real image capturing a target component obtained by imaging by an imaging unit; and a search step of searching for, by the arithmetic unit, based on reference shape information regarding a two-dimensional shape of a three-dimensional model in each of a plurality of virtual images that can be acquired by imaging a three-dimensional model of said target component from a plurality of virtual camera positions, and real shape information regarding a two-dimensional shape of an object in said real image, the reference shape information being generated based on three-dimensional design information regarding said target component stored in a storage unit, a virtual camera position having a highest matching degree between said reference shape information and said real shape information among said plurality of virtual camera positions, wherein said search step includes: a first shape information acquisition step of acquiring, assuming a case where said three-dimensional model is imaged from each of a plurality of first virtual camera positions that is a plurality of virtual camera positions set by virtually setting a surface aggregation including a plurality of virtual surfaces located along a virtual spherical surface surrounding the three-dimensional model centered around a reference point of said three-dimensional model and virtually setting one virtual camera position for each of said plurality of virtual surfaces based on said three-dimensional design information, said reference shape information generated for each of said plurality of first virtual camera positions; a first calculation step of calculating a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said plurality of first 
virtual camera positions; a first detection step of detecting a virtual camera position with high matching degree that is a first virtual camera position having a highest matching degree between said real shape information and said reference shape information among said plurality of first virtual camera positions based on a calculation result in said first calculation step; a divided surface generation step of generating a plurality of virtual divided surfaces by dividing a virtual surface with high matching degree that is a virtual surface in which said virtual camera position with high matching degree is virtually set, among said plurality of virtual surfaces; a second shape information generation step of generating said reference shape information for each of a plurality of second virtual camera positions, assuming a case where said three-dimensional model is imaged from each of said plurality of second virtual camera positions that is a plurality of virtual camera positions set by virtually setting one virtual camera position for each of said plurality of virtual divided surfaces based on said three-dimensional design information; and a second calculation step of calculating a numerical value indicating a matching degree between said real shape information and said reference shape information for each of said plurality of second virtual camera positions.
Description
TECHNICAL FIELD
The present invention relates to a technique for recognizing a posture of a component in a substrate processing apparatus that processes a substrate. Substrates to be processed in the substrate processing apparatus include, for example, a semiconductor substrate, a substrate for a flat panel display (FPD) such as a liquid crystal display apparatus or an organic electroluminescence (EL) display apparatus, a glass substrate for a photomask, a substrate for an optical disk, a substrate for a magnetic disk, and a substrate for a solar cell.
BACKGROUND ART
There is a substrate processing apparatus including a processing chamber, a substrate holding unit, a nozzle, a camera, an image processing unit, and a monitoring unit (see, for example, Patent Document 1). In this substrate processing apparatus, the substrate holding unit, the nozzle, and the camera are arranged in the processing chamber. The substrate holding unit holds a substrate in a horizontal posture and rotates the substrate in a horizontal plane. The nozzle moves between a standby position off to a side of the substrate and a discharge position above the substrate by turning of a drive arm to which the nozzle is fixed. The nozzle is arranged at the standby position when the substrate is attached to and detached from the substrate holding unit, and at the discharge position when a processing liquid is discharged from the nozzle toward the substrate. The camera is attached to a predetermined position in the processing chamber and captures an image of a predetermined region including the nozzle moved to the discharge position. The image processing unit acquires second nozzle position information indicating a position of the nozzle based on the image from the camera and outputs the second nozzle position information to the monitoring unit. The monitoring unit determines whether or not there is an abnormality in the position of the nozzle based on correspondence between first nozzle position information, which directly or indirectly indicates the arrangement position of the nozzle and is provided from a control unit, and the second nozzle position information from the image processing unit.
PRIOR ART DOCUMENT
PATENT DOCUMENT
Patent Document 1: WO 2019/146456 A
SUMMARY
PROBLEM TO BE SOLVED BY THE INVENTION
With respect to a substrate processing apparatus, there is room for improvement in terms of efficiently recognizing a posture of a component.
MEANS TO SOLVE THE PROBLEM
A substrate processing apparatus according to a first aspect is a substrate processing apparatus that processes a substrate, the substrate processing apparatus including: a storage unit that stores three-dimensional design information regarding a target component; an imaging unit that obtains a real image capturing the target component by imaging; and a search processing unit that, based on reference shape information regarding a two-dimensional shape of a three-dimensional model in each of a plurality of virtual images that can be obtained by imaging the three-dimensional model of the target component from a plurality of virtual camera positions, and real shape information regarding a two-dimensional shape of an object in the real image, the reference shape information being generated based on the three-dimensional design information, searches for a virtual camera position having a highest matching degree between the reference shape information and the real shape information among the plurality of virtual camera positions, in which the search processing unit includes: a first shape information acquisition unit that acquires the reference shape information generated for each of a plurality of first virtual camera positions assuming a case where the three-dimensional model is imaged from each of the plurality of first virtual
camera positions that is a plurality of virtual camera positions set by virtually setting a surface aggregation including a plurality of virtual surfaces located along a virtual spherical surface surrounding the three-dimensional model centered around a reference point of the three-dimensional model and virtually setting one virtual camera position for each of the plurality of virtual surfaces based on the three-dimensional design information; a first calculation unit that calculates a numerical value indicating a matching degree between the real shape information and the reference shape information for each of the plurality of first virtual camera positions; a first detection unit that detects a virtual camera position with high matching degree that is a first virtual camera position having a highest matching degree between the real shape information and the reference shape information among the plurality of first virtual camera positions based on a calculation result by the first calculation unit; a divided surface generation unit that generates a plurality of virtual divided surfaces by dividing a virtual s