KR-20260067594-A - Apparatus and method for generating cross-sectional image of filler-reinforced composite material

KR 20260067594 A

Abstract

The present invention relates to an apparatus and method that enable the construction of a 3D virtual model of a composite material by generating a cross-sectional image of a filler-reinforced composite material during the development of the composite material. A cross-sectional image generation device for a filler-reinforced composite material according to an embodiment of the present invention comprises: a first encoder unit that extracts a first feature vector from a first cross-sectional image of a predetermined composite material; a second encoder unit that receives condition data for a specific composite material and generates a condition embedding vector; a combining unit that combines the first feature vector and the condition embedding vector to generate a conditional latent vector; and a decoder unit that generates a second cross-sectional image of the specific composite material from the conditional latent vector.

Inventors

  • 문성남
  • 임담혁
  • 황서경
  • 정진미

Assignees

  • 주식회사 엘지화학 (LG Chem, Ltd.)

Dates

Publication Date
2026-05-13
Application Date
2024-11-06

Claims (16)

  1. A cross-sectional image generation device for a filler-reinforced composite material, comprising: a first encoder unit that extracts a first feature vector from a first cross-sectional image of a predetermined composite material; a second encoder unit that receives condition data for a specific composite material and generates a condition embedding vector; a combining unit that combines the first feature vector and the condition embedding vector to generate a conditional latent vector; and a decoder unit that generates a second cross-sectional image of the specific composite material from the conditional latent vector.
  2. The device of claim 1, wherein the first encoder unit comprises an artificial neural network including a CNN (Convolutional Neural Network) model capable of image encoding, configured with Conv2D, Batch Normalization, and Flatten layers, or a Vision Transformer model composed of MultiHeadAttention and Dense layers.
  3. The device of claim 1, wherein the second encoder unit is composed of Dense, MultiHeadAttention, and LayerNormalization layers and processes the condition data so as to reflect specific conditions.
  4. The device of claim 1, wherein the first feature vector is obtained from a first encoder unit of an artificial neural network trained on predetermined training data, the network comprising: a first encoder unit that receives a first cross-sectional image of a predetermined composite material constituting the training data and generates a second feature vector; a second encoder unit that receives predetermined condition data constituting the training data and generates a condition embedding vector; and a decoder unit that generates a second cross-sectional image of the composite material from the second feature vector and the condition embedding vector, wherein the artificial neural network is trained such that the difference between the generated second cross-sectional image and the first cross-sectional image constituting the training data is minimized.
  5. The device of claim 4, wherein the neural network parameters of the neural networks constituting the first encoder unit, the second encoder unit, and the decoder unit are updated so that the difference between the generated second cross-sectional image and the first cross-sectional image constituting the training data is minimized.
  6. The device of claim 1, wherein the decoder unit generates the second cross-sectional image by selectively adopting one of an UpSampling2D + CNN structure, a diffusion structure, and a U-Net style decoder structure.
  7. The device of claim 1, wherein the first cross-sectional image is a distribution image of internal fillers of a composite material, measured on a specimen of the composite material using at least one of a scanning electron microscope (SEM), an X-ray microscope (XRM), a microscope, and a transmission electron microscope (TEM).
  8. The device of claim 1, wherein the first feature vector is a low-dimensional latent space vector that compresses and expresses the features of the first cross-sectional image in a low-dimensional space, and includes information summarizing those features.
  9. The device of claim 1, wherein the specific composite material is a composite material to be developed.
  10. The device of claim 1, wherein the condition data is data obtained through a direct calculation method using image processing software, simulation software, or analytical models.
  11. The device of claim 10, wherein the condition data includes statistical indicators derivable from a frequency distribution table, parameters for fitting a probability density function, information on contact between fillers, information on distance between fillers, information on space between fillers, and material property information of the composite material, wherein the material property information includes the thermal conductivity of the composite material measured on a specimen fabricated through actual compression, or the thermal conductivity of the composite material calculated through a virtual model built in commercial simulation software.
  12. The device of claim 1, wherein the condition embedding vector encodes information reflecting a specific condition or context into vector form, and thus includes the information of the corresponding condition.
  13. The device of claim 1, wherein the conditional latent vector reflects both the features of the input data contained in the first feature vector and the specific conditions or context contained in the condition embedding vector.
  14. A method for generating a cross-sectional image of a composite material that satisfies input conditions of the composite material, comprising: a first feature vector extraction step in which a first encoder unit extracts a first feature vector from a first cross-sectional image, which is an actual image of a predetermined composite material; a condition embedding vector generation step in which a second encoder unit encodes condition data for a specific composite material to generate a condition embedding vector; a conditional latent vector generation step in which a combining unit concatenates the first feature vector and the condition embedding vector to generate a conditional latent vector; and a second cross-sectional image generation step in which a decoder unit generates a second cross-sectional image of the specific composite material from the conditional latent vector.
  15. The method of claim 14, wherein the first encoder unit and the decoder unit consist of artificial neural networks whose output values change depending on updated neural network parameters, and wherein the updated neural network parameters are finally obtained by repeating: a first encoding step of extracting a second feature vector from first cross-sectional images stored in advance; a second encoding step of encoding predetermined condition data of a composite material to generate a condition embedding vector; a second cross-sectional image generation step of generating a second cross-sectional image of a specific composite material by combining the second feature vector and the condition embedding vector; a loss difference calculation step of calculating the difference between the reconstructed second cross-sectional image of the specific composite material and a first cross-sectional image of a predetermined actual composite material; and a neural network parameter update step of updating the neural network parameters so that the calculated difference is minimized.
  16. The method of claim 15, wherein the predetermined condition data of the composite material includes statistical indicators derivable from a frequency distribution table, parameters for fitting a probability density function, information on contact between fillers, information on distance between fillers, information on space between fillers, and material property information of the composite material, wherein the material property information includes the thermal conductivity of the composite material measured on a specimen fabricated through actual compression, or the thermal conductivity of the composite material calculated through a virtual model built in commercial simulation software.
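The encoder-combiner-decoder pipeline in claims 1 and 14 can be illustrated with a minimal NumPy sketch. This is not code from the patent: the projection weights, layer shapes, and `tanh` activations are placeholders chosen only to show how a feature vector and a condition embedding are concatenated into a conditional latent vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image, w_img):
    # Hypothetical stand-in for the first encoder unit: flatten the
    # cross-sectional image and project it to a low-dimensional
    # feature vector (claim 8).
    return np.tanh(image.reshape(-1) @ w_img)

def encode_condition(cond, w_cond):
    # Stand-in for the second encoder unit: embed condition data
    # (e.g. filler-spacing statistics, thermal conductivity) as a
    # vector (claim 12).
    return np.tanh(cond @ w_cond)

def conditional_latent(feat, cond_emb):
    # Combining unit: claim 14 specifies concatenation of the two vectors.
    return np.concatenate([feat, cond_emb])

image = rng.random((8, 8))                # toy 8x8 "cross-sectional image"
cond = rng.random(4)                      # toy condition data
w_img = rng.standard_normal((64, 16))     # 64 pixels -> 16-dim feature vector
w_cond = rng.standard_normal((4, 8))      # 4 conditions -> 8-dim embedding

z = conditional_latent(encode_image(image, w_img),
                       encode_condition(cond, w_cond))
print(z.shape)  # (24,) -- conditional latent vector fed to the decoder
```

A decoder unit (claim 6) would then map `z` back to image space; the sketch stops at the conditional latent vector because that is the quantity the independent claims center on.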

Description

The present invention relates to a filler-reinforced composite material and, more specifically, to an apparatus and method that enable the construction of a 3D virtual model of a composite material by generating a cross-sectional image of the filler-reinforced composite material during its development.

With the recent rise in awareness regarding eco-friendliness, the automotive industry is also striving to develop eco-friendly vehicles in line with national environmental standards. The government is implementing various policies, such as providing subsidies for vehicles using eco-friendly fuels like electric and hydrogen cars, and offering various benefits based on eco-friendly ratings derived from fuel efficiency. In particular, fuel efficiency standards are becoming stricter due to exhaust gas regulations, leading to extensive research aimed at improving fuel economy. While there are various approaches to improving fuel efficiency, such as increasing engine efficiency and enhancing gear-shifting technology, the most effective method is lightweighting the vehicle body. With the recent expansion of the electric vehicle market, the market for related electronic components is also growing rapidly, and there is a continuing demand for lightweighting of electric vehicles and electronic components. Furthermore, due to rising metal post-processing costs, much research is being conducted to replace existing metal parts with engineering plastics (EP). EP is a high-strength plastic used as an industrial or structural material. It is designed by imparting mechanical, thermal, and shielding properties to general plastics or functional resins (matrix) and by heterogeneously compounding various fillers to improve properties.
However, since these fillers are used in combination, including various organic and inorganic types depending on their composition, optimizing the physical properties of the final composite material requires a great deal of know-how and trial and error.

FIGS. 1 and 2 are drawings showing the configuration of a cross-sectional image generation device for a filler-reinforced composite material according to an embodiment of the present invention. FIG. 3 is a drawing for explaining a method for generating a cross-sectional image of a filler-reinforced composite material according to an embodiment of the present invention. FIG. 4 is a diagram illustrating the procedure for machine learning of each neural network shown in FIG. 3.

Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. However, the present invention is not limited to the embodiments disclosed below but may be implemented in various different forms; the embodiments are provided merely to ensure that the disclosure of the present invention is complete and to fully inform those skilled in the art of the scope of the invention. To explain the invention in detail, the drawings may be exaggerated, and like reference numerals in the drawings refer to like elements.

Referring to FIGS. 1 and 2, a cross-sectional image generation device for a filler-reinforced composite material according to an embodiment of the present invention includes a first cross-sectional image storage unit (100), a first encoder unit (200), a second encoder unit (300), a combining unit (400), and a decoder unit (500).
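The training procedure of FIG. 4 (spelled out in claim 15) amounts to repeatedly reconstructing stored cross-sectional images from their conditional latent vectors and updating parameters to shrink the reconstruction difference. The following is a minimal sketch of that loop using a toy linear conditional autoencoder in NumPy; the data, dimensions, and learning rate are all hypothetical and stand in for whatever network the patent's encoders and decoder actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: flattened "cross-sectional images" paired with condition vectors.
X = rng.random((32, 16))      # 32 images, 16 pixels each
C = rng.random((32, 4))       # matching condition data

d_lat = 6
W_enc = rng.standard_normal((16, d_lat)) * 0.1       # first encoder
W_cnd = rng.standard_normal((4, d_lat)) * 0.1        # second encoder
W_dec = rng.standard_normal((2 * d_lat, 16)) * 0.1   # decoder

def forward(X, C):
    # Conditional latent vector: concatenated feature + condition embedding.
    Z = np.concatenate([X @ W_enc, C @ W_cnd], axis=1)
    return Z, Z @ W_dec        # (latent, reconstructed image)

loss0 = float(np.mean((forward(X, C)[1] - X) ** 2))  # loss before training

lr = 0.05
for step in range(500):
    Z, X_hat = forward(X, C)
    err = X_hat - X                        # loss difference calculation step
    # Gradient descent on the mean-squared reconstruction loss
    # (neural network parameter update step of claim 15).
    g_z = err @ W_dec.T
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ g_z[:, :d_lat]) / len(X)
    W_cnd -= lr * (C.T @ g_z[:, d_lat:]) / len(X)

loss = float(np.mean((forward(X, C)[1] - X) ** 2))
print(loss < loss0)   # reconstruction difference shrinks over training
```

The real device would replace the linear maps with the Conv2D/Transformer encoders and UpSampling2D, diffusion, or U-Net decoder named in claims 2, 3, and 6, but the minimization target is the same reconstruction difference.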
The first cross-sectional image storage unit (100) stores a first cross-sectional image of a predetermined composite material. The first cross-sectional image may be a 2D image of the distribution of internal fillers of the composite material, measured on a specimen of an actual sample composite material using a scanning electron microscope (SEM), an X-ray microscope (XRM), a microscope, a transmission electron microscope (TEM), or the like. In addition, the first cross-sectional image storage unit (100) can perform image preprocessing, such as contrast adjustment, noise removal filtering, and watershed filtering, so that embedding can be performed effectively on the stored first cross-sectional image. Furthermore, while the associated condition data (physical information) is kept the same, augmentation techniques such as left-right and up-down flipping may be applied to the image.

The first encoder unit (200) takes the first cross-sectional image as input and encodes it into a low-dimensional latent space vector, called the first feature vector. The first feature vector is a low-dimensional latent space vector that compresses and represents the features of the first cross-sectional image in a low-dimensional space.
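Claim 2 describes the first encoder unit as, for example, a Conv2D + Batch Normalization + Flatten stack. A dependency-free NumPy sketch of that pattern is shown below; the convolution, the normalization (a crude per-image stand-in for batch normalization), and the projection weights are all illustrative placeholders, not the patent's actual layers.

```python
import numpy as np

def conv2d(img, kernel):
    # Minimal valid-mode 2D convolution (stand-in for a Conv2D layer).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def encode(img, kernel, w_proj):
    fmap = np.maximum(conv2d(img, kernel), 0.0)          # Conv2D + ReLU
    fmap = (fmap - fmap.mean()) / (fmap.std() + 1e-6)    # normalization stand-in
    flat = fmap.reshape(-1)                              # Flatten
    return flat @ w_proj                                 # low-dim feature vector

rng = np.random.default_rng(2)
img = rng.random((10, 10))                # toy cross-sectional image
kernel = rng.standard_normal((3, 3))
w_proj = rng.standard_normal((64, 8))     # 8x8 feature map -> 8-dim latent

z = encode(img, kernel, w_proj)
print(z.shape)  # (8,) -- the first feature vector
```

In practice a framework implementation (Keras Conv2D, BatchNormalization, Flatten, or the Vision Transformer alternative of claim 2) would replace these hand-rolled operations.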