US-12620078-B2 - Method for the automated support of an inspection and/or condition monitoring of objects of a production system


Abstract

A method for the automated support of an inspection and/or condition monitoring of objects of a production system is provided. A processor gathers or calculates an actual pose of an object. The actual pose specifies a translation and/or rotation with respect to a target pose of the object. A processor calculates a scaled pose of the object from the actual pose of the object by scaling the translation and/or rotation with respect to the target pose of the object. The processor displays an animated focus graphic. The focus graphic displays alternately a graphical image of the object in the target pose and a graphical image of the object in the scaled pose, and the scaling is selected such that a deviation of the actual pose from the target pose, which deviation is diagnostically relevant for the inspection and/or condition monitoring of the object, is clearly perceptible.

Inventors

  • Axel Platz

Assignees

  • SIEMENS AKTIENGESELLSCHAFT

Dates

Publication Date
2026-05-05
Application Date
2021-12-15
Priority Date
2020-12-21

Claims (20)

  1. A method for automatically assisting with an inspection and/or condition monitoring of objects of a production system, said method comprising: a processor gathering or calculating an actual pose of an object from sensor data in a focus data record, wherein the sensor data contain measured values of measurements by sensors on the object in the production system and/or data derived from the measured values, wherein the actual pose of the object indicates a translation and/or rotation with respect to a target pose of the object; the processor or a different processor calculating a scaled pose of the object from the actual pose of the object by virtue of the processor scaling the translation and/or rotation with respect to the target pose of the object; and the processor or the different processor displaying, on a graphical user interface, a moving focus graphic representing the sensor data of the focus data record, said moving focus graphic alternately showing a first graphical representation of the object in the target pose of the object and a second graphical representation of the object in the scaled pose of the object.
  2. The method as claimed in claim 1, wherein the object is a holder which holds a workpiece, or wherein the object is the workpiece which is held by the holder, or wherein the object includes the holder and the workpiece such that the holder holds the workpiece.
  3. The method as claimed in claim 2, wherein the holder is a suspension means and/or wherein the workpiece is a body.
  4. The method as claimed in claim 1, wherein the sensor data in the focus data record are updated continuously by new measurements, and wherein the actual pose of the object, the scaled pose of the object and the focus graphic are recalculated continuously in real time based on the updated sensor data.
  5. The method as claimed in claim 1, wherein the scaling is selected in such a manner that a direction of the translation and/or rotation can be seen by scaling the translation by a factor of between 10 and 200 and/or by scaling the rotation by the factor of between 10 and 200.
  6. The method as claimed in claim 1, wherein the focus graphic shows an animation in which the object continuously moves back and forth between the target pose of the object and the scaled pose of the object, wherein the movement from the target pose to the scaled pose occurs in a time duration of between 0.4 and 1.7 seconds or between 0.8 and 0.9 seconds, and wherein the movement from the scaled pose to the target pose occurs in said time duration.
  7. The method as claimed in claim 6, wherein the animation increasingly colors the object during the movement to the scaled pose of the object on the basis of an extent of the translation and/or rotation.
  8. The method as claimed in claim 1, further comprising: the processor or a different processor gathering or calculating an actual pose of a comparison object from sensor data in a comparison data record, which contain measured values of measurements by sensors on the comparison object in the production system and/or data derived from the measured values, wherein the actual pose of the comparison object indicates a translation and/or rotation with respect to a target pose of the comparison object; the processor or the different processor calculating a scaled pose of the comparison object from the actual pose of the comparison object by virtue of the processor scaling the translation and/or rotation with respect to the target pose of the comparison object; and the processor or the different processor displaying a moving comparison graphic, which represents the sensor data of the comparison data record, beside the focus graphic on the graphical user interface, wherein the comparison graphic alternately shows a third graphical representation of the comparison object in the target pose of the comparison object and a fourth graphical representation of the comparison object in the scaled pose of the comparison object.
  9. The method as claimed in claim 1, comprising: the processor or a different processor accessing a database which contains a set of data records containing the focus data record, wherein each data record from the set of data records contains, for a respective object from a set of objects in the production system, sensor data containing measured values of measurements by sensors on the respective object and/or data derived therefrom, and a first item of context information, a second item of context information and a third item of context information characterizing the respective object itself or a situation of the respective object at the time of the measurements on the respective object; the processor or the different processor selecting first data records from the set of data records, a first context information of which does not correspond to the first context information of the focus data record, and a second context information of which corresponds to the second context information of the focus data record; the processor or the different processor, for each of the first data records, gathering or calculating an actual pose of the respective object from the respective sensor data, which actual pose indicates a translation and/or rotation with respect to a target pose of the respective object, and calculating a scaled pose of the respective object from the actual pose of the respective object by virtue of the processor scaling the translation and/or rotation with respect to the target pose of the respective object; the processor or the different processor lining up moving first graphics, which each represent the sensor data of one of the first data records, beside the focus graphic on a first axis on the graphical user interface, wherein the first graphics alternately show third graphical representations of the respective objects in the target poses of the respective objects and fourth graphical representations of the respective objects in the scaled poses of the respective objects; the processor or the different processor selecting second data records from the set of data records, the first context information of which corresponds to the first context information of the focus data record, and the second context information of which does not correspond to the second context information of the focus data record; the processor or the different processor, for each of the second data records, gathering or calculating an actual pose of the respective object from the respective sensor data, which actual pose indicates a translation and/or rotation with respect to a target pose of the respective object, and calculating a scaled pose of the respective object from the actual pose of the respective object by virtue of the processor scaling the translation and/or rotation with respect to the target pose of the respective object; and the processor or the different processor lining up moving second graphics, which each represent the sensor data of one of the second data records, on a second axis, which intersects the first axis at the position of the focus graphic, on the graphical user interface, wherein the second graphics alternately show the third graphical representations of the respective objects in the target poses of the respective objects and the fourth graphical representations of the respective objects in the scaled poses of the respective objects.
  10. The method as claimed in claim 9, wherein the database is in a memory connected to the processor or is in a cloud, wherein the processor, a different processor or a plurality of other processors receive(s), for each object, once or repeatedly, the sensor data after the respective measurements and store(s) the sensor data for each object together with the first context information, the second context information and the third context information in the respective data record, thus forming the set of data records in the database.
  11. The method as claimed in claim 9, wherein the data records are updated continuously on the basis of new measurements by the sensors, and wherein the focus graphic, the first graphics and/or the second graphics is/are updated continuously in real time in order to represent the updated sensor data.
  12. The method as claimed in claim 9, wherein the first context information, the second context information and the third context information each indicate a time of the measurements, or a manufacturing station at which the measurements are carried out, or a type or serial number of the object, or a type or serial number of a secondary object which was related to the object and was mechanically connected to the object and/or acted on the object at the time of the measurements, or a type or serial number of one of the sensors.
  13. The method as claimed in claim 9, wherein the first data records and the second data records are selected in such a manner that third context information of the first data records and the second data records corresponds to the third context information of the focus data record, the method comprising: the processor or a different processor selecting third data records from the set of data records, the first context information of which corresponds to the first context information of the focus data record, the second context information of which corresponds to the second context information of the focus data record, and the third context information of which does not correspond to the third context information of the focus data record; the processor or the different processor, for each of the third data records, gathering or calculating an actual pose of the respective object from the respective sensor data, which actual pose indicates a translation and/or rotation with respect to a target pose of the respective object, and calculating a scaled pose of the respective object from the actual pose of the respective object by virtue of the processor scaling the translation and/or rotation with respect to the target pose of the respective object, wherein the processor or the different processor lines up moving third graphics, which each represent the sensor data of one of the third data records, on a third axis, which intersects the first axis and the second axis at the position of the focus graphic, on the graphical user interface, wherein the third graphics alternately show the third graphical representations of the respective objects in the target poses of the respective objects and the fourth graphical representations of the respective objects in the scaled poses of the respective objects.
  14. The method as claimed in claim 13, further comprising: the processor or a different processor evaluating an initial user interaction which selects the focus data record from the set of data records, and/or evaluating a first user interaction which selects the first context information from a set of context information stored in the focus data record, and/or evaluating a second user interaction which selects the second context information from the set of context information, and/or evaluating a third user interaction which selects the third context information from the set of context information.
  15. The method as claimed in claim 13, wherein the processor or a different processor renders multiple graphics and then stores the multiple graphics in the respective data records, or retrieves the multiple graphics from the respective data records, or retrieves the multiple graphics from a server which renders the multiple graphics and/or keeps the multiple graphics in a memory, and wherein the multiple graphics comprise the focus graphic, the first graphics, the second graphics and/or the third graphics.
  16. The method as claimed in claim 13, wherein the focus graphic, the first graphics, the second graphics and/or the third graphics are three-dimensional graphics and are described by code of a Web3D description language, or are two-dimensional moving raster graphics which appear in a three-dimensional manner.
  17. The method as claimed in claim 13, wherein the focus graphic, the first graphics, the second graphics and/or the third graphics contain arrows, numbers and/or other symbols.
  18. The method as claimed in claim 13, wherein the focus graphic is arranged at the center of the first axis, the second axis and the third axis, and/or wherein the focus graphic, the first graphics, the second graphics and/or the third graphics are arranged in an equidistant manner on the respective axis.
  19. The method as claimed in claim 13, wherein the first axis, the second axis and the third axis are orthogonal to one another and are represented by a projection onto the graphical user interface, wherein the projection is a central projection.
  20. A terminal for automatically assisting with an inspection and/or condition monitoring of objects, having at least one processor programmed to carry out the method as claimed in claim 1, said terminal having a display configured to output the graphical user interface, and having an interface configured to access the database.
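The pose-scaling step at the heart of claim 1, with the magnification factor range of claim 5, can be sketched as follows. This is a hypothetical illustration only; the `Pose` representation (position in millimetres plus three Euler angles) and all values are assumptions, not taken from the patent.

```python
# Sketch of the pose scaling described in claims 1 and 5: the deviation of
# the actual pose from the target pose is magnified by a factor (between
# 10 and 200 per claim 5) so that small, diagnostically relevant
# deviations become clearly visible in the alternating animation.
from dataclasses import dataclass


@dataclass
class Pose:
    # Hypothetical representation: position in mm, orientation as Euler
    # angles in degrees (per the DIN EN ISO 8373 definition of "pose").
    x: float
    y: float
    z: float
    rx: float
    ry: float
    rz: float


def scaled_pose(target: Pose, actual: Pose, factor: float) -> Pose:
    """Scale the translation and rotation of the actual pose relative
    to the target pose by the given factor (claim 1)."""
    t = (target.x, target.y, target.z, target.rx, target.ry, target.rz)
    a = (actual.x, actual.y, actual.z, actual.rx, actual.ry, actual.rz)
    return Pose(*(ti + factor * (ai - ti) for ti, ai in zip(t, a)))


# Example: a 0.2 mm offset in x and a 0.1 degree tilt, magnified 100x,
# become a clearly perceptible 20 mm / 10 degree deviation on screen.
target = Pose(0, 0, 0, 0, 0, 0)
actual = Pose(0.2, 0, 0, 0.1, 0, 0)
print(scaled_pose(target, actual, 100.0))
```

The animation of claim 6 would then interpolate between `target` and the scaled pose, with each half-cycle lasting roughly 0.4 to 1.7 seconds.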

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to PCT Application No. PCT/EP2021/085904, having a filing date of Dec. 15, 2021, which claims priority to DE Application No. 10 2020 216 400.2, having a filing date of Dec. 21, 2020; the entire contents of both are hereby incorporated by reference.

FIELD OF TECHNOLOGY

The following relates to a method for the automated support of an inspection and/or condition monitoring of objects of a production system.

BACKGROUND

In automobile manufacture, bodies are transported in fully automatic conveying systems. After body construction, they pass through a painting system before they are supplied to the final assembly line. The fully automatic conveying systems, for example in an assembly line, use assembly supports to which the body is fixed as an object for assembly. The assembly supports are generally referred to as holders below, and the objects for assembly are generally referred to as workpieces. In addition to automobile manufacture and assembly processes in the stricter sense, embodiments of the invention generally relate to production systems which transport workpieces in fully automatic conveying systems by holders, and to the inspection of these holders in order to determine and assess their actual condition. In addition, embodiments of the invention also relate to production systems in which the position or orientation of objects, for example drive shafts, is important in other contexts for an inspection or condition monitoring of the objects. The position and orientation of an object are combined below under the term "pose". DIN EN ISO 8373 defines the term "pose" as a combination of the position and orientation of an object in three-dimensional space, which is predefined as the base coordinate system. The position of the object may be stated, for example, in three coordinates as the distance between its mass point and the origin of the base coordinate system.
The orientation of the object may be described, for example, by spanning a further coordinate system at its mass point; for each coordinate axis of this coordinate system, an angular offset with respect to the respective axis of the base coordinate system is indicated by one of three angle specifications. Different poses can be mapped to one another by translation and rotation.

According to DIN EN 13306 and DIN 31051, maintenance denotes a combination of measures which are used to obtain or restore a functional condition of an object. One of these measures is inspection, which is used to determine and assess the actual condition of the object and to determine possible causes of impairments. The result of the inspection may involve identifying repair measures for the object, which are subsequently carried out. In this case, the term "object" denotes, for example, a component, a part, a device or a subsystem, a functional unit, an item of equipment or a system, which can be considered alone. During condition monitoring, machine conditions are regularly or permanently captured by measuring and analyzing physical variables. For this purpose, sensor data are processed and analyzed, in particular in real time. Monitoring the machine's condition enables condition-oriented maintenance. Both functional failures of objects such as holders in production systems and their repair and preventive inspection and maintenance work are associated with high costs in manufacturing, since they can result in a downtime of the respective manufacturing section.

SUMMARY

An aspect relates to automatically assisting with an inspection and/or condition monitoring of objects of a production system. The embodiments described can be implemented for the focus graphic, the first graphics, the second graphics and the third graphics on three axes, or only for the focus graphic, the first graphics and the second graphics on two axes.
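As a worked illustration of the pose concept described above, the deviation of an actual pose from a target pose can be expressed as a translation vector plus three angular offsets; it is this deviation that the scaled animation magnifies. The tuple layout and all numeric values here are hypothetical, not taken from the patent.

```python
# Minimal sketch: a pose as a 6-tuple of position (x, y, z) plus three
# Euler angles, following the DIN EN ISO 8373 definition quoted above.
# The deviation of an actual pose from a target pose is then a
# translation vector and three angle offsets.

def pose_deviation(target, actual):
    """Return (translation, rotation) mapping the target pose to the
    actual pose, component-wise."""
    translation = tuple(a - t for a, t in zip(actual[:3], target[:3]))
    rotation = tuple(a - t for a, t in zip(actual[3:], target[3:]))
    return translation, rotation


target = (100.0, 50.0, 10.0, 0.0, 0.0, 90.0)  # mm, mm, mm, deg, deg, deg
actual = (100.3, 50.0, 10.0, 0.0, 0.5, 90.0)
t, r = pose_deviation(target, actual)
print(t, r)  # small offsets that the scaled animation would magnify
```

A 0.3 mm offset or a 0.5 degree tilt of a holder is barely visible in a true-to-scale rendering, which is why the method scales such deviations before animating them.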
The advantages mentioned below need not necessarily be achieved by the subjects of the independent patent claims. Rather, they may also be advantages which are achieved only by individual embodiments, variants or developments. The same applies to the explanations below.

A user-centered approach for automatically assisting with an inspection or condition monitoring in a production system is provided, which visualizes sensor data by a special visualization concept that allows deviations in the sensor data to stand out intuitively. Since the data were previously available only as tables and columns of numbers, this means a significant simplification, an increase in efficiency and a qualitative improvement for the maintenance engineer. Instead of only visualizing when limit values are exceeded, as previously, the present user-centered approach makes it possible to visually capture the type of deviation that is present. The maintenance engineer can infer possible causes from the type of deviation that is present. Th