Measuring the geometry and dimensions of physical objects is an important task in the manufacturing process. Running a production line efficiently requires constant monitoring of the production process. By continuously measuring the geometry of the objects produced by a production line, drifts in the production process can be detected. This makes it possible to adjust the process parameters before deviations from the intended geometry become so large that the produced parts must be rejected. Keeping the production line efficient in this way saves money and resources.
This kind of process control often requires measuring the geometry of meter-sized objects with µm precision. To achieve the required precision, very stable bridge-type coordinate measuring machines are currently needed to position the sensor head which performs the final measurement (by tactile or optical means). Such bridge-type coordinate measuring machines are very stable but not as flexible as robot arms. Unfortunately, robot arms are mechanically not stable enough to position and orient the sensor head with the required precision. If it were possible to determine the position and orientation of a sensor head mounted on such a robot arm with the required accuracy, high-precision metrology solutions could be integrated much more seamlessly into today's production processes, thereby enabling better monitoring and thus increased efficiency of production lines.
To achieve this, it is required:
- to perform 3D localization of the sensor head in a meter-scale volume with µm precision, preferably accuracy
- to determine the orientation of the sensor head with a precision, preferably accuracy, of 50 µrad, ideally 10 µrad.
Solving even just one of these tasks would already be of significant relevance.
To date, three main techniques are used to perform such localization and orientation measurements:
- Machine vision-based systems: A set of markers is attached to the machine part to be localized. The set of markers is observed from a number of perspectives and the position and the orientation of the marker set are computed (e.g. photogrammetry, stereoscopy).
- On-board sensors for the robot axes provide absolute angles of the respective robot joints, which allows the position and orientation of the robot's end effector to be computed on the basis of more or less sophisticated robot models. These models attempt to compensate for distortions of the robot kinematics caused, for example, by varying temperatures, varying payloads, varying motion parameters, or the like.
- Sensor fusion approaches attempt to alleviate deficiencies of individual sensor principles by fusing, for example, absolute angle information with accelerometer data from inertial measurement units (IMUs).
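To see why the model-based approach above struggles to reach the stated targets, consider how a joint-angle error propagates through the forward kinematics. The following is a minimal sketch, assuming a hypothetical planar two-link arm with meter-scale links (the function and all numbers are illustrative, not taken from any particular robot):

```python
import math

def fk_planar(link_lengths, joint_angles):
    """Forward kinematics of a planar serial arm.

    Returns the end-effector position (x, y) and heading, obtained by
    accumulating joint angles along the chain and summing link vectors.
    """
    x = y = theta = 0.0
    for length, q in zip(link_lengths, joint_angles):
        theta += q
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y, theta

links = [1.0, 1.0]      # two 1 m links: a meter-scale workspace
angles = [0.3, -0.2]    # nominal joint angles in rad (illustrative)
x0, y0, _ = fk_planar(links, angles)

# Perturb the base joint by 50 µrad, the orientation target named above:
eps = 50e-6
x1, y1, _ = fk_planar(links, [angles[0] + eps, angles[1]])
err = math.hypot(x1 - x0, y1 - y0)  # end-effector displacement in m
print(f"{err * 1e6:.1f} µm")        # roughly 100 µm at ~2 m reach
```

A single 50 µrad error in one joint already displaces the end effector by on the order of 100 µm at 2 m reach, i.e. two orders of magnitude above the µm localization target, which is why encoder-plus-model approaches alone are insufficient.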
However, each of these techniques has significant limitations. Hence, a new method is needed that overcomes the limitations of the established techniques. The following five criteria determine the value of such a new technology for practical applications:
- localize the sensor head in 3D with µm precision
- determine the orientation of the sensor head with 50 µrad precision (preferably 10 µrad)
- offer a tracking bandwidth from ideally 0 Hz up to 2000 kHz
- be easy to calibrate
- yield the first and second derivatives of the detected movement for all 6 degrees of freedom (3D localization and orientation measurement)
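The last criterion can be met in post-processing once the pose is sampled at the tracking bandwidth: the first and second derivatives follow from numerical differentiation of the pose samples. A minimal sketch using central differences, with a made-up 6-DOF trajectory purely for illustration:

```python
import numpy as np

def pose_derivatives(poses, dt):
    """First and second derivatives of a sampled 6-DOF trajectory.

    poses: array of shape (N, 6) holding x, y, z and three small
    orientation angles per sample; dt: sampling interval in seconds.
    np.gradient uses central differences in the interior, one-sided
    differences at the ends.
    """
    vel = np.gradient(poses, dt, axis=0)  # first derivative
    acc = np.gradient(vel, dt, axis=0)    # second derivative
    return vel, acc

# Hypothetical trajectory sampled at 2 kHz: uniform translation along x
# and a constant slow rotation about one orientation axis.
dt = 1.0 / 2000.0
t = np.arange(0.0, 0.1, dt)
poses = np.zeros((t.size, 6))
poses[:, 0] = 0.5 * t       # 0.5 m/s along x
poses[:, 5] = 100e-6 * t    # 100 µrad/s about one axis

vel, acc = pose_derivatives(poses, dt)
# vel[:, 0] recovers 0.5 m/s; acc is (numerically) zero for this motion.
```

Central differences are exact for the uniform motion used here; for real, noisy measurements the differentiation would typically be combined with filtering, since differentiation amplifies high-frequency noise.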
A technology fulfilling all of the criteria listed above would be ideal. However, a new technology is already valuable if it outperforms the established techniques.