The next landmark in vision
High-powered computers and laser and camera systems are making true 3D vision a reality, explains E&T.
The emergence of high-speed 3D image processing technologies is a major milestone for the vision industry: it represents the most significant step change in processing capabilities since the development of 2D pattern recognition algorithms.
3D measurement methods have been around for a while but have rarely been used in machine vision due to the computational loads involved. However, not only has processing power vastly improved over the years, but laser and camera systems are now available at lower cost and with far higher accuracy.
A number of developments have boosted processing power, such as faster and multi-core CPUs, the parallel processing capabilities of graphics processing units (GPUs), and the use of FPGAs to take the load away from the more complex processing units.
Not only does the technology allow fast, accurate 3D measurements, but new algorithms for 3D matching have also been developed, which allow the 3D point cloud of a test component to be compared very quickly to that of a "good" component or one derived from the original CAD data. Given the choice, most engineers would prefer to get genuine 3D information rather than extrapolated 2D representations.
3D Measurement Techniques
The three most commonly used 3D measurement techniques are Stereoscopy, Monocular 3D imaging and Laser triangulation.
3D stereoscopy requires two cameras taking images at the same time. The machine vision system needs to locate the same point in the two separate images and then calculate the disparity between them, and from that the distance from the cameras. Object localisation in 3D is possible from multiple images, and this works on any object as long as the same point is visible in both images.
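The stereo calculation described above can be sketched with the standard disparity-to-depth relation Z = f·B/d. The focal length, baseline and pixel values below are illustrative assumptions, not figures from any particular system.

```python
# Depth from stereo disparity: a minimal sketch with assumed camera values.

def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Distance to a point seen in both images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("point must be visible in both images with positive disparity")
    return f_px * baseline_mm / disparity_px

# A point matched at x=640 in the left image and x=600 in the right
# has a disparity of 40 pixels.
z = depth_from_disparity(f_px=800, baseline_mm=120, disparity_px=640 - 600)
print(round(z, 1))  # distance in mm: 2400.0
```

The hard part in practice is the matching step itself: finding the same physical point in both images before the subtraction can be done.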
The simplest approach to Monocular 3D Imaging is to have an object of known proportions. The system then looks for expected points and can work out the orientation and distance from these.
Laser triangulation is probably the method of choice and is used in many applications. The simplest form is a single laser reflected by the object to a sensor. A change in height can be measured by the change in angle of the reflection to give a single height measurement. By fanning the laser out into a line, or stripe, it is possible to collect height information for a set of points within a plane (Fig 1). If the object is then moved, such as on a conveyor belt, the third dimension can be measured, provided the angle between light source and camera is known. Height calibration is achieved using objects of known heights.
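The triangulation geometry above reduces to simple trigonometry: a stripe displacement in the image maps to a height change through the angle between laser and camera. The mm-per-pixel scale and the 45-degree angle below are assumed example values; a real system fixes the scale by calibrating on objects of known heights.

```python
import math

# Laser triangulation: a sketch assuming the camera views the laser
# stripe at angle theta to the laser plane, so a displacement dx (in mm,
# after calibration) corresponds to a height change h = dx / tan(theta).

def height_from_displacement(dx_mm, theta_deg):
    return dx_mm / math.tan(math.radians(theta_deg))

# Assume calibration gave 0.1 mm per pixel, and the stripe moved 20 pixels.
h = height_from_displacement(dx_mm=20 * 0.1, theta_deg=45.0)
print(round(h, 2))  # tan(45 deg) = 1, so the height change is 2.0 mm
```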
For the best accuracy, the peak intensity of the laser line must be measured. Typical methods include Centre of Gravity measurements and Peak Detection. For many surfaces (Lambertian surfaces) such as plastics (PVC and similar), clean non-specular metal, walls and ceramics, the reflected line has a Gaussian shape and the maximum intensity is in the middle of the stripe image.
However, for translucent Lambertian surfaces, such as other plastics (Nylon and similar), animal tissue, silicon, resins, marble and other minerals, oil-coated metals and some timber, the laser light penetrates into the material due to its light transmission properties. The reflected line shape is now non-Gaussian and the centre of gravity and peak intensity do not coincide, so peak detection methods are required.
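The difference between the two estimators can be shown on a one-dimensional intensity profile across the stripe. The sample values below are made up: a symmetric profile stands in for a Gaussian reflection, and a skewed one for a translucent material where light penetrates the surface.

```python
# Centre of gravity vs peak detection on a laser-stripe intensity profile.
# For a symmetric (Gaussian-like) profile the two estimates coincide;
# for a skewed profile they diverge, so a peak detector is needed.

def centre_of_gravity(profile):
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

def peak_index(profile):
    return max(range(len(profile)), key=lambda i: profile[i])

symmetric = [1, 4, 9, 4, 1]      # clean non-specular surface
skewed = [1, 2, 9, 6, 5, 3, 1]   # translucent material: tail on one side

print(centre_of_gravity(symmetric), peak_index(symmetric))  # 2.0 2
print(round(centre_of_gravity(skewed), 2), peak_index(skewed))  # 2.93 2
```

On the skewed profile the centre of gravity is pulled almost a full pixel towards the tail, while the peak stays on the true stripe position.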
3D measurements are particularly important where the product has relatively unpredictable variations in dimensions.
Two examples are random defects in materials such as wood, and industries such as food, where cooking processes result in non-uniform products. 3D models of the inspected product give the user all of the traditional benefits associated with 2D inspection: reduced reliance on manpower for quality control; availability of live production data for monitoring systems; improved product consistency; increased throughput and reduced wastage; as well as 100 per cent inspection of shape, size and other parameters such as colour.
Fig 2a shows a laser scanning over a small depression in a piece of wood. Fig 2b shows a colour-coded depth map of the surface, clearly revealing the depression. By calibrating the system using wood samples with planes of 1.25mm thickness, the depth of the defect can be measured as 1.18mm (Fig 2c).
Another application is 3D inspection of pizza bases to check that the dough is neither overcooked nor undercooked, that the shape is within specified parameters and the height of the bread is neither too high nor too low. The system is linked to a reject mechanism so that defective bases can be automatically rejected before the valuable topping is applied.
Fig 3 shows the pizza 3D model as a wireframe, built up from a number of laser profiles which illuminate the pizza bread as it passes under the 3D measurement system. The measurement data, which include diameter, height and the outer crust, appear at the bottom of the image.
This particular system offers the following operational performance:
- 20 images per second;
- line speed can be up to 600mm per second;
- inspection of four bases per second, or 240 items per minute, for 30cm diameter pizza bases.
3D Shape Matching
A great number of quality control machine vision applications can be performed through the comparison of an object's image with an ideal "model" or "golden" image of the object. Template matching has proven to be a successful technique for conventional 2D image processing.
One of the most demanding imaging applications is the inspection and measurement of complex freeform objects where completeness and dimensional accuracy have to be checked in real 3D. Before two surfaces can be compared, they must be aligned in 3D. The 'cloud' of 3D points corresponding to the scanned object should be moved until it is completely overlapped with the cloud of points corresponding to the model object. Aligning objects in 2D means finding a translation along the X and Y axes and a rotation.
However, aligning objects (clouds of points) in 3D means finding a translation along the X, Y and Z axes, together with a rotation about each of the X, Y and Z axes. That is: three translations and three rotations, a total of six degrees of freedom, compared with three degrees of freedom in the 2D case.
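A minimal sketch of those degrees of freedom: applying a rigid transform (rotation plus translation) to a point cloud. For brevity only the rotation about Z is written out; a full aligner would compose rotations about all three axes and search for the six parameters that best overlap the part cloud with the model cloud.

```python
import math

# Rigid transform of a 3D point cloud: rotation about Z plus a
# translation. The cloud below is a made-up two-point example.

def rotate_z(p, angle_deg):
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    x, y, z = p
    return (c * x - s * y, s * x + c * y, z)

def transform(cloud, angle_deg, t):
    tx, ty, tz = t
    return [(x + tx, y + ty, z + tz)
            for x, y, z in (rotate_z(p, angle_deg) for p in cloud)]

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = transform(cloud, angle_deg=90.0, t=(0.0, 0.0, 5.0))
print([tuple(round(v, 6) for v in p) for p in moved])
```

Finding the inverse of such a transform from the two clouds alone is the alignment problem the matching algorithms have to solve quickly.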
New patent-pending 3D matching algorithms developed by Aqsense SL offer a new practical approach for 3D imaging, providing high accuracy and a processing speed fast enough to keep pace with modern production lines.
Differences between the images of the template and the object highlight part deviations and can be identified in real-time, allowing pass/fail decisions to be made. The algorithm works internally on real 3D point clouds and automatically adjusts position errors or tipping and tilts in all six axes. Hence, there is no need for highly accurate part positioning and handling as the software aligns the part image in 3D before comparison.
Once the part surface has been aligned to the model surface, the comparison is performed by surface subtraction - a point-to-point subtraction between model and part surfaces - to produce a disparity map. These values can be converted to metric units provided that a calibration procedure has been applied beforehand for the surface acquisition. This approach results in a reduced mechanical effort and assures high inspection throughput for 100 per cent inline control.
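The surface subtraction step can be sketched on small height grids. The model and part values below are invented, and the tolerance is an assumed example; in a real system the values would already have been converted to millimetres by the calibration procedure.

```python
# Surface subtraction: point-to-point difference between aligned part
# and model height maps, producing a disparity map, followed by a
# simple pass/fail check against a tolerance.

def disparity_map(part, model):
    return [[p - m for p, m in zip(pr, mr)]
            for pr, mr in zip(part, model)]

def passes(disparity, tol_mm):
    return all(abs(d) <= tol_mm for row in disparity for d in row)

model = [[1.0, 1.0], [1.0, 1.0]]
part = [[1.02, 0.98], [1.0, 1.3]]  # one point 0.3 mm too high

d = disparity_map(part, model)
print(passes(d, tol_mm=0.1))  # False: the 0.3 mm deviation fails
```

The same disparity values, mapped to a colour scale, give the pseudo-colour representation described for Fig 5.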
Fig 4 shows two 3D surfaces: the green one represents the golden model or reference object, while the red one represents the part under inspection. Fig 4a shows both surfaces before the alignment and Fig 4b shows the two surfaces after alignment. Fig 5 is the pseudo-colour representation of the disparity map: green means zero difference between model and part, red means that the part is higher than the model, and blue means the part is lower than the model.
The software is extremely flexible and allows for the inspection of different parts at the same time. Thanks to this approach quick product changes are possible and no special calibration is required.
Applications are many and varied and include:
- adhesive bead inspection;
- welded seam inspection;
- component inspection;
- surface inspection;
- inspection of tyres and rubber seals;
- geometric inspection;
- BGA, PCB and solder paste inspection;
- rail measurement;
- foodstuff portioning.
A 1 million point surface with an initial misalignment of 10 degrees and 10mm in the three axes X, Y and Z can be aligned in approximately 100ms, with an alignment error of <1 micron, using a suitable processor.
The high speed capability of the 3D matching algorithm places demands on camera technology to deliver high volumes of 3D points at high speed. Currently, GigE Vision 3D cameras are available which can offer more than 104 million 3D points per second with a resolution of 1280 x 1024 pixels. Data is exchanged via a Gigabit Ethernet interface which complies with the GigE Vision standard, thus making integration easy. With the plug and play GenICam protocol, configuration is quick and simple.