A new approach to multi-camera 3D imaging is set to make a big impact in accurate volume measurement for irregularly shaped objects such as sausages in the food industry.
In principle, the measurement method is straightforward: measure the cross-sectional area of the product as it moves along a conveyor belt and multiply it by the length to give the volume (Figure 1).
When the appropriate volume has been measured, a slicer can be activated to cut the sausage to size. Since the sausage is not uniformly cylindrical, the more cross-sectional ‘slices’ that can be measured, the more accurate the volume measurement. Since only the surface profile can actually be seen at any point along the sausage, the cross-sectional area must be calculated using measurements of the profile. 3D imaging is a well-established technique for making volume measurements for portion control and checking that the product will be the correct size to fit into the packaging. Traditional 3D profiling uses a laser line projection which is imaged onto a CCD camera (Figure 2).
By moving the object past the camera (on a conveyor belt), the change in shape and position of the laser line is directly related to the slice area of the object at that position. By summing all of the acquired slices, 3D volumetric data can be calculated. The accuracy of this 3D profiling technique depends on several factors.
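The geometric principle behind laser triangulation can be sketched as follows: the laser line shifts sideways on the sensor in proportion to the object's height, and the shift is converted back to a height using the triangulation angle. This is a minimal illustrative sketch, not the system's actual calibration model; all parameter names and values are assumptions.

```python
import math

def laser_height(pixel_offset, px_size_mm, magnification, triangulation_angle_deg):
    """Convert the laser line's displacement on the sensor into object height.

    pixel_offset: shift of the laser line (in pixels) from the belt baseline.
    px_size_mm: physical pixel size of the sensor.
    magnification: optical magnification of the lens.
    triangulation_angle_deg: angle between the camera axis and the laser sheet.
    """
    # Shift of the line in object space (mm)
    shift_mm = pixel_offset * px_size_mm / magnification
    # Height follows from the triangulation geometry
    return shift_mm / math.tan(math.radians(triangulation_angle_deg))

# A 12-pixel shift, 5 µm pixels, 0.1x magnification, 30° triangulation angle
h = laser_height(12, 0.005, 0.1, 30)
```

Repeating this conversion for every pixel along the laser line yields one height profile, i.e. one ‘slice’ of the object.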
Problems arise using a single camera if the object to be measured has undercuts or any edges that cannot be seen by the camera. By implementing a two-camera/laser system, positioned at each side of the profile area, undercut regions can be imaged. However, processing the profile now becomes more complex, as two profiles need to be combined and errors due to the unusual geometry need to be corrected.
Another fundamental problem with both the single-camera and two-camera systems is that they assume the base of the object is flat. If that is not the case, then significant errors are introduced. The solution for irregularly shaped objects (sausages, for example) is to use a three-camera approach (Figure 3) to make a measure of the area. In order to make this possible in practice, two conveyor belts are used. The gap between them allows the cameras and lasers a clear line of view of the sausage as it passes from one conveyor to the next.
One way of making the measurements is illustrated in Figure 4. The perimeter of the sausage is reconstructed from the 3D measurements. The resulting cross-sectional ‘slice’ is divided into a series of triangles as indicated. The area of each of the individual triangles is calculated and then summed to get the total area of the ‘slice’ (A = ΣAi). Since the sausage is progressing along a conveyor belt, the distance travelled while the measurement is being made, dM, is measured from the encoder on the conveyor belt. The differential volume, dV, is then simply calculated as A * dM, and the total volume, V, is calculated by summing the differential volumes, i.e. V = ΣdV.
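The triangle-fan area calculation and the volume summation described above can be sketched in a few lines of Python. This is a simplified illustration, assuming each slice is given as ordered 2D perimeter points; the function names are illustrative, not part of any product API.

```python
def slice_area(perimeter):
    """Area of one cross-sectional slice from its ordered perimeter points.

    Triangulates the slice as a fan from the centroid and sums the
    individual triangle areas (A = sum of A_i).
    perimeter: list of (y, z) points ordered around the slice.
    """
    cy = sum(p[0] for p in perimeter) / len(perimeter)
    cz = sum(p[1] for p in perimeter) / len(perimeter)
    area = 0.0
    for (y1, z1), (y2, z2) in zip(perimeter, perimeter[1:] + perimeter[:1]):
        # Area of triangle (centroid, p1, p2) via the cross product
        area += 0.5 * abs((y1 - cy) * (z2 - cz) - (y2 - cy) * (z1 - cz))
    return area

def total_volume(slices, encoder_steps):
    """V = sum(A * dM): each slice area times the belt travel dM for that slice."""
    return sum(slice_area(s) * dm for s, dm in zip(slices, encoder_steps))

# e.g. two unit-square slices, each spanning 0.5 mm of belt travel -> volume 1.0
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
v = total_volume([square, square], [0.5, 0.5])
```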
Whilst this seems straightforward in principle, the challenges are to put all the ‘slice’ points into the same reference coordinate system in order to obtain a circular slice, and to cope with "overlapping" areas in order to avoid "double counting" in the computation. The overlap occurs because sections of the perimeter are visible to two cameras at the same time (Figure 5). Calibration is achieved using a specialized calibration piece, precision machined from stainless steel, which is imaged on the conveyor belt in the same way as the sausage itself.
This complex shape features vertices with co-ordinates which have been chosen so that the reference co-ordinate system lies in the centre of the part. Each of the three camera-laser sets is calibrated separately from the calibration piece, using the co-ordinates of the 3D vertices referenced to the central reference co-ordinate system. When the calibration parameters are applied to each of these three sets, the reconstructed points for the measurements on the perimeter of the sausage will be put in the correct place. The points can then be sorted to eliminate overlap and prevent the possibility of double counting.
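The two steps above, mapping each camera's points into the common reference frame and then sorting out the overlap, can be sketched as follows. This is a simplified 2D illustration under assumed conventions (each camera's calibration reduced to a rotation R and translation t in the slice plane, overlap removed by a distance threshold); the real system's calibration model and tolerances are not documented here.

```python
import math

def to_reference_frame(points, R, t):
    """Map one camera's profile points into the central reference frame.

    R (2x2 rotation) and t (translation) stand in for the calibration
    parameters obtained from the machined calibration piece.
    """
    return [(R[0][0] * y + R[0][1] * z + t[0],
             R[1][0] * y + R[1][1] * z + t[1]) for y, z in points]

def merge_profiles(profiles, tol=0.1):
    """Combine per-camera profiles into one ordered perimeter.

    Sorts all points by angle about the centroid, then drops points closer
    than tol to their predecessor -- sections seen by two cameras at once
    would otherwise be double counted.
    """
    pts = [p for prof in profiles for p in prof]
    cy = sum(p[0] for p in pts) / len(pts)
    cz = sum(p[1] for p in pts) / len(pts)
    pts.sort(key=lambda p: math.atan2(p[1] - cz, p[0] - cy))
    merged = [pts[0]]
    for p in pts[1:]:
        if math.dist(p, merged[-1]) > tol:  # skip near-duplicates from overlap
            merged.append(p)
    return merged

# Three camera views of a diamond-shaped slice, with shared (overlapping) points
views = [[(1, 0), (0, 1)], [(0, 1), (-1, 0)], [(-1, 0), (0, -1), (1, 0)]]
perimeter = merge_profiles(views)  # duplicates removed, points in angular order
```

The merged, ordered perimeter can then be fed directly into the triangle-fan area calculation described earlier.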
Sophisticated software is required to allow this level of computation to take place in real time at production line speeds in a real factory environment. This measurement capability is available as part of a comprehensive suite of 3D capabilities offered within STEMMER IMAGING’s Common Vision Blox (CVB) hardware-independent imaging toolkit, which readily integrates into existing machine vision environments. The project was realized with the help of our partner company, Aqsense SL, who provided in-depth simulations to predict the accuracy of the system prior to design. CVB provides interactive image processing systems as well as graphical programming tools to create the perfect software platform for quick and reliable vision application development. This fast, powerful and modular programming library supports all common acquisition technologies.
The CVB 3D Suite also features CVB Match 3D, a tool which allows a 3D image of a perfect sample (golden template) to be compared to the 3D images of upcoming test parts in the production line, complete with full alignment of the two images. Part deviations can be identified in real time, allowing pass/fail decisions to be made. The algorithm works internally on real 3D point clouds and automatically compensates for position errors, tipping and tilting in all six axes. Hence there is no need for highly accurate part positioning and handling, as CVB Match 3D aligns the part image in 3D before comparison. This approach reduces the mechanical effort required and assures high inspection throughput for 100% inline control. STEMMER IMAGING can also supply the hardware needed for fast 3D measurements, including cameras, lasers, optics, and frame grabbers.
STEMMER IMAGING has been leading the machine vision market since 1987. It is Europe's largest technology provider in this field. In 1997 STEMMER IMAGING presented Common Vision Blox (CVB), a powerful programming library for fast and reliable development and implementation of vision solutions, which has been deployed successfully throughout the world in more than 40,000 imaging applications in various industries.