At the cutting edge of carrots
Finding the right cutting position to separate the green from the carrot is no problem for image processing. The following feasibility study comes to this conclusion and shows how CVB Blob and CVB Edge helped to realize it.
»Off with their heads!« Using an image processing system to cut carrots
Is it possible to process carrots automatically using image processing systems and to determine the point at which the green of the carrot should be cut off? A feasibility study was carried out at Iris Vision in Holland in order to answer this question. It was found that the hardware and software available are able to complete this task as quickly and accurately as required.
Vegetables are still harvested and processed largely manually. Not only are these activities generally monotonous and in some cases bad for the health of those involved; the associated labor costs also have a considerable effect on the prices of the end products. The Dutch distributor of the Common Vision Blox software system, Iris Vision, a company based in Etten-Leur, therefore decided to examine the extent to which image processing systems can be deployed in this context.
A feasibility study was carried out to examine whether image processing systems were able to determine the point at which the carrot green should be cut from the carrot during automatic carrot processing. If too little is cut off, some of the unwanted green tops of the carrots remain. If too much is cut off, on the other hand, this has a negative effect on the productivity of the process.
In order to create conditions that were as realistic as possible, a conveyor belt was used to transport the carrots to the image processing system. Pre-washed carrots between 150 mm and 300 mm in length were placed on this at arbitrary angles and transported past the camera. Further conditions were that not more than 10 percent of the total length of the carrot should be cut off and that the part cut off should be less than 3 cm in length; this was ensured by an upstream process. The system had to process 20 carrots per second.
»Since the carrots can be lying in any position on the conveyor belt, the first task is to ascertain their angle of rotation,« explains Dietmar Serbée, the head of distribution at Iris Vision, who was in charge of carrying out the study. »The next step is to determine the point at which the green part of the carrot is to be cut off.«
The top of the carrot is the end at which the carrot is at its thickest perpendicular to the longitudinal axis. From the point of maximum thickness, the carrot decreases in thickness much more quickly toward the green end of the carrot than toward the root. The algorithm therefore searches for the cutting point on the green side of the thickest point.
The following steps are thus involved in determining the point at which to cut the carrot:
- Ascertain the carrot's angle of rotation.
- Trace the thickness of the carrot along the longitudinal axis and find the thickest point.
- Measure how rapidly the carrot decreases in thickness on both sides of the thickest point.
- The point at which the reduction in thickness is greatest is the approximate point at which the carrot is to be cut.
- Calculate the exact point at which the carrot is to be cut by means of an accuracy adjustment procedure.
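Steps 2 to 5 of this sequence can be sketched on a one-dimensional thickness profile sampled along the carrot's longitudinal axis. The following NumPy snippet is a hypothetical illustration of that logic, not the study's actual Visual Basic/CVB implementation; the function name and parameters are assumptions:

```python
import numpy as np

def find_cut_index(thickness, margin=2):
    """Steps 2-5 on a 1-D thickness profile (illustrative sketch).

    thickness : thickness samples along the longitudinal axis
    margin    : safety offset (step 5) that shifts the cut toward the
                thickest point so that no green remains after cutting
    """
    t = np.asarray(thickness, dtype=float)
    top = int(np.argmax(t))                       # step 2: thickest point

    # Step 3: average decrease in thickness on either side of the top.
    fall_left  = (t[top] - t[0]) / max(top, 1)
    fall_right = (t[top] - t[-1]) / max(len(t) - 1 - top, 1)

    # Step 4: the green end is the side that thins out faster; the steepest
    # single drop on that side approximates the cut position.
    if fall_left > fall_right:                    # green end on the left
        outward = np.diff(t[top::-1])             # walk from top toward index 0
        cut = top - (int(np.argmin(outward)) + 1)
        cut = min(cut + margin, top)              # step 5: shift toward the top
    else:                                         # green end on the right
        outward = np.diff(t[top:])                # walk from top toward the end
        cut = top + int(np.argmin(outward)) + 1
        cut = max(cut - margin, top)              # step 5: shift toward the top
    return cut
```

On a profile that rises slowly from the root and drops sharply after the thickest point, the function picks the index of the steepest drop on the green side and then pulls it back by the safety margin.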
Achieving the objective with the right software
»We solved this problem on the software side by using the tools Blob and Edge from the Common Vision Blox image processing toolbox,« explains Serbée. This quasi-operating system for image processing was developed by the Puchheim-based German company STEMMER IMAGING GmbH in order to provide a standardized, open platform allowing the rapid and flexible development of image processing applications. There are now over 30 Common Vision Blox tools, and these can be used to solve virtually any industrial image processing problem.
CVB Blob is able to calculate the shape of an object consisting of contiguous pixels. It calculates not just the object's centroid (center of mass), bounding box (the smallest rectangle that can surround the object), perimeter and area, but also its minimum and maximum moments of inertia. The relationship between these two moments of inertia determines whether or not the object has an orientation. If the minimum and maximum moments of inertia are identical, as they are for a circle, the object does not have an orientation.
In the case of carrot inspection, CVB Blob begins by identifying the perimeter of the carrot. It can then calculate the moments of inertia and use these to calculate the angle of rotation. »A carrot's minimum moment of inertia will generally be considerably lower than the maximum moment of inertia, which is essentially the carrot's longitudinal axis,« says Serbée. »CVB Blob uses this to calculate the carrot's angle of rotation.«
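CVB Blob's internal algorithms are not public, but the moment-based orientation calculation described here is standard and can be sketched with NumPy as follows (an illustrative version; the function name and return values are assumptions, not the CVB Blob API):

```python
import numpy as np

def blob_orientation(mask):
    """Orientation of a blob from its second-order central moments.

    mask : 2-D boolean array, True where the object is.
    Returns (angle, ratio): the major-axis angle in degrees (measured
    from the x axis; note that y points down in image coordinates) and
    the ratio of minimum to maximum moment of inertia. A ratio near 1
    (as for a circle) means the object has no meaningful orientation.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                 # centroid (center of mass)
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Min/max moments of inertia are the eigenvalues of the covariance matrix.
    i_min, i_max = np.linalg.eigvalsh([[mu20, mu11], [mu11, mu02]])
    # Standard image-moment formula for the principal-axis angle.
    angle = np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))
    return angle, i_min / i_max
```

For an elongated blob such as a carrot the ratio is far below 1, and the angle gives the rotation needed to align the longitudinal axis horizontally.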
The Common Vision Blox tool Edge allows the edges of an object to be determined on the basis of the gray-scale values of the object and those of the background. A standard algorithm used by this software detects two opposite edges and calculates the distance between these points with a level of accuracy that can be in the sub-pixel range, if required. These distances are saved, thus allowing the maximum diameter of the carrot (its top) to be ascertained and the decrease in thickness from this point to be calculated. CVB Edge thus carries out steps 2 and 3.
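As a rough stand-in for such an edge tool, a pair of opposite edges in a gray-scale scanline can be located with sub-pixel accuracy by linearly interpolating the threshold crossing between the two straddling pixels. This is an illustrative sketch only, not CVB Edge's actual algorithm:

```python
import numpy as np

def edge_pair_distance(scanline, threshold):
    """Find the first and last threshold crossings in a gray-scale
    scanline and return their sub-pixel positions and the distance
    between them, or None if fewer than two edges are found.
    """
    s = np.asarray(scanline, dtype=float)
    above = s >= threshold
    # Indices i where s[i] and s[i + 1] lie on opposite sides of the threshold.
    idx = np.nonzero(above[1:] != above[:-1])[0]
    if len(idx) < 2:
        return None

    def subpixel(i):
        # Fraction of the way from pixel i to i + 1 at which the
        # intensity crosses the threshold (linear interpolation).
        return i + (threshold - s[i]) / (s[i + 1] - s[i])

    left, right = subpixel(idx[0]), subpixel(idx[-1])
    return left, right, right - left
```

Applied perpendicular to the carrot's longitudinal axis, the returned distance is one thickness sample; collecting these along the length yields the profile from which the top and the decrease in thickness are determined.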
An M10 BX camera from the Danish manufacturer JAI (a CCIR industrial camera with a resolution of 768 x 572 pixels) and a PcVision PCI frame grabber card from DALSA Coreco were used for image acquisition in the feasibility study. »The images obtained in this way are displayed as gray-scale images in a window in the upper-left corner of the graphical user interface,« explains Serbée. »The image is then converted into a black-and-white image with a ›white‹ carrot on the basis of a predefined gray-scale threshold value, which depends on the type of lighting used. In this step CVB Blob also ascertains the position of the carrot by calculating its centroid, and it displays its bounding box in the window in the upper-right corner of the GUI.«
CVB Blob then calculates the object's moments of inertia so as to determine the carrot's angle of rotation on the basis of the maximum moment of inertia. An image of the carrot aligned horizontally is displayed in the lower-right corner of the GUI. In the next step, a small region of interest (ROI) is defined perpendicular to the longitudinal axis. This can be set in adjustable increments along the length of the carrot. Within this window, CVB Edge looks for pairs of edges representing the transitions from black to white and vice versa and saves the distances between these points. A further ROI is defined on either side of the line representing the greatest distance between points on the opposite edges. In this ROI the software then finds out on which side of this line the thickness of the carrot decreases more rapidly. This is the point, subsequently adjusted by a defined distance, at which the blade later cuts the green part of the carrot off. This adjustment provides a safety margin, thus ensuring that none of the green part of the carrot remains.
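The ROI stepping described here amounts to measuring the extent of the »white« carrot at regular increments along the aligned binary image. A minimal sketch of that measurement (hypothetical names; the increment corresponds to the adjustable ROI spacing) could look like this:

```python
import numpy as np

def thickness_profile(mask, step=1):
    """Measure the carrot's thickness at regular positions along its length.

    mask : 2-D boolean array of the horizontally aligned carrot (True = carrot)
    step : increment between measurement positions; larger steps are
           faster but, as noted in the text, less accurate
    Returns (positions, thicknesses); empty columns report a thickness of 0.
    """
    cols = np.arange(0, mask.shape[1], step)
    thick = np.empty(len(cols))
    for k, c in enumerate(cols):
        rows = np.nonzero(mask[:, c])[0]
        # Distance between the opposite (top and bottom) edges of the column.
        thick[k] = rows[-1] - rows[0] + 1 if len(rows) else 0
    return cols, thick
```

The resulting profile is exactly the input expected by the cut-point logic: its maximum marks the carrot's top, and the fall-off on either side identifies the green end.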
The accuracy of this procedure depends on the width of the ROI in which the software calculates the side on which the thickness of the carrot decreases more rapidly and on the size of the increments used to set the ROIs. The narrower the ROIs are, and the smaller the distances between them, the greater is the accuracy. A higher level of accuracy does, of course, inevitably slow the application down.
Time-consuming calculation of orientation
The tests for checking the correct functioning of the algorithm were carried out with two different systems:
- System A: Intel Pentium Pro 200 MHz, 64 MB RAM
- System B: Intel Pentium III 500 MHz, 128 MB RAM
Windows NT Workstation 4.0 with Service Pack 5 was running on both systems. Common Vision Blox Version 1.4 and Microsoft Visual Basic Version 5.0 were used.
Table 1 shows the benchmarks for the different steps. They were carried out with a resolution of 25 pixels per ROI. This table indicates that neither of the two systems will reach the required speed of 20 carrots a second without further optimization. In addition, the measured values indicate that around 85 percent of the processing time taken is required to determine the carrot's angle of rotation (i.e. to find the object and calculate its moments of inertia).
There are a number of options available for optimizing these values and achieving the specified speed. If the carrots can be inspected in a known position rather than at arbitrary angles of rotation, the processing time is reduced to less than 50 ms because the processor-intensive algorithms for calculating the areas and moments of inertia are not required. This makes it possible to inspect 20 carrots a second. There is also potential in the software itself for further increasing the speed: »The demonstration program is written in Visual Basic and not optimized for speed,« explains Serbée. He estimates that the processing time can be reduced by a further 10 to 20 percent if the system is implemented in C++. The required level of accuracy was reached in all the tests. If more stringent speed requirements had to be met, it would also be possible to increase the width of the ROIs and the size of the increments, at the expense of some accuracy.
The feasibility study shows that it is possible to use industrial image processing to determine the point at which the green part of the carrot should be cut off. The demonstration and evaluation program developed on the basis of Common Vision Blox worked robustly without optimization measures and achieved the required speed of 20 carrot inspections a second without software optimization, provided the objects are fed to the image processing station at a specified angle of rotation. This would be easy to do using mechanical devices.
The hardware components (JAI M10 camera and PcVision frame grabber from DALSA Coreco) were appropriate for the purpose and were able to provide the required resolution for reaching the required level of accuracy. Both components are supported by Common Vision Blox.
The demonstration system presented is currently still with the customer for evaluation purposes and is undergoing further development there. »We are confident that this system will soon be able to demonstrate its effectiveness under industrial conditions,« emphasizes Dietmar Serbée.
Founded in 1996, IRIS VISION in the Netherlands offers a complete range of machine vision and image processing products from major manufacturers. IRIS's expertise is the integration of all necessary vision components for a wide variety of image processing and machine vision applications.
IRIS's portfolio covers the entire data chain, from lighting and optics through image acquisition and processing to the final result of the application. IRIS offers a range of solutions: simple frame grabbers, extensive pipeline processors, host-based processing and hardware-based processing.