To increase the quality of their products while reducing waste and manual labor costs, manufacturers in the agricultural and packaging industries are leveraging machine-vision systems. Compared with manual inspection, such systems provide vendors with a cost-effective means of evaluating the quality of food products while ensuring a high level of consistency.
Today, different types of machine-vision systems perform these inspection tasks. For high-speed inspection and sorting of recently harvested products such as potatoes, linescan camera-based systems are most commonly deployed. In less demanding applications, harvested or baked products can be evaluated using color systems that employ area-array or multispectral cameras. To ensure the correct portions of foods such as meat or fish are properly packaged, structured light systems may evaluate the products’ volume prior to automatic slicing.
Once products are packaged, machine-vision systems can verify sizes, colors, and barcodes and detect defects to ensure package consistency. Although the hardware on which these systems are based may differ, developers often use off-the-shelf software packages to process and analyze captured images. Using these packages and their associated graphical user interfaces, engineers and integrators can rapidly develop machine-vision systems with a minimum of coding.
The requirements of the task to be performed must be carefully considered before choosing a software package. Luckily, many of the low-level functions required by machine-vision systems have already been incorporated into these packages. For example, preprocessing functions such as lens distortion correction, geometric calibration, Bayer interpolation, and numerous filters for noise reduction are now commonplace in most manufacturers’ software toolsets.
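To illustrate the kind of noise-reduction preprocessing these toolsets provide, the following is a minimal NumPy sketch of a 3 × 3 median filter, a common choice for suppressing impulse ("salt-and-pepper") noise. The function name and test image are illustrative, not drawn from any particular vendor's package.

```python
import numpy as np

def median_filter3x3(img):
    """Apply a 3x3 median filter; image borders are handled by replicate padding."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views of the image and take the per-pixel median.
    h, w = img.shape
    stack = np.stack([padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

# A small test image with a single impulse-noise pixel.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                     # salt noise
clean = median_filter3x3(img)
print(clean[2, 2])  # the impulse is replaced by the background value, 10
```

Production packages typically offer a range of such filters (median, Gaussian, rank-order) with selectable kernel sizes; the median filter is shown here because it preserves edges better than simple averaging.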
To extract quantitative information from captured image data, higher-level image segmentation algorithms are often used. These include methods such as thresholding, histogram analysis, edge-based segmentation, and region-based segmentation. While simple thresholding classifies regions within an image based on their intensity values, histogram-based methods can locate specific clusters of color or intensity. Edge-detection algorithms can detect discontinuities within images based on parameters such as intensity, color, or texture. Pixels of similar intensities, colors, or textures can also be grouped together into regions.
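Thresholding and histogram analysis are often combined: Otsu's method, for example, inspects the image histogram to pick the threshold that best separates two intensity classes. A minimal NumPy sketch of that idea (the function name is my own, not from any specific package) might look like this:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # cumulative class probability
    mu = np.cumsum(prob * np.arange(256))   # cumulative mean intensity
    mu_t = mu[-1]
    # Between-class variance for every candidate threshold.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal test image: dark background, bright 4x4 object.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
mask = img > t        # binary segmentation: True where the object is
```

The resulting binary mask is the usual starting point for the boundary- and region-based measurements described below.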
Using these segmentation techniques, image features can be represented by boundaries within images, then used to determine characteristics such as the size or shape of an object. Similarly, sets of regions can be used to analyze specific defects or image texture within the object.
To measure features within segmented regions or boundaries, most off-the-shelf software packages offer tools for measurement and feature extraction, including caliper-based measurement, blob analysis, morphological operators, and color analysis. In many applications, several of these algorithms are combined to discern multiple features of a specific product.
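At the core of blob analysis is connected-component labeling: grouping foreground pixels into discrete blobs whose area, bounding box, and other properties can then be measured. As a rough sketch of the technique (a pure-Python breadth-first-search labeler, not a vendor implementation):

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labeling of a binary mask via breadth-first search."""
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # pixel already belongs to a blob
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
    return labels, next_label

# Two separate blobs in a binary image.
mask = np.zeros((6, 8), dtype=bool)
mask[1:3, 1:3] = True   # blob 1: area 4
mask[4, 5:8] = True     # blob 2: area 3
labels, count = label_blobs(mask)
areas = [int((labels == i).sum()) for i in range(1, count + 1)]
```

Commercial blob tools build on this labeling step, reporting per-blob statistics (area, perimeter, centroid, orientation) that can be filtered to accept or reject products.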
Consider, for example, a system that analyzes carrots before they are automatically cut. Prewashed carrots of arbitrary orientation, between 150 mm and 300 mm in length, are transported along a conveyor.
After the system captures grayscale images of the carrots, the images are thresholded. The system then determines each carrot's centroid and moments of inertia, from which it calculates the carrot's orientation. By measuring how rapidly the carrot decreases in thickness on either side of its thickest point, the system locates the point at which the carrot should be cut.
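The centroid-and-moments step can be sketched as follows, assuming a binary mask from the thresholding step; the orientation is the angle of the region's principal axis, recovered from its second central moments. (The function name and the synthetic test region are illustrative only.)

```python
import numpy as np

def orientation_from_moments(mask):
    """Orientation (radians from the x-axis) of a binary region,
    computed from its centroid and second central moments."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()            # centroid
    mu20 = ((xs - xc) ** 2).mean()           # second central moments
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    # Principal-axis angle of the region.
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Synthetic elongated region tilted 45 degrees, standing in for a carrot.
mask = np.zeros((40, 40), dtype=bool)
for i in range(30):
    mask[5 + i, 5 + i] = True
theta = orientation_from_moments(mask)
print(np.degrees(theta))   # close to 45
```

Once the orientation is known, the system can resample thickness along the principal axis and search for the thickest point, as described above.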