WHAT DO IMAGE PROCESSING SYSTEMS HAVE TO DO WITH KEEPING FOODSTUFFS IN GOOD SHAPE?
Everyone prefers foodstuffs that are fresh and outwardly attractive. Image processing systems are frequently used during the quality assurance process for these products to ensure that this is the case. The image data helps producers make informed decisions that would otherwise be impossible.
But how are systems of this kind designed? What steps are necessary, what must be taken into account, and what options are available?
Selection of the camera, selection of the lens and lighting source, evaluation of image quality, selection of PC hardware and software, and the configuration of all components – all of these are important steps toward an effective image processing system.
Imagine an apple grower asks you to design a machine vision system for inspecting the apples. He’s interested in delivering uniform quality, meaning the ability to sort out bad apples while still working fast. He is faced with the following questions:
What are the precise defined requirements for the system?
What resolution and sensor do I need?
Do I want to use a color or monochrome camera?
What camera functions do I need, and what level of image quality is sufficient?
WHAT EXACTLY SHOULD THE SYSTEM DELIVER AND UNDER WHICH CONDITIONS?
This question sounds so obvious that it's frequently overlooked and not answered in the proper detail. But the fact remains: If you are clear up front about precisely what you want, you'll save time and money later.
SHOULD YOUR SYSTEM
Only show images of the object being inspected, with tools like magnification or special lighting used to reveal product characteristics that cannot be detected by the human eye?
Calculate objective product features such as size and dimensional stability?
Check correct positioning — such as on a pick-and-place system?
Determine properties that are then used to assign the product into a specific product class?
RESOLUTION AND SENSOR
Which camera is right for a given application? The requirements definition is used to derive target specifications for the camera's resolution and sensor size.
But first: What exactly is resolution? In classic photography, resolution refers to the minimum distance between two real points or lines in an image such that they can be perceived as distinct.
In the realm of digital cameras, terms like "2 megapixel resolution" are often used. This refers to something entirely different, namely the total number of pixels on the sensor, not strictly speaking its resolution. The true resolution can only be determined once the overall package of camera, lens and geometry, i.e. the distances required by the setup, is in place. That is not to say the pixel count is irrelevant: a high number of pixels is genuinely needed to achieve high resolutions. In essence, the pixel count indicates the maximum resolution achievable under optimal conditions.
Fine resolution or large inspection area – either of these requirements calls for the greatest possible number of pixels on the camera. Multiple cameras may actually be required to inspect a large area at a high level of resolution. In fact, the use of multiple cameras with standard lenses is often cheaper than using one single camera with a pricey special lens capable of covering the entire area.
The sensor size and field of view dictate the depiction scale, which will later be crucial for the selection of the lens.
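The pixel-count reasoning above can be put into rough numbers. The function and all figures below are illustrative assumptions for the apple example, not values from any specification; the rule of thumb that a feature should span at least two pixels is a common Nyquist-style heuristic.

```python
# Sketch: deriving a required sensor pixel count from the inspection task.
# All numbers are illustrative assumptions, not recommendations.

def required_pixels(field_of_view_mm: float, smallest_feature_mm: float,
                    pixels_per_feature: float = 2.0) -> int:
    """Pixels needed along one axis so the smallest feature spans
    at least `pixels_per_feature` pixels (Nyquist-style rule of thumb)."""
    pixel_resolution_mm = smallest_feature_mm / pixels_per_feature
    return int(-(-field_of_view_mm // pixel_resolution_mm))  # ceiling division

# Example: a 200 mm wide field of view, blemishes down to 0.5 mm
pixels = required_pixels(200.0, 0.5)
print(pixels)  # 800 pixels along that axis
```

If the same task demanded 0.1 mm features over a 1 m wide area, the count along one axis would jump to 20,000 pixels, which is where splitting the area across multiple standard cameras becomes attractive.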
COLOR OR MONOCHROME?
Generally speaking, most applications do not really need a color camera. Color images are often simply easier on the eyes for many observers. Realistic reproduction of color using a color camera also requires white lighting. If the characteristics can be detected via their color (such as red blemishes on an apple), then color is often – but not always – needed. In many cases, however, these characteristics can also be picked up in black-and-white images from a monochrome camera if colored lighting is used. Experiments on perfect samples can help here. If color isn't relevant, then monochrome cameras are preferable, since color cameras are inherently less sensitive than black-and-white cameras.
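The effect of colored lighting on a monochrome camera can be illustrated with a toy calculation. The reflectance values below are invented sample numbers for greenish apple skin and a reddish blemish; the point is only that choosing the illumination color changes the contrast a monochrome sensor sees.

```python
# Sketch: why a monochrome camera plus colored light can still separate
# colored features. The RGB reflectances below are invented sample values.

healthy_skin = (0.2, 0.8, 0.1)   # greenish apple skin (R, G, B reflectance)
blemish      = (0.7, 0.2, 0.1)   # reddish blemish

def mono_brightness(reflectance, light):
    """Brightness a monochrome sensor sees: per-channel reflectance x light."""
    return sum(r * l for r, l in zip(reflectance, light))

white_light = (1.0, 1.0, 1.0)
green_light = (0.0, 1.0, 0.0)    # narrow-band green illumination

# Contrast = absolute brightness difference between blemish and skin
contrast_white = abs(mono_brightness(blemish, white_light)
                     - mono_brightness(healthy_skin, white_light))
contrast_green = abs(mono_brightness(blemish, green_light)
                     - mono_brightness(healthy_skin, green_light))
print(contrast_white, contrast_green)
```

Under white light the two surfaces look almost equally bright to a monochrome sensor, while under green light the red blemish appears much darker than the skin, which is exactly the kind of result the sample experiments mentioned above are meant to uncover.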
Are you working with a highly complex inspection task? If so, you may want to consider using multiple cameras, especially if a range of different characteristics need to be recorded, each requiring a different lighting or optics configuration.
WHAT A CAMERA SHOULD ALSO PROVIDE: CAMERA FUNCTIONS AND IMAGE QUALITY
There’s more to a good camera than just the number of pixels. You should also take image quality and camera functions into account.
When evaluating the image quality of a digital camera, the resolution is one important factor among several.
In terms of camera functions, one of the most important is the speed, typically stated in frames per second (fps). It defines the maximum number of frames that can be recorded per second.
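The required frame rate follows directly from how fast the product moves. As a sketch, assuming apples pass on a conveyor belt and each frame covers a fixed stretch of belt (all numbers invented for illustration):

```python
# Sketch: minimum frame rate for a conveyor application (assumed numbers).

def min_fps(belt_speed_mm_s: float, frame_length_mm: float,
            overlap: float = 0.2) -> float:
    """Frames per second so consecutive frames overlap by `overlap`
    of the frame length, leaving no unseen gaps on the belt."""
    advance_per_frame = frame_length_mm * (1.0 - overlap)
    return belt_speed_mm_s / advance_per_frame

# Belt moving at 500 mm/s, each frame covering 200 mm of belt
print(min_fps(500.0, 200.0))  # 3.125
```

A camera specified well above this figure leaves headroom for shorter exposure times, which matters once motion blur comes into play.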
THE EYE OF THE CAMERA: SCALE AND LENS PERFORMANCE
Good optical systems are expensive. In many cases, a standard lens is powerful enough to handle the task. To decide what’s needed, we need information about parameters such as
Image scale, meaning the ratio between image size and object size. This corresponds to the size of an individual pixel divided by the pixel resolution. (The pixel resolution is the edge length of a square on the object being inspected that should fill exactly one pixel of the camera sensor.)
Focal length of the lens, which links the image scale to the distance between camera and object
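These two parameters can be estimated with the thin-lens model. The sketch below uses invented numbers (an 8.8 mm wide sensor, a 200 mm field of view, a 400 mm working distance) purely to show how the estimate works; real lens selection should still be verified against the manufacturer's data.

```python
# Sketch: estimating image scale and focal length with the thin-lens model.
# All numbers are illustrative assumptions.

def image_scale(sensor_width_mm: float, field_of_view_mm: float) -> float:
    """Magnification m = image size / object size."""
    return sensor_width_mm / field_of_view_mm

def focal_length(working_distance_mm: float, m: float) -> float:
    """Thin-lens relation: from 1/f = 1/d_o + 1/d_i and d_i = m * d_o,
    it follows that f = d_o * m / (1 + m)."""
    return working_distance_mm * m / (1.0 + m)

m = image_scale(8.8, 200.0)   # 8.8 mm wide sensor imaging a 200 mm scene
f = focal_length(400.0, m)    # camera mounted 400 mm from the object
print(round(m, 3), round(f, 1))
```

The resulting focal length of roughly 17 mm is close to common off-the-shelf values, which is precisely the kind of check that tells you whether a standard lens will do.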
Once this information is available, it becomes much easier to examine the spec sheets from lens makers to review whether an affordable standard lens is sufficient or whether a foray into the higher-end lenses is needed.
Lens properties like distortion, resolution (described using the MTF curve), chromatic aberration and the spectral range for which a lens has been optimized serve as additional selection criteria.
There are, for example, special lenses for near infrared, extreme wide-angle lenses ("fisheye") and telecentric lenses that are especially suited for length measurements. These lenses typically come at a high price, though.
Here too the rule is: Tests and sample shots are the best way to clear up open questions.
It’s hard to see anything in poor light: It may seem obvious, but it holds true for image processing systems as well.
High inspection speeds typically require sensitive cameras and powerful lenses. In many cases, however, the easier option is to modify or improve the lighting to boost image brightness. There are various ways to attain greater image brightness: increasing the ambient light, or shaping the light with lenses or flashes to create a suitable light source, are two examples. But it's not just the lighting strength that matters; the path the light takes through the lens to the camera matters too.
One common example from photography is the use of a flash: if the ambient lighting is too diffuse, a flash is used to aim the light in a targeted manner – although you then need to deal with unwanted reflections off smooth surfaces in the image area that can overwhelm the desired details. In image processing, these kinds of effects may actually be desirable, delivering high light intensities on flat, low-reflection surfaces. For objects with many surfaces reflecting in various directions, diffuse light is better.
We look at photos by reflecting light on them, while a stained glass window only reveals its beauty when the light shines through it.
Which hardware is required depends on the task and the necessary processing speed.
While simple tasks can be handled using standard PCs and image processing packages, complex and rapid image processing tasks may require specialized hardware.
Software is required to assess the images. Most cameras come with software to display images and configure the camera; that is enough to get the camera up and running. Specialized applications and image processing tasks require dedicated software, either purchased off the shelf or custom developed.