When machine vision systems fail, multiple factors may contribute to the issue; however, the following two are usually the primary reasons.
#1 – The resolution is not correctly set.
#2 – The contrast is not optimized.
Defining the Optimum Resolution
In machine vision, it is not easy to detect an object that we cannot see. To illustrate, let’s take a basic measurement application as an example.
Consider a 25mm square with a tolerance of +/-0.1mm. What is the appropriate resolution of the camera?
A simple rule of thumb is a 10X factor: if the required tolerance is 0.1mm, the system needs to resolve the part 10X finer, i.e., 0.01mm per pixel. We also need a Field Of View (FOV) large enough to capture the entire part; for this case we will assume a 30mm square FOV. Dividing 30mm by 0.01mm gives the minimum resolution required along the shortest camera axis: 3000 pixels. Looking at standard sensor sizes, a common format is 4096 × 3000, so our camera selection should be a minimum of 12MP.
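The arithmetic above can be captured in a few lines. The sketch below is a minimal illustration of the 10X rule of thumb; the variable names and the 4096 × 3000 sensor format are assumptions for this example, not a camera-selection tool.

```python
# Minimal sketch of the 10X rule-of-thumb resolution calculation.
# Values mirror the worked example above; adjust for your own application.

tolerance_mm = 0.1          # part tolerance (+/- 0.1 mm)
rule_of_thumb = 10          # 10X factor
fov_mm = 30.0               # assumed field of view along the shortest axis

resolution_mm_per_px = tolerance_mm / rule_of_thumb      # 0.01 mm per pixel
min_pixels_short_axis = fov_mm / resolution_mm_per_px    # 3000 pixels

# Compare against a common sensor format (assumed here to be 4096 x 3000).
sensor_px = (4096, 3000)
megapixels = sensor_px[0] * sensor_px[1] / 1e6           # ~12.3 MP

print(f"Required resolution: {resolution_mm_per_px} mm/pixel")
print(f"Minimum pixels on shortest axis: {min_pixels_short_axis:.0f}")
print(f"Candidate sensor: {sensor_px[0]}x{sensor_px[1]} (~{megapixels:.1f} MP)")
```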
Now let’s build on this basic illustrative example. From the camera resolution point of view, we can measure the part dimensions with an accuracy of 0.01mm. Therefore, in theory, parts with a dimension in the range of 24.905 – 25.095mm (the ±0.1mm tolerance band tightened by half the 0.01mm measurement resolution) can be confidently passed, whereas parts smaller than 24.905mm or larger than 25.095mm will fail.
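To make the pass/fail reasoning concrete, here is a small sketch of the guard-banded check described above. Treating half of the 0.01mm measurement resolution as the guard band is an assumption inferred from the 24.905 – 25.095mm figures in the example.

```python
# Sketch of a guard-banded pass/fail check (assumes a guard band of half
# the 0.01 mm measurement resolution, matching the 24.905 - 25.095 mm band).

NOMINAL_MM = 25.0
TOLERANCE_MM = 0.1
RESOLUTION_MM = 0.01
GUARD_MM = RESOLUTION_MM / 2  # 0.005 mm

def part_passes(measured_mm: float) -> bool:
    """Return True if the measured dimension is confidently within tolerance."""
    low = NOMINAL_MM - TOLERANCE_MM + GUARD_MM   # 24.905 mm
    high = NOMINAL_MM + TOLERANCE_MM - GUARD_MM  # 25.095 mm
    return low <= measured_mm <= high

for value in (24.90, 24.905, 25.00, 25.095, 25.10):
    print(f"{value:.3f} mm -> {'PASS' if part_passes(value) else 'FAIL'}")
```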
But the camera resolution is only one factor affecting imaging accuracy. The optics mounted on the front of the camera, in particular the lens, also have a major impact on the ability to image the part properly.
Creating Optimum Contrast
Lens selection determines how well the camera sensor can detect an edge and how well contrast is preserved, and it interacts with how the part is presented to the imaging system. For example, a fixed focal length lens is an economical choice for many imaging applications and is often the right one. However, if the part’s position varies relative to the camera, the measured dimension will deviate from the calibrated reference.
For example, in our illustrative application, a 0.1mm variation in the working distance (WD) between the part and the camera, when using a 12mm fixed focal length lens, results in a dimensional variation of 0.118mm, which is greater than the tolerance of the part itself. Therefore, an acceptable part would still fail if it sits more than 0.1mm from our calibrated reference position.
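The sketch below illustrates why this happens, using a simple pinhole-camera approximation in which apparent size scales inversely with distance. The nominal working distance is an assumed value for illustration; the exact 0.118mm figure quoted above depends on the specific lens and setup.

```python
# Sketch of how working-distance (WD) variation shifts a measured dimension
# when using a fixed focal length lens. Pinhole approximation: the system is
# calibrated at NOMINAL_WD_MM, so a part closer or farther than that appears
# larger or smaller than it really is.

PART_MM = 25.0
NOMINAL_WD_MM = 50.0   # assumed calibrated working distance, for illustration

def measured_size(actual_size_mm: float, actual_wd_mm: float) -> float:
    """Dimension reported by a system calibrated at NOMINAL_WD_MM."""
    return actual_size_mm * NOMINAL_WD_MM / actual_wd_mm

for delta in (-0.1, 0.0, +0.1):
    m = measured_size(PART_MM, NOMINAL_WD_MM + delta)
    print(f"WD offset {delta:+.1f} mm -> measured {m:.3f} mm (error {m - PART_MM:+.3f} mm)")
```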
Additionally, a lens that does not match the camera resolution will produce a blurred edge, reducing the ability of the imaging system to accurately resolve the edge position, or will deliver poor edge contrast because the lens’s resolving power in line pairs per millimeter (LP/mm) is too low for the sensor.
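One quick sanity check is to compare the lens’s resolving power against the sensor’s Nyquist limit, which is one line pair per two pixels. The pixel pitch and lens specification below are assumed values for illustration, not figures from the example.

```python
# Sketch of a sensor-vs-lens resolving-power check. The Nyquist limit of the
# sensor (in line pairs per mm) is 1 / (2 x pixel pitch).

pixel_pitch_um = 3.45                 # assumed sensor pixel pitch in micrometers
lens_resolution_lp_mm = 100.0         # assumed lens resolving power at the sensor

sensor_nyquist_lp_mm = 1000.0 / (2.0 * pixel_pitch_um)   # ~145 lp/mm

print(f"Sensor Nyquist limit: {sensor_nyquist_lp_mm:.0f} lp/mm")
if lens_resolution_lp_mm < sensor_nyquist_lp_mm:
    print("Lens is the limiting factor: expect softened edges and reduced contrast.")
else:
    print("Lens resolving power matches or exceeds the sensor's requirement.")
```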
As an example, the edge in this image is represented by a range of grayscale values as it passes across several pixels of the camera sensor. The position of the edge can be determined more precisely by means of sub-pixel interpolation; in other words, the grayscale value of each pixel tells us where the edge lies within that pixel. With the part appearing black (0) and the background white (255 in an 8-bit grayscale), a pixel value of approximately 63 indicates the edge is 25% of the way across the pixel toward white, 127 indicates 50%, 190 indicates 75%, and 255 means the pixel is fully white.
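A short sketch of this sub-pixel interpolation follows. The linear gray-to-fraction mapping and the function names are illustrative assumptions; real vision tools typically fit the edge profile over several pixels rather than a single one.

```python
# Minimal sketch of sub-pixel edge localisation from a single edge pixel's
# grayscale value, following the 8-bit example above (0 = black part,
# 255 = white background). Assumes the pixel value is proportional to the
# fraction of the pixel covered by the white background.

def subpixel_edge_fraction(gray_value: int) -> float:
    """Fraction of the pixel (0.0 - 1.0) lying on the white side of the edge."""
    return gray_value / 255.0

def edge_position(pixel_index: int, gray_value: int) -> float:
    """Edge location in pixel coordinates, refined below whole-pixel precision."""
    return pixel_index + subpixel_edge_fraction(gray_value)

for value in (0, 63, 127, 190, 255):
    print(f"gray {value:3d} -> edge is {subpixel_edge_fraction(value):.0%} into the pixel")
```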