This question seems to come up like a broken record (for those of you under age 35, that means hearing the same thing over and over again).  All too often, companies buy machine vision systems to perform a specific task, and they fail…miserably.  But what is the problem?  When the vision salesperson proposed the solution, the image samples looked good!  The technology wasn’t cheap, and it came from a reputable brand name!  What could go wrong?  

Although the answer may be multifactorial, there are usually two primary issues when a vision system doesn’t perform well.  

#1 – The resolution is not correct

#2 – The contrast is not good

Determining the Correct Resolution

How does resolution impact machine vision performance?  It’s an important question, as higher resolution costs more money.  The short answer is you cannot detect what you cannot see, or, more accurately for machine vision, you cannot reliably measure what the sensor does not resolve with enough pixels.  To illustrate, let’s take a basic measurement application as an example.

Your application calls for detecting that your part is 25mm square with a tolerance of +/-0.1mm.  What is the appropriate resolution of the camera?

A simple rule of thumb is a 10X factor.  If the required tolerance is 0.1mm, then the resolution needs to be 10X finer than that tolerance, or 0.01mm per pixel.  We also need a Field Of View (FOV) large enough to capture the entire part; for this case we will assume 30mm square.  So simply 30mm/0.01mm = the minimum resolution required along the shortest camera axis, or 3000 pixels.  When considering standard sensor sizes, we find that a common 12MP sensor format is 4096×3000, thus our camera selection should be a minimum of 12MP.  
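
For concreteness, here is a minimal sketch of that rule-of-thumb arithmetic in Python, using only the numbers from the example above.

```python
# Rule-of-thumb resolution calculation from the running example.
tolerance_mm = 0.1          # part tolerance: +/-0.1 mm
rule_of_thumb = 10          # 10X factor: resolve 10x finer than the tolerance
fov_mm = 30.0               # field of view along the shortest camera axis

# Required spatial resolution: 0.1 mm / 10 = 0.01 mm per pixel
mm_per_pixel = tolerance_mm / rule_of_thumb

# Minimum pixel count on the shortest axis: 30 mm / 0.01 mm = 3000 pixels
min_pixels = fov_mm / mm_per_pixel

print(f"{mm_per_pixel} mm/pixel, {min_pixels:.0f} pixels minimum")  # 0.01, 3000
```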

Now let’s build on this basic illustrative example.  From purely the camera resolution point of view, we can measure the part dimensions with an accuracy of 0.01mm.  Therefore, in theory, parts with a dimension in the range of 24.905-25.095mm can be confidently passed, whereas parts smaller than 24.905mm or larger than 25.095mm will fail.  
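
As a sketch, the pass/fail logic might look like the following, assuming the acceptance limits are guard-banded by half the 0.01mm imaging accuracy, which is what produces the 24.905mm and 25.095mm figures above.

```python
# Pass/fail check with a guard band of half the 0.01 mm imaging accuracy
# applied to each tolerance limit, reproducing the 24.905-25.095 mm window.
NOMINAL_MM = 25.0
TOLERANCE_MM = 0.1
IMAGING_ACCURACY_MM = 0.01

def part_passes(measured_mm: float) -> bool:
    guard = IMAGING_ACCURACY_MM / 2            # 0.005 mm
    low = NOMINAL_MM - TOLERANCE_MM + guard    # 24.905 mm
    high = NOMINAL_MM + TOLERANCE_MM - guard   # 25.095 mm
    return low <= measured_mm <= high

print(part_passes(25.00))   # True: comfortably in tolerance
print(part_passes(24.90))   # False: cannot be confidently passed
```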

But the camera resolution is only one factor that impacts our imaging accuracy.  That other piece of hardware on the end of the camera, the lens, will have a major impact on our ability to image properly.  

Creating Sufficient Contrast

Lens selection determines how well the camera sensor can detect an edge, how well contrast is preserved, and how tolerant the system is to variations in part presentation.  For example, a fixed focal length lens is an economical choice for imaging applications and is often the right one.  However, when the part’s position varies relative to the camera, its apparent size changes, resulting in a dimensional deviation from the calibrated reference.

For example, in our illustrative application, a 0.1mm variation in the working distance (WD) between the part and the camera, when using a 12mm fixed focal length lens, results in a dimensional variation of 0.118mm, greater than the actual tolerance of the part.  Therefore, a perfect part would still fail if it sits more than 0.1mm from our calibrated reference position.  
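
To see the mechanism, here is a simplified pinhole-model sketch of how working-distance variation shifts the measured size.  The 50mm working distance is an assumed illustrative value; the 0.118mm figure above comes from the actual lens geometry, which this simplification does not reproduce.

```python
# Pinhole-model sketch: apparent size scales inversely with distance, so a
# part farther from the lens than the calibrated reference measures smaller.
TRUE_SIZE_MM = 25.0
WD_REF_MM = 50.0   # assumed calibrated working distance (illustrative only)

def measured_size_mm(wd_actual_mm: float) -> float:
    return TRUE_SIZE_MM * WD_REF_MM / wd_actual_mm

for delta in (-0.1, 0.0, 0.1):
    size = measured_size_mm(WD_REF_MM + delta)
    print(f"WD offset {delta:+.1f} mm -> measured {size:.3f} mm "
          f"(error {size - TRUE_SIZE_MM:+.3f} mm)")
```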

Additionally, a lens that does not match the camera resolution will blur the edge across several pixels, reducing the imaging system’s ability to accurately resolve the edge position, or will produce poor edge contrast because the lens itself resolves too few line pairs per millimeter (lp/mm).  
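
As a rough guide, the lens must resolve at least the sensor’s Nyquist frequency.  The sketch below assumes a 3.45µm pixel, a size typical of 12MP sensors in this format rather than a quoted specification.

```python
# Nyquist-limit sketch: the lens must resolve at least this many line pairs
# per millimeter at the sensor, or edges blur across multiple pixels.
PIXEL_SIZE_MM = 0.00345   # assumed 3.45 micrometer pixel

# One line pair spans two pixels at the Nyquist limit.
required_lp_per_mm = 1.0 / (2.0 * PIXEL_SIZE_MM)
print(f"Lens should resolve >= {required_lp_per_mm:.0f} lp/mm at the sensor")
# -> roughly 145 lp/mm
```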

Now for the processing of the image: an edge is not necessarily determined by a clean black-to-white transition.  This is where things get complicated (ahem, tedious) and why I chose not to offer this information as something one could listen to, say, while driving.

Say, for example, the edge in this image is represented by a variable grayscale as it passes through several pixels of the camera sensor.  The specific position of the edge can be determined more accurately by means of sub-pixel interpolation, in other words, determining the edge location from the grayscale value of the transition pixel.  If the part has a grayscale value of 0 (0 = black, 255 = white in 8-bit grayscale), then the edge position can be estimated within a pixel.  If the transition pixel has a value of 63, the edge is 25% of the way into the pixel; 127 = 50%; 190 = 75%.  So if our camera resolution is 1 pixel = 0.01mm, those values place the edge at 0.0025mm, 0.005mm and 0.0075mm into the pixel, respectively.  This is often used as a way to reduce the required camera resolution.  However, caution needs to be exercised.  In this image example, the grayscale value of the part is in fact 36, though perhaps you interpreted it as black, which again highlights how machine vision differs from human vision.  Thus, sub-pixel interpolation is a good method for increasing the accuracy of a well-engineered system, but it is not a practical way to engineer a cheap one.
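
Here is a minimal sketch of that sub-pixel arithmetic, including the trap described above when the part is assumed to be true black but actually has a gray value of 36.

```python
# Sub-pixel edge estimation from a single transition pixel's gray value,
# following the convention in the text: part = 0 (black), background = 255
# (white), and 1 pixel = 0.01 mm.
MM_PER_PIXEL = 0.01

def subpixel_edge_mm(pixel_value: float, part_gray: float = 0.0,
                     background_gray: float = 255.0) -> float:
    # The transition pixel's gray value is a mix of part and background;
    # the mix ratio tells us how far the edge sits into the pixel.
    fraction = (pixel_value - part_gray) / (background_gray - part_gray)
    return fraction * MM_PER_PIXEL

for value in (63, 127, 190):
    print(f"gray {value:3d} -> edge {subpixel_edge_mm(value):.4f} mm into the pixel")
# gray  63 -> 0.0025 mm, gray 127 -> 0.0050 mm, gray 190 -> 0.0075 mm

# The caution from the text: if the part is really gray value 36 rather
# than true black, assuming 0 skews the estimate noticeably.
print(f"{subpixel_edge_mm(63):.4f} mm assumed vs "
      f"{subpixel_edge_mm(63, part_gray=36):.4f} mm actual")
```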

All of this emphasizes the importance of beginning the application with the correct selection of hardware.  Too often, price dominates this evaluation process, which only creates an obstacle to the success of your project.  Instead, we focus attention on the other aspects of the project to ensure the system works before we evaluate the cost of the solution.

This brings us to other opto-mechanical issues that can relate to poor image capture, poor image processing, or unreliable vision performance.

  • Over-exposure: “pixel blooming”, whereby the electrical charge created by photons spills over into neighboring pixels, distorting the image (a quick saturation check is sketched after this list)
  • Improper focus: the edge is blurred over several pixels
  • Vignetting: reduced brightness and contrast toward the edges of the image
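
As a quick diagnostic for the first item, the sketch below flags frames with a suspicious fraction of fully saturated pixels.  It assumes an 8-bit monochrome frame in a NumPy array, and the 1% threshold is an assumed rule of thumb, not a standard.

```python
# Minimal over-exposure check: a large fraction of fully saturated pixels
# is a warning sign for blooming and lost edge information.
import numpy as np

def saturation_fraction(image: np.ndarray, saturated_value: int = 255) -> float:
    # Fraction of pixels pinned at the sensor's maximum output.
    return float(np.count_nonzero(image >= saturated_value)) / image.size

# Example with synthetic data; in practice, pass a captured frame, e.g.
#   image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
frame = np.random.randint(0, 256, size=(3000, 4096), dtype=np.uint8)
if saturation_fraction(frame) > 0.01:   # assumed 1% rule of thumb
    print("Warning: >1% of pixels saturated; check exposure and aperture.")
```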

These issues can all be solved by good lens selection and a proper lighting approach. 

Choosing the Best Lighting Sources

BUT…there is a multitude of lighting choices!  That is because getting an optimum image for automated processing demands a good lighting approach.  So what is best for your application: back-lit, front-lit, dome, spot, diffused, collimated, ring…?

The answer is a combination of science, testing, and experience.  The part material, the part presentation, requirements of the inspection, and exposure requirements all play a role in correct lighting selection.  But there are some measures that can be taken in virtually any imaging application that will make your system more reliable.

Configuring your vision system for the minimum practical camera exposure time will reduce the effects of ambient lighting.  This may entail a large aperture to deliver as much light as possible to the camera sensor, strobing the light to increase its instantaneous intensity, and physically shielding the imaged object from ambient light.
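
A back-of-the-envelope sketch shows why the short exposure and strobe combination works.  The illuminance values below are assumptions chosen purely for illustration.

```python
# Light integrated by the sensor is illuminance x time. The strobe only
# contributes while it is on; ambient light accumulates for the whole
# exposure, so shorter exposures shrink its share of the signal.
AMBIENT_LUX = 500.0       # assumed shop-floor ambient illuminance
STROBE_LUX = 50_000.0     # assumed strobed LED illuminance during the pulse

def ambient_share(exposure_ms: float, strobe_ms: float) -> float:
    ambient = AMBIENT_LUX * exposure_ms
    strobe = STROBE_LUX * min(exposure_ms, strobe_ms)
    return ambient / (ambient + strobe)

for exposure in (10.0, 1.0, 0.1):       # ms
    share = ambient_share(exposure, strobe_ms=0.1)
    print(f"{exposure:>5.1f} ms exposure -> ambient is {share:.1%} of the signal")
# 10 ms -> 50%, 1 ms -> ~9%, 0.1 ms -> ~1%
```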

Other measures include using bandpass filters to block unwanted wavelengths of light from the camera sensor, effectively narrowing the sensor’s total response and providing better control of the lighting conditions.  Below is a graph of the spectral response of the Baumer VCXU-123M, a 12MP monochrome camera.  From this we can see that the sensor has a peak response in the 575-630nm range (green to red).  However, any light from UV to NIR will create a response in this sensor.  Bandpass filters block these unwanted wavelengths so they cannot create large variations in our imaging response. 
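
A sketch of the idea: the effective response is the product of the sensor response and the filter transmission at each wavelength.  The response values below are rough placeholders, not the published VCXU-123M curve, and the filter is idealized as a hard cutoff.

```python
# wavelength (nm) -> assumed relative sensor response (placeholder values)
sensor_response = {400: 0.45, 450: 0.60, 500: 0.75, 550: 0.85, 600: 0.90,
                   650: 0.80, 700: 0.60, 750: 0.45, 800: 0.30, 850: 0.20}

def bandpass_transmission(wavelength_nm: int, center: float,
                          half_width: float) -> float:
    # Idealized filter: full transmission inside the passband, none outside.
    return 1.0 if abs(wavelength_nm - center) <= half_width else 0.0

# Effective response with an assumed 625 nm bandpass filter (+/-25 nm):
for wl, qe in sensor_response.items():
    effective = qe * bandpass_transmission(wl, center=625, half_width=25)
    print(f"{wl} nm: sensor {qe:.2f} -> with filter {effective:.2f}")
```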

So, why doesn’t your machine vision system work?  Most likely it comes down to one of these two main issues, resolution or contrast, or a combination of both.  However, the bigger question is: how can you get it to work?  That is where we can help.

At Skye Automation, we take an engineering approach to machine vision for industrial applications.  Our success is built on choosing the correct hardware and software and using the scientific method to control the variables, creating working vision systems and satisfied clients.

Talk to us for a system that is designed to meet your application, integrated to your system, and built to your individual needs.