Thursday, 22 February 2018


The speed of line scan cameras has increased greatly in recent years. Modern line scan cameras operate with integration times in the range of 15 µs. To achieve excellent image quality, illuminance levels of over 1 million lux are required in some cases. One of the most important criteria for assessing image quality is noise (white noise). There are various noise sources in image processing systems, and the most dominant one is called “shot noise”.
Shot noise has a physical cause and has nothing to do with the quality of the camera. It is caused by the particle nature of light: photons. Image quality depends on the number of photons that hit the object and, ultimately, on the number of photons that reach the camera sensor.
In a set-up with a defined signal transmission there are three parameters which influence the 'shot noise' when capturing an image:
  •  integration time (scanning speed)
  •  aperture (depth of field and maximum sharpness)
  •  amount of light on the scanned object
The choice of lens aperture largely determines the required light intensity. If, for instance, the aperture is stopped down from f/4 to f/5.6, twice the amount of light is required to maintain the same signal-to-noise ratio (SNR) – see fig. 01. Stopping down to a higher f-number also increases the depth of field and, with the majority of lenses, improves image quality due to reduced vignetting.
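To put numbers on this relationship, the following short Python sketch (our own illustration, not taken from the article) assumes an ideal, shot-noise-limited sensor: for such a sensor the SNR grows with the square root of the number of collected photons, and the collected light scales with the inverse square of the f-number.

# Minimal sketch, assuming an ideal, shot-noise-limited sensor.

def shot_noise_snr(photons: float) -> float:
    """SNR of an ideal, shot-noise-limited measurement collecting `photons` photons."""
    return photons ** 0.5

def light_increase(f_old: float, f_new: float) -> float:
    """Factor by which illumination must increase when stopping down from
    f/f_old to f/f_new while keeping the same SNR."""
    return (f_new / f_old) ** 2

print(shot_noise_snr(10_000))   # 100.0 -> an SNR of 100:1 at 10,000 photons per pixel
print(light_increase(4, 5.6))   # 1.96  -> roughly twice the light, as stated above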


LEDs are available in various colors: red, green, blue, yellow or amber. Even UV and IR LEDs are available. The choice of a specific color, and thus a specific wavelength, determines how object properties are made visible on surfaces with different spectral responses.
In the past, red light was often used wherever high intensity was required. Today, however, the most significant performance gains in LED technology are being made with white LEDs. These high-performance LEDs are used, for example, in car headlights and street lamps. The core of a white LED is actually a blue LED: using fluorescent substances, part of the light from the blue LED is converted into other visible spectral ranges in order to produce 'white' light.
UV LEDs are frequently used to make fluorescent effects visible. In many cases a wavelength of approx. 400 nm is sufficient. UV LEDs with shorter wavelengths may be suitable for curing paint, adhesives or varnishes. Compared to blue or white LEDs, UV LEDs are less efficient, although this can be improved by focusing the light with a reflector. IR lighting is used for food inspection, typically at wavelengths of 850 nm or 940 nm. When sorting recyclable material, wavelengths from 1,200 nm to 1,700 nm are used to identify the different types. In this range, however, IR LEDs cannot yet match the radiant output of classic halogen lamps with appropriate filters.


The compact design allows a very short warm-up phase, but it also requires good thermal dissipation to keep operating temperatures within appropriate limits. As a rule: the better the cooling, the longer the LED lifetime. Apart from lifetime, LED temperature also influences spectral behavior (possible color shifts) and overall output (luminance).
In systems where precise color reproduction is required, it is recommended to keep the lighting temperature steady at a predetermined value. Efficient control systems can currently regulate the LED temperature to within less than 2 °C.
Modern lighting systems, such as the Corona II lighting system developed by Chromasens, provide numerous cooling options. This includes passive cooling with thermal dissipation via convection, compressed air cooling, water cooling and ventilation. Active ventilation, compressed air or water cooling are good cooling methods for measuring applications situated in surroundings with high temperatures. By monitoring the temperature of the LEDs and regulating the cooling system, shifts in color reproduction can be completely avoided or at least greatly reduced.


If a flat object at a known, fixed distance is to be illuminated, focusing the lighting is relatively simple. Selecting the right lighting is more complicated if the object is not at a predetermined distance from the light or does not have a flat surface. In such cases, ensuring consistently sufficient image brightness is a challenge. Here, reflector technology helps to collect more of the light emitted by an LED (a larger coverage angle of the reflected light) and to distribute it better over depth.
In contrast to backlighting or bright field lighting, focused lighting is normally used for top lighting. Conventional lighting systems use rod lenses or Fresnel lenses to achieve the necessary lighting intensity. Chromasens adopts a completely different approach: while rod lenses cause color deviations due to refraction, the mirror (reflector) principle developed and patented by Chromasens does not suffer from this problem.
Shiny or reflective materials are a challenge for lighting, as unwanted reflections often appear in the image. In combination with a polarizing filter in front of the camera, rotated by 90 degrees relative to the polarization of the lighting, these unwanted reflections can be suppressed. When using such filters, certain factors have to be considered. One is the temperature stability of the filter: many polarizing filters can only be used within a limited temperature range. Another criterion is efficiency: with such a setup, only about 18-20 % of the original amount of light reaches the sensor. The amount of light provided by the lighting must therefore be high enough to keep noise low and still achieve good image quality.


When selecting the correct lighting for line scan camera applications, the following factors should be considered:
  •  The lens aperture and the amount of light significantly influence the signal-to-noise ratio
  •  LED systems offer clear advantages over traditional lighting technologies such as halogen or fluorescent lamps
  •  Good cooling ensures a long lifetime, consistent spectral behavior and a high level of brightness
  •  The use of reflectors assures optimal lighting, even from different distances
  •  Color LEDs, UV- and IR-LEDs are extremely versatile
  •  Polarizing filters prevent unwanted light reflection on shiny surfaces. The amount of light provided by the lighting must still be sufficient



Thursday, 15 February 2018


Choosing the right illumination for the application is critical for acquiring the high-quality images needed for calculating 3D data. We compare the imaging results of a directional coaxial brightfield illumination with a Corona tube light in terms of color image quality and height map for different samples. It can be shown that for materials that exhibit considerable amounts of subsurface scattering, a coaxial lighting geometry benefits 3D measurement with the 3DPIXA. In practice, it has to be kept in mind that introducing the beam splitter into the light path shifts the working distance of the camera system and slightly reduces image quality.


An illumination scheme in which the source rays are reflected from a flat sample directly into the camera is called a brightfield. With line scan cameras there are two ways to realize such a setup: either by tilting the camera and light source such that their angles with respect to the surface normal are equal but opposite, or by using a beam splitter. The first method is not recommended, as it can lead to occlusion and keystone effects. We therefore discuss the brightfield setup using a beam splitter.
Figure 1 shows the principle of this setup in comparison to a setup with a tubelight. The tubelight is the superior illumination choice for a wide array of possible applications. It reduces the intensity of specular reflections and evenly illuminates curved glossy materials. Most of the time the tubelight should be your first choice and only some materials require the use of a coaxial brightfield illumination.
One such example is material that exhibits strong subsurface scattering: light beams partially penetrate the material, are scattered multiple times, and then exit at a different location, possibly in a different direction. The result is a translucent material appearance. Examples of such materials are marble, skin, wax and some plastics.
Using tube light on such materials results in a very homogeneous appearance with little texture, which is problematic for 3D reconstruction. Using coaxial brightfield illumination results in relatively more direct reflection from the surface to the camera, as compared to a tube light illumination. This first surface reflection contributes to the image texture; the relative amount of sub-surface scattered light entering the camera is thereby reduced.
There are some specific properties that have to be taken into consideration when using a coaxial setup with a 3DPIXA. Firstly, at most 25% of the source intensity can reach the camera, as the rest is directed elsewhere during the two transits through the beam splitter. Secondly, the glass is an active optical element that influences imaging and 3D calculation quality. In chapter 3 we take a closer look at these factors and offer some guidelines for mechanical system design to account for the resulting effects. Before that, we discuss the effects of brightfield illumination on a selection of samples to give an idea of when this type of illumination setup should be used.


In this chapter we want to give you an impression of the differences between coaxial illumination and a tubelight, using different samples. As the tubelight we used the Chromasens Corona II Tube light (CP000200-xxxT), and for the brightfield we used a Corona II Top light (CP000200-xxxB) with diffusor glass together with a beam splitter made from 1.1 mm "borofloat" glass.
In figure 2 we show a scanned image of a candle made of paraffin, a material that exhibits strong subsurface scattering. With coaxial illumination (right image) the surface texture is clearly visible, and the height image shows the slightly curved shape of the candle. In comparison, the tube light image (left) contains very little texture, and height information could not be recovered for most of the surface (black false-colored region). The texture is only visible with coaxial illumination because under this condition the light reflected from the surface dominates the final image over the subsurface-scattered light. However, the ratio between these two effects varies with the surface inclination: the more the surface normal deviates from the camera observation angle, the less light is reflected directly from the first surface, and the lower the image texture becomes. For the candle sample, a deviation of more than 15° resulted in a failure to recover height information, which can be seen at the outer edges of the candle in the right image.
Figure 3 shows a second example. The substrate area in the tube light image (left) shows low texture, resulting in partially poor height reconstruction (black points in the false-colored image overlay). With coaxial illumination (right image), the amount of source light reflected back into the camera from the surface of the material is larger than the subsurface-scattered light; the image texture is higher and the height reconstruction performance improves.
However, if the height of the balls is the focus of the application rather than inspecting the substrate, the situation becomes more complex, as the coaxial illumination produces specular reflections on the tops of the balls. If these areas are saturated, height measurements are negatively affected as well.
The best illumination therefore strongly depends on the measurement task and materials used and can often only be determined by testing. If you are unclear which light source is best for your application, please feel free to contact our sales personnel to discuss options and potentially arrange for initial testing with your samples at our lab.


The beam splitter is essentially a plane-parallel glass plate which offsets each ray passing through it without changing its direction. The size of this offset depends on the angle of incidence, the thickness of the glass and its refractive index. The beam splitter should therefore be only as thick as mechanical stability requires. In the following analysis we assume a beam splitter of d = 1.1 mm "borofloat" glass.
The beam splitter's influence shifts the point from which the sharpest image can be acquired in all three spatial coordinates. The change along the sensor direction (called the x-direction) leads to a magnification change of the imaging system that is negligibly small (<0.4%, with a small dependence on camera type).
The change along the scan direction (called the y-direction) only offsets the starting point of the image. If the exact location of the scan line is important (e.g. when looking at a roller), the camera needs to be displaced relative to the intended scan line by
Δy = d*(0.30n – 0.12).
The equation is valid for all glass thicknesses d and is a linear approximation of the real dependency on n, where n is the refractive index of the glass material introduced into the light path. The approximation is valid in the interval of n= [1.4, 1.7] and for all types of 3DPIXAs. The direction of the displacement is towards the end of the beam splitter that is nearer to the sample, so in the scheme in figure 1 the camera has to be moved to the left.
The change of the working distance is different along the x- and y-axis of the system because of the 45° tilt of the beam splitter leading to astigmatism. In y-direction the working distance is increased by
Δzy = d*(0.24n + 0.23).
As above, the formula is valid for all d and n = [1.4, 1.7]. The change of the working distance along the x-direction is not constant but depends on the position of the imaged point, which leads to field curvature. Both astigmatism and field curvature slightly lower the image quality, which affects the imaging of structures near the resolution limit. They should not influence the 3D algorithm, however, as generally only height structures that are several pixels in size can be computed.
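As a quick plausibility check of the two formulas, the following Python sketch plugs in the 1.1 mm beam splitter from this article together with a refractive index of roughly 1.47 — a typical value for borosilicate ("borofloat") glass, used here as an assumption rather than a manufacturer figure.

# Worked example of the two approximation formulas above. The glass thickness
# is the 1.1 mm "borofloat" plate from the text; n = 1.47 is an assumed,
# typical value for borosilicate glass (check your supplier's data sheet).

d = 1.1    # beam splitter thickness in mm
n = 1.47   # refractive index, within the validity range [1.4, 1.7]

delta_y  = d * (0.30 * n - 0.12)   # displacement of the scan line (y-direction)
delta_zy = d * (0.24 * n + 0.23)   # increase of the working distance in y

print(f"scan-line displacement  dy  = {delta_y:.2f} mm")    # about 0.35 mm
print(f"working-distance change dzy = {delta_zy:.2f} mm")   # about 0.64 mm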
In addition to the optical effects discussed above, the beam splitter also changes the absolute height values computed by the 3D algorithm (i.e. the absolute distance to the camera). The exact value of this height change is slightly different for each camera. Generally, the measured distance between camera and sample decreases, so that structures appear nearer to the camera than they really are. This change is constant over the whole height range (simulations show a 0.2% change) and also constant over the whole field of view. In summary, relative height measurements are not influenced at all, and absolute measurements are shifted by a constant offset.
As the precise change of the calculated height is not known, the zero plane of the height map can't be used to adjust the camera to the correct working distance. We advise you instead to set up your camera using the free working distance given in the data sheet and to correct it with Δzy from above.


On certain translucent materials (those exhibiting considerable subsurface scattering of light), coaxial illumination can significantly increase image texture, which greatly benefits 3D height reconstruction. However, the additional glass of the beam splitter in the optical path of the camera system negatively influences the optical quality. Furthermore, the working distance of the system changes slightly and the absolute measured distances are offset by a constant value. This does not affect relative measurements, which are generally recommended with the 3DPIXA.



Thursday, 4 January 2018



Everyone prefers foodstuffs that are fresh and outwardly attractive. Image processing systems are frequently used during the quality assurance process for these products to ensure that this is the case. The image data helps producers make informed decisions that would otherwise be impossible.
But how are systems of this kind designed? What steps are necessary, what must be taken into account, and what options are available?
Selection of the camera, selection of the lens and lighting source, evaluation of image quality, selection of PC hardware and software, and the configuration of all components – all of these are important steps toward an effective image processing system.
Imagine an apple grower asks you to design a machine vision system for inspecting the apples. He’s interested in delivering uniform quality, meaning the ability to sort out bad apples while still working fast. He is faced with the following questions:
  •  What are the precise defined requirements for the system?
  •  Which resolution and sensors do I need?
  •  Do I want to use a color or monochrome camera?
  •  What camera functions do I need, and what level of image quality is sufficient?
  •  The eye of the camera: which image scale and lens performance do I need?
  •  Which lighting should I use?
  •  What PC hardware is required?
  •  What software is required?



This question sounds so obvious that it's frequently overlooked and not answered in the proper detail. But the fact remains: If you are clear up front about precisely what you want, you'll save time and money later.


  •  Only show images of the object being inspected, with tools like magnification or special lighting used to reveal product characteristics that cannot be detected with the human eye?
  •  Calculate objective product features such as size and dimensional stability?
  •  Check correct positioning — such as on a pick-and-place system?
  •  Determine properties that are then used to assign the product into a specific product class?


Which camera is used for any given application? The requirements definition is used to derive target specifications for the resolution and sensor size on the camera.
But first: What exactly is resolution? In classic photography, resolution refers to the minimum distance between two real points or lines in an image such that they can be perceived as distinct.
In the realm of digital cameras, terms like "2 megapixel resolution" are often used. This refers to something entirely different, namely the total number of pixels on the sensor, and not, strictly speaking, its resolution. The true resolution can only be determined once the overall package of camera, lens and geometry, i.e. the distances required by the setup, is in place. That is not to say the pixel count is irrelevant — a high number of pixels is indeed needed to achieve high resolutions. In essence, the pixel count indicates the maximum achievable resolution under optimal conditions.
Fine resolution or large inspection area — either of these requirements necessitates the greatest possible number of pixels for the camera. Multiple cameras may actually be required to inspect a large area at a high level of resolution. In fact, the use of multiple cameras with standard lenses is often cheaper than using one single camera with a pricy special lens capable of covering the entire area.
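As a simple illustration of how pixel count, field of view and camera count relate, here is a short sketch with hypothetical numbers (the sensor width, field of view, defect size and inspection width are our assumptions, not values from the article):

# Hypothetical sizing example: how fine a detail a given sensor resolves over
# a given field of view, and how many cameras are needed for a wide area.
import math

sensor_pixels    = 1600     # horizontal pixels of a 2 MP (1600 x 1200) sensor
field_of_view    = 160.0    # mm imaged by one camera
defect_size      = 0.5      # mm, smallest feature that must be detected
inspection_width = 640.0    # mm, total width to be inspected

pixel_resolution = field_of_view / sensor_pixels    # 0.1 mm on the object per pixel
pixels_on_defect = defect_size / pixel_resolution   # 5 pixels across the smallest defect
cameras_needed   = math.ceil(inspection_width / field_of_view)   # 4 cameras side by side

print(pixel_resolution, pixels_on_defect, cameras_needed)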
The sensor size and field of view dictate the image scale, which will later be crucial for the selection of the lens.


Generally speaking, most applications do not really need a color camera. Color images are often just easier on the eyes for many observers. Realistic color reproduction with a color camera also necessitates the use of white lighting. If the characteristics to be detected are defined by their color (such as red blemishes on an apple), then color is often — but not always — needed. In many cases these characteristics can also be picked up in black-and-white images from a monochrome camera if colored lighting is used. Experiments with known-good samples can help here. If color isn't relevant, then monochrome cameras are preferable, since color cameras are inherently less sensitive than black-and-white cameras.
Are you working with a highly complex inspection task? If so, you may want to consider using multiple cameras, especially if a range of different characteristics need to be recorded, each requiring a different lighting or optics configuration.


There’s more to a good camera than just the number of pixels. You should also take image quality and camera functions into account.
When evaluating the image quality of a digital camera, the resolution is one important factor alongside:
  •  Light sensitivity
  •  Dynamic range
  •  Signal-to-noise ratio
In terms of camera functions, one of the most important is the speed, typically stated in frames per second (fps). It defines the maximum number of frames that can be recorded per second.


Good optical systems are expensive. In many cases, a standard lens is powerful enough to handle the task. To decide what’s needed, we need information about parameters such as
  •  Lens interface
  •  Pixel size
  •  Sensor size
  •  Image scale, meaning the ratio between image size and object size. This corresponds to the size of an individual pixel divided by the pixel resolution (the pixel resolution being the edge length of the square on the inspected object that should fill exactly one pixel of the camera sensor) — see the short calculation after this list
  •  Focal length of the lens that determines the image scale and the distance between camera and object
  •  Lighting intensity
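The following short calculation sketches how these parameters fit together (the pixel size, pixel resolution and working distance are hypothetical values of ours, and the focal-length estimate uses a simple thin-lens approximation):

# Image-scale and focal-length estimate with hypothetical numbers and a
# thin-lens approximation; real lens selection should use the maker's data.

pixel_size       = 0.0035   # mm, edge length of one sensor pixel (3.5 um)
pixel_resolution = 0.1      # mm on the object that should map onto one pixel
working_distance = 300.0    # mm between lens and object

image_scale = pixel_size / pixel_resolution             # 0.035 (reproduction ratio)

# Thin lens: magnification m = f / (d_object - f), solved for f:
focal_length = image_scale * working_distance / (1 + image_scale)

print(f"image scale  m ~ {image_scale:.3f}")
print(f"focal length f ~ {focal_length:.1f} mm")        # ~10 mm -> pick the nearest stock lens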
Once this information is available, it becomes much easier to examine the spec sheets from lens makers to review whether an affordable standard lens is sufficient or whether a foray into the higher-end lenses is needed.
Lens properties like distortion, resolution (described using the MTF curve), chromatic aberration and the spectral range for which a lens has been optimized, serve as additional selection criteria.
There are, for example, special lenses for near infrared, extreme wide-angle lenses ('fisheye') and telecentric lenses that are specially suited for length measurements. These lenses typically come at a high price, though.
Here too the rule is: Tests and sample shots are the best way to clear up open questions.


It’s hard to see anything in poor light: It may seem obvious, but it holds true for image processing systems as well.
High inspection speeds typically require sensitive cameras and fast lenses. In many cases, however, the easier option is to modify or improve the lighting to boost the image brightness. There are a variety of options for attaining greater image brightness: increasing the ambient light, and shaping the light with lenses or flashes to create a suitable light source, are two examples. But it's not just the lighting strength that's important; the path the light takes through the lens to the camera matters too.
One common example from photography is the use of a flash: if the ambient lighting is too diffuse, a flash is used to aim the light in a targeted manner — although you then need to deal with unwanted reflections off smooth surfaces in the image area that can overwhelm the desired details. In image processing, these kinds of effects may actually be desired in order to deliver high light intensities on flat, low-reflecting surfaces. For objects with many surfaces reflecting in various directions, diffuse light is better.
We look at photos by reflecting light on them, while a stained glass window only reveals its beauty when the light shines through it.


Which hardware is required depends on the task and the necessary processing speed.
While simple tasks can be handled using standard PCs and image processing packages, complex and rapid image processing tasks may require specialized hardware.


Software is required to assess the images. Most cameras come together with software to display images and configure the camera. That’s enough to get the camera up and running. Special applications and image processing tasks require special software, either purchased or custom developed.


Thursday, 7 December 2017


USB 3.0 is the newest interface on the image processing market. Read here about when USB 3.0 is the ideal choice for your applications, which factors to remember during installation and which camera models Basler offers.


USB3 Vision cameras are an excellent tool for a variety of applications. Their bandwidth, which effectively closes the speed gap between the GigE and Camera Link interfaces, their simple plug-and-play functionality and their Vision Standard compliance make them especially suitable for industrial applications.
In addition, USB 3.0 is perfectly tailored to the latest generation of CMOS sensors, with the architecture and bandwidth to take advantage of all that the new technology has to offer.
Thanks to a decision by the USB Implementers Forum, the USB 3.0 interface may also henceforth be referred to as USB 3.1 Gen 1. Even with the new name, there are no technical differences from USB 3.0, and so the terms can be used synonymously. For simplicity’s sake and to avoid confusion, we will continue to refer to it as USB 3.0.
It’s important to distinguish it from USB 3.1 Gen 2 (a.k.a. USB Superspeed+), as this new generation of the interface offers a higher bandwidth than its predecessor.
Selected advantages of the USB 3.0 interface:
  •  Fast: high data throughput of up to 350 MB/s (see the short bandwidth calculation after this list)
  •  Outstanding real-time compatibility
  •  High stability
  •  Simple integration into all image processing applications (libraries)
  •  Reliable industrial USB3 Vision Standard
  •  Low CPU load
  •  Low latency and jitter
  •  Screw-down plug connectors
  •  Integrated memory and buffers for top stability in industrial applications.
  •  Plug and play functionality
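As a rough back-of-the-envelope check of what the 350 MB/s figure means in practice (the image format below is a hypothetical example, not a specific Basler model):

# How many frames per second fit through a USB 3.0 link at ~350 MB/s.
# The 350 MB/s figure is quoted above; the image format is an assumption.

usb3_throughput_mb_s = 350.0         # sustained USB 3.0 throughput
width, height        = 1920, 1200    # hypothetical ~2.3 MP sensor
bytes_per_pixel      = 1             # 8-bit monochrome

frame_size_mb  = width * height * bytes_per_pixel / 1e6   # ~2.3 MB per frame
max_frame_rate = usb3_throughput_mb_s / frame_size_mb     # ~150 fps before the link limits

print(f"{frame_size_mb:.2f} MB per frame -> up to ~{max_frame_rate:.0f} fps")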


The following camera series are available with the USB 3.0 interface:
Basler Ace
  •  Broad sensor portfolio: CCD, CMOS and NIR variants
  •  Extensive firmware features
  •  VGA to 14 MP resolution, up to 750 fps
Basler Dart
  •  Outstanding price/performance ratio
  •  Board level, small and flexible
  •  1.2 MP to 5 MP and up to 60 fps
Basler Pulse
  •  Compact and lightweight, with elegant design
  •  Global shutter and rolling shutter options
  •  1.2 MP to 5 MP and up to 60 fps



Wednesday, 22 November 2017


What is MACHINE VISION? Who is using machine vision? How can I get started using machine vision? These are all great questions when it comes to the exciting world of machine vision, its capabilities, and its impact on daily and yearly production outputs. In this blog, we’ll answer these questions and more, as we introduce you to the future.


Machine vision is the automatic extraction of information from digital images. A typical machine vision environment would be a manufacturing production line where hundreds of products flow down the line in front of a smart camera. Manufacturers use machine vision systems instead of human inspectors because they are faster, more consistent, and don't get tired. The camera captures a digital image and analyzes it against a pre-defined set of criteria. If the criteria are met, the object can proceed; if not, it is re-routed off the production line for further inspection.
Machine vision can be difficult to understand, so here is a very basic example: Say you are a beverage manufacturer. Traditionally, you would have human inspectors watching thousands of bottles move down a production line. The workers would need to ensure every bottle cap was secured correctly, every label was on straight and contained the correct information, and every bottle was filled to the appropriate level. With machine vision, this entire repetitive process can be automated to be faster, more efficient, and more productive.
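To make the bottling example a little more tangible, here is a deliberately simplified sketch of the pass/fail decision such a system makes (the criteria, thresholds and data structure are our own illustration, not a real Microscan API):

# Simplified illustration of the bottling inspection described above.
# Measurements would come from the vision processing step; the thresholds
# here are made-up example values.
from dataclasses import dataclass

@dataclass
class BottleMeasurement:
    cap_present: bool
    label_angle_deg: float   # measured label skew
    fill_level_mm: float     # measured fill height

def passes_inspection(m: BottleMeasurement) -> bool:
    """Return True if the bottle meets every pre-defined criterion."""
    return (
        m.cap_present
        and abs(m.label_angle_deg) <= 2.0        # label is on straight
        and 248.0 <= m.fill_level_mm <= 252.0    # filled to the correct level
    )

bottle = BottleMeasurement(cap_present=True, label_angle_deg=1.2, fill_level_mm=250.5)
print("pass" if passes_inspection(bottle) else "re-route for further inspection")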


Machine vision is used heavily in conjunction with robots to increase their effectiveness and overall value for the business. These types of robots resemble a human arm with a camera mounted at the “hand” position. The camera acts as the robot’s “eyes”, guiding it to complete the assigned task. (Visit our previous blog about integrating machine vision cameras with robots for more information.)
A machine vision system has five key components that can be configured either as separate components or integrated into a single smart camera. The correct configuration depends on the application and its complexity. The five key components are:
Lighting – This critical aspect of a machine vision system illuminates the part to be inspected, allowing its features to stand out so that the vision system can see them as clearly as possible.
Lens – Captures the image and presents it to the sensor in the form of light.
Sensor – Converts light into a digital image for the processor to analyze.
Vision Processing – Consists of algorithms that review the image and extract required information.
Communication – The resulting data is communicated out to the world in a useful manner.
Our MicroHAWK MV Smart Camera is a fully-integrated machine vision system, meaning that the lighting, lens, sensor, and vision processing are all handled on the camera itself. The resulting information can then be sent to a PC or tablet via Ethernet or USB. MicroHAWK is available with an array of hardware options to take on any inspection task in a wide variety of applications.


The machine vision market is growing rapidly. According to Statistics MRC, “the global machine vision market was estimated at $8.81 billion in 2015 and is expected to reach $14.72 billion by 2022, growing at a CAGR of 8.9% from 2015 to 2022”. Many retail giants use a vision system to track products in their warehouse from arrival to dispatch, aiding workers by eliminating the possibility of human error and automating repetitive tasks. “Items retrieved from storage shelves are automatically identified and sorted into batches destined for a single customer. The system knows the dimensions of each product and will automatically allocate the right box, and even the right amount of packing tape.” (MIT Technology Review). A worker will then pack the box and send it on its way.
Machine vision is better-suited to repetitive inspection tasks in industrial processes than human inspectors. Machine vision systems are faster, more consistent, and work for a longer period of time than human inspectors, reducing defects, increasing yield, tracking parts and products, and facilitating compliance with government regulations to help companies save money and increase profitability.
Microscan holds one of the world’s most extensive patent portfolios for machine vision technology, including hardware designs and software solutions to accommodate all user levels and application variables. Automatix, now part of Microscan, was the first company to market industrial robots with built-in machine vision. Our fully-integrated MicroHAWK MV Smart Camera, coupled with powerful Visionscape software, is one incredible platform created to solve your machine vision needs.


Reduce Defects
  •  Ensure fewer bad parts enter the market, where they can cause costly recalls and tarnish a company’s reputation.

  •  Prevent mislabeled products whose label doesn’t match the content. These defects create unhappy customers, have a negative impact on your brand reputation, and pose a serious safety risk – especially with pharmaceutical products and food items for customers with allergies.
Increase Yield
  •  Turn additional available material into saleable product.
  •  Avoid scrapping expensive materials and rebuilding parts.
  •  Reduce downtime by detecting product routing errors that can cause system disruptions.
Tracking Parts and Products
  •  Uniquely identify products so they can be tracked and traced throughout the manufacturing process.

  • Identify all pieces in the process, reducing stock and ensuring product will be more readily available for just-in-time (JIT) processes.
  • Avoid component shortages, reduce inventory, and shorten delivery time.
Comply with Regulations
  •  To compete in some markets, manufacturers must comply with various regulations.
  •  In pharmaceuticals, a highly regulated industry, machine vision is used to ensure product integrity and safety by complying with government regulations such as 21CFR Part 11 and GS1 data standards.


Another common machine vision application is counting – looking for a specific number of parts or features on a part to verify that it was manufactured correctly. In the electronics manufacturing industry, for example, machine vision is used to count various features of printed circuit boards (PCBs) to ensure that no component or step was missed in production.
Machine vision can be used to locate the position and orientation of a part and to verify proper assembly within specific tolerances. Location can identify a part for inspection with other machine vision tools, and it can also be trained to search for a unique pattern to identify a specific part. In the life sciences and medical industries, machine vision can locate test tube caps for further evaluation, such as cap presence, cap color, and measurement to ensure correct cap position.
Machine vision can be used to decode linear, stacked, and 2D symbologies. It can also be used for optical character recognition (OCR) of text that is simultaneously human- and machine-readable. In factory automation, machine vision is used to sort products on a production line by decoding the symbol on the product. The symbols themselves can also be verified by machine vision-based verification systems to ensure that they comply with the requirements of various symbology standards organizations.
Machine vision is a powerful tool that saves money and increases efficiency in virtually any industrial process. The MicroHAWK MV Smart Camera can be scaled from basic decoding to advanced inspection and integration with robotic applications.
Microscan will soon be announcing a new machine vision system that will make you re-evaluate your definition of fast.