Tuesday, 14 February 2017

Vision System Inspects X-ray Dosimeter Badges – Helmholtz-Zentrum

http://mvasiaonline.com/ 

In Germany, the inspection of x-ray dosimeters worn by people who may be exposed to radiation is a governmental responsibility, and only a handful of institutions are qualified to perform such tasks. One of these, the Helmholtz-Zentrum (Munich, Germany), is responsible for analyzing approximately 120,000 film badge dosimeters a month.

Previously, these 120,000 film badges were evaluated manually. To speed up this inspection and increase its reliability, the Helmholtz-Zentrum has developed a machine-vision system to inspect the films automatically. The film from each dosimeter badge is first mounted on a plastic adhesive foil, which is wound into a coil. This coil is then mounted on the vision system so that each film element can be inspected automatically (see figure). To analyze each film, a DX4 285 FireWire camera from Kappa optronics (Gleichen, Germany) is mounted on a bellows stage above the film reel.

Data from this camera is transferred to a PC and processed using HALCON 9.0 from MVTec Software (Munich, Germany). The resulting high-dynamic-range images are displayed on a FlexScan MX 190 S display from Eizo (Ishikawa, Japan) via an ATI FireGL V3600 graphics board from AMD (Sunnyvale, CA, USA). Before the optical density of the film is measured, its presence and orientation must be determined. As each film moves through the camera system's field of view, this presence-and-orientation task is performed using HALCON's shape-based matching algorithm.

Both the camera and a densitometer are used to measure the optical density of the film. The densitometer measures the brightness at each of seven points on the film with high precision and is used to calibrate the camera measurement for every film image. To increase the dynamic range of the gray-level image of the film, two images with different exposure times are acquired and combined into a high-dynamic-range image. Because the background lighting is not homogeneous, shading correction is performed to eliminate lighting variation, while flat-field correction removes lens vignetting and pixel-to-pixel sensitivity variation. Finally, the optical density is converted into a photon dose using a linear function to calculate the x-ray dose to which the film was exposed.
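The processing chain above can be sketched in a few lines. This is a hedged illustration of the general techniques named in the text (exposure fusion, flat-field correction, linear dose calibration), not the Helmholtz-Zentrum's actual HALCON code; all function names, the saturation threshold, and the calibration constants are our own assumptions:

```python
def flat_field(raw, dark, flat):
    """Flat-field correction: remove vignetting and pixel-to-pixel
    sensitivity variation using a dark frame and a flat (white) frame."""
    gain = sum(f - d for f, d in zip(flat, dark)) / len(flat)
    return [gain * (r - d) / max(f - d, 1e-9)
            for r, d, f in zip(raw, dark, flat)]

def hdr_combine(short_exp, long_exp, ratio, saturation=250.0):
    """Merge two exposures: use the long exposure (scaled back by the
    exposure-time ratio) unless it is saturated, else fall back to the
    short exposure."""
    return [l / ratio if l < saturation else s
            for s, l in zip(short_exp, long_exp)]

def density_to_dose(od, slope, offset):
    """Assumed linear calibration from optical density to photon dose;
    slope/offset would come from the densitometer's seven reference points."""
    return slope * od + offset

print(hdr_combine([10, 30], [100, 255], ratio=10))  # [10.0, 30]
```

The densitometer's seven high-precision readings would be used to fit `slope` and `offset` per film image, anchoring the camera's relative measurement to an absolute density scale.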

Every film reading must be correlated with the unique specimen number associated with each badge. Since these numbers are deposited directly onto the film material, approximately 10,000 characters had to be trained and saved to an OCR database using HALCON. After the film is identified, the system must also detect which type of dosimeter cassette was used to house the film. Since each cassette uses a different x-ray filter, the shadow cast on the film can be either rectangular or round, so a grayscale analysis of these shadows can distinguish between the cassette types. To pinpoint the specific causes of x-ray exposure, the system is also programmed to detect whether any potential exposure was caused by errors in film developing or by x-ray contamination. If the imaging system detects contamination events, these are flagged for manual review.
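One simple way such a round-versus-rectangular shadow test could work is a fill-ratio check: a rectangle fills its bounding box almost completely, while a circle fills only about π/4 (~79%) of it. This is our own illustrative sketch of a grayscale/region analysis, not the system's actual algorithm:

```python
def classify_shadow(mask):
    """mask: list of (row, col) pixels segmented as the dark filter shadow.
    Classifies the shadow by how completely it fills its bounding box."""
    rows = [r for r, _ in mask]
    cols = [c for _, c in mask]
    box_area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    fill_ratio = len(mask) / box_area
    # threshold of 0.9 is an assumed value between ~1.0 (rectangle)
    # and ~0.79 (circle)
    return "rectangular" if fill_ratio > 0.9 else "round"

square = [(r, c) for r in range(3) for c in range(3)]   # fully filled box
diamond = [(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)]      # round-ish region
print(classify_shadow(square), classify_shadow(diamond))
```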




To Know More About Machine Vision System in Singapore, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com


 MV ASIA INFOMATRIX PTE LTD


3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore - 048617
Tel: +65 63296431 
Fax: +65 63296432
Source - mvtec.com

Tuesday, 7 February 2017

INDUSTRIAL CAMERAS - LETTING ROBOTIC ARMS SEE

http://mvasiaonline.com 
Robotic arms are widely used in industrial automation. They complete tasks that humans cannot accomplish, that are too time-consuming or dangerous, or that require precise positioning and highly repetitive movements, and they do so with speed, reliability and precision. Robotic arms are used in all areas of industrial manufacturing, from the automobile industry to mold manufacturing and electronics, but also in fields where the technology might be less expected, such as agriculture, healthcare and service industries.

Robotic Arms "See" with Machine Vision

Like humans, robotic arms need "eyes" to see and feel what they grasp and manipulate: machine vision makes this possible. Industrial cameras and image processing software work together to enable the robot to move efficiently and precisely in three-dimensional space, allowing it to perform a variety of complex tasks: welding, painting, assembly, pick-and-place for printed circuit boards, packaging and labeling, palletizing, product inspection, and high-precision testing. Not all industrial cameras are compatible with or can be installed in robotic arms, but The Imaging Source's GigE industrial cameras provide an optimal solution.

GigE Industrial Cameras from The Imaging Source - The Cost Effective and Highly Versatile Imaging Solution

The Imaging Source's GigE industrial cameras are best known for their outstanding image quality, easy integration and rich feature set. They ship with highly sensitive CCD or CMOS sensors from Sony and Aptina, which offer very low noise levels, provide multiple options for resolution and frame rate, guarantee precise capture for positioning tasks and output first-rate image quality. External Hirose ports make the digital I/O, strobe and trigger inputs and flash outputs easily accessible. Binning and ROI features (CMOS only) enable increased frame rates and improved signal-to-noise ratios. The cameras' extremely compact and robust industrial housing makes for straightforward integration into robotic assemblies.

In addition, The Imaging Source's GigE industrial cameras are shock-resistant, so camera shake and blurred images can be avoided. The cameras ship with camera-end locking screws, and the built-in Gigabit Ethernet interface allows very long cable lengths (up to 100 meters) for maximum flexibility.

The Imaging Source's GigE industrial cameras come bundled with highly compatible end-user software and SDKs, which make setup and integration with robotic arms fast and simple. Trained personnel without extensive robot-programming experience can reprogram the cameras for new tasks in a snap. These characteristics, along with their competitive price, make The Imaging Source GigE industrial cameras the perfect solution for robotic arm applications.

Suitable cameras for robotic arms:
  • GigE color industrial cameras
  • GigE monochrome industrial cameras


To Know More About Imaging Source Machine Vision Cameras, Singapore, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com



Tuesday, 31 January 2017

Vision Helps Spot Failures on the Rail - Network Rail Ltd.

http://mvasiaonline.com
An automated vision inspection system relieves rail workers from the task of manually inspecting rail infrastructure

Traditionally, rail infrastructure has been inspected manually by foot patrols walking the entire length of a rail network to visually determine whether any flaws exist that could result in failures. Needless to say, the method is extremely labor intensive and time consuming.

To minimize disruption to train services, the manual inspection process is usually performed overnight and at weekends. However, due to the increase in passenger and freight traffic on rail networks, the time that can be allocated for foot patrols to access the rail infrastructure is now at a premium. Hence, rail infrastructure owners are under pressure to find more effective means of performing the task.

To reduce the time required to inspect its rail network, UK infrastructure owner Network Rail (London, England; www.networkrail.co.uk) is now deploying a new vision-based inspection system that looks set to replace the earlier manual inspection process. Not only will the system help to increase the availability -- and assure the safety -- of its rail network, it will also enable the organization to determine the condition of the network with greater consistency and accuracy.

Developed by Omnicom Engineering (York, UK; www.omnicomengineering.co.uk), the OmniVision system has been designed to automatically detect the same types of flaws that would be spotted by foot patrols. These include missing fasteners that hold the rail in place on sleepers and faults in weak points in the infrastructure such as at rail joints where lengths of rail are bolted together. The system will also detect the scarring of rail heads, incorrectly fitted rail clamps and any issues with welds that join sections of rail together to form one continuous rail.

System Architecture

The OmniVision system comprises an array of seven 2048 x 1 pixel line scan cameras, four 3D profile cameras, a sweeping laser scanner and two thermal cameras. Fitted to the underside of a rail inspection car, the vision system illuminates the rail with an array of LED line lights and acquires images of the track and its surroundings as the car moves down the track at speeds of up to 125 mph (Figure 1). The on-board vision system is complemented by an off-train processing system at Network Rail's facility in Derby that processes the data to determine the integrity of the rail network.

For every 0.8 mm that the inspection vehicle travels, two groups of three line scan cameras housed in rugged enclosures capture images of the two rails. Two vertically positioned cameras image the top surface, or head, of each rail, while the other four are positioned at an angle to capture images of the web of the rail. A seventh, centrally located line scan camera captures images of the area between the two rails, from which the condition of the ballast and the rail sleepers, as well as the location and condition of other rail assets that complement the signaling system, can be determined.

The cameras transfer image data over a Camera Link interface to frame grabbers in a PC-based 19 in rack system on board the train. The frame grabbers were designed in-house to ensure that the data transfer rate from the cameras could be maintained at approximately 145 MBytes/s and that no image detail is lost through compression. Once captured, the images from each of the cameras are written to a set of 1 TByte solid-state drives.
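As a rough plausibility check on these figures (our own back-of-the-envelope arithmetic, not a number from Omnicom), the line rate each camera must sustain at full speed follows directly from the quoted 125 mph and the 0.8 mm line spacing:

```python
# Estimate the per-camera line rate: one line every 0.8 mm at 125 mph.
MPH_TO_MS = 0.44704            # miles per hour -> metres per second
speed = 125 * MPH_TO_MS        # ~55.9 m/s
line_pitch = 0.8e-3            # metres travelled between successive lines
line_rate = speed / line_pitch # lines per second each camera must deliver
print(round(line_rate))        # 69850
```

So each line scan camera must run at roughly a 70 kHz line rate at top speed, which explains the need for custom frame grabbers and solid-state storage.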

Within the same rugged enclosure as the line scan cameras, the pair of thermal cameras, mounted at 45° angles, point at the inside web of each rail. Their purpose is to capture thermal data at points such as rail joints, which can expand and contract with ambient temperature. Both thermal cameras are interfaced via GigE connections to a frame grabber in the on-board 19 in rack, and their images are also stored on 1 TByte solid-state drives.

Further down the inspection vehicle, two pairs of 3D profile cameras capture a profile of the rails and the area surrounding them for every 200 mm that the vehicle travels. Data from the four cameras is transferred over a GigE interface to a dedicated frame grabber in the 19 in rack-mounted system and again stored on terabyte drives. The acquired data is used to build a 3D image of the rails, the fasteners that hold the rails to the sleepers, and the ballast around them.

In addition to the line scan, thermal and 3D profile cameras, the system employs a centrally mounted sweeping laser scanner on the underside of the inspection vehicle, which covers a distance of 5 m on either side of the rails. Data from the laser scanner, which is transferred to the 19 in rack-mounted system over an Ethernet interface and likewise stored on a set of terabyte drives, is used to determine whether the height of the ballast around the rail is too high or too low.

Processing Data

In operation, a vehicle fitted with the imaging system acquires around 5 TBytes of image data in a single shift over a distance of around 250 miles. Once acquired, the image data from all the cameras is indexed with timing and GPS positional data so that it can be correlated prior to processing. Data acquired during a shift is then transmitted to the dedicated processing environment at Network Rail, where it is transferred onto a 500 TByte parallel file storage system at an aggregate data rate of around 2 GB/s for a single data set.

Because the image data is tagged with the location and time at which it was acquired, it is possible to establish the start and end of a specific patrol, or part of a single shift. The indexed imagery associated with each patrol is then subdivided into sections representing several hundred meters of rail infrastructure, after which it is farmed out to a dedicated cluster of Windows-based servers known as the image processing factory. Once one set of image data relating to one section of rail has been analyzed by the processing cluster of 20 multi-core PC-based servers and the results returned, the next set of data is transferred to the processors, until an entire patrol has been analyzed.

To process the images acquired by the cameras, the OmniVision system uses the image processing functions in MVTec's (Munich, Germany; www.mvtec.com) HALCON software library. Typically, the images acquired by the line scan cameras are first segmented to determine regions of interest, such as the location of the rail. Once the rail has been located, an area of interest can be established around it where items such as fasteners, clamps and rail joints should be found. A combination of edge detection and shape-based matching algorithms is then used to determine whether a fastener, clamp or rail joint has been identified, by comparing the imaged objects with models stored in the system's database (Figure 2).

To verify whether objects such as fasteners or clamps are present, missing, or obscured by ballast, a more detailed analysis is performed on the data acquired by the 3D profile cameras as a secondary check. The 3D profile data is analyzed using HALCON's 3D pattern matching algorithm to determine the 3D position and orientation of the objects, even if they are partially occluded by ballast (Figure 3). Should the software be unable to match the 3D data with a 3D model of the object, the potential defect, known as a candidate, is flagged for further analysis and returned to a database for manual verification.

The system can also determine the condition of welds in the rail. As the vision inspection system moves over each of the welds, the line scan cameras capture an image of each one. From the images, the software can perform shape-based matching to identify locations where a potential joint failure may exist. Any potential failure of the weld is also flagged as a potential candidate for further investigation. Similarly, the 3D-based model created from data captured by the laser scanner can also be analyzed by the software to determine if the height of the ballast in and around the track is within acceptable limits.

Identifying defects

Through OmniVision's Viewer application - which runs on a set of eight PCs connected to the server - track inspectors are visually presented with a breakdown of the defects along with the images associated with them. This allows them to navigate through, review and prioritize any defects that the system may have detected. Once a defect has been identified, the operators can then schedule the necessary repairs to be carried out manually by on-track teams.

To date, three Omnicom vision systems have been fitted to Network Rail inspection vehicles and effectively used to determine the condition of the UK's West Coast mainline network. Currently, two additional systems are being commissioned and by the end of this year, Network Rail plans to roll the system out to cover the East Coast main line between London and Edinburgh and the Great Western mainline from London to Wales. When fully operational, the fleet of inspection vehicles will inspect more than 15,000 miles of Network Rail's rail network per fortnight, all year round.




To Know More About Machine Vision System Singapore, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com

 

Source - xenics.com

Tuesday, 24 January 2017

3D-Shape GmbH – At the Frontiers of Feasibility

http://mvasiaonline.com
Why 3D-Shape GmbH equips white light interferometers with Mikrotron cameras
How do you monitor the topography of micro parts when the demands of efficient production require short cycle times and high quality? 3D-Shape GmbH performs complex surface measurements using the principle of white light interferometry, advanced sensors, 3D image processing and powerful camera technology from Mikrotron GmbH.

Fast three-dimensional image processing technologies are becoming more and more important for quality control of complicated components. They are now superior to tactile measurement systems in speed, flexibility, precision and analytical possibilities. In keeping with the adage “a picture is worth a thousand words,” image analysis allows you to discover complex connections and many object parameters at a single glance.

A few years ago, high-precision measurements within industrial production lines were unimaginable. Today, reliable quality control with measurement uncertainties of only a few nanometers is possible even at short cycle times. This holds for applications in the electronics, aircraft and automotive industries through to the mold construction of micro parts with the highest precision.

The Measuring Principle of White Light Interferometry

Through rapid innovation cycles in processor and camera technology as well as in precision optics and image processing software, interferometry is increasingly coming into focus. With white light interferometry, the topographies of both rough and smooth objects can be measured and captured very precisely. Simply put, light from a source is separated into two beams by a semi-transparent mirror (beam splitter); one beam illuminates the measurement object, the other a reference mirror.

When the two reflected beams recombine, they interfere, producing brightness variations that are recorded on the image sensor of the camera. These are analyzed by special software, and each pixel is assigned a height value, creating a highly differentiated height profile in the nanometer range. If the process is carried out at various depth layers, complex structures are recorded over their full height.
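The per-pixel evaluation can be sketched as follows. This is a deliberately simplified model of the general white-light-interferometry principle (peak fringe modulation marks the surface), not 3D-Shape's actual software; the function name and the contrast measure are our own assumptions:

```python
def height_from_scan(intensities, step_nm):
    """intensities: brightness samples of one pixel as the interferometer
    scans through depth; step_nm: depth increment per sample.
    Returns the estimated surface height in nanometres: the scan position
    where the fringe modulation (deviation from mean brightness) peaks."""
    mean = sum(intensities) / len(intensities)
    contrast = [abs(v - mean) for v in intensities]
    peak_index = contrast.index(max(contrast))
    return peak_index * step_nm

# A pixel whose fringe contrast peaks at the third depth sample (50 nm steps):
print(height_from_scan([10, 10, 30, 10, 10], step_nm=50))  # 100
```

Real systems fit the fringe envelope sub-sample to reach nanometer uncertainty; taking the raw peak index, as here, is only the coarse first step.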

Performing High-Speed Measurements with the Highest Precision

Due to its compact design, the Korad3D system can be directly integrated into the production line.

The KORAD3D sensor product family, produced by 3D-Shape GmbH, utilizes the principle of white light interferometry. The sensors measure fields from 0.24 × 0.18 mm up to 50 × 50 mm; they are compact, can be integrated into production line systems, and cover a wide range of applications. They determine flatness and roughness on sealing surfaces, provide 3D imaging of milling and drilling tools, give information on wear on cutting inserts, and check the period length and step height of the smallest contacts in electronic devices. The achievable accuracy depends directly on the required measurement field size, the optics used, and the camera resolution.

The most important factor influencing the measurement accuracy and speed of a KORAD3D system is the performance of the built-in camera. Larger measurement fields are advantageous because they allow the system to be used for a variety of applications; however, the greater the measurement field, the less accurate the measurement. A key requirement for the camera is therefore megapixel resolution, in addition to other important aspects of image quality such as contrast, noise behavior, and sensitivity. At the same time, the camera must deliver a high frame rate: in many applications the entire structure is recorded layer by layer at very short cycle times within the production line, and doubling the measuring depth doubles the measuring time. The resulting large amounts of data can only be handled by a camera that captures and transfers images in real time.

Monitoring with KORAD3D in the µm Range

In order to continue processing the ball-grid arrays without errors, it is important to ensure that their top ends all lie within one narrow height range. The bumps in the arrays, arranged like a "nail board," can be checked with the KORAD3D for various characteristics down to the µm range.
Every single contact pin in the ball-grid array is precisely checked for size and shape down to the µm range. In just about one second, the topography of the entire group is captured.

Convincing on all Levels

When 3D-Shape was looking for a camera that best met all these requirements, only a few made the shortlist. The Erlangen-based company operates at the frontier of the physically possible and therefore needed a camera with the latest technology. A crucial tip came from a sensor manufacturer. According to the head of development, the Mikrotron EoSens® was ultimately the only camera that met all the requirements, so the company decided to equip the KORAD3D measuring system family with this camera.
At a full resolution of 1,280 × 1,024 pixels, the camera delivers up to 500 images per second via the high-performance Base/Full Camera Link interface (160/700 MB/second). This specification convinced the Erlangen-based company: for a customer in the electronics industry, it could monitor as-yet-unpopulated circuit boards at a frame rate of 180 fps (frames per second), and even higher frame rates of up to 500 fps are used in other applications.

Another important argument for the EoSens® was its outstanding light sensitivity of 2,500 ASA. This is due, among other things, to the large area of a single pixel (14 × 14 µm) and the high pixel fill factor of 40%. The investment required for lighting systems was thus reduced, and a wider range of brightness and contrast could be used for image processing.

Added to this was the switchable exposure optimization, which adapts the usually linear response of the CMOS sensor to the nonlinear dynamics of the human eye at two freely selectable levels. Bright areas are thereby compressed, and details can be resolved in all regions even with extreme light-dark differences. In the most demanding image processing tasks, this is a great advantage.

Given the cycle times the KORAD3D system has to maintain, every contribution to faster data processing matters. This includes the ROI function, which can be freely defined and customized to fit the size and location of individual tasks or of the image region to be evaluated. The amount of data is thus reduced, the analysis is accelerated, and considerably higher frame rates become possible. The built-in multiple-ROI function allows the user to define up to three different image fields in the overall picture; 3D-Shape GmbH is not using this in current applications but is already considering interesting solutions that would apply it in the future.
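The reason an ROI raises the frame rate can be illustrated with a simple model. For a sensor read out row by row, the achievable frame rate scales roughly inversely with the number of rows read; this is our own simplified approximation, not a Mikrotron specification (real cameras add fixed overheads per frame):

```python
def roi_frame_rate(full_fps, full_rows, roi_rows):
    """Idealized model: frame rate scales inversely with rows read out."""
    return full_fps * full_rows / roi_rows

# Reading only 256 of the sensor's 1,024 rows roughly quadruples
# the 500 fps full-frame rate in this model.
print(roi_frame_rate(500, 1024, 256))  # 2000.0
```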

To keep the measurement accuracy of the topographies created by the KORAD3D system within a narrow range, a number of imaging-quality features must work together. The global shutter of the EoSens® completely freezes the captured frame and stores it in real time while the next image is already being exposed, providing images of dynamic processes free of distortion and smear effects. In addition to the C-Mount lens mount, there is also an F-Mount option; the latter allows the operator to combine camera and lens into a fixed, calibrated unit, which increases the precision of the analysis. Beyond this outstanding performance, the camera's compact design, which simplifies system integration, wins customers over.




To Know More About Mikrotron High Speed Camera, Singapore, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com

 


 

Source - mikrotron.de

Tuesday, 17 January 2017

Basler Cameras by Comparison: Better Images, Lower Price with the Basler ace acA1300-200um

http://mvasiaonline.com
The PYTHON sensors from ON Semiconductor are currently among the most powerful sensors for the machine vision industry. They combine excellent image quality with high speeds and are therefore highly suitable for the challenging tasks of many machine vision applications, such as quality control or inspection tasks in the semiconductor industry. 

A sensor alone, however, is still no guarantee of a good image; much depends on how well the camera supports the sensor's performance. To obtain comparative values, we compared one of our Basler ace USB 3.0 cameras (acA1300-200um) with two competitors' USB 3.0 cameras, each equipped with the PYTHON 1300 sensor from ON Semiconductor. The test results show that, despite the identical sensor, there are in some respects considerable differences in the image quality of the three tested cameras.

Testing with different levels of illuminance

During the comparison test, different light conditions were simulated to examine how image quality behaves under each of them. The test conditions were identical for all three cameras: each camera was equipped with a Computar M1614-MP2 f/1.4 16 mm 2/3" lens for the individual tests.

In the first test, we lit a scene containing an X-Rite Color Checker in a dark environment with an illuminance of 1 lux (super-low light). With the exposure time fixed at 200 ms for all cameras, the gain of each camera was adjusted individually until the white field of the Color Checker showed a median gray value of 150 DN.

In a further comparison test, we created a brighter environment by illuminating the scene with an LED light source at 8.8 lux (low light). In this setup, the exposure time was set to a constant 20 ms, and the gain was again adjusted until a median gray value of 150 DN was reached.
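The gain-normalization step used in both tests can be sketched as a simple search loop. This illustrates the test procedure described above in general terms, not Basler's tooling; the function names, step size, and the toy linear sensor model are our own assumptions:

```python
import statistics

def adjust_gain(capture, target=150, gain=0.0, step=0.5, max_gain=48.0):
    """capture(gain) -> list of pixel values from the white Color Checker
    field. Raises the gain in small steps until the median gray value
    reaches the target (150 DN in the test), so all cameras are compared
    at the same brightness level."""
    while gain < max_gain:
        if statistics.median(capture(gain)) >= target:
            return gain
        gain += step
    return max_gain

# Toy sensor model: median brightness grows linearly with gain.
gain = adjust_gain(lambda g: [20 + 10 * g] * 5)
print(gain)  # 13.0
```

Normalizing the median to the same DN value is what makes the residual differences (banding, noise) attributable to the camera electronics rather than to exposure.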

Vertical (competitor 1) or horizontal (competitor 2) bands were discernible in the images from the competing cameras, primarily in the dark environment, while the image from the ace was homogeneous (cf. Fig. 2). In the brighter environment, the image from competitor camera 2 also showed vertical bands.
The ace camera also achieved the best values when the DSNU (dark signal non-uniformity, per EMVA 1288) was measured. The ace therefore has lower fixed-pattern noise, which also contributes to the fact that the Basler camera delivers the most homogeneous image in a dark environment.
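DSNU quantifies exactly this fixed-pattern effect: the pixel-to-pixel variation that remains after temporal noise is averaged away. The following is a simplified sketch in the spirit of the EMVA 1288 approach (average many dark frames per pixel, then take the spatial standard deviation of the mean dark frame), not the standard's full procedure:

```python
import statistics

def dsnu(dark_frames):
    """dark_frames: list of dark frames, each a flat list of pixel values.
    Averaging over frames removes temporal noise; the spatial spread of
    the remaining per-pixel means is the fixed-pattern non-uniformity."""
    n_pix = len(dark_frames[0])
    mean_frame = [sum(f[i] for f in dark_frames) / len(dark_frames)
                  for i in range(n_pix)]
    return statistics.pstdev(mean_frame)

frames = [[10, 12, 11, 13], [10, 12, 11, 13]]
print(dsnu(frames))  # spatial non-uniformity of the averaged dark frame
```

A lower DSNU value means less fixed-pattern structure, which is what shows up visually as the absence of banding in the ace's dark-environment image.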

Summary of test results

Although the cameras were tested under identical conditions, they exhibited different image qualities. In particular, the Basler ace camera impressed with its excellent image quality under varying light conditions.

Small, inexpensive, powerful – the Basler ace camera

The tested Basler ace camera acA1300-200um offers users not only very good image quality but also a high frame rate of 200 fps, at an excellent price. Two GPIO (general-purpose I/O) interfaces provide flexibility, the compact 29 mm x 29 mm housing simplifies system integration, and the USB 3.0 interface is very user-friendly.

The ace equipped with the PYTHON 1300 sensor from ON Semiconductor can be used wherever there is a need for a compact high-performance camera with a high frame rate and good image quality. Last but not least, its price/performance ratio is another factor that sways opinion in its favor.




To Know More About Basler Camera Distributor, Singapore, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com


Source - baslerweb.com

Tuesday, 10 January 2017

Putting Thread to The Test

http://mvasiaonline.com
The hemstitch sewing machine was invented in 1893 by Karl Friedrich Gegauf. In 1932, his son Fritz introduced the first household sewing machine, and the inventions and patents from BERNINA have kept coming ever since. The company, based in Steckborn, Switzerland, stands for the highest quality and durability: BERNINA sewing machines must therefore meet very high standards. They must be fast, low-vibration, quiet and wear-resistant; they should also stop with high precision, execute movements on time and repeat them predictably.

The mechanical challenges of a sewing machine are complex. A sewing machine is composed of a number of vibrating parts that mutually influence each other, and these vibrations are often associated with high imbalances due to the nature of the construction. When vibration frequencies are high, even the smallest deviations at the micro level can substantially affect the functionality of the machine. In addition, depending on the material used, the upper and lower thread may require different and often unpredictable movements, sometimes in the tightest of spaces.

Facilitating a diverse range of tests with the right camera


To analyze the problems arising from this, BERNINA AG needs accurate, efficient and reliable measuring instruments. These include the MotionBLITZ EoSens® mini2 by Mikrotron, a high-speed camera the company has been using in its construction and development department since 2009. Mr. Durville, head of the department, and Mr. Schwyn, test engineer at BERNINA AG, were particularly impressed by its flexibility. "The camera has quickly established itself as a flexible tool for many testing purposes," says Mr. Schwyn. It records the movement of the thread, but also monitors components operating under spring pressure or the needle-looper movement, and checks timing sequences. The resolution and speed of the camera are adjustable: at a resolution of 1,696 × 1,710 pixels (3 megapixels), the camera takes 523 images per second, and the images let you see every detail. Higher frame rates of up to 200,000 frames per second are possible at lower resolutions.

Recording the movement of the thread

A special challenge in the development of a sewing machine is the controlled movement of the thread: since commercially available threads come with a variety of surfaces, thicknesses, resistances and flexural properties, it is difficult to keep the thread path under control. In addition, operating conditions may range from a thread speed of 2 m/sec to a complete standstill; in the latter case, there is a danger that the thread may become hooked on machine components or slip out of the thread guide.

For a regular, high-quality seam, it is extremely important that the slip motion of the thread remain under control at all times and everywhere. The MotionBLITZ EoSens® mini2 by Mikrotron makes the movement of the thread visible to the human eye, so the design can be adapted accordingly.

Monitoring the thread guide

High-speed camera MotionBLITZ EoSens® mini2 - Monitoring the thread guide

Thanks to the high-speed imaging of the MotionBLITZ EoSens® mini2, it is possible to check whether the thread is moving in a controlled way or whether the alternations of tension and release are too great. With an image size of 1,696 × 1,710 pixels and a frame rate of 512 frames/second, the camera delivers high-resolution, accurate shots.

Monitoring components under spring pressure

In a sewing machine, spring pressure keeps many parts in position or lets them perform the necessary movements without hindrance. Choosing the spring force is a trade-off: high pressure leads to high friction and wear, while low pressure compromises reliability. The high-speed camera MotionBLITZ EoSens® mini2 helps manufacturers find the right spring design faster and more effectively.

Monitoring the spring pressure


MotionBLITZ EoSens® mini2 - Monitoring the spring pressure

The MotionBLITZ EoSens® mini2 is used to check the spring pressure of two components. A high frame rate of 3,668 frames/second was chosen for this recording; a resolution of 528 × 652 pixels was sufficient to document the interaction of the two components in detail.

Analyzing the sensitive needle-looper movement

The centerpiece of a sewing machine is the so-called looper. It catches the thread, which is "offered" to it by the needle. Only a few hundredths of a millimeter determine whether the looper catches the thread or not, i.e., whether a stitch is created or not. The needle is guided from the top of the machine; the looper from the bottom.

The long mechanical lever arms, the high speed and the many moving parts cause vibrations that threaten this very important function. Here, too, the high-speed camera MotionBLITZ EoSens® mini2 delivers precise insights.

Monitoring the needle-looper movement


The MotionBLITZ EoSens® mini2 allows various high-speed recordings of the sensitive interaction between needle and looper.

Checking temporal sequences


A sewing machine can only fulfill its task if its temporal sequences are executed with absolute precision and accuracy:
  • The fabric may move only while the needle is not inserted.
  • The needle should execute the zigzag (ZZ) movement only when it is not inserted in the fabric.
  • The servomotor for the fabric transport length should move only when no fabric is being transported.
  • The looper must be at exactly the right spot at the moment the needle forms a small thread loop.
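Interlocks of this kind can be verified frame by frame from a high-speed recording. The sketch below is purely illustrative: the per-frame states and field names are assumptions, not taken from BERNINA's actual tooling.

```python
# Hypothetical check of one interlock: the fabric may move only while
# the needle is not inserted. Each frame of a high-speed recording is
# reduced to boolean states (field names are illustrative assumptions).

def fabric_interlock_ok(frames):
    """frames: list of dicts with booleans 'needle_in' and 'fabric_moving'."""
    return all(not (f["needle_in"] and f["fabric_moving"]) for f in frames)

frames = [
    {"needle_in": False, "fabric_moving": True},   # feed while needle is up: ok
    {"needle_in": True,  "fabric_moving": False},  # stitch while fabric rests: ok
]
print(fabric_interlock_ok(frames))  # -> True
```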



To Know More About Mikrotron High Speed Camera in Singapore, UAE, Southeast Asia, Contact MV Asia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com


 MV ASIA INFOMATRIX PTE LTD


3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore - 048617
Tel: +65 63296431 
Fax: +65 63296432
Source - mikrotron.de

Tuesday, 3 January 2017

Basler's new proprietary feature set: PGI

Menzel Infomatrix SE Asia Pte Ltd.

Several of our newest camera models come with our powerful in-camera image optimization technology already built in: the proprietary PGI feature set enhances your images at the full speed of your camera. PGI combines 5x5 debayering, color anti-aliasing, denoising and improved sharpness.

With PGI, your camera will produce better images than ever, without putting additional load on your processor. The PGI feature set is available in dart and pulse color cameras and in all ace color camera models with sensors from the Sony Pregius series or PYTHON sensors from ON Semiconductor.
Harness the full power of pylon and activate the PGI features or change the settings for individual PGI components until you've achieved the optimal results. 

The following cameras possess the PGI feature set:
  • Basler dart camera series
  • Basler pulse camera series
  • Basler ace cameras with Sony IMX174, IMX249 or PYTHON sensors

Details on PGI

PGI comprises a combination of various features. Learn about the individual features of PGI below.

5x5 Debayering

The term debayering generally describes an algorithm that calculates the color image from the image sensor data. However, as the image sensor does not provide a value for every color in each individual pixel, the algorithm must interpolate the missing information. Cameras typically offer 2x2 debayering, whereby only a small 2x2 neighborhood of adjacent pixels is used to calculate the actual color of each pixel. This can lead in some cases to uneven edges between colors and other artifacts. The 5x5 debayering used in the PGI feature set draws on a larger 5x5 neighborhood, delivering cleaner transitions between colors and largely eliminating such artifacts.
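To make the interpolation idea concrete, here is a minimal bilinear debayering sketch for the green channel of an RGGB mosaic. It is illustrative only: PGI's actual 5x5 algorithm is proprietary, and real debayering reconstructs all three channels.

```python
# Minimal bilinear debayering sketch for an RGGB Bayer mosaic
# (green channel only). Illustrative; PGI's 5x5 algorithm is proprietary.

def debayer_green(mosaic):
    """Interpolate the green channel of an RGGB mosaic (list of lists).
    Green samples sit where (row + col) is odd; elsewhere average the
    available 4-neighbourhood."""
    h, w = len(mosaic), len(mosaic[0])
    green = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 1:           # native green sample
                green[r][c] = float(mosaic[r][c])
            else:                          # interpolate from neighbours
                nbrs = [mosaic[rr][cc]
                        for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                        if 0 <= rr < h and 0 <= cc < w]
                green[r][c] = sum(nbrs) / len(nbrs)
    return green

# Uniform green illumination reconstructs exactly; it is at edges and fine
# structures that small neighbourhoods produce the artifacts a larger
# (e.g. 5x5) support reduces.
mosaic = [[10, 50, 10, 50],
          [50, 10, 50, 10],
          [10, 50, 10, 50],
          [50, 10, 50, 10]]
print(debayer_green(mosaic)[1][1])  # interior red/blue site -> 50.0
```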

At the same time, the optimized debayering also assists with noise suppression.

Color-Anti-Aliasing

Due to the limits of camera resolution, debayering algorithms can easily produce color distortion in the captured images. In practical terms, even colorless structures suddenly appear to have color. Near the resolution limit, a color camera samples each color only at certain pixel positions, which in turn produces false interpolation values. PGI uses an expanded informational range during debayering while simultaneously correcting these false colors.

Denoising

Noise is a phenomenon that occurs in all cameras, and arises from a variety of causes (photon shot noise, sensor noise). Color cameras must not only deal with gray noise but also with color noise caused and reinforced by the sequencing of multiple calculation steps and interpolation. PGI accounts for and avoids this type of noise formation from the start through careful linking and parallelization of calculation operations. In addition, active noise filtering can be applied to further reduce the noise level, which in turn further enhances the image.
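One simple form of active noise filtering is a median filter, which removes isolated outlier pixels while preserving edges. The sketch below is generic and illustrative; the filter PGI actually applies is proprietary.

```python
# Illustrative 3x3 median filter -- one simple form of active noise
# filtering. PGI's actual denoising algorithm is proprietary.
import statistics

def median3x3(img):
    """Apply a 3x3 median filter to interior pixels of a grayscale image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are left unchanged
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = [img[rr][cc]
                      for rr in (r - 1, r, r + 1)
                      for cc in (c - 1, c, c + 1)]
            out[r][c] = statistics.median(window)
    return out

noisy = [[10, 10, 10],
         [10, 99, 10],   # a single "hot" noise pixel
         [10, 10, 10]]
print(median3x3(noisy)[1][1])  # -> 10, the outlier is suppressed
```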

Improved Sharpness

Color cameras often struggle to depict particularly fine or sharp structures. The results are aliasing effects or reduced image sharpness. This can be traced back to the interpolation algorithms, known as debayering for cameras with a Bayer pattern. Because its interpolation algorithm is adapted to the image structure, the PGI feature set delivers significantly improved sharpness, with the option to pursue further improvement via a supplemental sharpness factor. These enhancements are particularly helpful for applications requiring strong sharpness, such as applications using color cameras to detect and process letters and numbers (such as ANPR for traffic applications) or other fine or sharp-edged structures (such as barcodes).
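A classic way to boost sharpness with a tunable factor is unsharp masking: sharpened = original + k × (original − blurred). This sketch illustrates the general principle only; PGI's structure-adaptive algorithm and sharpness factor are proprietary.

```python
# Unsharp masking on a 1-D signal: sharpened = s + k * (s - blurred).
# Generic sharpening sketch; PGI's actual algorithm is proprietary.

def unsharp_1d(signal, k=1.0):
    """Sharpen a 1-D signal; k plays the role of a sharpness factor."""
    blurred = [signal[0]] + [
        (signal[i - 1] + signal[i] + signal[i + 1]) / 3   # 3-tap box blur
        for i in range(1, len(signal) - 1)
    ] + [signal[-1]]
    return [s + k * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 100, 100, 100]      # a step edge
print(unsharp_1d(edge))              # overshoot on both sides of the edge
```

The overshoot on either side of the step is exactly what makes edges appear crisper; too large a k exaggerates it into halo artifacts.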



To Know More About Basler Machine Vision Camera Distributor in Singapore, UAE, Southeast Asia, Contact MV Asia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com



Source - baslerweb.com