Want to learn more about the technology behind the latest industrial vision techniques? It's all here: read on for a comprehensive summary.
Deep learning is a subset of machine learning, which in turn is part of the overall umbrella of artificial intelligence (AI). AI was first formulated as far back as 1956. In the 1980s, machine learning emerged in the form of algorithms that can learn patterns in training data sets using some form of statistical analysis, and then look for those same patterns in ‘unknown’ data. Deep learning, however, utilises artificial neural networks to imitate the way the human brain works for recognition and decision-making.
Deep learning began to make an impact around 2012 thanks to several key breakthroughs. These included the development of deep neural networks with many hidden layers, the possibility of massive parallel processing at affordable costs through GPUs, large data storage capabilities and the availability of huge data sets for training. Now deep learning capabilities for machine vision are available through commercial image processing software.
In this in-depth feature, we take a look at deep learning and machine learning classification methods for industrial vision. With the incorporation of these tools into commercial vision software products, the use of these powerful methods is becoming more and more mainstream. We’ll take a good look at some of the applications that will benefit from their use as compared to traditional classification methods and find out just how easy it is to implement them.
Thanks are due both to UKIVA members and to presenters at UKIVA’s Machine Vision Conference & Exhibition (Acrovision, Cognex, Framos, IDS Imaging Development Systems, Matrox Imaging, Multipix Imaging, MVTec Software and Stemmer Imaging) for their extensive contributions to this special feature.
One of the more recent developments in machine vision is the wide commercial availability of algorithms and software tools that can process and measure pixels in the third dimension. The most common applications work in two dimensions, X and Y. In the real world, this translates to the accurate location of an object within the image, such as the actual position of a product on a conveyor belt. In a manufacturing environment this works very well when the product type and size are known: as long as the product outline can be recognised and measured, the height can be assumed to be a fixed value.
Where multiple product types on a conveyor belt are presented to the camera, this can be a problem for traditional 2D systems: if a product's height is not what the system expects, the inspection will fail. A 3D vision system can determine a pixel's position not just in X and Y, but also in Z.
3D machine vision is achieved using a variety of techniques, which include (but are not limited to) stereo vision, point clouds and 3D triangulation. Taking stereo vision as an example, this works in the same way as the human visual system (or that of any animal with two eyes). Images from each eye are processed by the brain, and the difference between the images caused by the displacement of the two eyes gives us perspective. This is critical when we are judging distances.
3D stereo machine vision uses two cameras in the same way. The software reads both images and compares the differences between them. If the cameras are calibrated so that the relative position of each camera is known, then the vertical position (Z) of an object can be measured. In computing terms this takes more processing time than an X and Y measurement, but with modern multi-core processors now ubiquitous, 3D machine vision is no longer limited by processing time, meaning ‘real-time’ systems can be improved with 3D machine vision.
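To make the geometry concrete, here is a minimal sketch of the stereo principle for a calibrated pair with parallel optical axes; the focal length and baseline values are purely illustrative, not taken from any particular system.

```python
# Illustrative sketch of the stereo principle: depth (Z) from disparity.
# For a calibrated pair with parallel optical axes, Z = f * B / d, where
# f is the focal length in pixels, B the baseline between the cameras and
# d the disparity (pixel shift of the same feature between the two images).
def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.10):
    # f and B are assumed values; real systems obtain them from calibration
    if disparity_px <= 0:
        return None  # feature not matched, or effectively at infinity
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(35.0))  # ~4.0 m for the assumed f and B
```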
An obvious real-world benefit of this is 3D robot guidance. Returning to the initial example of product dimensions on a conveyor belt, a 3D robot guidance system will deal with product variants even when the next product type and size are unknown. With a competent 3D vision system, the robot receives not only X, Y and Z data but also the corresponding roll, pitch and yaw angles of each pixel in the combined image.
With the increasing demands of consumers in terms of both quality and quantity, automated inspection, also known as machine vision, is now one of the key technology areas in the manufacturing industry.
Traditionally, a manufacturing process has a human element at the inspection stage, where products are visually inspected. This often involves handling the product, so at manufacturing speeds inspection may only cover a sub-set of the entire production run. Today's markets, however, demand higher production throughput with guaranteed quality, and this is where the importance of automated inspection is realised.
With the use of industrial cameras, illumination and optics, a high-quality image of the product can be captured, at anything from a few images a second to hundreds of images per second depending on the demands of the manufacturing process. Once the image has been captured, it is processed to perform the required inspection tasks.
There is also an emerging area of vision that deals with 3D data analysis, which is critical for processes that require robotic control, such as pick and place.
Furthermore, by combining the latest techniques it is now possible to realise systems that not only inspect the product but also handle it, based on industrial cameras for grabbing images and robust image processing software for the inspection.
Manufacturing processes need to automate to remain competitive and to cope with the demands of the worldwide consumer base. For these reasons alone, automated inspection is a key technology now and in the future.
3D cinema films, 3D TV and 3D gaming consoles are all familiar concepts in the world at large, but now 3D machine vision imaging is increasingly having a major impact in a wide range of industries and applications from volumetric measurements, to inspection for packaging and robot vision.
The biggest challenge for 3D machine vision imaging is time. Creating complex 3D images is computationally intensive and therefore time-consuming. It has been the emergence of processors capable of handling the computational overhead at production line speeds that has been the key to establishing true 3D measurement techniques and making them a credible alternative to 3D contact measurement and metrology.
2D or 3D?
However, because 3D imaging is so processor-intensive, it is important to be able to assess whether an application needs 3D measurements or whether conventional 2D imaging is more appropriate. Looking at the component in the picture, if you just want to measure the inner or outer diameter, 2D imaging is more than adequate, but if you want to measure the defect in the surface, 3D imaging is needed. In the same way, using 3D robot vision to pick unordered parts enables manufacturers to save a lot of time and resources shifting or organising parts in the manufacturing process.
Just as no single imaging configuration is suitable for every possible 2D application, the same holds for 3D applications. Several different 3D imaging techniques have evolved with different capabilities, and there is plenty of choice of components and systems from different suppliers. These measurement techniques are highlighted in our special centre page spread, which also looks at issues such as calibration, shadowing effects and applications of 3D measurements, including inspection and robot vision.
UKIVA members can offer further advice on the different camera formats and technology.
The major imaging toolkits that are available from several different vendors offer a multitude of 3D measurement and image manipulation and presentation capabilities. Vision tools are available for registration and triangulation of point clouds, calculation of features like shape and volume, segmentation of point clouds by cutting planes, and many more. It is possible to make a 3D surface comparison between the expected and measured shape of a 3D object surface. ‘Golden template’ matching is also possible in 3D with deviations between the template and test part identified in real time using real 3D point clouds and automatically adjusted for variations in orientation in all 3 axes. With 3D ‘smart’ cameras, however, acquisition, measurement, decision and control are all performed within the unit, although data can be output for further processing if required.
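As a rough illustration of how such a 3D ‘golden template’ comparison can be performed, here is a minimal sketch using the open-source Open3D library (an assumption; the commercial toolkits described here provide equivalent functions). The file names and correspondence distance are placeholders.

```python
# Minimal sketch of 3D 'golden template' comparison. ICP registration aligns
# the measured part to the template before deviations are computed, so
# accurate fixturing of the part is not required.
import numpy as np
import open3d as o3d

template = o3d.io.read_point_cloud("golden_template.ply")  # illustrative file
measured = o3d.io.read_point_cloud("measured_part.ply")

# Align the measured cloud to the template (rigid registration)
result = o3d.pipelines.registration.registration_icp(
    measured, template, max_correspondence_distance=0.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
measured.transform(result.transformation)

# Per-point deviation of the measured part from the template surface
deviations = np.asarray(measured.compute_point_cloud_distance(template))
print("max deviation:", deviations.max())
```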
Many 3D applications can work reliably with non-calibrated images, while others do need calibrated images and measurement data. Since the calibration process can be demanding, it is worth making sure that having real-world corrected units is really necessary. The easiest calibration set-up comes with 3D smart cameras, where the laser, camera and lens are contained in a single housing. These systems are precision factory-aligned to provide consistent, reliable measurements in real-world coordinates, and require only minor adjustments. For systems where the components are mounted independently, calibration involves moving a known object accurately through the field of view using a positioning stage. From this, the system can build a lookup table for converting XYZ pixel values to real-world coordinates.
Most of us have had some experience of high-speed imaging. Classic images such as the corona formed in a liquid when a droplet hits the surface or the rupturing of a balloon’s fabric as it bursts are well known. High-speed imaging is used extensively in filmmaking and TV programmes such as sports coverage.
However, high-speed imaging is a diverse technology which also has a host of industrial applications.
In the manufacturing industry, it is used for inspection applications on high-speed production lines.
Process and machinery diagnostics are another important application since even minimal discrepancies in high-speed process machinery mechanisms can cause an entire production line to come to a standstill. Intermittent failures can be even more difficult to troubleshoot. High-speed diagnostic systems can record image sequences both before and after an event for slow-motion review to allow causes of failures to be identified and any necessary adjustments made.
High-speed cameras need to be capable of short exposure times and fast frame rates. Short exposure times are needed when imaging a fast-moving object (typically so the object moves less than one pixel during the exposure) to avoid motion blur. However, when a series of objects are moving past the camera, high frame rates will also be required to ensure that each item is imaged on a successive frame for analysis.
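A back-of-envelope calculation shows how these two requirements are derived; all figures below are assumptions for illustration.

```python
# Sketch of high-speed imaging parameters (illustrative numbers). Exposure is
# chosen so the object moves less than one pixel during the exposure, and
# frame rate so each successive item appears in its own frame.
line_speed_mm_s = 2000.0   # conveyor speed (assumed)
pixel_size_mm = 0.1        # size of one pixel projected onto the object (assumed)
item_pitch_mm = 50.0       # spacing between successive items (assumed)

max_exposure_s = pixel_size_mm / line_speed_mm_s   # 50 microseconds
min_frame_rate = line_speed_mm_s / item_pitch_mm   # 40 frames/second
print(f"exposure <= {max_exposure_s * 1e6:.0f} us, frame rate >= {min_frame_rate:.0f} fps")
```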
Many manufacturers would suggest that ‘high-speed’ cameras operate at frame rates over 200 frames/second although frame rates of thousands of frames/second may be required for troubleshooting manufacturing process problems. However, effective high-speed imaging is a function of much more than just the frame rate and exposure time of the camera. Factors such as illumination intensity needed, camera and light source triggering, data capture, transfer, processing and storage all play a key role and UKIVA members who are vision technology suppliers or vision systems integrators can provide expert advice on this specialist topic. More details on these technology considerations as well as some application examples can be found in our special centre page spread.
Light Emitting Diodes (LEDs) are a popular form of illumination for machine vision applications, offering a good deal of control. They can be readily pulsed, or strobed, to capture images of objects moving past the camera at high speeds. Strobing needs to be synchronised with the objects to be inspected so that the camera is triggered at the same moment as the pulse of light. The short exposure times required for high-speed imaging mean that high light intensities are required. It is possible to dramatically increase the LED intensity over short exposure times by temporarily increasing the current beyond the rated maximum using lighting controllers. However, the LED must be allowed to cool between pulses to avoid heat damage. Lighting controllers can provide fine adjustment of the pulse timing, which is often more flexible than the camera's timing. The camera can then be set for a longer exposure time and the light pulsed on for a short time to ‘freeze’ the motion.
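The sketch below illustrates the kind of average-current check involved in overdriving an LED; all figures are assumptions, and the manufacturer's pulse ratings always take precedence.

```python
# Illustrative average-current check for an overdriven (strobed) LED.
# The LED cools between pulses only if the average current stays within
# its continuous rating -- all values here are assumed for illustration.
rated_current_a = 1.0     # continuous rating (assumed)
pulse_current_a = 5.0     # overdriven pulse current (assumed)
pulse_width_s = 50e-6     # strobe pulse width
frame_rate_hz = 100.0     # strobe repetition rate

duty_cycle = pulse_width_s * frame_rate_hz       # fraction of time the LED is on
average_current_a = pulse_current_a * duty_cycle
print("safe" if average_current_a <= rated_current_a else "too hot")
```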
High-speed imaging requires that the exposure of the camera happens exactly when the object is in the correct position. Initiating the start of an exposure at a particular time is called triggering. If a camera is free-running, the moving object could be anywhere in each captured frame, or even completely absent from some frames. Triggering delivers image acquisition at a precise time. The frequency of the trigger should not exceed the maximum frame rate of the camera, to avoid over-triggering; this also means that the exposure time cannot be greater than the reciprocal of the trigger frequency. The exposure is generally triggered by an external source such as a PLC, with a simple optical sensor often used to detect when the object is in the correct position. Precise triggering is very important for high-speed imaging, and in very high-speed applications great care must be taken to assess and reduce all of the factors that can delay the path from initiating a signal to the resultant action in the sensor, to ensure the required image is acquired. These factors include opto-isolators in cameras as well as latency and jitter within the imaging hardware.
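A simple calculation shows how latency and jitter translate into positional error of the object in the frame; the figures are assumptions for illustration.

```python
# Illustrative timing budget: fixed trigger latency shifts the object by a
# constant amount (which can be compensated), while jitter produces a
# residual positional uncertainty that cannot.
object_speed_mm_s = 2000.0
trigger_latency_s = 20e-6   # fixed delay from trigger signal to exposure start (assumed)
trigger_jitter_s = 5e-6     # variation in that delay (assumed)

offset_mm = object_speed_mm_s * trigger_latency_s      # compensatable
uncertainty_mm = object_speed_mm_s * trigger_jitter_s  # not compensatable
print(f"fixed offset {offset_mm:.3f} mm, residual uncertainty {uncertainty_mm:.3f} mm")
```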
High frame rates and high spatial resolution generate high volumes of data for processing. Image data are generally transferred directly to a PC’s system memory or hard disk. This relies on an appropriate interface speed between the camera and the computer, and on the speed of the computer itself. Several vision image data transfer standards exist, such as GigE Vision, USB3 Vision, Camera Link and CoaXPress.
If one of these interfaces offers an acceptable data transfer rate for the application and long sequences are required, this is a good solution. The alternative is to have the image recording memory within the camera itself, which increases data throughput significantly, since images are held in the camera without any need for transmission while recording. However, the amount of onboard memory is significantly less than a PC hard drive, which means that only relatively short sequences can be recorded.
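A quick calculation, using assumed figures, shows why onboard memory limits sequence length.

```python
# Sketch: recording time available from in-camera memory at a given data
# rate. All figures are assumptions for illustration.
width, height, bytes_per_pixel = 1280, 1024, 1   # 8-bit mono sensor (assumed)
frame_rate = 1000.0                              # frames/second (assumed)
onboard_memory_bytes = 8 * 1024**3               # 8 GB in-camera buffer (assumed)

frame_bytes = width * height * bytes_per_pixel
data_rate_mb_s = frame_bytes * frame_rate / 1e6
seconds = onboard_memory_bytes / (frame_bytes * frame_rate)
print(f"{data_rate_mb_s:.0f} MB/s; onboard memory holds ~{seconds:.1f} s of footage")
```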
High-speed vision systems can significantly improve the accuracy of diagnostic analysis and maintenance operations in industrial manufacturing applications. Users can record and review a high-speed sequence, either frame by frame or using slow-motion playback to allow perfect machine setup and system synchronisation. Alternatively, the system can be used as a 'watchdog' by continuously monitoring a process and waiting for a predefined image trigger to occur. These troubleshooting systems are generally portable and can be used in a wide range of manufacturing applications including bottling lines, packaging manufacture, food production lines, plastic container manufacture, pharmaceutical packaging, component manufacturing, paper manufacture and printing.
Troubleshooting applications can require short exposure times, so high-intensity illumination is required. Camera frame rates of thousands of frames/second generate a lot of data at high speed, especially at high spatial resolution. Rather than trying to transfer this data to a PC, built-in high-speed ring buffers may be used for image recording. Image sequences can be replayed in slow motion on self-contained image displays after the event has been recorded, or transferred to the hard disk of a PC for later review. Image sequences can be recorded in standard video file formats.
Specialist triggering is used for troubleshooting because it is generally important to see what is happening both before and after the trigger event. The system continuously records into a ring buffer and, once full, starts overwriting the oldest records. Once a trigger event has occurred, the system records until the ring buffer is full again and then stops. In this way, both pre-event and post-event information is acquired. Sequences can also be triggered by monitoring changes in intensity or movement in the image, meaning that the camera triggers itself to send an image or sequence, removing the need to generate a trigger in hardware. This is particularly useful for capturing intermittent or random events.
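As a minimal sketch of the pre/post-event scheme described above (the frame source and trigger test are placeholders, not a real camera API):

```python
# Pre/post-event recording with a ring buffer. The deque automatically
# overwrites its oldest entries, mirroring the behaviour described above.
from collections import deque

PRE_EVENT_FRAMES = 500    # frames kept from before the trigger (assumed)
POST_EVENT_FRAMES = 500   # frames recorded after the trigger (assumed)

ring = deque(maxlen=PRE_EVENT_FRAMES)  # oldest frames overwritten automatically

def record_sequence(grab_frame, trigger_seen):
    """Continuously record; return the frames around the event once triggered."""
    while not trigger_seen():
        ring.append(grab_frame())      # pre-event: keep overwriting
    post = [grab_frame() for _ in range(POST_EVENT_FRAMES)]
    return list(ring) + post           # full pre- and post-event sequence
```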
UKIVA members can offer further advice on the different camera formats and technology.
End-of-line inspection is one of the most important uses of vision in the manufacturing industry, with applications on both manufacturing and packaging lines. The combination of vision technology developments and the emergence of specialist vision systems integrators makes the use of vision much more practicable. In addition, pressure from major players in the industry and legislative requirements in the pharmaceutical industry are making the use of vision an essential requirement.
Major players in several industries can impose stringent quality requirements on their suppliers. In the food industry, supermarkets wield a significant amount of power over their suppliers. Margins are squeezed, and penalties can be imposed on suppliers whose products or packaging do not meet their demanding standards. The use of vision technology can help meet such demands. The automotive industry is one of the world's most cost-sensitive industries and also one of the most demanding in terms of product quality and aversion to component failures. Both manufacturers and component suppliers rely increasingly on leading-edge vision technology to validate complex assembly processes. The use of vision also supports the industry’s changing approach to quality inspection, which now concentrates on differentiating between critical defects – those that affect the functionality of the object – and non-critical ones.
Vision plays an important role in reading unique identifiers in the form of 1D or 2D codes, alphanumerics or even braille for tracking and tracing applications in industries as diverse as aerospace, automotive, food, healthcare and pharmaceutical. Human-readable on-pack data, such as batch and lot numbers and best-before or expiry dates, are also critical for products such as food, pharmaceuticals, medical devices and cosmetics. In the automotive industry, data management allows manufacturers to optimise processes and perform product traceability. Having the appropriate data on vehicles and related parts can also help to reduce costs and make it possible to respond accurately and promptly to quality assurance and recall problems. In the pharmaceutical industry, the 2011 EU Falsified Medicines Directive (FMD) will require individual packs of medicines to carry a unique, machine-readable identifier which will provide traceability from the point of sale back to manufacture. More on these and other topics can be found in our centre page spread feature. Thanks are due to UKIVA members Acrovision, Bytronic Automation, Olmec-UK, Omron, Sick UK and Stemmer Imaging for their contributions to these features.
UKIVA members can offer further advice regarding End of Line Inspection.
EU regulators are introducing a new era of pharmaceutical manufacturing and distribution compliance. The Falsified Medicines Directive (FMD) and similar legislation in other parts of the world are designed to reduce the counterfeiting of pharmaceutical products by stipulating that individual packs of medicines will carry a unique, machine-readable identifier. This item-level serialisation will provide traceability of a pack from the point of sale back to manufacture so that its authenticity can be checked at any point in the supply chain. Machine vision will have a key role to play in this.
Serialisation means that packs must be labelled correctly, the labels verified by machine vision, and all data passed upstream to the appropriate place, all at production line speeds. The process will generate huge quantities of data compared to present levels, and product data will need to be uploaded to a national or international database against which product IDs can be verified. The challenges for vision systems used in serialisation applications are primarily the high-speed inspection of codes, the transfer of data, and handshaking with control hardware on the shop floor.
For some time, traceability and quality control of parts, particularly in the automotive and aerospace industries, has been carried out using 2D Data Matrix Direct Part Marked (DPM) codes. These are normally laser-etched or dot-peened onto the component, providing an almost indestructible code that will survive a life a traditional barcoded label would not. Using vision systems to read these codes, key components such as differential gears, clutches, transmission cases, housings and valve bodies can be traced throughout the production process. In addition, engine components such as pistons, cylinder heads, engine blocks, camshafts and crankshafts can be traced throughout the manufacturing and distribution processes.
Despite the obvious benefits of this ‘cradle to grave’ tracking, factors such as shiny surfaces, curved surfaces, rough finishes and dirt or oil contamination can lead to unreliability and low read rates. However, recent enhancements in code-reading cameras and lighting, with economies of scale driving down pricing, mean that direct part marking and identification are now becoming a more cost-effective and robust technology.
Measurement is a mainstay of automated inspection and has provided the platform for ever faster, more efficient and more accurate quality control. In addition to preventing defective products from reaching the customer, vision measurements can also be directly linked to statistical process control methods to improve product quality, reduce wastage, improve productivity and streamline the process.
Machine vision does not examine the object itself - measurements are made on the image of the object on the sensor. All of the factors that contribute to the quality of that image must be optimised, so careful consideration must be given to every element of the machine vision system, including lenses, illumination, camera type and resolution, image acquisition, measurement algorithms, as well as external factors such as vibrations, electromagnetic interference and heat.
Measurements fall into three categories: 1D, 2D and 3D. 1D measurements are typically used to obtain the positions, distances, or angles of edges that are measured along a line or an arc. 2D measurements provide length and width information and are used for a host of measurements including area, shape, perimeter, the centre of gravity, the quality of surface appearance, edge-based measurements and the presence and location of features.
Pattern matching of an object against a template is also an important part of the 2D armoury. Reading and checking characters and text, and decoding 1D or 2D codes is another key activity. The emergence of many affordable 3D measurement methods provides length, width and height information, allowing the measurement of volume, shape, and surface quality such as indentations, scratches and dents as well as 3D shape matching.
Good accuracy and repeatability of vision-based measurements are of paramount importance. Accuracy is an indication of how close the actual measurement is to the true value. Repeatability shows the closeness of several repeated measurements. A group of measurements could have poor accuracy and repeatability, good repeatability but poor accuracy, or good accuracy but poor repeatability, as well as the desired combination of good accuracy and repeatability.
We’ll take a look in more detail at some machine vision measurements and the factors that affect them in the centre pages. Thanks are due to UKIVA members Bytronic Automation, Clearview Imaging, Multipix Imaging and Stemmer Imaging for their contributions to these features.
Optical character recognition (OCR), verification (OCV) and code reading and verification are major application areas for machine vision. Ensuring alphanumeric codes (e.g. lot details and best-before information), 1D barcodes, 2D data matrix codes and QR codes are correct can be critical for consumer safety and for product identification and traceability. Products can be tagged either by a stick-on label or by information printed directly onto them or the packaging.
OCR tools use pattern matching techniques such as pattern finding and contour detection, since an alphanumeric character is simply a pattern that needs to be recognised. Contour-based tools work well on text with clear backgrounds and can accommodate variations in scale and rotation with no loss in speed. Contrast-based tools provide more robust detection in applications where the contrast is poor or changing. Machine vision OCR algorithms need a fairly complete letter to decipher, especially if the text is structured. Once the character has been detected, OCR systems work by comparing a library of taught models to what is printed. The output from an OCR system is the alphanumeric string that has been read, such as a use-by date. Special consideration must be given to text that is written in a curved pattern.
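As a rough illustration of the compare-against-taught-models step, here is a minimal character-classification sketch using OpenCV template matching; the template dictionary, image sizes and score threshold are all assumptions.

```python
# Minimal character classification by template matching. 'templates' is an
# assumed dictionary mapping character labels to taught greyscale images.
import cv2

def classify_character(char_img, templates, min_score=0.7):
    """Return the best-matching character label, or None if below threshold."""
    best_label, best_score = None, min_score
    for label, template in templates.items():
        # Resize the segmented character to the template size so the
        # single-position correlation below is valid
        candidate = cv2.resize(char_img, (template.shape[1], template.shape[0]))
        score = cv2.matchTemplate(candidate, template, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```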
Pattern-matching techniques are used to locate the code and extract the data. However, to improve reliability, barcodes, 2D data matrix codes and QR codes have built-in redundancy, so the code can still be read successfully even if it sustains a certain amount of damage in a production environment.
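For illustration, here is a minimal code-reading sketch using the open-source pyzbar wrapper around the ZBar library (an assumption; pyzbar handles common 1D barcodes and QR codes, while Data Matrix typically needs a separate library such as pylibdmtx). The file name is a placeholder.

```python
# Decode 1D/2D codes from a greyscale image with pyzbar.
import cv2
from pyzbar.pyzbar import decode

image = cv2.imread("pack_label.png", cv2.IMREAD_GRAYSCALE)
for symbol in decode(image):
    # Each decoded symbol carries its symbology type and payload
    print(symbol.type, symbol.data.decode("utf-8"))
```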
Verifying that a barcode has been printed accurately is very different from simply reading the code itself. A good reading algorithm should be able to read poor-quality codes, but a barcode verification algorithm should grade how well the code is printed. There are several verification standards that cover parameters such as symbol contrast, fixed pattern damage and distortion. Each result is then graded accordingly. Similarly, OCV is a tool used to inspect the print quality and confirm its legibility. The output from an OCV system is usually a pass or fail signal based on whether the text string is correct, as well as the quality, contrast and sharpness of the text.
Particularly for codes and alphanumerics directly marked on a component, there can be challenges in acquiring a suitable image for reading and verification. This may be due to a lack of contrast, a reflective surface, a curved surface or some combination of these. Even for codes written on packaging or labels, there may be problems with contrast or reflections from shiny backgrounds. Therefore, just as in any other machine vision application, correct illumination is of paramount importance.
Machine vision measurements are made in software. For vision systems utilising a smart camera, the measurement software and measurement tools are built into the camera itself. For PC-based systems, there are essentially three main software categories that can be used with single or multiple cameras.
To make actual measurements, pixel values must be converted into real-world values, which means that system calibration is required and the system must be set up to ensure that measurements can be made with the required accuracy, precision and repeatability. For the best repeatability, all of the set-up conditions for the vision system should be recorded.
A universal test chart can be used for quick and convenient system set-up and checking, for focus, system resolution, alignment, and colour balance. Geometric distortions from the lens can usually be corrected in software.
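As an illustration of software distortion correction, here is a minimal OpenCV sketch, assuming the camera matrix and distortion coefficients have already been obtained from a calibration routine (e.g. cv2.calibrateCamera with a chequerboard or test chart); the numeric values shown are placeholders.

```python
# Correct lens distortion in software using pre-computed calibration data.
import cv2
import numpy as np

camera_matrix = np.array([[1400.0, 0.0, 640.0],
                          [0.0, 1400.0, 512.0],
                          [0.0, 0.0, 1.0]])      # illustrative intrinsics
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (assumed)

image = cv2.imread("part.png")
corrected = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("part_corrected.png", corrected)
```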
Special 3D calibration bodies with known reference surfaces and angles allow metric calibration in combination with special software packages. They can be used for the simultaneous calibration of one or more cameras. In addition to metric calibration, a plane fit for alignment of 3D point clouds is possible. This is important for 3D matching and for easy processing of range map images.
It is important to check the accuracy and repeatability of a vision system. One way of doing this is to perform a series of 30 measurements of the same part, with automatic or individual part feeding, and compare the variation in the results with the allowed tolerance. If this is acceptable, it sets a benchmark for future measurements, and many machine vision systems offer extra statistical information such as minimum, maximum, mean, standard deviation, Cp and Cpk of measured values. The stability of the system can be checked by performing measurements with the same equipment, at the same place and by the same operator, but at different times. It is also important to monitor machine vision results periodically to guarantee measuring tool reproducibility; this can be done by making test measurements with a reference object or special calibration body.
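A short sketch of how these statistics might be computed from 30 repeated measurements; the data and tolerance band below are synthetic stand-ins.

```python
# Repeatability and capability figures from repeated measurements of one part.
import numpy as np

measurements = np.random.normal(10.02, 0.005, 30)  # stand-in for 30 real readings
lsl, usl = 9.95, 10.05                             # tolerance band (assumed)

mean, std = measurements.mean(), measurements.std(ddof=1)
cp = (usl - lsl) / (6 * std)                        # process potential
cpk = min(usl - mean, mean - lsl) / (3 * std)       # potential including centring
print(f"mean {mean:.4f}, std dev {std:.4f}, Cp {cp:.2f}, Cpk {cpk:.2f}")
```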
Pattern recognition is hugely important, and most vision toolkits offer some type of pattern matching tool. A senior vision engineer at one UKIVA member comments that at one stage almost every vision application could be addressed using pattern recognition methods. Pattern matching can be used to localise a pattern in a large area, such as for pick and place applications where the camera identifies the object and tells the robot where it is. It can also be used for classification, to decide which object is at a known location. Essentially, pattern matching involves subtracting the image of a part under test from an image of a good part (a ‘golden template’) to highlight any differences, as in the sketch below. Techniques range from methods based on a single template image to learning-based approaches.
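A minimal sketch of the subtraction step, assuming the two images are already registered (real tools align them first); the file names and thresholds are placeholders.

```python
# 'Golden template' comparison: subtract the test image from the template and
# flag pixels that differ by more than a threshold.
import cv2

template = cv2.imread("golden_template.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("part_under_test.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(template, test)                            # per-pixel difference
_, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # assumed threshold
print("defective" if cv2.countNonZero(defects) > 50 else "pass")
```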
Important factors to consider are potential changes in lighting conditions and/or scale and rotation between the template and the parts under test. Some methods utilise a single image for the template, but decision tree and neural network methods produce the template from many images using a learning algorithm. More sophisticated approaches include those where complex appearance models of good examples are created during the training phase to allow for any natural variability in the parts.
Pattern matching in 3D imaging uses the same principle of comparing a 3D golden template image of a perfect sample with 3D images of parts under test, produced using real 3D point clouds. However, the alignment of the two images is more complex, since additional degrees of movement and rotation are possible. Automatic adjustment for position errors or tips and tilts in all three axes (six degrees of freedom) is possible in software, so there is no need for accurate positioning and handling of the test sample. This allows deviations between the template and test sample to be identified in real time.
Image quality has a major influence on the resulting measurements and is dependent on resolution, contrast, depth of field, perspective and distortion. These, in turn, are determined by the choice of system components including cameras, lenses and illumination. Cost is also an important consideration. The best components for the application should be used, but over-specifying them leads to unnecessary costs for no gain.
Since all the components in a machine vision system must be perfectly coordinated, it is essential to make an initial evaluation of the application.
The factors identified in this evaluation help to determine the specification of the vision components needed, but there are also environmental issues that should be taken into account. These include physical constraints on the positioning of components and environmental conditions such as ambient light. The resulting system does not need to be a complicated set-up; it simply needs to be fit for purpose.
With 3D machine vision technology becoming much more widely available, a similar process should be adopted when specifying a system to make 3D measurements. Although 3D systems have become much more affordable in recent years, they are still generally more expensive than 2D systems, and they add more data and more complexity, so they should only be specified when the required measurement cannot be made using 2D methods. With a variety of 3D measurement techniques available, it is also important to choose the method best suited to the application.
The rapid evolution of computing power in embedded, single-board computer systems is providing new, exciting possibilities for vision. Embedded vision systems based on platforms such as NVIDIA® Jetson, Raspberry Pi®, CompuLab and ODROID are the newest variants of intelligent vision and are finding increasing use in applications where space is constrained, cost is an issue and a self-contained vision solution is required.
With embedded systems already controlling many devices commonly used today in consumer, industrial, automotive, medical, commercial and military applications, the principle of using an embedded vision system is particularly attractive. Precisely configured designs are less costly both from a production point of view and in terms of ongoing support and service. Embedded vision is also an obvious platform for large-volume solutions where economies of scale can have a real impact.
For example, the Raspberry Pi 3 has a quad-core CPU which offers a level of processing greater than that available on most laptops not so long ago - and all for around $35. To take advantage of this processing power, many of the leading image-processing libraries are now providing the capability to port to these platforms. A powerful image processing solution can be developed on a PC and then transferred to the embedded system where it will run independently. In addition, there are a variety of software development kits available that will provide interfaces to a wide range of camera types.
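As an illustration of this develop-on-PC, deploy-on-board workflow, here is a minimal OpenCV capture-and-process loop that runs unchanged on a Raspberry Pi; the camera index, threshold and measurement step are assumptions.

```python
# A PC-developed OpenCV inspection loop that can be transferred as-is to an
# embedded board. Values are placeholders for a real application.
import cv2

cap = cv2.VideoCapture(0)        # first attached camera (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        break                    # camera disconnected or stream ended
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    # ... application-specific measurement on 'mask' goes here ...
cap.release()
```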
An embedded vision system essentially utilises any microprocessor-based platform that isn’t a general-purpose computer. Smart cameras contain image capture and processing capabilities within the camera unit itself, while compact, or multi-point imaging systems feature a self-contained unit for image acquisition and processing that can control multiple cameras. With the recent introduction of smart vision sensors as well, there is a real scalable choice of embedded vision solutions and the goals of the application must be used to drive the selection. In our centre page feature, we take a closer look at these different types of ‘embedded’ vision systems and how they can be used. Thanks are due to UKIVA members Alrad Imaging, Baumer, IDS Imaging Development Systems, Multipix Imaging, Sick and Stemmer Imaging for their contributions to these features.
UKIVA members can offer further advice on Embedded Vision Technology.