MACHINE VISION
HAVING A VISION FOR AI AND DEEP LEARNING
Advances in deep learning and AI are resulting in these technologies being increasingly utilised within machine vision solutions. Control Engineering Europe sought advice about how end users can ensure that they are able to implement successful AI-based machine vision applications.
Neil Sandhu, UK product manager for Imaging, Measurement & Ranging at SICK, believes that AI/deep learning machine vision will result in greater production flexibility because it has the potential to retrain machines, adapt to changes in processes and respond to a high variety of products – all of which are, of course, key elements of Industry 4.0. “Deep learning technologies should be especially attractive to end users because they can cut out tedious and lengthy programming time and costs, especially for more complex tasks,” he said. “This offers the potential to automate machine vision tasks that have previously been too difficult, costly or time-consuming.”
However, Sandhu goes on to warn that deep learning should not be considered as a silver bullet for every application. He believes that it is suited to harder-to-solve inspections where there are a greater number of natural variations from a standard, which would be laborious or even impossible to solve one at a time.
Ruben Ferraz, field product marketing manager for Deep Learning at Cognex, pointed out that, as with any new technology, there are considerations and trade-offs, so the advice is to set proper expectations for what deep learning can bring to any project. “It is important to understand these trade-offs at the outset,” he said.
With any deep learning project, there are four core job roles that are needed for resource planning:
1. A vision developer, who implements the solution and optimises lighting and image formation;
2. A quality expert, who analyses and grades images;
3. An image labeller; and
4. A data collector, who records and organises all information, including images, grades, labels and metadata.
While it is possible for one employee to cover more than one of these roles, it is helpful to be aware upfront of the types of expertise needed. It is also worth noting that any deep learning initiative will require a powerful Windows-based PC with a graphics processing unit (GPU) installed.
Ferraz advises that the best route forward is to pilot small, manageable projects in a sensible, phased approach, allowing automation teams to set themselves up for long-term success with deep learning image analysis. “Pick a project with a clear payback that cannot easily be solved with traditional rule-based vision, but which is not so difficult that it never makes it into production. Focus on a core need and develop both a core competency and an understanding of what deep learning can and cannot do in a factory automation setting. Deep learning pilot projects should have two primary goals: evaluating its broader utility for a more holistic automation strategy, and automating an inspection or verification process that is either not done at all or done manually,” he said.
Change is coming
Due to complexity, variability and the necessity to distinguish between very small differences, some inspection applications have been impossible to achieve with traditional machine vision systems. However, things are now changing, according to Damir Dolar, director of embedded engineering at FRAMOS. He said: “Deep learning can cope with complexity and variability. It combines the flexibility of human inspections with the consistency and speed of a computer. Experience accumulated in the past can now be applied and used to train deep learning systems.
“The ability of AI to process images is well-documented and most manufacturers will already have large databases of past material that could easily be used by deep learning algorithms for initial training. Once the algorithm has been trained, it can be deployed, maintained and improved over time just like any other piece of industrial equipment.”

According to Dolar, AI and deep learning are useful tools in any machine vision application that currently has performance limitations. With deep learning it is possible to develop high-performance solutions for difficult vision problems, including, for example, automated quality control for goods and products that could not be monitored with current technologies. “In very simple terms, a deep learning algorithm could easily automate quality control by learning what a ‘good’ part looks like and rejecting the rest,” he continued. “More broadly, and if it is not limited to a specific visual inspection system, deep learning algorithms could be used to learn which part of the manufacturing process causes a set of defects, and adjust that process to remove the defects from subsequent production. Such systems are able to locate parts despite a change in appearance, distinguish between functional and cosmetic defects, and classify features while tolerating differences in size, appearance, orientation and naturally occurring variations.
“The main advantage of such an application is consistency of product quality, because the algorithm operates 24/7 and maintains the same level of quality at all times. The system identifies every out-of-tolerance defect, the operation is faster, defects are identified in seconds, and it supports high-speed applications.”
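To make Dolar’s “learn what a good part looks like and reject the rest” idea concrete, the following is a minimal sketch of training a binary good/defect image classifier with PyTorch and torchvision. The folder layout, class names, network choice and training settings are illustrative assumptions, not details from any of the vendors quoted here.

```python
# Minimal sketch: train a binary good/defect image classifier.
# Assumed (not from the article): images organised on disk as
#   data/train/good/*.png and data/train/defect/*.png
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Resize to the network's expected input and normalise with ImageNet statistics.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder infers the two classes ("defect", "good") from the folder names.
train_set = datasets.ImageFolder("data/train", transform=tfm)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer for two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a handful of passes is often enough for a small pilot
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

torch.save(model, "good_bad_resnet18.pt")  # illustrative file name
```

Starting from a pretrained backbone (transfer learning) is one common way to keep the required data set small, which fits the advice elsewhere in this article to begin with a reduced problem.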
The best approach
Dr Robert-Alexander Windberger, an AI specialist at IDS, believes that deep learning/AI machine vision solutions are best approached iteratively and empirically. “It can be hard, or even impossible, to predict the accuracy of a neural network analytically, or the amount and quality of data required,” he said. “However, one of the big advantages of deep learning is that there is no need to cover the full complexity of a scenario with sets of rules; instead, the model can be improved by example online. While the evaluation of deep learning models on data sets helps to ensure a model’s convergence and to benchmark its accuracy, many issues only arise in a physical test setup. So, the goal is to get a first model working in an application-related setup as early as possible.”
The exact starting point and the iteration step size will depend on the user’s experience. Windberger advises that, when in doubt, it is best to reduce the problem to an absolute minimum: in a quality assurance scenario this could be a binary statement, ‘IO’ (OK) and ‘NIO’ (not OK), instead of covering all occurring errors and locating them at once. “This may yield a fair gain if a major workload can be lifted from manual quality control. Further classes can be subsequently added in a controlled fashion – one at a time – and immediately tested.
“The reduced complexity accordingly requires only a reduced data set size, which can again be extended step-by-step. It is possible that data biases will become evident during the evaluation, and it would be unfortunate if this happened at the end of an expensive data acquisition campaign. By having an application-related setup in place early in the process, data acquisition will be simplified. Through frequent evaluation, data can be obtained more purposefully: if certain image contents are found to be misclassified or overlooked, it might help to add these contents to the data set for retraining,” continued Windberger.
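Sketched below, under assumed file-system conventions and with a placeholder predict() function, is the kind of feedback loop Windberger describes: images that the current model misclassifies in the physical test setup are added to the training data for the next retraining round.

```python
# Sketch of an iterative data collection loop: evaluate on images from the
# physical setup and copy the misclassified ones into the training set.
# The directory layout, class names and predict() helper are assumptions.
import shutil
from pathlib import Path

CLASSES = ("IO", "NIO")  # binary statement: part in order / not in order

def predict(image_path: Path) -> str:
    """Placeholder for running the current model on one image."""
    raise NotImplementedError

def collect_for_retraining(eval_dir: Path, train_dir: Path) -> int:
    """Copy wrongly classified evaluation images into the training set."""
    added = 0
    for true_label in CLASSES:
        for image_path in sorted((eval_dir / true_label).glob("*.png")):
            if predict(image_path) != true_label:
                target = train_dir / true_label / image_path.name
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy(image_path, target)
                added += 1
    return added

# After each evaluation round, retrain only if new examples were found, e.g.:
# if collect_for_retraining(Path("eval"), Path("data/train")) > 0: retrain()
```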
He believes that deep learning can deliver a specified result for certification but says it can also be implemented as a continuous process. “The solution can be steadily improved and adapted by topic experts, who use their knowledge to feed the algorithm handpicked data.”
Yonatan Hyatt, CTO and co-founder of Inspekto, says that AI and deep learning can drive and hyper-optimise hardware for the specific need at hand. He said: “Instead of using AI as a software stack in a larger-scope project with many hardware layers, I would urge end users to demand products that already include the required hardware and AI-based smart software. This will save time – both in the project planning phase and later in the operative stage – allowing for self-adjustment instead of repetitive iteration stages with the expert integrator.”
New application areas
Control Engineering Europe then went on to ask what type of new machine vision applications we can expect to see when AI and deep learning are properly utilised. Sandhu said: “Machine vision has always operated by matching examples, but deep learning takes this to the next level by using the knowledge of hundreds of experiences to deliver a judgement that classifies a product, for example, as belonging to a ‘good’ or ‘bad’ category.
“Deep learning software is developed using artificial neural networks that learn by example, in the same way that humans do, to model what a good part should look like and what variations can be tolerated. There is no need to select from the conventional toolbox of algorithms used to identify defects, such as pattern finding or edge detection. Deep learning cameras can automatically detect, verify, classify and locate ‘trained’ objects or features by analysing the complete ‘taught-in’ image library. As a result, the solutions most likely to emerge with deep learning vision will be specific and customised on a case-by-case basis.”
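As a hedged illustration of the deployed stage Sandhu describes, the sketch below classifies a single captured image with a previously trained good/defect model (for example, the one outlined earlier); the model file name, image path and class order are assumptions, not any vendor’s API.

```python
# Sketch: classify one captured image with a previously trained good/defect model.
# The file "good_bad_resnet18.pt" (saved with torch.save(model)) is an assumption.
import torch
from torchvision import transforms
from PIL import Image

CLASSES = ("defect", "good")  # must match the class order used during training

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = torch.load("good_bad_resnet18.pt", map_location="cpu", weights_only=False)
model.eval()

def classify(path: str) -> str:
    """Return 'good' or 'defect' for a single image file."""
    batch = tfm(Image.open(path).convert("RGB")).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        probabilities = torch.softmax(model(batch), dim=1)[0]
    return CLASSES[int(probabilities.argmax())]

# print(classify("captures/part_0001.png"))  # illustrative path
```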
A SICK pilot project has been operating in the timber industry to optimise the cutting process in a sawmill, based on deep learning recognition of the annual growth rings and other features in the lumber, such as knots in the wood. Using the technology, the sawmill has been able to make the best use of each log and avoid waste, while improving the overall product quality.
In the automotive industry, the complexity of all the potential creases or flaws in a leather car seat, checked for quality following ironing, provides a good example of the degree of variation in surface features that can be solved by deep learning vision. Here, systems can be trained on hundreds of images to make a judgement on whether each subsequent example is a pass or fail.
Qualitative problems
Windberger believes that qualitative problems are more suited to a deep learning approach than quantitative problems. Explaining further, he said: “If a trained employee makes a qualitative decision within a few seconds just by looking at an object, chances are that deep learning might be worth considering. If this employee had to use a caliper to come to that decision, then other methods might be more promising.”
Windberger argues that, while these qualitative decisions may seem to be the simpler ones, the fact that many companies still have these decisions embedded as manual processes suggests that this is not the case. “These processes often involve repetitive, monotonous tasks and frequently suffer from varying quality standards over time or across company subsidiaries.” The reasons for these manual processes persisting include, among others, a high object variety, as in the medical or food industries, or undefinable backgrounds and environments, as in retail or logistics. Moreover, decision criteria can be difficult to quantify and cover with a set of rules, for example where aesthetic appearance is relevant, as in the wood industry. If deep learning can be implemented in a continuous improvement cycle, it can be an adaptable tool which can also cope with time-varying objects, such as annual variations in fruits, vegetables, weeds and pests in the agricultural industry.
“Where rule-based machine vision has not been attempted, or has reached its limits due to one of the above reasons, there is high potential for deep learning algorithms to support employees and drive forward automation.” Windberger stresses that the different machine vision approaches should not be viewed as competition, but rather as complementary. Often a qualitative decision hampers the implementation of a quantitative image processing chain, such as picking the right measurement algorithm depending on the item under investigation, or finding the right area of interest to apply the measurement to. In these cases, access to the full machine vision toolkit, including deep learning, allows for powerful combinations.
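One way such a combination might look in practice is sketched below: a deep learning model locates the area of interest and a conventional rule-based measurement then runs on that region. The locate_roi() detector and the pixel-based width measurement are illustrative assumptions, not a description of any specific product.

```python
# Sketch: deep learning finds the region of interest, rule-based vision measures it.
# locate_roi() stands in for any trained detector and is an assumption.
import cv2
import numpy as np

def locate_roi(image: np.ndarray) -> tuple[int, int, int, int]:
    """Placeholder: a trained detector returning an (x, y, width, height) box."""
    raise NotImplementedError

def measure_feature_width(image: np.ndarray) -> float:
    """Rule-based step: edge detection and a simple width measurement in pixels."""
    x, y, w, h = locate_roi(image)            # deep learning supplies the ROI
    roi = image[y:y + h, x:x + w]
    grey = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)          # classical edge detection
    cols = np.where(edges.any(axis=0))[0]     # columns containing an edge pixel
    return float(cols.max() - cols.min()) if cols.size else 0.0

# frame = cv2.imread("captures/part_0001.png")
# print(measure_feature_width(frame))
```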