Publications, books and careful online research can provide a wealth of information about Artificial Intelligence and how it differs from Machine Learning and Deep Learning. Nevertheless, laypeople and even the media often confuse these three related but distinct concepts. In this part of a special series of articles by SIM-CI, Tijn van der Zant, Artificial Intelligence Lead at SIM-CI, talks about what Deep Learning is, his personal experience as a pioneering Deep Learning researcher, and the tremendous impact this technology has on our world now and will have in the future.

ON DEEP LEARNING

Tijn van der Zant has no trouble explaining Artificial Intelligence to anyone who asks. “Sometimes I explain what I’m doing like this: if you see (artificial) intelligence in science fiction movies, well, that’s what I’m working on.” He goes on to say that, first and foremost, Artificial Intelligence is quite broad. “It is the encompassing field; there’s a lot of stuff in AI.” One of the subcategories within AI is Machine Learning. Machine Learning involves processes that require relatively large amounts of data and algorithms to predict specific outcomes, such as recognising spam mail or predicting traffic flow: in short, processes that are quite smart but not without limitations. Deep Learning, in turn, is a subcategory of Machine Learning, but one which, in practice, goes “deeper.”
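To make that distinction concrete, here is a minimal sketch of the kind of “classic” Machine Learning task Tijn mentions: a spam classifier that learns to predict a label from examples. The toy messages, the scikit-learn pipeline and the choice of a Naive Bayes model are illustrative assumptions, not details from SIM-CI’s work.

```python
# Minimal sketch of a classic Machine Learning task: spam recognition.
# The toy messages and the model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE prize now, click here",   # spam
    "Cheap loans, limited time offer!!!", # spam
    "Meeting moved to 3pm tomorrow",      # ham
    "Can you review my draft report?",    # ham
]
labels = ["spam", "spam", "ham", "ham"]

# A pipeline: turn raw text into word-frequency features,
# then fit a Naive Bayes classifier on those features.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click now for a free offer"]))  # likely 'spam'
```

The point of the sketch is the shape of the approach: a human chooses the features and the algorithm, and the data does the rest. This is exactly where Deep Learning “goes deeper,” learning the features themselves.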
To illustrate Deep Learning in practice, Tijn recalls a brief from a client this year, a supplier for car factories, who required what was called “error detection” in car paint, “error” in this case meaning scratches, dents or scuffs. Put simply, Deep Learning algorithms can “see” and “learn” these irregularities and identify them, which in turn informs the choice of the best repair option. “I got data gathered from a couple of hundred cars, in the form of three-fold scans which, when combined, result in a recognisable image of the affected surface.” Using visual inspection alone, people would mislabel these quite often, leading to an error margin of 20%. This margin of error adversely affects the decision about which repair option should be applied. The entire project was to last three months. Implementing Deep Learning on the data took only three days and reduced the error margin to 3%, an impressive 85% improvement.

That said, the process wasn’t without a hitch. Two of those three months were devoted to what he calls “debugging the human component.” Tijn says, “For one, because of a shortage of digital storage space, the company deleted all the original data.” Not only that, the images he was given had already been tampered with using software to make them look ‘nicer.’ This disrupted the process to such an extent that he spent several weeks applying the algorithm without success. Fortunately, after a thorough enquiry, the person who had processed the images managed to produce the raw originals from his hard drive, and Tijn could proceed and complete his brief.

It is this inherently data-driven nature of Deep Learning that requires such heavy emphasis on meticulous data gathering. Seasoned Deep Learning experts know that as much as two-thirds of a project’s time can be spent on “pre-processing” the data, and they know best how to shorten or avoid this time-consuming stage.
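For a sense of what such an error-detection model can look like under the hood, below is a minimal sketch of a small convolutional network that classifies an image of a painted surface as intact, scratched, dented or scuffed. The architecture, the 64×64 input size, the PyTorch framework and the random stand-in data are all assumptions made for illustration; they are not SIM-CI’s actual model or data.

```python
# Minimal sketch of a Deep Learning model for surface "error detection":
# a small convolutional network that classifies an image of a painted
# surface as intact, scratched, dented or scuffed. Architecture, input
# size and random stand-in data are illustrative assumptions only.
import torch
import torch.nn as nn

CLASSES = ["intact", "scratch", "dent", "scuff"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 channels, e.g. combined scans
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),       # assumes 64x64 input images
)

# Stand-in batch: 8 random 64x64 three-channel "scans" with random labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))

# One training step: cross-entropy loss plus a gradient update.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(CLASSES[model(images[:1]).argmax(dim=1).item()])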