AI doesn't eat the world

I work at the intersection of cybersecurity and machine learning: I use a variety of neural network architectures and machine learning techniques to develop new ways of detecting malware, and I have worked on a number of other machine learning and AI projects.
And we have nothing to worry about.
We are about to enter a new valley of frustration with AI technologies. The explosion in neural networks that led to self-driving cars, autonomous drones, and other modern AI applications rests on two key developments: more complex, biologically inspired network architectures, and GPUs.
Today's networks, like the convolutional networks that are now ubiquitous in deep learning applications, were originally inspired by the structure of the visual system and first applied to image recognition. The early CNNs showed a dramatic jump in performance over the methods people had been using for character recognition, most notably handwritten digit recognition on the MNIST data set. The architectures were certainly novel, but the techniques underneath them were not; they represented an evolution of existing neural network practice, not the revolution they appeared to be.
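To make that concrete, here is a minimal sketch of the kind of small convolutional network described above, written in PyTorch purely for illustration. The framework and the layer sizes are my own choices for the example, not a faithful reproduction of the early MNIST models.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A tiny convolutional network for 28x28 grayscale digits (MNIST-style)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 14x14 -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Sanity check with a fake batch of eight digit images.
if __name__ == "__main__":
    model = SmallCNN()
    logits = model(torch.randn(8, 1, 28, 28))
    print(logits.shape)  # torch.Size([8, 10])
```

Nothing in that sketch would have surprised a researcher in the 1990s; it is the same convolve-pool-classify pattern, just cheaper to train today.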
But what really enabled these deep, complex architectures was computational power. Neural models are fast once trained, but training them was the hard part: backpropagating error gradients through many layers is expensive and slow, and the kinds of deep networks that convolutional models rely on used to take a very long time to train. Once graphics processing units and the associated development tools such as CUDA became genuinely usable and affordable, that kind of training suddenly became practical. Networks that took days to train on a CPU could be trained in hours.
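In modern frameworks the switch is almost trivial. A rough sketch of the comparison, again in PyTorch as an illustrative choice (the actual speedup depends entirely on your hardware and model):

```python
import time
import torch
import torch.nn as nn

# Pick the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(784, 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, 10),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A fake batch standing in for flattened 28x28 images.
x = torch.randn(4096, 784, device=device)
y = torch.randint(0, 10, (4096,), device=device)

start = time.perf_counter()
for _ in range(100):                  # 100 training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                   # backpropagation: the expensive part
    optimizer.step()
if device.type == "cuda":
    torch.cuda.synchronize()          # wait for queued GPU work before timing
print(f"{device}: {time.perf_counter() - start:.2f}s for 100 steps")
```

Run it on a CPU and then on a GPU and the gap speaks for itself; scale the model up and "days versus hours" stops being a figure of speech.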
But we have started to hit the end of the road with AI applications. The obvious problems have been tackled, at least the ones that lend themselves to neural-network-based architectures, and tackled reasonably successfully, though we have certainly had some significant failures, such as Uber's self-driving car fiasco and recidivism prediction systems that seem, above all else, to have simply reproduced their training sets.
The simple fact is that, yes, you can train these kinds of systems to do amazing things if you have enormous amounts of data and the resources of a Google, Facebook, or Netflix. If you don't, it's not that easy. Choosing unbiased data is harder than people think, and so is training a model to pick up the features you actually care about rather than random, meaningless ones.
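A toy illustration of that failure mode, on entirely synthetic data with scikit-learn chosen only for brevity: a feature that accidentally encodes the label in the training set will dominate the model, then collapse as soon as the shortcut disappears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_leaks_label):
    """One weakly informative feature plus one that may leak the label."""
    y = rng.integers(0, 2, n)
    informative = y + rng.normal(0, 1.5, n)
    if spurious_leaks_label:
        spurious = y + rng.normal(0, 0.1, n)   # near-perfect proxy for the label
    else:
        spurious = rng.normal(0, 1.0, n)       # pure noise: the shortcut is gone
    return np.column_stack([informative, spurious]), y

X_train, y_train = make_data(5000, spurious_leaks_label=True)
X_test, y_test = make_data(5000, spurious_leaks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # looks excellent
print("test accuracy: ", model.score(X_test, y_test))    # drops back toward chance
```

Real data sets hide their shortcuts far better than this toy does, which is exactly why the big players' data advantage matters so much.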
We don't have another GPU revolution on the horizon. We will certainly keep developing new algorithms, but we won't have a shiny new engine to run them on. We are already taxing our GPU clusters just as we once taxed our CPU clusters, and running into the same kinds of limits. And we still have a great deal of work to do to understand the implications of these kinds of systems and what we can do to protect them from interference.
But general intelligence? Job replacement with AI? Forget about it. Not with the tools we have today.