The Facts About AI

I often make passing comments trying to downplay the hype surrounding Artificial Intelligence. This is because the hype has gotten so out of control that many people believe we are much further along than we actually are. It’s important to re-enter reality if we are to have honest conversations about this. It scares me that so much of our computer science talent is entering a field with such a disparity between expectations and reality. They are in for a rude awakening when asked to do currently impossible things.

The good news is there is probably no threat to your job, and no imminent threat to human civilization. Can things change? Sure, as technology evolves who knows what may be possible. However, as with the levitating cars we expected to have by now, it is important to distinguish what has actually been demonstrated from what hasn’t.

All we are doing now is simply rehashing and refining decades-old ideas (neural nets) for use in modern computers. Neural nets sure sound intelligent, but they aren’t. None of it comes close to ‘intelligence’ unless you redefine the term to mean smart, specialized algorithms and applications to do things like play games, recognize patterns, or retrieve stored knowledge. Are those algorithms intelligent? Well, is a mold intelligent? Are bacteria intelligent? Both are adaptive. It gets to be a complicated question.

Specialized is the key word. These are specialized machine learning applications that have made some advances. This includes pattern recognition (voice/image), playing Go and Chess, or IBM’s Watson in the realm of expert systems. Some people might include self-driving cars because they seem intelligent. However, in all these specialized cases, the systems are just operating off sensors, algorithms, and trained neural nets; there really isn’t any intelligence. There might be some level of machine learning and correction, but again it’s not close to what most people would classify as ‘intelligent’. Certainly there is no self-awareness or deductive reasoning at a level anywhere close to a human.
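To make concrete how unmysterious this machinery is, here is a minimal sketch of a perceptron, the 1950s-era building block that modern neural nets rehash and refine. It "learns" the logical AND function by nudging weights toward fewer errors. The function and variable names are illustrative, not from any particular library; the point is that the whole thing is just arithmetic and error correction, with nothing a person would call understanding.

```python
# A single perceptron (Rosenblatt, 1958) learning the logical AND function.
# Training is nothing but repeated arithmetic adjustments -- adaptive,
# like the mold or the bacteria, but not intelligent.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Fit two weights and a bias to (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum crosses zero
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            # Nudge weights in the direction that reduces the error
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The AND truth table as "training data"
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Scale this same error-nudging loop up to millions of weights and you get the image and voice recognizers in the headlines. The scale changes; the nature of the computation does not.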

Artificial General Intelligence (AGI) has become the term to represent a more robust, theoretical AI that might approximate human intelligence. For better or worse, this has not been shown to be possible. We can reason it may be possible, but we really have no way of knowing that because we haven’t come close. There is no construct that has been shown that would absolutely scale to a self-aware AI.

In conclusion, I’ve heard non-industry people say it is a matter of time, or even that it’s a matter of infrastructure. The latter is certainly false; it is a matter of breakthroughs that may or may not ever happen. The former is a prophecy; who knows what the future holds. We didn’t get the levitating cars, so we may not get general AI in our lifetimes, or ever. So as marketers hype AI, remember that the term has been redefined and that we are very far from what you see in the movies. There is no imminent risk to mankind. Your job, depending on what it is, may or may not be at risk from increasingly smarter bots or automation, but if that were the case you’d probably already be feeling, or have been impacted by, that displacement pressure.

If you want to postulate about an emerging technology, I would suggest quantum computers, which at least have been practically demonstrated on a small scale and show a plausible path forward, unlike AGI.

Agree, disagree? Comment below.