OpenAI trained ChatGPT on about 45 terabytes of text from the internet. These days, it feels like roughly half of all data on the internet is Twitter posts with screenshots showing off ChatGPT’s output. Throw in DALL·E 2-generated art, and it’s fair to say that AI is having a moment.
A quick look at Google Trends for the term shows just how wild the hype has gotten over the last 12 months.
But while the term artificial intelligence is getting tossed around like confetti at a wedding, perhaps it’s time to step back and ask, “What is artificial intelligence?”
It might seem like that’s an easy enough question to answer. However, defining it can get pretty murky and lead you down endless philosophical rabbit holes. So let’s try to keep it as straightforward as possible.
What is artificial intelligence (AI)?
What artificial intelligence (AI) is depends on who you ask. It means a lot of different things to different people.
However, most of us can agree that AI is:
- A field of computer science
- A way to synthesize human intelligence in machines
- A marketing term for generating hype
Let’s expand the second definition slightly because it’s where most of the controversy lies. If we can carefully lay out what AI does and think about its different forms, we can start to get an idea of what it is and is not.
First, we should make clear that AI isn’t a single technology, even though some people refer to it as such.
Instead, the term refers to a collection of different software and hardware components and practices that support:
- Machine learning
- Computer vision
- Natural language understanding, generation, and processing
- Speech recognition
- Automated planning and scheduling
OK, so now that we know some of the broad applications of AI technologies, let’s talk about its different forms and how it processes data so we can arrive at a more solid definition.
Different types of AI
Another way to think about AI is by focusing on how it approaches the problem of synthesizing intelligence.
Weak AI: Weak AI specializes in becoming good at one narrow task. For example, chess, facial recognition, answering questions, etc.
Strong AI: Strong AI, also referred to as artificial general intelligence, is more akin to a human mind. This theoretical AI would be able to respond to and understand a wide range of stimuli and autonomously think, plan, and perhaps even have some level of self-awareness.
Super AI: Super AI is another theoretical type of AI that would far exceed human intelligence. Right now, this is the stuff of science fiction or Nick Bostrom’s book Superintelligence.
Despite promises that strong AI is perennially “around the corner,” we’re a long way from machines capable of human-level general intelligence. However, AI can already outwork us at a wide variety of narrow tasks, such as:
- Digital advertising optimization
- Speech recognition
- And more.
While some technologists suggest that a combination of computer vision, machine learning, natural language processing, and similar technologies will eventually produce something approaching human consciousness, that remains magical thinking. So far, we haven’t been able to create an embodied system that can even match the intelligence of an insect.
Additionally, we can’t even agree more generally as a culture about consciousness. It’s called the hard problem of consciousness for a reason, and if we can’t agree about how intelligence emerged from matter, it’s challenging to think about how we could start synthesizing it in robots.
AI categorized by data processing methods
Finally, we can also categorize AI based on how it processes data. Perhaps the best way to explain this is by using a famous analogy where the AI is a professional poker player.
Reactive AI: Reactive AI makes decisions using real-time data. In the poker analogy, the AI makes all of its decisions based on what is happening in the current hand.
Limited Memory AI: Limited memory AI makes decisions based on historical data. As a poker player, it will consider its hand history plus the history of its competitors.
Theory of Mind AI: Theory of mind AI makes decisions based on inferences about human goals, objectives, desires, and intent. In poker, that could involve reading a player’s behavior for tells.
Self-Aware AI: Self-aware AI would have a level of consciousness and awareness similar to a human being. In a poker context, it could think about whether playing poker was the best use of its time and resources and even decide to quit the game to pursue other goals that it found more exciting or meaningful.
Reactive and Limited Memory AI are commonplace. However, Theory of Mind AI and Self-Aware AI are some way off.
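The first two categories are easy to make concrete in code. Here is a minimal, illustrative sketch of the poker analogy (the class names, the `hand_strength` observation field, and all thresholds are invented for this example, not taken from any real system): a reactive agent decides from the current observation alone, while a limited-memory agent also consults a bounded window of past hands.

```python
from collections import deque


class ReactiveAgent:
    """Decides using only the current observation -- no memory at all."""

    def act(self, observation):
        # Purely reactive: respond to the hand in front of it right now.
        return "raise" if observation["hand_strength"] > 0.7 else "fold"


class LimitedMemoryAgent:
    """Decides using the current observation plus a window of past hands."""

    def __init__(self, window=50):
        # A bounded deque -- hence "limited" memory.
        self.history = deque(maxlen=window)

    def act(self, observation):
        self.history.append(observation)
        # Adjust the raise threshold based on how strong recent hands have been.
        avg = sum(h["hand_strength"] for h in self.history) / len(self.history)
        threshold = 0.7 if avg < 0.5 else 0.8
        return "raise" if observation["hand_strength"] > threshold else "fold"
```

The design difference is the whole point: the reactive agent’s policy is a pure function of the current input, while the limited-memory agent’s behavior shifts as its (bounded) history accumulates.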
Interesting AI applications
Although human-like general intelligence seems pretty far away, that’s not to suggest that there haven’t been incredible advances in AI over the last few years. On the contrary, the last decade has seen the emergence of software that uses AI to improve our lives and help us perform tasks more quickly, cheaper, or with much higher accuracy.
Here are a few applications and startups doing interesting work.
ChatGPT is a type of generative AI. That means that it can generate novel content, like text, images, blog posts, jokes, programming code, and more. People have gotten fairly excited about ChatGPT’s ability to do things and imagine that just around the corner, it will produce the next Anna Karenina or Don Quixote.
With GPT-4 around the corner, OpenAI should add impressive capabilities to its products. For starters, it could get better at summarizing texts and gain a longer memory so that it can refer back to previous passages.
Aurora Solar allows users to generate 3D models in seconds, regardless of experience levels. Its software is built for the solar industry, and the company is valued at about $4 billion.
Swedish company Univrses is using deep learning and computer vision to collect visual data about urban centers so that we can tackle issues that arise from urbanization, such as planning and developing functional infrastructure.
The applications mentioned above are all interesting uses of AI. However, in recent years, we’ve seen the term used quite liberally to describe computer applications that don’t meet the criteria.
So, how did everything become AI?
Well, there was a time a few years ago when it seemed like every startup that was getting any attention had something to do with AI or blockchain. While the buzz around blockchain has died down — it turns out that append-only databases have limited use cases — AI has continued to cause a buzz.
So, are companies slapping AI on their products because it’s a buzzword? Well, in many cases, yes, that is entirely true. Many apps use automation to perform specific tasks and will try to tell you they are cutting-edge AI tools. Additionally, many products say they are “AI-powered,” which can be slightly misleading.
Why defining AI matters
A clear definition of what is or isn’t AI is essential for consumers and business owners who want to buy the right products.
Additionally, with the Artificial Intelligence Regulation, better known as the AI Act, EU lawmakers and member states are thrashing out an agreed-upon definition to determine what sort of oversight and protection the Act will afford European citizens.
Finally, some fairly respected voices in the AI world think we shouldn’t even use the word intelligence when discussing modern AI tools. Michael I. Jordan, a leading AI and ML researcher, says that, at best, AI machines have “low-level pattern recognition skills.” Still, they are not intelligent because they can’t understand their work or exhibit creativity.
So, after all of that, what is AI?
So, we’ve come full circle. One roadblock that stops us from agreeing on what AI actually is: we’re not always sure what intelligence itself is.
When we think about human intelligence, it’s a combination of several things, like the ability to reason, think, plan, understand, learn, solve problems, grapple with complex concepts, think abstract thoughts, and be creative.
However, we already have difficulties defining intelligence in other species. Intelligence is slippery to pin down, so we end up having a lot of trouble strictly defining AI.
There are two significant areas where there is a lot of public confusion about AI.
- A) In the case of ChatGPT, people believe that it understands language.
- B) People mistake (or mislabel) automation for AI. While automation is widely used across the banking, e-commerce, and finance sectors, it’s deterministic, programmed, and doesn’t learn the way an intelligent entity would.
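The automation-versus-learning distinction is easiest to see side by side. Below is a deliberately toy contrast (the fraud-check scenario, thresholds, and training data are all invented for illustration): the rule-based function’s behavior is fixed by whoever wrote the rules, while the perceptron-style learner derives its decision boundary from labeled examples.

```python
# Rule-based automation: every behavior is explicitly programmed.
def fraud_check_rules(amount, country):
    # Deterministic: the same input always triggers the same rule.
    if amount > 10_000:
        return "flag"
    if country not in {"US", "SE", "DE"}:
        return "flag"
    return "approve"


# A learning system: behavior comes from data, not hand-written rules.
# Minimal perceptron-style sketch over a single normalized feature.
def train_classifier(examples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for amount, label in examples:  # label: 1 = fraud, 0 = legitimate
            pred = 1 if w * amount + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the labeled answer.
            w += lr * err * amount
            b += lr * err
    return lambda amount: "flag" if w * amount + b > 0 else "approve"


classify = train_classifier([(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)])
```

Change the training examples and `classify` changes its behavior with no new code, which is precisely what the rule-based version cannot do.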
ChatGPT has caused a massive spike in AI interest. However, it has also highlighted fundamental misunderstandings about AI and whether it understands the tasks it completes.
Overall, AI is here to help us achieve better outcomes. It can outperform us in narrow tasks because of the speed with which it can process data via sophisticated models.
However, perhaps we should remember that it’s meant to be a synthesis of human intelligence. That means it doesn’t have to be 1:1 to be considered “intelligent.” But similarly, if your product uses rule-based automation and cannot make decisions on its own, it might be time to stop diluting the term.