Image
An AI generated image of a humanoid robot staring at a futuristic city with text: "What IS Artificial Intelligence?"

Engineers working in machine learning and AI offer a crash course in the basic concepts and buzzwords that have moved from the lab to everyday life.


It’s tempting to think that the artificial intelligence revolution is coming — for good or ill — and that AI will soon be baked into every facet of our lives. With generative AI tools suddenly available to anyone and seemingly every company scrambling to leverage AI for their business, it can feel like the AI-dominated future is just over the horizon.

The truth is, that future is already here. Most of us just didn’t notice.

Every time you unlock your smartphone or computer with a face scan or fingerprint. Every time your car alerts you that you’re straying from your lane or automatically adjusts your cruise control speed. Every time you ask Siri for directions or Alexa to turn on some music. Every time you start typing in the Google search box and suggestions or the outright answer to your question appear. Every time Netflix recommends what you should watch next. 

All driven by AI. And all a regular part of most people’s days.

But what is “artificial intelligence”? What about “machine learning” and “algorithms”? How are they different and how do they work?

We asked two of the many Georgia Tech engineers working in these areas to help us understand the basic concepts so we’re all better prepared for the AI future — er, present.


‘Artificial Intelligence’ Defined

Not long ago, Yao Xie was talking to a group of 8- and 9-year-olds about AI and asked them to explain it to her. She was surprised at their insights, which offered a good outline of the basics.

“They said building algorithms, or methods, that can mimic how the human brain functions or how human intelligence functions,” said Xie, Coca-Cola Foundation Chair and professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE). “They summarized it very well: trying to mimic human intelligence — all the way from something very simple, like adding numbers, to something super sophisticated, like understanding the context of a prompt and generating images.”

Along with that, Xie said she would add dimensions of speed and scale. AI can perform computations or produce results much more quickly than humans, and the speed and power of AI can increase as computational power grows.

Justin Romberg put it this way: AI “is when a computer or other automated agent makes a decision that a human could make but does so without human input.” 

Image
An AI generated image of a futuristic circuitboard concept in blues and yellows

Romberg is the Schlumberger Professor in the School of Electrical and Computer Engineering (ECE) and senior associate director of Georgia Tech’s Center for Machine Learning. He noted there’s no set definition of the term “artificial intelligence,” and most researchers decide case by case whether something should be considered AI.

Romberg said the AI decision-making process, at its core, is just like any other calculation that a computer makes. Some combination of data is fed into the system, there are constraints that the algorithm or device operates under, and a result is produced.

And this is where engineering and science bend a bit toward philosophy.

“What you can’t escape is that, ultimately, everything we call an AI is really just a very concrete computational algorithm that takes in some input and spits out some output,” Romberg said. “The real question is, is that what our brains do?”


AI vs. Machine Learning

For the non-experts among us, these terms can be conflated, sometimes used together or interchangeably. They are different concepts, however.

“Machine learning refers to a family of techniques or an entire discipline on how you learn from data,” Romberg said. “Obviously, learning from data is a big part of artificial intelligence, but there are other things you might call artificial intelligence that don’t learn.”

For example, machines that play chess might be considered AI, but these systems aren’t really learning from data. Rather, they’re very good at rapidly exploring a whole range of possible scenarios in the game to identify the next move that most likely leads to victory.
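That kind of exhaustive look-ahead can be sketched in a few lines. The game below is a toy chosen purely to keep the example small (a simple Nim variant: players alternate taking 1 or 2 stones, and whoever takes the last stone wins), but the search idea — score every possible continuation, then pick the move leading to the best outcome — is the same one classic chess programs use instead of learning from data.

```python
# A minimal sketch of game-tree search (minimax) on a toy game:
# players alternate removing 1 or 2 stones from a pile, and the
# player who takes the last stone wins.

def minimax(stones, maximizing):
    """Score a position by exploring every possible continuation."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose resulting position scores best for us."""
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, maximizing=False))
```

With 4 stones on the pile, the search finds that taking 1 stone (leaving the opponent a losing position of 3) is the winning move — no training data involved, just brute-force exploration.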

“A lot of the higher-level applications of achieving different forms of artificial intelligence are built on top of machine learning,” said Xie, who works specifically in this area. “Machine learning is more like the foundation, but developing machine learning algorithms involves many other foundational scientific pieces, including mathematics, statistics, combinatorics, optimization, and many more.”

Image
AI generated image of a futuristic brain with technology and circuits

Machine learning can be thought of as the base, with all kinds of uses and applications built on top. And these involve yet more words that have become more familiar: natural language processing, computer vision, speech recognition, and more. These are all applications of AI using underlying machine learning algorithms to pursue an outcome.


What is an Algorithm?

When Xie was working with the group of elementary school kids, she asked them this question. And once again, they offered an astute answer: “They told me an algorithm is steps that can be implemented by a machine.”

Algorithms are the recipes for a computer to follow. It’s how programmers tell the computer what they want it to do. An algorithm might ask the computer to add two numbers together or take data from an MRI sensor and produce an image of the patient.

“You’re not going to do anything on a computer without an algorithm involved somewhere,” Romberg said. “The difference between an algorithm and a computer program is that the program is the packaging that you need around the algorithm that allows it to interface with the real world.”
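Romberg’s distinction can be illustrated with the simplest algorithm mentioned above — adding two numbers. This is only a sketch; the function names are our own:

```python
# The "algorithm" is the bare recipe: steps a machine can follow.
def add(a, b):
    return a + b

# The "program" is the packaging that lets the algorithm interface
# with the real world: getting input in and results out.
def main():
    x, y = 2, 3          # in a real program: read from a user, file, or sensor
    print(add(x, y))     # hand the result back to the world

if __name__ == "__main__":
    main()
```

The `add` function is the recipe; everything around it exists only so the recipe can meet the real world.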

When building some kind of AI tool, algorithms are one of the two key components, Xie said. The other is modeling.

The first step to creating an AI application is defining what the tool needs to achieve. Once developers understand that, they can collect data — lots of data — and create a model. The model is usually an abstract way of representing the data, like a statistical model or a sequence model. (As an illustration, sequence models can be used for language. Sentences are ultimately a sequence of words, with dependencies and grammar that help shape meaning.) Researchers use a variety of common modeling approaches for this work.
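A toy version of the sequence-modeling idea: count which word follows which in some example sentences, then predict the most frequent successor. Real language models are vastly more sophisticated, but the core idea — a sentence is a sequence of words with statistical dependencies — is the same. The example sentences below are our own invention:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count, for each word, which words follow it in the data."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequent successor of a word, if any."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigrams(["the cat sat", "the cat ran", "the dog sat"])
```

Given those three sentences, asking the model what follows “the” returns “cat,” simply because that pairing appears most often in the data.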

“Once we have a handle on that model, a lot of times after that is math — algorithms and implementation coded up into some pipeline,” Xie said. “Then we listen to the machine to actually have something.”


Training the Algorithms

Training algorithms to take data and produce results is painstaking work. But at its simplest, Romberg said, it’s a math problem: fitting a function to data. 

He used the example of image recognition and creating an AI tool to sort images of cars, trucks, and airplanes. The algorithm takes in a photo and proceeds through a series of computations in which different weights are applied to the image’s pixel data and the results are combined in a variety of ways. Finally, it produces a single label: car, truck, or airplane.

Training the system to correctly identify the cars as cars, and not as airplanes, requires teaching it with many examples.

“What you try to do based on all of those examples is adjust the weights so you’re getting the right answer for each of the examples you give it,” Romberg said.

Researchers might feed the algorithm a million pictures of cars, a million pictures of trucks, and a million pictures of airplanes. Then they run through a picture of an airplane to see how the system identifies it.

“If it says, this is a truck, then you adjust all the weights until it works. And you continue this process of passing over data many times until you have a set of weights inside that are consistent with the data that you’ve seen. From there, hopefully, it generalizes to new data that you’ll see in the future.”

In other words, the algorithm still correctly identifies pictures of cars, trucks, and airplanes it hasn’t been trained on.
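Romberg’s “fitting a function to data” description can be shown in miniature. Below, a single weight is adjusted, pass after pass over the data, until the function matches the examples — image classifiers do the same thing with millions of weights instead of one. The learning rate and step count are arbitrary choices for this sketch:

```python
# Training as function fitting, in miniature: one weight w is nudged
# until w * x matches the examples.

def train(examples, steps=200, lr=0.01):
    w = 0.0                              # start with an arbitrary weight
    for _ in range(steps):               # pass over the data many times
        for x, y in examples:
            error = w * x - y            # how wrong is the current answer?
            w -= lr * error * x          # nudge the weight to reduce the error
    return w

examples = [(1, 2), (2, 4), (3, 6)]      # data generated by y = 2x
w = train(examples)
```

The trained weight ends up very close to 2, and — as Romberg describes — it generalizes: multiplying a new input like 10 by the learned weight gives an answer close to 20, even though 10 never appeared in the training data.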


Neural Networks: Using the Brain as a Model

One way researchers have been working to build more efficient and flexible machine learning systems is to draw on our understanding of the human brain. 

Neural networks are a way to organize machine learning algorithms that try to mimic the way our brains collect, process, and act on information. The networks are efficient and extraordinarily flexible. (Though the terms “neural network” and “artificial neural network” sound incredibly futuristic, they have been around since the very early days of computers in the 1950s.)

Image
An AI generated image of a futuristic neural network concept

In a neural network, algorithms are layered atop one another to process data and pass on the important parts to a higher level. The approach is taken from one model of how brains function. In very simple terms, some stimulus activates neurons, which then feed data to each other and combine it in different ways. Once the information reaches a certain threshold, the information passes to the next layer of processing and so on.

Neural networks work similarly, collecting, weighing, and passing along data in a hierarchy from bottom to top.

“The lower level feeds forward to the next node, and then you combine the data, passing through an activation function. And the combination also has weights attached to it. These are going to select which information is most important,” Xie said. “When you design these algorithms, you have parameters that are the weights and activation and many, many layers. This is such a flexible architecture.”
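The structure Xie describes — weighted combinations of inputs passed through an activation function, layer feeding layer — can be sketched in plain Python. The weights below are hand-picked placeholders, not trained values:

```python
import math

def sigmoid(z):
    """A common activation function: squashes any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: a weighted sum of the inputs at each node, then the activation."""
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(inputs, network):
    """Feed the data through each layer in turn, bottom to top."""
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# Hypothetical weights: two inputs -> two hidden nodes -> one output node.
network = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer
    ([[2.0, -2.0]], [0.0]),                    # output layer
]
output = forward([1.0, 0.0], network)
```

Everything here is the “flexible architecture” Xie mentions: change the weights, add layers, or swap the activation function, and the same small machinery expresses a very different computation. Training, as described above, is the process of finding weights that make the output match the data.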

Interestingly, both Xie and Romberg noted it’s not always clear why a neural network or other kinds of AI algorithms actually work. The complexity of the layers and the millions or even billions of parameters involved can make it challenging to understand why an algorithm or a neural network of algorithms produces a result — even if it’s the correct result. This is an area both Xie and Romberg are working to untangle in various ways.

“One interesting thing about AI and machine learning is that it’s been a highly experimental science so far: people have tried techniques out and they work. Some of that has refuted, or bumped up against, how we understand classical statistics,” Romberg said. “Some of the work I do is trying to understand how AI algorithms really work. Can we put them into a classical mathematical framework?”

Xie likewise uses statistics, machine learning, and optimization principles to shed light on the functions of tools like neural networks so scientists can build better ones — and trust the output of such systems.

Image
AI generated image of a futuristic humanoid robot

“There are all kinds of theories, and there have been advances in explaining how and why a neural network works,” Xie said. “A lot of math researchers and statisticians, including myself, are working on explaining how it works and how to do it better. Because otherwise, it’s a black box — and to what extent can we trust it? We want to answer this question.”


Trusting the Machine

The importance of trusting the output of AI tools becomes crystal clear when you think about applications like self-driving cars. Algorithms must take in mountains of data from different kinds of sensors about the environment, the car itself, and more. Lidar, radar, video, and other data might provide information about the road, signage, other vehicles, and nearby pedestrians. And the AI must process that data — recognizing people, say, or that the car ahead is slowing — before directing some action. Get it wrong, and passengers or pedestrians could be hurt.

The same is true in healthcare settings. Xie has been looking at using AI to help critical care nursing staff monitor patients, and the stakes are sky-high.

“To what extent can we trust an algorithm to automatically monitor patients and raise an alarm? That’s a life-or-death situation, so we really have to ensure safety,” she said.


Why Now?

Even in the days when computers far less powerful than our smartphones took up entire rooms, scientists started to wonder, “What if?”

“Since the 1950s, when computers were first invented, people have been thinking about how to do artificial intelligence,” Xie said. “It’s not a new thing.”

So why does it seem like AI is now suddenly everywhere? Probably thanks to a confluence of factors, not least of which are the internet and social media. When OpenAI released its ChatGPT chatbot in November 2022, it quickly went viral. Now companies are rushing to bake AI functions into existing tools and develop new ones.

“ChatGPT basically showed many people something they hadn’t seen before about how to interact with a machine,” Romberg said. “It’s a very tangible illustration or tangible packaging of research, and specifically research in large language models.”

The current awareness of AI tools — and specifically generative AI for chats or images — is less a watershed moment in research or AI development, though. It’s more about how relatable those tools were to a broad audience.

“The ability to answer queries is just part of a continuing trend of things we’ve seen. We’ve had recommender systems for Amazon and Netflix. We have Google that prioritizes our searches. We’ve had Siri that gives us at least semi-coherent answers to the questions we ask,” Romberg said. “So, this has been building over time.”

 

The visuals in this feature were created using Adobe Firefly’s text-to-image generative AI tool.


Ask the AI

Since ChatGPT seems to have sparked so much of the current attention on AI, what does the chatbot have to say about these AI basics? (Note: These answers come from the publicly available ChatGPT 3.5.)

How do you define artificial intelligence?

Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks include reasoning, learning, problem-solving, perception, understanding natural language, and even interacting with the environment. AI systems often employ techniques such as machine learning, neural networks, natural language processing, and robotics to emulate human-like cognitive abilities. The ultimate goal of AI is to create machines that can mimic human intelligence and perform tasks autonomously, potentially surpassing human capabilities in certain domains.

So, is AI actually intelligent or just mimicking intelligence according to the instructions from algorithms?

The question of whether AI is actually intelligent or merely mimicking intelligence is a topic of ongoing debate in the field of artificial intelligence and philosophy.

At present, AI systems are typically designed to simulate intelligent behavior based on algorithms and instructions provided by humans. These systems may demonstrate impressive capabilities in tasks like problem-solving, pattern recognition, language understanding, and decision-making. However, they do not possess consciousness, self-awareness, emotions, or subjective experiences that are often associated with human intelligence.

AI’s “intelligence” can be understood as a combination of sophisticated algorithms, vast computational power, and access to large datasets, allowing them to perform tasks that would traditionally require human intelligence. While AI systems can exhibit behaviors that appear intelligent, they lack the self-awareness and understanding that humans possess. Thus, some argue that AI is not truly intelligent but rather mimicking intelligence based on programmed rules and algorithms.

Nevertheless, as AI technologies advance, there is ongoing exploration into creating AI systems with higher levels of autonomy, adaptability, and even consciousness. However, achieving true artificial general intelligence — intelligence comparable to that of humans across a wide range of tasks — remains an elusive goal.


Related Stories

Making AI

A first-of-its-kind AI Makerspace created in collaboration with NVIDIA will give undergrads unprecedented access to supercomputing power for courses, projects, and their own innovations.

AI for a Better World

Georgia Tech engineers are refining AI tools and deploying them to help individuals, cities, and everything in between.

AI Beyond Campus

Corporate leaders with ties to the College describe AI in their current roles, what will happen in the next five years, and how students and professionals will need to adapt.

Image
Helluva Engineer Spring 2024 magazine cover with a large gold "T" on a futuristic technology background and text: Artificial Intelligence

Helluva Engineer

This story originally appeared in the Spring 2024 issue of Helluva Engineer magazine.

Georgia Tech engineers are using artificial intelligence to make roads and rivers safer, restore or boost human function, and enhance the practice of engineering. We’re building the technology and infrastructure to power tomorrow’s AI tools. And we’re giving our students the AI courses and supercomputing power they need to be ready. AI is changing our world, and Georgia Tech engineers are leading the way.