AI: Let’s Demystify The Buzz
In a webinar with Stephen Waddington, we discussed all the different buzzwords surrounding AI, and especially generative AI. With so many of them flying around, it’s hard to understand what’s real and what’s fluff.
So, we decided to break all these words down into plain English so that you can follow the conversation without needing a degree in computer science. After reading this, you’ll finally understand what’s going on when someone drones on and on about their shiny new machine learning algorithm or why their foundational models are so incredible.
For those of us communicators who want to get a better understanding of generative AI (or genAI for short), let’s take a (very surface-level) look at how it all works!
AI
First things first - what is AI exactly? Well, according to IBM, it’s “technology that enables computers and machines to simulate human intelligence and problem-solving.” Pretty straightforward.
Now, AI is a HUGE field, encompassing everything from robots working in factories and algorithms that spot cancer in pathology labs to self-driving cars, autopilots on airplanes, Machine Learning, and so much more. These different categories are called “subsets” - a term we’ll be using throughout this post as we delve deeper into what genAI is and how it works.
All of these subsets are different and have unique purposes. And yet, at the end of the day, they’re all still considered AI.
For our purposes of discovering how genAI works, let’s zoom in on the AI subset of Machine Learning.
Machine Learning
Machine Learning, or “ML”, uses algorithms that learn from data and generalize what they’ve learned to new situations. On a VERY basic level, you feed an ML algorithm a bunch of information, ask it a question, and the algorithm works out an answer based on the information you gave it before spitting it out.
There are lots of subsets of ML. These include decision trees, which are used in simple chatbots. Think of it like trying to decide what to eat. Are you hungry now or later? If you’re hungry now, do you want to cook or order food? If you want to order food, do you want to go out or order delivery? If delivery, do you want pizza, Asian food, or a burger? If you want pizza, do you want olives, pepperoni, or extra cheese on it? And so on.
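If you’re curious what that looks like under the hood, here’s a minimal sketch in Python (with completely made-up choices) of the dinner example as a chain of yes-or-no questions - which is really all a simple decision-tree chatbot is doing:

```python
# A toy "decision tree" for picking dinner - each question narrows the options
# until we land on an answer. Simple chatbots work the same way, just with
# pre-written questions and replies instead of food.

def pick_dinner(hungry_now: bool, wants_to_cook: bool, wants_delivery: bool, craving: str) -> str:
    if not hungry_now:
        return "Wait until later"
    if wants_to_cook:
        return "Cook something at home"
    if not wants_delivery:
        return "Go out to eat"
    # We're ordering delivery - what are we craving?
    if craving == "pizza":
        return "Order a pizza (olives, pepperoni, or extra cheese - your call)"
    elif craving == "asian":
        return "Order Asian food"
    else:
        return "Order a burger"

print(pick_dinner(hungry_now=True, wants_to_cook=False, wants_delivery=True, craving="pizza"))
```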
There’s linear regression - something that’s quite useful for predicting weather. For instance, weather services have a lot of historical data on hurricanes and cyclones, so they can put that data into an algorithm to better predict where these storms will make landfall. This is already being done with increasing accuracy and is saving lives.
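As a (heavily simplified) illustration, here’s what that “learn from past data, predict the future” idea looks like in code, using the scikit-learn library and entirely made-up storm numbers - real hurricane forecasting models are vastly more sophisticated:

```python
# A toy regression sketch: fit a line to (invented) historical storm readings
# and use it to guess where a new storm might end up.
from sklearn.linear_model import LinearRegression

# Pretend each row is a past storm: [wind speed (mph), sea surface temp (°C)]
past_storms = [[85, 27.5], [110, 28.9], [95, 28.1], [130, 29.6], [75, 27.0]]
# ...and this is the (made-up) latitude where each of those storms made landfall
landfall_lat = [29.1, 27.4, 28.3, 26.2, 29.8]

model = LinearRegression()
model.fit(past_storms, landfall_lat)   # "learn" from the historical data

new_storm = [[105, 28.7]]              # a storm we're watching right now
print(model.predict(new_storm))        # predicted landfall latitude
```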
There’s clustering, where the algorithm groups lots of similar things into clusters to work out how best to deal with them. This is how targeted advertising and marketing works - the algorithm finds people with similar interests, puts them in a cluster, and then shows them all the same ad.
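Here’s an equally simplified sketch of clustering, again using scikit-learn and invented “interest scores” - real ad platforms work with far richer data, but the grouping idea is the same:

```python
# A toy clustering sketch: group people with similar (made-up) interest scores,
# then "show" everyone in the same cluster the same ad.
from sklearn.cluster import KMeans

# Pretend each row is a person: [interest in sports, interest in cooking] on a 0-10 scale
people = [[9, 1], [8, 2], [1, 9], [2, 8], [9, 2], [1, 7]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(people)

print(cluster_ids)  # e.g. [0 0 1 1 0 1] - sports fans in one cluster, foodies in the other
```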
And finally, there’s deep learning, which is a deeper form of machine learning.
Wait, what??
Deep Learning
All deep learning means is that the neural network doing the “thinking” has more than three layers.
Wait, what? Layers? Neural networks?
Admittedly, this is a difficult concept, but for our purposes, neural networks are the “brain” of the algorithm, while layers determine how deeply that brain can “think.”
Think about your own brain. You’re able to have complex thoughts and feelings because of all of the layers and neurons that are inside of your head. Well, it’s the same thing with deep learning for AI, but artificial - the more neurons and layers there are, the more human-like and deep the output of the algorithm will be.
So what is deep learning? It’s when the algorithm takes all of the information in its brain and thinks really hard, really deeply, and with real nuance. Compare that to the simpler ML approaches one level above, where the thinking is shallower and less nuanced.
In fact, this is where we get the terms “deep” and “shallow” AI from!
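For the curious, here’s a purely illustrative sketch (using the PyTorch library, with arbitrary layer sizes) of what “shallow” versus “deep” looks like in code - the only real difference is how many layers are stacked between the input and the output:

```python
# Illustrative only: a "shallow" network with a single hidden layer versus a
# "deep" one that stacks many layers, so it can learn more nuanced patterns.
import torch.nn as nn

# Shallow: input -> one hidden layer -> output
shallow = nn.Sequential(
    nn.Linear(10, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

# Deep: the same input and output, but many hidden layers in between
deep = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

print(shallow)
print(deep)
```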
But where does it get all the information from? Well, it gets it from the brain, of course! And in the case of AI, the “brain” is a foundational model.
Foundational Models
Foundational models are basically big brains with lots of knowledge in them. These models can then be trained and fine-tuned so that the algorithm can use its deep learning capabilities to take on really complex assignments.
Basically, to gather the amount of information you would need to build your own genAI chatbot from scratch, you would have to spend hundreds of millions of dollars and thousands of hours putting the “brain” together. However, many organizations have already done this work, so all someone needs to do is buy access to a model and put it into their AI.
One kind of foundational model is the Large Language Model, or LLM. The name fits because these models are HUGE and trained on tons of data, they are built to process normal, human language (rather than computer code), and they are foundational models.
LLMs include the GPT models behind OpenAI’s ChatGPT, Google’s Gemini, Meta’s Llama, and many others! So when building specially tailored AI chatbots, many organizations will buy access to one of these LLMs and incorporate it into their offerings.
If a company buys access to an LLM and makes only minor tweaks so it does certain things, it’s known as a “wrapper” company, since it basically wraps the LLM in nice packaging.
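To picture what a thin wrapper looks like in practice, here’s a minimal sketch using OpenAI’s official Python library - the model name, prompt, and draft_pitch function are illustrative placeholders, not any particular company’s actual setup:

```python
# A minimal "wrapper" sketch: take an existing LLM, add your own instructions
# on top, and expose it as a feature in your product.
from openai import OpenAI

client = OpenAI()  # reads your API key from the OPENAI_API_KEY environment variable

def draft_pitch(topic: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # whichever LLM you've licensed access to
        messages=[
            {"role": "system", "content": "You are a PR assistant that writes short, punchy media pitches."},
            {"role": "user", "content": f"Write a three-sentence pitch about: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(draft_pitch("a new study on remote work productivity"))
```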
However, many other organizations (like Propel!) integrate LLMs into their own deep learning algorithms and train them on their own, unique data - which is how Propel is able to generate pitch and press release drafts! It’s also how we were able to create the matching algorithm that pairs pitches with journalists, turning simple prompts into a highly targeted media list.
There are also video foundational models, such as OpenAI’s Sora, which can generate videos in seconds; audio models that can create speech and music; and even biological models that can predict how cells and proteins will react to different environmental factors!
So where does genAI fit in?
So, what is genAI? Well, as IBM describes it, “generative AI models are specifically crafted to generate new content. It is the creative expression stemming from the knowledge base of the foundation models.”
Basically, generative AI uses foundation models to drive deep learning algorithms (which are part of machine learning, which is itself part of AI in general) to create brand-new, never-before-seen content.
And that’s it! Now hopefully you’ll be less confused by all these AI terms. Now go out there and generate some great content of your own!