Don’t Ask “How Do We Put AI Into Our Product?” Ask This Instead…

“How can we put AI into our product?” How many of you have heard or said that recently? With all the hype around AI, it’s understandable why. However, it is the wrong question to ask.
What is meant by AI?
Let’s define what we are talking about here. AI is a broad field of tools and techniques that has been around for decades, but most people, especially in the context above, mean Large Language Models, or LLMs. You know, the ChatGPTs, Google Geminis, and Groks of the world. It is no surprise that LLMs have become the face of the AI world. They represent a huge leap in what we think of as AI, and even in how we define intelligence. For many, an LLM is a magic black box: you type or speak questions into it and get back responses that, at least on the surface, sound reasonable and human.
Why are we asking the wrong question?
Earlier I pointed out that the wrong question to ask is “How can we put AI into our product?” But why is it the wrong question? Well, I’m sure you have heard the saying, “When you have a hammer, everything looks like a nail.” That has never been more true than it is here. LLMs, while amazing pieces of technology, have their limits. Trying to cram an LLM into every situation leads to sub-optimal and expensive solutions.
What is the right question to ask?
The right question to ask is… “What problems do we want to solve?” As you ask yourself that, don’t forget the problems you may have brushed off before because they didn’t seem solvable with a computer. For instance, automatically summarizing a technical document and wording it so a non-technical person can understand it was nearly, if not completely, impossible not that long ago.
Now that you have your list of problems to solve, determine the best way to solve each one. Some may call for “AI” as originally intended. Some may call for other forms of AI, i.e. not LLMs. Others may be good old-fashioned programmatic automation.
How do we determine if LLMs are the right tool for the job?
Well, you need to understand, at least in part, how LLMs work. The following is slightly oversimplified, but good enough for our purposes. LLMs are text predictors. They are trained on large amounts of data to produce the text that is most probable given the text that came before. They don’t really understand anything. We can use this to determine what they are and aren’t good at.
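To make “text predictor” concrete, here is a toy sketch of the idea using word-pair counts. Real LLMs use neural networks trained on billions of tokens, not a table of counts, but the core loop is the same: given the text so far, pick the most probable next token.

```python
from collections import Counter, defaultdict

# Toy "language model": learn next-word frequencies from a tiny corpus.
# This is an illustration of next-token prediction, not how real LLMs
# are implemented.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(word):
    """Return the word most likely to follow `word` in the corpus."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
```

Notice that the model never “understands” cats or mats; it only tracks which words tend to follow which. That limitation is exactly what drives the lists below.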
The Bad
- Math (though there are workarounds)
- Keeping track of context
- Admitting they don’t know an answer
The Good
- Text generation and transformation
- Summarization
I am sure we could add to both categories, but you get the point. Also note that LLMs can be trained to leverage other tools to augment their answers. For example, you could ask math questions of an LLM that has been trained to utilize a calculator.
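The calculator idea can be sketched as a simple router. In real tool-calling frameworks the model itself decides when to invoke a tool; here `fake_llm` is a hypothetical stand-in, and a regex plays the role of the routing decision.

```python
import re

# Minimal sketch of the "LLM with tools" pattern: arithmetic goes to a
# deterministic calculator instead of letting the model guess at it.
# `fake_llm` is a placeholder, not a real API.

def calculator(expression: str) -> str:
    # Only allow digits and basic operators before evaluating.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

def fake_llm(question: str) -> str:
    return "This would be answered by the LLM."

def answer(question: str) -> str:
    # Route to the calculator when the question contains an arithmetic
    # expression; otherwise fall back to the model.
    match = re.search(r"[\d\s+\-*/().]*\d[\d\s+\-*/().]*", question)
    if match and any(op in match.group() for op in "+-*/"):
        return calculator(match.group().strip())
    return fake_llm(question)

print(answer("What is 127 * 49?"))  # routed to the calculator -> "6223"
```

The payoff: the math answer is exact every time, while everything else still flows to the model.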
Just because an LLM can be used to solve a problem doesn’t mean it should be. You can hammer a screw into a piece of wood, but it takes more effort and leaves you with a potentially inferior result. LLMs can be slow and expensive. Why use an LLM when a much cheaper and faster linear regression model solves the problem?
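To put the linear-regression point in concrete terms: if the problem is “predict a number from a number,” a least-squares fit runs in microseconds for free, with no API bill. The shipping-cost data below is made up for illustration.

```python
# Hypothetical data: package weight (kg) vs. shipping cost ($).
weights = [1.0, 2.0, 3.0, 4.0, 5.0]
costs   = [6.1, 8.0, 10.2, 11.9, 14.1]

# Ordinary least squares for a single feature, done by hand.
n = len(weights)
mean_x = sum(weights) / n
mean_y = sum(costs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weights, costs))
         / sum((x - mean_x) ** 2 for x in weights))
intercept = mean_y - slope * mean_x

def predict_cost(weight: float) -> float:
    """Predict shipping cost for a given package weight."""
    return slope * weight + intercept

print(round(predict_cost(6.0), 2))  # ≈ 16.03
```

A fit like this is trivially cheap, fully explainable, and deterministic. An LLM answering the same question would be slower, costlier, and no more accurate.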
Am I saying you shouldn’t use LLMs?
Short answer: no. Longer answer: focus on the problems that need solving. If an LLM solves the problem well, go for it, but don’t forget there may be other, better solutions.
Need some help identifying and solving problems?
I am here to help. Schedule a Free 30-Minute Consultation to see how I can improve your process.