Let’s take a critical look at ChatGPT’s business ideas

The writer is the Andrew M. Heller Professor at the Wharton School of the University of Pennsylvania

Whether for writing essays, passing academic exams or creating software code, ChatGPT has been making headlines since its launch in November.

It’s well documented that AI chatbots are knowledgeable – enough, for example, to merit a solid pass in my MBA class at the Wharton School. The bot is also friendly and articulate. But the quality of its answers turns out to be highly erratic.

Social media is flooded with examples of questions that ChatGPT answers incorrectly. Its mistakes are now often referred to as hallucinations. These include a confident explanation of why adding broken china to breast milk can help an infant’s digestion, and the deduction that if two cars take two hours to drive from A to B, it must take four hours for four cars to complete the same journey.

What, then, is the best use for a technology that probably shouldn’t (yet) be trusted without close human oversight?


One opportunity is to turn the tool’s weakness – the unpredictability of its responses – into a strength.

In most management settings, “high variance” – erratic and unpredictable behavior – is a bad thing. We want our pilots, doctors or agents to operate consistently, with minimal variation. So in these circumstances, Six Sigma – a set of management strategies for reducing errors – is the name of the game.

For example, any airline recruiting 10 pilots would rather have them all be solid, reliable hires (scoring, say, 7 out of 10 on piloting skill) than take on one pilot who is brilliant (scoring 10 out of 10) and nine who are terrible (scoring 1 out of 10) and could crash the plane.

But when it comes to creativity and innovation – such as finding a way to improve the air travel experience or launching a new aviation venture – the same airline would prefer one great idea (10 out of 10) and nine duds over a set of ten solid ideas.

The reason is that in creative tasks, variance is your friend. An idea is like a financial option: you exercise it only if it is good (or, better, great) and simply discard it if it is not. This perspective has important implications for the design and use of AI systems in general, and for their use in creativity and innovation in particular.
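The option-like logic can be made concrete with the toy scores from the airline example above (illustrative numbers only): when every unit must perform, the low-variance profile wins; when only the best draw is kept, the high-variance profile wins.

```python
# Toy scores from the airline example above (illustrative numbers only).
pilots_profile = [7] * 10            # ten solid, reliable hires
creative_profile = [10] + [1] * 9    # one brilliant idea, nine duds

# Operations setting: every unit must perform, so the average (and the
# worst case) is what matters -- low variance wins.
print(sum(pilots_profile) / 10, sum(creative_profile) / 10)   # 7.0 1.9
print(min(pilots_profile), min(creative_profile))             # 7 1

# Creative setting: an idea is an option -- we keep only the best draw
# and discard the rest, so high variance wins.
print(max(creative_profile), max(pilots_profile))             # 10 7
```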

First, we should distinguish between using ChatGPT to generate ideas and using it to perform a specific task. A student asking, “What would be ten class project ideas given my interest in space travel and psychology?” is an example of generating ideas. “Write me a three-page essay on the role of psychology in space travel” is an example of a specific task.

Although subtle, this distinction matters. In most evaluations of ChatGPT in academic settings, it achieved about 50-70% correct answers. For students or managers, that is a useful starting point for an assignment such as an essay, but not enough to get the job done. In idea generation, by contrast, all we need to succeed is one good idea, so we can tolerate many more mistakes.
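A back-of-the-envelope calculation makes the asymmetry explicit. The numbers below are hypothetical: suppose each generated idea (or one-shot answer) has only a 20% chance of being good. A specific task succeeds or fails in a single shot, while brainstorming needs just one hit among many attempts.

```python
# Hypothetical numbers, for illustration only.
p_good = 0.20     # chance any single idea (or one-shot answer) is good
n_ideas = 10      # ideas requested in a brainstorming prompt

# Specific task: one answer, one shot.
p_task = p_good

# Idea generation: success means at least one of the n ideas is good.
p_brainstorm = 1 - (1 - p_good) ** n_ideas

print(f"one-shot task: {p_task:.0%}")             # one-shot task: 20%
print(f"ten-idea brainstorm: {p_brainstorm:.0%}") # ten-idea brainstorm: 89%
```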

Second, when we’re looking for a single great idea, we should encourage our AI assistant to run wild, just as we should welcome out-of-the-box thinking in traditional brainstorming sessions. Using ChatGPT, this can be done by prompting with “imagine you’re six years old” or “what ideas would Steve Jobs come up with?”

As new versions of the technology emerge, we can imagine the user striking a balance between accurate (low variance) and totally insane (high variance).

Third, even the best idea has little value if no one acts on it. What’s the point of having one brilliant idea and nine bad ideas if we, as human decision makers, are bad at picking the winner? For this, we need to think more carefully about the idea selection process. There is a large body of academic research that shows that even experts (such as venture capitalists) are bad at identifying the best idea.

One way to deal with this selection problem is parallel exploration. It is difficult to choose the best of 10 ideas. But could you pick the top five and explore them a little further? This approach – often referred to as a tournament process – aims to validate ideas through small, inexpensive experiments.
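As a sketch of why a tournament can help, the simulation below uses made-up numbers: each idea has an unknown true quality, first-pass judgements are noisy, and the second round’s small experiments are assumed to be less noisy. Shortlisting five ideas and re-testing them tends to pick a better winner than trusting the first noisy ranking alone.

```python
import random

random.seed(1)

def noisy_score(quality, noise):
    # A judgement is the true quality plus uniform noise.
    return quality + random.uniform(-noise, noise)

def pick_directly(ideas):
    # One noisy screen; keep the apparent best.
    return max(ideas, key=lambda q: noisy_score(q, noise=3.0))

def tournament(ideas, shortlist=5):
    # Round 1: cheap noisy screen, keep the top five.
    finalists = sorted(ideas, key=lambda q: noisy_score(q, noise=3.0),
                       reverse=True)[:shortlist]
    # Round 2: small, less-noisy experiments on the shortlist.
    return max(finalists, key=lambda q: noisy_score(q, noise=1.0))

trials = 5000
direct_total = tourney_total = 0.0
for _ in range(trials):
    ideas = [random.uniform(0, 10) for _ in range(10)]  # unknown true quality
    direct_total += pick_directly(ideas)
    tourney_total += tournament(ideas)

print(f"direct pick: {direct_total / trials:.2f}")
print(f"tournament:  {tourney_total / trials:.2f}")
```

The tournament spends more evaluations, but that is exactly the point of the article’s “small, inexpensive experiments”: each extra look buys information that a single noisy judgement cannot provide.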

Another way is to go back to ChatGPT and ask it to critically evaluate its own answers. “What problems do you see with this idea?” might be a good question to ask. The answer may tell us why our idea is not as good as we thought. But remember: all it takes for creativity and innovation is one good idea.
