Editor’s Note: We’re all looking to get smarter, faster about what to do with generative AI. That’s why Chief Executive Group has put together a great afternoon of learning and strategizing: the C-Suite’s Guide to Generative AI: Practical Applications and Uses, June 26, anchored by Tom Davenport, whom we talk with here, and Microsoft’s Rashida Hodge. Get a leg up. Join us >
Artificial intelligence is everywhere, doing just about everything. AI tools are predicting customer behavior, diagnosing patients, screening résumés, managing supply chains, responding to queries and fulfilling orders. They’re informing strategic decision-making by digesting and analyzing vast quantities of data with lightning speed. Chances are you’ve experienced firsthand the magic of generative AI creating images and text in seconds that are indistinguishable from human efforts.
Yet, amid these seemingly endless capabilities, exactly how companies can participate in this AI-fueled future, whether through incremental improvements or revolutionary transformation, can be difficult to pin down. For clarity, Chief Executive’s Dan Bigman spoke with AI thought leader Tom Davenport, author of 15 books on innovation and technology, including two on analytics and artificial intelligence. The President’s Distinguished Professor of Information Technology and Management at Babson College and a fellow of the MIT Center for Digital Business, Davenport weighed in on what leaders need to understand now about the real-world future of AI in business. Some insights:
ChatGPT, Midjourney and other AI tools have really exploded. What’s your take on AI in business, how it’s accelerating and what you think it means?
Well, so far, ChatGPT and the other generative AI tools are not for what you’d call mission-critical applications. They are for writing marketing copy, blog posts and product descriptions, not the terribly important stuff. However, over the weekend I read that the company that owns Sports Illustrated is doing what’s called fine-tune training, where you train a model on all of your own content after it’s been trained on the entire Internet.
Then you can use it to write things that relate to everything you have written in the past. Before I was in AI, I did analytics, and before that I was a knowledge-management person. I think AI could be quite revolutionary for managing all the knowledge an organization has accumulated and providing very easy access to it. I suppose it could also revolutionize journalism. The Associated Press was doing automated reporting for a while, but that was for highly structured things. We used to say, “If you’re a feature writer, you’re safe.” Now I think the generative stuff is more feature-oriented. It’s not very good at news, but it’s really good at features.
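Editor’s Note: For readers curious about the mechanics, fine-tune training means continuing to train an already-trained language model on your own corpus. Below is a minimal, illustrative sketch using the open-source Hugging Face libraries and a small stand-in model; the file name, model choice and hyperparameters are our assumptions, not details from the Sports Illustrated example.

```python
# Minimal sketch: fine-tune a small causal language model on a company's
# own archive. "company_archive.txt" and all hyperparameters are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # small stand-in for a larger model such as GPT-3
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the company's past content as the fine-tuning corpus.
dataset = load_dataset("text", data_files={"train": "company_archive.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the model now reflects the house archive and style
```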
I saw the exact same story, and I thought that was so slick because if it’s all yours, you avoid the plagiarism issue. You already have it vetted and, theoretically, fact-checked. So it’s just reconstituting things at speed and scale.
Right. They’ve been publishing for decades, so why not mine the past? And Morgan Stanley is trying something similar. It’s still pretty experimental: they’re taking all of the investing advice they’ve accumulated over the years, plus life-event advice, to get better at matching investments to life events, and using that to fine-tune-train GPT-3. Then they’ll be able to either give customers direct access to that knowledge or, more likely, mediate it through their financial advisers.
You’ve written about this for a long time now. What do you tell leaders about the best way to think about AI long term and incorporate it into their strategy?
It’s really important for senior executives to have some sense of what the different AI technologies are, what they’re good for and what stage of development they are in. Even Sam Altman of OpenAI says, “Don’t do anything important with ChatGPT.”
Yet beyond these generative tools, there are much more business-oriented, prediction-oriented tools that will predict things like which offer customers will respond to, which employees will leave or what price a customer is likely to accept. Those are not experimental. The ChatGPT stuff, people should definitely be experimenting with, but they should be aware that: A) AI gets things wrong quite frequently and in a very confident fashion. B) It’s not very good at some things, such as math. C) Chances are fairly good that somebody’s going to sue you if you use a lot of this stuff. But everybody in the company ought to be experimenting with it to see what it can do and how it might make their jobs better.
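Editor’s Note: The prediction-oriented tools Davenport contrasts with ChatGPT are conventional supervised machine learning. As a rough illustration only, here is what an employee-attrition predictor might look like; the data file and column names are hypothetical.

```python
# Minimal sketch of a prediction-oriented tool: which employees will leave?
# The CSV file and column names are hypothetical, for illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("hr_records.csv")  # assumed historical HR data
X = df[["tenure_years", "salary", "last_raise_pct", "manager_changes"]]
y = df["left_company"]  # 1 if the employee departed, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score held-out employees by probability of leaving; HR can act on the
# riskiest cases well before they resign.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```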
So where do you think this takes companies in 5, 10 years? What should leaders be doing to prepare?
Some of the companies I wrote about in my book are using AI to change their strategy, enable new business models and really transform operations, not just improve them on the margins. The tendency with analytics and AI is to say, “Okay, we already make decisions; let’s make better ones,” or, “Let’s use it to make our processes more efficient.” The survey research I’ve done with Deloitte suggests those are the most common objectives, but if you’re a CEO, you ought to be thinking about what new business model is possible with this, or what entirely new products and services you could offer as a result.
The thing that was most interesting to me was these new organizational structures that companies were building that were ecosystem-based rather than limited to their own company. For example, Airbus put together an ecosystem of all the companies that fly Airbus aircraft, getting the customers together to say, “Okay, how can we make these flights more efficient, make the maintenance processes better and share data?”
The best example of that in the world is Ping An in China, which has five different ecosystems, each powered by AI. They even have the smart-city ecosystem working with the healthcare ecosystem sharing data across a variety of industry participants, in many cases competitors as well. They’re using that data to make better decisions, which makes the customers’ lives better. It’s just really a perfect sort of circular business model, in a sense: better data, more customers, better offerings, etc.
In a situation like that, how does the AI empower the business model?
A couple of ways. Take Airbus, or a Japanese company called Sompo International, the second-biggest insurance company and the largest nursing-home owner in Japan. They use it to integrate data. Data integration itself is not that exciting, but if every company gathers information in different ways, you need AI to put it together. And there are all sorts of things you can do once you have that data. You can start to predict when plane engines need service or when a person in a nursing home is getting sick.
Once you have the data, you can start to add value to it in all sorts of ways; that’s AI. These sorts of platform-based companies in the digital economy have been doing this for a while: Uber, Airbnb and even Google couldn’t exist without AI. The fact that legacy companies can do it as well is quite exciting, but it puts a lot of weight on CEOs and senior leaders to say, “Okay, what are we not doing that we could do if we had all of this AI capability at our fingertips?”
How do you begin to imagine that kind of a thing if it’s outside your current competency?
It’s largely a matter of awareness, and you get awareness in most cases through education. So if I were a CEO, I’d say, “Okay, all my senior leaders and directors have to be educated on how AI works, what it can do and what it’s doing in other companies.” I’ve just been doing a program at MIT for the military. They’re bringing in senior leaders from all of the military branches, including NATO, not just the U.S., and they basically spend a week learning about AI. I don’t know that many companies would commit that much time, but I don’t know how you get it otherwise. P&G has been sending a lot of its senior executives virtually to a Harvard executive education program. Various companies I know have started to do that, but a lot more haven’t yet.
What do you tell them when they show up and ask you, “What can this do and what can’t this do?”
It should be a lot more than spouting off about what AI is and so on. It needs to be very interactive: “Okay, here’s our strategy. How could that change? What have some other companies done in that regard?” It’s half typical education and half a consulting-oriented workshop. In the military, obviously, the submarine people are going to have different questions than the Air Force people or the Space Force people. So if you have a company with a bunch of different business units, you’d probably want to have a lot of breakouts and separate conversations.
Is this about looking at business problems and seeing if there’s a new way of doing it? Or is the approach different, where it really is more blue-sky brainstorming?
The problem-solving approach, if you do it on a large enough scale, can certainly be transformational. Shell is using AI to totally transform how it inspects refineries and pipelines. It used to take something like six years to inspect all the piping in a refinery; now they use drones and AI image recognition, and it’s about six days. Clearly that’s a dramatic transformation in how you can go about that particular part of the business. So there’s that, but there’s also the blue-sky part: what new business model could be made possible, and what should we be considering that we haven’t even thought about before?
We often think about the future based on metaphors about the past, but this seems to be on a different track.
A lot of AI is sort of analytics on steroids. But you’re right, some of the language-oriented stuff we’ve never really had before. We’ve never had anything that could compose a really great essay, blog post or product description without human involvement. And so we need to separate the things that are extensions of what we’ve done before from the things that weren’t possible at all in the past. That’s one of the things that’s gotten people so excited about ChatGPT. It’s still machine learning, it’s still prediction, but it feels like a completely different capability.
You spend all of your time on this. The rest of us probably spend 2 percent of our time on this. What do you see coming down the road?
There probably are some things that nobody has thought about yet, but we have plenty of work still to do on the things we’re familiar with as possibilities but haven’t really executed on. Clearly, we still have some work to do in using image recognition for driving purposes; it’s been just around the corner for 40 years now, so expect continued refinement. But everybody in a company needs to be thinking about what it will do. Fully autonomous vehicles, fully autonomous aircraft, fully autonomous lawn mowers and so on will clearly happen at some point, so a huge number of businesses will be changed by that capability.
There’s this company, CCC Intelligent Solutions, that’s been around for 40 or 50 years. They knew the collateral value of cars, so that if you wreck your car, the insurance company knows how much to give you for it. The CEO was previously the CTO, and he had a pretty good sense that before long, people were going to be able to take high-resolution images with cell-phone cameras, and you would be able to get AI-based image recognition. So about 10 years ago, they started building the capability to let you take a photograph of your recently crashed car and get an immediate estimate of how much it would cost to fix, if it’s not a huge amount of damage. It took that kind of long-term betting capability… I mean, we can all now look back and say, sure, that was a no-brainer, but it wasn’t at the time.
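Editor’s Note: CCC’s system is proprietary, but the underlying idea, triaging a crash photo with a pretrained vision model, can be sketched roughly as below. The severity labels and training setup are invented for illustration.

```python
# Rough sketch of photo-based damage triage: classify a crash photo into
# damage-severity buckets with a pretrained vision model. The labels and
# setup are hypothetical; CCC's actual system is proprietary.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

classes = ["minor", "moderate", "severe"]  # invented severity buckets

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))  # new output head
# ... fine-tune the new head on labeled crash photos before real use ...
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("crashed_car.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    severity = classes[model(img).argmax(dim=1).item()]
print("Estimated damage:", severity)  # map severity to a repair-cost band
```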
What pitfalls do you see going forward with this, not in terms of societal, but where it goes wrong as people try to make the leap across the chasm here?
It’s the AI-human interface more than anything else. One pitfall is lack of ambition. People just do too much experimentation, and they don’t really deploy. There are various statistics for what percentage of AI models actually get deployed into a production capability; on average, it’s probably 20 percent. I don’t know the exact right figure, and it’s clearly not going to be 100 percent, because you should be experimenting and failing some, but deploying only 20 percent is a huge waste of economic value. So we need a plan for deployment. Deployment involves change in various aspects of the business. It means treating this increasingly as a product.
A lot of companies talk about data products as something they should be trying to manage, and they should put data product managers in place; the product includes the data, the analytics or AI, the organizational change and the ongoing management. Kroger was telling me the other day, “When avian flu comes, the price of eggs goes up to six bucks a dozen. Our models weren’t predicting that, so we were not able to do effective planning.” You’ve got to constantly monitor the models after you put them in place, and a product management structure does that very well. The smartest companies are starting to do that now. But a big part of this is the human interface. We’re not doing a very good job of training people on the front line to use this effectively.
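Editor’s Note: The Kroger anecdote is a monitoring problem: live inputs (egg prices) drifted far outside the model’s training range. A minimal sketch of the kind of drift check a data product team might run is below; the statistical test, threshold and stand-in data are our assumptions.

```python
# Minimal sketch of post-deployment model monitoring: flag a model for
# retraining when live inputs drift away from the training distribution.
# The threshold and stand-in data are invented for illustration.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_col: np.ndarray, live_col: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: True means drift detected."""
    _, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

# e.g., egg prices seen in training vs. prices streaming in this week
train_prices = np.random.normal(2.0, 0.3, 5000)  # stand-in training data
live_prices = np.random.normal(6.0, 0.5, 200)    # avian-flu price spike

if check_drift(train_prices, live_prices):
    print("Input drift detected: route forecasts for review and retrain.")
```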
A lot of surveys show people saying, “Managers are not telling me what to expect or what I need to get better at.” It’s not easy to think about how a particular job will change, but people need some help thinking about that issue, and probably some reassurance that we’re not here to eliminate their jobs. The only people who will lose their jobs to AI are those who refuse to work with AI, and people need to be hearing that message so that they’re experimenting with ChatGPT and not afraid of it.
More and more, we will see democratization of a lot of these capabilities. We’re already seeing a lot of low-code and no-code tools for typical business applications. There’s democratized data science now with automated machine learning, and democratized automation: you can automate your own job with robotic process automation tools. We had elements of that before with Lean Six Sigma, people running around with yellow, green and black belts, but the automation part will make a huge difference there too. So it’s a new world a lot of organizations are facing.
Given those dynamics, what do those at the top have to do to set the tone? What can leaders do that brings about that success culturally?
They educate themselves. A great example is Piyush Gupta at DBS Bank. He told me his mentor had been John Reed at Citi, who was the first banker to get the importance of information and technology. So Piyush helped design their data strategy. They used this Amazon game involving Formula 1-style racing to teach reinforcement learning, and he said, “I wasn’t the best in the company, but I was in the top 100.” People really look to see: Is the CEO participating in this?
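Editor’s Note: The Amazon game Davenport refers to is presumably AWS DeepRacer, which teaches reinforcement learning through model-car racing. Stripped to its essentials, reinforcement learning is trial-and-error learning from rewards; the toy tabular Q-learning example below shows the mechanics on a one-dimensional “track.” It is not DeepRacer’s actual deep-learning stack.

```python
# Toy reinforcement learning in the spirit of a racing game: an agent on a
# 1-D track learns, by trial and error, to reach the finish line.
import random

TRACK_LEN, ACTIONS = 10, [0, 1]          # actions: 0 = stay, 1 = advance
Q = [[0.0, 0.0] for _ in range(TRACK_LEN)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state < TRACK_LEN - 1:
        # epsilon-greedy: usually exploit the best-known action, sometimes explore
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[state][a]))
        next_state = min(state + action, TRACK_LEN - 1)
        reward = 1.0 if next_state == TRACK_LEN - 1 else -0.01  # finish bonus
        # Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                     - Q[state][action])
        state = next_state

# After training, the learned policy should be "advance" (1) at every state.
print("Learned policy:", [max(ACTIONS, key=lambda a: Q[s][a])
                          for s in range(TRACK_LEN - 1)])
```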
He encouraged a lot of experimentation and didn’t mind failure at first. They failed with IBM Watson. They engaged the National Research Foundation Singapore on a couple of things that failed. But he was not deterred; he kept going. He opened up the purse strings to let people experiment for a while without much in the way of controls or expected returns. But then he said, “Okay, experimentation time is over. We need to make some money with this, you need to say how much you’re going to make, and we’re going to track that carefully.”
So he was just engaged in the whole process. And Githesh Ramamurthy at CCC was the same way. They know what’s going on in the organization. It’s like Jack Welch used to do with all the digitization and Six Sigma stuff: he’d call people up and say, “Well, why don’t you do this?” Keep the pressure on.
Much of the conversation after books like Superintelligence was about AI gone wrong, about making sure we are running AI and AI is not running us. Is that just a fallacy we shouldn’t even be concerned about?
I’m not personally worried about robot overlords killing us all. I try to rely on the data, and all I’ve seen is augmentation, not large-scale automation. If there’s been any automation, it’s been on the margins, a task here, a task there; most people do more than one thing in their jobs. So I don’t see that on the immediate horizon. If superintelligence or its equivalent comes along, my guess is it’s still a ways away. You and I will probably be dead by then, so let’s not worry about it too much.
What else should leaders be aware of? The kind of things that transcend industries that maybe the rest of us should be adopting?
I don’t think there’s any magic to it. It’s just people at a very senior level looking around at the world and saying, “It’s going to be very data-intensive in our industry, if it’s not already. How do we use that data to make better decisions and create new strategies?” I spoke to a guy at Morgan Stanley who looked at Netflix 12 years ago and said, “We should be doing the same thing with investments that Netflix is doing for movies.”
It took them quite a while, but now they have this great next-best-action system, and it’s very influential in their business. But it turns out what people really care about is that their financial adviser is looking out for them. So yes, they send out these AI-based ideas, but they also integrated the system with Salesforce CRM to make sure people hear from their financial advisers often, about birthdays and hurricanes coming to their region and things of that nature. So it’s going to be a mixture of the human touch and the AI touch for a really long time.
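Editor’s Note: Morgan Stanley has described its next-best-action system only in general terms; purely as an illustration of the pattern, the sketch below ranks candidate client messages by a score and surfaces a shortlist to a human adviser. All messages, features and weights are invented.

```python
# Purely illustrative next-best-action ranking: score candidate messages
# for a client and surface the top few to a human financial adviser.
# Messages, tags and weights are invented; the real system is proprietary.
from dataclasses import dataclass

@dataclass
class Action:
    message: str
    base_score: float  # e.g., a model's predicted engagement probability

def next_best_actions(actions: list[Action], client_tags: set[str],
                      top_k: int = 3) -> list[str]:
    """Boost actions matching client context, then return the top_k messages."""
    def score(a: Action) -> float:
        boost = 0.2 if any(t in a.message.lower() for t in client_tags) else 0.0
        return a.base_score + boost
    return [a.message for a in sorted(actions, key=score, reverse=True)[:top_k]]

candidates = [
    Action("Birthday check-in call", 0.40),
    Action("Hurricane-preparedness note for your region", 0.35),
    Action("Rebalance tech-heavy portfolio", 0.55),
]
# The adviser sees a ranked shortlist, not an automated send.
print(next_best_actions(candidates, client_tags={"hurricane", "birthday"}))
```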
Any final thoughts as we send leaders off into the great AI future?
There’s no shortage of learning resources out there; I probably get 25 newsletters a week on AI. You should be playing around with ChatGPT, DALL-E 2, Midjourney. It’s your duty. It used to be that as a director, you had to know something about accounting or compensation or whatever. Now you’re really not doing your duty if you’re not aware of what AI can bring to the table.