If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank.

There was a "shift from putting out models to actually building products," said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell The Difference."

The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others.

Now such generative AI technology is baked into an increasing number of technology services whether we're looking for it or not — for instance, through the AI-generated answers in Google search results or new AI techniques in photo editing tools.

AI's sticker shock

Building the AI systems behind generative AI tools like OpenAI's ChatGPT or Google's Gemini requires investing in energy-hungry computing systems running on powerful and expensive AI chips. Those systems require so much electricity that tech giants announced deals this year to tap into nuclear power to help run them.

"We're talking about hundreds of billions of dollars of capital that has been poured into this technology," said Goldman Sachs analyst Kash Rangan.

Another analyst at the New York investment bank drew attention over the summer by arguing AI isn't solving the complex problems that would justify its costs. He also questioned whether AI models will ever be able to do what humans do so well. Rangan has a more optimistic view.

"We had this fascination that this technology is just going to be absolutely revolutionary, which it has not been in the two years since the introduction of ChatGPT," Rangan said. "It's more expensive than we thought, and it's not as productive as we thought."

Rangan, however, is still bullish about its potential and says that AI tools are already proving "absolutely incrementally more productive" in sales, design and a number of other professions.

AI and your job

Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology continues to grow. 

Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate job opportunities. Concerns about how movie studios will use AI helped fuel the union's film and television strike in 2023, which lasted four months.

Musicians and authors have voiced similar concerns over AI scraping their voices and books. But generative AI still can't create unique work or "completely new things," said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech.

"We can train it with more data so it has more information. But having more information doesn't mean you're more creative," he said. "AI tools currently don't understand the world."

Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI produced an image of a river with cut pieces of salmon like those found in grocery stores.

"What AI lacks today is the common sense that humans have, and I think that is the next step," he said.

An 'agentic future'

That type of reasoning is key to making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco's innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI "agents" that can do more useful things on people's behalf.

Future Bitcoin software, for example, will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with "agents that check for correctness, agents that check for security, agents that check for scale."

"We're getting to an agentic future," he said. "You're going to have all these agents being very good at certain skills, but also have a little bit of a character or color to them, because that's how we operate."

Gains in medicine

AI tools have also made inroads in medicine. Saad, the Virginia Tech professor, said AI has sped up diagnostics by giving doctors a starting point for determining a patient's care. AI can't detect disease, he said, but it can quickly digest data and flag potential problem areas for a real doctor to investigate. As in other arenas, however, it poses a risk of perpetuating falsehoods.

Tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near "human level robustness and accuracy," for example. But experts have said that Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences.

Pandey, of Cisco, said that some of the company's customers who work in pharmaceuticals have noted that AI has helped bridge the divide between "wet labs," where humans conduct physical experiments and research, and "dry labs," where people analyze data and often use computers for modeling.

When it comes to pharmaceutical development, that collaborative process can take several years, he said — with AI, the process can be cut to a few days.

"That, to me, has been the most dramatic use," Pandey said.
