5 top business use cases for AI agents

“We’ll be releasing an agent-driven version of this process, where it’ll be a continuous monitoring of vendors, which was previously not possible,” he says.

This is something that companies often miss when they think about AI agents, he says. “A lot of people have focused on the optimization use cases,” he says. “But the real value is this expansion of the market, and expansion of revenue opportunities.”

5. HR and employee support

Another relatively low-risk, high-value use case for AI agents is answering employee questions and handling simple tasks on their behalf. A January IBM survey on gen AI development, in fact, concluded that 43% of companies use AI agents for HR.

Indicium, a global data services company, began deploying AI agents in mid-2024, for example, when the technology started to mature.

“You’d start seeing off-the-shelf applications — both open source and proprietary — that made it easier to build them,” says Daniel Avancini, the company’s CDO.

The agents are used to make things easier for HR, he says, handling tasks such as internal knowledge retrieval, tagging, and documentation, as well as other business processes. Each agent is like a microservice, specializing in one particular thing. “And they all talk to each other in a multi-agent system,” he says. These prompt-based conversations between agents can behave in peculiar ways, and the tricky part is the possibility of hallucinations and all the other problems that come with gen AI. “So there’s a lot of tweaking of the model so they don’t do the wrong thing or access the wrong information,” he says.
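
The article doesn’t show Indicium’s implementation, but the pattern Avancini describes, small specialized agents plus a router that decides which one handles a request, can be sketched in a few lines of Python. The agent names, the prompts, and the call_llm placeholder below are illustrative assumptions, not Indicium’s actual system.

```python
from dataclasses import dataclass
from typing import Dict

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call (hosted API or local model).
    return f"[model response to: {prompt[:60]}]"

@dataclass
class Agent:
    """A narrowly scoped agent: one responsibility, one system prompt."""
    name: str
    system_prompt: str

    def run(self, task: str) -> str:
        return call_llm(f"{self.system_prompt}\n\nTask: {task}")

# Each agent specializes in one thing, like a microservice.
AGENTS: Dict[str, Agent] = {
    "knowledge": Agent("knowledge", "Answer employee questions using only the HR knowledge base."),
    "tagging": Agent("tagging", "Assign the correct HR category tags to the incoming document."),
    "docs": Agent("docs", "Draft or update internal HR process documentation."),
}

def route(task: str) -> str:
    """A router agent decides which specialist handles the request."""
    choice = call_llm(
        "Reply with exactly one of: knowledge, tagging, docs.\n"
        f"Which specialist should handle this task?\n{task}"
    ).strip().lower()
    return AGENTS.get(choice, AGENTS["knowledge"]).run(task)  # default to Q&A

if __name__ == "__main__":
    print(route("How do I request parental leave?"))
```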

On the positive side, the AI agents can handle a lot of questions autonomously, so there’s a business benefit there. “And we’re finding things that aren’t correctly documented, so it helps us make the processes better,” he adds.

Trust but verify

Safety was a cornerstone of AI agent development from day one. In fact, one of the first agentic frameworks was BabyAGI, released in early 2023, which combined ChatGPT with a Pinecone vector database for memory and LangChain for orchestration. The developer who created it jokingly asked it to create as many paperclips as possible, a reference to the hypothetical paperclip apocalypse caused by an unchecked AI, and the system immediately recognized the potential for problems and began by generating a safety protocol for itself. But most agentic AI developers aren’t willing to put that much faith in the AI.

In a November LangChain survey of over 1,300 professionals, 55% of respondents said that tracing and observability tools are a must-have control for AI agents, giving them visibility into agent behavior and performance. In addition, 44% had guardrails in place, and 40% used offline evaluation.
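
None of those controls requires a particular framework. Below is a minimal sketch of all three in plain Python; the logging format, the BLOCKED_TERMS policy, and the tiny evaluation set are illustrative assumptions, not the survey respondents’ actual tooling.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-trace")

def traced(fn):
    """Tracing/observability: record every agent step with inputs, outputs, latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        log.info(json.dumps({
            "step": fn.__name__,
            "input": str(args)[:200],
            "output": str(result)[:200],
            "seconds": round(time.time() - start, 3),
        }))
        return result
    return wrapper

BLOCKED_TERMS = {"salary_table", "ssn"}  # illustrative guardrail policy

def guardrail(answer: str) -> str:
    """Guardrails: block outputs that touch restricted data."""
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        return "I can't share that information. Please contact HR directly."
    return answer

@traced
def answer_question(question: str) -> str:
    raw = f"draft answer to: {question}"  # stand-in for the real agent call
    return guardrail(raw)

# Offline evaluation: replay a fixed test set and score the agent before release.
EVAL_SET = [("How many vacation days do I get?", "vacation")]
passed = sum(1 for q, expected in EVAL_SET if expected in answer_question(q).lower())
print(f"offline eval: {passed}/{len(EVAL_SET)} passed")
```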

“AI models are risky and make all kinds of mistakes,” says Virginia Dignum, chair of the technology policy council at the Association for Computing Machinery, and professor at Sweden’s Umeå University.

But it’s possible to create systems to catch mistakes, she says, so if an agent isn’t able to accomplish a task, it would admit it failed instead of trying to make something up.
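
The article doesn’t specify what such a system looks like; one common pattern is to ground the agent in retrieved sources and have it verify its own draft before answering, refusing when verification fails. The sketch below assumes hypothetical retrieve_sources and call_llm helpers and is only an illustration of that pattern.

```python
from typing import List

def retrieve_sources(task: str) -> List[str]:
    # Stand-in for a document or database lookup.
    return []

def call_llm(prompt: str) -> str:
    # Stand-in for the model call.
    return ""

def answer_or_admit(task: str) -> str:
    """Answer only when the draft is supported by sources; otherwise admit failure."""
    sources = retrieve_sources(task)
    if not sources:
        return "I couldn't find reliable information for this task."
    draft = call_llm(f"Answer using only these sources:\n{sources}\n\nTask: {task}")
    # Second pass: ask the model whether the draft is fully supported.
    verdict = call_llm(
        "Is this answer fully supported by the sources? Reply yes or no.\n"
        f"Answer: {draft}\nSources: {sources}"
    )
    if "yes" not in verdict.lower():
        return "I wasn't able to complete this task reliably."
    return draft
```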

“There’s a lot of research on this and it’s there in theory,” she says. “But as far as I know, there’s not really a suitable agentic interface out there. And once you start to develop these systems, you’ll need to deal with the consequences and what happens if one of them does the wrong thing.”

That means there’s a need for governance and regulation. And agentic frameworks don’t just need to deal with the practical and business implications of possible AI mistakes, but legal implications as well.

“If those aren’t solved, then I don’t think there’ll be much use for enterprise agents,” she says.

Then there’s one more risk enterprises need to deal with when deploying AI agents: disruption and negative outcomes caused by the scale of automation the technology makes possible. The change management process is very important when deploying these systems, says Pushpa Ramachandran, VP and global head of AI at Wipro. “This is where I see a lot of customers take a bit more time,” he says. And taking that extra time up front means a company can go farther in the long run. “The ones who are thoughtful about the change management process can scale faster,” he says.
