The recently observed phenomenon of ‘doomprompting’ — endless re-prompting of LLMs and AI agents in pursuit of better results — can lead to poor outcomes and huge costs.

Many AI users have developed a healthy distrust of the technology’s outputs, but some experts see an emerging trend of taking the skepticism too far, resulting in near-endless tinkering with the results.
This newly observed phenomenon, dubbed “doomprompting,” is related to the behavior of doomscrolling, when internet users can’t tear themselves away from social media or negative news stories on their screens.
There’s a difference in impact, however. Doomscrolling may waste a couple of hours between dinner and bedtime and lead to a pessimistic view of the world, but doomprompting can lead to huge organizational expenses, with employees wasting significant time and resources as they try to perfect AI outputs.
Designed for conversation loops
The problem of excessive tinkering with IT systems or code isn’t new, but AI brings its own challenges, some experts say. Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt.
AIs like ChatGPT often suggest what to do next when they respond to a prompt, notes Brad Micklea, CEO and cofounder at secure AI development firm Jozu.
“At best, this is designed to improve the response based on the limited information that ChatGPT has; at the most nefarious it’s designed to get the user addicted to using ChatGPT,” he says. “The user can ignore it, and often should, but just like doomscrolling, that is harder than just capitulating.”
The problem is exacerbated in an IT team setting because many engineers have a tendency to tinker, adds Carson Farmer, CTO and cofounder at agent testing service provider Recall.
“When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’”
The problem often arises when the project lacks a clear definition of what a good result looks like, he adds.
“Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.”
Agents of doom
Observers see two versions of doomprompting. The first is an individual’s interactions with an LLM or another AI tool. This scenario can play out in a nonwork situation, but it can also happen during office hours, with an employee repeatedly tweaking the output of, for example, an AI-generated email, line of code, or research query.
The second type of doomprompting is emerging as organizations adopt AI agents, says Jayesh Govindarajan, executive vice president of AI at Salesforce. In this scenario, an IT team continuously tweaks an agent to find minor improvements in its output.
As AI agents become more sophisticated, Govindarajan sees a temptation for IT teams to continuously strive for better and better results. He acknowledges that there’s often a fine line between a healthy mistrust of AI outputs and the need to declare something “good enough.”
“In the first generation of generative AI services and systems, there was this craftsmanship in writing the right prompt to coax the system to generate the right output under many different contexts,” he says. “Then the whole agentic movement started, and we’ve taken the very same technology that we were using to write emails and put it on steroids to orchestrate actions.”
Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says.
“The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting each other,” he adds. “It comes at the sacrifice of system intelligence.”
Clear goals needed
Like Govindarajan, Recall’s Farmer sees a tension between a useful skepticism about AI outputs and endless fixes. The solution to the problem is setting the appropriate expectations and putting up guardrails ahead of time, Farmer says, so that IT teams can recognize results that are good enough.
A strong requirements document for the AI project should articulate who the audience is for the content, what the goals are, what constraints are in place, and what success looks like, adds Jozu’s Micklea.
“If you start using AI without a clear plan and without a good understanding of what the task’s definition of done is, you’re more likely to get sucked into just following ChatGPT’s suggestions for what comes next,” he says. “It’s important to remember that ChatGPT’s suggestions aren’t made with an understanding of your end goals — they’re just one of several logical next steps that could come.”
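In practice, a “definition of done” can be made concrete by encoding the acceptance criteria and an iteration budget directly into the prompting workflow, so the loop stops instead of spinning indefinitely. The following is a minimal sketch of that idea; the call_llm and meets_criteria functions are hypothetical placeholders for whatever model API and evaluation checks a team actually uses, not a specific vendor’s tooling.

```python
# Illustrative sketch: a bounded refinement loop with an explicit "definition of done".
# call_llm() and meets_criteria() are hypothetical stand-ins, not a real vendor API.

MAX_REVISIONS = 3  # iteration budget agreed on before the work starts

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whatever model or agent the team uses."""
    raise NotImplementedError

def meets_criteria(output: str, criteria: list[str]) -> bool:
    """Placeholder check against the project's written acceptance criteria."""
    return all(item.lower() in output.lower() for item in criteria)  # naive example check

def bounded_refine(task: str, criteria: list[str]) -> str:
    output = call_llm(task)
    for _ in range(MAX_REVISIONS):
        if meets_criteria(output, criteria):
            return output  # good enough: stop here rather than keep tinkering
        output = call_llm(f"{task}\n\nRevise the answer to satisfy: {', '.join(criteria)}")
    return output  # budget spent: hand off to human review instead of re-prompting
```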
Farmer’s IT team has also found success in running multiple agents to solve the same problem, a kind of survival-of-the-fittest experiment.
“Rather than doomprompting to try to solve an issue, just let five agents tackle it, and merge their results and pick the best one,” he says. “The problem with doomprompting is it costs more and wastes time. If you are going to spend the tokens anyway, do it in a way that saves you time.”
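Farmer’s approach resembles a simple best-of-n pattern: dispatch the same task to several independent agent runs, score each result against the same predefined success criteria, and keep the winner. The sketch below is only an illustration of that pattern; run_agent and score_output are hypothetical placeholders, not Recall’s actual tooling.

```python
# Illustrative best-of-n sketch: several agents attempt the same task once,
# and the highest-scoring result wins, instead of re-prompting a single agent.
# run_agent() and score_output() are hypothetical placeholders.

from concurrent.futures import ThreadPoolExecutor

N_AGENTS = 5

def run_agent(task: str, seed: int) -> str:
    """Placeholder for one independent agent run on the task."""
    raise NotImplementedError

def score_output(output: str) -> float:
    """Placeholder scoring against the project's predefined success criteria."""
    raise NotImplementedError

def best_of_n(task: str, n: int = N_AGENTS) -> str:
    # Run the n attempts in parallel so the extra token spend costs time only once.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda i: run_agent(task, seed=i), range(n)))
    return max(candidates, key=score_output)  # pick the best result, then stop
```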
IT teams should treat AI agents like junior employees, Farmer recommends. “Give them clear goals and constraints, let them do their job, and then come back and evaluate it,” he says. “We don’t want engineering managers involved in every step of the way, because this leads to suboptimal outcomes and doomprompting.”