If predictions are right, AI brokers can quickly do “actual” work, akin to adjusting promoting budgets, updating product listings, and authorizing refunds.
However is there a safety danger? Earlier than it may possibly delegate that stage of management, a enterprise should make sure the agent will behave predictably and safely.
That concern helps clarify why OpenAI has introduced plans to accumulate Promptfoo, a startup that develops instruments for testing and securing synthetic intelligence purposes.
OpenAI’s plan to accumulate Promptfoo might sign how enterprise AI techniques take a look at for immediate vulnerabilities.
Testing AI Methods
Promptfoo started as an open-source framework for builders to judge prompts and AI responses. The platform developed right into a testing setting, enabling engineers to run 1000’s of simulated AI interactions earlier than releasing an utility or agent.
These assessments can expose weaknesses, together with:
- Alternatives for immediate injection assaults,
- Brokers utilizing instruments in unsafe methods,
- Unintended API calls,
- Information leakage via responses.
Promptfoo is akin to an AI quality-assurance framework. Conventional software program testing verifies code with identified outcomes. But AI techniques behave in a different way. Builders want instruments that may probe many attainable inputs and edge circumstances. Promptfoo automates that course of.
AI Brokers
The Promptfoo acquisition additionally implies a shift in how corporations deploy AI brokers and purposes.
Enterprise deployments up to now have targeted on chatbots and information assistants. Many depend on retrieval-augmented era, by which fashions reply questions by retrieving info from a database.
Extra just lately, builders have begun constructing AI brokers that may plan duties, name exterior instruments, and execute multi-step workflows. Examples embody:
- Analyze promoting efficiency and alter marketing campaign budgets,
- Handle customer-service workflows,
- Replace product listings or pricing,
- Run advertising and marketing or analytics queries.
The brokers work together straight with CRMs, stock databases, and ecommerce platforms. That functionality expands what an AI agent can do. It additionally will increase the dangers.
Business Shift
OpenAI’s acquisition is just not the one sign that AI brokers are more and more distinguished, or that companies should give attention to AI safety.
Meta just lately acquired Moltbook, a social community of types for autonomous AI brokers. The corporate’s know-how allows brokers to speak and coordinate via a shared system.
Moltbook is an early take a look at how AI brokers talk.
Taken collectively, the actions of OpenAI and Meta spotlight completely different elements of the rising agent ecosystem.
Meta’s acquisition focuses on enabling AI brokers to work together with each other, whereas OpenAI’s addresses their habits and security.
The mixture suggests that enormous tech corporations anticipate software program brokers that work together with people and different brokers.
Safety
An AI chatbot that produces an incorrect reply is often an inconvenience‚ a hallucination.
An AI agent with system entry can create actual issues. From a prompt-injection assault, for instance, an agent may:
- Share delicate buyer info,
- Set off unauthorized or fraudulent refunds,
- Modify pricing or stock,
- Expose proprietary information to different brokers.
Companies, due to this fact, want guardrails that stop manipulation and unpredictability.
Promptfoo seems to supply that functionality. By integrating testing instruments straight into its enterprise AI platform, OpenAI will help builders determine vulnerabilities earlier than deploying brokers in manufacturing environments.
Fraud
Safety extends past inner techniques to incorporate fraud prevention.
Jeff Otto, chief advertising and marketing officer at Riskified, a fraud-prevention platform, stated the rise of AI brokers may create software program techniques that work together with each other (just like Moltbook).
“Meta’s determination to accommodate a social community for AI brokers inside Superintelligence Labs is a powerful sign that agentic commerce is transferring from idea to actuality,” Otto stated. “Moltbook’s brokers had been constructed on the OpenClaw framework, which allows autonomous brokers to work together, coordinate, and probably transact on behalf of human customers.”
If that imaginative and prescient develops, Otto stated, ecommerce fraud detection might want to evolve as effectively.
“That shift units the stage for a high-stakes machine-versus-machine setting,” he stated. “For retailers, the standard rules-based fraud playbook is not ample. When bots are those clicking ‘purchase,’ retailers want a protection layer that may distinguish between a reliable AI assistant and a malicious agent in milliseconds.”
Agentic Commerce
With their agent-related acquisitions, OpenAI and Meta are presumably planning for what’s subsequent.
If that future consists of agentic commerce, retailers should take into account an setting by which software program brokers — not simply people — do the procuring.

