What if you could build AI agents in hours instead of months? OutSystems ODC's Agentic AI Workbench makes it possible, but here's the catch most developers miss:

★ The "Context Gap" is Real: Speed alone won't fix bad data. AI agents need a quality Data Fabric; without proper ODC entities and clean APIs, expect hallucinations and irrelevant answers.

★ Security Beyond the Prompt: Building a bot is easy, but building a secure agent is harder. Configure the Agentic Workbench with the same granular Permission Sets and multi-tenant security headers as the rest of your ODC ecosystem to prevent unauthorized data exposure.

★ The Governance Bottleneck: Quick deployment causes "Agent Sprawl." Without a clear plan for how agents fit into your workflows, you risk hard-to-monitor silos that cost more to maintain.

★ Token Management vs. User Experience: The Workbench simplifies LLM connections, but developers often miss latency and cost issues. Production agents need token optimization and prompt engineering to stay fast and affordable at scale.

Anyone else working with agentic AI in their low-code platform projects? Would love to hear how you're approaching it.
Hi @AD-OS,
That is where human involvement comes in. Without proper configuration and hands-on experience, the code or the agent being developed will be rough.
For the first part, “The Context Gap is Real”, we are referring to the AI agent's configuration. For example, suppose I develop an agent but don't assign a temperature that aligns with the business case, don't provide valid grounding data, or leave the response length (max tokens) unlimited. In my opinion, the right max tokens value also depends on the business case; leaving it open can drive up cost and, as you mentioned, encourage hallucinations.
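As a rough illustration (not the Workbench's actual API; the field names `temperature`, `max_tokens`, and `grounding_sources` are hypothetical), the guardrails above can be sketched as a small pre-deployment config check:

```python
# Hypothetical agent-configuration check. Field names and the temperature
# range are illustrative assumptions, not ODC's real settings.

def validate_agent_config(config: dict) -> list[str]:
    """Return a list of warnings for risky agent settings."""
    warnings = []
    temperature = config.get("temperature")
    if temperature is None or not 0.0 <= temperature <= 1.0:
        warnings.append("temperature missing or outside [0, 1]")
    if config.get("max_tokens") is None:
        warnings.append("max_tokens unlimited: cost and latency can grow unchecked")
    if not config.get("grounding_sources"):
        warnings.append("no grounding data: expect hallucinations")
    return warnings


# A config with all three guardrails in place raises no warnings.
safe = {"temperature": 0.2, "max_tokens": 512, "grounding_sources": ["product_kb"]}
print(validate_agent_config(safe))  # []
```

The point is only that each risk the reply names (temperature, response length, grounding) is cheap to verify before the agent ever reaches users.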
The security part is also crucial. Giving a single agent broad permissions to sensitive data is not ideal, because it then has access to the entire system; even a tiny error rate could cause huge damage. That's why we split a single agent into multiple agents, each handling its own part with only the permissions it needs.
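The split-and-scope idea can be sketched as a deny-by-default permission table (agent names and permission strings here are invented for illustration, not ODC Permission Sets):

```python
# Illustrative least-privilege check: each agent is scoped to a small
# permission set instead of one agent holding everything.
AGENT_SCOPES = {
    "billing_agent": {"read:invoices"},
    "support_agent": {"read:tickets", "write:tickets"},
}

def can_access(agent: str, permission: str) -> bool:
    """Deny by default; allow only permissions explicitly granted to the agent."""
    return permission in AGENT_SCOPES.get(agent, set())


print(can_access("billing_agent", "read:invoices"))  # True
print(can_access("billing_agent", "read:tickets"))   # False: out of scope
```

With this shape, a compromised or misbehaving billing agent simply cannot touch ticket data, which is the damage-limiting property the reply is after.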
For the third part, “Agent Sprawl”, I assume you mean creating many agents that become hard to monitor. This requires understanding the agent patterns described in the documentation: you select the one that suits your case and build your app accordingly. For instance, I wanted to create a supervisor agent that manages lower-level agents, where each one does its part. That requires proper planning and a deliberate implementation of the pattern.
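A minimal sketch of that supervisor pattern, assuming a simple intent-to-worker routing table (all names are hypothetical, and real workers would of course call LLM-backed agents rather than plain functions):

```python
# Supervisor/worker sketch: one coordinator routes each task to a
# specialised worker agent and escalates anything it cannot route.

def order_worker(task: str) -> str:
    return f"order handled: {task}"

def refund_worker(task: str) -> str:
    return f"refund handled: {task}"

WORKERS = {"order": order_worker, "refund": refund_worker}

def supervisor(intent: str, task: str) -> str:
    """Dispatch to the matching worker, or fail explicitly instead of guessing."""
    worker = WORKERS.get(intent)
    if worker is None:
        return "unroutable: escalate to a human"
    return worker(task)


print(supervisor("order", "ship item #42"))  # order handled: ship item #42
```

The explicit routing table is what keeps sprawl monitorable: every agent is registered in one place, and unknown intents fail loudly instead of spawning ad-hoc behavior.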
Finally, regarding tokens and user experience: as mentioned before, max tokens controls the length of the response visible to the user. At the same time, the TokenUsage runtime parameter in the action-calling tab counts every token generated by the agent, including reasoning, looping, decisions, the generated response, and even user input. This affects cost and requires careful handling.
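Since total usage (not just the visible reply) drives the bill, a back-of-the-envelope cost estimate is worth wiring into monitoring. The per-1K-token prices below are placeholders, not any vendor's real pricing:

```python
# Rough cost estimate from token counts. Prices are placeholder
# assumptions; substitute your provider's actual rates.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_1k: float = 0.001,
                  price_out_per_1k: float = 0.002) -> float:
    """Dollar cost of one agent run, given input and output token counts."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k


# Reasoning loops inflate completion_tokens even when the visible answer
# is short, which is exactly why TokenUsage matters more than max tokens.
print(round(estimate_cost(1500, 4000), 4))
```

Feeding a parameter like TokenUsage into an estimate of this shape per run makes the latency/cost trade-off visible before it shows up on an invoice.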
In summary, most of this depends on the business case. Some scenarios require leaving max tokens open, such as a model generating thousands of lines of code, while a chatbot that responds with short messages benefits from limiting tokens to control cost and latency. However, some aspects, like security, should always be considered regardless of the business case.