Hi All,

I’m looking for a detailed and practical comparison between OutSystems Developer Cloud (ODC) and OutSystems 11 (O11) to make informed decisions about modernization and AI adoption.

1. ODC vs O11
What are the key benefits of ODC compared to O11? Why should a client choose ODC for new app development? In which cases is it recommended to migrate O11 apps to ODC?

2. AI & Agentic Path
What are the practical benefits of using AI Agents in ODC? In which real use cases should AI Agents be used? Why is the ODC + Agentic approach preferred?

3. AI Model Cost & Security
How is AI model cost calculated (token consumption, input size, etc.)? How do AI models process data in the backend? How is data security handled when AI agents access application data?

Looking for clear guidance, best practices, and real-world insights. Thanks!
Hello,
Regarding security, it mainly depends on how the system is configured. If you give an agent access to sensitive information, some level of risk always exists: even the best AI systems occasionally make mistakes, so careful handling and proper security controls are necessary.
Hi Ajit, happy to give a more detailed breakdown across all three areas.
1. ODC vs O11 — Architecture and When to Migrate
ODC is a fundamentally different runtime from O11, not just an upgrade.
ODC runtime: cloud-native architecture built on containers and Kubernetes (hosted on AWS), with stateless apps that scale automatically and deploy independently of one another.
O11 runtime: a traditional .NET stack running on virtual machine infrastructure, with deployments promoted through environments (Dev/Test/Prod) and scaling handled at the infrastructure level.
When to migrate:
New apps: start on ODC unless you have a hard dependency on something ODC doesn't yet support (some integration patterns, legacy connectors, or specific SAP/Salesforce accelerators that exist in O11 but not yet in ODC).
Existing O11 apps: migrate when the app would genuinely benefit from cloud-native auto-scaling, faster deployment pipelines, or the native AI capabilities covered below. Don't migrate just to migrate — ODC is newer, and some of the tooling and marketplace components you rely on in O11 may not have equivalent ODC versions yet.
2. AI Capabilities: ODC vs O11
This is where the gap is most significant.
What ODC has natively:
AI Agent Builder — visual tool inside ODC Studio. You build GenAI agents with an LLM backbone and optionally RAG. The agent compiles to a Server Action, callable from any workflow or screen like any other action.
AI Model Management — configured in ODC Portal > AI Connections. Supported providers: Azure OpenAI, OpenAI, Anthropic, AWS Bedrock, Databricks, Mistral, Gemini — plus custom connections for any provider that implements the ODC contract. API keys are stored as Secrets (AWS Secrets Manager, encrypted at rest, not visible after saving).
MCP (Model Context Protocol) — connect agents to external tools via MCP servers. Supports SSE streaming and service-to-service authentication. For MCP servers running in private networks, you use Private Gateway.
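To make the SSE piece concrete: a Server-Sent Events stream arrives as `data:` lines separated by blank lines. Below is a minimal, illustrative parser; the JSON payload shape is assumed for the example and is not ODC's or MCP's actual wire format.

```python
# Minimal sketch of parsing a Server-Sent Events (SSE) stream, the transport
# ODC supports for MCP responses. Payload shape here is hypothetical.

def parse_sse(raw: str):
    """Yield the data payload of each SSE event in a raw stream."""
    for block in raw.strip().split("\n\n"):      # events are blank-line separated
        for line in block.splitlines():
            if line.startswith("data:"):
                yield line[len("data:"):].strip()

# Example stream as a server might emit it token-by-token:
stream = 'data: {"delta": "Hel"}\n\ndata: {"delta": "lo"}\n\n'
events = list(parse_sse(stream))
print(events)  # two JSON fragments, one per event
```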
Agent Workbench — multi-agent orchestration layer. Agents can use tools: REST endpoints, Data Fabric entities, Service Actions, MCP servers. Supports RAG against external document stores. Agents can be embedded as activities inside workflows.
Mentor — AI digital worker covering the full SDLC. Includes App Generator (describe an app in natural language and get a scaffold), App Editor, and Code Reviews. GA since early 2025.
What O11 has natively:
None of the above. In O11 you consume external AI APIs via REST integrations, parse structured responses manually, and build all orchestration yourself with custom logic. It's possible — teams do it — but you're building the plumbing from scratch every time.
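To illustrate what that plumbing looks like in practice, here is a hedged Python sketch of what an O11 team effectively rebuilds for each provider: serializing an OpenAI-style chat payload and digging the answer out of the response envelope. In O11 proper this would be a Consume REST API with structure mappings; the endpoint contract shown is an assumption, not something the platform provides.

```python
# Sketch of the hand-built "plumbing" for calling an LLM over REST in O11.
# An OpenAI-style chat-completions contract is assumed here.
import json

def build_request(system: str, user: str, model: str = "gpt-4o") -> str:
    """Serialize the chat payload the provider expects."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    })

def parse_response(body: str) -> str:
    """Pull the assistant text out of the provider's JSON envelope."""
    data = json.loads(body)
    return data["choices"][0]["message"]["content"]

# A canned response, shaped like a typical chat-completions reply:
canned = json.dumps({"choices": [{"message": {"content": "Hello!"}}]})
print(parse_response(canned))  # Hello!
```

Multiply this by every provider, every retry policy, and every orchestration step, and the gap to ODC's native tooling becomes clear.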
Real use cases where this matters: customer-facing support agents answering from your own knowledge base (RAG), document-heavy workflows such as summarizing contracts or extracting fields from invoices, and internal copilots that take actions in your apps via Service Actions. In all of these, ODC gives you the orchestration, model management, and security plumbing out of the box.
3. AI Model Costs and Data Security
Costs:
Token-based billing. Every input token (user prompt + system instructions + retrieved context from RAG) and every output token (model response) is billed by the provider. Models that expose internal reasoning (like o1/o3) also bill for reasoning tokens. ODC's Agent Workbench shows token usage so you can monitor per-agent consumption. You configure the API key and billing is between you and the provider.
The cost variable that surprises people most is RAG context: if you retrieve 10 large document chunks per query, all of that goes into the input token count. Design your retrieval carefully.
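A back-of-the-envelope model makes the RAG effect visible. The prices below are placeholder values, not any provider's real rate card; check the current pricing before budgeting.

```python
# Rough cost model for token billing. Prices are placeholder values
# per 1M tokens (assumed, not a real rate card).
PRICE_IN_PER_M = 2.50    # USD per 1M input tokens
PRICE_OUT_PER_M = 10.00  # USD per 1M output tokens

def query_cost(prompt_tokens: int, rag_chunks: int,
               tokens_per_chunk: int, output_tokens: int) -> float:
    """Cost of one query: retrieved RAG context counts as input tokens."""
    input_tokens = prompt_tokens + rag_chunks * tokens_per_chunk
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# Same question, with and without 10 large retrieved chunks:
lean = query_cost(prompt_tokens=300, rag_chunks=0,
                  tokens_per_chunk=0, output_tokens=500)
heavy = query_cost(prompt_tokens=300, rag_chunks=10,
                   tokens_per_chunk=1500, output_tokens=500)
print(round(lean, 5), round(heavy, 5))  # 0.00575 0.04325
```

At these assumed prices, the retrieved context makes the same question roughly 7–8x more expensive, which is why retrieval design dominates the bill at scale.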
Security: API keys live in ODC's secret store (AWS Secrets Manager, encrypted at rest, not visible after saving), traffic to providers goes over TLS, and agents reaching systems in private networks go through Private Gateway.
Important caveat that gets glossed over:
When an agent retrieves sensitive data (customer records, medical data, financial data) to answer a question, that data is sent to the LLM provider as part of the prompt. The platform secures the API key and the transport — but the data itself goes to the provider you configured. If your data is sensitive, choose your provider accordingly. Azure OpenAI gives you the most control (your Azure tenant, enterprise data processing terms). Generic OpenAI is fine for many use cases but check your data processing agreements before using it with regulated data.
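One mitigation worth considering is redacting obvious PII before records enter the prompt. The regex pass below is an illustrative sketch only: it is not an ODC feature and not a substitute for a proper DLP/redaction service or provider-level data processing terms.

```python
# Illustrative guardrail: mask obvious PII before sensitive records
# are placed into an LLM prompt. Patterns here are deliberately simple.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Patient John, SSN 123-45-6789, contact john@example.com"
print(redact(record))  # Patient John, SSN [SSN], contact [EMAIL]
```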
Worth keeping an eye on the March 31 event from Lisbon — OutSystems is teasing a major announcement around AI-powered development that could further widen the ODC advantage.
Happy to dive deeper into any of these areas.