Hi everyone,
One of the biggest friction points I've encountered while building Agentic AI apps in ODC is the Compilation Tax. Every time we needed to tweak a system prompt, whether to refine the tone, fix a hallucination, or adjust a rule, we had to republish the entire application.
I wanted to share an architectural pattern I've been using to solve this by treating prompts as Content rather than Code.
The Concept: A Headless Prompt Assembler
Instead of hardcoding prompts in logic or using the basic builder configuration, I built a standalone Prompt Assembler ODC App. It decouples the prompt lifecycle from the code lifecycle using a headless architecture:
Consumer Apps request prompts via Service Actions (passing only data keys, not instructions).
The Assembler constructs the prompt dynamically using a custom Regex-based injection engine.
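To make the injection step concrete, here is a minimal sketch of what a regex-based injection engine can look like. This is not the article's actual implementation; the `{{placeholder}}` syntax and function names are my own assumptions for illustration:

```python
import re

# Assumed placeholder syntax: {{variable_name}} (not from the article)
PLACEHOLDER = re.compile(r"\{\{(\w+)\}\}")

def inject_variables(template: str, values: dict) -> str:
    """Replace each {{key}} in the template with its value; fail loudly on unknown keys."""
    def replace(match):
        key = match.group(1)
        if key not in values:
            # Failing fast is safer than silently shipping a prompt with a hole in it
            raise KeyError(f"No value supplied for placeholder '{key}'")
        return str(values[key])
    return PLACEHOLDER.sub(replace, template)

# Example: the consumer app passes only data keys; the template lives in the Assembler
prompt = inject_variables(
    "You are a {{tone}} assistant. Answer questions about {{topic}}.",
    {"tone": "friendly", "topic": "ODC deployments"},
)
```

The key design point is that the consumer never sees the template text, so editing the template in the Assembler changes Production behavior without touching any consumer app.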
The Benefit: We can update prompts in Production instantly without a deployment pipeline, while still maintaining strict governance and versioning.
I wrote a detailed breakdown of the architecture, including:
How to handle JSON Output Contracts (the biggest risk with this pattern).
The Regex logic for dynamic variable injection.
Governance strategies for locking prompts in Production.
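On the JSON Output Contract risk: when prompts become editable content, an editor can accidentally change the output format the consuming code depends on. One way to mitigate this (a generic sketch, not the article's approach; field names are hypothetical) is to validate every model reply against the contract before handing it to the consumer:

```python
import json

# Hypothetical contract: the fields the consumer app expects from the model
REQUIRED_FIELDS = {"answer": str, "confidence": float}

def validate_contract(raw: str) -> dict:
    """Parse a model reply and verify the required fields and types are present."""
    data = json.loads(raw)  # raises on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"Model output is missing required field '{field}'")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"Field '{field}' is not of type {expected_type.__name__}")
    return data
```

With a check like this, a prompt edit that breaks the contract fails visibly at the validation layer instead of corrupting downstream logic.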
If you are dealing with AI deployment bottlenecks in ODC, you might find this pattern useful:
Read the full architectural breakdown here: https://medium.com/@michael.de.guzman/stop-hardcoding-prompts-in-odc-19920bd6935f
I'm curious if anyone else has tackled Zero-Deployment updates for AI in ODC yet?
Very well written article, @Michael de Guzman. I am looking forward to seeing your application on Forge soon!
Nice design. Will try it soon