Hello,
I am very new to ODC and trying to understand how it all works. When creating an agentic app or using Agent Builder, how do I get the AI to analyse uploaded files that are temporarily stored in a local variable? The CallAgent action only accepts text as input. I get the desired result in the Agent Playground, since it lets me upload files, but how do I get the same functionality in the app that uses this agent?
I couldn't find any tutorials or forum posts about this.
Any help is appreciated, thanks.
Hi Alexandra,
Are you using the AI Agent Builder app? If so, I suggest building an agentic app instead and exploring it, since OutSystems has now opened that functionality until the end of the year. In an agentic app, you can pass files directly to the agents.
You can check the documentation here:
https://success.outsystems.com/documentation/outsystems_developer_cloud/building_apps/build_ai_powered_apps/agentic_apps_in_odc/
https://success.outsystems.com/documentation/outsystems_developer_cloud/building_apps/build_ai_powered_apps/agentic_apps_in_odc/image_input_for_ai_models/
Hi Ana Sofia, thank you for your reply!
Yes, I was trying with AI Agent Builder, as agentic apps seem really complicated coming from OutSystems 11.
Thank you for the links. Do you happen to know where to find actual tutorials rather than documentation? It's been a struggle.
In my opinion, agentic apps are the best approach right now. They give you much more flexibility to explore agentic implementations.
You can follow this guided path for a clear walkthrough:
https://learn.outsystems.com/training/journeys/build-agentic-powered-app-3411
Also, check OutSystems’ YouTube videos introducing the Agent Workbench:
https://www.youtube.com/watch?v=2JtGN6OCfjs&list=PLxALhSwsaivy1afIHT5TElTJJs9OMHNcX
Good luck!
Thank you again. I watched this video https://www.youtube.com/watch?v=Q1auUDxIwRA&t=724s , which is why I was trying to do it with AI Agent Builder. I sort of managed it, but only in the Playground, as I couldn't implement the action from the app.
I'll have a go with the agentic apps approach; hopefully I'll figure it out eventually.
Hello, I have a solution here: for the agent to read as much as possible of the content you want to provide from a file, you need to build a pre-processing function. This function handles parsing the various file types from Base64, extracts all the content inside, and then feeds it to the model for analysis. With this approach, you can process a wide variety of file types and save tokens on each model call. I suggest you try the Document Intelligence tool, which has a trial mode you can experiment with.
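A minimal sketch of such a pre-processing step in Python, assuming the uploaded file reaches the server side as a Base64 string (the function name, the extension check, and the plain-text-only handling are illustrative; PDF or Word files would need a dedicated parser library):

```python
import base64


def extract_text_from_base64(b64_content: str, file_extension: str) -> str:
    """Decode a Base64-encoded upload and return its text content.

    Only plain-text formats are handled here; for PDF/Word you would
    branch to a dedicated extractor instead of raising.
    """
    raw_bytes = base64.b64decode(b64_content)
    if file_extension.lower() in (".txt", ".md", ".csv"):
        return raw_bytes.decode("utf-8", errors="replace")
    raise ValueError(f"No extractor configured for {file_extension}")


# Simulate an upload: the file's raw bytes arrive Base64-encoded.
encoded = base64.b64encode(b"Invoice total: 42 EUR").decode("ascii")
text = extract_text_from_base64(encoded, ".txt")
```

The extracted `text` is what you would then pass to the agent call, instead of the binary file itself.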
Hi, hope the solution below helps.
File Upload in the App
Use the File Upload widget to let users upload a document (PDF, Word, TXT, etc.).
The file is stored temporarily in a local variable or in OutSystems storage.
Extract File Content
You need to read the file content and convert it into text.
For PDFs/Word docs, you can use OutSystems Forge components (e.g., PDF Viewer, Word Utils) or integrate with external services (Azure Cognitive Services, OpenAI embeddings, etc.) to extract text.
For plain text files, you can directly read the content into a variable.
Pass Text to CallAgent
Once you have the text, feed it into the CallAgent action.
Example: CallAgent(Text: ExtractedFileContent)
The agent then analyzes the text as if it came from a user prompt.
Best Practice
Don’t push entire large files directly — instead, chunk the text (e.g., split into sections) and send it progressively.
This avoids hitting token limits and improves analysis quality.
If you need semantic search over documents, store embeddings in a vector DB (like Azure AI Search or Pinecone) and let the agent query relevant chunks.
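The chunking advice above can be sketched in Python. This is a simplified version using character counts as a rough proxy for tokens; a real implementation would count tokens with the model's tokenizer, and the size and overlap values are illustrative:

```python
def chunk_text(text: str, max_chars: int = 2000, overlap: int = 200) -> list:
    """Split long text into overlapping chunks.

    The overlap keeps sentences that straddle a chunk boundary from
    being cut off, at the cost of some duplicated characters.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks


# Each chunk would then go into its own CallAgent request, e.g.
# prefixed with "Part i of n of the document:".
document = "some long extracted text ... " * 500
parts = chunk_text(document)
```

Sending chunks one at a time keeps each call under the token limit, and the overlap preserves local context between consecutive calls.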
Thanks,
Saicharan