As AI and Large Language Model (LLM) integrations become more common in OutSystems applications, developers need better ways to monitor and manage how these services are used.
Currently, when calling APIs like OpenAI or Azure OpenAI, developers must manually track important metrics such as token usage, latency, errors, and estimated costs. This tracking is repetitive work and is usually implemented differently in each project.
This idea proposes a Forge component, an AI Observability Toolkit for OutSystems, that provides a standardized way to execute LLM calls while automatically capturing key metrics (see the sketch after this list), such as:
- Prompt and completion tokens
- Total token usage
- Response latency
- Model and provider used
- Estimated request cost
- Execution status and errors
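For illustration only: a Forge component would be built with visual logic or a C# external library, but the minimal Python sketch below shows the shape of the wrapper this idea describes, a single action that executes the call and returns both the response and the captured metrics. It assumes the OpenAI Python SDK; the price table, function name, and record layout are hypothetical choices for this sketch, not part of the proposal.

```python
import time
from dataclasses import dataclass
from openai import OpenAI

# Hypothetical per-1K-token prices for cost estimation; real pricing
# varies by model and provider and would be configurable in the toolkit.
PRICE_PER_1K = {"gpt-4o-mini": {"prompt": 0.00015, "completion": 0.0006}}

@dataclass
class LlmCallMetrics:
    provider: str
    model: str
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    latency_ms: float
    estimated_cost_usd: float
    status: str
    error: str | None = None

def observed_chat_call(client: OpenAI, model: str, messages: list):
    """Execute an LLM call and capture the metrics the toolkit would log."""
    start = time.perf_counter()
    try:
        response = client.chat.completions.create(model=model, messages=messages)
    except Exception as exc:
        # Failed calls are still recorded, with status and error detail.
        latency_ms = (time.perf_counter() - start) * 1000
        return None, LlmCallMetrics("openai", model, 0, 0, 0,
                                    latency_ms, 0.0, "error", str(exc))
    latency_ms = (time.perf_counter() - start) * 1000
    usage = response.usage  # prompt, completion, and total token counts
    prices = PRICE_PER_1K.get(model, {"prompt": 0.0, "completion": 0.0})
    cost = (usage.prompt_tokens / 1000) * prices["prompt"] \
         + (usage.completion_tokens / 1000) * prices["completion"]
    metrics = LlmCallMetrics("openai", model, usage.prompt_tokens,
                             usage.completion_tokens, usage.total_tokens,
                             latency_ms, cost, "success")
    return response, metrics
```

Because every call goes through the same wrapper, all projects capture identical fields, which is what makes a shared dashboard possible.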
The component could optionally persist this information for monitoring and include a ready-to-use Screen Template (AI Observability Dashboard) that developers can select when creating a new screen. The dashboard would display metrics such as AI request volume, token usage, latency trends, and estimated costs.
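In OutSystems, the stored records would be an entity queried with Aggregates; the short Python sketch below is only meant to illustrate the roll-up figures the dashboard would chart, assuming records are `LlmCallMetrics` instances from the sketch above. The function name and output keys are hypothetical.

```python
from statistics import mean

def dashboard_rollup(records: list) -> dict:
    """Aggregate stored call records into the figures the dashboard would chart.

    `records` are LlmCallMetrics instances captured by the wrapper above.
    """
    successes = [r for r in records if r.status == "success"]
    return {
        "request_volume": len(records),
        "error_count": len(records) - len(successes),
        "total_tokens": sum(r.total_tokens for r in records),
        "avg_latency_ms": mean(r.latency_ms for r in records) if records else 0.0,
        "estimated_cost_usd": sum(r.estimated_cost_usd for r in records),
    }
```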
This would let developers quickly add AI observability to any OutSystems application, helping teams control costs, troubleshoot issues, and manage AI-powered features.