Hello colleagues. I'm facing an issue using my Custom AI Model in ODC Studio.
I followed the documentation and exposed a custom LLM endpoint using the provided API contracts. It worked fine, and the custom LLM is now available for reuse across my application.
However, I'm now constantly getting 'malformed request' errors, which are not very self-explanatory. Has any colleague faced the same issue?
The endpoint used to trigger the custom LLM works; I can test it using API test tools, so that's not the error. Apparently, the action to call the custom LLM (generated automatically by ODC) doesn't even reach my API, as the error happens before that.
I have these custom LLMs set up for Gemini and GPT-4o, both libraries generated by ODC after setting up the custom AI model:
The parameters passed to the actions are like this:
The message
The request object
The 'Stop' and 'ExtraBody' parameters are left empty. They are optional, but I tried several values without success.
The error displayed in ODC logs is this:
"OS-BERT-40004 - REST (Expose) (modelErrors, Failed to parse JSON request content., Failed to parse JSON request content., Failed to parse JSON request content., Failed to parse JSON request content., Failed to parse JSON request content., Failed to parse JSON request content., Failed to parse JSON request content.)"
Sidenote: I know the nature of the BERT-40004 error, but I can't find its cause in this specific context.
The error can't be debugged, and there is no trace to check.
Does any colleague have an idea about the cause?
Thanks
For those with the same issue:
The Custom AI Model setup needs a middleware API to normalize the API contract.
What the documentation doesn't mention is that the middleware API will receive its input in different formats, so the endpoint must be ready to accept calls using a flexible JSON payload.
That flexible JSON format was the root cause of my issues: sometimes the 'content' parameter is sent as a string, sometimes as an array.
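To illustrate the kind of normalization the middleware needs to do, here is a minimal sketch in Python. It is not ODC-specific code; the exact part shapes (plain strings or objects with a 'text' field) are assumptions for illustration, so adjust them to whatever your endpoint actually receives.

```python
def normalize_content(message: dict) -> dict:
    """Return a copy of the message with 'content' always a single string.

    The caller may send 'content' either as a plain string or as an
    array of parts; downstream LLM APIs usually expect one consistent
    shape, so the middleware flattens arrays into a string.
    """
    content = message.get("content", "")
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, str):
                parts.append(part)
            elif isinstance(part, dict):
                # Part shape with a 'text' field is assumed here.
                parts.append(str(part.get("text", "")))
        content = " ".join(p for p in parts if p)
    return {**message, "content": str(content)}
```

Applying a function like this to each incoming message before forwarding the request to the real LLM endpoint avoids the parse failures caused by the two possible 'content' shapes.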
I hope this helps.
Hi @Lenon Manhães Villeth,
Are you using the OpenAI Spec LLM API?
Gemini is now supported natively by OutSystems, so a custom LLM integration is no longer required.