Hi everyone,
I’m facing an issue when uploading a file (~16MB) in OutSystems using a Server Action.
During the upload, the file seems to be converted to text when transferred from Front-End to Back-End, and the FE → BE transfer alone takes more than 20 seconds.
My questions:
Is this expected behavior in OutSystems when uploading files via Server Actions (serialization/encoding overhead)?
Or could this be caused by non-optimal implementation on my side?
Has anyone experienced similar performance issues with large file uploads? Any best practices or alternative approaches would be appreciated.
Thanks in advance!
Hi @Tuan Duong ,
This is a common issue.
If you're concerned about upload time, look into chunked file upload techniques.
If you're concerned about large files, look into S3 presigned URLs.
Hi Tuan
Uploading a single 16 MB file in one request is generally not recommended. In OutSystems, when a file is handled through variables or Server Actions, the binary content is serialized to Base64, which inflates the payload by roughly 33% over the original file size and impacts performance.
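The ~33% inflation is a property of Base64 itself (every 3 bytes become 4 characters), not anything OutSystems-specific. A quick Python check:

```python
import base64

# Simulate a 16 MB binary payload.
original = b"\x00" * (16 * 1024 * 1024)

# Base64-encode it, as happens when binary is serialized into a
# text-based payload for the FE -> BE transfer.
encoded = base64.b64encode(original)

overhead = len(encoded) / len(original)
print(f"original: {len(original):,} bytes")
print(f"encoded:  {len(encoded):,} bytes")
print(f"ratio:    {overhead:.2%}")  # roughly 133% of the original size
```

So a 16 MB file actually travels as about 21 MB of text, on top of the cost of encoding and decoding it.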
The recommended approach is to upload the file in smaller chunks and reassemble it on the server side. OutSystems Forge provides some components that support chunked uploads in O11, and these approaches are also applicable in ODC.
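The chunking idea itself is simple, independent of which Forge component implements it. A minimal Python sketch (function names like `split_into_chunks` are illustrative, not part of any Forge component):

```python
CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB per request keeps each payload small

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Client side: slice the file so each upload request stays small."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks: list[bytes]) -> bytes:
    """Server side: concatenate the chunks, in order, back into one file."""
    return b"".join(chunks)

# Round-trip check with a fake 16 MB file.
file_bytes = bytes(range(256)) * (16 * 1024 * 4)  # 16 MB of sample data
chunks = split_into_chunks(file_bytes)
assert reassemble(chunks) == file_bytes
print(f"{len(chunks)} chunks of up to {CHUNK_SIZE:,} bytes each")
```

Each chunk travels in its own small request, so no single request carries the full serialized payload.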
Storing files directly in the database is not considered best practice for long-term storage, but it can be acceptable for temporary persistence during processing workflows.
Some components use background workers to offload processing and free the main request thread, although the same logic can also be executed synchronously if required.
Please check this Forge component: simpleuploadworker
As an alternative, upload it directly to S3 or any other vendor. If you are interested in file handling overall including how to deal with very large files, please see this article series on ODC with AWS S3.
See also the OutSystems Upload widget docs, which show how file uploads are done and how the widget handles binary data.
Best, Miguel
Hi Tuan,
Not just OutSystems: AWS also limits upload file sizes by default.
AWS runs large infrastructure, but think about scale: one user uploading 16 MB is fine, but what about 100 users uploading 16 MB each at the same time? In any SaaS or PaaS system built on cloud services, you need to plan for that.
To address this, AWS provides multipart uploads: you upload the file in chunks, and AWS automatically combines them before further logic proceeds.
Hello @Tuan Duong ,
Yes, this behavior is quite common in OutSystems. When you pass files to a Server Action, the binary is serialized/encoded during the FE → BE transfer, which adds noticeable overhead, especially for large files like 16 MB. So 20+ seconds is not unusual.
It’s usually not your logic, but the way OutSystems handles the transfer.
Best practice
Instead of sending large files directly to a Server Action, use chunked uploads.
You can use one of these Forge components:
https://www.outsystems.com/forge/component-overview/9945/chunk-it-o11
https://www.outsystems.com/forge/component-overview/21467/tus-chunk-file-uploader-resumable-uploads-js-client-o11
It uploads the file in small parts (chunks), which:
Is much faster and more stable
Avoids large serialization overhead
Works better on slow networks
Prevents timeouts
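The resilience benefit comes from retrying only the chunk that failed, never re-sending the whole file. A small Python sketch of that retry loop (the failure simulation is contrived for illustration; a real client would issue one HTTP request per chunk):

```python
attempt_log: dict[int, int] = {}

def upload_chunk(index: int, chunk: bytes) -> bool:
    """Stand-in for one small upload request. To simulate a flaky
    network, every chunk's first attempt fails and the second succeeds."""
    attempt_log[index] = attempt_log.get(index, 0) + 1
    return attempt_log[index] > 1

def upload_with_retries(chunks: list[bytes], max_attempts: int = 5) -> None:
    """Retry each failed chunk independently; already-uploaded chunks
    are never re-sent, which is why chunking helps on slow networks."""
    for i, chunk in enumerate(chunks):
        for _ in range(max_attempts):
            if upload_chunk(i, chunk):
                break
        else:
            raise RuntimeError(f"chunk {i} failed after {max_attempts} attempts")

chunks = [b"x" * 1024] * 16  # 16 chunks of 1 KB for the demo
upload_with_retries(chunks)
print("all chunks uploaded")
```

With a single 16 MB request, any network hiccup forces the entire upload to restart; with chunks, only a small part is retried.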
Thanks
Regards
Gourav Shrivastava