30 Views · 12 Comments
Connection timeout
Question

Hi,
I'm facing a connection timeout error while opening a case record in the edit screen.

API body:
[{"instant":"2026-03-18T06:13:14.909Z","logType":"error","message":"The connection has timed out","moduleName":null,"stack":"CommunicationException: The connection has timed out\n    at c.onTimeout (https://stonegate-dev.outsystemsenterprise.com/PublicanOffboarding/scripts/OutSystems.js?RnlDcii3Xz75iIHHERIZtA:3:7427)\n    at https://stonegate-dev.outsystemsenterprise.com/PublicanOffboarding/scripts/OutSystems.js?RnlDcii3Xz75iIHHERIZtA:3:4761","extra":{"Client Runtime Packages":"client-runtime-core= 3.27.0;client-runtime-view-framework-react= 3.5.0;client-runtime-widgets= 3.10.0;"}}]


2025-12-22 13-50-43
Sherif El-Habibi
Champion

Hello @Ayushi Kumari,

I’m not fully aware of the logic implemented, but if this issue occurs while editing a record, the screen is likely fetching the data for that specific record along with its related entities.

There may be a list or related data retrieval that is causing the timeout. Overall, it would be best to review all data actions and aggregates used in this screen and identify any potential performance bottlenecks.

If possible, you can share more details about how the logic is implemented to help further.

2025-06-19 10-02-53
Ayushi Kumari

Hi @Sherif El-Habibi
I have 4 aggregates (3 at start, 1 on demand),
3 data actions (2 on demand, 1 at start),
and one Attachment table holding a maximum of 5 files per record, each up to 10 MB in size.

2025-12-22 13-50-43
Sherif El-Habibi
Champion

I’d rank them from lowest to highest risk of timeout as follows:

Aggregates (Low): Optimized at the database level by the platform and generally efficient. Unlikely to cause timeouts unless dealing with large datasets or unnecessary joins (such as including attachments).

Data Actions (Moderate): Include additional logic and may call other actions, which can increase execution time and make them more prone to delays.

Attachments (High): Handling large files and binary data increases payload size and response time, making them the most likely cause of timeouts, especially if loaded during screen initialization.
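To see why attachments dominate the other two, it helps to estimate the worst-case payload implied by the numbers posted above (5 files x 10 MB per record), plus the roughly 33% overhead that base64 encoding adds when binaries travel inside a JSON response. This is back-of-envelope arithmetic, not a measurement of the actual app:

```python
files_per_record = 5
max_file_mb = 10

raw_mb = files_per_record * max_file_mb  # 50 MB of raw binary data
base64_mb = raw_mb * 4 / 3               # ~66.7 MB once base64-encoded for JSON

print(raw_mb, round(base64_mb, 1))
```

Even a fraction of that moving on screen load dwarfs what any aggregate or data action returns, which is why deferring attachment content is usually the first optimization to try.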

So in summary, I would start by investigating the attachment handling and look into ways to optimize it.

2025-06-19 10-02-53
Ayushi Kumari

Hi @Sherif El-Habibi
Thank you for all the feedback.
I've reduced the server calls on attachment upload and delete.
I also replaced most of the aggregates with advanced SQL, which helped minimize the payload and load time; the error no longer appears.

Thank you
Ayushi

2024-10-05 13-30-20
Huy Hoang The

Can you share something about that logic? 

I think it's a client action timeout even though the server returned 200. 

2025-06-19 10-02-53
Ayushi Kumari

Hi @Huy Hoang The


I have 4 aggregates (3 at start, 1 on demand),
3 data actions (2 on demand, 1 at start),

and one Attachment table holding a maximum of 5 files per record, each up to 10 MB in size.

2024-10-05 13-30-20
Huy Hoang The

As Sherif mentioned:

Aggregate: Low. Data Action: Medium.

Attachment table: Very heavy, because you're looping through it to display the results in a table, which is very slow.

Additionally, remember that when OutSystems sends a request, it sends all local variables (including the attachment list).

Please check the Network tab carefully; I saw requests running anywhere from 50,000 ms to 150,000 ms (50 to 150 seconds), resulting in a timeout. Look there to find the request that is too large.

If you still can't find it, you can clone the screen and remove elements one by one to double-check.


Hope this helps!


2025-06-19 10-02-53
Ayushi Kumari

Hi @Huy Hoang The
I've reduced the server calls and replaced the aggregates with advanced SQL.
This helped minimize the screen load time.
Thank you for all the feedback.
Thank you
Ayushi

2024-10-05 13-30-20
Huy Hoang The

Nice! If it has solved the problem, please use "mark as solution".

2026-03-20 01-28-51
Saugat Biswas

Hi @Ayushi Kumari ,

I suspect the behaviour you’re seeing is influenced by the way the API is currently designed. At the moment, it appears to return up to five files per record, each potentially as large as 10 MB, which can quickly become expensive in terms of payload size and performance. 

From a design perspective, it’s usually more effective to separate metadata from content. When the UI only needs to show associated files, the API should ideally return just the file identifiers and document names, not the file content itself. This keeps the initial response lightweight and improves responsiveness. 

The actual file content can then be retrieved via a dedicated download API, invoked only when a user explicitly clicks a file link. This aligns better with user intent and avoids transferring large binaries unnecessarily. 

In cases where users need to download all associated files at once, a more scalable approach would be to expose an API that accepts a list of file IDs, assembles the files into a ZIP archive on the server, and returns that archive to the client. This keeps the client simple, reduces network chatter, and centralises file handling logic where it’s easier to manage and optimise. 
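The "download all" endpoint Saugat describes boils down to assembling the requested files into a ZIP archive server-side. As a language-neutral sketch (the function name and the `{filename: content}` input shape are illustrative, not part of any OutSystems API), the core of such an endpoint could look like:

```python
import io
import zipfile

def build_zip_archive(files):
    """Assemble a {filename: bytes} mapping into an in-memory
    ZIP archive and return the archive's raw bytes."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, content in files.items():
            archive.writestr(name, content)
    return buffer.getvalue()
```

The client then makes a single request with a list of file IDs and receives one binary response, instead of issuing one download request per attachment.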

Overall, this separation of concerns (metadata for listing, content for download) tends to result in a cleaner API, better performance, and a more predictable user experience.


Hope this helps.


Cheers

Saugat

2025-06-19 10-02-53
Ayushi Kumari

Hi @Saugat Biswas
Thank you for giving this totally different perspective.
I will surely try to implement these changes in my next release.

For now, I've used advanced SQL instead of aggregates, which helped.

Thank you
Ayushi

2026-03-20 01-28-51
Saugat Biswas

Hi @Ayushi Kumari ,

By using advanced SQL, I suspect you may have partially implemented what I suggested: if your query excludes the content field of the binary file, that alone makes your implementation efficient.
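The difference can be sketched with a plain SQL query against a hypothetical Attachment table (the schema below is illustrative; the real entity and attribute names in the app may differ). SQLite stands in for the real database here. The listing query selects only identifiers and names, so the 10 MB BLOB column never travels to the client:

```python
import sqlite3

# Hypothetical schema standing in for the app's Attachment entity.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Attachment (Id INTEGER PRIMARY KEY, CaseId INTEGER, "
    "FileName TEXT, Content BLOB)"
)
# A 10 MB file: zeroblob() allocates a blob of the given size.
conn.execute(
    "INSERT INTO Attachment VALUES (1, 42, 'invoice.pdf', zeroblob(10485760))"
)

# Listing query: metadata only, the Content BLOB is never selected.
rows = conn.execute(
    "SELECT Id, FileName FROM Attachment WHERE CaseId = ?", (42,)
).fetchall()
print(rows)
```

An aggregate that fetches the whole entity would drag the Content column along with every row; projecting it away in SQL is exactly the "metadata for listing" half of the design above.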

Cheers,

Saugat
