Default caching in aggregates?
24 Views, 3 Comments
Application Type: Reactive
Service Studio Version: 11.55.26 (Build 64206)

In a certain flow of my app I have the following:

The highlighted If simply checks whether the previous aggregate is empty and, if so, executes the action LoadingOrder_Complete.
This action was wrongly triggered, because the aggregate did have results. Using the OutSystems feature to extract the aggregate to SQL, I can check in my PROD sandbox that the aggregate should have had results when the error occurred:
If I look at the execution times of this aggregate, I see something interesting:

After the last timestamp we have here (12:44:50), the error occurred. All these logs represent the same user going through this wizard multiple times. We can see the user is exceptionally fast, taking an average of 4 seconds between tasks. And many of the executions of this aggregate have execution times of 0 ms.
For extra context, the module does not have any cache period set in minutes; we are using the default option, where values should not be cached.
Is it normal for an aggregate to execute in 0 ms? Does the platform perform some undisclosed caching when calling an aggregate within such a short span of time?

2016-04-22 00-29-45
Nuno Reis
 
MVP

Hello Maria.

The answer by Neo AI is quite spot on. The SQL engine has internal mechanisms to reduce duplicated effort: the query plan cache will notice that a specific request is frequent and will optimize it. This is out of OutSystems' control.

Can you give more details on why the data is wrong? Could the new records still be inside an uncommitted transaction?
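To make the uncommitted-transaction idea concrete, here is a minimal, hypothetical sketch (Python with SQLite standing in for the actual database; the `Task` table and column names are invented for illustration): rows written inside a transaction that has not yet committed are invisible to a query running on another connection, so an aggregate executed at that exact moment would legitimately return zero rows.

```python
import os
import sqlite3
import tempfile

# Hypothetical schema standing in for the warehouse tasks table.
path = os.path.join(tempfile.mkdtemp(), "warehouse.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE Task (Id INTEGER PRIMARY KEY, IsComplete INTEGER)")
writer.commit()

# The "bulk update" implicitly opens a transaction and does NOT commit yet.
writer.execute("INSERT INTO Task (IsComplete) VALUES (1)")

# A second connection plays the role of the aggregate's query.
reader = sqlite3.connect(path)
before = reader.execute(
    "SELECT COUNT(*) FROM Task WHERE IsComplete = 1").fetchone()[0]
print(before)   # the uncommitted row is not visible: count is 0

writer.commit()  # the bulk update finishes

after = reader.execute(
    "SELECT COUNT(*) FROM Task WHERE IsComplete = 1").fetchone()[0]
print(after)    # now the committed row is visible: count is 1
```

If the aggregate in the wizard ran in the gap between the bulk update starting and its commit, it would see the "old" state and the If would wrongly take the empty branch.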

2022-02-10 14-41-34
Maria Sbrancia

Sure!
This is a mobile app and the aggregate has Max. Records set to 10.

This app is used in a warehouse for all its operational processes. Here we are looking at the loading of goods, which only takes place after the associated picking is completed.

This user completed 15 picking tasks in a row, which is normal because he was using the same load carrier to pick all 15 SKUs; only when the load carrier is moved to the loading dock are all 15 tasks bulk-updated to completed and the stock transitions executed.

He started the loading of all 15 tasks right after dropping them; loading is a much simpler validation flow, as we can see from his execution timestamps. I know he did this very quickly, since the first loading task was completed 2 seconds after the picking ended.

He managed to complete 13 of the 15 tasks before the error, so even when he executed task nr. 13 he should still have had 3 results in the aggregate, yet it showed no results when the incorrect logic executed.
Hope this helps

2016-04-22 00-29-45
Nuno Reis
 
MVP

A warehouse sounds like a big table with a lot of indexes. I believe it is possible the data wasn't updated yet. If it is on-premises, you can have a DBA monitor it. On the cloud, talk with Support.
