Best solution of Timer
Application Type: Traditional Web

I have a timer action that processes huge amounts of data. After processing, the original data must be deleted to avoid it being processed again. However, due to the huge volume, the timeout is usually reached; the transaction is then aborted, so nothing gets processed while new data keeps piling up.

Kindly suggest the best approach from the options below:

  1. Create a batch of 500/1000/2000 rows on each timer run, and make sure to delete only that same data at the end.
  2. Loop through batches (1,000-5,000 rows), process each batch, and commit it. Keep processing new batches while tracking the elapsed time, and wake a new instance of the same timer if the timeout is about to be reached.
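The second option describes a pattern that can be sketched in plain Python. This is an illustration only, not OutSystems code: `fetch_batch`, `process`, `delete_batch`, `commit`, and `wake_timer` are hypothetical stand-ins for the aggregate, your processing action, CommitTransaction, and the Wake<Timer> action.

```python
import time

BATCH_SIZE = 1000           # tune between 1,000 and 5,000 as suggested
SAFE_RUN_SECONDS = 10 * 60  # stop well before the timer's actual timeout


def run_timer(fetch_batch, process, delete_batch, commit, wake_timer,
              now=time.monotonic):
    """Process data in batches, committing each batch so a timeout never
    rolls back completed work; re-wake the timer when time runs short."""
    started = now()
    while True:
        batch = fetch_batch(BATCH_SIZE)
        if not batch:
            return  # all data processed; no re-wake needed
        process(batch)
        delete_batch(batch)  # delete inside the same transaction...
        commit()             # ...so committed rows are never re-processed
        if now() - started >= SAFE_RUN_SECONDS:
            wake_timer()     # hand remaining work to a fresh timer run
            return
```

The key point is that the delete and the commit happen per batch, so an abort can only lose the current (uncommitted) batch, never already-processed data.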
2023-10-21 19-42-11
Tousif Khan
Champion
Solution

Hello

I think the 2nd option would work best. On large data sets you need to create a logic flow that updates your flags after committing a transaction; you can loop over 1,000-2,000 records at a time.

You have to create a variable of the DateTime data type, add 10 minutes to it, and then process the records.

Keep checking in your logic whether that time has passed; if it has, wake the timer.

This is just an example, but it can be extended based on your data.
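The deadline check described above can be sketched in Python (the platform equivalents would be AddMinutes/CurrDateTime and the Wake<Timer> action; the function names here are illustrative, not OutSystems built-ins):

```python
from datetime import datetime, timedelta


def make_deadline(minutes=10):
    """Record a cut-off some minutes from now, as suggested above."""
    return datetime.now() + timedelta(minutes=minutes)


def should_wake(deadline):
    """True once the cut-off has passed; the timer should then commit
    its work and wake a fresh instance of itself."""
    return datetime.now() >= deadline
```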

You can also check this link:

https://www.outsystems.com/training/lesson/1749/odc-2018-heavy-timers



Hope this helps

Thanks 

2020-09-15 13-07-23
Kilian Hekhuis
 
MVP
Solution

Hi Pradip,

In addition to the other two answers by Saurabh and Tousif, be sure not to commit too often, as it slows down processing. Also make sure the deletes of the processed records are within the same commit, so that after a commit you can be sure the processed data is committed and will not be processed again. Typically you'd commit every 100-1,000 records, depending on the processing speed (every 10 seconds or so). The only exception is when you communicate with an external system (e.g. via REST) and need your local database to reflect whatever was sent to the external system; in that case, commit after each record.

As for the differences between approaches 1 and 2: approach 1 is a fine option if both the processing time per record (or batch of records) is fairly constant and the infrastructure the timer runs on performs fairly consistently. Create a site property with the number of records to process per batch, and use that as the Max. Records of the aggregate fetching the data. You can then fine-tune the time the timer is allowed to run and the number of records per batch. I typically choose the number of records such that the timer runs about 10 minutes, and set the timeout to 20 so you have a bit of leeway.

Approach 2 is a good one if you do not have control over how long the timer will run, or the processing time is very variable. In that case you'd query the timer's timeout value from the Meta_Cyclic_Job system entity and make sure the timer only runs for a safe time below that limit, by comparing the start time with the current time, like Tousif described.
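A minimal sketch of that safe-margin idea, assuming the timeout has already been read (in minutes) from the timer's configuration; the 50% margin is an illustrative choice, not a platform default:

```python
def safe_run_seconds(timeout_minutes, margin=0.5):
    """Run for only a fraction of the configured timeout, leaving
    headroom for the last batch and the final commit."""
    return timeout_minutes * 60 * margin


def time_is_up(start, now, timeout_minutes):
    """True once the elapsed run time exceeds the safe limit."""
    return now - start >= safe_run_seconds(timeout_minutes)
```

With a 20-minute timeout this stops the run after 10 minutes, matching the leeway described above.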

In either case, it's a good idea to have a "kill switch" like Saurabh describes, in case you need to stop the timer from restarting itself, for whatever reason (especially useful during testing).

Saurabh Shivananda Prabhu Chimulkar
Solution

Hi Pradip,

In my opinion the 2nd approach is better, wherein you wake the timer again based on a threshold duration (x seconds before the actual timeout). As you are aware, the processed changes have to be committed, so when the timer wakes up, processing starts from where it ended earlier.

As part of good practice you can also consider adding checkpoints if there are multiple stages involved in processing the data, and a kill switch (a site property which, when set to False, ends the timer run) if you feel there is a need.


Regards,

Saurabh
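The kill switch Saurabh describes can be sketched like this (in OutSystems, `is_enabled` would be a site property read between batches; all names here are hypothetical stand-ins):

```python
def process_with_kill_switch(batches, process, commit, is_enabled):
    """Consult a flag between batches; if it has been switched off,
    stop cleanly after the last committed batch without re-waking."""
    done = 0
    for batch in batches:
        if not is_enabled():
            break  # kill switch flipped: stop, keeping committed work
        process(batch)
        commit()
        done += 1
    return done
```

Because the check happens between commits, flipping the switch never loses committed work; it just prevents the next batch (and the re-wake) from starting.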


