Batch Processing

I have the following requirement / problem statement:

  1. User(s) will be uploading Excel files (1-5 MB in size).
  2. There should be a "job" that processes the uploaded files:
    • if the Excel rows are valid, they will be inserted into a table;
    • if there is an error, an email will be sent or the row inserted into a bad-record table.

I have two solutions for processing the job.

  1. Timers - Create a timer that executes every 5 minutes and looks for any Excel sheet that is pending processing.
  2. Process - Create a process that is launched as soon as a new Excel sheet is created.

(Note: Assume that we have an entity "Job" and that a new record gets created as soon as an Excel sheet is uploaded. This table can be referenced by both the Timer and the Process. There will be a status flag on each record.)

I am sure both of the above will work, but thinking from the scalability and maintenance perspective I need advice on which one to go for, considering:

  • there will be a huge number of parallel uploads;
  • multiple front-end servers are involved.

The question here is: what would be the side effects of the Process (approach 2) if there are, say, 500 uploads happening from different users? Will all 500 get processed in parallel (assuming the front-end is a quad-core server)? How does the queuing work if they are not executed in parallel? What would be the impact on the user experience for other users accessing the website? And how do we retry if one of them fails (not due to application logic)?

If we go for the Timer approach with several front-end servers involved, I think parallel processing will happen - the timer will get executed on all the front-end servers (please correct me if I am wrong). All the front-end servers will access the Job table and find that there are Excel files to be processed. All the timers will execute at the same time, look for jobs whose status is "not processed", find them, and attempt to lock a record before processing the job. Some will fail because somebody has already locked that record, and will have to retry with another record. After a couple of retries, every front-end server will be processing a different Excel file in parallel.
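The lock-and-retry behaviour described above can be sketched in plain SQL terms: each front-end tries to claim a pending Job with a single atomic conditional UPDATE, and whoever wins the race gets a row count of 1 while everyone else moves on to the next record. This is a minimal illustration using SQLite, with the entity and attribute names (`Job`, `Status`) borrowed from the hypothetical "Job" entity in the question, not from any real schema:

```python
import sqlite3

def claim_next_job(conn):
    """Atomically claim one pending job; return its id, or None if none left."""
    cur = conn.execute("SELECT Id FROM Job WHERE Status = 'pending' ORDER BY Id")
    for (job_id,) in cur.fetchall():
        # The WHERE clause re-checks the status, so only one claimer can win.
        updated = conn.execute(
            "UPDATE Job SET Status = 'processing' "
            "WHERE Id = ? AND Status = 'pending'", (job_id,)
        ).rowcount
        conn.commit()
        if updated == 1:   # we won the race for this record
            return job_id
        # somebody else locked it first; retry with the next record
    return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Job (Id INTEGER PRIMARY KEY, Status TEXT)")
conn.executemany("INSERT INTO Job (Status) VALUES (?)", [("pending",), ("pending",)])
first = claim_next_job(conn)
second = claim_next_job(conn)
print(first, second)  # → 1 2 (two different jobs claimed)
```

Each "timer" calling `claim_next_job` in a loop gets a different record, which is exactly the contention-then-spread-out behaviour described in the question.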

Looking forward to some insights on this.


Going with BPT may be a bit faster than using timers, if your servers have enough capacity. If you have 500 concurrent uploads, though, they won't all be processed in parallel. The processes are still queued and handled by the scheduler, as with a timer; it just reacts faster. If a BPT activity fails for any reason, you can retry it manually from Service Center, although having to do that is a clear sign that you need to fix something in your code!

That being said, I usually go with timers in very high volume scenarios, because BPT generates too much extra activity in the database (creating and updating metadata on the processes) and produces many "trash" records that need to be cleaned up once the process ends. You can always use a WakeTimer action to start your timer when a file is uploaded, instead of waiting for it to run on schedule.

I agree with João, and usually use Timers, mainly because of those side effects on the database.

Also, in terms of maintenance and error catching, I prefer Timers over BPT.

Thanks João and Gonçalo for the insight.

In short, it is recommended to go for Timers instead of BPT.

By the way, is there any sort of peek lock available so that other receivers do not process the same job at the same time? This is for when multiple front-end servers are involved.

You can only have one instance of a given timer running per environment, no matter how many front-end servers you have. So the same job being picked twice is not really a problem.

Keep in mind that the timeout for a timer is 20 minutes (default), while for a BPT automatic activity it is 5 minutes (fixed).

Go for timers.

And you can always create, say, two timers - one that looks for odd ids and the other for even ids - to process the Excel files with some parallel activity.
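The odd/even split above is just id-modulo partitioning, which generalises to any number of timers. A minimal sketch (the function name and parameters are illustrative, not platform API):

```python
def jobs_for_timer(job_ids, timer_index, timer_count=2):
    """Return the subset of job ids that a given timer should process.

    Each of the timer_count timers filters the Job table by Id modulo
    timer_count, so their work sets never overlap.
    """
    return [j for j in job_ids if j % timer_count == timer_index]

ids = [1, 2, 3, 4, 5, 6, 7]
odd = jobs_for_timer(ids, 1)   # the "odd ids" timer
even = jobs_for_timer(ids, 0)  # the "even ids" timer
print(odd, even)  # → [1, 3, 5, 7] [2, 4, 6]
```

In practice each timer would put the modulo condition in its query's WHERE clause rather than filtering in memory; the point is only that the partitions are disjoint, so two timers can run on different front-ends without contending for the same jobs.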

Thanks J and Tim.

Yes, it is decided. It will be Timers.

The intention of having multiple front-ends is to distribute the load, as there will be a huge volume that one server alone may not be able to handle.

I am referring to the documentation, which says we can enable a timer on multiple front-end servers. So I am not sure about "one instance of a timer running per environment". Please correct me if I am wrong.

Sorry for the late reply, but I stumbled upon the necessary info by accident today.

Look at step 3:

3. The Web Service first checks whether the timer is already executing on any Front-end Server node. If not, the Is_Running_Since and Is_Running_By attributes of the Cyclic_Job_Shared entity are updated. This locks the timer from being executed on any other Front-end Server node.

Hi Hus,

Adding to what Tim mentions and links to, and in line with what J. suggests:

You can have multiple timers running in parallel, on multiple front-ends; that's not an issue. What is relevant for you is that any given timer will only be running on one front-end at a time. So if you define a single timer to handle all your load, it will not run on multiple front-ends in parallel; it will run on one front-end only, in parallel with any other timer defined in your environment.

Given that your workload is stored as Jobs in a table, you can have your Timer handle one Job (or batch of Jobs) at a time and monitor how long it has been running, so as to never time out and lose work; this way there will always be progress. Doing this typically relies on checking whether you are reaching the threshold of available execution time and, if so, stopping processing Jobs and calling the Wake<Timer> action again to make sure processing continues as soon as possible.
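The timeout-safe loop described above can be sketched roughly like this. `TIMEOUT_SECONDS`, `SAFETY_MARGIN`, and the `wake_timer` callback are illustrative stand-ins for the platform's 20-minute timer timeout and the Wake<Timer> action, not real API names:

```python
import time

TIMEOUT_SECONDS = 20 * 60   # default timer timeout mentioned in the thread
SAFETY_MARGIN = 60          # stop this long before the hard timeout

def run_timer(pending_jobs, process_job, wake_timer, now=time.monotonic):
    """Process jobs one at a time, stopping safely before the timeout.

    If the time budget is nearly exhausted, wake the timer again so a
    fresh execution picks up the remaining jobs - no work is lost.
    """
    started = now()
    while pending_jobs:
        if now() - started > TIMEOUT_SECONDS - SAFETY_MARGIN:
            wake_timer()            # continue in a fresh timer execution
            return "rescheduled"
        process_job(pending_jobs.pop(0))
    return "done"
```

The key design point is checking the elapsed time *before* starting each job, so the timer only commits to work it can finish inside the budget.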

With BPT, on the other hand, you can have multiple instances of the same process running in parallel on all configured front-ends. A new process instance can be started automatically by the creation of a new Job record, and the frequency of polling for BPT events is higher than for Timers. As downsides, João's concerns are relevant, along with the hard 5-minute timeout of a BPT automatic activity (which may be avoided by splitting your job-processing logic into more than one activity, depending on your particular case).