Transactions in the OutSystems Platform

Permalink: www.outsystems.com/goto/transactions-outsystems-platform

This post explains how transactions work in the OutSystems Platform. Please feel free to add your own comments or questions about this topic, and be sure to check the documentation.

Every time there is a request to the platform, a database transaction is started. This transaction is used throughout the entire request to fetch and store data in the database, including both user and system data. When the request ends, the transaction is closed: if everything goes well the transaction is committed; if there is an unhandled exception the transaction is rolled back.
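To make this lifecycle concrete, here is a minimal sketch in plain Python with SQLite standing in for the platform's database. The handle_request function is hypothetical, not a platform API; it only illustrates the one-transaction-per-request rule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Id INTEGER PRIMARY KEY, Item TEXT)")
conn.commit()

def handle_request(item, fail=False):
    """Hypothetical request handler: one transaction per request."""
    try:
        # The platform implicitly starts a transaction on first DB access.
        conn.execute("INSERT INTO Orders (Item) VALUES (?)", (item,))
        if fail:
            raise RuntimeError("unhandled exception during the request")
        conn.commit()    # request ended normally -> commit
    except Exception:
        conn.rollback()  # unhandled exception -> rollback
        raise
```

After handle_request("book") the row is durable; after a failing handle_request("pen", fail=True) the insert is rolled back and only "book" remains.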

In Service Studio terms this means that when the user clicks a button on a page, a transaction is started. The screen action associated with the button runs using that transaction, and every action called by the screen action also uses the same transaction. Furthermore, the preparation of the destination screen uses this same transaction as well. This is important because if an unhandled exception occurs in either the screen action or the preparation, the transaction is rolled back.

It might seem odd that an error in the preparation causes a rollback that affects the screen action, but from the user's perspective it makes sense. All the user knows is that there was an error; the user doesn't know where the error occurred. It would be stranger if, by clicking back and resubmitting the form, two instances were created in the database.

If you need to change this behavior in your application, there are some features that allow you to do so. There are two built-in actions that can be used to end the transaction at any time during an action: CommitTransaction and AbortTransaction. After either of these actions is called, subsequent accesses to the database start a new transaction.
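As a rough analogy in plain Python/SQLite (not platform code; here conn.commit() plays the role of CommitTransaction and conn.rollback() of AbortTransaction, and batch_import is a hypothetical action):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Log (Msg TEXT)")
conn.commit()

def batch_import(messages):
    for i, msg in enumerate(messages, start=1):
        conn.execute("INSERT INTO Log (Msg) VALUES (?)", (msg,))
        if i % 2 == 0:
            conn.commit()  # like CommitTransaction: the work so far is
                           # durable; the next DB access starts a new transaction
    conn.rollback()        # like AbortTransaction: discards only the
                           # uncommitted tail of the batch

batch_import(["a", "b", "c"])  # "a" and "b" survive; "c" is rolled back
```

This also shows why committing mid-action is a double-edged sword: once committed, those rows survive even if the rest of the action later fails.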

You can also avoid rollbacks when exceptions occur by creating an error handler for the exception and setting its "Abort Transaction" property to No.

SQL Server and Oracle have different ways of working with transactions. We did our best to abstract these differences in the OutSystems Agile Platform, and in most situations they will not have an impact on your work. But for those situations where more advanced knowledge is required, here's an explanation of how transactions work in these two RDBMSs.

In Oracle everything happens in the same transaction. This means that all queries, inserts, updates, etc. happen in the same database transaction, and the data is stored in the database when the transaction is committed. As an example of how this may impact your work, imagine you have an action that stores a record in the database and then calls a web service to check on that data. Since the web service runs in a different transaction (it's a different request), it will not see the new data until a commit is made (of course, at this point you would realize that communicating with a web service via the database is a bad idea and fix the design).
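The visibility rule can be imitated with two SQLite connections to the same file, standing in for the two requests. This is a sketch only: SQLite is not Oracle, but the read-committed-style visibility it shows here is the same idea:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)   # the original request's transaction
reader = sqlite3.connect(path)   # the "web service" request's transaction

writer.execute("CREATE TABLE Customer (Name TEXT)")
writer.commit()

writer.execute("INSERT INTO Customer VALUES ('Alice')")  # NOT committed yet

# The other connection cannot see the uncommitted row...
before = reader.execute("SELECT COUNT(*) FROM Customer").fetchall()[0][0]

writer.commit()

# ...but sees it as soon as the commit happens.
after = reader.execute("SELECT COUNT(*) FROM Customer").fetchall()[0][0]

print(before, after)  # 0 1
```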

In SQL Server things aren't that simple. SQL Server only supports one reader per transaction, and it has serious limitations on reading and writing in the same transaction. To hide this from developers, the OutSystems Agile Platform uses multiple transactions in SQL Server: one transaction is dedicated to writes, to ensure commit and rollback atomicity, and additional transactions are opened for reads, depending on what the current request needs. Because of this, and because read accesses to uncommitted rows block SQL Server at the default isolation level, transactions in SQL Server use the "read uncommitted" isolation level. This means that other requests accessing the data in the database will see the new data even before it is committed, unlike what happens with Oracle. In our web service example this means things would work in SQL Server (although it would still be a bad idea).
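The read-uncommitted visibility can also be imitated in plain SQLite using its shared-cache mode and the read_uncommitted pragma. Again a stand-in only, not SQL Server, but it shows the dirty read the paragraph above describes:

```python
import sqlite3

# Two connections sharing one in-memory database via shared-cache mode.
uri = "file:ruc_demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)
reader.execute("PRAGMA read_uncommitted = 1")  # allow dirty reads

writer.execute("CREATE TABLE Customer (Name TEXT)")
writer.commit()

writer.execute("INSERT INTO Customer VALUES ('Alice')")  # NOT committed yet

# Unlike the read-committed example, this reader already sees the new row,
# just like another request would on the platform's SQL Server stack.
dirty = reader.execute("SELECT COUNT(*) FROM Customer").fetchall()[0][0]
print(dirty)
writer.rollback()
```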
Hello all
 
With Platform 9, OutSystems released support for additional database stacks (MySQL and iDB2, the latter for integration purposes only). This means it is important to clarify the isolation levels the platform uses for each of these databases, and to confirm the behavior of the existing ones.
 
For the previously existing stacks (SQL Server, Oracle) the behavior was not altered. For the new stacks:
 
  • MySQL. The MySQL driver used by the OutSystems Platform has the same limitation as the SQL Server one - it only supports one reader per transaction. As such, we opted for the same behavior as with SQL Server - transactions are Read Uncommitted, with one transaction for writes and multiple transactions for reads.
     
  • iDB2. The iDB2 driver behaves the same way as Oracle, so we have Read Committed transactions and everything (reads and writes) running in the same transaction.

An additional clarification regarding SQL Server / MySQL. When Rodrigo says "One transaction is dedicated for the writes to ensure commit and rollback atomicity. Additional transactions are opened for the reads, depending on what is needed by the current request", "reads" should be understood as the primitives built for reading data: Get<Entity> and Get<Entity>ForUpdate; Simple Queries/Aggregates; Advanced Queries.
A particularly relevant scenario is advanced queries, which you might be using to do inserts or updates: these will run in a separate transaction. In particular scenarios this might cause locking. Imagine you run a Get<Entity>ForUpdate and then update the record in an advanced query: the advanced query runs in a different transaction and blocks on the lock held by the first one, so the update is not possible.
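The locking scenario can be reproduced with two SQLite connections, used here as a stand-in for the platform's main write transaction and the separate advanced-query transaction:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "lock.db")
main = sqlite3.connect(path, timeout=0.2)      # main (write) transaction
advanced = sqlite3.connect(path, timeout=0.2)  # "advanced query" transaction

main.execute("CREATE TABLE Stock (Id INTEGER PRIMARY KEY, Qty INTEGER)")
main.execute("INSERT INTO Stock VALUES (1, 10)")
main.commit()

# Like Get<Entity>ForUpdate: take a write lock and keep the transaction open.
main.execute("UPDATE Stock SET Qty = Qty WHERE Id = 1")

# The advanced query runs on a DIFFERENT transaction and blocks on that
# lock; after the timeout it gives up with "database is locked".
blocked = False
try:
    advanced.execute("UPDATE Stock SET Qty = 5 WHERE Id = 1")
except sqlite3.OperationalError:
    blocked = True
finally:
    main.rollback()

print(blocked)  # True
```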

Regards,
Acácio Porta Nova
Hi

I decided to produce a short table to make this clearer. 


 

Applies to both .NET and J2EE stacks:

Stack        Isolation level     Transactions per request
SQL Server   READ UNCOMMITTED    Multiple transactions: one for writes, one for each read.
Oracle       READ COMMITTED      One transaction; multiple reads in the same transaction.
iDB2         READ COMMITTED      One transaction; multiple reads in the same transaction.
MySQL        READ UNCOMMITTED    Multiple transactions: one for writes, one for each read.

Regards,
Acácio
Acácio,

Thanks, that is useful.
Hi all

Just letting you know that this post made it to the documentation. Check out the entry in Service Studio online help!

Acácio
Can someone comment on this?
Hi folks,

I would like to confirm an OutSystems behavior with you all.

Imagine that in my action the first thing I do is call a web service, and only after that do I perform some DB operations.

Is the transaction started just before the WS call?

If so, the connection is "stuck" until the end of the flow, right? What can I do to avoid starting the transaction unnecessarily before the WS call?

I have had a connection pool overhead error.

Stack: Java and Oracle.

Thanks.. ;)
Hello João, in my view the transaction will only be opened when the first database operation runs, so in your case you only have an open transaction after returning from the WS, or at least I think that should be the behavior. I will try to confirm or simulate this situation. As for the error you mentioned: is this database maintained by the platform, or is it an external database you access through an extension?
Alexandre Costa wrote:
Hello João, in my view the transaction will only be opened when the first database operation runs, so in your case you only have an open transaction after returning from the WS, or at least I think that should be the behavior. I will try to confirm or simulate this situation. As for the error you mentioned: is this database maintained by the platform, or is it an external database you access through an extension?
Hi

Actually, the transaction opens at the very beginning. But it's largely irrelevant, since the WS call runs in a separate transaction and the original one has not done anything yet.

This is what happens (each line is one transaction):


[ORIGINAL] START ------------------------------------------------ do stuff ---------- END
                                   |                                    ^
                                   v                                    |
[WS]                           START ---do stuff ----END

Acácio
Thank you guys. But my biggest concern is the total time a connection from the connection pool stays open. If the WS call takes 8 seconds to get a response, does that mean the connection, allocated from the pool at the beginning of the request, stays occupied that long too?
João Melo wrote:
Thank you guys. But my biggest concern is the total time a connection from the connection pool stays open. If the WS call takes 8 seconds to get a response, does that mean the connection, allocated from the pool at the beginning of the request, stays occupied that long too?
IMHO, if your WS takes 8 seconds, that is what you should be worried about, not the connection to the database.

Idle database connections should not be an issue (it's a matter of increasing the limit on both ends). The resources used by idle connections are (or should be) irrelevant; if your customer is worried about that, they are worrying about the wrong thing.
Connection pooling works by avoiding the constant opening and closing of connections; in peak usage periods the pool grows, but it later shrinks back down.

My biggest concern is the time the action stays running while the end user waits. Don't get me wrong: I know some integrations are slow.
But end users don't. If a screen takes 3 seconds, they tend to refresh, and then a duplicate request is made. Since requests are serialized in the platform (for a given user session), instead of waiting 10 seconds total they now wait 20. And if they keep refreshing, the feeling becomes "the server is down".

If you are doing something that requires a slow integration, you should have a lazy-load mechanism so that the user is not waiting on an open request. Some time ago I suggested this approach to a customer of ours:

> A timer that processes a "queue" of such requests. Meaning, a queue (typically an entity) with the parameters to be processed, a boolean for whether it's complete, and a column where the Excel file is stored.
> The screen places a request in the queue and wakes the timer that processes it. The user is then sent to a "waiting" screen.
> The timer processes the request and stores the Excel file in the result column.
> The waiting screen (with auto-refresh, say every 5 seconds) checks the queue (based on an identifier given when the request is placed). When the request is processed, the end user is given the option to download the file.
> A second timer that runs every hour or so deletes all queued requests that were processed more than, say, 30 minutes ago.
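A minimal sketch of this queue pattern in Python/SQLite. Names like RequestQueue, place_request, process_queue, and poll are all hypothetical; in a real application the queue would be an OutSystems entity and process_queue would be the timer's action:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE RequestQueue (
    Id INTEGER PRIMARY KEY,
    Params TEXT,
    IsComplete INTEGER DEFAULT 0,
    Result BLOB)""")
conn.commit()

def place_request(params):
    """Screen action: enqueue and return an identifier to poll with."""
    cur = conn.execute("INSERT INTO RequestQueue (Params) VALUES (?)", (params,))
    conn.commit()
    return cur.lastrowid

def process_queue():
    """Timer: process pending requests and store the result."""
    pending = conn.execute(
        "SELECT Id, Params FROM RequestQueue WHERE IsComplete = 0").fetchall()
    for req_id, params in pending:
        result = ("excel for " + params).encode()  # stand-in for the slow work
        conn.execute(
            "UPDATE RequestQueue SET IsComplete = 1, Result = ? WHERE Id = ?",
            (result, req_id))
    conn.commit()

def poll(req_id):
    """Waiting screen's auto-refresh: is the download ready yet?"""
    done, result = conn.execute(
        "SELECT IsComplete, Result FROM RequestQueue WHERE Id = ?",
        (req_id,)).fetchone()
    return bool(done), result
```

The screen calls place_request and navigates to the waiting screen; the waiting screen calls poll until it returns done, and only then offers the download.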
 
Hope this helps.
Acácio
 
Hey Acácio, I forgot to mention that it is a BPT process, so we don't have the session and waiting problems, since this is a naturally asynchronous operation. Thanks for the hints. I'm well aware of the usability and performance issues you pointed out.
João Melo wrote:
Hey Acácio, I forgot to mention that it is a BPT process, so we don't have the session and waiting problems, since this is a naturally asynchronous operation. Thanks for the hints. I'm well aware of the usability and performance issues you pointed out.
 
Hi João

If it's BPT then the same principles apply, but on a different time scale.
- For "pages", a 6-second integration is too slow.
- For BPT, a 5+ minute integration or batch process is slow.

The guidelines apply to both cases; only the time scale differs.

Glad it's useful!
Acácio

Ok, but if I have a huge volume of requests to the BPT activity endpoint and each transaction holds its connection for 8 seconds, I end up hitting MaxConnections in the JBoss connection pool configuration.
Hi João

Yes, that's right. In that case you'll have to increase the limit, both in the Configuration Tool and possibly on the Oracle side.

Cheers
Acácio