eSpace Publish Timeout due to Large Entity
Question
Hi all,

Lately I've been having a problem publishing one eSpace that contains an Entity with many Attributes (Platform 8.0.1.25).
That entity has just under 142,000 rows, but has many indexes due to all its attributes of type "...Identifier".
For your information, SQL Server reports that this particular table occupies 51,296 KB of data space and 49,952 KB of index space.

A few days ago, we added 4 new attributes (all of type Decimal) and the Publish process timed out on the database script part. We downloaded the script and the problem was with one of the UPDATE statements, which was setting the new attribute to ZERO.

The UPDATE covered ALL rows and was taking too long to execute. So we commented out that part of the script and ran it again. It worked fine.

Now, the question: is this because the table has so many attributes of the "...Identifier" type (and therefore such a large index size)?
Staff
Hi Gonçalo,

The issue you had with the change to your entity was probably related to its size (~142,000 rows) and the 4 new attributes you added to it.

In OutSystems we never store "NULL" values in the database (1, 2). As such, when you add a new attribute (database column) to your entity (database table), the OutSystems Platform generates scripts that create the new columns in the database and then update them with the default null values ("", 0, 1900-01-01 00:00:00, ...).
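
To make this concrete, the generated script follows roughly this shape (a minimal sketch only; the table name, column name, and column type below are hypothetical, not the platform's actual output):

    -- Hypothetical shape of the platform-generated script.
    -- 1. Add the new column, initially allowing NULLs:
    ALTER TABLE dbo.OSUSR_ABC_MYENTITY
        ADD NEWDECIMALATTR DECIMAL(37,8) NULL;

    -- 2. Backfill every existing row with the default value in a single
    --    statement; on a large table this one UPDATE can outlast the
    --    configured query timeout:
    UPDATE dbo.OSUSR_ABC_MYENTITY
       SET NEWDECIMALATTR = 0;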

This operation probably caused the update query to time out. To overcome the problem, you can change the "update query timeout" in the Configuration Tool to a value high enough for the query to complete.

Please let us know if this worked for you.

Cheers!
Nuno Parreira
MVP
Nuno -

We've seen similar problems in the past as well. I think this is something that needs to be addressed. We've had deployments to PROD run into serious issues because of this. :(

J.Ja

Hi Nuno,

Thanks for your response.
I understand your explanation.

142,000 rows seems like a small number to be generating a timeout. If it were a million rows I could understand it, but 142,000?
Couldn't this also be related to the fact that this particular table has so many indexes?
Staff
It's probably due to the new fields, although 142,000 records isn't a lot... The indexes shouldn't be changed at all, unless you modified them to include the new columns.

I used to manage a system that had some tables with over 200M records and when we needed to add fields to them, we'd just run the scripts to create them manually one day before putting the modified application live, so our publish operation wouldn't go outside the nightly maintenance timeframe.

I know this is a pretty old thread, but we're running into the same issue. We have several tables in our app that contain a large number of rows. If we add columns to those tables in dev and deploy to prod, the platform runs a script that writes default values for the new columns into every row. In our case that prod update can run 15+ minutes and causes an almost complete loss of use of our production system while the db performs this massive single update transaction. Has anyone come up with a good, reliable way to work around this? At this point we can't add any new columns to these tables without facing this issue, and I hate for the database design to be driven by a platform limitation.

Has anybody else worked around this? Is it feasible to add the new columns and set the defaults in a more controlled way (maybe 1M rows at a time, avoiding an extended outage on the db) before doing the deploy?


Following is the OutSystems response from a ticket we had open on this:

We tested this scenario to clarify whether not defining the default value when creating a new attribute in an Entity would avoid the creation of the update script:

  • When creating a new attribute of type Integer in an Entity, the UpdateColumnDefaultValues.sql was created and it updated the value to 0.
  • When creating a new attribute of type String in an Entity, the UpdateColumnDefaultValues.sql was created and it updated the value to ''.
  • In any case, when a new attribute is added to an Entity, the UpdateColumnDefaultValues.sql of the corresponding module will have a statement that updates that value with the default defined in Service Studio or the default set automatically by the platform.
  • Any attempt to avoid this behavior by tampering with the SQL script may be tempting, but it will not be supported and may have unforeseen consequences. Therefore, we recommend avoiding this option altogether.

Hey Greg,


The best way to avoid the timeout is to create a new entity with a one-to-one relation to the big entity. But if you really need to add the new column to the big entity, then the only way is to do what I did: run the db update script manually.
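
If you do run it manually, you can also batch the backfill along the lines Greg suggested, so each transaction stays short and locks are released between batches. A minimal sketch for SQL Server, with hypothetical table and column names:

    -- Hypothetical batched backfill: updates rows in chunks instead of
    -- all at once, keeping each transaction short.
    DECLARE @BatchSize INT = 100000;

    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@BatchSize) dbo.OSUSR_ABC_MYENTITY
           SET NEWDECIMALATTR = 0
         WHERE NEWDECIMALATTR IS NULL;

        IF @@ROWCOUNT = 0 BREAK;  -- nothing left to update
    END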


Regards