Adv Query DELETE FROM


When using DELETE FROM {DatabaseName} in an advanced query, it doesn't do an actual delete: the row keeps its Primary Key ([Id]) while the other columns are made blank.

Is there a way to write the advanced query so that

DELETE FROM {DatabaseName} actually deletes the whole row, without keeping the Primary Key Id?

DELETE FROM {Entity} deletes the whole row.

When you say that it keeps the ID, you mean that it maintains the current autonumber ID, right?

When you insert a new record it takes the next ID.

For the ID to be restarted it would be necessary to use the TRUNCATE TABLE statement, but the hubruntime user doesn't have sufficient permissions to execute it.
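On a plain SQL Server table, the behaviour described above can be illustrated like this (a sketch with a hypothetical table; the platform manages its own tables, so this is only to show DELETE vs. TRUNCATE identity behaviour):

```sql
-- Hypothetical table with an IDENTITY primary key
CREATE TABLE Orders (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Label NVARCHAR(50)
);

INSERT INTO Orders (Label) VALUES ('a'), ('b'), ('c');  -- gets Ids 1, 2, 3

DELETE FROM Orders;                       -- removes all rows, identity seed stays at 3
INSERT INTO Orders (Label) VALUES ('d');  -- gets Id 4

TRUNCATE TABLE Orders;                    -- removes all rows AND resets the identity seed
INSERT INTO Orders (Label) VALUES ('e');  -- gets Id 1 again
```

So the rows really are gone after DELETE; only the autonumber counter keeps running.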

So what can I do now?

Just one table; it has over 50 million records...

It's affecting the performance of the query as well...


Diogo Capitão Pedrosa wrote:

DELETE FROM {Entity} deletes the whole row.

When you say that it keeps the ID, you mean that it maintains the current autonumber ID, right?

When you insert a new record it takes the next ID.

For the ID to be restarted it would be necessary to use the TRUNCATE TABLE statement, but the hubruntime user doesn't have sufficient permissions to execute it.



If you are facing a performance problem, it's best to contact your DBA and do some database maintenance.

I think this document may help:

https://success.outsystems.com/Support/Enterprise_Customers/Maintenance_and_Operations/SQL_Server_Best_Practices
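The kind of routine maintenance that document covers can be sketched as follows (table name is hypothetical; your DBA would pick the actual targets based on fragmentation levels):

```sql
-- Typical maintenance for a large table whose queries have slowed down:
-- rebuild its indexes and refresh the optimizer's statistics.
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;  -- hypothetical table name
UPDATE STATISTICS dbo.MyBigTable;           -- refresh query-plan statistics
```

Rebuilding indexes on a 50-million-row table can be heavy, so this is usually scheduled off-hours.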


Diogo Capitão Pedrosa wrote:

DELETE FROM {Entity} deletes the whole row.

When you say that it keeps the ID, you mean that it maintains the current autonumber ID, right?

When you insert a new record it takes the next ID.

For the ID to be restarted it would be necessary to use the TRUNCATE TABLE statement, but the hubruntime user doesn't have sufficient permissions to execute it.

Also, on the Cloud service you might not be able to truncate.

If that's the case, I would suggest you start thinking about archiving old records into different tables in order to improve the performance of the main table.

However, having an incrementing ID doesn't have anything to do with table performance. If you delete the rows, the only thing that is not reset is the ID increment; the values themselves are deleted.

If you really want to reset the ID value and you are in a cloud environment (where you can't use TRUNCATE TABLE), I have a trick that will let you accomplish this. Note that using this trick will not produce any performance improvement; it just removes all the data and resets the ID.

There is one big assumption: the eSpace that contains the table has no other data that you need to preserve. What I do in all my applications is put all the entities in a database module, make them Public and Read-Only, and then create public actions for all the functions you need. This allows me to delete all my data and reset the IDs without affecting anything else.

1) Open the eSpace with the entity to reset in Service Studio
2) Save as OML (not required but is a backup just in case)
3) In Lifetime, delete the eSpace that has the entity
4) In Service Studio, publish the eSpace (you may need to make some cosmetic change to make the 1-Click button turn green)

I've used this many, many times with no issues.

Hope this helps,

Curt


Hello Curt,

Thats a nice suggestion you made there : ) 

However, if you simply cut and paste your table in the development environment, without having to publish in between, you get the same result.

In the end the platform creates a new table in the database, which will be empty.

Use with caution, since references to other eSpaces will have to be redone.

Best regards,

Cristiano

Cristiano,

Using my method, which does NOT cut and paste anything, all references are preserved, since nothing actually changes. Again, I've used it many, many times and it works great.

I do a lot of data conversions. When importing data, the ID number obviously increases. Once I get the import logic perfect, I do one last run, but use my trick first because, like the original poster, people don't like those higher-than-expected ID numbers.

Curt


I just cleared/deleted 100 million rows. Is there a reason why I am not freeing up more disk space?


While clearing that much data, it throws a "log is full" error a couple of times. I usually run this SQL code on the back end, then continue with clearing the database:

USE outsystems;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE outsystems
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (outsystems_log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE outsystems
SET RECOVERY FULL;
GO
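Rather than deleting everything in one statement and switching recovery models afterwards, a common way to keep the log from filling up in the first place is to delete in batches, so each batch is a separate small transaction. A sketch (table name, batch size, and WHERE clause are hypothetical and should match your archiving criteria):

```sql
-- Delete in small batches so the transaction log never has to hold
-- one enormous delete; each iteration commits independently.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.MyBigTable                -- hypothetical table name
    WHERE CreatedOn < '2015-01-01';    -- hypothetical archiving cutoff

    IF @@ROWCOUNT = 0 BREAK;           -- stop when nothing is left to delete
END;
```

Note that DELETE frees pages inside the data file but does not shrink the file itself, which is why deleting rows alone doesn't return disk space to the OS; that requires a shrink operation like the DBCC SHRINKFILE above.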