Adam Thurgar

Deleting large amounts of data


A client called me wanting to delete a large amount of data from their database. They knew which tables they wanted to delete from, and they knew that deleting large amounts of data causes the transaction log to grow.

It doesn’t matter what recovery model you are using; the transaction log will grow.

So if they went down the traditional delete path, they were looking at lots of batch deletes over a long period of time.
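For comparison, a minimal sketch of that traditional path, assuming a hypothetical Manifest table with a CreatedDate cut-off identifying the rows to remove:

    -- Delete in small batches so each transaction, and therefore each
    -- chunk of log growth, stays bounded.
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000) FROM dbo.Manifest
        WHERE CreatedDate < '2015-01-01';
        SET @rows = @@ROWCOUNT;
        -- In full recovery, take log backups between batches so the
        -- log space can be reused.
    END;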

An alternative could be to (see the sketch after this list):

  1. Copy the data you want to keep to a staging table, e.g. Manifest_Stage.

  2. Truncate the Manifest table – TRUNCATE is minimally logged, so it grows the transaction log far less.

  3. Import the data from Manifest_Stage back into Manifest.

  4. Drop the staging table.
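A minimal T-SQL sketch of those four steps, again assuming a hypothetical Manifest table where a CreatedDate cut-off identifies the rows to keep:

    -- 1. Copy the rows to keep into a staging table.
    --    (Table and column names are illustrative.)
    SELECT *
    INTO dbo.Manifest_Stage
    FROM dbo.Manifest
    WHERE CreatedDate >= '2015-01-01';

    -- 2. Truncate the original table (minimally logged).
    TRUNCATE TABLE dbo.Manifest;

    -- 3. Import the kept rows back.
    --    If Manifest has an IDENTITY column, use an explicit column
    --    list together with SET IDENTITY_INSERT dbo.Manifest ON.
    INSERT INTO dbo.Manifest
    SELECT * FROM dbo.Manifest_Stage;

    -- 4. Drop the staging table.
    DROP TABLE dbo.Manifest_Stage;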

You could also just do the first step, then drop Manifest and rename Manifest_Stage to Manifest.
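Sketched the same way, with the same illustrative names:

    -- Copy the rows to keep, then swap the tables instead of reimporting.
    SELECT *
    INTO dbo.Manifest_Stage
    FROM dbo.Manifest
    WHERE CreatedDate >= '2015-01-01';

    DROP TABLE dbo.Manifest;
    EXEC sp_rename 'dbo.Manifest_Stage', 'Manifest';

    -- SELECT ... INTO copies data only, so indexes, constraints and
    -- triggers have to be recreated on the renamed table.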

This is of course a simplified overview: you would also need to manage primary key and foreign key relationships if they exist, then make sure that the indexing is correct, along with any other table constraints, triggers and so on.
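For example, TRUNCATE TABLE fails if another table references Manifest through a foreign key, so a referencing constraint (the names below are hypothetical) has to be dropped first and recreated afterwards:

    -- TRUNCATE is not allowed on a table referenced by a foreign key.
    ALTER TABLE dbo.ManifestItem DROP CONSTRAINT FK_ManifestItem_Manifest;

    -- ... truncate and reload dbo.Manifest as above ...

    -- Recreate the constraint; WITH CHECK revalidates the existing rows.
    ALTER TABLE dbo.ManifestItem WITH CHECK
        ADD CONSTRAINT FK_ManifestItem_Manifest
        FOREIGN KEY (ManifestId) REFERENCES dbo.Manifest (ManifestId);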

