Adam Thurgar

Index your updates


We are currently working on a project to perform a major data update on a manufacturing database, updating client and product codes in preparation for moving to a new global system. The work spans 2 databases, 23 tables and hundreds of millions of rows. The updates themselves are not difficult, but at this size they can be quite slow.

This is when you must remember to check that the columns being updated are indexed, and that any columns used in table joins are indexed too. This step is commonly overlooked, and missing indexes can make UPDATE (and MERGE) statements take far longer than necessary, as well as take out exclusive table locks - causing blocking. As an example, a 43 million row update took over 10 minutes without an index; after the index was added it took just over 2 minutes.

These indexes do not need to be permanent - if they are not otherwise required, they can be dropped after the updates have completed. So check your indexes before you start doing any major data updates.

