Repost - originally posted 24-10-2012

Implicit conversions and poor performance


Whilst debugging a slow query recently, we went through the usual suspects: there were indexes on the join columns, all the table joins were defined, and no table scans were being done. What we eventually found was that, on a join of two large tables, the join columns had different datatypes. This caused an implicit conversion on the join, and because of the table sizes it was hurting performance. After some testing we were able to change the datatype of one column from char to int so that it matched the datatype of the other column, and the query performed better. No change to the query itself, just a change to the underlying datatype. We did have to drop the index on the column, take an outage window to make the datatype change, and then recreate the index.
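As a minimal sketch of the problem (OrderHeader, CustomerLookup and the column names below are hypothetical stand-ins for the real tables involved):

-- Hypothetical tables: OrderHeader.CustomerID is int, but
-- CustomerLookup.CustomerID was declared as char(10).
CREATE TABLE dbo.OrderHeader
(
    OrderID    int IDENTITY(1,1) PRIMARY KEY,
    CustomerID int NOT NULL
);

CREATE TABLE dbo.CustomerLookup
(
    CustomerID   char(10) NOT NULL CONSTRAINT PK_CustomerLookup PRIMARY KEY,
    CustomerName varchar(100) NOT NULL
);

-- int has higher datatype precedence than char, so SQL Server wraps the
-- char column in CONVERT_IMPLICIT for the join, which shows up in the
-- execution plan and can prevent an efficient index seek on that column.
SELECT o.OrderID, c.CustomerName
FROM dbo.OrderHeader AS o
JOIN dbo.CustomerLookup AS c
    ON o.CustomerID = c.CustomerID;   -- int = char(10): implicit conversion

-- The fix: drop the dependent index, alter the column to match, then
-- recreate the index (on a large table this needs an outage window).
ALTER TABLE dbo.CustomerLookup DROP CONSTRAINT PK_CustomerLookup;
ALTER TABLE dbo.CustomerLookup ALTER COLUMN CustomerID int NOT NULL;
ALTER TABLE dbo.CustomerLookup ADD CONSTRAINT PK_CustomerLookup PRIMARY KEY (CustomerID);

The telltale sign is a CONVERT_IMPLICIT in the query plan's join predicate; once both sides are int the conversion disappears.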

These implicit conversions also apply to stored procedures and functions, where your parameter datatype should match your variable datatype, which in turn should match the table column's datatype. Keeping them all the same can make a big difference.
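A hedged sketch of the same idea inside a procedure (ProductLookup and the procedure names are invented for illustration):

-- Hypothetical lookup table with a varchar key column.
CREATE TABLE dbo.ProductLookup
(
    ProductCode varchar(20) NOT NULL CONSTRAINT PK_ProductLookup PRIMARY KEY,
    ProductName varchar(100) NOT NULL
);
GO

-- Mismatched: nvarchar has higher precedence than varchar, so the column
-- side of the predicate is implicitly converted, which can force a scan
-- instead of a seek on the primary key.
CREATE PROCEDURE dbo.GetProduct_Mismatched
    @ProductCode nvarchar(20)
AS
    SELECT ProductName
    FROM dbo.ProductLookup
    WHERE ProductCode = @ProductCode;
GO

-- Matched: parameter and local variable both use the table's datatype,
-- so the predicate can seek the primary key with no conversion.
CREATE PROCEDURE dbo.GetProduct_Matched
    @ProductCode varchar(20)
AS
    DECLARE @Code varchar(20) = @ProductCode;  -- variable matches too
    SELECT ProductName
    FROM dbo.ProductLookup
    WHERE ProductCode = @Code;
GO

With the mismatched version, datatype precedence means the column gets converted rather than the parameter, so the cost is paid per row; matching the datatypes keeps the conversion out of the plan entirely.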

