Thursday 8 November 2012

SELECT * versus SELECT Col1, Col2 and Query Plans

Yesterday I deployed a colleague's project, a website and a SQL Server database, to my local machine. I hadn't seen the code before, but I'd seen the site in action. The website used Entity Framework to connect to the underlying database.

When I first ran the web page today there was no data in the database, so I ran an SSIS package to populate it.

Then, when I reloaded the web page, it timed out while executing the SQL query. Odd, I thought, so I looked at the underlying query. Entity Framework was issuing:

SELECT 
1 AS [C1], 
[GroupBy1].[K2] AS [IntraDayIntLegalEntityName], 
[GroupBy1].[K1] AS [IntraDayIntPortfolioName], 
[GroupBy1].[K3] AS [BaseCurrency], 
[GroupBy1].[A1] AS [C2], 
[GroupBy1].[A2] AS [C3]
FROM ( SELECT 
 [Extent1].[K1] AS [K1], 
 [Extent1].[K2] AS [K2], 
 [Extent1].[K3] AS [K3], 
 SUM([Extent1].[A1]) AS [A1], 
 SUM([Extent1].[A2]) AS [A2]
 FROM ( SELECT 
  [Extent1].[IntraDayIntPortfolioName] AS [K1], 
  [Extent1].[IntraDayIntLegalEntityName] AS [K2], 
  [Extent1].[BaseCurrency] AS [K3], 
  [Extent1].[IntraDayBasePayment] - [Extent1].[YestBasePayment] AS [A1], 
  [Extent1].[IntraDayBasePayment] - [Extent1].[YearEndBasePayment] AS [A2]
  FROM [FxR].[RawDataView] AS [Extent1]
 )  AS [Extent1]
 GROUP BY [K1], [K2], [K3]
)  AS [GroupBy1]

So I ran this query myself and it took ages. Longer than 15 minutes!

So I started to debug. The query selected from a view which itself performed several LEFT JOINs on another view.
But even more bizarrely, if I ran

SELECT * FROM [FxR].[RawDataView] AS [Extent1]

then the query took 10 seconds. If I instead ran

SELECT [IntraDayIntLegalEntityName] FROM [FxR].[RawDataView] AS [Extent1]

it took longer than 15 minutes! Hold on, why should SELECT * be quicker than SELECT Col1?

Time to debug! Enter the query plan. The plans for the two queries were clearly different. Reading through them (right to left, top to bottom), I could see that the faster query passed only around 3,000 rows early on, whereas the slow query passed 100,000 rows several times. That helped explain the slowness.
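If you want to reproduce this kind of comparison, one quick way (a sketch; the view name is the one from this post) is to capture the actual execution plan as text while running both queries:

```sql
-- Emit the actual execution plan as extra result rows after each query,
-- including per-operator actual row counts
SET STATISTICS PROFILE ON;

SELECT * FROM [FxR].[RawDataView];                            -- the fast query
SELECT [IntraDayIntLegalEntityName] FROM [FxR].[RawDataView]; -- the slow query

SET STATISTICS PROFILE OFF;
```

In Management Studio you can instead press Ctrl+M (Include Actual Execution Plan) before running the batch to get the graphical version.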


Also running the command

SET STATISTICS IO ON

showed output like the following:

Table 'TradeData'. Scan count xx, logical reads xxxxx.

The scan count and logical reads were vastly higher for the slow query.

Then, finally, I spotted the clue that led to the solution.
The execution plan showed that the estimated number of rows was vastly different from the actual number of rows. The optimiser was clearly expecting far fewer rows than it got, so the plan was obviously out of date.
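You can see the same estimated-versus-actual discrepancy without the graphical plan, since the XML plan records both figures (a sketch against the same view):

```sql
-- Return the actual plan as XML; compare each operator's EstimateRows
-- attribute with the ActualRows value in its RunTimeInformation element
SET STATISTICS XML ON;
SELECT [IntraDayIntLegalEntityName] FROM [FxR].[RawDataView];
SET STATISTICS XML OFF;
```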

Running this query showed when the statistics had last been updated:

SELECT
    t.name AS Table_Name,
    i.name AS Index_Name,
    i.type_desc AS Index_Type,
    STATS_DATE(i.object_id, i.index_id) AS Date_Updated
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.partitions sp ON i.object_id = sp.object_id
WHERE i.type > 0 AND sp.rows > 0
-- AND i.name like '%RawDataView%'
ORDER BY
    STATS_DATE(i.object_id, i.index_id) DESC,
    t.name ASC,
    i.type_desc ASC,
    i.name ASC

And this indicated the statistics had last been updated yesterday. SQL Server had built the query plan from yesterday's statistics, when the table was empty! After 100,000 rows were added today, that plan was hopelessly out of date and no longer applicable.
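It's also worth checking in this situation whether the database auto-updates statistics at all. Something like the following (an illustrative check, not from the original investigation) shows the relevant settings:

```sql
-- Check the auto-statistics settings for the current database
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = DB_NAME();
```

Even with auto-update enabled, statistics can lag badly after a bulk load, since SQL Server only triggers an automatic update once a sizeable fraction of the table's rows have changed.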

The solution? I ran

UPDATE STATISTICS [FxR].[TradeData] WITH FULLSCAN

on each table, and suddenly my query went from 15 minutes down to 10 seconds. The moral of the story? Learn to read query plans, and update your statistics if your data changes significantly!
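If you have many tables, repeating UPDATE STATISTICS by hand gets tedious. As a sketch of the alternatives, sp_updatestats covers every table in one call, though it uses the default sampling rate rather than a full scan:

```sql
-- Refresh out-of-date statistics on every table in the current database
EXEC sp_updatestats;

-- Or target a single table with a full scan, as in this post
UPDATE STATISTICS [FxR].[TradeData] WITH FULLSCAN;
```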
