Archive for the ‘Performance Tuning’ Category

How to find out what is causing page splits?

March 30, 2012 2 comments

A client was noticing a high number of Page Splits/sec relative to Batch Requests/sec, but could not figure out why.  They were not having a performance issue, but were still curious what was generating the splits.

So how do we track it down?  The short answer is there is no easy way.   Even though perfmon can tell you page splits are taking place, via Page Splits/sec in SQLServer: Access Methods, it cannot tell you where they are happening.

In SQL Server 2000/2005 we can use the fn_dblog function to read the transaction log, and in SQL Server 2008 we can rely on Extended Events (a minimal session sketch follows the query below).

  SELECT   Operation
         , AllocUnitName
         , COUNT(*) AS NumberofIncidents
    FROM sys.fn_dblog(NULL, NULL)
   WHERE Operation = N'LOP_DELETE_SPLIT'
GROUP BY Operation, AllocUnitName

Example from #1.
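
For the Extended Events route mentioned above, here is a minimal sketch (the session name and ring_buffer target are my own placeholder choices; note that the SQL Server 2008 page_split event does not distinguish between the different kinds of splits, which is what the Connect item in reference #2 is about):

-- Placeholder session name; capture page_split events into a ring buffer
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.page_split
(
    ACTION (sqlserver.database_id, sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer
GO

ALTER EVENT SESSION [TrackPageSplits] ON SERVER
STATE = START
GO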

I am not going to repeat the work done by others here; the code and examples are from the reference links below if you need additional details.

  1. Daniel Adeniji’s – Learning in the Open. Microsoft – SQL Server – Page Splits. Link.
  2. Microsoft Connect. SQL Server Extended Events Page/Splits Event Additions. Link.
  3. SQL Server Pro. Evaluating the Page Splits/sec Value. Link.
  4. SQLBlog.Com. Michael Zilberstein (Hopefully soon to be a SQL MVP ;-)). Monitoring page splits with Extended Events. Link.

Edit Notes

  • March 30, 2012: I had it wrong that Michael is not a SQL MVP yet, so here’s to hoping he gets MVP :). Thanks for the correction, mate.

How to troubleshoot Writelog wait type?

November 26, 2011 2 comments

A few weeks ago when I was teaching the SQL Server PTO workshop, I didn’t do a deep dive into how to troubleshoot the WRITELOG wait type.  Before I wrote something up, I decided to look at what was already out there.  I think Sakthivel does far more justice to the issue than I would have.  Please have a read about how the transaction log works and why you might see the WRITELOG wait type.

  • Sakthivel Chidambaram; What is WRITELOG waittype and how to troubleshoot and fix this wait in SQL Server? (Link)
  • SQLCAT, Lindsey Allan; Diagnosing Transaction Log Performance Issues and Limits of the Log Manager (Link)
  • SQLCAT Blog; Deploying SQL Server 2005 with SAN #3 (Link)
  • CSS SQL Server Engineer Blog; Discussion About SQL Server I/O (Link)
  • MSDN, SQL Server 2005 Technical Articles; SQL Server I/O Basics, Chapter 2 (Link)
  • Bob Dorr, Microsoft SQL Server Principal PFE; Microsoft SQL Server Database Engine I/O PowerPoint Presentation (Link)

(copied links from Sakthivel’s Blog).

Last updated date for statistics?

November 9, 2011 Leave a comment

There are many ways to get this information; however, in most cases you get it for a single table. If you want to get it for all statistics in a database, how do you do it? We can use the STATS_DATE function. Here is a simple query …

SELECT  OBJECT_NAME(s.OBJECT_ID)         AS TableName
      , s.name                           AS StatisticsName
      , STATS_DATE(s.OBJECT_ID,stats_id) AS LastUpdated
 FROM sys.stats s
 JOIN sys.tables t ON s.OBJECT_ID = t.OBJECT_ID

You can use this query to control which statistics to update based on when they were last updated.  With change windows being small in some organizations, you might not be able to rebuild all statistics every week, especially for larger tables with a large sample size; in that case, basing your updates on the last updated date will be helpful.
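
For example, a minimal sketch of that idea (the 7-day threshold is an arbitrary value for illustration): list only the statistics that have not been updated in the last week, and feed those into your UPDATE STATISTICS maintenance.

-- Statistics not updated in the last 7 days (or never updated)
SELECT  OBJECT_NAME(s.OBJECT_ID)            AS TableName
      , s.name                              AS StatisticsName
      , STATS_DATE(s.OBJECT_ID, s.stats_id) AS LastUpdated
 FROM sys.stats s
 JOIN sys.tables t ON s.OBJECT_ID = t.OBJECT_ID
WHERE STATS_DATE(s.OBJECT_ID, s.stats_id) < DATEADD(DAY, -7, GETDATE())
   OR STATS_DATE(s.OBJECT_ID, s.stats_id) IS NULL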

Get Top 50 SQL Statements/Query Plans

October 21, 2011 1 comment

Finding out which statements are causing grief on a SQL Server has become significantly easier with SQL Server 2005+.  I am using some of the DMVs/DMFs to pull this information together.  Please note that on a 64-bit system with lots of memory this can take a while to run, because of the sort and because it pulls execution plan information.  If that is an issue, I recommend getting the full information without the ORDER BY, dumping it to a temp table, and doing your analysis from there.  Here are a few options for running this statement:

The following query returns everything; it is very expensive and takes a while to complete. Even if you dump it to tempdb, it can take 10+ minutes on systems with lots of memory.

-- Get Top 50 Statements that have been recorded in SQL Server since last restart.
--

DECLARE @Top50By INT

-- 1 = Duration, 2 = CPU, 3 = Reads
SET @Top50By = 1

     SELECT   TOP 50
              SUBSTRING(st.text,
                        (qs.statement_start_offset/2)+1,
                        ((CASE qs.statement_end_offset WHEN -1 THEN
                             DATALENGTH(st.text)
                          ELSE
                             qs.statement_end_offset
                          END - qs.statement_start_offset)/2) + 1)  AS SQLStatement
            , qp.query_plan                                         AS QueryPlan
            , execution_count                                       AS NumOfExecutions
            , (total_elapsed_time/execution_count)/1000             AS AVGDuration_ms
            , (total_worker_time/execution_count)/1000              AS AVGCPU_ms
            , (total_logical_reads/execution_count)                 AS AVGReads
            , (total_logical_writes/execution_count)                AS AVGWrites
       FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text (qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
   ORDER BY CASE
            WHEN @Top50By = 1 THEN
               (total_elapsed_time/execution_count)/1000
            WHEN @Top50By = 2 THEN
               (total_worker_time/execution_count)/1000
            WHEN @Top50By = 3 THEN
               (total_logical_reads/execution_count)
            END DESC

Same code, but I have taken out the execution plan, so it should execute faster. The idea is that once you know which queries you want to address, those are the only plans you bring up, making it faster. However, with this method, if you save the data there is no guarantee the plan will still be in cache in the future. A sketch of pulling a single plan from its plan_handle follows the query below.

-- Get Top 50 Statements that have been recorded in SQL Server since last restart.
--

DECLARE @Top50By INT

-- 1 = Duration, 2 = CPU, 3 = Reads
SET @Top50By = 1

     SELECT   TOP 50
              SUBSTRING(st.text,
                        (qs.statement_start_offset/2)+1,
                        ((CASE qs.statement_end_offset WHEN -1 THEN
                             DATALENGTH(st.text)
                          ELSE
                             qs.statement_end_offset
                          END - qs.statement_start_offset)/2) + 1)  AS SQLStatement
            , qs.plan_handle										AS PlanHandle
            , execution_count                                       AS NumOfExecutions
            , (total_elapsed_time/execution_count)/1000             AS AVGDuration_ms
            , (total_worker_time/execution_count)/1000              AS AVGCPU_ms
            , (total_logical_reads/execution_count)                 AS AVGReads
            , (total_logical_writes/execution_count)                AS AVGWrites
       FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text (qs.sql_handle) st
   ORDER BY CASE
            WHEN @Top50By = 1 THEN
               (total_elapsed_time/execution_count)/1000
            WHEN @Top50By = 2 THEN
               (total_worker_time/execution_count)/1000
            WHEN @Top50By = 3 THEN
               (total_logical_reads/execution_count)
            END DESC
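
As mentioned above, once you have picked a statement from these results you can pull just its plan on demand. A minimal sketch (the TOP 1 assignment is a stand-in for pasting the actual PlanHandle value you care about from the result set):

DECLARE @PlanHandle VARBINARY(64)

-- Stand-in: normally you would copy the PlanHandle value from the results above
SELECT TOP 1 @PlanHandle = plan_handle
  FROM sys.dm_exec_query_stats

-- Pull only that one plan instead of materializing every plan up front
SELECT query_plan
  FROM sys.dm_exec_query_plan(@PlanHandle)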

Getting Index count for all tables

August 9, 2010 Leave a comment
WITH cte AS
(
    SELECT   t.name AS Table_Name
	       , t.object_id
		   , i.index_id
		   , i.type
		   , i.type_desc
      FROM sys.indexes i
INNER JOIN sys.tables t
        ON i.object_id = t.object_id
     WHERE t.is_ms_shipped = 0
	   AND i.is_disabled = 0
	   AND i.is_hypothetical = 0
       AND i.type <= 2
), cte2 AS
(
SELECT *
  FROM cte c
 PIVOT (count(type) for type_desc in ([NONCLUSTERED])) pv
)
    SELECT   c2.table_name       AS TableName
           , MAX(p.rows)         AS Rows
           , CASE WHEN (OBJECTPROPERTY(c2.object_id,'TableHasClustIndex') = 1) THEN
                'CLUSTERED'
             ELSE
                'HEAP'
             END                 AS TableType
           , sum([NONCLUSTERED]) AS NumOfNonClusteredIndexes
      FROM cte2 c2
INNER JOIN sys.partitions p
        ON c2.[object_id] = p.[object_id]
	   AND c2.index_id = p.index_id
  GROUP BY table_name, c2.object_id

Davide Mauri wrote a nice script; I have done similar work without using a CTE, so I really liked it.  I noticed in his comments that Rob suggested using OBJECTPROPERTY.  I haven’t used the function much, but just to get used to it I have re-written Davide’s script using OBJECTPROPERTY and reformatted the code to my style ;-).   I like Davide’s script because, when it comes to T-SQL coding, I am fairly weak, so it was nice seeing an example of a double CTE + PIVOT :D.

Reference

  1. SQLBlog.Com. Davide Mauri. Find all the tables with no indexes at all. Link.

Listing blocking chain by tasks

May 25, 2010 Leave a comment
WITH TaskChain (waiting_task_address, blocking_task_address, ChainId, LEVEL)
AS
(
   -- Anchor member definition: use a self join so that we output
   -- only tasks that are blocking others, and remove duplicates

   SELECT   DISTINCT A.waiting_task_address
          , A.blocking_task_address
          , A.waiting_task_address   AS ChainId
          , 0 AS LEVEL
     FROM sys.dm_os_waiting_tasks AS A
     JOIN sys.dm_os_waiting_tasks AS B
       ON A.waiting_task_address = B.blocking_task_address
    WHERE A.blocking_task_address IS NULL

UNION ALL

	-- Recursive member definition: Get to the next level waiting tasks

   SELECT   A.waiting_task_address
          , A.blocking_task_address
          , B.ChainId
          , LEVEL + 1
     FROM sys.dm_os_waiting_tasks AS A
     JOIN TaskChain AS B
       ON B.waiting_task_address = A.blocking_task_address
)
  SELECT   waiting_task_address
         , blocking_task_address
         , ChainId
         , LEVEL
    FROM TaskChain
ORDER BY ChainId

References: #20

Queries execute slow, what is the issue?

May 4, 2010 Leave a comment

This is a very loaded question and there is no simple answer.  But a few things to check before you go crazy on PTO: make sure your indexes are maintained for the databases.  We can execute a query similar to [6], just with “index_id > 0”; that is, I only want to know about clustered and nonclustered indexes. Please note I am supplying the DB_ID() function, therefore it must be executed in the context of the database that is giving you issues.

  SELECT   OBJECT_NAME(IPS.OBJECT_ID)       AS TableName
         , CASE WHEN i.name IS NULL THEN
              'HEAP TABLE, NO INDEXES'
           ELSE
              i.name
           END                              AS IndexName
         , index_type_desc                  AS IndexType
         , index_depth
         , avg_fragmentation_in_percent
         , page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') IPS
    JOIN sys.indexes I ON IPS.OBJECT_ID = I.OBJECT_ID AND IPS.index_id = I.index_id
   WHERE I.index_id > 0
ORDER BY avg_fragmentation_in_percent

The information returned by this query can be daunting on larger databases, but the general idea is to watch the average fragmentation.  It should be as low as possible; if it is higher than 10% you might have to look at defragmenting your indexes.  If this query returns no rows, you have a serious configuration issue in the database, because it means there are no indexes on the user tables.  If it only returns rows where the index_id is 2 or higher, the database does not have any clustered indexes, which can be the host of other issues.
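
As a rough sketch of the defrag step (the 10%/30% thresholds are the commonly quoted guidelines, and the table/index names are placeholders), the choice usually comes down to reorganize versus rebuild:

-- Moderate fragmentation (roughly 10-30%): reorganize
ALTER INDEX IX_SampleIndex ON dbo.SampleTable REORGANIZE

-- Heavy fragmentation (roughly 30%+): rebuild
ALTER INDEX IX_SampleIndex ON dbo.SampleTable REBUILD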

Another DMV you can use to see what’s happening on your SQL Server is dm_io_virtual_file_stats; with it you can see, relatively, which file on your server is being accessed the most.

-- Support Statements
DECLARE @DBID AS int

-- Step 1: Get Database ID for which you wish to get information for
SELECT @DBID = db_id('master')

-- Step 2: Average I/O stall information for the database (UNIONed with tempdb, database_id = 2, for comparison)
  SELECT DB_NAME(database_id) AS DBName,
         file_id,
         CASE WHEN num_of_reads = 0 THEN
            io_stall_read_ms / 1
         ELSE
            io_stall_read_ms / num_of_reads
         END AS AVG_IO_StallTime_Reads_ms,
         CASE WHEN num_of_writes = 0 THEN
            io_stall_write_ms / 1
         ELSE
            io_stall_write_ms / num_of_writes
        END AS AVG_IO_StallTime_Writes_ms
   FROM sys.dm_io_virtual_file_stats(@DBID,null)

UNION

  SELECT DB_NAME(database_id) AS DBName,
         file_id,
         CASE WHEN num_of_reads = 0 THEN
            io_stall_read_ms / 1
         ELSE
            io_stall_read_ms / num_of_reads
         END AS AVG_IO_StallTime_Reads_ms,
         CASE WHEN num_of_writes = 0 THEN
            io_stall_write_ms / 1
         ELSE
            io_stall_write_ms / num_of_writes
        END AS AVG_IO_StallTime_Writes_ms
   FROM sys.dm_io_virtual_file_stats(2,null)
GO

After all that, if the issues are still not obvious, you can do some SQL Profiler logging with the following events; then we can run the trace through a utility (like ClearTrace) that will tell us where the performance impacts are within SQL Server.

Events you should log at a minimum:
Stored Procedures/RPC:Completed (SPID, CPU, Reads, Writes, Duration, TextData, StartTime)
TSQL/SQL:BatchCompleted (SPID, CPU, Reads, Writes, Duration, TextData, StartTime)

After logging this data, running ClearTrace against it gives you an overview of where the issue might be.

Last but not least, to find out if there are blocking issues, please refer to [2] on how to enable the blocking check in SQL Server.  Then create another SQL Server Profiler trace with the following event to see what blocking is occurring (feel free to email me, I can try helping with the report).

Events to log for blocking:
Errors and Warnings/Blocked Process Report (All Columns)

Performance tuning SQL Server is a lot of work and there are complete books dedicated to it, but this gives you some of the basics on where to start investigating the issues.

Reference Links

  1. Clear Trace. Link.
  2. Enabling Block Report Check in SQL Server. SQLLearnings. Link.
  3. Inside Microsoft SQL Server 2005: The Storage Engine (Solid Quality Learning) Link.
  4. The Guru’s Guide to SQL Server Architecture and Internals Link.
  5. Microsoft SQL Server 2008 Internals (Pro – Developer) Link.
  6. SQLCAN. Mohit Gupta. Getting List of Fragmented Indexes in a Database. Link.
Categories: Performance Tuning

Performance Tuning Stored Procedures

April 11, 2010 Leave a comment

I have been performance tuning an OLTP system built with a Visual Basic front end, a middle tier of Microsoft components, and SQL Server as the back-end DBMS.  It uses stored procedures for all of its database work.  This makes it easier for me to tune most of the issues in the system, because the front-end code doesn’t have to be re-written and redeployed.

Anyhow, this system was having serious performance issues for many reasons (most of them still being worked on).  But today I am going to talk about two issues in particular: 1) parameter sniffing, and 2) a MAX sub-select without a GROUP BY clause.

1) Parameter Sniffing
This is not a new topic, but since most of the systems I support as a DBA don’t involve stored procedures I don’t run into this much.

“Parameter Sniffing” refers to a process where by SQL Server’s execution environment “sniffs” the current parameter values during compilation or recompilation, and passes it along to the query optimizer so that they can be used to generate potentially faster query execution plans.  The word “current” refers to the parameter values present in the statement call that caused a compilation or recompilation. [1].

Omnibuzz [1] talks about a simple way to fix this issue.  I just want to talk about how to figure out whether this issue exists in your system, or whether you are being affected by it.

When SQL Server is generating execution plans, it relies on indexes and statistics to tell it how it should access the data.  These plans stay in memory until SQL Server can no longer hold them or the plans have expired, forcing a recompile [2].  When SQL Server builds a plan for a stored procedure, it bases its decisions on the question asked; but if that question changes every time, the execution plan might not be as optimal for the next question as it was for the first.  One reason for this can be missing statistics, generating a less than optimal execution plan; another is the data in the affected table(s) being distributed like a bell curve [3].
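
A quick way to check whether a procedure’s plan is currently sitting in cache, and how often it has been reused, is to query the plan cache DMVs. A minimal sketch (the procedure name is a placeholder):

SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       st.text
  FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
 WHERE st.text LIKE '%MyStoredProcedure%'   -- placeholder procedure name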

How I noticed I had issue with this particular stored procedure?

  • I was recording execution plans from production and doing my testing on another SQL Server.
  • Both databases were the same; test had been restored from production.
  • The indexes and statistics were all up to date on both systems (i.e. there was no fragmentation and they were not outdated; this does not exclude the possibility that there might be missing indexes/statistics).
  • When the stored procedure executed on the production system, it generated 22 million reads, took about 220s to execute, and returned 2 rows (eww, right?) [Plan A].
  • When I executed the same stored procedure on the test server, it generated about 50K reads, took about 20s to execute, and also returned 2 rows (what the!?!) [Plan B].

At first parameter sniffing didn’t come to mind; I first looked at the execution plans on both servers, and they looked almost identical.  But there were a few key differences that caused [Plan A] to do full table scans of 2 more tables, because that plan had been generated for a large data set, whereas [Plan B] had been generated for 2 rows only.  SQL Server decides between a scan and a seek on an index based on the number of rows being selected; the tipping point is around 0.3% of the rows [4].   So now I was thinking this was parameter sniffing, and I made the stored procedure use a generalized plan.  Generalized plans work great for about 70% of the queries; as extreme data is selected from either end of the bell curve, performance degrades a bit.
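
To be clear about what I mean by a generalized plan, here is a minimal sketch of one common way to do it (the procedure, table, and parameter names are made up for illustration, and this local-variable approach is just one option, not necessarily the exact change made to the procedure in question): copying the incoming parameter into a local variable prevents the optimizer from sniffing the caller’s value, so the plan is built for the average case.

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    -- The optimizer cannot sniff the value of a local variable,
    -- so the plan is built for an "average" customer rather than
    -- the specific value passed in by the caller
    DECLARE @LocalCustomerID INT
    SET @LocalCustomerID = @CustomerID

    SELECT OrderID, OrderDate
      FROM dbo.Orders
     WHERE CustomerID = @LocalCustomerID
END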

After generalizing I was expecting the performance to be better, and as expected reads did go down from 22 million to about 150K.  But considering the procedure was returning 2 rows, this was still not a good enough answer.  At least it was the same execution plan every time, so it was now possible to tune it.  I turned my attention to possible missing indexes and the SQL code in the stored procedure.  I couldn’t really see any missing indexes, so, still a bit confused, I kept looking at the execution plan and the stored procedure and found a few remaining full table scans. This led me to the following code, which was causing SQL Server to generate full table scans.

2) Sub-select issues

SELECT *
  FROM SampleTable ST
 WHERE DTValue = (SELECT MAX(DTValue)
                    FROM SampleTable
                   WHERE SampleTable.ID = ST.ID)
   AND MainID = 98

Note: SampleTable has 3 columns, MainID, ID, and DTValue, with 1 million rows.  I generated the data using Red Gate’s data generator, so I don’t have a script, sorry.

This looked like fairly simple code, but it generated the following execution plan:

The problem with this execution plan is that it generates a full scan.  But why?  We are passing in the MainID value, so it should only need to select a subset of the data.  The problem is that the sub-select used to compute the max date for a given ID has no GROUP BY clause, and the sub-select doesn’t know anything about the MainID.  So the execution plan does the following to return the result set:

  1. Get a list of all ID and DTValue pairs from the clustered index, returning the full 1 million rows (note it is ignoring MainID; it has no idea which ID value links to which MainID).
  2. Compute the max date for each ID value from step 1, returning ID and MAX(DTValue) pairs, 159 rows (again, this is effectively invalid data, as the rows returned *might* not be for the MainID that was passed in).
  3. Next, seek the MainID index based on the MainID value we passed, getting pairs of ID and DTValue for that MainID, returning 635 rows.
  4. Last but not least, match the results from step 2 and step 3 to produce a final output of 0 rows (because the rows returned from step 2 are all invalid).

So what are the problems with this?

  1. The missing filter clause causes the system to return invalid data.
  2. The missing GROUP BY clause makes SQL Server do the grouping before filtering the data.

I rewrote the SQL to following:

SELECT *
  FROM SampleTable ST
 WHERE DTValue = (SELECT MAX(DTValue)
                    FROM SampleTable
                   WHERE SampleTable.ID = ST.ID
                     AND SampleTable.MainID = ST.MainID
                GROUP BY ID)
   AND MainID = 786

This time it generates the following execution plan:

This execution plan is bigger than the previous one, but let’s look at the differences.

  1. Index seek on MainID, returning the triplet of MainID, ID, and DTValue for the given MainID, returning 635 rows (top row).
  2. Index seek based on MainID and ID, returning the triplet of MainID, ID, and DTValue for the given MainID and ID (bottom row; steps 2-4 are repeated for each row from step 1).
  3. Followed by TOP, which is looking at the top DTValue.
  4. Followed by the Stream Aggregate, as before, to compute MAX(DTValue).
  5. Assert, checking to make sure some rows were returned.
  6. After it finishes computing MAX(DTValue) for each MainID and ID pair, it filters the data based on the MAX(DTValue) found.

This time it had to do more work to get the final result, but there are a few key differences:

  1. No full table scan on the 1M row table.  It did quick seeks 635 times, each of which would have caused at most about 3 reads, for a total of roughly 1905 reads, compared to 4435 reads before.  So we have gained an improvement of about 232%!  Imagine the effect on a table with 10 million, 100 million, or more rows.
  2. We are now actually getting some results back, compared to zero before.  So this actually corrects a mistake in the query as well.

References
[1] Parameter Sniffing & Stored Procedure Execution Plan. SQL Garbage Collector. Omnibuzz. Link.
[2] Execution Plan Cache and Reuse.  Books Online. Microsoft. Link.
[3] Bell Curve. Wikipedia. Link.
[4] Seek or Scan?. Microsoft SQL Server MVP, Gail Shaw. Link.
[5] Dissecting SQL Server Execution Plans, Grant Fritchey. Link.

Categories: Performance Tuning

Benchmark Logging

May 29, 2009 Leave a comment

Benchmarking allows system administrators to easily see what changed between the time the system went live and when the current performance issue is being experienced. I was setting this up on a new SQL 2005 cluster running on Windows 2008, and I have set up the following counters. Everyone’s needs will be different; if there are counters anyone thinks I should consider adding, please email me at gupta.mohit.k@gmail.com or leave me a comment here 🙂 Thanks!

I am tracking the following counters; here is my reasoning for each …

System/OS Level:

  • Logical Disk: I know many people recommend Physical Disk in case you are running multiple partitions on one disk. In my case each logical disk is not shared with anything else, so I am only tracking logical disk performance.
  • Memory: I am running an Active-Active configuration, so I need to know how well that will work with memory (I have Min/Max memory and Lock Pages in Memory all configured).
  • Page file: again, testing memory pressure.
  • Network: network I/O.
  • Processor: Processor % Time, Queue Length, etc.

SQL Level:

  • Buffer Manager: memory pressure.
  • Database statistics: size, log usage, number of transactions.
  • General server statistics: the application in question uses a lot of temp tables, so I need to know when the usage of temp tables increases in case I have to adjust tempdb.
  • Latch and lock information: blocking information, if any.
  • Transaction information: free space in tempdb, batch requests, recompiles, plan cache, etc.
  • Wait statistics: are any waits taking place? If so, are they network, disk, memory, or CPU?

Counter List:
\LogicalDisk(*)\% Disk Read Time
\LogicalDisk(*)\% Disk Time
\LogicalDisk(*)\% Disk Write Time
\LogicalDisk(*)\% Free Space
\LogicalDisk(*)\% Idle Time
\LogicalDisk(*)\Avg. Disk Bytes/Read
\LogicalDisk(*)\Avg. Disk Bytes/Write
\LogicalDisk(*)\Avg. Disk Read Queue Length
\LogicalDisk(*)\Avg. Disk sec/Read
\LogicalDisk(*)\Avg. Disk sec/Write
\LogicalDisk(*)\Avg. Disk Write Queue Length
\Memory\Available MBytes
\Memory\Page Faults/sec
\Memory\Pages/sec
\MSSQL$SQLInstanceName:Buffer Manager\Buffer cache hit ratio
\MSSQL$SQLInstanceName:Buffer Manager\Checkpoint pages/sec
\MSSQL$SQLInstanceName:Buffer Manager\Page life expectancy
\MSSQL$SQLInstanceName:Databases(*)\Active Transactions
\MSSQL$SQLInstanceName:Databases(*)\Backup/Restore Throughput/sec
\MSSQL$SQLInstanceName:Databases(*)\Data File(s) Size (KB)
\MSSQL$SQLInstanceName:Databases(*)\Log File(s) Size (KB)
\MSSQL$SQLInstanceName:Databases(*)\Log Flushes/sec
\MSSQL$SQLInstanceName:Databases(*)\Percent Log Used
\MSSQL$SQLInstanceName:Databases(*)\Transactions/sec
\MSSQL$SQLInstanceName:General Statistics\Active Temp Tables
\MSSQL$SQLInstanceName:General Statistics\Logins/sec
\MSSQL$SQLInstanceName:General Statistics\Transactions
\MSSQL$SQLInstanceName:Latches\Average Latch Wait Time (ms)
\MSSQL$SQLInstanceName:Latches\Latch Waits/sec
\MSSQL$SQLInstanceName:Latches\Total Latch Wait Time (ms)
\MSSQL$SQLInstanceName:Locks(*)\Average Wait Time (ms)
\MSSQL$SQLInstanceName:Locks(*)\Lock Requests/sec
\MSSQL$SQLInstanceName:Locks(*)\Lock Timeouts (timeout > 0)/sec
\MSSQL$SQLInstanceName:Locks(*)\Lock Waits/sec
\MSSQL$SQLInstanceName:Locks(*)\Number of Deadlocks/sec
\MSSQL$SQLInstanceName:Memory Manager\Connection Memory (KB)
\MSSQL$SQLInstanceName:Memory Manager\Target Server Memory (KB)
\MSSQL$SQLInstanceName:Memory Manager\Total Server Memory (KB)
\MSSQL$SQLInstanceName:Plan Cache(SQL Plans)\*
\MSSQL$SQLInstanceName:Plan Cache(Temporary Tables & Table Variables)\*
\MSSQL$SQLInstanceName:SQL Errors(*)\*
\MSSQL$SQLInstanceName:SQL Statistics\Batch Requests/sec
\MSSQL$SQLInstanceName:SQL Statistics\SQL Compilations/sec
\MSSQL$SQLInstanceName:SQL Statistics\SQL Re-Compilations/sec
\MSSQL$SQLInstanceName:Transactions\Free Space in tempdb (KB)
\MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time
\MSSQL$SQLInstanceName:Transactions\Transactions
\MSSQL$SQLInstanceName:Wait Statistics(*)\Lock waits
\MSSQL$SQLInstanceName:Wait Statistics(*)\Memory grant queue waits
\MSSQL$SQLInstanceName:Wait Statistics(*)\Network IO waits
\MSSQL$SQLInstanceName:Wait Statistics(*)\Page IO latch waits
\MSSQL$SQLInstanceName:Wait Statistics(*)\Page latch waits
\MSSQL$SQLInstanceName:Wait Statistics(*)\Wait for the worker
\Network Interface(*)\Bytes Received/sec
\Network Interface(*)\Bytes Sent/sec
\Network Interface(*)\Bytes Total/sec
\Network Interface(*)\Packets/sec
\Paging File(*)\*
\Processor(*)\% Privileged Time
\Processor(*)\% Processor Time
\Processor(*)\Interrupts/sec
\System\Context Switches/sec
\System\Processes
\System\Processor Queue Length
\System\Threads
\Terminal Services\*

Categories: Performance Tuning

Issues with abusing Table Variables

April 23, 2009 2 comments

I have been thinking about writing about the performance tuning issues I have been dealing with. Every DBA has their own stories of problems they have run into. In my case, it is usually easier to resolve these issues by creating proper indexes, but in some cases the actual code has to be re-written because of wide selects. Recently, while troubleshooting excessive use of tempdb in a system I look after, I found an interesting use of table variables.

The code dynamically builds a SQL statement batch and executes it against SQL Server every time someone accesses a page on the website. The code is not complicated, but every time it does the following:

  1. Create table variables (TVs) to temporarily hold the data for the dynamically generated batch.
  2. Take the full contents of the main data tables and transfer them into the TV created in step 1.
  3. Create another TV and insert into it the filtered list of primary key ID values from the first TV.
  4. Finally, return the results to the user by taking the first TV from step 2 and filtering its data based on the information in the second TV created in step 3.

What is wrong with this approach?
First, some brief information on what TVs are …

  • They were introduced in SQL Server 2000.
  • They work like variables and have a well defined scope.
  • They are temporary tables that get created in tempdb.
  • Their definition gets created in tempdb under a system-generated name and is not visible outside the scope of the batch in which the variable was declared.
  • They get created as soon as you use the DECLARE statement and are destroyed as soon as your batch completes.

So dumping data into a TV is not bad for a small data set. But when you are working with thousands, if not tens of thousands, of records, it can cause excessive locking and heavy tempdb usage. I am guessing they used it because Books Online states table variables “cause fewer recompilations” than temporary tables, but that applies to stored procedures, not ad-hoc SQL statements. In addition, these tables have no indexes and no statistics, so all access to the table happens via a full table scan.

I created a test database; it contains 3 tables with all the proper indexes these tables should have, and I populated MainTable with 1000 rows, SubTable with 1000 rows, and PageTypeTable with 100 rows.
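
A rough sketch of what that test schema could look like (the column names are taken from the example below; the data types and key choices are my own guesses):

-- Hypothetical reconstruction of the three test tables
CREATE TABLE PageTypeTable (
    PageTypeID  INT         PRIMARY KEY,
    Description VARCHAR(50))

CREATE TABLE MainTable (
    MainTableID INT         PRIMARY KEY,
    PageName    VARCHAR(50),
    PageTypeID  INT         REFERENCES PageTypeTable (PageTypeID))

CREATE TABLE SubTable (
    MainTableID INT         REFERENCES MainTable (MainTableID),
    PageData1   VARCHAR(50),
    PageData2   VARCHAR(50))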

Example:
So let’s look at an example of what I noticed and the issues with this approach (this is a simplified example):

DECLARE @rowData TABLE
    (MainTableID INT,
     PageName    VARCHAR(50),
     PageData1   VARCHAR(50),
     PageData2   VARCHAR(50),
     PageTypeID  INT,
     PageType    VARCHAR(50))

INSERT INTO @rowData
SELECT main.MainTableID,
       main.PageName,
       main.PageData1,
       main.PageData2,
       ptype.PageTypeID,
       ptype.Description AS PageType
  FROM (SELECT M.MainTableID, M.PageName, S.PageData1, S.PageData2, M.PageTypeID
          FROM MainTable AS M
    INNER JOIN SubTable AS S ON M.MainTableID = S.MainTableID) AS main
  LEFT JOIN PageTypeTable AS ptype ON main.PageTypeID = ptype.PageTypeID

DECLARE @FilteredData TABLE
    (MainTableID INT PRIMARY KEY)

INSERT INTO @FilteredData
SELECT DISTINCT MainTableID
  FROM @rowData AS R
 WHERE R.PageTypeID = 89

SELECT *
  FROM @rowData AS R
 WHERE R.MainTableID IN (SELECT MainTableID
                           FROM @FilteredData)

So if we look at the above example, the first thing that comes to mind is: why is the code doing SQL Server’s job? Why not let SQL Server filter the data and return it in one select? Why make three selects out of it, and why work with table variables?

I ran this code 5 times; here are the SQL Profiler results (statement-level tracking):

Batch #   CPU   Duration   Reads   Writes   RowCounts
1         16    10         2368    3        1026
2         16    16         2371    3        1026
3         16    10         2365    3        1026
4         15    11         2368    3        1026
5          0    13         2372    3        1026

So what’s interesting here? We see that every time I execute it, SQL Server has to re-read the records and write them out, and it is working with the full table: 1000 rows + 13 filtered rows + 13 final rows in the data set. Now, if I rewrite the SQL as follows …

Refined Query (Taken from the first Query):

SELECT main.MainTableID,
       main.PageName,
       main.PageData1,
       main.PageData2,
       ptype.PageTypeID,
       ptype.Description AS PageType
  FROM (SELECT M.MainTableID, M.PageName, S.PageData1, S.PageData2, M.PageTypeID
          FROM MainTable AS M
    INNER JOIN SubTable AS S ON M.MainTableID = S.MainTableID) AS main
  LEFT JOIN PageTypeTable AS ptype ON main.PageTypeID = ptype.PageTypeID
 WHERE ptype.PageTypeID = 89

This time in the SQL Profiler I get very different results:

Batch #   CPU   Duration   Reads   Writes   RowCounts
1          0    6          32      0        13
2          0    2          18      0        13
3          0    1          18      0        13
4         16    1          18      0        13
5          0    1          18      0        13

So the key differences between the two:

  • Reads went down from 2372 to 18 logical reads.
  • Writes went down from 3 to 0 (nothing is written to temp tables any more, so no writes).
  • Row count went from 1026 to 13, the only rows we actually cared about.

So think about working on a data set that has ten thousand, twenty thousand, or a hundred thousand rows. How do you think your server will fare? Now, on top of that, add your traffic: 10 people executing the same query, 100 people, 1000 people? The effects of writing code like the original example can be disastrous and are unacceptable; it will not matter how much hardware you throw at it, it will not fix the problem.

So the question is, do the performance impacts end here?
No! Why? Think about the full table scans: what is the SELECT going to do to the table? It is going to take shared locks on it. Now, if this is an OLTP system, there are selects and updates happening all the time; if we hold shared locks across the entire table, what is going to happen to a request for an EXCLUSIVE lock? It will cause blocking, because SQL Server cannot access the table until the shared locks are released.

How can we test that? I can’t simulate thousands of inbound connections, but using SQL Server Profiler we can see what kind of locks SQL Server takes out when we run the first batch of statements. You will see that SQL Server takes out locks on all the tables involved in the batch. You will also notice the locks being distributed between your database and tempdb, because it acquires shared locks in your database but exclusive locks in tempdb to do the bulk load operation. Executing that batch generated over 5000 events (Lock Acquired, Lock Released, Lock Timeout, and Statement Completed). When I ran the second piece of code, it generated about 2000 events; less than half the locks acquired and released. So you save additional processing time. This also saves on memory usage, as we no longer have to read the entire table into memory, and it allows SQL Server to take selective page locks only; if the table is large enough, it should not cause blocking with other transactions that want to update single rows elsewhere in the table.
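
If you prefer not to set up a Profiler trace, here is a minimal sketch of watching the locks from a second session while the batch is running (the session id of 55 is a placeholder; use the SPID of the connection executing the batch):

SELECT resource_type,
       resource_database_id,
       request_mode,
       request_status
  FROM sys.dm_tran_locks
 WHERE request_session_id = 55   -- placeholder: the SPID running the batch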

Conclusion:
It is a general recommendation to avoid selecting rows and columns you don’t need; doing so wastes I/O, memory, and CPU time. Yes, hardware is improving very quickly; however, at one time or another you will have to deal with the code. Throwing more hardware at it just patches the problem, it does not fix it.

References:
MSDN Online, Tables, Link.
SQL Server Central, TSQL – Tempdb Question of the day, Link.
SQL Storage Engine Engineers Blog, Lock Escalation, Link.

Updates:
May 15, 2009: I was mistaken on when TVs were introduced into SQL Server (thanks Chris!).
