
Posts Tagged ‘fn_dblog’

How to find out what is causing page splits?

March 30, 2012

A client was noticing a high number of Page Splits/sec relative to Batch Requests/sec, but could not figure out why. They were not having a performance issue, but were still curious what was generating the splits.

So how do we track it? The short answer is there is no easy way to do this. Perfmon can tell you that page splits are taking place, via the Page Splits/sec counter in the SQLServer:Access Methods object, but it cannot tell you where they are happening.

In SQL Server 2000/2005 we can use the fn_dblog function to read the transaction log file; in SQL Server 2008 we can rely on Extended Events (a sketch follows the example below).

  SELECT   Operation
         , AllocUnitName
         , COUNT(*) AS NumberofIncidents
    FROM sys.fn_dblog(NULL, NULL)
   WHERE Operation = N'LOP_DELETE_SPLIT'
GROUP BY Operation, AllocUnitName

Example from reference #1.
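For the SQL Server 2008 route, a minimal Extended Events session along the lines of reference #4 might look like the sketch below. The session name is made up, and note that the 2008 page_split event exposes very little detail (reference #2 is the Connect item asking for additional columns), so treat this as a starting point rather than a finished monitor.

-- Sketch: capture page splits with the 2008 page_split event,
-- collecting the database ID and SQL text into a ring buffer.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.page_split
(
    ACTION (sqlserver.database_id, sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = START;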

I am not going to repeat the work done by others here; the code and examples are from the reference links below if you need additional details.

  1. Daniel Adeniji – Learning in the Open: Microsoft – SQL Server – Page Splits. Link.
  2. Microsoft Connect: SQL Server Extended Events Page/Splits Event Additions. Link.
  3. SQL Server Pro: Evaluating the Page Splits/sec Value. Link.
  4. Michael Zilberstein – SQLBlog.com (hopefully soon to be a SQL MVP ;-)): Monitoring page splits with Extended Events. Link.

Edit Notes

  • March 30, 2012: I had it wrong; Michael is not a SQL MVP yet, so here's to hoping he gets the MVP :). Thanks for the correction, mate.

How to find the log file usage for a transaction in SQL Server 2005?

March 30, 2012

A word of caution before you try this: I am using an undocumented function to read the transaction log. I do not guarantee the results to be accurate; please use it at your own risk.

A client was wondering if there was any way to figure out how much log file is being used by each statement executed on SQL Server. The simple answer is no. We can look at the Log Bytes Flushed/sec counter in the SQLServer:Databases object; however, it does not really correlate back to which transaction did the work. So I decided to do some testing using fn_trace_gettable (a documented function used to read trace files) and fn_dblog (an undocumented command).

First, open and set up a trace with the following events/columns; you can add more events if you think they are needed, for example SQL:StmtCompleted.

Please note this is the minimal data needed; you are free to capture additional data, but it will not be needed for the purpose of finding T-Log file usage.
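If you prefer to script the trace rather than click through the Profiler GUI, a minimal server-side trace using the documented sp_trace_* procedures might look like the sketch below. The SQL:BatchCompleted event with the TextData and StartTime columns is an assumption on my part, chosen because the calculation query later joins on StartTime and displays TextData; the file path matches the one read by fn_trace_gettable below.

-- Sketch: minimal server-side trace (assumed event/columns, see note above).
DECLARE @TraceID INT;
DECLARE @MaxFileSize BIGINT;
DECLARE @On BIT;
SET @MaxFileSize = 5;
SET @On = 1;

-- Create the trace definition; SQL Server appends .trc to the file name.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\temp\TraceDataForCalculation', @MaxFileSize, NULL;

-- Event 12 = SQL:BatchCompleted; column 1 = TextData, column 14 = StartTime.
EXEC sp_trace_setevent @TraceID, 12, 1, @On;
EXEC sp_trace_setevent @TraceID, 12, 14, @On;

SELECT @TraceID AS TraceID;  -- keep this ID to start/stop the trace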

Next, turn on the trace and execute some random statements that will be logged to the T-Log. After you have finished capturing your trace events, stop the trace and save the trace file somewhere you can read it with the fn_trace_gettable function.
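If you scripted the trace as sketched above, turning it on and off is done with sp_trace_setstatus; the trace ID of 2 below is just a placeholder for whatever sp_trace_create returned.

EXEC sp_trace_setstatus @traceid = 2, @status = 1;  -- 1 = start the trace

-- ... execute some statements that generate log records ...

EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- 0 = stop the trace
EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- 2 = close and delete the trace definition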

Now execute the following statement in the context of the database whose log space usage we want to track:

WITH TranTotal (TransactionID, TotalLogFileUsage, OperationCount)
AS
(
    SELECT   [Transaction ID]
           , SUM([Log Record Length])
           , COUNT(*)
      FROM sys.fn_dblog(NULL, NULL)
  GROUP BY [Transaction ID]
)
SELECT   dblog.[Transaction ID]
       , TT.OperationCount
       , T.TextData
       , TT.TotalLogFileUsage
       , dblog.[Begin Time]
  FROM sys.fn_dblog(NULL, NULL) dblog
 INNER JOIN TranTotal TT
    ON dblog.[Transaction ID] = TT.TransactionID
  LEFT OUTER JOIN sys.fn_trace_gettable('C:\temp\TraceDataForCalculation.trc', DEFAULT) T
    ON dblog.[Begin Time] = T.StartTime

From this you will get output like the below; please note one batch can have more than one transaction. I am not quite sure how the batch ID is determined; if I find out more on this, I will post it.

Where the TextData is NULL, my trace file did not have any data for that transaction; since I just used my regular test database and have never done a backup on it, there are transactions in there from before.
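As a side note, fn_dblog populates [Begin Time] only on the LOP_BEGIN_XACT record, which is why the join on StartTime works at all. If you would rather see one row per transaction instead of one row per log record, a variation on the query above (my tweak, not required for the result) is to filter on that operation:

-- Same query as above, but keeping only the begin-transaction record,
-- so each transaction shows up exactly once.
WITH TranTotal (TransactionID, TotalLogFileUsage, OperationCount)
AS
(
    SELECT [Transaction ID], SUM([Log Record Length]), COUNT(*)
      FROM sys.fn_dblog(NULL, NULL)
  GROUP BY [Transaction ID]
)
SELECT   dblog.[Transaction ID]
       , TT.OperationCount
       , T.TextData
       , TT.TotalLogFileUsage
       , dblog.[Begin Time]
  FROM sys.fn_dblog(NULL, NULL) dblog
 INNER JOIN TranTotal TT
    ON dblog.[Transaction ID] = TT.TransactionID
  LEFT OUTER JOIN sys.fn_trace_gettable('C:\temp\TraceDataForCalculation.trc', DEFAULT) T
    ON dblog.[Begin Time] = T.StartTime
 WHERE dblog.Operation = N'LOP_BEGIN_XACT'  -- one row per transaction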

PS: I am sure Extended Events makes this easy, but right now 2005 was the goal. I'll post on 2008 later when I have “spare time” :p.
