
Archive for the ‘Performance Tuning’ Category

Dissecting SQL Server Execution Plans

May 31, 2016 7 comments

I remember my days before becoming a Microsoft SQL Server PFE.  I wanted to learn everything and know everything about SQL Server.  However, getting hold of good resources was tough, as I didn't have a mentor when I started on my journey to becoming a SQL Server Database Administrator.

Along the way I did pick up lots of books and references.  One such book is Dissecting SQL Server Execution Plans.

I read this book before becoming a PFE, I still read it now, and I recommend everyone read it more than once.

Grant Fritchey (Blog|Twitter) wrote the book back in 2008; I would still recommend reading it.  This book will only help you become a better DBA and developer.

SQL Central, Jeff Moden, Dissecting SQL Server Execution Plans

http://www.sqlservercentral.com/articles/Book+Reviews/69019/

Amazon, SQL Server Execution Plans

http://www.amazon.com/gp/product/1906434026?ie=UTF8&tag=dkranchnet&linkCode=as2&camp=1789&creative=390957&creativeASIN=1906434026

SQL Central, Red Gate, EBook

http://www.sqlservercentral.com/articles/books/65831/?utm_source=ssc&utm_medium=weblink&utm_content=Grant&utm_campaign=sqltoolbelt

This post is cross posted on my SQLCAN Blog, MSDN Blog, and SQL Server Consultation blog.

Index Creation Date

October 18, 2015 Leave a comment

Often when troubleshooting performance related issues, it is beneficial to know when new indexes were introduced.  However, currently there is no easy way to keep track of this information or obtain it.  That's why one of my colleagues and a good friend, Mohamed Sharaf (Blog | Twitter), has submitted a request on Microsoft Connect, Expose index creation date in sys.indexes, Link.  Please take a few minutes to vote if you agree.
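
In the meantime there is a partial workaround worth knowing: indexes created through PRIMARY KEY or UNIQUE constraints have a matching row in sys.objects, which does expose a creation date.  The sketch below covers only those constraint-backed indexes; plain nonclustered indexes remain invisible.

-- Sketch: creation date for constraint-backed indexes only.
-- The constraint object in sys.objects shares the index name.
SELECT o.name        AS TableName,
       i.name        AS IndexName,
       c.create_date AS CreateDate
  FROM sys.indexes i
  JOIN sys.objects o ON o.object_id = i.object_id
  JOIN sys.objects c ON c.parent_object_id = i.object_id
                    AND c.name = i.name
 WHERE i.is_primary_key = 1
    OR i.is_unique_constraint = 1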

This post is cross posted on my SQLCAN Blog, MSDN Blog, and SQL Server Consultation blog.

Edmonton SQLPASS – Performance Troubleshooting

September 14, 2015 2 comments

I will be presenting a short talk on things to consider when it comes to performance troubleshooting SQL Server on September 17, 2015.  The goal is to provide practical advice that SQL Server DBAs/developers can use to help identify some of the common performance issues they might have within their environment.  We'll look at the key resources of SQL Server, the pressure on those resources, and how to look inside the black box of SQL Server.

If you're interested, please register and show up; Edmonton SQL PASS registration is here.

Help! How do I find disk I/O problems?

October 3, 2014 2 comments

This is a big topic, so the advice below is not meant to be comprehensive; it is more of a general troubleshooting walkthrough that you can use to try to identify the problem.

Disk performance is generally measured by the disk latency we are experiencing for reads and writes.  For reads going to disk we would like to have disk latency between 8 – 15 ms (measured by Avg. Disk Sec/Read), and for writes with cache backup we would like disk latency to be between 4 – 8 ms (measured by Avg. Disk Sec/Write).  In addition to these we need to understand how many IOPS (Input/Outputs Per Second) are being generated on the server.  These can be measured by the counters Disk Reads/sec, Disk Writes/sec, and Disk Transfers/sec.  We need to understand the IOPS because generally there is an upper limit on the number of IOPS a disk can handle.  For example, a 15,000 RPM disk can handle approx. 150 – 225 IOPS; and if we are working with a SAN, then understanding the true volume can be beneficial.  It might be difficult to assess how many IOPS your SAN can support; however, you can have an intelligent discussion if you know the IOPS you are generating.
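
These latency numbers can also be cross-checked from inside SQL Server.  Below is a sketch using sys.dm_io_virtual_file_stats; note the values are cumulative since startup, so sample twice and take the difference for a live picture.

-- Average read/write latency per database file, cumulative since startup.
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgRead_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWrite_ms
  FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
  JOIN sys.master_files mf ON mf.database_id = vfs.database_id
                          AND mf.file_id = vfs.file_id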

For SAN communication through Fiber Channel, we also have to consider the HBA Queue Depth setting.  By default, the HBA card Queue Depth nowadays is set to 32, which is usually sufficient; however, we can use SQLIO (a stress testing tool) and SQLIOSim (which tests the SAN/server with I/O patterns similar to what SQL Server generates) to test other settings; in some cases a higher HBA queue depth setting can lead to better performance.  I recommend working with your SAN and HBA vendor to figure out the best setting for you.

After this we have the throughput being generated; we can look at how many bytes are being transferred per second (Avg. Disk Bytes/Transfer).  This can help us assess the total data being pushed through the HBA (Fiber Channel) or network (iSCSI).  A 4Gbps HBA can support 476MB/s throughput at most; a 10Gbps HBA can support 1192MB/s at most.  However, both these methods have limitations defined by the fiber and switches they must travel through and the speed negotiated.

We also like to look at the Avg. Disk Queue Length for both reads and writes (Avg. Disk Read Queue Length, Avg. Disk Write Queue Length, and Avg. Disk Queue Length).  It is hard to define a range for these values, because this calculation used to be done by knowing the number of spindles assigned to a LUN, which in most modern SANs is difficult to assess.  That said, a large number, like 30 or 40+, still represents an issue.  If that happens you should see multiple I/O stalls in SQL Server, and disk latency should be poor.

These counters, together with an understanding of the underlying hardware configuration, can be used to understand the bottlenecks in the disk subsystem.  For example, let's say we have a blade center enclosure with 16 servers and 4 HBA cards, each being 4Gbps (total throughput of 1904MB/s), shared among the 16 servers in total.  However, each server in the blade center can access at most two HBAs at any given time; therefore each server's effective throughput is limited to 952MB/s.  That 952MB/s is then shared by at least 8 servers within the enclosure; without knowing the other servers' workload, and assuming equal distribution, we are left with a final bandwidth limitation of 119MB/s per blade center server.  Therefore understanding the underlying hardware can help us interpret the numbers we collected; are they an issue, or are they within acceptable parameters?

Another indication of an I/O subsystem bottleneck is stalled I/O messages appearing in the SQL Server error logs.  In addition, you can see stalled I/O requests within the SQL Server DMVs (sys.dm_io_pending_io_requests) and the waits being generated on disk by looking at the latch waits in sys.dm_os_wait_stats or sys.dm_os_latch_stats.
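
A quick sketch of both checks; these are real DMVs, though the thresholds you apply will depend on your baseline.

-- Any I/O that SQL Server has issued but the OS has not yet completed.
SELECT io_type, io_pending, io_pending_ms_ticks
  FROM sys.dm_io_pending_io_requests

-- Waits accumulated on data-page I/O since the last restart.
SELECT wait_type, waiting_tasks_count, wait_time_ms
  FROM sys.dm_os_wait_stats
 WHERE wait_type LIKE 'PAGEIOLATCH%'
 ORDER BY wait_time_ms DESC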

“SQL Server is running slow!?!” Help!

April 15, 2013 Leave a comment

How many times have we heard from an end-user that SQL Server is running slow, but couldn't track down why, or what happened?  In this post I hope to point out some of the things you can consider.  I'll approach this scenario two ways: 1) SQL Server is running slow because the processor is pinned, and 2) SQL Server is running slow but the processor is not pinned.

SQL Server is running slow because the processor is pinned

What are some of the things that require CPU in SQL Server?  When dealing with a processor related issue, we need to know what might cause CPU utilization and what kind of utilization we have (privileged or user).  Before we start troubleshooting any kind of issue on the server, ask the first obvious question: is it really sqlservr.exe (the SQL engine) causing the issue?  In the CPU's case it is pretty easy; we can look at Task Manager.  However, if we want to use Windows Performance Monitor (perfmon), then we should look at at least three counters: Processor\% Processor Time (_Total), Process\% User Time (sqlservr), and Process\% Privileged Time (sqlservr).

An interesting difference between the Processor and Process counters: the Processor counter is an average across all processors visible to the operating system, whereas the Process counter is a summation across all processors visible to the operating system.  So on a 4-CPU server, Processor\% Processor Time tops out at 100% while Process\% User Time can read up to 400%; divide the Process value by the number of CPUs to compare the two.

Image 1: Processor Utilization

If we look at the utilization, the Process\% User Time is much higher than Processor\% Processor Time; therefore to really see the utilization for SQL when looking at Process counters, we need to average the value to get the true utilization.  When looking at Processor\% Processor Time, I would ideally like the values to stay below a maximum of 70-80%; if these values go above that for an extended period of time, then we need to figure out what is causing the CPU utilization.  Maybe it is time to migrate some workloads off this server, maybe it's time to purchase new hardware, or maybe we have runaway queries and need to do some performance tuning.  All of this normally depends on having a proper baseline of these values.  However, let's say we do have an issue; now what?

First thing to note: everything in SQL Server uses CPU.  Like life, nothing comes free.  Memory reads (logical reads), disk reads/writes (physical reads/writes), hash operations, execution plans, etc. all require CPU.  The question is: what is using up the CPU?

Maybe it's physical operations?

If it's physical operations, I should see my Process\% Privileged Time higher than normal.  Generally for SQL Server we recommend this value should not exceed 25-30%.  In addition to this we should see high disk usage, or some kind of memory pressure that is causing SQL Server to write dirty pages out, or a high amount of scans causing a large number of physical reads.  So we potentially need to look at the Logical Disk or Physical Disk counters, SQL Server: Access Methods, and SQL Server: Buffer Manager counters.  In SQL Server we might see PAGEIOLATCH_* wait types in the sys.dm_os_waiting_tasks DMV.
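
A quick way to spot those waits live; a minimal sketch against the DMV just mentioned.

-- Sessions currently waiting on physical page I/O.
SELECT session_id, wait_type, wait_duration_ms, resource_description
  FROM sys.dm_os_waiting_tasks
 WHERE wait_type LIKE 'PAGEIOLATCH%'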

Maybe it's logical operations?

If it's logical operations, I should not see a high number in Processor\% Privileged Time but should see a high number in Process\% User Time.  In SQL Server, maybe I want to run a query to get the most expensive queries by logical reads from sys.dm_exec_query_stats, to see who is causing excessive logical reads.  You can look at the Get Top 50 SQL Statements/Query Plans post on how to get this information, as sketched below.
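
The linked post has the full script; a minimal version of the idea looks like this.

-- Most expensive statements by cumulative logical reads.
SELECT TOP (50)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS AvgLogicalReads,
       st.text AS SQLText
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
 ORDER BY qs.total_logical_reads DESC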

Maybe it's hash operations?

Again, this should be user time utilization on the CPU, and if I have lots of hash operations, that means I have potential for high TempDB usage.  So I might look at the counter SQLServer:Access Methods\Workfiles Created/sec in relation to SQLServer:SQL Statistics\Batch Requests/sec; we like to see this ratio around 1:5.  In SQL Server we might want to look for queries with a high amount of worker time.  In addition, we might see the SOS_SCHEDULER_YIELD wait_type in sys.dm_os_waiting_tasks if the box is very busy.
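
Both counters are also exposed inside SQL Server, so the ratio can be sampled in T-SQL; note the DMV returns cumulative values for these counters, so sample twice and diff for per-second rates.

-- Raw values for the two counters used in the 1:5 ratio above.
SELECT object_name, counter_name, cntr_value
  FROM sys.dm_os_performance_counters
 WHERE counter_name IN ('Workfiles Created/sec', 'Batch Requests/sec')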

Maybe it's execution plans?

If it's execution plans, then we might have a high number for SQLServer:SQL Statistics\SQL Compilations/sec or SQLServer:SQL Statistics\SQL Re-Compilations/sec.  If it's re-compilations, then in sys.dm_exec_query_stats we might find execution plans with a high plan_generation_num.  Alternatively, we might have high compilations/sec because lots of ad-hoc T-SQL (any code that is not an object within SQL Server, such as stored procedures, triggers, etc.) is being executed (greater than your baselines).
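
A sketch for the re-compilation case; plan_generation_num increments each time a cached plan is recompiled.

-- Plans that have been recompiled the most since being cached.
SELECT TOP (20)
       qs.plan_generation_num,
       qs.execution_count,
       st.text AS SQLText
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
 ORDER BY qs.plan_generation_num DESC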

SQL Server is running slow but processor is not pinned

Well, we looked at some of the scenarios where the CPU might be pinned; how about where the CPU is not pinned?  What might be the issue then?  In these cases, if your CPU is not pinned but things are running slow, the problem might be a resource contention issue.  Therefore you might have blocking in SQL Server.  If you look at Activity Monitor you might see wait types such as LCK_M_*, ASYNC_NETWORK_IO, etc.

We can look at the DMVs sys.dm_os_waiting_tasks, sys.dm_exec_requests, and sys.dm_tran_locks to see where the contention issue is.  From there we can dig further into the SQL statement using sys.dm_exec_sql_text and then into the execution plan using sys.dm_exec_query_plan.
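
Joining two of those DMVs gives a minimal blocking snapshot; a sketch:

-- Who is blocked, by whom, on what, and what they are running.
SELECT wt.session_id,
       wt.blocking_session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       st.text AS SQLText
  FROM sys.dm_os_waiting_tasks wt
  JOIN sys.dm_exec_requests r ON r.session_id = wt.session_id
 CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
 WHERE wt.blocking_session_id IS NOT NULL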

In addition, if you need to track blocking in SQL Server over time, please look at the Blocked Process Report in SQL Profiler post.

Conclusion

I am sure everyone has run into their fair share of issues; unfortunately the answer is rarely simple, especially when it comes to performance issues.  However, I hope this post gave you some ideas for your further troubleshooting.  Another thing to consider is: what if the issue happened a few hours ago, or yesterday, and you are just finding out about it?  That is really outside the scope of this post, but some things that might help in that case are Management Data Warehouse, looking at previous executions in the DMVs, or using other external monitoring tools such as Microsoft System Center Operations Manager.

How to calculate Max Memory for SQL Server?

March 4, 2013 Leave a comment

SQL Server is designed to manage the memory on a server dynamically.  That is, it will acquire memory as needed and release it when asked to (by the OS).  It's the good kid on the playground, making sure other processes on the OS don't starve.  Unlike 32-bit (x86), where the SQL Server process was capped to 1.7GB, on 64-bit (x64) we do not have this limitation, as the virtual address space (VAS) for user applications is 8TB.  Therefore for SQL Servers running on x64, I often recommend setting Max Memory to control the amount of memory each instance can use.  Up to SQL Server 2008 R2, Max Memory governed the amount of memory assigned to the buffer pool only.  However, starting with SQL 2012, it covers almost all the memory SQL can consume.  (How the memory architecture changed in SQL 2012 is beyond this post; maybe a future post?)  However, even with SQL 2012, we should still reserve some memory for SQL threads and external components.  So I am still using a similar formula for all versions until I have reason to change it.

* Please note all calculations are for x64 systems.

Max Memory Per Instance = (Max Physical Memory - Memory Reserved for OS - Memory Reserved for SQL Server) / Number of Instances

Where …

Max Physical Memory = total physical memory available on the server.  If you are calculating this for a clustered environment, then I would use the available physical memory from the node with the least amount of memory.

Memory Reserved For OS = really it's not just the operating system I am referring to, but any other applications running on the server that are not the SQL Server engine.  I have used the guidelines below for the initial configuration, and I will adjust this value with further monitoring and tuning.  Namely, I will monitor Memory\Available MBytes and Paging File\% Usage (_Total).  On x64 I want Available MBytes to be 1024MB or above and paging file % Usage <= 20%.

Physical RAM        RAM Left for OS
4096MB (4GB)        1024MB (1GB)
8192MB (8GB)        2048MB (2GB)
16384MB (16GB)      3072MB (3GB)
24576MB (24GB)      4096MB (4GB)
32768MB (32GB)      5120MB (5GB)
40960MB+ (40GB+)    6144MB (6GB)

Memory Reserved for SQL Server: this formula is borrowed from the SQL 2005/2008 days, but I am still using it in SQL 2012 with positive results.

Memory Reserved for SQL Server = # of Max Threads * 2MB (the x64 thread stack size)

Where ….

# of Max Threads: this refers to the maximum number of worker threads SQL Server can create.  By default this value is "0", meaning SQL calculates the maximum number based on the number of CPUs, as below.  CPUs in this context refers to all CPUs visible to the operating system.  For example, if you have 2 physical CPUs with 4 physical cores each and Hyper-Threading, then for the purpose of this formula the # of CPUs = (2 * 4 * 2) = 16.

# of Max Threads = 512 + [(# of CPUs - 4) * 16]   (for x64 with more than 4 CPUs)

This formula is by no means absolute; please feel free to adjust it to your needs and environment.  After you configure SQL Server with Max Memory as listed above, monitor your available memory (Memory\Available MBytes), page file (Paging File\% Usage (_Total)), and SQL Server memory pressure (SQLServer:Buffer Manager\Page Life Expectancy), and adjust the memory you reserve for the OS and SQLOS.
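
You can also cross-check the thread arithmetic against what SQL Server itself reports; a small sketch (max_workers_count reflects the instance's own calculation of this formula):

-- Compare your hand calculation with the server's own numbers.
SELECT cpu_count, max_workers_count
  FROM sys.dm_os_sys_info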

Lets work through an example …

  • Operating System: Windows 2008 R2 (x64)
  • # of Physical Sockets (CPUS): 4
  • # of Physical Cores per CPU: 6
  • Hyper-Threading Enabled: Yes
  • Number of SQL Server Instances: 2
  • Physical RAM Available: 16GB (16384MB)

First Step: Calculate the maximum number of worker threads…

# of Max Threads = 512 + [(4*6*2 - 4) * 16] = 1216

Second Step: Memory Reserved for each SQL OS

SQLOS = 1216 Threads * 2MB = 2432MB

Third Step: Memory Reserved for Operating System

Memory Reserved for OS = 3072MB (Based on my reference table above)

Fourth Step: Total Memory Reserved for all Instances

Total Memory Reserved for SQLOS = 2 * 2432 = 4864MB

Final Step: Putting it All together

Max Memory Per Instance = [16384MB (Physical) – 3072MB (Reserved for OS) – 4864MB (Reserved for SQLOS)] / 2 (Number of Instances)

Therefore Max Memory Per Instance = 4224MB

Wow!  Even though the server has 16GB, we are leaving SQL Server with only 4.125GB per instance.  Please note this is an extreme value; it assumes you have a SQL Server that is going to use all 1216 threads.  Therefore, if you monitor your SQL Server and see your available memory is ALWAYS above 1024MB, maybe you want to give more to your SQL instances.  In this example, let's suppose we know instance 2 only requires 2048MB.  Then we can assign 6400MB to instance #1 without worrying about performance.

I hope I explained this topic a bit.  Generally, troubleshooting performance issues with memory is a lot of work, but I usually start here when dealing with memory related issues.

PS: Please note I only use MB in my calculations, because we set Max Memory in SQL Server in MB, not GB or KB.
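
To make the arithmetic repeatable, here is the whole calculation as a T-SQL sketch; the inputs mirror the example above, and the assumptions (x64, more than 4 CPUs, 2MB thread stack, the OS reserve table) are the ones stated earlier.

-- Max Memory calculation sketch; adjust the inputs to your hardware.
DECLARE @PhysicalMB  INT = 16384,       -- node with the least memory
        @CPUs        INT = 4 * 6 * 2,   -- sockets * cores * hyper-threading
        @Instances   INT = 2,
        @OSReserveMB INT = 3072;        -- from the reference table above

DECLARE @MaxThreads     INT = 512 + (@CPUs - 4) * 16;        -- 1216
DECLARE @SQLOSReserveMB INT = @MaxThreads * 2 * @Instances;  -- 2MB stack per thread

SELECT (@PhysicalMB - @OSReserveMB - @SQLOSReserveMB) / @Instances
       AS MaxMemoryPerInstanceMB;                            -- 4224 for these inputs

The resulting value is what you would then apply per instance with sp_configure 'max server memory (MB)'.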

July 23, 2013 – Small update for calculating the # of CPU formula in # of Max Threads section.

Performance Impact of SQL Server Profiler

January 7, 2013 Leave a comment

The general best practice is to capture only the events and columns you need when using SQL Server Profiler.  But how much impact does it have?  Well, the answer for me has always been IT DEPENDS!

But really, how much impact does it have?  So I decided to do some testing.  I will be using the code below for each test case scenario.

USE TempDB
GO

SET NOCOUNT ON
GO

-- Capture the start time.
SELECT GETDATE()
GO

CREATE TABLE T1 (Col1 INT, Col2 INT, Col3 INT)
GO

-- Insert 100,000 rows one at a time.
DECLARE @Cntr INT

SET @Cntr = 0

WHILE (@Cntr < 100000)
BEGIN
    SET @Cntr = @Cntr + 1
    INSERT INTO T1 VALUES (@Cntr, @Cntr*10, @Cntr*100)
END
GO

DROP TABLE T1
GO

-- Capture the end time.
SELECT GETDATE()
GO

Note: I know there are better ways to do a large insert, but for this test, this way works pretty nicely :).

I am simply creating a table, throwing 100,000 rows into it, and taking the start and end time so I can calculate the total time it takes to complete the operation.  In my control test, with no SQL Server Profiler trace running, it took 2416ms to complete the insert.
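
As an aside, the two GETDATE() calls can be folded into one measurement if the workload is kept in a single batch; a small sketch of that variant:

-- Timing-harness variant: one batch, DATEDIFF does the arithmetic.
DECLARE @Start DATETIME2 = SYSDATETIME();

-- ... workload goes here (must stay in the same batch, no GO) ...

SELECT DATEDIFF(MILLISECOND, @Start, SYSDATETIME()) AS Elapsed_ms;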

Now I set up a SQL Server Profiler trace with the following events (with all columns):

Errors and Warnings

  • ErrorLog
  • EventLog
  • Exception

Security Audit

  • Audit Login
  • Audit Logout

Stored Procedures

  • RPC: Completed
  • RPC: Starting
  • SP: StmtCompleted
  • SP: StmtStarting

TSQL

  • SQL: BatchCompleted
  • SQL: BatchStarting
  • SQL: StmtCompleted
  • SQL: StmtStarting

Let's run it again and see what happens…

Waiting ….

Waiting …

I had time to go get some coffee and snacks while it was running; when I came back it had finished.  It took 165106ms, that is 68x worse.  WOW, SQL Profiler is evil!

Well, let's not throw SQL Server Profiler on the chopping block just yet; maybe some events are more expensive than others?  So let me take out all of the Security and Error events, since I really don't care about them.

And …

It executed in 166530ms.  Huh, slightly worse?  Okay, that is just weird; let's chalk that up to my crappy laptop.  We can say those events didn't make much of a dent, because they were not capturing much to begin with.  Well, now what can I do?  I want to use Profiler to capture what is running, but it is just too painful to wait this long.  How about we get rid of some of the columns we don't need?

So I have eliminated more than half of the columns, and it's running; tick tock, tick tock …

Nope it didn’t finish while I typed that sentence as I hoped… hmmm … tick tock tick tock… waiting I do…

IT FINISHED!  It finished in 56550ms; wow, that is about 1/3 of the time it took when I captured all the columns.  But that's still 23x worse than my control.  Hmm, well, I don't really need the Starting events because nothing is failing, so I'll only capture Completed events.  Let's see if that helps.

Well, it helped; it finished in 40086ms.  Slightly better, but still worse than the control, about 16x worse.  Improving slowly… alright, let's remove the RPC events (since this is all batch work).  As we've seen before, I don't expect much improvement.  But what the hey… that's why we do testing, right?

36303ms!!!  An improvement; why, what is going on?  Well, it turns out that my SQL Server had some background processes running (Ooops!) that were making RPC calls, so they did hurt performance a little.  So now this is 15x worse.

I am now only capturing SQL: BatchCompleted and SQL: StmtCompleted, with a select number of columns.  I could possibly eliminate some more columns, but overall at this point it might not be worth it.  So what is the end lesson?

SQL Server Profiler IS going to have a performance impact on your server; you can minimize the impact, but you cannot eliminate it.

Results Summary

  • Control Test, No Profiler: 2416ms (2 Seconds 416ms)
  • Run #1, All Events & All Columns: 165106ms (2 Minute 45 Seconds 106ms)
  • Run #2, Only RPC/TSQL Events & All Columns: 166530ms (2 Min 46 Seconds 530ms)
  • Run #3, RPC/TSQL & Select Columns: 56550ms (56 Seconds 550ms)
  • Run #4, Only Completed Events & Select Columns: 40086ms (40 Seconds 86ms)
  • Run #5, Only TSQL & Selected Columns: 36303ms (36 Seconds 303ms)

Lessons Learned

SQL Profiler is a very important tool for finding out what is happening.  But you need to understand what you need; do not capture every event and column.  You cannot eliminate the performance impact, but it is the lesser of two evils: either know what is going on, or don't know and live with the performance problems.  Yes, we can use Dynamic Management Objects, but the tool is still important.

Next week I will do the same test using a SQL Server server-side trace and follow it up with Extended Events.  After which I'll compare all three 🙂

 

Slow disk write performance when you use a back-end SAN on a computer that is running Windows Server 2008 R2

November 14, 2012 Leave a comment

An interesting fix was released today: your WRITE performance might be poor due to an incorrect DPC priority in Windows 2008 R2.

If write performance on your SQL Server running on Windows 2008 R2 with a SAN is worse than it was on Windows 2003, please look at this HOTFIX.

Cheers!

Getting List of Top Queries by Query Hash

May 16, 2012 Leave a comment

Starting with SQL 2008, we have had the query_hash column in sys.dm_exec_query_stats.  It helps group common queries together, because when SQL Server generates an execution plan or caches a query, it bases it on a hash value it calculates.  However, if your application mostly uses ad-hoc T-SQL (i.e. not stored procedures), then the hash value differs for every set of literal values, so before 2008 the query stats DMV couldn't provide proper grouping.  The script below takes advantage of query_hash and gives you total stats for the key columns.

-- SQLCanada, Mohit K. Gupta
-- Script from https://sqlcan.wordpress.com
-- Last Updated: May 12, 2012
------------------------------------------

WITH QSD (SQLStatement, PlanHandle, NumOfExecutions, Duration_ms, CPU_ms, Reads, Writes, QueryHash)
AS (SELECT   SUBSTRING(st.text,
                       (qs.statement_start_offset/2)+1,
                        ((CASE qs.statement_end_offset WHEN -1 THEN
                             DATALENGTH(st.text)
                          ELSE
                             qs.statement_end_offset
                          END - qs.statement_start_offset)/2) + 1)  AS SQLStatement
            , qs.plan_handle AS PlanHandle
            , execution_count AS NumOfExecutions
            , total_elapsed_time/1000 AS Duration_ms
            , total_worker_time/1000 AS CPU_ms
            , total_logical_reads AS Reads
            , total_logical_writes AS Writes
            , query_hash AS QueryHash
       FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text (qs.sql_handle) st
      WHERE query_hash != 0x0000000000000000)

  SELECT QSD.QueryHash,
         MIN(QSD.SQLStatement)    AS SQLStatement,
         MIN(QSD.PlanHandle)      AS PlanHandle,
         SUM(QSD.NumOfExecutions) AS TotalNumOfExecutions,
         SUM(QSD.Duration_ms)/SUM(QSD.NumOfExecutions) AS AvgDuration_ms,
         SUM(QSD.CPU_ms)/SUM(QSD.NumOfExecutions) AS AvgCPU_ms,
         SUM(QSD.Reads)/SUM(QSD.NumOfExecutions) AS AvgReads,
         SUM(QSD.Writes)/SUM(QSD.NumOfExecutions) AS AvgWrites
    FROM QSD
GROUP BY QueryHash

Example Output:


If you notice, I don't have any kind of sort or TOP 50 clause.  The reason is I don't want to tax SQL Server; after this list is produced I can pull it into Microsoft Excel for analysis.  On a 32-bit system you can get away with doing this right on the server; however, on a 64-bit server with 128GB of memory, the data set returned can be large, and trying to do a TOP 50 or other sorting can be costly.

Also, please note I didn't include sys.dm_exec_query_plan, for similar reasons.  After I find the statement I want to focus on, I'll use the value of PlanHandle to pull out the individual plans.
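
That last step looks like this; a sketch, with the TOP (1) standing in for the PlanHandle value you would paste from the script's output.

-- Pull one execution plan by its handle.
DECLARE @PlanHandle VARBINARY(64) =
        (SELECT TOP (1) plan_handle FROM sys.dm_exec_query_stats);

SELECT query_plan
  FROM sys.dm_exec_query_plan(@PlanHandle);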

How to implement PAL (Performance Analysis of Logs) counters in Windows 2003?

April 1, 2012 3 comments

PAL was designed by Clint Huffman and is available on Codeplex.com.  This tool analyzes performance counter logs for you to identify if anything is outside a threshold.  These thresholds are predefined in the program, so you don't have to remember stuff like the minimum "Page Life Expectancy" being 300 (although you really should remember at least that one :P).  But it has many, many good counters to look at for the system and SQL Server.  So instead of me providing a list of all the counters you can look at, there is functionality in PAL to export an XML file from which you can create a Windows 2008 based perfmon capture (I will blog about that another time).  But what I want to do is use the same counters on Windows 2003; however, the 2003 version does not support XML, so we have to use a command line tool called logman.exe.

PAL Wizard - Threshold File

1. Start PAL.
2. Go to Threshold File.
3. Click on the Threshold File drop-down and select "Microsoft SQL Server 2005/2008".
4. Click Export to Perfmon Template File.

PAL - Instance Name Dialog Box

5. It'll ask for the instance name; click OK (assuming we are running it on the default instance).
6. Close PAL.
7. Open the XML file in Notepad.

Modifying XML Data File

SQLCounters.XML File XML To Delete @ TOP

PAL Updating XML File

SQLCounters.XML File XML To Delete @ BOTTOM

8. Delete the Highlighted Part at the Top of the XML file and at the Bottom of the XML file.
9. Search for and replace ”    ” with (nothing).
10. Search for and replace “” with (nothing).

Clean Counter List

11. So you are left with the above image.

Save Dialog Box
12. Now save the file as SQLCounters.txt and make sure to change the encoding to ANSI (the Windows 2003 command doesn't like Unicode).
13. Run following command from Command Prompt.

logman create counter SQLPerfCollection -cf "C:\Temp\SQLCounters.txt" -si 00:10

14. You should now see a collection in perfmon called SQLPerfCollection.

Credit for this goes to a colleague of mine, Microsoft Platforms PFE (Thanks!!).

Categories: Performance Tuning, Tools