

Archive for the ‘SQL 2005’ Category

Microsoft Support policy for SQL Server in Windows 8 and Windows 2012

November 9, 2012

Now that Windows 8 (which, by the way, ROCKS!) and Windows Server 2012 are out, Microsoft has released a KB article outlining support for SQL Server on these operating systems.

KB2681562 lists the support requirements, known installation issues, and so on; a one-stop shop of sorts. 🙂


How to find out what is causing page splits?

March 30, 2012

A client was noticing a high number of Page Splits/sec relative to Batch Requests/sec but could not figure out why. They were not having performance issues, but were still curious what was generating the splits.

So how do we track it? The short answer is that there is no easy way. Although perfmon can tell you page splits are taking place via the Page Splits/sec counter in SQLServer:Access Methods, it cannot tell you where they are happening.

We can use the fn_dblog function to read the transaction log in SQL Server 2000/2005; in SQL Server 2008 we can rely on Extended Events.

SELECT   Operation,
         AllocUnitName,
         COUNT(*) AS NumberOfIncidents
FROM     sys.fn_dblog(NULL, NULL)
WHERE    Operation = N'LOP_DELETE_SPLIT'
GROUP BY Operation, AllocUnitName;

Example output is shown in reference #1 below.

I am not going to repeat the work done by others here; the code and examples are from the reference links below if you need additional details.
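For SQL Server 2008, a minimal Extended Events sketch can surface the splits as they happen. This assumes the sqlserver.page_split event that shipped in 2008; the session name and actions here are illustrative, not from the original references:

```sql
-- Hypothetical session name; page_split fires per split and can be
-- correlated back to the offending statement via the sql_text action.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.page_split
    (ACTION (sqlserver.database_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = START;
GO
-- When done: ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = STOP;
```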

  1. Daniel Adeniji’s – Learning in the Open. Microsoft – SQL Server – Page Splits. Link.
  2. Microsoft Connect. SQL Server Extended Events Page/Splits Event Additions. Link.
  3. SQL Server Pro. Evaluating the Page Splits/sec Value. Link.
  4. SQLBlog.Com. Michael Zilberstein (Hopefully soon to be a SQL MVP ;-)). Monitoring page splits with Extended Events. Link.

Edit Notes

  • March 30, 2012: I had it wrong; Michael is not a SQL MVP yet, so here's to hoping he gets MVP :). Thanks for the correction, mate.

Unable to Delete Records Using API Cursors

March 11, 2010

I am not very familiar with the internals of API cursors, so this was an annoying problem to solve. I was working with a web guy whose user was working in a web front-end application. When the user tried to delete a record from the database, they got the following error:

Microsoft OLE DB Provider for SQL Server error ‘80004005’ 
Login failed for user ‘Domain\ServiceAccount’.

I thought it was a typical error, so my first thought was that it would be an easy problem. Looking at the SQL logs, I found the following entry for this error:

Login failed for user ‘Domain\ServiceAccount’. [Client: 10.0.0.1]
Error: 18456, Severity: 14, State: 8

State 8 means the login is not a SQL login account but is trying to log in as one. With this information in hand, I talked to the web guy and asked him to confirm his application pool credentials and connection string. He confirmed that neither the application pool nor the connection string used 'Domain\ServiceAccount', so we were confused as to where that account was coming from. The IP listed in the second error message gave us an idea of where to look: it was the SQL Server's own IP address, which confused me further. Why was the SQL Server logging in to itself in the middle of the web server's activity? I started SQL Server Profiler and captured the following:

DECLARE @p1 INT
SET @p1=180150013 
DECLARE @p3 INT
SET @p3=1
DECLARE @p4 INT
SET @p4=16386 
DECLARE @p5 INT
SET @p5=1 
EXEC sp_cursoropen @p1 output,N'SELECT * FROM Table WHERE ID=''1223'' AND SiteID = ''ZZZZ''',@p3 output,@p4 output,@p5 output
SELECT @p1, @p3, @p4, @p5 
GO
EXEC sp_cursorfetch 180150013,16,1,1
GO
EXEC sp_cursorfetch 180150013,16,2,1
GO
EXEC sp_cursorfetch 180150013,1040,1,1
GO
EXEC sp_cursor 180150013,34,1
GO
Login failed FOR USER 'Domain\ServiceAccount'. [CLIENT: 10.0.0.1]
EXEC sp_cursorclose 180150013
GO

Notice the same login failure appears in the middle of the API cursor calls; the "EXEC sp_cursor ..., 34, 1" statement means delete the current record. What was most confusing was why SQL Server itself was trying to log in with another service account when the delete executed.

Digging into the configuration a bit, I found that someone had created a linked server pointing to the local server. If you try to create a linked server to the local server through the GUI (Server Objects -> Linked Servers), you get the error "You cannot create a local SQL Server as a linked server". So how did this one get added using the 'Domain\ServiceAccount' account?

Looking into it a bit more, it is possible to add a linked server pointing at the local server (useless as far as I can tell) using the sp_addserver stored procedure, and then set its properties using sp_serveroption. The GUI simply will not let you interact with the resulting object.
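As a sketch of how it could have been done (the server name here is a placeholder):

```sql
-- Adds the local server as if it were a remote/linked server;
-- the GUI refuses to do this, but sp_addserver does not.
EXEC sp_addserver 'SQLServer';
-- Properties can then be set with sp_serveroption, e.g.:
EXEC sp_serveroption 'SQLServer', 'data access', 'true';
```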

Now I had the answer to why SQL Server was trying to log in to itself using that service account, but it was still a mystery why the API cursors were using the linked server. I confirmed that the SQL code coming from the front end (above) did not use a four-part name for the table object (Server.Database.Schema.Table). There should have been no reason to fall back on a linked server, and this was not a distributed query coming from another SQL Server either. I checked sys.servers: it had one entry with an id value of 1, but nothing with an id value of 0 (the local server's default). So I deleted the linked server (id = 1 in sys.servers) and had the web guy try again; the error changed to the following:

Microsoft OLE DB Provider for SQL Server error ‘80004005’
Could not find server ‘SQLServer’ in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers.

I started Googling to see if I could find the issue, without much luck. But while reading Pinal Dave's blog I got the idea to check the @@SERVERNAME property, which returned NULL. Confused further, I listed sys.servers to see what entries existed, as the error message suggested. It returned 0 rows (since I had deleted the only visible row), but there should always be at least one row referencing the local server with an id value of 0. This server had never been renamed; I don't know why the entry was missing. My guess is that someone added the missing entry using "sp_addserver 'SQLServer', 'SQLServer'", which created a remote server entry instead of a local server entry. I executed the following command to fix the table:

EXEC sp_addserver 'SQLServer', 'local' 
GO

This added a new entry with id = 0 (which is what the default entry should be). Since the local server was no longer viewed as a linked server under Server Objects, I had the web guy try again; the error changed to the following:

Microsoft OLE DB Provider for SQL Server error ‘80004005’
Server ‘SQLServer’ is not configured for DATA ACCESS.

Reading Pinal Dave's blog, some people suggested enabling the RPC Out property. That is valid for a linked server, but this was not a linked server, so why did I get this error? @@SERVERNAME was still NULL, even with the id = 0 record in sys.servers. That is because global variables like @@SERVERNAME are populated at startup, so I had to restart the SQL Server service before the variable was populated.
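A quick way to verify the fix after the restart (a sketch; the columns are from sys.servers in SQL Server 2005):

```sql
-- Should return the server's name, not NULL:
SELECT @@SERVERNAME AS LocalServerName;

-- Should contain exactly one row with server_id = 0 for the local server:
SELECT server_id, name, is_linked
FROM   sys.servers
WHERE  server_id = 0;
```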

After @@SERVERNAME returned the proper value, the web application started working as well. I'll investigate the link between API cursors and @@SERVERNAME in another article; for now, problem solved.

Problem: Unable to perform delete operations in an application that relies on API cursors.
Solution: Check that the @@SERVERNAME property is set; if not, check sys.servers. If there is no row for the local server, add one using sp_addserver and restart the SQL Server service.

References:

  1. SQL Learnings, Error 14274. Link.
  2. SQL Learnings, Error 18456. Link.
  3. Journey to SQL Authority with Pinal Dave, Error 7411. Link.
  4. API Cursors. System Stored Procedures. Link.
  5. SQL Server Performance. Exposing API Server Cursors. Link.
  6. MSDN. sp_addserver. Link.
  7. MSDN. sp_serveroption. Link.

SQL Server Integration Services Issue (Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum))

January 27, 2010

TITLE: Microsoft SQL Server Management Studio
——————————

Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476

——————————
ADDITIONAL INFORMATION:

The SQL server specified in SSIS service configuration is not present or is not available. This might occur when there is no default instance of SQL Server on the computer. For more information, see the topic “Configuring the Integration Services Service” in Server 2005 Books Online.

Login timeout expired
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
Named Pipes Provider: Could not open a connection to SQL Server [2].  (MsDtsSrvr)

——————————

I have used SSIS for ETL processes a few times, but only on standalone instances; I had never tried SSIS in a cluster environment. This particular issue can also happen in a standalone environment, because the SSIS service by default cannot handle named instances.

SSIS is also not cluster-aware and is a single-instance application (i.e., only one copy of SSIS can run on a server at a time). I read reference articles [1] and [2] to investigate my options, and decided to do the following to make SSIS package management controllable per SQL instance in a clustered, named-instance environment. The current configuration is an active-active two-node cluster with both instances using named instances.

1. Go to the C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ directory.
2. Open MsDtsSrvr.ini.xml file.

You will see something similar to the following:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
   <TopLevelFolders>
      <Folder xsi:type="SqlServerFolder">
         <Name>MSDB</Name>
         <ServerName>.</ServerName>
      </Folder>
      <Folder xsi:type="FileSystemFolder">
         <Name>File System</Name>
         <StorePath>..\Packages</StorePath>
      </Folder>
   </TopLevelFolders>
</DtsServiceConfiguration>

We need to do a couple of things in this file for it to work with our cluster:

  • Add an entry for each SQL instance in your cluster (e.g., for me, SQLCluster01p and SQLCluster02p)
  • Update it so it refers to the named instances instead of the single default instance (e.g., for me, Instance01 and Instance02)

So I modified the file as follows:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
   <TopLevelFolders>
      <Folder xsi:type="SqlServerFolder">
         <Name>MSDB_SQLCluster01p_Instance01</Name>
         <ServerName>SQLCluster01p\Instance01</ServerName>
      </Folder>
      <Folder xsi:type="SqlServerFolder">
         <Name>MSDB_SQLCluster02p_Instance02</Name>
         <ServerName>SQLCluster02p\Instance02</ServerName>
      </Folder>
      <Folder xsi:type="FileSystemFolder">
         <Name>File System</Name>
         <StorePath>..\Packages</StorePath>
      </Folder>
   </TopLevelFolders>
</DtsServiceConfiguration>

If you notice, I changed the first entry and added a second one. Now when you connect to SSIS from either cluster node, you will see both folders in SSMS.

Please note that even though we changed the name of the SqlServerFolder entry, the information is still stored in the msdb database of the instance entered in the ServerName property.
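Also note that the SSIS service must be restarted to pick up changes to MsDtsSrvr.ini.xml. On a default SQL Server 2005 install the service name is MsDtsServer, but verify the name on your own machine first:

```shell
net stop MsDtsServer
net start MsDtsServer
```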

I did it this way to let me manage the SSIS packages remotely and monitor their execution. But if you don't need to worry about stopping SSIS packages, you can simply store them on a network location or in the SSIS package store.

References:
[1] Configuring the Integration Services in Cluster Environment, Link.
[2] Configuring the Integration Services Service, Link.
[3] Different ways to execute a SQL Server SSIS package, Link.
[4] Description of the SQL Server Integration Services (SSIS) service and of alternatives to clustering the SSIS service, Link.

Adding new disk resource to cluster

January 10, 2010

My current configuration is a two-node cluster in an active-passive configuration. I had asked the storage team to add a new LUN to the cluster so I could relocate tempdb from its current location to a separate disk.

They added the disk to the server, but when I went to the active node (Node 1)'s Computer Management > Storage > Disk Management and tried to initialize the disk, I got the following error:

“The requested operation cannot be completed because the media is write-protected.”

Research on the internet led me to think the LUN was not properly configured for the cluster; I also tried the Action > Rescan Disks option, with no success. The new disk showed up as Not Initialized on Node 1, and the passive node showed a similar icon in Disk Management; but when I tried to initialize the disk on the passive node (Node 2), it succeeded. I figured that when the new LUN was added to the server it was active on one node only, since I couldn't access it on Node 1.

After initializing the disk on Node 2, I failed all the cluster resources over from Node 1 to Node 2. Since the new disk was going to be used by SQL Server and would hold the tempdb database, a few additional steps had to be completed before using the disk: format it with 64 KB allocation units (the disk partition offset should also be 64 KB-aligned; the storage guys handle that).

Adding New Disk as Cluster Resource:

  • Launch Cluster Administrator on Active Node.
  • Go to SQLServer Group.
  • Right click in left pane, select New.
  • Select Resource.
  • Give it a Name, for example Disk T:
  • Select Resource Type, Physical Disk.
  • Click Next.
  • Select the possible owners for the resource; in my case, since it is only a two-node cluster, both nodes get added automatically.
  • Click Next.
  • Select dependencies, i.e. which resources must be online before Disk T: is brought online. For a disk this is not required unless you are using mount points; in that case make sure the mount-point disk is listed as a dependency.
  • After adding dependencies, click Next.
  • From the Drop Down select the new Physical Disk.
  • Click Finish.

Adding New Disk as Dependencies to SQL Server:

  • Launch Cluster Administrator on Active Node.
  • Go to SQLServer Group.
  • Right Click on SQL Server, select Take Offline.
  • Right Click on SQL Server, select Properties.
  • Select “Dependencies” from Tabs at top.
  • Click Modify.
  • Select the New Disk Resource that was just added.
  • Click OK on both windows to get back to Cluster Admin console.
  • Right Click on SQL Server, select Bring Online.
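For reference, the same resource and dependency changes can also be scripted with cluster.exe. This is a sketch using my example names; adjust the group, resource, and disk names to your environment:

```shell
REM Create the new physical disk resource in the SQL Server group
cluster res "Disk T:" /create /group:"SQLServer Group" /type:"Physical Disk"

REM Bring the disk online, then make SQL Server depend on it
cluster res "Disk T:" /online
cluster res "SQL Server" /adddep:"Disk T:"
```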

Just FYI: if you don't set the dependency information you will not be able to use that disk in SQL Server; when trying to relocate tempdb you will get the following error from SQL Server:

Msg 5184, Level 16, State 1, Line 1
Cannot use file ‘T:\MSSQL.1\MSSQL\Data\tempdb.mdf’ for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.

Msg 5184, Level 16, State 1, Line 1
Cannot use file ‘T:\MSSQL.1\MSSQL\Data\templog.ldf’ for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.
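For completeness, this is the kind of tempdb relocation that triggers the errors above when the dependency is missing; tempdev and templog are the default logical file names (verify yours with sp_helpfile in tempdb):

```sql
USE master;
GO
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'T:\MSSQL.1\MSSQL\Data\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'T:\MSSQL.1\MSSQL\Data\templog.ldf');
-- Restart the SQL Server service for the new paths to take effect.
```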

After all this was done I tried failing the resources back to Node 1, with no luck; it still gave a similar error on the disk. Talking to the storage guys, they assumed it was a Windows issue, but after digging into the configuration settings on the DMX it turns out you can have different read/write settings for the same LUN on two different nodes. After fixing the setting to read/write on both nodes, the drive started working successfully.

Error 191: Some part of your SQL statement is nested too deeply.

October 21, 2009

When inserting a string into a SQL Server table using an INSERT statement built with string concatenation, there appears to be an issue in SQL Server 2005: there is an upper limit of roughly 480 concatenations, beyond which the insert fails with the following error message:

Msg 191, Level 15, State 1, Line 1
Some part of your SQL statement is nested too deeply. Rewrite the query or break it up into smaller queries.

What does that mean? It was a simple insert statement with no loops and just two brackets; it turns out this is a bug, and it has been fixed in the next major release of SQL Server.

To reproduce the error, create a new database and try executing the following SQL statements:

-- Create the test table
CREATE TABLE [dbo].[T1](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TValue] [varchar](MAX) NULL,
    CONSTRAINT [PK_T1] PRIMARY KEY CLUSTERED ([ID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

-- Test insert statements

-- Good SQL Statement
INSERT INTO T1 VALUES ('A' + ... + 'A')
GO

-- Bad SQL Statement
INSERT INTO T1 VALUES ('A' + ... + 'A' + 'EXTRA')
GO

Please click here for the full statement.

The only difference between the good statement and the bad statement? The bad statement has ONE extra concatenation.

Microsoft Connect Article, Link.

Categories: SQL 2005, SQL Errors

Unique Stop List for Single Database in SQL 2005

September 17, 2009

Unlike SQL 2008, SQL 2005 does not let us define a unique stop list (noise word list) for each database, as this is controlled by the language settings on the full-text index defined in the catalog.

So let's say I want the stop list to include only the words from the following list:

a, an, the, I, am, us

The rest of the words must be indexed per the requirements. Setting up a new instance for this one database was out of scope for the budget, and modifying the noise word list on the SQL Server affected more than a single database. I was able to meet the requirements by manually rebuilding the index with the new stop list and then changing the list back. But I did not want to do this manually every month, so I decided to create a SQL Server job for it. Before the job could be created, the following had to be done:

  • Confirm the time the job should run; it cannot overlap with any other auto-population of the full-text index.
  • The full-text catalog in question cannot have an auto-population schedule.
  • You will have to enable the xp_cmdshell extended stored procedure on the server.
  • Create two new text files: SQL_noiseNEU.txt, a copy of noiseNEU.txt from MSSQL.1\MSSQL\FTData\, and Simple_noiseNEU.txt containing my stop list.

So I created a new job to run on the last day of the month at 6:00 AM, 6:15 AM, and 6:30 AM. The reason I created three schedules was to execute each step of the job at a different time. If I executed all three steps one after another, SQL started step 2 before step 1 had finished; a small delay gives each step enough time to complete:

Step 1 (Executed at 6:00 AM):
Replace MSSQL.1\MSSQL\FTData\noiseNEU.txt with Simple_noiseNEU.txt.

Step 2 (Executed at 6:15 AM):
Rebuild the catalog for the database. Please note that when you change the noise word list you have to rebuild the entire catalog; you cannot rebuild just one index.

Step 3 (Executed at 6:30 AM):
Replace MSSQL.1\MSSQL\FTData\noiseNEU.txt with SQL_noiseNEU.txt.

SQL Job Script Generated from SQL Server: Link.
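The three job steps could be sketched roughly as follows; the paths assume a default install directory, and MyCatalog is a placeholder for the actual full-text catalog name:

```sql
-- Step 1 (6:00 AM): swap in the simple noise word list
EXEC xp_cmdshell 'copy /Y "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\Simple_noiseNEU.txt" "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\noiseNEU.txt"';

-- Step 2 (6:15 AM): rebuild the entire catalog (not just one index)
ALTER FULLTEXT CATALOG MyCatalog REBUILD;

-- Step 3 (6:30 AM): restore the original noise word list
EXEC xp_cmdshell 'copy /Y "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\SQL_noiseNEU.txt" "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\noiseNEU.txt"';
```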

Categories: Full Text Search, SQL 2005

SSPI Context Cannot be generated

July 12, 2009

I am sure everyone has seen this error; there are many articles on MSDN, in newsgroups, etc., but I ran into a very interesting problem that none of the usual articles helped with. Articles like [1], [2], [3], and [4] discuss various settings that can generate this error, from the SPN (Service Principal Name), to SQL protocol issues, to TDS packet issues, but none of them resolved my problem. Since I was getting the issue with one user only, I kept looking and ran across article [5], which indicated this is an issue with Kerberos authentication. The previous articles indicated that too, but interestingly, it stated that the token generated for the user account in question was larger than the default size of 12,000 bytes, hence the issue.

I had no way to check how many groups the user was in, so working with the Active Directory guy we created a copy of the account and removed one group at a time until the user was able to log in successfully. So in a sense we verified that the number of groups the user belonged to was causing the issue. But I did not have a workaround for this; as article [5] suggested, I was thinking of changing MaxTokenSize to FFFF (65535). I needed to verify that this was really the issue, since only one user had reported it; I had access to far more groups and resources but had no issues. After researching more I found a Microsoft utility, TokenSz [6][7], that lets you compute the token size generated for Kerberos authentication.

Running the TokenSz utility against the user's account and my account, we got some interesting information.

User's account
Name: Kerberos Comment: Microsoft Kerberos V1.0
Current PackageInfo->MaxToken: 65535

Using user to user
QueryKeyInfo:
Signature algorithm =
Encrypt algorithm = RSADSI RC4-HMAC
KeySize = 128
Flags = 2083e
Signature Algorithm = -138
Encrypt Algorithm = 23
Start:7/12/2009 7:47:09
Expiry:7/12/2009 17:47:09
Current Time: 7/12/2009 7:47:09
MaxToken (complete context) 13383

If you notice, the user's max token was above the default allowed size of 12,000 bytes, so SQL Server failed the authentication. I checked my token size; it was only 7,479. Looking at how the token is generated in [5], it includes information from nested groups, which had a major impact on the number of groups she was effectively in.

Solution? We decided to change the MaxTokenSize on our SQL Server.
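The MaxTokenSize change is a registry edit on the SQL Server machine (followed by a reboot); a sketch of the change described in [5]:

```shell
REM Raise the Kerberos MaxTokenSize to 65535 (0xFFFF)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v MaxTokenSize /t REG_DWORD /d 65535 /f
```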

This caused another interesting (read: annoying :S) issue, Link. But after all issues were resolved, the user was able to log on successfully.

Reference Links:
[1] How to troubleshoot the “Cannot generate SSPI context” error message, Link.
[2] PRB: Error Message: Cannot Generate SSPI Context, Link.
[3] SQL Protocols: “Cannot generate SSPI Context” Error message, when connecting to local SQL Server, Link & Link.
[4] How to configure an SPN for SQL Server Databases Servers, Link.
[5] New Resolution for Problems with Kerberos Authentication when users belong to many groups, Link.
[6] How do Token Size Affect SQL Server, Link.
[7] TokenSz Utility Download, Link.

Categories: Security, SQL 2005

Default Backup Path Error

June 12, 2009

I had blogged about how to set default backup paths for SQL Server 2005 (Link) and SQL Server 2008 (Link), but I recently ran into an error.

Those registry settings were very helpful for me because I do most database backups and restores using T-SQL; but this one time I got lazy, decided to use the GUI, and got the following error when trying to enter the file name:

TITLE: Locate Backup File – ServerName
——————————

\\UNCPathServerName\HiddenShare$\ServerName
Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists.

If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.

——————————
BUTTONS:
OK
——————————

So please be cautious when changing those registry settings; the GUI interface cannot work with UNC paths.

Categories: SQL 2005