Archive for the ‘T-SQL Reference’ Category

Calculating Datetime Based on NT Time

October 23, 2012

A colleague of mine gave me an interesting challenge today. I am by no means a T-SQL expert, but dissecting the problem was interesting.

Given the time 128271382742968750, what does it mean? How do you read it?

We can use the command-line utility w32tm.exe with the following command to get the exact time:

w32tm.exe /ntte 128271382742968750

In return we get:

148462 05:57:54.2968750 - 6/24/2007 8:57:54 AM (local time)

Problem SOLVED!

Well, not quite; this doesn't scale to the many gigs of data that my friend wanted to translate. So, reading KB555936, I started breaking down the timestamp above.

  1. Multiply 128271382742968750 by 100 to get 12827138274296875000, because the time is recorded as the number of 100 ns intervals that have ticked by since January 1, 1601.
  2. Next, divide 12827138274296875000 by 1,000,000,000 to get the number of seconds passed since January 1, 1601. We get 12,827,138,274.2968750.
  3. We can ignore everything after the decimal point; that is the fractional second, which we don't care about for now.
  4. Unfortunately we cannot use the DATEADD function in SQL Server to calculate the date directly, as the DATETIME type only goes back to 1/1/1753. So we need to calculate the number of seconds passed from 1/1/1601 to 1/1/1753 and subtract that from our total.
  5. And that is 4,796,668,800 seconds (you can take my word for it, or calculate it using the PowerShell script below).
  6. So we take the number calculated in step 3 and subtract 4,796,668,800 from it, to get 8,030,469,474 seconds passed since 1/1/1753. Now we can use our DATEADD!!! Yeeh? Right?
  7. Umm, unfortunately NO. The DATEADD function accepts an integer parameter, and that number is too big, so we get an arithmetic overflow error :(.
  8. So we have to do some additional math: we take that number and divide it by 60 to get the number of minutes passed. We get 133,841,157.90.
  9. Now the .90 is important, as we'll need it to calculate the seconds, so don't forget it. But we can now pass the above value in to get the date (the sketch after this list walks through these intermediate values).
  10. SELECT DATEADD(MINUTE, 133841157.90, '1753/1/1'); almost done. The DATEADD function truncates any decimal value, so we do not get the seconds portion.
  11. So now we have to add the number of seconds to the puzzle. We can do that using SELECT DATEADD(SECOND, .90*60, DATEADD(MINUTE, 133841157.90, '1753/1/1')).
  12. Now we have our final answer of 2007-06-24 05:57:54 :).
  13. Just for the heck of it, if we wanted the milliseconds too, the answer would be
    SELECT DATEADD(MILLISECOND, 0.2968750*1000, DATEADD(SECOND, .90*60, DATEADD(MINUTE, 133841157.90, '1753/1/1'))).
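
Here is a minimal T-SQL sketch of the arithmetic in steps 1 through 9, so the intermediate values can be checked (the variable names are mine, and DECLARE with an initializer requires SQL Server 2008 or later):

DECLARE @NTTime BIGINT = 128271382742968750          -- 100-ns ticks since 1/1/1601

-- Steps 1-2: ticks -> nanoseconds -> seconds
DECLARE @SecondsSince1601 DECIMAL(20,7) = (@NTTime * 100.0) / 1000000000

-- Steps 4-6: subtract the 1/1/1601 to 1/1/1753 gap
DECLARE @SecondsSince1753 DECIMAL(20,7) = @SecondsSince1601 - 4796668800

-- Step 8: seconds -> minutes
SELECT @SecondsSince1601      AS SecondsSince1601,    -- 12827138274.2968750
       @SecondsSince1753      AS SecondsSince1753,    -- 8030469474.2968750
       @SecondsSince1753 / 60 AS MinutesSince1753     -- 133841157.90...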

So there you have it, NT time in normal time using T-SQL :). Lots of work, but possible, heh.

SQL Server script to calculate the NT time in readable format using T-SQL, combining all of the steps above into a single statement:

DECLARE @NTTime   BIGINT
DECLARE @TimeSkip BIGINT
DECLARE @BaseTime DATETIME

SET @NTTime   = 128271382742968750      -- NT time: 100-ns ticks since 1/1/1601
SET @TimeSkip = 47966688000000000       -- ticks between 1/1/1601 and 1/1/1753 (4,796,668,800 seconds * 10,000,000)
SET @BaseTime = '1753/1/1 0:00:00.000'  -- the earliest DATETIME value

-- 600,000,000 ticks = 1 minute: the whole minutes go into the inner DATEADD,
-- and the leftover fractional minute (* 60) becomes the seconds for the outer DATEADD
SELECT DATEADD(SECOND,
              ((((@NTTime - @TimeSkip) * 1.0) / 600000000)
                - ROUND(((@NTTime - @TimeSkip) / 600000000), 0, 1)) * 60,
              DATEADD(MINUTE, (@NTTime - @TimeSkip) / 600000000, @BaseTime)) AS NormalTime
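
Since the whole point was to translate many gigs of data, the same expression can be applied to a column in one pass; the table and column names below (dbo.EventLog, NTTimeStamp) are made up for illustration:

SELECT NTTimeStamp,
       DATEADD(SECOND,
              ((((NTTimeStamp - 47966688000000000) * 1.0) / 600000000)
                - ROUND(((NTTimeStamp - 47966688000000000) / 600000000), 0, 1)) * 60,
              DATEADD(MINUTE, (NTTimeStamp - 47966688000000000) / 600000000,
                      '1753/1/1')) AS NormalTime
FROM dbo.EventLog  -- hypothetical table with a BIGINT NT-time column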

PowerShell script to find the time passed between 1/1/1601 and 1/1/1753:

[DateTime]$LowDateRange = '1/1/1601'
[DateTime]$HighDateRange = '1/1/1753'
$HighDateRange.Subtract($LowDateRange)  # the resulting TimeSpan's TotalSeconds is 4,796,668,800
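
The same gap can be cross-checked in T-SQL; the DATE type is used here because DATETIME cannot represent 1601 (DATE requires SQL Server 2008 or later):

SELECT CAST(DATEDIFF(DAY, CAST('1601-01-01' AS DATE), CAST('1753-01-01' AS DATE)) AS BIGINT) * 86400 AS SecondsBetween
-- 55,517 days * 86,400 seconds/day = 4,796,668,800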

INSERTED and DELETED Logical Tables

February 23, 2009

The INSERTED and DELETED logical tables exist in SQL Server to let you work with the data as it is inserted, updated, or deleted; they are available in DML triggers only:

DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables or perform data definition language (DDL) operations on the tables, such as CREATE INDEX. (Books Online, SQL Server 2008).

Below is a summary of which special tables get populated by each DML statement.

DML Statement | INSERTED | DELETED
------------- | -------- | -------
INSERT        | X        |
UPDATE        | X        | X
DELETE        |          | X
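
Here is a minimal sketch of a trigger reading both logical tables (the table name dbo.SomeTable is just a placeholder):

CREATE TRIGGER trg_SomeTable_ShowSpecialTables
ON dbo.SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON

    -- New row images: populated by INSERT and UPDATE
    SELECT COUNT(*) AS InsertedRows FROM inserted

    -- Old row images: populated by UPDATE and DELETE
    SELECT COUNT(*) AS DeletedRows FROM deleted
END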

I wanted to know if anything special happens when I work with a single row versus a batch, so I tested the following cases …

For testing I created a new table called ‘IDST_Testing’ (IDST = Inserted Deleted Special Table). Below is a summary of the test cases and the records that landed in each of the special tables (a sketch of the test statements follows the table).

Test Case                      | INSERTED        | DELETED
------------------------------ | --------------- | ---------------
Single Insert                  | 1               | 0
Double Insert – Two Statements | 1 per statement | 0
Single Update                  | 1               | 1
Double Update – Two Statements | 1 per statement | 1 per statement
Single Delete                  | 0               | 1
Double Delete – Two Statements | 0               | 1 per statement
Batch Insert – Two Records     | 2               | 0
Batch Update – Two Records     | 2               | 2
Batch Delete – Two Records     | 0               | 2
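
For reference, the batch versus two-statement insert tests look roughly like this (I am assuming a single Name column; the VALUES row-constructor syntax needs SQL Server 2008 or later):

CREATE TABLE dbo.IDST_Testing (Name VARCHAR(50))

-- Batch Insert – Two Records: the trigger fires ONCE and sees 2 rows in INSERTED
INSERT INTO dbo.IDST_Testing (Name) VALUES ('John'), ('Mary')

-- Double Insert – Two Statements: the trigger fires once per statement,
-- seeing 1 row in INSERTED each time
INSERT INTO dbo.IDST_Testing (Name) VALUES ('John')
INSERT INTO dbo.IDST_Testing (Name) VALUES ('Mary')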

All those results are normal; what was surprising was that when I was doing batch operations, the records in INSERTED and DELETED were in reverse order.

For example:

I inserted the following two records:

John
Mary

But when I looked at the INSERTED table it showed:

Mary
John

In my actual table the rows were in the order I entered them, but when processing the INSERTED and DELETED tables they were reversed. It was the same in tables with and without IDENTITY columns.
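
Because of that, if processing order matters inside a trigger, it is safer not to trust the physical order of the logical tables at all and to impose one explicitly; a minimal sketch, assuming the table has an IDENTITY column named ID:

SELECT Name
FROM inserted
ORDER BY ID  -- ask for the order instead of relying on how the rows come back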

Some more interesting information on these tables:

  1. In SQL Server 2000, these logical tables internally refer to the database transaction log to provide data when a user queries them.
  2. In SQL Server 2005, these logical tables are maintained in tempdb using the new row-versioning technology.
  3. Accessing the logical tables is much faster in SQL Server 2005 than in SQL Server 2000, as the load is removed from the transaction log and moved to tempdb.
  4. Logical tables are never indexed. So, if you are going to loop through each and every record available in these tables, consider copying the data of the logical tables into temporary tables and indexing them before looping through (see the sketch below the references).

Ref: http://blog.techdreams.org/2007/01/logical-tables-of-sql-server-inserted.html

Ref: http://www.sqlmag.com/Article/ArticleID/93465/sql_server_93465.html
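
A minimal sketch of point 4, copying a logical table into an indexed temporary table before any row-by-row work (the ID and Name columns are illustrative):

-- Inside the trigger body:
SELECT ID, Name
INTO #InsertedCopy
FROM inserted

CREATE CLUSTERED INDEX IX_InsertedCopy ON #InsertedCopy (ID)

-- ... loop over #InsertedCopy instead of the logical table ...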

When creating a trigger it is important to keep in mind that sometimes a batch import, update, or delete might happen against the table; if you are referring to the logical tables INSERTED and DELETED, the data will be in reverse order, so you don't want to cause issues when traversing these tables in triggers.

Note: always keep in mind RBAR when designing triggers; it is very easy to set up an RBAR scenario when working with triggers (see the sketch below). For more information on that please read http://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/.
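
To illustrate the difference inside a trigger: a single set-based statement handles whatever the batch contained, with no cursor needed (the audit table dbo.SomeAudit is hypothetical):

-- Set-based: one statement covers 1 row or 10,000 rows alike,
-- instead of a cursor looping over INSERTED row by agonizing row
INSERT INTO dbo.SomeAudit (Name)
SELECT Name FROM inserted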
