Archive for the ‘T-SQL Reference’ Category

Learn How to Find Who Deleted Records in SQL Server

January 14, 2019

Query 1: “Hello friends, I am writing for the first time, hoping that some of the learned friends can help me sort out a problem. Some of the tables of my SQL database are accessible to other users too. Yesterday, I noticed that certain rows went missing from a database table. How can I find out who deleted those rows from that table? Please help me with a genuine method.”

Query 2: “I received a query from a database user who wants to know how to find who deleted records in SQL Server. Here, I am talking about the Express edition of SQL Server. As per my understanding, this data can be found in the LDF file (if it has not been truncated). If I am correct, tell me how to extract this particular information from the LDF file.”

If you want to know how to find out who deleted records from SQL Server, these queries may sound familiar. Hundreds of users like these are looking for ways to identify the culprit whose deletion of SQL data caused them serious problems. If you are one of them, read on, as we will show you an effective method to locate who deleted SQL Server records.

How to Find Who Deleted Records in SQL Server

The identity of the user who deleted data from SQL Server is recorded in the transaction log file. To find it, you need to retrieve that information from the LDF file of your SQL database. For this, you will need a SQL transaction log viewer that can open and read LDF file data easily. Here is how you can read log file data using such software.

a. Install and start the SQL Log Analyzer tool on your Windows system. Click on Open to add your log file.

b. A dialog box will appear with two tabs: Online Database and Offline Database. Here we will show the procedure for reading the log file from an online database. Select the Online DB Options tab and choose the Server Name from the list. Also, select the Authentication mode and the database from the list. Click OK when done.

c. The software will scan the LDF file and prompt you when it is finished. Click OK.

d. It will display a summary of the LDF file, including counts of records, inserts, updates, and deletes. Again, click OK.

e. Now, you will see the complete LDF file data on the software screen. Any record can be previewed.

f. You can sort the LDF file entries by Transaction, Login Name, Time, Table Name, and Transaction Name.

g. Click on any table entry to select it and preview the corresponding log entry for that operation. If you want to know who deleted a record, click on the Delete transaction for that record. You will see the Login Name of the person who deleted that data.

h. If you wish to export this data, check the boxes beside the necessary tables and click on Export.

i. Another dialog box will appear where you can apply a record type filter, a date filter, Export To/As options, and database credentials for the source and destination databases.

j. Click on the Export button when all the fields are filled in properly.

k. The tool will export the selected data and prompt you when finished. Click OK.

l. The software will also display a report which you can save as a CSV file for reference.

Note: Besides analyzing log file data, this software can also retrieve deleted data if the database is in the simple recovery model.


A SQL database often contains essential information, and deletion of such data causes great inconvenience. Hence, in this post we answered the question of how to find who deleted records in SQL Server. Using SQL Log Analyzer, users can open an LDF file and find out who performed any particular transaction, such as an Insert, Update, or Delete.
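If you prefer to stay inside SQL Server itself, a rough equivalent can be sketched with the undocumented fn_dblog() function, which reads the active portion of the transaction log. Treat this as a sketch only: fn_dblog is unsupported, its columns can vary between versions, and the [Transaction SID] is only populated on the LOP_BEGIN_XACT row, so the delete rows must be joined back to their owning transaction:

```sql
-- Find delete operations in the active log and map each one to a login.
-- fn_dblog is undocumented and unsupported; try this on a non-production copy.
SELECT
    del.[Transaction ID],
    del.[AllocUnitName]                        AS AffectedObject,
    bx.[Begin Time],
    SUSER_SNAME(bx.[Transaction SID])          AS LoginName
FROM fn_dblog(NULL, NULL) AS del
JOIN fn_dblog(NULL, NULL) AS bx
    ON  bx.[Transaction ID] = del.[Transaction ID]
    AND bx.Operation        = 'LOP_BEGIN_XACT'
WHERE del.Operation = 'LOP_DELETE_ROWS';
```

This only works while the relevant records are still in the active log (for example, before a log backup truncates it), which is the same caveat the second query above raises.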


Calculating Datetime Based on NT Time

October 23, 2012

A colleague of mine gave me an interesting challenge today. I am by no means a T-SQL expert; however, it was interesting dissecting the problem.

Given the time 128271382742968750, what does it mean? How do we read it?

We can use the command-line utility w32tm.exe with the following command to get the exact time…

w32tm.exe /ntte 128271382742968750

In return we get:

148462 05:57:54.2968750 – 6/24/2007 8:57:54 AM (local time).

Problem SOLVED!

Well, not quite. This doesn’t scale well against the many gigs of data that my friend wanted to translate. So, reading KB555936, I started breaking down the timestamp above.

  1. Multiply 128271382742968750 by 100 to get 12827138274296875000, because the time is recorded as the number of 100 ns intervals that have ticked by since January 1, 1601.
  2. Next, divide 12827138274296875000 by 1,000,000,000 to get the number of seconds passed since January 1, 1601. We get 12,827,138,274.2968750.
  3. We can ignore everything after the decimal; that is the fractional second passed (which we don’t care about for now).
  4. Unfortunately, we cannot use the DATEADD function in SQL Server to calculate the date directly, as in SQL Server we can go back only to 1/1/1753. So we need to calculate the number of seconds passed from 1/1/1601 to 1/1/1753 and subtract that from our total.
  5. That is 4,796,668,800 seconds (you can take my word for it, or calculate it using the PowerShell script below).
  6. So we take the number calculated in step 3 and subtract 4,796,668,800 from it, to get 8,030,469,474 seconds passed since 1/1/1753. Now we can use our DATEADD!!! Yeeh? Right?
  7. Umm, unfortunately no. The DATEADD function accepts an integer parameter, and that number is too big, so we get an overflow error :(.
  8. So we have to do some additional math: we take that number and divide it by 60 to get the number of minutes passed. We get 133,841,157.90.
  9. Now, the .90 is important, as we’ll need it to calculate the seconds; so don’t forget it. But we can now pass the above value in to get the date.
  10. SELECT DATEADD(MINUTE, 133841157.90, '1753/1/1'); almost done. The DATEADD function truncates any decimal value, so we do not get the number of seconds passed.
  11. So now we have to add the number of seconds to the puzzle. We can do that using SELECT DATEADD(SECOND, .90*60, DATEADD(MINUTE, 133841157.90, '1753/1/1')).
  12. Now we have our final answer of 2007-06-24 05:57:54 :).
  13. Just for the heck of it, if we wanted the milliseconds too, the answer would be
    SELECT DATEADD(MILLISECOND, .2968750*1000, DATEADD(SECOND, .90*60, DATEADD(MINUTE, 133841157.90, '1753/1/1'))).

So there you have it, NT time in normal time using T-SQL :).  Lots of work, but possible heh.
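The arithmetic above is easy to sanity-check outside SQL Server. Here is a small Python sketch of the same steps (the 4,796,668,800-second offset is the 1601-to-1753 gap from step 5; Python’s datetime has no 1753 floor, but we shift epochs the same way to mirror the T-SQL):

```python
from datetime import datetime, timedelta

nt_time = 128271382742968750              # 100 ns ticks since 1601-01-01

# Steps 1-3: convert 100 ns ticks to whole seconds since 1601-01-01.
seconds_since_1601 = nt_time // 10_000_000        # 12,827,138,274

# Steps 4-6: shift the epoch to 1753-01-01, SQL Server's DATETIME floor.
GAP_1601_TO_1753 = 4_796_668_800                  # seconds, per step 5
seconds_since_1753 = seconds_since_1601 - GAP_1601_TO_1753

result = datetime(1753, 1, 1) + timedelta(seconds=seconds_since_1753)
print(result)                                      # 2007-06-24 05:57:54
```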

SQL Server script to calculate the NT time in a readable format using T-SQL, combining all the steps into a single statement:


DECLARE @NTTime BIGINT, @TimeSkip BIGINT, @BaseTime DATETIME;

SET @NTTime = 128271382742968750      -- NT time value to convert
SET @TimeSkip = 47966688000000000     -- 1601-to-1753 gap, in 100 ns ticks
SET @BaseTime = '1753/1/1 0:00:00.000'

SELECT DATEADD(SECOND,((((@NTTime - @TimeSkip)*1.0)/600000000)-ROUND(((@NTTime - @TimeSkip)/600000000),0,1))*60,DATEADD(MINUTE,((@NTTime - @TimeSkip)/600000000),@BaseTime)) AS NormalTime

PowerShell script to find the time passed between 1/1/1601 and 1/1/1753:

[DateTime]$LowDateRange = '1/1/1601'

[DateTime]$HighDateRange = '1/1/1753'

# Difference as a TimeSpan; TotalSeconds gives 4,796,668,800
($HighDateRange - $LowDateRange).TotalSeconds
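If PowerShell is not handy, the same figure falls out of Python’s datetime in one line:

```python
from datetime import datetime

# Seconds between 1601-01-01 and 1753-01-01 (the @TimeSkip offset / 10^7).
gap = datetime(1753, 1, 1) - datetime(1601, 1, 1)
print(int(gap.total_seconds()))   # 4796668800
```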


INSERTED and DELETED Logical Tables

February 23, 2009

The INSERTED and DELETED logical tables exist in SQL Server to expose the affected data when information is inserted, updated, or deleted; they are available in DML triggers only:

DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables or perform data definition language (DDL) operations on the tables, such as CREATE INDEX. (Books Online, SQL Server 2008).

Below is a summary of which special tables get populated by each DML statement.


I wanted to know if anything special happens when I work with a single row versus a batch, so I tested the following cases.

For testing, I created a new table called ‘IDST_Testing’ (IDST = Inserted Deleted Special Table). Below is a summary of the test cases and the number of records that appeared in each of the special tables.

Test Case                         INSERTED           DELETED
Single Insert                     1                  0
Double Insert – Two Statements    1 per statement    0
Single Update                     1                  1
Double Update – Two Statements    1 per statement    1 per statement
Single Delete                     0                  1
Double Delete – Two Statements    0                  1 per statement
Batch Insert – Two Records        2                  0
Batch Update – Two Records        2                  2
Batch Delete – Two Records        0                  2
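A trigger along these lines can reproduce the counts in the table above (a sketch; IDST_Testing is the test table from this post, and the trigger name is made up):

```sql
-- Fires once per statement and reports how many rows each logical table holds.
CREATE TRIGGER trg_IDST_Counts
ON IDST_Testing
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SELECT (SELECT COUNT(*) FROM inserted) AS InsertedRows,
           (SELECT COUNT(*) FROM deleted)  AS DeletedRows;
END;
```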

All those results are normal; what was surprising was that, during batch operations, the records in INSERTED and DELETED appeared in reverse order.

For example:

I inserted the following two records:


But when I looked at the INSERTED table, it showed:


In my actual table the rows were in the order I entered them, but when processed through the INSERTED and DELETED tables they were reversed. It was the same in tables with and without IDENTITY columns.

Some other interesting facts about these tables:

  1. In SQL Server 2000, these logical tables internally refer to the database transaction log to provide data when a user queries them.
  2. In SQL Server 2005, these logical tables are maintained in tempdb using the new row versioning technology.
  3. Accessing the logical tables is much faster in SQL Server 2005 compared to SQL Server 2000, as the load is moved off the transaction log and into tempdb.
  4. Logical tables are never indexed. So, if you are going to loop through each and every record in these tables, consider copying the data of the logical tables into temporary tables and indexing them before looping through.



When creating a trigger, it is important to keep in mind that a batch insert, update, or delete might happen against the table; if you refer to the logical tables INSERTED and DELETED, the data may be in reverse order, so take care not to cause issues when traversing these tables in triggers.

Note: always keep RBAR (row-by-agonizing-row) in mind when designing triggers; it is very easy to set up a RBAR scenario when working with triggers.
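For example, a set-based trigger touches INSERTED once per statement instead of cursoring through it row by agonizing row (a sketch; the IDST_Audit table and its columns are hypothetical):

```sql
-- Set-based: one INSERT per statement, regardless of how many rows changed.
CREATE TRIGGER trg_IDST_Audit
ON IDST_Testing
AFTER UPDATE
AS
BEGIN
    INSERT INTO IDST_Audit (RecordId, ChangedAt)
    SELECT i.RecordId, GETDATE()    -- hypothetical audit columns
    FROM inserted AS i;
END;
```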
