Tracking data changes in SQL Server is useful for auditing, troubleshooting, and data analysis. By tracking changes, you can easily identify what data was modified, who made the changes, and when they were made. Alongside built-in solutions such as Change Tracking (CT), Change Data Capture (CDC), and temporal tables (system-versioning), the most common and simplest method for implementing change tracking in SQL Server is a trigger: a special kind of stored procedure that executes automatically in response to events such as INSERT, UPDATE, or DELETE operations. By using triggers, you can capture the changes made to a table and store them in a separate table for later analysis.

Create sample table

CREATE DATABASE TEST_CHANGES_TRACKING;
GO
USE TEST_CHANGES_TRACKING;
GO

-- Create Users table
CREATE TABLE Users (
    UserID INT IDENTITY(1,1) PRIMARY KEY,
    Username NVARCHAR(100),
    Email NVARCHAR(150),
    Age INT
);


-- Set the number of users to insert
DECLARE @NumUsers INT;
SET @NumUsers = 50;

-- Insert random data into Users table
DECLARE @Counter INT = 1;

WHILE @Counter <= @NumUsers
BEGIN
    -- Generate a random username
    DECLARE @Username NVARCHAR(50) = CONVERT(VARCHAR(36), NEWID()) + CAST(@Counter AS NVARCHAR(10)); 
    INSERT INTO Users (Username, Email, Age)
    VALUES (
        @Username,
        @Username + '@example.com',
        ROUND(RAND() * 50 + 20, 0) -- Generate a random age between 20 and 70
    );

    SET @Counter = @Counter + 1;
END

-- Query to view the inserted data
SELECT * FROM Users;

Here are the first 9 of the 50 newly inserted users:

UserID Username Email Age
1 52FD2FA1-2493-4AF7-B32D-B07FC3CCD2281 52FD2FA1-2493-4AF7-B32D-B07FC3CCD2281@example.com 24
2 040D7C77-2524-40C9-B255-7B1E9C86E83B2 040D7C77-2524-40C9-B255-7B1E9C86E83B2@example.com 42
3 E22CB32D-5700-4DCE-AC5F-357984A7FEE83 E22CB32D-5700-4DCE-AC5F-357984A7FEE83@example.com 36
4 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854@example.com 31
5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5@example.com 52
6 077CBF3F-51C2-4E75-928E-CFB58E652DA06 077CBF3F-51C2-4E75-928E-CFB58E652DA06@example.com 43
7 7A8C0419-1738-40AA-AD22-0019376285D47 7A8C0419-1738-40AA-AD22-0019376285D47@example.com 59
8 F5058719-65AE-4C40-946C-29F20639749B8 F5058719-65AE-4C40-946C-29F20639749B8@example.com 42
9 E451B6E2-D910-4C95-9742-29BBB1766ECA9 E451B6E2-D910-4C95-9742-29BBB1766ECA9@example.com 25

Create auditing table

First, clone the structure of the entire Users table into Users_History. I added the Trigger_Date and Trigger_Action columns for use later.

-- Clone the structure of Users without copying any rows; note that
-- SELECT ... INTO also copies the IDENTITY property of UserID
SELECT *
INTO Users_History
FROM Users
WHERE 1 = 0;

ALTER TABLE Users_History
ADD [Trigger_Date] DATETIME NOT NULL DEFAULT GETDATE(),
    [Trigger_Action] VARCHAR(12);
GO

Next, create a trigger called TrackUsers. This SQL script creates or alters a trigger named TrackUsers on the Users table. The trigger executes with the permissions of the owner and fires after any DELETE, UPDATE, or INSERT operation on the Users table.

  • Within the trigger, the SET IDENTITY_INSERT Users_History ON; statement allows explicit insertion of identity values into the Users_History audit table. This is necessary because the SELECT ... INTO above copied the IDENTITY property of UserID into the history table.

  • The trigger then uses conditional logic to determine the type of operation (DELETE, UPDATE, or INSERT) and inserts corresponding records into the Users_History table. The GETDATE() function is used to capture the timestamp of the trigger action, and the CONCAT function is employed to construct the Trigger_Action column indicating whether it’s an INSERT, UPDATE (before or after), or DELETE operation.

  • Finally, the SET IDENTITY_INSERT Users_History OFF; statement is used to revert the identity insert setting, ensuring that subsequent operations on the ‘Users_History’ table behave as usual.

CREATE OR ALTER TRIGGER TrackUsers
ON Users
WITH EXECUTE AS OWNER
AFTER DELETE, UPDATE, INSERT
AS
BEGIN
    SET IDENTITY_INSERT Users_History ON;

    DECLARE @Operation VARCHAR(12); -- VARCHAR avoids CHAR space-padding when concatenated below
    IF (EXISTS(SELECT * FROM inserted))
    BEGIN
        IF (EXISTS(SELECT * FROM deleted))
        BEGIN
            -- rows in both have to be an UPDATE
            SET @Operation = 'Update';
            INSERT INTO Users_History(UserID, Username, Email, Age, [Trigger_Date], [Trigger_Action]) 
            SELECT *, GETDATE(), CONCAT('After', @Operation) 
            FROM inserted;

            INSERT INTO Users_History(UserID, Username, Email, Age, [Trigger_Date], [Trigger_Action]) 
            SELECT *, GETDATE(), CONCAT('Before', @Operation) 
            FROM deleted;
        END;
        ELSE
        BEGIN
            -- no rows in "deleted," has to be an INSERT
            SET @Operation = 'Insert';
            INSERT INTO Users_History(UserID, Username, Email, Age, [Trigger_Date], [Trigger_Action]) 
            SELECT *, GETDATE(), @Operation 
            FROM inserted;
        END;
    END;
    ELSE
    BEGIN
        -- no rows in "inserted," has to be a DELETE
        SET @Operation = 'Delete';
        INSERT INTO Users_History(UserID, Username, Email, Age, [Trigger_Date], [Trigger_Action]) 
        SELECT *, GETDATE(), @Operation 
        FROM deleted;
    END;

    SET IDENTITY_INSERT Users_History OFF;
END;

Now, any change to the Users table will also be recorded in the Users_History table.
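As a quick smoke test (the test values below are made up), you can insert, update, and delete a single user and then inspect the rows the trigger wrote. SCOPE_IDENTITY() still returns the Users identity value here, because the trigger's own inserts happen in a different scope:

```sql
-- Smoke test for the TrackUsers trigger (illustrative values)
INSERT INTO Users (Username, Email, Age)
VALUES ('trigger_test', 'trigger_test@example.com', 33);

DECLARE @TestID INT = SCOPE_IDENTITY();

UPDATE Users SET Age = 34 WHERE UserID = @TestID;
DELETE FROM Users WHERE UserID = @TestID;

-- Expect 4 history rows: Insert, BeforeUpdate, AfterUpdate, Delete
SELECT UserID, Age, Trigger_Date, Trigger_Action
FROM Users_History
WHERE UserID = @TestID
ORDER BY Trigger_Date;
```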

Update data

This SQL script is designed to perform an update operation on a specified field in a given source table named Users. The script begins by declaring variables to hold the source table name @source_table, the field to be updated @update_field_name, and the new value for the update @update_field_value.

A dynamic SQL statement is then constructed to achieve the following steps:

  • A temporary table #TempTable_ is created with a random name generated using NEWID(). It stores 50 percent of the records from the source table, selected randomly.
  • The source table is updated by setting the specified field @update_field_name to the provided value @update_field_value for records that match the ones in the temporary table.
  • Subsequently, the source table is updated a second time, restoring the field to the original values saved in the temporary table. This pass reverts the first change and fires the trigger again, producing a second set of audit rows.
  • Finally, the temporary table is dropped to clean up the temporary storage.

The script utilizes dynamic SQL (sp_executesql) to execute the constructed SQL statement. This dynamic approach allows for flexibility in table and field names, making it adaptable for various scenarios where dynamic queries are necessary.

DECLARE @source_table VARCHAR(100) = 'Users'
DECLARE @update_field_name VARCHAR(50) = 'Age'
DECLARE @update_field_value VARCHAR(30) = '''30'''

DECLARE @sql NVARCHAR(MAX);
DECLARE @temp_table VARCHAR(50) = N'#TempTable_' + REPLACE(CONVERT(VARCHAR(36), NEWID()), '-', '');
-- Generate the dynamic SQL statement
SET @sql = N'
    SELECT TOP 50 PERCENT UserID, ' + QUOTENAME(@update_field_name) + N'
    INTO ' + QUOTENAME(@temp_table) + N'
    FROM ' + QUOTENAME(@source_table) + N' 
    ORDER BY NEWID();

    UPDATE p
    SET ' + QUOTENAME(@update_field_name) + N' = '+ @update_field_value + N'
    FROM ' + QUOTENAME(@source_table) + N' p
    JOIN ' + QUOTENAME(@temp_table) + N' t ON p.UserID = t.UserID;

    UPDATE p
    SET ' + QUOTENAME(@update_field_name) + N' = t.' + QUOTENAME(@update_field_name) + N'
    FROM ' + QUOTENAME(@source_table) + N' p
    JOIN ' + QUOTENAME(@temp_table) + N' t ON p.UserID = t.UserID;

    DROP TABLE ' + QUOTENAME(@temp_table) + N';
    ';

-- Execute the dynamic SQL statement
EXEC sp_executesql @sql;

The results will look like the following. Seven row counts appear because, besides the initial SELECT ... INTO, each of the two UPDATE statements also fires the trigger, which inserts 25 before-rows and 25 after-rows:

Started executing query at Line 155
(25 rows affected)
(25 rows affected)
(25 rows affected)
(25 rows affected)
(25 rows affected)
(25 rows affected)
(25 rows affected)
Total execution time: 00:00:00.109

I use this condition to narrow down the result set:

SELECT * FROM Users_History WHERE UserID <= 9 ORDER BY UserID;
UserID Username Email Age Trigger_Date Trigger_Action
2 040D7C77-2524-40C9-B255-7B1E9C86E83B2 040D7C77-2524-40C9-B255-7B1E9C86E83B2@example.com 30 2023-12-23 02:10:13.140 AfterUpdate
2 040D7C77-2524-40C9-B255-7B1E9C86E83B2 040D7C77-2524-40C9-B255-7B1E9C86E83B2@example.com 42 2023-12-23 02:10:13.140 BeforeUpdate
2 040D7C77-2524-40C9-B255-7B1E9C86E83B2 040D7C77-2524-40C9-B255-7B1E9C86E83B2@example.com 42 2023-12-23 02:10:13.167 AfterUpdate
2 040D7C77-2524-40C9-B255-7B1E9C86E83B2 040D7C77-2524-40C9-B255-7B1E9C86E83B2@example.com 30 2023-12-23 02:10:13.167 BeforeUpdate
3 E22CB32D-5700-4DCE-AC5F-357984A7FEE83 E22CB32D-5700-4DCE-AC5F-357984A7FEE83@example.com 30 2023-12-23 02:10:13.167 BeforeUpdate
3 E22CB32D-5700-4DCE-AC5F-357984A7FEE83 E22CB32D-5700-4DCE-AC5F-357984A7FEE83@example.com 36 2023-12-23 02:10:13.167 AfterUpdate
3 E22CB32D-5700-4DCE-AC5F-357984A7FEE83 E22CB32D-5700-4DCE-AC5F-357984A7FEE83@example.com 36 2023-12-23 02:10:13.140 BeforeUpdate
3 E22CB32D-5700-4DCE-AC5F-357984A7FEE83 E22CB32D-5700-4DCE-AC5F-357984A7FEE83@example.com 30 2023-12-23 02:10:13.140 AfterUpdate
4 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854@example.com 30 2023-12-23 02:10:13.140 AfterUpdate
4 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854@example.com 31 2023-12-23 02:10:13.140 BeforeUpdate
4 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854@example.com 31 2023-12-23 02:10:13.167 AfterUpdate
4 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854 D3DCBFE8-3EA4-4DF6-A469-FDEE36A221854@example.com 30 2023-12-23 02:10:13.167 BeforeUpdate
5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5@example.com 30 2023-12-23 02:10:13.167 BeforeUpdate
5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5@example.com 52 2023-12-23 02:10:13.167 AfterUpdate
5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5@example.com 52 2023-12-23 02:10:13.140 BeforeUpdate
5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5 03F0B9C3-8455-4703-A3BB-631D24ECF2DC5@example.com 30 2023-12-23 02:10:13.140 AfterUpdate
9 E451B6E2-D910-4C95-9742-29BBB1766ECA9 E451B6E2-D910-4C95-9742-29BBB1766ECA9@example.com 30 2023-12-23 02:10:13.140 AfterUpdate
9 E451B6E2-D910-4C95-9742-29BBB1766ECA9 E451B6E2-D910-4C95-9742-29BBB1766ECA9@example.com 25 2023-12-23 02:10:13.140 BeforeUpdate
9 E451B6E2-D910-4C95-9742-29BBB1766ECA9 E451B6E2-D910-4C95-9742-29BBB1766ECA9@example.com 25 2023-12-23 02:10:13.167 AfterUpdate
9 E451B6E2-D910-4C95-9742-29BBB1766ECA9 E451B6E2-D910-4C95-9742-29BBB1766ECA9@example.com 30 2023-12-23 02:10:13.167 BeforeUpdate

Now you can easily filter on a specific UserID to retrieve its update history.
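For example (a sketch; the column aliases are my own), you can pair each BeforeUpdate row with its AfterUpdate counterpart via the shared trigger timestamp to see old and new values side by side:

```sql
-- Old vs. new Age for one user, pairing Before/After rows
-- that were written by the same trigger firing
SELECT b.UserID,
       b.Age AS Age_Before,
       a.Age AS Age_After,
       b.Trigger_Date
FROM Users_History AS b
JOIN Users_History AS a
  ON  a.UserID         = b.UserID
  AND a.Trigger_Date   = b.Trigger_Date
  AND a.Trigger_Action = 'AfterUpdate'
WHERE b.Trigger_Action = 'BeforeUpdate'
  AND b.UserID = 2
ORDER BY b.Trigger_Date;
```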

Limitations

While trigger-based auditing can be useful, it comes with certain limitations and considerations:

  • Performance Impact: Triggers can introduce overhead as they execute additional logic for each affected row. This can impact the performance of the operations that modify the audited table.

  • Nested Trigger Issues: If the audited table already has triggers, adding another trigger for auditing can lead to nested trigger execution, which may cause unexpected behavior and performance issues. For this reason, avoid creating any triggers on Users_History itself.

  • Transaction Size: Large transactions can cause the trigger-based approach to consume significant resources, affecting performance and potentially causing timeouts.

  • Error Handling: The trigger runs inside the same transaction as the triggering statement, so an error in the auditing logic typically rolls back the original data change as well. A bug in the audit trigger can therefore block legitimate modifications to the Users table.

  • Data Volume: In high-volume systems, the volume of audit data generated by triggers can be substantial. Storing and managing large amounts of audit data may require additional resources.

  • Schema Changes: If the schema of the audited table changes, the trigger must be updated accordingly; otherwise you will get errors or missing audit information. For example, adding or dropping a column on Users breaks the trigger's INSERT ... SELECT * statements until Users_History is rebuilt and the trigger is recreated to match.

  • Concurrency Issues: Triggers may not handle concurrency well, especially in scenarios where multiple transactions are modifying the audited table simultaneously. This can lead to race conditions and potential data inconsistencies.

  • Maintenance Challenges: Managing and maintaining triggers across multiple tables can be challenging. Any changes to the auditing logic may require careful testing to ensure that it works as expected.

  • Security Concerns: Trigger-based auditing relies on the security context of the user executing the triggering operation. This may result in limited visibility into certain types of changes or operations.
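One way to address the last point, and to actually capture who made each change (a sketch; the Trigger_User column is my own addition, not part of the schema above), is to record the executing login in the history table. ORIGINAL_LOGIN() is useful here because, unlike SUSER_SNAME(), it is not affected by EXECUTE AS impersonation:

```sql
-- Illustrative extension: record who made each change
ALTER TABLE Users_History
ADD [Trigger_User] NVARCHAR(128) NULL;
GO

-- Then extend each INSERT inside the trigger accordingly, e.g.:
--   INSERT INTO Users_History(UserID, Username, Email, Age,
--                             [Trigger_Date], [Trigger_Action], [Trigger_User])
--   SELECT *, GETDATE(), @Operation, ORIGINAL_LOGIN()
--   FROM deleted;
```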

To overcome the above concerns, stay tuned for the next part of this series, SQL Server Track Data Changes Part 2: Change Tracking.

