Have you ever needed to change a stored procedure, a view, or some other database object very quickly? It happens to me often, and during an implementation phase it happens constantly. And here, I'm afraid, version control systems will not always help. But how do you find out what changed, how it changed, what it looked like before the change, and when it changed?
This article is not a guide. In it, I just want to show possible solutions to this problem. I would be glad to see alternative solutions proposed.
1) Create two tables (the first in each of the observed databases, the second once for all observed databases on the server):
USE [__]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [srv].[ddl_log](
    [DDL_Log_GUID] [uniqueidentifier] NOT NULL,
    [PostTime] [datetime] NOT NULL,
    [DB_Login] [nvarchar](255) NULL,
    [DB_User] [nvarchar](255) NULL,
    [Event] [nvarchar](255) NULL,
    [TSQL] [nvarchar](max) NULL,
    CONSTRAINT [PK_ddl_log] PRIMARY KEY CLUSTERED
    (
        [DDL_Log_GUID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [srv].[ddl_log] ADD CONSTRAINT [DF_ddl_log_DDL_Log_GUID] DEFAULT (newid()) FOR [DDL_Log_GUID]
GO

USE [__]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [srv].[ddl_log_all](
    [DDL_Log_GUID] [uniqueidentifier] NOT NULL,
    [Server_Name] [nvarchar](255) NOT NULL,
    [DB_Name] [nvarchar](255) NOT NULL,
    [PostTime] [datetime] NOT NULL,
    [DB_Login] [nvarchar](255) NULL,
    [DB_User] [nvarchar](255) NULL,
    [Event] [nvarchar](255) NULL,
    [TSQL] [nvarchar](max) NULL,
    [InsertUTCDate] [datetime] NOT NULL,
    CONSTRAINT [PK_ddl_log_all] PRIMARY KEY CLUSTERED
    (
        [DDL_Log_GUID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [srv].[ddl_log_all] ADD CONSTRAINT [DF_ddl_log_all_DDL_Log_GUID] DEFAULT (newid()) FOR [DDL_Log_GUID]
GO

ALTER TABLE [srv].[ddl_log_all] ADD CONSTRAINT [DF_ddl_log_all_InsertUTCDate] DEFAULT (getutcdate()) FOR [InsertUTCDate]
GO
2) Create a DDL trigger on the database that collects schema changes:
USE [__]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [SchemaLog]
ON DATABASE --ALL SERVER
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    DECLARE @data XML
    begin try
        if(CURRENT_USER<>'NT AUTHORITY\NETWORK SERVICE' and SYSTEM_USER<>'NT AUTHORITY\NETWORK SERVICE')
        begin
            SET @data = EVENTDATA();
            INSERT srv.ddl_log(PostTime, DB_Login, DB_User, Event, TSQL)
            select GETUTCDATE(),
                   CONVERT(nvarchar(255), SYSTEM_USER),
                   CONVERT(nvarchar(255), CURRENT_USER),
                   @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(255)'),
                   @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(max)')
            where @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(255)') not in('UPDATE_STATISTICS', 'ALTER_INDEX')
              and @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(max)') not like '%Msmerge%';
        end
    end try
    begin catch
    end catch
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ENABLE TRIGGER [SchemaLog] ON DATABASE
GO
You will, of course, need to adjust the filter: for example, I do not want to track index changes and statistics updates (this is subjective; in your case they may be needed, while something else will not be). I also do not advise creating a DDL trigger for the entire server. I tried it, and believe me, I have never seen so much superfluous information anywhere, growing at such a speed. Well, except perhaps for the data coming from ships' AIS receivers. In general, I do not recommend it; it is better to create a trigger on each of the observed databases.
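Once the trigger is in place, finding out what happened to a particular object is a matter of one query. A minimal sketch (the object name MyProc here is hypothetical, substitute your own):

-- show the recent change history of a given object, newest first
select PostTime, DB_Login, DB_User, Event, TSQL
from srv.ddl_log
where TSQL like '%MyProc%'
order by PostTime desc;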
This trigger will have to be disabled during heavy initialization operations, for replication for example, and turned back on afterwards.
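A minimal sketch of that on/off cycle, using the standard DISABLE TRIGGER / ENABLE TRIGGER syntax:

-- disable logging before the heavy operation
DISABLE TRIGGER [SchemaLog] ON DATABASE;
GO
-- ... run the initialization here ...
-- re-enable logging afterwards
ENABLE TRIGGER [SchemaLog] ON DATABASE;
GO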
3) Then collect the information into the single table in some way (for example, with a SQL Server Agent job that runs once a day).
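The article leaves the job's body open, but a minimal sketch of such a collection step might look like this (run in the context of the database holding ddl_log_all; the source database name SourceDB is a placeholder):

-- copy rows that have not been transferred yet into the central table;
-- the GUID primary key makes the transfer idempotent
INSERT INTO [srv].[ddl_log_all] (DDL_Log_GUID, Server_Name, DB_Name, PostTime, DB_Login, DB_User, Event, TSQL)
SELECT l.DDL_Log_GUID, @@SERVERNAME, 'SourceDB', l.PostTime, l.DB_Login, l.DB_User, l.Event, l.TSQL
FROM [SourceDB].[srv].[ddl_log] l
WHERE NOT EXISTS (SELECT 1 FROM [srv].[ddl_log_all] a WHERE a.DDL_Log_GUID = l.DDL_Log_GUID);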
4) For multiple servers, you can likewise gather everything into one table in some way (see the sketch below).
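The transport between servers is again left open; one common option is a linked server. A minimal sketch, assuming a linked server named [CentralServer] pointing at the machine that hosts the consolidated ddl_log_all (the [__] placeholder stands for the database name, as above):

-- push locally aggregated rows to the central server over a linked server
INSERT INTO [CentralServer].[__].[srv].[ddl_log_all]
    (DDL_Log_GUID, Server_Name, DB_Name, PostTime, DB_Login, DB_User, Event, TSQL)
SELECT a.DDL_Log_GUID, a.Server_Name, a.DB_Name, a.PostTime, a.DB_Login, a.DB_User, a.Event, a.TSQL
FROM [srv].[ddl_log_all] a
WHERE NOT EXISTS (
    SELECT 1
    FROM [CentralServer].[__].[srv].[ddl_log_all] c
    WHERE c.DDL_Log_GUID = a.DDL_Log_GUID
);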
Do not forget to delete very old data (for example, rows older than one month).
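A minimal sketch of such a cleanup, using the one-month cutoff from the example above:

-- purge log entries older than one month
DELETE FROM [srv].[ddl_log_all]
WHERE InsertUTCDate < DATEADD(MONTH, -1, GETUTCDATE());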
In this article, we reviewed an example of implementing automatic collection of data on database schema changes in MS SQL Server, which lets us know not only what changed and when, but also to quickly roll those changes back. This mechanism mainly helps out at the implementation stage, when most errors are made and when the copies of databases created by implementers (including me) diverge so much that you need to analyze what was changed and when. With the history of changes at hand, the reason for a change can then be found out from the particular implementer (or from oneself), since during hectic activity, alas, not everything stays in our heads.
Source: https://habr.com/ru/post/314630/