
SQL Server 2016 CTP3.1 - what's new for the developer?

Not long ago, the announcement of SQL Server 2016, presented personally by Satya Nadella, stuck in my memory. And then, out of the blue, fresh Community Technology Previews began to appear one after another (at the moment the most recent version is CTP3.1). As I got acquainted with the new version, I increasingly wanted to share my impressions...

What follows is an overview of the new syntax in SQL Server 2016: JSON, GZIP, DROP IF EXISTS, TRUNCATE TABLE by partition, new functions, and more...

# 1 - DROP IF EXISTS


 CREATE TABLE dbo.tbl (
     a INT,
     b INT,
     CONSTRAINT ck CHECK (a > 1),
     INDEX ix CLUSTERED (a)
 )

Previously, before dropping an object, you had to check for its existence:

 IF OBJECT_ID(N'dbo.tbl', 'U') IS NOT NULL
     DROP TABLE dbo.tbl

Now a more compact syntax has appeared:

 DROP TABLE IF EXISTS dbo.tbl 

Added the ability to delete child elements:

 ALTER TABLE dbo.tbl DROP COLUMN IF EXISTS b
 ALTER TABLE dbo.tbl DROP CONSTRAINT IF EXISTS ck
 ALTER TABLE dbo.tbl DROP CONSTRAINT IF EXISTS ix

For a single parent object, structures can be combined:

 ALTER TABLE dbo.tbl
     DROP COLUMN IF EXISTS b,
          CONSTRAINT IF EXISTS ck,
          CONSTRAINT IF EXISTS ix

DROP IF EXISTS is supported for almost all objects (the full list can be found here ):

 DROP TABLE IF EXISTS #temp
 DROP TABLE IF EXISTS ##temp
 DROP VIEW IF EXISTS dbo.view1
 DROP PROCEDURE IF EXISTS dbo.proc1
 DROP DATABASE IF EXISTS db

# 2 - SESSION_CONTEXT


In my practice, there were tasks that required sharing parameters within a user session. Previously, you had to use CONTEXT_INFO, whose size is limited to 128 bytes:

 DECLARE @UserID SMALLINT = 1,
         @LocaleID INT = 123
 DECLARE @ctn VARBINARY(128)
 SET @ctn = CAST(@UserID AS BINARY(2)) + CAST(@LocaleID AS BINARY(4))
 SET CONTEXT_INFO @ctn

Now things have become a bit more convenient thanks to the new SESSION_CONTEXT function, which allows storing up to 256 KB per session:

 EXEC sys.sp_set_session_context @key = N'UserID', @value = 1
 EXEC sys.sp_set_session_context @key = N'LocaleID', @value = 123

 SELECT UserID = SESSION_CONTEXT(N'UserID')
      , LocaleID = SESSION_CONTEXT(N'LocaleID')
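Values set this way can also be protected from later modification: sys.sp_set_session_context accepts a documented @read_only flag. A minimal sketch (the key name is just an example):

```sql
-- Mark the key as read-only for the rest of the session
EXEC sys.sp_set_session_context @key = N'UserID', @value = 1, @read_only = 1

-- Any further attempt to change the key fails with an error,
-- because it has been set as read_only for this session
EXEC sys.sp_set_session_context @key = N'UserID', @value = 2
```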

# 3 - CHECKDB + MAXDOP


By default, DBCC CHECKDB uses as many threads as there are logical cores. The number of threads can now be limited so as not to degrade the performance of the server as a whole:

 DBCC CHECKDB(N'AdventureWorks2016CTP3') WITH MAXDOP = 4 

Similar functionality has been added to DBCC CHECKTABLE and DBCC CHECKFILEGROUP :

 USE AdventureWorks2016CTP3
 GO
 DBCC CHECKTABLE('HumanResources.Employee') WITH MAXDOP = 4
 DBCC CHECKFILEGROUP(1) WITH MAXDOP = 4

DBCC CHECKDB also received some optimizations for checking filtered indexes and computed columns with the PERSISTED option, and the time needed to check tables that contain a large number of partitions has been reduced.

# 4 - FORMATMESSAGE


In previous versions, the FORMATMESSAGE function could only use previously added custom messages:

 EXEC sys.sp_addmessage
     @msgnum = 66667,
     @severity = 16,
     @msgtext = N'param1: %s, param2: %s'

 DECLARE @msg NVARCHAR(2048) = FORMATMESSAGE(66667, N'one', N'two')
 SELECT @msg

Now it is possible to specify an arbitrary format mask:

 SELECT FORMATMESSAGE('val1: %+i, val2: %+d', 5, -6) 

Many routine tasks, such as quoting identifiers or concatenating strings, can be made more elegant:

 SELECT FORMATMESSAGE('SELECT * FROM [%s].[%s]', SCHEMA_NAME([schema_id]), name)
 FROM sys.objects
 WHERE [type] = 'U'

# 5 - COMPRESS & DECOMPRESS


The new edition has built-in GZIP support: COMPRESS and DECOMPRESS. When decoding, it is important to watch which data type the result is cast to:

 DECLARE @a VARBINARY(MAX) = COMPRESS('test test test')
 SELECT @a
      , DECOMPRESS(@a)
      , CAST(DECOMPRESS(@a) AS NVARCHAR(MAX)) -- wrong type: garbled output
      , CAST(DECOMPRESS(@a) AS VARCHAR(MAX))

We compress an ANSI string and then try to decode the resulting value:

 ----------------------------------- -------------------- ---------------- ----------------
 0x1F8B08000000000004002B492D2E512   0x746573742074657374 整瑳琠獥⁴整瑳      test test test

# 6 - DATEDIFF_BIG


SQL Server 2008 added new arguments to the DATEDIFF function, MICROSECOND and NANOSECOND, but with too large a date range:

 SELECT DATEDIFF(NANOSECOND, '20000101', '20160101') 

This could lead to an error:

 Msg 535, Level 16, State 0, Line 1
 The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.

For such situations, a new DATEDIFF_BIG function has been added :

 SELECT DATEDIFF_BIG(NANOSECOND, '20000101', '20160101') 

# 7 - AT TIME ZONE


CTP3.0 has a new system view:

 SELECT name, current_utc_offset, is_currently_dst FROM sys.time_zone_info 

in which you can get a list of time zones:

 name                       current_utc_offset is_currently_dst
 -------------------------- ------------------ ----------------
 Dateline Standard Time     -12:00             0
 UTC-11                     -11:00             0
 ...
 Central Standard Time      -06:00             0
 ...
 Pacific SA Standard Time   -03:00             0
 UTC-02                     -02:00             0
 ...
 UTC                        +00:00             0
 GMT Standard Time          +00:00             0
 Greenwich Standard Time    +00:00             0
 ...
 Belarus Standard Time      +03:00             0
 Russian Standard Time      +03:00             0
 ...

With AT TIME ZONE, you can display the time in a given time zone:

 SELECT CONVERT(DATETIME2, GETDATE()) AT TIME ZONE N'Belarus Standard Time' 

 ----------------------------------
 2015-12-02 14:51:02.1366667 +03:00

The time zone can also be parameterized:

 DECLARE @tz NVARCHAR(256) = N'Belarus Standard Time'
 SELECT GETDATE() AT TIME ZONE @tz
      , CONVERT(DATETIME2, GETDATE()) AT TIME ZONE @tz

 ----------------------------------
 2015-12-02 14:51:28.6266667 +03:00
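AT TIME ZONE can also be chained: the first application assigns a time zone to a "naive" DATETIME2 value, and the second converts the resulting DATETIMEOFFSET to another zone. A small sketch (the target zone here is an arbitrary example):

```sql
-- Interpret the local value as Belarus time, then convert it to Pacific time
SELECT CONVERT(DATETIME2, GETDATE())
       AT TIME ZONE N'Belarus Standard Time'
       AT TIME ZONE N'Pacific Standard Time'
```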

What is the use of this innovation? For example, you can display what time it is in other time zones, based on our current local time:

 SELECT name,
        CONVERT(DATETIME,
            SWITCHOFFSET(SYSUTCDATETIME() AT TIME ZONE name,
                         DATENAME(TzOffset, SYSDATETIMEOFFSET())))
 FROM sys.time_zone_info

 ------------------------------ -----------------------
 Dateline Standard Time         2015-12-03 02:56:41.940
 UTC-11                         2015-12-03 01:56:41.940
 ...
 Pacific SA Standard Time       2015-12-02 17:56:41.940
 UTC-02                         2015-12-02 16:56:41.940
 ...
 UTC                            2015-12-02 14:56:41.940
 GMT Standard Time              2015-12-02 14:56:41.940
 Greenwich Standard Time        2015-12-02 14:56:41.940
 Central European Standard Time 2015-12-02 13:56:41.940
 ...

# 8 - JSON


JSON support is one of the main features of SQL Server 2016. Starting with CTP2.0, it became possible to generate JSON, by analogy with XML. Two constructs are supported, FOR JSON AUTO and FOR JSON PATH:

 SELECT TOP (2) name, database_id, source_database_id, create_date
 FROM sys.databases
 FOR JSON AUTO, ROOT('root')

 {"root": [
     {"name":"master","database_id":1,"create_date":"2003-04-08T09:13:36.390"},
     {"name":"tempdb","database_id":2,"create_date":"2015-12-02T11:34:36.080"}
 ]}

 SELECT TOP (2) name
      , [db.id] = database_id
      , [db.scr_id] = source_database_id
      , [db.date] = create_date
 FROM sys.databases
 FOR JSON PATH, ROOT

 {"root": [
     { "name":"master", "db":{"id":1,"date":"2003-04-08T09:13:36.390"} },
     { "name":"tempdb", "db":{"id":2,"date":"2015-12-02T11:34:36.080"} }
 ]}

For NULL values to be included in the generated JSON, the INCLUDE_NULL_VALUES option must be used:

 SELECT TOP (1) name, database_id, source_database_id
 FROM sys.databases
 FOR JSON AUTO, INCLUDE_NULL_VALUES

 [ { "name":"master", "database_id":1, "source_database_id":null } ] 

If you need to remove the square brackets from JSON , then the WITHOUT_ARRAY_WRAPPER option is provided for this (which is available starting from CTP3.2 version):

 SELECT TOP (1) name, database_id, source_database_id
 FROM sys.databases
 FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER

 { "name":"master", "database_id":1 } 

You need to use NVARCHAR to store JSON , since there is no separate data type:

 DECLARE @json NVARCHAR(MAX) = (
     SELECT key1 = 1, key2 = GETDATE()
     FOR JSON PATH
 )
 SELECT @json

 {"key1":1,"key2":"2015-12-02T15:45:05.530"} 

To query JSON, you can use OPENJSON. If there is a single object, the result is returned as key-value pairs:

 DECLARE @json NVARCHAR(MAX) = N'{
     "UserID": 1,
     "UserName": "JC Denton",
     "IsActive": true,
     "RegDate": "2015-12-02"
 }';
 SELECT * FROM OPENJSON(@json)

 key         value        type
 ----------- ------------ ----
 UserID      1            2
 UserName    JC Denton    1
 IsActive    true         3
 RegDate     2015-12-02   1

This behavior can be used as yet another way to split a string:

 DECLARE @a NVARCHAR(100) = '1,2,3'
 SELECT CAST(value AS INT)
 FROM OPENJSON(N'[' + @a + N']')

 -----------
 1
 2
 3

If there are several objects:

 DECLARE @json NVARCHAR(MAX) = N'[
     { "UserID": 1, "UserName": "JC Denton",   "IsActive": true,  "RegDate": "2015-12-02" },
     { "UserID": 2, "UserName": "Paul Denton", "IsActive": false, "RegDate": "2015-11-02" }
 ]';
 SELECT * FROM OPENJSON(@json)

then the result will contain one row per array element, with the element's index as the key and its JSON text as the value (type 5 corresponds to an object):

 key  value                                                                                  type
 ---- -------------------------------------------------------------------------------------- ----
 0    { "UserID": 1, "UserName": "JC Denton",   "IsActive": true,  "RegDate": "2015-12-02" } 5
 1    { "UserID": 2, "UserName": "Paul Denton", "IsActive": false, "RegDate": "2015-11-02" } 5

The data can be obtained in relational form as follows:

 DECLARE @json NVARCHAR(MAX) = N'[
     { "UserID": 1, "UserName": "JC Denton",   "IsActive": true, "RegDate": "2015-12-02" },
     { "UserID": 2, "UserName": "Paul Denton", "IsActive": 0,    "RegDate": "2015-11-02" }
 ]';
 SELECT *
 FROM OPENJSON(@json)
 WITH (
     UserID INT,
     UserName VARCHAR(50),
     IsActive BIT,
     [Date] DATE '$.RegDate'
 )

 UserID      UserName        IsActive Date
 ----------- --------------- -------- ----------
 1           JC Denton       1        2015-12-02
 2           Paul Denton     0        2015-11-02
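The WITH clause can also return nested fragments as raw JSON text rather than scalars, using the AS JSON modifier. A minimal sketch (the Roles array is a made-up example):

```sql
DECLARE @json NVARCHAR(MAX) = N'[
    { "UserID": 1, "Roles": ["admin", "user"] }
]';
SELECT *
FROM OPENJSON(@json)
WITH (
    UserID INT,
    Roles NVARCHAR(MAX) '$.Roles' AS JSON -- returned as the JSON text ["admin","user"]
)
```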

If you need to get a scalar expression, you can use JSON_VALUE :

 DECLARE @json NVARCHAR(4000) = N'{
     "UserID": 1,
     "Detail": [
         { "Year": 2016 },
         { "Year": 2015, "Options": [ { "Visible": true } ] }
     ]
 }'
 SELECT JSON_VALUE(@json, '$.UserID')
      , JSON_VALUE(@json, '$.Detail[0].Year')
      , JSON_VALUE(@json, '$.Detail[1].Year')
      , JSON_VALUE(@json, '$.Detail[1].Options[0].Visible')

To extract an object or an array from JSON, the JSON_QUERY function is provided:

 DECLARE @json NVARCHAR(4000) = N'{
     "Main": {
         "Detail": { "Name": "color", "Value": "blue" }
     }
 }'
 SELECT JSON_QUERY(@json, '$.Main.Detail')

You can use the ISJSON function if you need to make sure that the text is JSON :

 DECLARE @json NVARCHAR(MAX) = N'{"ID" : 1}';
 SELECT ISJSON(@json), ISJSON('')

There are no special indexes for JSON, but it is possible to index COMPUTED columns:

 DROP TABLE IF EXISTS dbo.Users
 CREATE TABLE dbo.Users (
     OrderID INT PRIMARY KEY,
     JSON NVARCHAR(4000),
     CONSTRAINT CK_IsJSON CHECK (ISJSON(JSON) = 1),
     Age AS (CONVERT(INT, JSON_VALUE(JSON, '$.Age')))
 )
 CREATE INDEX IX_Age ON dbo.Users(Age)

However, an index cannot be created on every COMPUTED column based on JSON:

 ALTER TABLE dbo.Users
     ADD RegDate AS (CAST(JSON_VALUE(JSON, '$.RegDate') AS DATE))
 GO
 CREATE INDEX IX_RegDate ON dbo.Users(RegDate)

 Msg 2729, Level 16, State 1, Line 15
 Column 'RegDate' in table 'dbo.Users' cannot be used in an index or statistics or as a partition key because it is non-deterministic.

This bug should be fixed in the next version of CTP .
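Until then, one possible workaround is to make the conversion deterministic yourself. Assuming the JSON stores dates in ISO 8601 format, CONVERT with an explicit deterministic style (126) should be indexable (a hedged sketch, not tested against this CTP; RegDate2 is a hypothetical column name):

```sql
ALTER TABLE dbo.Users
    ADD RegDate2 AS (CONVERT(DATETIME2, JSON_VALUE(JSON, '$.RegDate'), 126))
GO
-- Style 126 (ISO 8601) makes the string-to-date conversion deterministic,
-- so this computed column can be indexed
CREATE INDEX IX_RegDate2 ON dbo.Users(RegDate2)
```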

# 9 - ONLINE ALTER COLUMN


The ALTER COLUMN command can now be executed in ONLINE mode. While it runs, the column's data remains available for reading, and a schema modification (Sch-M) lock is taken only at the very end of the ALTER operation, when the switch to the new data pages takes place (you can read more here).

 DROP TABLE IF EXISTS dbo.tbl
 CREATE TABLE dbo.tbl (x VARCHAR(255) NULL)
 GO
 ALTER TABLE dbo.tbl
     ALTER COLUMN x VARCHAR(255) NOT NULL
     WITH (ONLINE = ON)
 GO
 ALTER TABLE dbo.tbl
     ALTER COLUMN x NVARCHAR(255) COLLATE Cyrillic_General_100_CI_AS NOT NULL
     WITH (ONLINE = ON)

# 10 - TRUNCATE TABLE + PARTITIONS


In CTP2.0, the TRUNCATE TABLE operation gained the ability to work with individual partitions rather than only the whole table. You can specify not only a single partition but a whole range of partitions. Let's create a test table:

 CREATE PARTITION FUNCTION PF (SMALLINT)
     AS RANGE RIGHT FOR VALUES (1, 2, 3, 4, 5)
 GO
 CREATE PARTITION SCHEME PS
     AS PARTITION PF ALL TO ([PRIMARY])
 GO
 DROP TABLE IF EXISTS dbo.tbl
 CREATE TABLE dbo.tbl (a SMALLINT PRIMARY KEY) ON PS (a)
 GO
 INSERT INTO dbo.tbl (a)
 VALUES (0), (1), (2), (3), (4), (5)

 SELECT partition_number, [rows]
 FROM sys.partitions
 WHERE [object_id] = OBJECT_ID('dbo.tbl')
     AND index_id < 2

 partition_number rows
 ---------------- ------
 1                1
 2                1
 3                1
 4                1
 5                1
 6                1

 TRUNCATE TABLE dbo.tbl WITH (PARTITIONS (1, 4 TO 5)) 

 partition_number rows
 ---------------- ------
 1                0
 2                1
 3                1
 4                0
 5                0
 6                1

# 11 - CURRENT_TRANSACTION_ID


CTP3.0 added a new function, CURRENT_TRANSACTION_ID, which, as the name suggests, returns the ID of the current transaction. Perhaps this function will be useful to someone, but I could not come up with a more illustrative example for it than this:

 BEGIN TRANSACTION

 UPDATE HumanResources.Employee
 SET SalariedFlag = 0
 WHERE BusinessEntityID = 1

 SELECT ec.session_id, s.[text]
 FROM sys.dm_tran_session_transactions t
 JOIN sys.dm_exec_connections ec ON t.session_id = ec.session_id
 CROSS APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) s
 WHERE t.transaction_id != CURRENT_TRANSACTION_ID()

 session_id  text
 ----------- ---------------------------
 67          BEGIN TRANSACTION UPDATE HumanResources.Employee SET SalariedFlag = 0 WHERE BusinessEntityID = 1

A small afterword ...

From what I have seen so far, the release of SQL Server 2016 promises to be very interesting. Each new CTP adds a large number of features that are difficult to describe in a single article. To keep this one readable, I left out Temporal Tables, Dynamic Data Masking, and the In-Memory enhancements, which I plan to cover in an upcoming continuation.

If you want to share this article with an English-speaking audience:
SQL Server 2016 CTP3.1 - What's New for Developer?

Source: https://habr.com/ru/post/272211/

