
Working with MS SQL from PowerShell on Linux

This article is purely practical and is dedicated to my sad story.


While preparing for Zero Touch PROD for RDS (MS SQL), which everyone had been buzzing our ears about, I put together a POC (Proof of Concept) of the automation: a set of PowerShell scripts. After the presentation, when the stormy, prolonged applause, turning into a standing ovation, had died down, I was told that everything was great, but for ideological reasons all the Jenkins slaves run under Linux!

Is that even possible? To take such a warm, cozy, tube-era DBA out from under Windows and thrust him into the thick of PowerShell under Linux? Isn't that cruel?


I had to immerse myself in this strange combination of technologies. Naturally, all 30+ of my scripts stopped working. To my surprise, I managed to fix everything within a single working day. I am writing this in hot pursuit. So, what pitfalls might you encounter when porting PowerShell scripts from Windows to Linux?

sqlcmd vs Invoke-SqlCmd


Let me recall the main difference between them. The good old sqlcmd utility also works under Linux, with almost identical functionality: a query to execute is passed with -Q, an input file with -i, and the output file with -o. Just keep in mind that the file names are, of course, case-sensitive. If you use -i, write this at the end of the file:

```
GO
EXIT
```

If there is no EXIT at the end, sqlcmd will sit waiting for input, and if there is no GO before the EXIT, the last command will not be executed. All output (selects, messages, print and so on) goes into the output file.
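A minimal sketch of this pattern, assuming the standard mssql-tools path on Linux (the server name and credentials here are my placeholders, not from the article):

```powershell
# Sketch: build an input file for sqlcmd that ends in GO and EXIT.
# The server name and tools path below are assumptions; adjust for your setup.
$sql = @"
select @@version
GO
EXIT
"@
Set-Content -Path "q.sql" -Value $sql

# The actual call would then look like this (file names are case-sensitive on Linux):
$cmd = "/opt/mssql-tools/bin/sqlcmd -S myserver -U sa -i q.sql -o q.log"
# Invoke-Expression $cmd   # uncomment when a real server is at hand
```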

Invoke-SqlCmd returns its result as a DataSet, DataTables or DataRows. So while a simple select's result can be handled via sqlcmd by parsing its output, extracting anything complex that way is nearly impossible: that is what Invoke-SqlCmd is for. But this cmdlet has quirks of its own:


And this is the main one: only this March did the cmdlet become available on non-Windows platforms, and at last we can move forward!
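The payoff is that Invoke-SqlCmd hands you DataRow objects addressable by column name instead of text to parse. A sketch of that handling follows; the instance name and query are my placeholders, and since the call needs a live server, an equivalent DataTable is built locally here to show the same row handling:

```powershell
# With a live instance this would be:
#   $rows = Invoke-SqlCmd -ServerInstance "myserver" -Query "select spid, loginame from sysprocesses"
# The result is a collection of DataRow objects, addressable by column name.
# Below, an equivalent DataTable is built locally to demonstrate that handling:
$dt = New-Object Data.DataTable
[void]$dt.Columns.Add("spid", [int])
[void]$dt.Columns.Add("loginame", [string])
[void]$dt.Rows.Add(51, "sa")

foreach ($r in $dt.Rows) {
    # no output parsing: columns are real typed properties
    Write-Output ("{0}: {1}" -f $r.spid, $r.loginame)
}
```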

Variable substitution


sqlcmd supports variable substitution via -v, for example like this:

```powershell
# $conn holds the sqlcmd connection string
$cmd = $conn + " -i D:\apps\SlaveJobs\KillSpid.sql -o killspid.res" +
       " -v spid=`"" + $spid + "`" -v age=`"" + $age + "`""
Invoke-Expression $cmd
```

In the SQL script, we use the substitutions:

```sql
set @spid=$(spid)
set @age=$(age)
```

So, here is the thing: under *nix, variable substitution does not work. The -v option is ignored, and Invoke-SqlCmd ignores its -Variable parameter. Although the parameter that sets the variables is ignored, the substitutions themselves still work, so you can use any variables from the shell environment. However, I took offense at variables and decided not to depend on them at all; I acted crudely and primitively, since the SQL scripts are short:

```powershell
# prepend the parameters
"declare @age int, @spid int" | Add-Content "q.sql"
"set @spid=" + $spid | Add-Content "q.sql"
"set @age=" + $age | Add-Content "q.sql"
foreach ($line in Get-Content "Sqlserver/Automation/KillSpid.sql") {
    $line | Add-Content "q.sql"
}
$cmd = "/opt/mssql-tools/bin/" + $conn + " -i q.sql -o res.log"
```

This, as you can tell, is already taken from the Unix version.
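An alternative to raw declares, assuming your sqlcmd build honors the standard :setvar scripting directive, is to prepend :setvar lines so that the $(spid) and $(age) placeholders in the original script keep working. A sketch (file names and the stand-in script body are mine):

```powershell
# A stand-in for the real KillSpid.sql (placeholder body; single quotes keep $(spid) literal)
Set-Content "KillSpid.sql" 'set @spid=$(spid)'

# Prepend :setvar directives so $(spid) and $(age) resolve inside sqlcmd
$spid = 64
$age  = 15
Set-Content "q.sql" (":setvar spid " + $spid)
Add-Content "q.sql" (":setvar age " + $age)
Get-Content "KillSpid.sql" | Add-Content "q.sql"
```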

File upload


In the Windows version, every operation I performed was accompanied by an audit: we ran sqlcmd, got some grumbling in the output file, and attached that file to the audit record. Fortunately, SQL Server ran on the same server as Jenkins, so it was done like this:

```sql
CREATE procedure AuditUpload
    @id int, @filename varchar(256)
as
set nocount on
declare @sql varchar(max)
CREATE TABLE #multi (filer NVARCHAR(MAX))
set @sql = 'BULK INSERT #multi FROM ''' + @filename + ''' WITH (ROWTERMINATOR = ''\0'', CODEPAGE = ''ACP'')'
exec (@sql)
select @sql = filer from #multi
update JenkinsAudit set multiliner = @sql where ID = @id
return
```

So we gulp down the whole file with BULK INSERT and shove it into the nvarchar(max) field of the audit table. Of course, this whole construction fell apart, because instead of a local SQL Server I now had RDS, and BULK INSERT does not really work over \\UNC paths because it tries to take an exclusive lock on the file; with RDS it was doomed from the start. So I changed the system design and started keeping the audit line by line:

```sql
CREATE TABLE AuditOut (
    ID int NULL,
    TextLine nvarchar(max) NULL,
    n int IDENTITY(1,1) PRIMARY KEY
)
```

And write in this table like this:

```powershell
function WriteAudit([string]$Filename, [string]$ConnStr, [string]$Tabname, [string]$Jobname)
{
    # get $lastid of the last execution
    # ...

    # create a grid and populate it with data from the file
    $audit = Get-Content $Filename
    $DT = New-Object Data.DataTable
    $COL1 = New-Object Data.DataColumn
    $COL1.ColumnName = "ID"
    $COL1.DataType = [System.Type]::GetType("System.Int32")
    $COL2 = New-Object Data.DataColumn
    $COL2.ColumnName = "TextLine"
    $COL2.DataType = [System.Type]::GetType("System.String")
    $DT.Columns.Add($COL1)
    $DT.Columns.Add($COL2)
    foreach ($line in $audit) {
        $DR = $DT.NewRow()
        $DR.Item("ID") = $lastid
        $DR.Item("TextLine") = $line
        $DT.Rows.Add($DR)
    }

    # write it to the table
    $conn = New-Object System.Data.SqlClient.SQLConnection
    $conn.ConnectionString = $ConnStr
    $conn.Open()
    $bulkCopy = New-Object ("Data.SqlClient.SqlBulkCopy") $ConnStr
    $bulkCopy.DestinationTableName = $Tabname
    $bulkCopy.BatchSize = 50000
    $bulkCopy.BulkCopyTimeout = 0
    $bulkCopy.WriteToServer($DT)
    $conn.Close()
}
```
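A call might look like the following; the connection string, file name and table name are placeholders of mine, and it obviously needs a reachable SQL Server:

```powershell
# Hypothetical invocation of the WriteAudit function above (all names are placeholders)
$connStr = "Server=myrds.example.com;Database=Jenkins;User Id=jenkins;Password=$env:DB_PASS"
WriteAudit -Filename "res.log" -ConnStr $connStr -Tabname "dbo.AuditOut" -Jobname "KillSpid"
```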

To retrieve the content, select by ID, ordering by n (the identity column).
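Such a retrieval query can be sketched like this; the function below only builds the T-SQL text (running it against the AuditOut table above would require a live connection, e.g. via Invoke-SqlCmd):

```powershell
# Reassemble one audit record from AuditOut, ordering by the identity column n.
# This only builds the query text; executing it needs a live connection.
function Get-AuditQuery([int]$Id) {
    "select TextLine from AuditOut where ID = $Id order by n"
}

$query = Get-AuditQuery -Id 42
```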

In the next article I will focus on how this all interacts with Jenkins.

Source: https://habr.com/ru/post/447100/

