
Using Zabbix to monitor MS SQL Server database

Foreword


Administrators often need to be notified in real time about problems related to a database.

This article will describe what you need to configure in Zabbix to monitor the MS SQL Server database.

Note that this is not a detailed step-by-step configuration guide; instead it gives formulas and general recommendations, together with a detailed description of adding custom data items through stored procedures.
Only the main performance counters are considered here.

Solution


First, I will describe all the performance counters (as data items in Zabbix) that we need; a short PowerShell sketch for checking these counters locally is given right after this list:
  1. Logical disk
    1. Avg. Disk sec/Read
      Shows the average time, in seconds, of reading data from the disk. The average value of the Avg. Disk sec/Read counter should not exceed 10 milliseconds, and its maximum value should not exceed 50 milliseconds.

      Zabbix : perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Read]; it is also important to track the specific disk, for example: perf_counter[\LogicalDisk(C:)\Avg. Disk sec/Read]

      Examples of triggers :
      {NODE_NAME:perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Read].last()}>0.005, level-high
      and
      {NODE_NAME:perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Read].last()}>0.0025, level-medium
    2. Avg. Disk sec/Write
      Shows the average time, in seconds, of writing data to the disk. The average value of the Avg. Disk sec/Write counter should not exceed 10 milliseconds, and its maximum value should not exceed 50 milliseconds.

      Zabbix : perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Write]; it is also important to track the specific disk, for example: perf_counter[\LogicalDisk(C:)\Avg. Disk sec/Write]

      Examples of triggers :
      {NODE_NAME:perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Write].last()}>0.005, level-high
      and
      {NODE_NAME:perf_counter[\LogicalDisk(_Total)\Avg. Disk sec/Write].last()}>0.0025, level-medium
    3. Avg. Disk Queue Length

      The average length of the disk request queue. It displays the number of disk requests waiting to be processed within the specified time interval. A queue of no more than 2 per single disk is considered normal. If there are more than two requests in the queue, the disk is probably overloaded and cannot keep up with incoming requests. Which operations the disk cannot cope with can be pinpointed using the Avg. Disk Read Queue Length (read request queue) and Avg. Disk Write Queue Length (write request queue) counters.
      The value of Avg. Disk Queue Length is not measured but calculated using Little's law from queueing theory: the number of requests waiting to be processed is on average equal to the request arrival rate multiplied by the request processing time. In our case, Avg. Disk Queue Length = (Disk Transfers/sec) * (Avg. Disk sec/Transfer).

      Avg. Disk Queue Length is offered as one of the main counters for assessing the load on the disk subsystem, but for an adequate assessment you need to know the physical structure of the storage exactly. For example, for a single hard disk a value greater than 2 is considered critical, whereas if the disk is backed by a 4-disk RAID array, you should start worrying only when the value exceeds 4 * 2 = 8.

      Zabbix : perf_counter[\LogicalDisk(_Total)\Avg. Disk Queue Length]; it is also important to track the specific disk, for example: perf_counter[\LogicalDisk(C:)\Avg. Disk Queue Length]

  2. Memory
    1. Pages/sec
      Shows the number of pages that SQL Server read from disk or wrote to disk in order to resolve references to memory pages that were not in RAM at the moment of access. This value is the sum of Pages Input/sec and Pages Output/sec, and it also includes paging (swapping) of the system cache to access application data files as well as paging of non-cached memory-mapped files. This is the main counter to watch if memory load is high and paging is involved. It characterizes the amount of swapping, and its normal (non-peak) value should be close to zero. Growth of swapping indicates the need to add RAM or to reduce the number of applications running on the server.

      Zabbix : perf_counter[\Memory\Pages/sec]
      Trigger example :
      {NODE_NAME:perf_counter[\Memory\Pages/sec].min(5m)}>1000, level-info
    2. Page Faults/sec

      The rate of page faults. A page fault occurs when a process refers to a virtual memory page that is not in its working set in RAM. This counter includes both page faults that require disk access and those caused by a page that lies outside the working set but is still in RAM. Most systems can handle the latter without significant delay; handling page faults of the first kind, which require disk access, can however introduce significant delays.

      Zabbix : perf_counter[\Memory\Page Faults/sec]
      Trigger example :
      {NODE_NAME:perf_counter[\Memory\Page Faults/sec].min(5m)}>1000, level-info
    3. Available Bytes

      Tracks the amount of memory, in bytes, available for running processes. Low values indicate a shortage of memory; the fix is to add memory. In most cases this counter should stay above 5000 KB.
      It makes sense to set the thresholds for available memory manually, along these lines:

      • 50% of memory free = excellent
      • 25% of memory available = requires attention
      • 10% free = problems are possible
      • Less than 5% of memory available = critical for performance, intervention is required.
      Zabbix : perf_counter[\Memory\Available Bytes]

  3. Processor(_Total): % Processor Time
    This counter shows the percentage of time the processor spent executing non-idle threads. It can be treated as the fraction of time spent doing useful work. Each processor may be assigned an idle thread that consumes the unproductive processor cycles not used by other threads. Short peaks of up to 100 percent are normal for this counter; however, if processor utilization stays above 80 percent for long periods, the system would be more efficient with more processors.

    Zabbix : perf_counter[\Processor(_Total)\% Processor Time]; individual cores can be tracked as well
    Trigger example :
    {NODE_NAME:perf_counter[\Processor(_Total)\% Processor Time].min(5m)}>80, level-information
  4. Network Interface(*): Bytes Total/sec
    The total number of bytes transmitted and received per second across all interfaces, i.e. the interface throughput in bytes. Compare the value of this counter with the maximum bandwidth of the network card. In general, it should show no more than 50% utilization of the network adapter bandwidth.
    Zabbix : perf_counter[\Network Interface(*)\Bytes Sent/sec]
  5. MS SQL Server: Access Methods
    The Access Methods object in SQL Server provides counters that help monitor how the logical data within the database is accessed. Physical access to database pages on disk is monitored through the Buffer Manager counters. Monitoring the methods used to access data helps determine whether query performance can be improved by adding or changing indexes, adding or moving partitions, adding files or file groups, defragmenting indexes, or rewriting queries. In addition, the Access Methods counters can be used to track the amount of data, indexes, and free space in the database, thereby monitoring data volume and fragmentation for each server instance. Excessive index fragmentation can significantly degrade performance.
    1. Page Splits/sec
      The number of page splits per second caused by overflowing index pages. A high value means that, on inserts and updates, SQL Server has to perform many resource-intensive operations that split a page and move part of its contents to a new location. Such operations should be avoided where possible. The problem can be addressed in two ways:
      - create a clustered index on auto-increment columns. In this case new rows are not placed inside pages already occupied by data, but sequentially fill new pages;
      - rebuild the indexes with a higher Fillfactor value. This reserves free space in the index pages that can accommodate new data without page split operations.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Access Methods\Page Splits/sec", 30]
      An example of a trigger (firing when page splits exceed roughly one fifth of the batch requests per second): {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Access Methods\Page Splits/sec", 30].last()} > {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:SQL Statistics\Batch Requests/sec", 30].last()} / 5, level-information
    2. Full Scans/sec
      The number of unrestricted full scan operations per second. These include base-table scans and full index scans. A steady growth of this indicator may point to degradation of the system (missing indexes, heavy index fragmentation, indexes the optimizer cannot use, unused indexes). Note, however, that a full scan of a small table is not necessarily bad: if the whole table fits in RAM, a full scan can be faster. Still, in most cases a steady growth of this counter indicates system degradation. All of this applies only to OLTP systems; in OLAP systems constant full scans are normal.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Access Methods\Full Scans/sec", 30]

  6. MS SQL Server: Buffer Manager
    The Buffer Manager object provides counters that let you monitor how SQL Server uses the following resources:
    - memory for storing data pages;
    - counters that track physical I/O as SQL Server reads and writes database pages;
    - buffer pool extension, which extends the buffer cache with fast non-volatile storage such as solid-state drives (SSD).
    Monitoring the memory and the counters used by SQL Server helps determine:
    - whether there are bottlenecks caused by a lack of physical memory; if frequently used data cannot be cached, SQL Server is forced to read it from disk;
    - whether query performance can be improved by adding memory or by allocating more memory for data caching or for SQL Server internal structures;
    - how often SQL Server reads data from disk. Compared with other operations, such as memory access, physical I/O takes much longer, and reducing it can improve query performance.
    1. Buffer cache hit ratio
      Shows how fully SQL Server manages to serve data from the buffer cache. The higher the value, the better: for SQL Server to access data pages efficiently they should already be in the buffer cache, with no physical I/O required. If the average value of this counter steadily declines, consider adding RAM. It should stay above 90% for OLTP systems and above 50% for OLAP systems.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Buffer Manager\Buffer cache hit ratio", 30]
      Examples of triggers : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Buffer Manager\Buffer cache hit ratio", 30].last()}<70, level-high
      and
      {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Buffer Manager\Buffer cache hit ratio", 30].last()}<80, level-medium
    2. Page life expectancy
      Shows how long, on average, a page stays in memory in its current state. If the value keeps falling, the buffer pool is under memory pressure, and the resulting shortage of working memory can lead to poor performance. Note that there is no universal threshold below which you can be certain the buffer pool is under pressure (the old guideline of 300 seconds has been obsolete since MS SQL Server 2012).
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Buffer Manager\Page life expectancy", 30]
      Example of a trigger : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Buffer Manager\Page life expectancy", 30].last()}<5, level-information

  7. MS SQL Server: General Statistics
    The General Statistics object in SQL Server provides counters that allow you to monitor total server activity, such as the number of simultaneous connections and the number of users per second connecting or disconnecting from the computer running the instance of SQL Server. These metrics are useful in large online transaction processing (OLTP) systems, where a large number of clients are constantly connected and disconnected from the SQL Server instance.
    1. Processes blocked
      The number of processes that are currently blocked.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:General Statistics\Processes blocked", 30]
      Example of a trigger : ({NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:General Statistics\Processes blocked", 30].min(2m,0)}>=0)
      and ({NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:General Statistics\Processes blocked", 30].time(0)}>=50000)
      and ({NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:General Statistics\Processes blocked", 30].time(0)}<=230000), level-information (here the alert window is limited to the period from 05:00 to 23:00)
    2. User Connections
      The number of users currently connected to SQL Server.
      Zabbix : perf_counter ["\ MSSQL $ EXAMPLE NAME: General Statistics \ User Connections", 30]

  8. MS SQL Server: Locks
    The Locks object in Microsoft SQL Server provides information about the locks SQL Server acquires on particular types of resources. Locks are taken on SQL Server resources, such as rows read or modified by a transaction, to prevent simultaneous use of those resources by multiple transactions. For example, if a transaction holds an exclusive (X) lock on a row in a table, no other transaction can modify that row until the lock is released. Minimizing locking increases concurrency, which can improve overall performance. Several instances of the Locks object can be monitored at the same time, each representing locks on a particular type of resource.
    1. Average Wait Time (ms)
      The average wait time (in milliseconds) for all lock requests that had to wait. This counter shows how long, on average, user processes have to wait in a queue to acquire a lock on a resource. The maximum acceptable value depends entirely on your workload; it is hard to give an average figure that suits all applications. A value that is too high may indicate a locking problem in your database.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Average Wait Time (ms)", 30]
      Example of a trigger : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Average Wait Time (ms)", 30].last()}>=500, level-information
    2. Lock Wait Time (ms)
      The total wait time for locks (in milliseconds) over the last second.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Wait Time (ms)", 30]
    3. Lock Waits/sec
      The number of times in the last second that a thread had to wait because of a lock request.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Waits/sec", 30]
    4. Lock Timeouts/sec
      The number of times per second that a lock could not be acquired by spinning. The value of the SQL Server spin counter configuration parameter determines the number of spins before the timeout expires and the thread goes to sleep.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Timeouts/sec", 30]
      Example of a trigger : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Timeouts/sec", 30].last()}>1000, level-information
    5. Lock Requests/sec
      The number of requests per second for the given type of lock.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Requests/sec", 30]
      Example of a trigger : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Lock Requests/sec", 30].last()}>500000, level-information
    6. Number of Deadlocks/sec
      The number of lock requests per second that resulted in a deadlock. Deadlocks indicate poorly constructed queries that block shared resources.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Number of Deadlocks/sec", 30]
      Example of a trigger : {NODE_NAME:perf_counter["\MSSQL$INSTANCE_NAME:Locks(_Total)\Number of Deadlocks/sec", 30].last()}>1, level-high

  9. MS SQL Server: Memory Manager
    The Memory Manager object in Microsoft SQL Server provides counters for monitoring the overall memory usage of the server. Monitoring server-wide memory usage to evaluate user activity and resource consumption helps identify performance bottlenecks. Monitoring the memory used by an instance of SQL Server helps determine:
    - whether there is enough physical memory to cache frequently used data; if not, SQL Server has to read the data from disk;
    - whether query performance can be improved by adding memory or by increasing the amount of memory available for data caching or SQL Server internal structures.
    1. Memory Grants Outstanding
      Shows the total number of processes that have successfully acquired a workspace memory grant. If this indicator steadily declines, RAM should be increased.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Memory Manager\Memory Grants Outstanding", 30]
    2. Memory Grants Pending
      Shows the total number of processes waiting for a workspace memory grant. If this indicator grows steadily, RAM should be increased.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:Memory Manager\Memory Grants Pending", 30]

  10. MS SQL Server: SQL Statistics
    The SQL Statistics object in Microsoft SQL Server provides counters for monitoring query compilation and the types of requests sent to the instance of SQL Server. Monitoring the number of compilations and recompilations together with the number of batches received gives an idea of how quickly SQL Server handles user queries and how effectively the query optimizer processes them.
    1. Batch Requests/sec
      The number of Transact-SQL batch requests received per second. This figure is affected by all constraints (I/O, number of users, cache size, query complexity, and so on). A high batch request rate indicates good throughput.
      Zabbix : perf_counter["\MSSQL$INSTANCE_NAME:SQL Statistics\Batch Requests/sec", 30]


In addition to all the above, you can also configure other data items (as well as create triggers on them with subsequent notification). For example:
1) the amount of free disk space
2) the size of the database data and log files
etc.
However, none of these indicators reveal problems with specific queries in real time.
For that you need to create your own custom counters.
For confidentiality reasons I will not give examples of such counters; besides, they are configured uniquely for each system. I will note, however, that for systems such as 1C, NAV and CRM, specialized counters can be created together with the corresponding developers.
I will give an example of a generalized indicator that shows how many requests are running and how many are pending (suspended or blocked) at any given moment.
To do this, you must create a stored procedure:
Code
USE [__]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [nav].[ZabbixGetCountRequestStatus]
	@Status nvarchar(255)
AS
BEGIN
	/* Returns the number of requests that currently have the given status */
	SET NOCOUNT ON;

	select count(*) as [Count]
	from sys.dm_exec_requests ER with(readuncommitted)
	where [status] = @Status
END
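
Before wiring the procedure into Zabbix it is worth checking that it runs and that the monitoring login actually sees other sessions (sys.dm_exec_requests returns all sessions only if the login has the VIEW SERVER STATE permission). A minimal sketch of such a check, assuming the SqlServer PowerShell module is installed and using placeholder server and database names:
Code
 # Smoke test of the stored procedure (SERVER\INSTANCE and YourDatabase are placeholders)
 Import-Module SqlServer
 Invoke-Sqlcmd -ServerInstance 'SERVER\INSTANCE' -Database 'YourDatabase' `
               -Query "EXEC [nav].[ZabbixGetCountRequestStatus] @Status = N'running';"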


Next, go to the folder where Zabbix is located (zabbix\conf\userparams.d), create two files with the .ps1 (PowerShell) extension, and put the following code into each of them:
Code for running requests
 $SQLServer = "_"; $uid = ""; $pwd = ""; $Status="running"; $connectionString = "Server = $SQLServer; Database=_; Integrated Security = False; User ID = $uid; Password = $pwd;"; $connection = New-Object System.Data.SqlClient.SqlConnection; $connection.ConnectionString = $connectionString; #    MSSQL / Create a request directly to MSSQL $SqlCmd = New-Object System.Data.SqlClient.SqlCommand; $SqlCmd.CommandType = [System.Data.CommandType]::StoredProcedure; $SqlCmd.CommandText = "nav.ZabbixGetCountRequestStatus"; $SqlCmd.Connection = $Connection; $paramStatus=$SqlCmd.Parameters.Add("@Status" , [System.Data.SqlDbType]::VarChar); $paramStatus.Value = $Status; $connection.Open(); $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter; $SqlAdapter.SelectCommand = $SqlCmd; $DataSet = New-Object System.Data.DataSet; $SqlAdapter.Fill($DataSet) > $null; $connection.Close(); $result = $DataSet.Tables[0].Rows[0]["Count"]; write-host $result; 


Code for pending requests
 $SQLServer = "_"; $uid = ""; $pwd = ""; $Status="suspended"; $connectionString = "Server = $SQLServer; Database=_; Integrated Security = False; User ID = $uid; Password = $pwd;"; $connection = New-Object System.Data.SqlClient.SqlConnection; $connection.ConnectionString = $connectionString; #    MSSQL / Create a request directly to MSSQL $SqlCmd = New-Object System.Data.SqlClient.SqlCommand; $SqlCmd.CommandType = [System.Data.CommandType]::StoredProcedure; $SqlCmd.CommandText = "nav.ZabbixGetCountRequestStatus"; $SqlCmd.Connection = $Connection; $paramStatus=$SqlCmd.Parameters.Add("@Status" , [System.Data.SqlDbType]::VarChar); $paramStatus.Value = $Status; $connection.Open(); $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter; $SqlAdapter.SelectCommand = $SqlCmd; $DataSet = New-Object System.Data.DataSet; $SqlAdapter.Fill($DataSet) > $null; $connection.Close(); $result = $DataSet.Tables[0].Rows[0]["Count"]; write-host $result; 


Now create a file with user parameters and the .conf extension (or add the lines to an existing user parameters file, if one was created earlier) and insert the following lines:
UserParameter=RUNNING_REQUESTS_COUNT, powershell -NoProfile -ExecutionPolicy Bypass -File FULL_PATH\zabbix\conf\userparams.d\FILE_NAME_FOR_RUNNING.ps1
UserParameter=PENDING_REQUESTS_COUNT, powershell -NoProfile -ExecutionPolicy Bypass -File FULL_PATH\zabbix\conf\userparams.d\FILE_NAME_FOR_PENDING.ps1
After this, save the .conf file and restart the Zabbix agent.
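On Windows the restart and a quick check of the scripts can also be done from PowerShell. A small sketch, assuming the agent service is named "Zabbix Agent" and using the placeholder path and file names from above:
Code
 # Restart the agent so it picks up the new UserParameter entries
 # (the service name "Zabbix Agent" is an assumption - check it with Get-Service)
 Restart-Service -Name 'Zabbix Agent'

 # Run the check script directly to make sure it prints a single number
 powershell -NoProfile -ExecutionPolicy Bypass -File 'FULL_PATH\zabbix\conf\userparams.d\FILE_NAME_FOR_RUNNING.ps1'
From the Zabbix server the new key can then be queried with zabbix_get, for example: zabbix_get -s NODE_NAME -k RUNNING_REQUESTS_COUNT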
After this, add two new data items in Zabbix (in this case the item names and keys are the same):
RUNNING_REQUESTS_COUNT
PENDING_REQUESTS_COUNT
Now you can create graphs and triggers on these custom data items.
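For example, a trigger on the pending requests item could look like this (NODE_NAME and the item key are the placeholders used above, and the threshold of 10 is purely illustrative):
{NODE_NAME:PENDING_REQUESTS_COUNT.min(5m)}>10, level-high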

If the number of pending requests increases dramatically, the following query displays all running and pending requests at the current moment, with details of where and under which login each request is executed, its text and query plan, and other information:
Code
 /*,      ,   ,     */ with tbl0 as ( select ES.[session_id] ,ER.[blocking_session_id] ,ER.[request_id] ,ER.[start_time] ,ER.[status] ,ES.[status] as [status_session] ,ER.[command] ,ER.[percent_complete] ,DB_Name(coalesce(ER.[database_id], ES.[database_id])) as [DBName] ,(select top(1) [text] from sys.dm_exec_sql_text(ER.[sql_handle])) as [TSQL] ,(select top(1) [objectid] from sys.dm_exec_sql_text(ER.[sql_handle])) as [objectid] ,(select top(1) [query_plan] from sys.dm_exec_query_plan(ER.[plan_handle])) as [QueryPlan] ,ER.[wait_type] ,ES.[login_time] ,ES.[host_name] ,ES.[program_name] ,ER.[wait_time] ,ER.[last_wait_type] ,ER.[wait_resource] ,ER.[open_transaction_count] ,ER.[open_resultset_count] ,ER.[transaction_id] ,ER.[context_info] ,ER.[estimated_completion_time] ,ER.[cpu_time] ,ER.[total_elapsed_time] ,ER.[scheduler_id] ,ER.[task_address] ,ER.[reads] ,ER.[writes] ,ER.[logical_reads] ,ER.[text_size] ,ER.[language] ,ER.[date_format] ,ER.[date_first] ,ER.[quoted_identifier] ,ER.[arithabort] ,ER.[ansi_null_dflt_on] ,ER.[ansi_defaults] ,ER.[ansi_warnings] ,ER.[ansi_padding] ,ER.[ansi_nulls] ,ER.[concat_null_yields_null] ,ER.[transaction_isolation_level] ,ER.[lock_timeout] ,ER.[deadlock_priority] ,ER.[row_count] ,ER.[prev_error] ,ER.[nest_level] ,ER.[granted_query_memory] ,ER.[executing_managed_code] ,ER.[group_id] ,ER.[query_hash] ,ER.[query_plan_hash] ,EC.[most_recent_session_id] ,EC.[connect_time] ,EC.[net_transport] ,EC.[protocol_type] ,EC.[protocol_version] ,EC.[endpoint_id] ,EC.[encrypt_option] ,EC.[auth_scheme] ,EC.[node_affinity] ,EC.[num_reads] ,EC.[num_writes] ,EC.[last_read] ,EC.[last_write] ,EC.[net_packet_size] ,EC.[client_net_address] ,EC.[client_tcp_port] ,EC.[local_net_address] ,EC.[local_tcp_port] ,EC.[parent_connection_id] ,EC.[most_recent_sql_handle] ,ES.[host_process_id] ,ES.[client_version] ,ES.[client_interface_name] ,ES.[security_id] ,ES.[login_name] ,ES.[nt_domain] ,ES.[nt_user_name] ,ES.[memory_usage] ,ES.[total_scheduled_time] ,ES.[last_request_start_time] ,ES.[last_request_end_time] ,ES.[is_user_process] ,ES.[original_security_id] ,ES.[original_login_name] ,ES.[last_successful_logon] ,ES.[last_unsuccessful_logon] ,ES.[unsuccessful_logons] ,ES.[authenticating_database_id] ,ER.[sql_handle] ,ER.[statement_start_offset] ,ER.[statement_end_offset] ,ER.[plan_handle] ,ER.[dop] ,coalesce(ER.[database_id], ES.[database_id]) as [database_id] ,ER.[user_id] ,ER.[connection_id] from sys.dm_exec_requests ER with(readuncommitted) right join sys.dm_exec_sessions ES with(readuncommitted) on ES.session_id = ER.session_id left join sys.dm_exec_connections EC with(readuncommitted) on EC.session_id = ES.session_id ) , tbl as ( select [session_id] ,[blocking_session_id] ,[request_id] ,[start_time] ,[status] ,[status_session] ,[command] ,[percent_complete] ,[DBName] ,OBJECT_name([objectid], [database_id]) as [object] ,[TSQL] ,[QueryPlan] ,[wait_type] ,[login_time] ,[host_name] ,[program_name] ,[wait_time] ,[last_wait_type] ,[wait_resource] ,[open_transaction_count] ,[open_resultset_count] ,[transaction_id] ,[context_info] ,[estimated_completion_time] ,[cpu_time] ,[total_elapsed_time] ,[scheduler_id] ,[task_address] ,[reads] ,[writes] ,[logical_reads] ,[text_size] ,[language] ,[date_format] ,[date_first] ,[quoted_identifier] ,[arithabort] ,[ansi_null_dflt_on] ,[ansi_defaults] ,[ansi_warnings] ,[ansi_padding] ,[ansi_nulls] ,[concat_null_yields_null] ,[transaction_isolation_level] ,[lock_timeout] ,[deadlock_priority] ,[row_count] ,[prev_error] ,[nest_level] ,[granted_query_memory] 
,[executing_managed_code] ,[group_id] ,[query_hash] ,[query_plan_hash] ,[most_recent_session_id] ,[connect_time] ,[net_transport] ,[protocol_type] ,[protocol_version] ,[endpoint_id] ,[encrypt_option] ,[auth_scheme] ,[node_affinity] ,[num_reads] ,[num_writes] ,[last_read] ,[last_write] ,[net_packet_size] ,[client_net_address] ,[client_tcp_port] ,[local_net_address] ,[local_tcp_port] ,[parent_connection_id] ,[most_recent_sql_handle] ,[host_process_id] ,[client_version] ,[client_interface_name] ,[security_id] ,[login_name] ,[nt_domain] ,[nt_user_name] ,[memory_usage] ,[total_scheduled_time] ,[last_request_start_time] ,[last_request_end_time] ,[is_user_process] ,[original_security_id] ,[original_login_name] ,[last_successful_logon] ,[last_unsuccessful_logon] ,[unsuccessful_logons] ,[authenticating_database_id] ,[sql_handle] ,[statement_start_offset] ,[statement_end_offset] ,[plan_handle] ,[dop] ,[database_id] ,[user_id] ,[connection_id] from tbl0 where [status] in ('suspended', 'running', 'runnable') ) , tbl_group as ( select [blocking_session_id] from tbl where [blocking_session_id]<>0 group by [blocking_session_id] ) , tbl_res_rec as ( select [session_id] ,[blocking_session_id] ,[request_id] ,[start_time] ,[status] ,[status_session] ,[command] ,[percent_complete] ,[DBName] ,[object] ,[TSQL] ,[QueryPlan] ,[wait_type] ,[login_time] ,[host_name] ,[program_name] ,[wait_time] ,[last_wait_type] ,[wait_resource] ,[open_transaction_count] ,[open_resultset_count] ,[transaction_id] ,[context_info] ,[estimated_completion_time] ,[cpu_time] ,[total_elapsed_time] ,[scheduler_id] ,[task_address] ,[reads] ,[writes] ,[logical_reads] ,[text_size] ,[language] ,[date_format] ,[date_first] ,[quoted_identifier] ,[arithabort] ,[ansi_null_dflt_on] ,[ansi_defaults] ,[ansi_warnings] ,[ansi_padding] ,[ansi_nulls] ,[concat_null_yields_null] ,[transaction_isolation_level] ,[lock_timeout] ,[deadlock_priority] ,[row_count] ,[prev_error] ,[nest_level] ,[granted_query_memory] ,[executing_managed_code] ,[group_id] ,[query_hash] ,[query_plan_hash] ,[most_recent_session_id] ,[connect_time] ,[net_transport] ,[protocol_type] ,[protocol_version] ,[endpoint_id] ,[encrypt_option] ,[auth_scheme] ,[node_affinity] ,[num_reads] ,[num_writes] ,[last_read] ,[last_write] ,[net_packet_size] ,[client_net_address] ,[client_tcp_port] ,[local_net_address] ,[local_tcp_port] ,[parent_connection_id] ,[most_recent_sql_handle] ,[host_process_id] ,[client_version] ,[client_interface_name] ,[security_id] ,[login_name] ,[nt_domain] ,[nt_user_name] ,[memory_usage] ,[total_scheduled_time] ,[last_request_start_time] ,[last_request_end_time] ,[is_user_process] ,[original_security_id] ,[original_login_name] ,[last_successful_logon] ,[last_unsuccessful_logon] ,[unsuccessful_logons] ,[authenticating_database_id] ,[sql_handle] ,[statement_start_offset] ,[statement_end_offset] ,[plan_handle] ,[dop] ,[database_id] ,[user_id] ,[connection_id] , 0 as [is_blocking_other_session] from tbl union all select tbl0.[session_id] ,tbl0.[blocking_session_id] ,tbl0.[request_id] ,tbl0.[start_time] ,tbl0.[status] ,tbl0.[status_session] ,tbl0.[command] ,tbl0.[percent_complete] ,tbl0.[DBName] ,OBJECT_name(tbl0.[objectid], tbl0.[database_id]) as [object] ,tbl0.[TSQL] ,tbl0.[QueryPlan] ,tbl0.[wait_type] ,tbl0.[login_time] ,tbl0.[host_name] ,tbl0.[program_name] ,tbl0.[wait_time] ,tbl0.[last_wait_type] ,tbl0.[wait_resource] ,tbl0.[open_transaction_count] ,tbl0.[open_resultset_count] ,tbl0.[transaction_id] ,tbl0.[context_info] ,tbl0.[estimated_completion_time] ,tbl0.[cpu_time] 
,tbl0.[total_elapsed_time] ,tbl0.[scheduler_id] ,tbl0.[task_address] ,tbl0.[reads] ,tbl0.[writes] ,tbl0.[logical_reads] ,tbl0.[text_size] ,tbl0.[language] ,tbl0.[date_format] ,tbl0.[date_first] ,tbl0.[quoted_identifier] ,tbl0.[arithabort] ,tbl0.[ansi_null_dflt_on] ,tbl0.[ansi_defaults] ,tbl0.[ansi_warnings] ,tbl0.[ansi_padding] ,tbl0.[ansi_nulls] ,tbl0.[concat_null_yields_null] ,tbl0.[transaction_isolation_level] ,tbl0.[lock_timeout] ,tbl0.[deadlock_priority] ,tbl0.[row_count] ,tbl0.[prev_error] ,tbl0.[nest_level] ,tbl0.[granted_query_memory] ,tbl0.[executing_managed_code] ,tbl0.[group_id] ,tbl0.[query_hash] ,tbl0.[query_plan_hash] ,tbl0.[most_recent_session_id] ,tbl0.[connect_time] ,tbl0.[net_transport] ,tbl0.[protocol_type] ,tbl0.[protocol_version] ,tbl0.[endpoint_id] ,tbl0.[encrypt_option] ,tbl0.[auth_scheme] ,tbl0.[node_affinity] ,tbl0.[num_reads] ,tbl0.[num_writes] ,tbl0.[last_read] ,tbl0.[last_write] ,tbl0.[net_packet_size] ,tbl0.[client_net_address] ,tbl0.[client_tcp_port] ,tbl0.[local_net_address] ,tbl0.[local_tcp_port] ,tbl0.[parent_connection_id] ,tbl0.[most_recent_sql_handle] ,tbl0.[host_process_id] ,tbl0.[client_version] ,tbl0.[client_interface_name] ,tbl0.[security_id] ,tbl0.[login_name] ,tbl0.[nt_domain] ,tbl0.[nt_user_name] ,tbl0.[memory_usage] ,tbl0.[total_scheduled_time] ,tbl0.[last_request_start_time] ,tbl0.[last_request_end_time] ,tbl0.[is_user_process] ,tbl0.[original_security_id] ,tbl0.[original_login_name] ,tbl0.[last_successful_logon] ,tbl0.[last_unsuccessful_logon] ,tbl0.[unsuccessful_logons] ,tbl0.[authenticating_database_id] ,tbl0.[sql_handle] ,tbl0.[statement_start_offset] ,tbl0.[statement_end_offset] ,tbl0.[plan_handle] ,tbl0.[dop] ,tbl0.[database_id] ,tbl0.[user_id] ,tbl0.[connection_id] , 1 as [is_blocking_other_session] from tbl_group as tg inner join tbl0 on tg.blocking_session_id=tbl0.session_id ) ,tbl_res_rec_g as ( select [plan_handle], [sql_handle], cast([start_time] as date) as [start_time] from tbl_res_rec group by [plan_handle], [sql_handle], cast([start_time] as date) ) ,tbl_rec_stat_g as ( select qs.[plan_handle] ,qs.[sql_handle] --,cast(qs.[last_execution_time] as date) as [last_execution_time] ,min(qs.[creation_time]) as [creation_time] ,max(qs.[execution_count]) as [execution_count] ,max(qs.[total_worker_time]) as [total_worker_time] ,min(qs.[last_worker_time]) as [min_last_worker_time] ,max(qs.[last_worker_time]) as [max_last_worker_time] ,min(qs.[min_worker_time]) as [min_worker_time] ,max(qs.[max_worker_time]) as [max_worker_time] ,max(qs.[total_physical_reads]) as [total_physical_reads] ,min(qs.[last_physical_reads]) as [min_last_physical_reads] ,max(qs.[last_physical_reads]) as [max_last_physical_reads] ,min(qs.[min_physical_reads]) as [min_physical_reads] ,max(qs.[max_physical_reads]) as [max_physical_reads] ,max(qs.[total_logical_writes]) as [total_logical_writes] ,min(qs.[last_logical_writes]) as [min_last_logical_writes] ,max(qs.[last_logical_writes]) as [max_last_logical_writes] ,min(qs.[min_logical_writes]) as [min_logical_writes] ,max(qs.[max_logical_writes]) as [max_logical_writes] ,max(qs.[total_logical_reads]) as [total_logical_reads] ,min(qs.[last_logical_reads]) as [min_last_logical_reads] ,max(qs.[last_logical_reads]) as [max_last_logical_reads] ,min(qs.[min_logical_reads]) as [min_logical_reads] ,max(qs.[max_logical_reads]) as [max_logical_reads] ,max(qs.[total_clr_time]) as [total_clr_time] ,min(qs.[last_clr_time]) as [min_last_clr_time] ,max(qs.[last_clr_time]) as [max_last_clr_time] ,min(qs.[min_clr_time]) as [min_clr_time] 
,max(qs.[max_clr_time]) as [max_clr_time] ,max(qs.[total_elapsed_time]) as [total_elapsed_time] ,min(qs.[last_elapsed_time]) as [min_last_elapsed_time] ,max(qs.[last_elapsed_time]) as [max_last_elapsed_time] ,min(qs.[min_elapsed_time]) as [min_elapsed_time] ,max(qs.[max_elapsed_time]) as [max_elapsed_time] ,max(qs.[total_rows]) as [total_rows] ,min(qs.[last_rows]) as [min_last_rows] ,max(qs.[last_rows]) as [max_last_rows] ,min(qs.[min_rows]) as [min_rows] ,max(qs.[max_rows]) as [max_rows] ,max(qs.[total_dop]) as [total_dop] ,min(qs.[last_dop]) as [min_last_dop] ,max(qs.[last_dop]) as [max_last_dop] ,min(qs.[min_dop]) as [min_dop] ,max(qs.[max_dop]) as [max_dop] ,max(qs.[total_grant_kb]) as [total_grant_kb] ,min(qs.[last_grant_kb]) as [min_last_grant_kb] ,max(qs.[last_grant_kb]) as [max_last_grant_kb] ,min(qs.[min_grant_kb]) as [min_grant_kb] ,max(qs.[max_grant_kb]) as [max_grant_kb] ,max(qs.[total_used_grant_kb]) as [total_used_grant_kb] ,min(qs.[last_used_grant_kb]) as [min_last_used_grant_kb] ,max(qs.[last_used_grant_kb]) as [max_last_used_grant_kb] ,min(qs.[min_used_grant_kb]) as [min_used_grant_kb] ,max(qs.[max_used_grant_kb]) as [max_used_grant_kb] ,max(qs.[total_ideal_grant_kb]) as [total_ideal_grant_kb] ,min(qs.[last_ideal_grant_kb]) as [min_last_ideal_grant_kb] ,max(qs.[last_ideal_grant_kb]) as [max_last_ideal_grant_kb] ,min(qs.[min_ideal_grant_kb]) as [min_ideal_grant_kb] ,max(qs.[max_ideal_grant_kb]) as [max_ideal_grant_kb] ,max(qs.[total_reserved_threads]) as [total_reserved_threads] ,min(qs.[last_reserved_threads]) as [min_last_reserved_threads] ,max(qs.[last_reserved_threads]) as [max_last_reserved_threads] ,min(qs.[min_reserved_threads]) as [min_reserved_threads] ,max(qs.[max_reserved_threads]) as [max_reserved_threads] ,max(qs.[total_used_threads]) as [total_used_threads] ,min(qs.[last_used_threads]) as [min_last_used_threads] ,max(qs.[last_used_threads]) as [max_last_used_threads] ,min(qs.[min_used_threads]) as [min_used_threads] ,max(qs.[max_used_threads]) as [max_used_threads] from tbl_res_rec_g as t inner join sys.dm_exec_query_stats as qs with(readuncommitted) on t.[plan_handle]=qs.[plan_handle] and t.[sql_handle]=qs.[sql_handle] and t.[start_time]=cast(qs.[last_execution_time] as date) group by qs.[plan_handle] ,qs.[sql_handle] --,qs.[last_execution_time] ) select t.[session_id] -- ,t.[blocking_session_id] --,     [session_id] ,t.[request_id] -- .     
,t.[start_time] --    ,DateDiff(second, t.[start_time], GetDate()) as [date_diffSec] --         ,t.[status] --  ,t.[status_session] --  ,t.[command] --      , COALESCE( CAST(NULLIF(t.[total_elapsed_time] / 1000, 0) as BIGINT) ,CASE WHEN (t.[status_session] <> 'running' and isnull(t.[status], '') <> 'running') THEN DATEDIFF(ss,0,getdate() - nullif(t.[last_request_end_time], '1900-01-01T00:00:00.000')) END ) as [total_time, sec] --      , CAST(NULLIF((CAST(t.[total_elapsed_time] as BIGINT) - CAST(t.[wait_time] AS BIGINT)) / 1000, 0 ) as bigint) as [work_time, sec] --         , CASE WHEN (t.[status_session] <> 'running' AND ISNULL(t.[status],'') <> 'running') THEN DATEDIFF(ss,0,getdate() - nullif(t.[last_request_end_time], '1900-01-01T00:00:00.000')) END as [sleep_time, sec] --    , NULLIF( CAST((t.[logical_reads] + t.[writes]) * 8 / 1024 as numeric(38,2)), 0) as [IO, MB] --      , CASE t.transaction_isolation_level WHEN 0 THEN 'Unspecified' WHEN 1 THEN 'ReadUncommited' WHEN 2 THEN 'ReadCommited' WHEN 3 THEN 'Repetable' WHEN 4 THEN 'Serializable' WHEN 5 THEN 'Snapshot' END as [transaction_isolation_level_desc] --   () ,t.[percent_complete] --      ,t.[DBName] -- ,t.[object] -- , SUBSTRING( t.[TSQL] , t.[statement_start_offset]/2+1 , ( CASE WHEN ((t.[statement_start_offset]<0) OR (t.[statement_end_offset]<0)) THEN DATALENGTH (t.[TSQL]) ELSE t.[statement_end_offset] END - t.[statement_start_offset] )/2 +1 ) as [CURRENT_REQUEST] --     ,t.[TSQL] --   ,t.[QueryPlan] --   ,t.[wait_type] --     ,      (sys.dm_os_wait_stats) ,t.[login_time] --   ,t.[host_name] --   ,   .       NULL ,t.[program_name] --  ,   .       NULL ,cast(t.[wait_time]/1000 as decimal(18,3)) as [wait_timeSec] --     ,       ( ) ,t.[wait_time] --     ,       ( ) ,t.[last_wait_type] --    ,       ,t.[wait_resource] --     ,    ,     ,t.[open_transaction_count] -- ,     ,t.[open_resultset_count] --  ,     ,t.[transaction_id] -- ,     ,t.[context_info] -- CONTEXT_INFO  ,cast(t.[estimated_completion_time]/1000 as decimal(18,3)) as [estimated_completion_timeSec] --   .    NULL ,t.[estimated_completion_time] --   .    NULL ,cast(t.[cpu_time]/1000 as decimal(18,3)) as [cpu_timeSec] --  ( ),     ,t.[cpu_time] --  ( ),     ,cast(t.[total_elapsed_time]/1000 as decimal(18,3)) as [total_elapsed_timeSec] -- ,      ( ) ,t.[total_elapsed_time] -- ,      ( ) ,t.[scheduler_id] -- ,     ,t.[task_address] --  ,   ,     ,t.[reads] --  ,    ,t.[writes] --  ,    ,t.[logical_reads] --   ,    ,t.[text_size] --  TEXTSIZE    ,t.[language] --     ,t.[date_format] --  DATEFORMAT    ,t.[date_first] --  DATEFIRST    ,t.[quoted_identifier] --1 =  QUOTED_IDENTIFIER    (ON).    — 0 ,t.[arithabort] --1 =  ARITHABORT    (ON).    — 0 ,t.[ansi_null_dflt_on] --1 =  ANSI_NULL_DFLT_ON    (ON).    — 0 ,t.[ansi_defaults] --1 =  ANSI_DEFAULTS    (ON).    — 0 ,t.[ansi_warnings] --1 =  ANSI_WARNINGS    (ON).    — 0 ,t.[ansi_padding] --1 =  ANSI_PADDING    (ON) ,t.[ansi_nulls] --1 =  ANSI_NULLS    (ON).    — 0 ,t.[concat_null_yields_null] --1 =  CONCAT_NULL_YIELDS_NULL    (ON).    — 0 ,t.[transaction_isolation_level] -- ,        ,cast(t.[lock_timeout]/1000 as decimal(18,3)) as [lock_timeoutSec] --      ( ) ,t.[lock_timeout] --      ( ) ,t.[deadlock_priority] --  DEADLOCK_PRIORITY    ,t.[row_count] -- ,      ,t.[prev_error] -- ,     ,t.[nest_level] --   ,     ,t.[granted_query_memory] -- ,      (1 -  8 ) ,t.[executing_managed_code] --,           CLR (, ,   ). --      ,    CLR   ,       Transact-SQL ,t.[group_id] --   ,     ,t.[query_hash] -- -           . 
--          ,       ,t.[query_plan_hash] -- -             . --              ,t.[most_recent_session_id] --      ,     ,t.[connect_time] --    ,t.[net_transport] --    ,    ,t.[protocol_type] --      ,t.[protocol_version] --    ,     ,t.[endpoint_id] --,   .   endpoint_id       sys.endpoints ,t.[encrypt_option] -- , ,       ,t.[auth_scheme] --    (SQL Server  Windows),     ,t.[node_affinity] --  ,     ,t.[num_reads] -- ,     ,t.[num_writes] -- ,     ,t.[last_read] --       ,t.[last_write] --       ,t.[net_packet_size] --  ,     ,t.[client_net_address] --    ,t.[client_tcp_port] --    ,      ,t.[local_net_address] --IP- ,     .    ,        TCP ,t.[local_tcp_port] --TCP- ,     TCP ,t.[parent_connection_id] --  ,    MARS ,t.[most_recent_sql_handle] --   SQL,     .      most_recent_sql_handle   most_recent_session_id ,t.[host_process_id] --   ,   .       NULL ,t.[client_version] -- TDS- ,       .       NULL ,t.[client_interface_name] --   ,       .       NULL ,t.[security_id] --  Microsoft Windows,     ,t.[login_name] --SQL Server  ,     . --    ,      , .  original_login_name. --  SQL Server        ,    Windows ,t.[nt_domain] -- Windows  ,        Windows   . --    ,    ,    NULL ,t.[nt_user_name] --  Windows  ,        Windows   . --    ,    ,    NULL ,t.[memory_usage] -- 8-  ,    ,t.[total_scheduled_time] -- ,    (   )  ,   ,t.[last_request_start_time] --,      .    ,     ,t.[last_request_end_time] --        ,t.[is_user_process] --0,    .      1 ,t.[original_security_id] --Microsoft   Windows,    original_login_name ,t.[original_login_name] --SQL Server  ,      . --     SQL Server,   ,    Windows, --  ,     . -- ,               . --  EXECUTE AS  ,t.[last_successful_logon] --        original_login_name     ,t.[last_unsuccessful_logon] --        original_login_name     ,t.[unsuccessful_logons] --        original_login_name   last_successful_logon   login_time ,t.[authenticating_database_id] --  ,    . --       0. --             ,t.[sql_handle] ---  SQL- ,t.[statement_start_offset] --          ,     . --       sql_handle, statement_end_offset  sys.dm_exec_sql_text --         ,t.[statement_end_offset] --          ,     . --       sql_handle, statement_end_offset  sys.dm_exec_sql_text --         ,t.[plan_handle] ---   SQL ,t.[database_id] --  ,     ,t.[user_id] -- ,    ,t.[connection_id] -- ,     ,t.[is_blocking_other_session] --1-    , 0-      ,coalesce(t.[dop], mg.[dop]) as [dop] --   ,mg.[request_time] --        ,mg.[grant_time] --  ,     .   NULL,       ,mg.[requested_memory_kb] --      ,mg.[granted_memory_kb] --      . --   NULL,      . --       requested_memory_kb. --          , --        ,mg.[required_memory_kb] --     (),     . -- requested_memory_kb       ,mg.[used_memory_kb] --       ( ) ,mg.[max_used_memory_kb] --          ,mg.[query_cost] --   ,mg.[timeout_sec] --             ,mg.[resource_semaphore_id] --   ,     ,mg.[queue_id] --  ,       . -- NULL,     ,mg.[wait_order] --       queue_id. --      ,          . -- NULL,     ,mg.[is_next_candidate] --      (1 = , 0 = , NULL =   ) ,mg.[wait_time_ms] --   .  NULL,     ,mg.[pool_id] --  ,        ,mg.[is_small] -- 1 ,          . -- 0     ,mg.[ideal_memory_kb] --,   (),  ,        . --     ,mg.[reserved_worker_count] --  ,     ,      ,    ,mg.[used_worker_count] --  ,    ,mg.[max_used_worker_count] --??? ,mg.[reserved_node_bitmap] --??? ,pl.[bucketid] --  ,    . --    0    -   . --  SQL Plans  Object Plans  -   10007  32-    40009 —  64-. --  Bound Trees  -   1009  32-    4001  64-. --      -   127  32-  64-   ,pl.[refcounts] --  ,     . 
-- refcounts       1,     ,pl.[usecounts] --    . --  ,       . --        showplan ,pl.[size_in_bytes] -- ,    ,pl.[memory_object_address] --   . --      sys.dm_os_memory_objects, --     , --   sys.dm_os_memory_cache_entries       ,pl.[cacheobjtype] --   .       ,pl.[objtype] -- .       ,pl.[parent_plan_handle] --  --  sys.dm_exec_query_stats   ,     (, ) ,qs.[creation_time] --   ,qs.[execution_count] --       ,qs.[total_worker_time] --  ,       ,   (    ) ,qs.[min_last_worker_time] --  ,     ,   (    ) ,qs.[max_last_worker_time] --  ,     ,   (    ) ,qs.[min_worker_time] --  , -    ,   (    ) ,qs.[max_worker_time] --  , -    ,   (    ) ,qs.[total_physical_reads] --           . --   0       ,qs.[min_last_physical_reads] --         . --   0       ,qs.[max_last_physical_reads] --         . --   0       ,qs.[min_physical_reads] --        . --   0       ,qs.[max_physical_reads] --        . --   0       ,qs.[total_logical_writes] --           . --   0       ,qs.[min_last_logical_writes] --     ,      . --    «» (. . ),    . --   0       ,qs.[max_last_logical_writes] --     ,      . --    «» (. . ),    . --   0       ,qs.[min_logical_writes] --        . --   0       ,qs.[max_logical_writes] --        . --   0       ,qs.[total_logical_reads] --           . --   0       ,qs.[min_last_logical_reads] --         . --   0       ,qs.[max_last_logical_reads] --         . --   0       ,qs.[min_logical_reads] --        . --   0       ,qs.[max_logical_reads] --        . --   0       ,qs.[total_clr_time] --,   (    ), -- Microsoft .NET Framework    (CLR)        . --  CLR    , , ,     ,qs.[min_last_clr_time] -- ,   (    ), --  .NET Framework   CLR     . --  CLR    , , ,     ,qs.[max_last_clr_time] -- ,   (    ), --  .NET Framework   CLR     . --  CLR    , , ,     ,qs.[min_clr_time] -- , -       .NET Framework  CLR, --  (    ). --  CLR    , , ,     ,qs.[max_clr_time] -- , -       CLR .NET Framework, --  (    ). --  CLR    , , ,     --,qs.[total_elapsed_time] -- ,    ,   (    ) ,qs.[min_last_elapsed_time] -- ,     ,   (    ) ,qs.[max_last_elapsed_time] -- ,     ,   (    ) ,qs.[min_elapsed_time] -- , -    ,   (    ) ,qs.[max_elapsed_time] -- , -    ,   (    ) ,qs.[total_rows] --  ,  .     null. --   0,             ,qs.[min_last_rows] --  ,    .     null. --   0,             ,qs.[max_last_rows] --  ,    .     null. --   0,             ,qs.[min_rows] --  , -        --   0,             ,qs.[max_rows] --  , -        --   0,             ,qs.[total_dop] --          . --    0    ,    ,qs.[min_last_dop] --  ,     . --    0    ,    ,qs.[max_last_dop] --  ,     . --    0    ,    ,qs.[min_dop] --     -     . --    0    ,    ,qs.[max_dop] --     -     . --    0    ,    ,qs.[total_grant_kb] --        ,     . --    0    ,    ,qs.[min_last_grant_kb] --      ,     . --    0    ,    ,qs.[max_last_grant_kb] --      ,     . --    0    ,    ,qs.[min_grant_kb] --              . --    0    ,    ,qs.[max_grant_kb] --              . --    0    ,    ,qs.[total_used_grant_kb] --        ,     . --    0    ,    ,qs.[min_last_used_grant_kb] --      ,     . --    0    ,    ,qs.[max_last_used_grant_kb] --      ,     . --    0    ,    ,qs.[min_used_grant_kb] --             . --    0    ,    ,qs.[max_used_grant_kb] --             . --    0    ,    ,qs.[total_ideal_grant_kb] --     ,      . --    0    ,    ,qs.[min_last_ideal_grant_kb] --  ,    ,     . --    0    ,    ,qs.[max_last_ideal_grant_kb] --  ,    ,     . --    0    ,    ,qs.[min_ideal_grant_kb] --        -      . --    0    ,    ,qs.[max_ideal_grant_kb] --        -      . 
--    0    ,    ,qs.[total_reserved_threads] --        -     . --    0    ,    ,qs.[min_last_reserved_threads] --    ,     . --    0    ,    ,qs.[max_last_reserved_threads] --    ,     . --    0    ,    ,qs.[min_reserved_threads] --    , -     . --    0    ,    ,qs.[max_reserved_threads] --           . --    0    ,    ,qs.[total_used_threads] --       -     . --    0    ,    ,qs.[min_last_used_threads] --    ,     . --    0    ,    ,qs.[max_last_used_threads] --    ,     . --    0    ,    ,qs.[min_used_threads] --    ,     . --    0    ,    ,qs.[max_used_threads] --    ,     . --    0    ,    from tbl_res_rec as t left outer join sys.dm_exec_query_memory_grants as mg on t.[plan_handle]=mg.[plan_handle] and t.[sql_handle]=mg.[sql_handle] left outer join sys.dm_exec_cached_plans as pl on t.[plan_handle]=pl.[plan_handle] left outer join tbl_rec_stat_g as qs on t.[plan_handle]=qs.[plan_handle] and t.[sql_handle]=qs.[sql_handle] --and qs.[last_execution_time]=cast(t.[start_time] as date); 


I also remind you that from the collected statistics you can retrieve the heaviest queries:
Code
 /* creation_time - ,    .      ,         .  ,     ,   (  ),    ,         . last_execution_time -     . execution_count -                -       ,    -          . ,  -    ,    . CPU -      .    ,         ,       .          ,      - . ,      ,    . AvgCPUTime -      . TotDuration -    ,  .         , ,     " ".         CPU (   ) -    ,       - .          ,          .          sys.dm_os_wait_stats. AvgDur -      . Reads -   .     ,      .   -      ,    .     ,          .     ,     ,  ,      ,       . Writes -     .  ,   ""    .  ,       0     ,    ,     ,      tempdb. AggIO -     - ()  ,         ,            . AvgIO -         .        :    -  8192 .      , ""  .       ,    (       ),    ,             .   ,      ,    5,          300,     ,      10.          -         .          ,            .  ,          ,           ,         ,           . ,          -             .    ...   ,    ,        .   -       .  ,    ,       .               . query_text -    database_name -   ,   ,  . NULL    object_name -   (  ),  . */ with s as ( select creation_time, last_execution_time, execution_count, total_worker_time/1000 as CPU, convert(money, (total_worker_time))/(execution_count*1000)as [AvgCPUTime], qs.total_elapsed_time/1000 as TotDuration, convert(money, (qs.total_elapsed_time))/(execution_count*1000)as [AvgDur], total_logical_reads as [Reads], total_logical_writes as [Writes], total_logical_reads+total_logical_writes as [AggIO], convert(money, (total_logical_reads+total_logical_writes)/(execution_count + 0.0))as [AvgIO], [sql_handle], plan_handle, statement_start_offset, statement_end_offset from sys.dm_exec_query_stats as qs with(readuncommitted) where convert(money, (qs.total_elapsed_time))/(execution_count*1000)>=100 --    100  ) select s.creation_time, s.last_execution_time, s.execution_count, s.CPU, s.[AvgCPUTime], s.TotDuration, s.[AvgDur], s.[Reads], s.[Writes], s.[AggIO], s.[AvgIO], --st.text as query_text, case when sql_handle IS NULL then ' ' else(substring(st.text,(s.statement_start_offset+2)/2,( case when s.statement_end_offset =-1 then len(convert(nvarchar(MAX),st.text))*2 else s.statement_end_offset end - s.statement_start_offset)/2 )) end as query_text, db_name(st.dbid) as database_name, object_schema_name(st.objectid, st.dbid)+'.'+object_name(st.objectid, st.dbid) as [object_name], sp.[query_plan], s.[sql_handle], s.plan_handle from s cross apply sys.dm_exec_sql_text(s.[sql_handle]) as st cross apply sys.dm_exec_query_plan(s.[plan_handle]) as sp 


You can write a similar check for MySQL as well. To do this, install mysql-connector-net and then write something like this:
Code for pending requests
 # Connection settings for MySQL
 [string]$sMySQLUserName = 'UserName'
 [string]$sMySQLPW = 'UserPassword'
 [string]$sMySQLDB = 'db'
 [string]$sMySQLHost = 'IP-address'

 [void][System.Reflection.Assembly]::LoadWithPartialName("MySql.Data");
 [string]$sConnectionString = "server="+$sMySQLHost+";port=3306;uid=" + $sMySQLUserName + ";pwd="+"'" + $sMySQLPW +"'"+ ";database="+$sMySQLDB;

 # Open a database connection
 $oConnection = New-Object MySql.Data.MySqlClient.MySqlConnection($sConnectionString)
 $Error.Clear()
 try
 {
     $oConnection.Open()
 }
 catch
 {
     write-warning ("Could not open a connection to Database $sMySQLDB on Host $sMySQLHost. Error: "+$Error[0].ToString())
 }

 # The first query
 # Get an instance of all objects needed for a SELECT query. The Command object
 $oMYSQLCommand = New-Object MySql.Data.MySqlClient.MySqlCommand;
 # DataAdapter object
 $oMYSQLDataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter;
 # And the DataSet object
 $oMYSQLDataSet = New-Object System.Data.DataSet;
 # Assign the established MySQL connection
 $oMYSQLCommand.Connection = $oConnection;
 # Define a SELECT query
 $oMYSQLCommand.CommandText = 'query';
 $oMYSQLDataAdapter.SelectCommand = $oMYSQLCommand;
 # Execute the query
 $count = $oMYSQLDataAdapter.Fill($oMYSQLDataSet, "data");
 $result = $oMYSQLDataSet.Tables[0].Rows[0]["Count"];
 write-host $result;


Result



This article has walked through examples of performance counters (data items) in Zabbix for MS SQL Server. This approach allows administrators to be notified of various problems in real time or after a specified period, and thus helps to prevent a critical problem from bringing the DBMS and the server to a halt, which in turn protects production from downtime.
Previous article : Routine work with the database of the 24x7 information system in MS SQL Server


Source: https://habr.com/ru/post/338498/

