How to check the SQL Server restart time?

An easy way to check when SQL Server was last restarted:

select * from sys.dm_os_sys_info
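If you only want the start time itself, the sqlserver_start_time column of the same DMV returns it directly. A minimal sketch (the column is available on recent versions):

-- Returns the date and time the SQL Server instance was last started
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;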

SQL Server – xp_cmdshell

Windows commands can be run from SQL Server, if xp_cmdshell is enabled on the SQL Server instance. We can check if this is enabled from the system view sys.configurations.

-- CHECK FOR "xp_cmdshell"
SELECT name, CONVERT(INT, ISNULL(value, value_in_use)) AS IsConfigured 
FROM sys.configurations 
WHERE name = 'xp_cmdshell';

-- CHECK FOR "show advanced options"
SELECT name, CONVERT(INT, ISNULL(value, value_in_use)) AS IsConfigured 
FROM sys.configurations 
WHERE name = 'show advanced options';
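If both options report 0, xp_cmdshell can be enabled with sp_configure. A minimal sketch ('show advanced options' must be enabled first, and xp_cmdshell has security implications that should be reviewed before enabling it on a production instance):

-- Enable advanced options, then xp_cmdshell (consider the security impact first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;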

Blocking sessions in SQL Server

The query below is very handy for finding blocking sessions in SQL Server.

SET NOCOUNT ON
GO
SELECT SPID, BLOCKED, REPLACE (REPLACE (T.TEXT, CHAR(10), ' '), CHAR (13), ' ' ) AS BATCH
INTO #T
FROM sys.sysprocesses R CROSS APPLY sys.dm_exec_sql_text(R.SQL_HANDLE) T
GO
WITH BLOCKERS (SPID, BLOCKED, LEVEL, BATCH)
AS
(
SELECT SPID,
BLOCKED,
CAST (REPLICATE ('0', 4-LEN (CAST (SPID AS VARCHAR))) + CAST (SPID AS VARCHAR) AS VARCHAR (1000)) AS LEVEL,
BATCH FROM #T R
WHERE (BLOCKED = 0 OR BLOCKED = SPID)
AND EXISTS (SELECT * FROM #T R2 WHERE R2.BLOCKED = R.SPID AND R2.BLOCKED <> R2.SPID)
UNION ALL
SELECT R.SPID,
R.BLOCKED,
CAST (BLOCKERS.LEVEL + RIGHT (CAST ((1000 + R.SPID) AS VARCHAR (100)), 4) AS VARCHAR (1000)) AS LEVEL,
R.BATCH FROM #T AS R
INNER JOIN BLOCKERS ON R.BLOCKED = BLOCKERS.SPID WHERE R.BLOCKED > 0 AND R.BLOCKED <> R.SPID
)
SELECT N'    ' + REPLICATE (N'|         ', LEN (LEVEL)/4 - 1) +
CASE WHEN (LEN(LEVEL)/4 - 1) = 0
THEN 'HEAD -  '
ELSE '|------  ' END
+ CAST (SPID AS NVARCHAR (10)) + N' ' + BATCH AS BLOCKING_TREE
FROM BLOCKERS ORDER BY LEVEL ASC
GO
DROP TABLE #T
GO

The result shows a tree structure where the parent blocking session can be easily identified.
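Once the head blocker has been identified, and if it is confirmed safe to do so, the blocking session can be terminated with KILL. A sketch (62 is a placeholder SPID taken from the blocking tree output):

-- Terminate the head blocking session; replace 62 with the actual SPID
KILL 62;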

To know more about deadlocks, click here.

SQL SERVER – FIX: ERROR 1101: Could not allocate a new page for database because of insufficient disk space in filegroup

Check whether the autogrowth option is turned on for the database.

Make sure there is enough space on the drive where the database resides.
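A quick way to verify both points is to query sys.database_files from the affected database; it shows each file's current size, growth setting, and maximum size. A sketch:

-- Check file sizes and autogrowth settings for the current database
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,            -- size is reported in 8 KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS growth_setting,
       max_size                               -- -1 means the file can grow until the disk is full
FROM sys.database_files;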

Mail alerts for newly added records

Let’s create a new table

CREATE TABLE [dbo].[Users](
[UserId] [int] NOT NULL,
[UserName] [varchar](100) NOT NULL
) ON [PRIMARY]

Now create the trigger that fires for new entries:

CREATE TRIGGER [dbo].[NewUser_Email]
       ON [dbo].Users
AFTER INSERT
AS
BEGIN
       SET NOCOUNT ON;
       -- Note: this reads a single UserId from INSERTED, so it assumes single-row inserts;
       -- a multi-row INSERT fires the trigger once and reports only one of the new rows.
       DECLARE @UserId INT
       SELECT @UserId = INSERTED.UserId
       FROM INSERTED
       DECLARE @body VARCHAR(500) = 'User with ID: ' + CAST(@UserId AS VARCHAR(10)) + ' inserted.'
       EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'Primary Profile'
           ,@recipients = 'santhosh@gmail.com'
           ,@subject = 'New User Added'
           ,@body = @body
           ,@importance ='HIGH'
END

Before doing this, make sure you have Database Mail configured on your SQL Server. Change @profile_name to whatever profile you have configured and @recipients to the list of email addresses that should receive the alerts.
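To confirm which Database Mail profiles exist on the instance (so you know what to put in @profile_name), you can list them from msdb. A sketch:

-- List the Database Mail profiles configured on this instance
EXEC msdb.dbo.sysmail_help_profile_sp;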

Once this is done, you should receive a mail alert for each entry into the table.

INSERT INTO [USERS]
VALUES (1, 'AAA')


Check here to learn how to set up an alert when SQL Server drive space is low.

What's New in the SQL Server 2016 Database Engine

  • You can now configure multiple tempDB database files during SQL Server installation and setup.
  • New Query Store stores query texts, execution plans, and performance metrics within the database, allowing easy monitoring and troubleshooting of performance issues. A dashboard shows which queries consumed the most time, memory or CPU resources.
  • Temporal tables are history tables which record all data changes, complete with the date and time they occurred.
  • New built-in JSON support covers importing, exporting, parsing, and storing JSON (see the sketch after this list).
  • New PolyBase query engine integrates SQL Server with external data in Hadoop or Azure Blob storage. You can import and export data as well as execute queries.
  • The new Stretch Database feature lets you dynamically, securely archive data from a local SQL Server database to an Azure SQL database in the cloud. SQL Server automatically queries both local and remote data in the linked databases.
  • In-memory OLTP:
    • Now supports FOREIGN KEY, UNIQUE, and CHECK constraints, and natively compiled stored procedures now support OR, NOT, SELECT DISTINCT, OUTER JOIN, and subqueries in SELECT.
    • Supports tables up to 2TB (up from 256GB).
    • Has columnstore index enhancements for sorting and Always On Availability Group support.
  • New security features:
    • Always Encrypted: When enabled, only the application that has the encryption key can access the encrypted sensitive data in the SQL Server 2016 database. The key is never passed to SQL Server.
    • Dynamic Data Masking: If specified in the table definition, masked data is hidden from most users, and only users with UNMASK permission can see the complete data.
    • Row Level Security: Data access can be restricted at the database engine level, so users see only what is relevant to them.
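As an illustration of the JSON support mentioned above, the sketch below returns rows as JSON and parses a JSON string back into rows, reusing the dbo.Users table from the mail-alert example; FOR JSON and OPENJSON are the relevant 2016 features.

-- Return query results as JSON
SELECT UserId, UserName
FROM dbo.Users
FOR JSON AUTO;

-- Parse a JSON string into a rowset (requires database compatibility level 130 or higher)
SELECT *
FROM OPENJSON(N'[{"UserId":1,"UserName":"AAA"}]')
     WITH (UserId INT, UserName VARCHAR(100));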

Excessive Query Compilation and Optimization – CPU Bottleneck

Let's say you are experiencing high CPU load or longer transaction execution times, and you are seeing excessive query compilations. To troubleshoot, we need to find which queries are causing the trouble.

Query compilation and optimization is a CPU-intensive process. The cost of optimization increases as the complexity of the query and the underlying schema increases, but even a relatively simple query can take 10-20 milliseconds of CPU time to parse and compile. To mitigate this cost, SQL Server caches and reuses compiled query plans. Each time a new query is received from the client, SQL Server first searches the plan cache (sometimes referred to as the procedure cache) to see whether there is already a compiled plan that can be reused. If a matching query plan cannot be found, SQL Server parses and compiles the incoming query before running it.

Using parameterized queries or stored procedures for OLTP-based applications substantially increases the chance of reusing a cached plan and can result in substantial reductions in SQL Server CPU consumption. You can enable parameterization at the database or query level by using the PARAMETERIZATION FORCED database option or query hint, respectively. However, the best place to parameterize queries is within the application itself (at design time), which also helps mitigate the risk of SQL injection by avoiding string concatenation using parameter values.
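For example, forced parameterization can be turned on for a whole database with the PARAMETERIZATION FORCED option. A sketch (AdventureWorks is a placeholder database name):

-- Ask SQL Server to replace literals with parameters for eligible queries in this database
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED;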

DETECTION

During compilation, SQL Server 2008 computes a "signature" of the query and exposes it as the query_hash column in sys.dm_exec_requests and sys.dm_exec_query_stats, and as the QueryHash attribute in Showplan/Statistics XML. Entities with the same query_hash value have a high probability of referring to the same query text written in a parameterized form. Queries that vary only in literal values should have the same value. For example, the first two queries below share the same query hash, while the third query has a different query hash, because it is performing a different operation.

select * from sys.objects where object_id = 100
select * from sys.objects where object_id = 101
select * from sys.objects where name = 'sysobjects'

The query hash is computed from the tree structure produced during compilation. Whitespace is ignored, as are differences in the use of explicit column lists compared to using an asterisk (*) in the SELECT list. Furthermore, it does not matter if one query uses a fully qualified name and another uses just the table name, as long as they both refer to the same object. All of the following should produce the same query_hash value.

select * from Employee e
select * from Sales.Employee e
select employeeid, name, designation from Sales.Employee e

You can check this by turning on SHOWPLAN_XML and comparing the QueryHash attribute in the returned plans.

set showplan_xml on
go

An easy way to detect applications that submit lots of ad hoc queries is by grouping on the sys.dm_exec_query_stats.query_hash column as follows.

select q.query_hash, 
 q.number_of_entries, 
 t.text as sample_query, 
 p.query_plan as sample_plan
from (select top 20 query_hash, 
 count(*) as number_of_entries, 
 min(sql_handle) as sample_sql_handle, 
 min(plan_handle) as sample_plan_handle
 from sys.dm_exec_query_stats
 group by query_hash
 having count(*) > 1
 order by count(*) desc) as q
 cross apply sys.dm_exec_sql_text(q.sample_sql_handle) as t
 cross apply sys.dm_exec_query_plan(q.sample_plan_handle) as p
go

Queries that have a number_of_entries value in the hundreds or thousands are excellent candidates for parameterization. If you look at the CompileTime and CompileCPU attributes under the <QueryPlan> tag of the sample XML query plan and multiply those values by the number_of_entries value for that query, you can estimate how much compile time and CPU you can eliminate by parameterizing the query (which means the query is compiled once, then cached and reused for subsequent executions). Eliminating these unnecessary cached plans has other intangible benefits as well, such as freeing memory to cache other compiled plans (thereby further reducing compilation overhead) and leaving more memory for the buffer cache.
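As a hypothetical illustration, a query shape with a CompileCPU of 10 ms and a number_of_entries of 6,000 represents roughly 60 seconds of CPU time spent purely on compiling near-identical ad hoc statements, all of which a single parameterized, cached plan would avoid.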

RESOLUTION

Use the query_hash and query_plan_hash values together to determine whether a set of ad hoc queries with the same query_hash value resulted in query plans with the same or different query_plan_hash values, or access path. This is done via a small modification to the earlier query.

select q.query_hash, 
 q.number_of_entries, 
 q.distinct_plans,
 t.text as sample_query, 
 p.query_plan as sample_plan
from (select top 20 query_hash, 
 count(*) as number_of_entries, 
 count(distinct query_plan_hash) as distinct_plans,
 min(sql_handle) as sample_sql_handle, 
 min(plan_handle) as sample_plan_handle
 from sys.dm_exec_query_stats
 group by query_hash
 having count(*) > 1
 order by count(*) desc) as q
 cross apply sys.dm_exec_sql_text(q.sample_sql_handle) as t
 cross apply sys.dm_exec_query_plan(q.sample_plan_handle) as p
go

Note that this new query returns a count of the number of distinct query plans (query_plan_hash values) for a given query_hash value. Rows that return a large number for number_of_entries and a distinct_plans count of 1 are good candidates for parameterization. Even if the number of distinct plans is more than one, you can use sys.dm_exec_query_plan to retrieve the different query plans and examine them to see whether the difference is important and necessary for achieving optimal performance.

After you determine which queries should be parameterized, the best place to parameterize them is the client application. The details of how you do this vary slightly depending on which client API you use, but the one consistent thing across all of the APIs is that instead of building the query string with literal predicates, you build a string with a question mark (?) as a parameter marker.

-- Submitting as parameterized
select * from Sales.SalesOrderHeader where SalesOrderID = ?

The client driver or provider then submits the query in its parameterized form using sp_executesql.

exec sp_executesql N'select * from Sales.SalesOrderHeader where SalesOrderID = @P1', N'@P1 int', 100

Because the query is parameterized, it matches and reuses an existing cached plan.

More on Troubleshooting CPU Bottlenecks


Troubleshooting CPU Bottlenecks

A CPU bottleneck can be caused by insufficient hardware resources for the load, but before you rush to buy more hardware, consider other factors. Most commonly, excessive CPU utilization can be reduced by query tuning, application design changes, and optimizing the system configuration.

Identify the largest consumers of CPU and see whether they can be tuned. Performance Monitor is generally one of the easiest ways to determine whether the server is CPU bound. Look at the Processor:% Processor Time counter; sustained values in excess of 80% of the processor time per CPU are generally deemed to be a bottleneck. Within SQL Server, you can also check for CPU bottlenecks by querying the DMVs. Requests waiting with the SOS_SCHEDULER_YIELD wait type or a high number of runnable tasks can indicate that runnable threads are waiting to be scheduled and that there might be a CPU bottleneck on the processor. The query below returns the currently cached batches or procedures that are using the most CPU; it aggregates CPU consumed by statements sharing the same plan_handle.

If you have enabled the data collector, the SQL Server Waits chart on the Server Activity report is a very easy way to monitor for CPU bottlenecks over time. Consumed CPU and SOS_SCHEDULER_YIELD waits are rolled up into the CPU Wait Category in this report, and if you do see high CPU utilization, you can drill through to find the queries that are consuming the most resources.

select top 50
    sum(qs.total_worker_time) as total_cpu_time,
    sum(qs.execution_count) as total_execution_count,
    count(*) as  number_of_statements,
    qs.plan_handle
from
    sys.dm_exec_query_stats qs
group by qs.plan_handle
order by sum(qs.total_worker_time) desc
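As noted above, a high number of runnable tasks is another sign of CPU pressure. A quick way to check this from the DMVs (a sketch):

-- Runnable tasks waiting per scheduler; consistently high counts suggest CPU pressure
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';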

Common CPU-intensive operations are listed below; detection and resolution are discussed for each.

  1. Excessive query compilation and optimization.
  2. Unnecessary Recompilation.
  3. Inefficient query plan.
  4. Intraquery Parallelism.
  5. Poor cursor usage.

Check this for more on troubleshooting performance issues in SQL Server.

Tools for resolving resource bottlenecks

Listed below are a few tools that we can use to resolve performance issues in SQL Server.

  • Perfmon (Performance Monitor), which comes with the Windows OS.
  • SQL Server Profiler, which comes with the SQL Server performance tools.
  • DBCC commands.
  • DMVs.
  • Extended Events.
  • Data Collector.

Check this for more on troubleshooting performance issues in SQL Server.

Troubleshooting Performance Issues in SQL Server

There are many reasons for a SQL Server slowdown. To start our investigation, they can be broadly classified into three types.

  1. Resource Bottleneck – CPU, Memory, I/O. Check the tools used for troubleshooting.
  2. Tempdb Bottleneck – There is only one tempdb per SQL Server instance. An application that overloads tempdb through excessive DML/DDL operations can affect other processes on the server.
  3. Slow-running User Query – Performance of existing queries can degrade, or a new query can take longer than expected to complete. Statistics, indexes, locking, blocking, deadlocks, schema design, and isolation level are factors that can contribute to the slowdown.