2014-10-09

DBCC CHECKDB with PowerShell

Microsoft Transact-SQL (T-SQL) is really great for many things inside the SQL Server Database Engine, but automated maintenance with file handling, logging and other nice things is more of a challenge. This is why I, like many fellow DBAs, use PowerShell to automate tasks.

A standard DBA task is to check the integrity of the databases with the console command DBCC CHECKDB.
The output, like the example below, is (somewhat) nicely formatted for human reading, but I would like to capture it inside the automation so that I can process and log it.

Service Broker Msg 9675, State 1: Message Types analyzed: 14.
Service Broker Msg 9676, State 1: Service Contracts analyzed: 6.
Service Broker Msg 9667, State 1: Services analyzed: 3.
Service Broker Msg 9668, State 1: Service Queues analyzed: 3.
Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0.
Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0.
Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0.
Service Broker Msg 9605, State 1: Conversation Priorities analyzed: 0.
DBCC results for 'sys.sysrscols'.
There are 1970 rows in 24 pages for object "sys.sysrscols".
DBCC results for 'sys.sysrowsets'.
There are 346 rows in 4 pages for object "sys.sysrowsets".
...
DBCC results for 'Person.vStateProvinceCountryRegion'.
There are 181 rows in 2 pages for object "Person.vStateProvinceCountryRegion".
DBCC results for 'sys.plan_persist_query'.
There are 0 rows in 0 pages for object "sys.plan_persist_query".
CHECKDB found 0 allocation errors and 0 consistency errors in database 'AdventureWorks2014'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

(example of DBCC CHECKDB output from the database [AdventureWorks2014])

The PowerShell code is:
$DatabaseName = 'AdventureWorks2014'
$Sql = "DBCC CHECKDB ([$DatabaseName]);"
$CheckDbMessages = @()
$CheckDbWatch = [System.Diagnostics.Stopwatch]::StartNew()
# Each output line from SQLCMD.EXE is collected for the checks below and re-emitted with a UTC timestamp.
SQLCMD.EXE -E -Q $Sql |
ForEach-Object {
  $CheckDbMessages += $_
  "  {0:s}Z  $_" -f ([System.DateTime]::UtcNow)
}
$CheckDbWatch.Stop()

"{0:s}Z  Database [$DatabaseName] integrity check done in $($CheckDbWatch.Elapsed.ToString()) [hh:mm:ss.ddd]." -f ([System.DateTime]::UtcNow)
# The second-to-last output line holds the summary of allocation and consistency errors.
if ($CheckDbMessages[($CheckDbMessages.Count) - 2] -clike 'CHECKDB found 0 allocation errors and 0 consistency errors in database *') {
  "{0:s}Z  Database [$DatabaseName] integrity check is OK." -f ([System.DateTime]::UtcNow)
}
else {
  "{0:s}Z  Error in integrity check of the database [$DatabaseName]:`n  $($CheckDbMessages[($CheckDbMessages.Count) - 2])" -f ([System.DateTime]::UtcNow)
  throw "Integrity check of the database [$DatabaseName] failed."
}

(example of PowerShell script to execute DBCC CHECKDB)

I have put it inside an (advanced) function, but that is not shown here as it is the DBCC CHECKDB command execution that is the subject. I can only recommend that you take a serious look at PowerShell advanced functions. The best introduction that I still go back to is the book „Learn PowerShell Toolmaking in a Month of Lunches“ by Don Jones and Jeffery Hicks.
The variable $DatabaseName is a String object containing the name of the database to check. The script above does not check the prerequisites to the DBCC CHECKDB command.
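
If you want such a prerequisite check, a minimal sketch could be to query sys.databases before running DBCC CHECKDB; the query and the extra SQLCMD.EXE flags below are my additions, not part of the original script:
# Verify that the database exists and is ONLINE before the integrity check.
$StateSql = "SET NOCOUNT ON; SELECT [state_desc] FROM [master].[sys].[databases] WHERE [name] = N'$DatabaseName';"
$DatabaseState = SQLCMD.EXE -E -Q $StateSql -h -1 -W | Select-Object -First 1
if ($DatabaseState -ne 'ONLINE') {
  throw "The database [$DatabaseName] is not ONLINE (state: '$DatabaseState')."
}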

  2014-10-09T17:55:04Z  DBCC results for 'AdventureWorks2014'.
  2014-10-09T17:55:04Z  Service Broker Msg 9675, State 1: Message Types analyzed: 14.
  2014-10-09T17:55:04Z  Service Broker Msg 9676, State 1: Service Contracts analyzed: 6.
  2014-10-09T17:55:04Z  Service Broker Msg 9667, State 1: Services analyzed: 3.
  2014-10-09T17:55:04Z  Service Broker Msg 9668, State 1: Service Queues analyzed: 3.
  2014-10-09T17:55:04Z  Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0.
  2014-10-09T17:55:04Z  Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0.
  2014-10-09T17:55:04Z  Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0.
  2014-10-09T17:55:04Z  Service Broker Msg 9605, State 1: Conversation Priorities analyzed: 0.
  2014-10-09T17:55:04Z  DBCC results for 'sys.sysrscols'.
  2014-10-09T17:55:04Z  There are 1970 rows in 24 pages for object "sys.sysrscols".
  2014-10-09T17:55:04Z  DBCC results for 'sys.sysrowsets'.
  2014-10-09T17:55:04Z  There are 346 rows in 4 pages for object "sys.sysrowsets".
...
  2014-10-09T17:55:04Z  DBCC results for 'Person.vStateProvinceCountryRegion'.
  2014-10-09T17:55:04Z  There are 181 rows in 2 pages for object "Person.vStateProvinceCountryRegion".
  2014-10-09T17:55:04Z  DBCC results for 'sys.plan_persist_query'.
  2014-10-09T17:55:04Z  There are 0 rows in 0 pages for object "sys.plan_persist_query".
  2014-10-09T17:55:04Z  CHECKDB found 0 allocation errors and 0 consistency errors in database 'AdventureWorks2014'.
  2014-10-09T17:55:04Z  DBCC execution completed. If DBCC printed error messages, contact your system administrator.
2014-10-09T17:55:04Z  Database [AdventureWorks2014] integrity check done in 00:00:04.1890229 [hh:mm:ss.ddd].
2014-10-09T17:55:04Z  Database [AdventureWorks2014] integrity check is OK.

(example of DBCC CHECKDB output from script - AdventureWorks)

Using the good old SQLCMD.EXE sends the output to the standard output stream, where it can then be processed by PowerShell in the automation.

The SQL Server PowerShell module SQLPS CmdLet Invoke-Sqlcmd can also execute the command DBCC CHECKDB, and by setting the -Verbose flag the output is shown - but it is in the Verbose stream and cannot be accessed directly in the automation for further processing.
Redirecting the Verbose stream to a file or the output stream with a redirection operator is possible, but introduces extra complexity...
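
For completeness, a sketch of that redirection, assuming PowerShell 3.0 or later, where the Verbose stream is stream number 4 and can be merged into the output stream:
# Merge the Verbose stream (4) into the output stream (1) to capture the DBCC messages.
$CheckDbMessages = Invoke-Sqlcmd -Query "DBCC CHECKDB ([AdventureWorks2014]);" -Verbose 4>&1 |
  ForEach-Object { $_.ToString() }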

With ADO.NET the challenge is the same as with the CmdLet Invoke-Sqlcmd, and I guess that is because the CmdLet is constructed in .NET using ADO.NET.

Actually it is a general challenge to catch T-SQL output that is shown as messages in SQL Server Management Studio. This is because the output is sent asynchronously as events.
It is possible to build a pure .NET solution by using SqlInfoMessageEventHandler, but I find the solution with SQLCMD.EXE simpler, and that I do like.
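
A minimal sketch of such a .NET solution, assuming the System.Data.SqlClient provider and a trusted connection to the local default instance; I find the SQLCMD.EXE solution above simpler, so this is only for illustration:
$Connection = New-Object System.Data.SqlClient.SqlConnection 'Data Source=(local);Integrated Security=SSPI;'
$CheckDbMessages = New-Object System.Collections.ArrayList
# The handler is called for each informational message sent by the Database Engine.
$InfoMessageHandler = [System.Data.SqlClient.SqlInfoMessageEventHandler]{
  param($MessageSource, $EventArgs)
  [void]$CheckDbMessages.Add($EventArgs.Message)
}
$Connection.add_InfoMessage($InfoMessageHandler)
$Command = $Connection.CreateCommand()
$Command.CommandText = 'DBCC CHECKDB ([AdventureWorks2014]);'
$Command.CommandTimeout = 0  # a check of a large database can run for hours
$Connection.Open()
[void]$Command.ExecuteNonQuery()
$Connection.Close()
$CheckDbMessages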

The option TABLERESULTS to the command DBCC CHECKDB is not documented by Microsoft, but widely recognized. I do not use the TABLERESULTS option in this solution as the results then are not delivered during execution but as one answer when the execution is finished. This is the behavior in all cases, also with ADO.NET ExecuteReader.
I have some larger databases where a check takes several hours, and I would like to have a log of the execution. Is everything running? Any errors corrected until now? And so on...
This is also why I add a timestamp to each message line from DBCC CHECKDB.

2014-10-08

SQL Agent check error count step

I had to create a workaround on a database backup job for a server with 50+ important databases.
To be sure that an attempt is made to take a backup of each database, to have some history easy to access, and to have an option to reconfigure the job for an extra backup of one or more databases, I set up one job step for a backup of each database.
This job structure could be illustrated like this:

  1. job step 1: Backup database alpha.
  2. job step 2: Backup database beta.
  3. job step 3: Backup database gamma.
  4. etcetera...
Each job step continues to the next whether the job step fails or not. This is to ensure that an attempt to backup each database is made. This structure will give a false job status if an earlier job step fails but the last one does not.

To ensure a true job status I have added a final control job step that checks the status of the previous job steps.
The status of the previous job steps I get from the table msdb.dbo.sysjobhistory, where I filter on the job start time by inserting SQL Agent Tokens in the T-SQL statement (note that the tokens are only replaced when the step is executed by SQL Agent):
DECLARE @error_count INT = (
SELECT COUNT(*)
FROM [msdb].[dbo].[sysjobhistory]
WHERE job_id = $(ESCAPE_NONE(JOBID))
  AND [run_date] = $(ESCAPE_NONE(STRTDT))
  AND [run_time] >= $(ESCAPE_NONE(STRTTM))
  AND [run_status] <> 1);

IF (@error_count > 0)
BEGIN
  DECLARE @error_msg NVARCHAR(2047) = N'Error in Database Full Backup job. Check log file for details. (Error Count = ' + CAST(@error_count AS NVARCHAR(3)) + N' job steps).';
  RAISERROR(@error_msg, 19, 0) WITH LOG;
END
ELSE
BEGIN
  RAISERROR('Database Full Backup job completed with no errors.', 0, 0) WITH NOWAIT;
END

This control job step generates an error if a previous job step in the same job has failed. The plan (hope) is that a general monitoring platform like Microsoft System Center will catch the error and raise an incident to the Service Desk.

2014-09-09

MS14-044 cleanup

With the roll-out of the Security Bulletin MS14-044 I found out the hard way that the DBA Repository (SQLAdmin Repository) did not have the new version numbers.

  • 10.0.5520 for SQL Server 2008 SP3.
  • 10.50.4033 for SQL Server 2008 R2 SP2.
  • 11.0.3153 for SQL Server 2012 SP1.
  • 12.0.2254 for SQL Server 2014.
When the data were inserted, the automation ran with success.
It looks like the Security Update only affects SP1 of SQL Server 2012.
Usually the SQL Server Database Engine service restarts twice - once to come up in Single-User Mode and a second time to come up in normal Multi-User Mode.
This differs between the versions: SQL Server 2014 does the Single-User Mode restart without further logging in the SQL Server Errorlog.
On SQL Server 2012 the Database Engine service also restarts, but comes up in Script Upgrade Mode with detailed logging on the execution of the upgrade scripts. The logging takes 2000+ lines in the SQL Server Errorlog, which I will not trouble you with here.
The Update itself takes up to a few minutes. It depends...

Reference

Again it was a great help that the unofficial but well-known „Microsoft SQL Server Version List“ was updated.
In general I also like to keep an eye on „Update Center for Microsoft SQL Server“, which is on Microsoft TechNet and can be regarded as official Microsoft information.
The abbreviations like GDR and SP are well explained at the Microsoft SQL Server Version List, but you can get the Microsoft background at „An Incremental Servicing Model…“ from Microsoft Support and in the Wikipedia article „Software release life cycle“.

2014-04-07

Add AMO without SQLPS

When installing a SQL Server Analysis Services instance without a Database Engine instance to comply with the principle of Least Service, the namespace Microsoft.AnalysisServices, which is part of AMO, is not available locally, and the PowerShell command
Import-Module -Name SQLPS -DisableNameChecking
fails with the error
Import-Module : The specified module 'SQLPS' was not loaded because no valid module file was found in any module directory.
It looks like SQLPS is not part of a bare Analysis Services installation where the only feature selected is
/FEATURES="AS"
in a command-line installation.

To use the namespace Microsoft.AnalysisServices I found out that all I had to do was to add the type Microsoft.AnalysisServices, but the PowerShell CmdLet Add-Type requires a full assembly name.
The article "Powershell Add-Type – Where’s That Assembly" by Kyle Neier gives the full name for what looks to be SQL Server 2005 with the version 9.n. I am working with SQL Server 2012, which has version 11.n, and would like to prepare for SQL Server 2014 (version 12.n) and beyond. This means that a simple command like
Add-Type -AssemblyName 'Microsoft.AnalysisServices, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
is not good enough, as it is version-dependent.

Kyle's article showed me where to look for the version and token, and some manual browsing showed me that the assembly is located among the combined (32/64 bit) assemblies. A little split and merge of strings gave this little script to add the type Microsoft.AnalysisServices independent of SQL Server version
# The folder name holds both the version and the public key token, e.g. 'v4.0_11.0.0.0__89845dcd8080cc91'.
$Assembly = $(Get-ChildItem 'C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.AnalysisServices.DeploymentEngine').Name.Split('_')
$AssemblyName = "Microsoft.AnalysisServices, Version=$($Assembly[1]), Culture=neutral, PublicKeyToken=$($Assembly[3])"
Add-Type -AssemblyName $AssemblyName

To get the token I take the item at index 3 in the array $Assembly. The item at index 2 is an empty string, as there are two underscores ('_') in the folder name between the version part and the token part.
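
To verify that the type is available, a quick usage sketch (the instance name 'localhost' is only an example) can connect with AMO:
$SsasServer = New-Object Microsoft.AnalysisServices.Server
$SsasServer.Connect('localhost')  # name of the Analysis Services instance
"Connected to '$($SsasServer.Name)' version $($SsasServer.Version)."
$SsasServer.Disconnect()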

I think I am home safe. That is, at least until the path is changed...

2014-03-23

Test-Computer

I am working on an automated SQL Server installation, where we use Managed Service Accounts (MSA) as SQL Server service accounts. To create and configure a MSA some PowerShell CmdLets are given by Microsoft, but there are several steps, each with its own CmdLets.
We are creating MSAs for a given computer that is used as a SQL Server server, and we want to be absolutely sure that the computer exists by a given name in Active Directory (AD) and DNS. That also includes that the Fully Qualified Domain Name (FQDN) is correct.
To do this check I have created a function that checks both AD and DNS. The function is constructed for a specific script, and you should probably alter something to make it suit your needs.

function Test-Computer {
[CmdletBinding()]
Param(
  [Parameter(Mandatory=$true, HelpMessage='Enter name of server. Use Fully Qualified Name (FQN), e.g. "SANDBOX.sqladmin.lan"')]
  [String]$ServerName
)

[String]$ComputerName = $ServerName.Split('.')[0]

"{0:s}Z Testing if the computer '$ComputerName' exists in DNS..." -f $([System.DateTime]::UtcNow) | Write-Verbose
try {
  [System.Net.IPHostEntry]$IpHost = [System.Net.Dns]::GetHostByName($ComputerName)
}
catch [System.Management.Automation.MethodInvocationException] {
  "'$Computername' does not exist in DNS as FQDN!"
  return $false
}
"{0:s}Z Testing if the FQDN of '$ServerName'..." -f $([System.DateTime]::UtcNow) | Write-Verbose
if ($IpHost.HostName -ieq $ServerName) {
  "{0:s}Z FQDN '$ServerName' is OK." -f $([System.DateTime]::UtcNow) | Write-Verbose
}
else {
  "The computer name '$ServerName' does not match the FQDN '$($IpHost.HostName)'." | Write-Error
  return $false
}

"{0:s}Z Testing if the computer '$ComputerName' exists in Active Directory..." -f $([System.DateTime]::UtcNow) | Write-Verbose
try {
  [Microsoft.ActiveDirectory.Management.ADComputer]$Computer = $null
  $Computer = Get-ADComputer -Identity $ComputerName
}
catch [Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException] {
  $ComputerError = $Error[0]  # captured for possible troubleshooting; not used further
}
if ($Computer) {
  "{0:s}Z The computer '$ServerName' exists in Active Directory." -f $([System.DateTime]::UtcNow) | Write-Verbose
  return $true
}
else {
  "The computer '$ServerName' does not exist in Active Directory." | Write-Error
  return $false
}

} # Test-Computer()
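
A usage example with the server name from the help message; note that Get-ADComputer requires the ActiveDirectory module (part of RSAT):
Import-Module -Name ActiveDirectory
if (Test-Computer -ServerName 'SANDBOX.sqladmin.lan' -Verbose) {
  'The computer checks out OK. Continue with the MSA creation...'
}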

2014-03-15

PowerShell MessageBox

Sometimes I need a tool made in PowerShell to require extra attention from the user. When I did the scripting in Windows Script Host (WSH) I had the MsgBox function in VBScript. It is possible to create a WSH COM object in PowerShell as shown in „VBScript-to-Windows PowerShell Conversion Guide“ on Microsoft TechNet, but I prefer a .NET solution when it is available. To be fair to TechNet, the guide is written for PowerShell v1, which did not have the same interaction with .NET as the present version of PowerShell.

A solution that I find easy to use is the static method Show() from the MessageBox class.
[System.Windows.Forms.MessageBox]::Show('Hello world!', 'Hello', 'OK', 'Information')
This class is in the .NET namespace System.Windows.Forms.

If the script is executed in the PowerShell console (powershell.exe) you have to load the assembly System.Windows.Forms
Add-Type -AssemblyName System.Windows.Forms
If the script is running in PowerShell ISE (powershell_ise.exe) the graphical assemblies are already loaded.

If you do not want the 'OK' written on the PowerShell console when OK in the Message Box is clicked, then you can stream it to the CmdLet Out-Null
[System.Windows.Forms.MessageBox]::Show('Hello world!', 'Hello', 'OK' ,'Information') | Out-Null

The output is generated by the MessageBoxButtons defined in the MessageBox object and the button the user pressed. In the example above there is only one possibility („OK“), but it is possible to present other buttons to the user.
The possible MessageBoxButtons are documented with the MessageBoxButtons Enumeration.
The answer is from the enumeration System.Windows.Forms.DialogResult, and if the answer is piped to the console it is implicitly converted to a String by PowerShell.
The answer can be grabbed in a PowerShell variable and used for a more complex reaction to the DialogResult.
$Answer = [System.Windows.Forms.MessageBox]::Show('Hello world!', 'Hello', 'OkCancel' ,'Information')
switch ($Answer) {
  'OK' { 'Cheers' }
  'Cancel' { 'Having a bad day?' }
}


There are several possibilities for an icon. This is described in the documentation of the MessageBoxIcon Enumeration. In the examples above I have used the MessageBoxIcon member „Information“.

One of the nice things with the .NET Windows Forms is that it takes descriptive parameters like "OkCancel" instead of the numeric parameters to MsgBox. This is the reason for the many constant definitions in VBScript files.

The message part can be created in a more complex process outside the Show() command.
On one occasion I wanted the message to be formatted in lines that I added up during the script execution.
To keep track of each message line I added the line to an array, and before showing the message in the MessageBox, I added a NewLine (`n) at the end of each line.
[String[]]$Msg = @()
$Msg += 'Keep Calm'
$Msg += 'And'
$Msg += 'Trust a DBA'
[String]$MsgTxt = ''
$Msg | ForEach-Object { $MsgTxt += $_ + "`n" }
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show($MsgTxt, 'SQLAdmin Message', 'OK', 'Information') | Out-Null
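
The loop that builds $MsgTxt can also be written with the -join operator; the result is almost the same, just without the trailing NewLine:
[String]$MsgTxt = $Msg -join "`n"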

The MessageBox is then
Keep Calm And Trust a DBA

2014-03-02

SQL Server performance counters

These are some notes on a basic performance baseline. The baseline is created with some Performance Counters.
In general I am interested in these key resources
  • CPU
  • Memory
  • Storage
  • Network
These resources I look at from both Windows Server and SQL Server Database Engine.
A quick and broad collection can be done by using these Performance Counter objects at top level
  • Server
  • System
  • Processor
  • Memory
  • LogicalDisk
  • Network Interface
Use the <All instances> selection to get all instances of each object, and define the counters with * (star), as this is less expensive than selecting individual counters.

The Collection can be automated using Logman. This I might look into later...
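
A sketch of such a Logman collection (the collector name and output path are my examples), matching the broad counter objects above and the collector settings described below:
# Create a counter collector: 5 second samples, binary log file, maximum size 1024 MB.
logman create counter SQLAdminBaseline -f bin -si 5 -max 1024 -o 'D:\PerfLogs\SQLAdminBaseline' -c '\Server\*' '\System\*' '\Processor(*)\*' '\Memory\*' '\LogicalDisk(*)\*' '\Network Interface(*)\*'
logman start SQLAdminBaseline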

Data Collector

A User Defined Data Collector Set can be created in the Performance Monitor by a right-click on
Data Collector Sets \ User Defined
and select to create the data collector set manually.

Performance Counters

The sample interval is set to 5 seconds and the data are logged in a binary (.blg) file on a local drive. Set the maximum size of the blg-file to 1024 MB. This is to reduce the risk of filling the drive and to make the file easier to handle when moving it around and the like. If you need logging for a longer period then set the collector to automatic restart. By keeping the file size low you are able to offload blg-files and analyse them offline without stopping the collector.
To further reduce the risk of the collector hurting the computer I usually put the blg-file(-s) on another drive than the system drive or drives holding active database files (mdf, ndf, ldf etc.).

Never do a performance measurement remotely, as the measurement itself will create more pressure and give you a blurred result.
Also logging directly into a Performance Monitor graph might give you a performance penalty, even if it is done locally on the server.
Disk performance counters are on the individual drives where user data (.mdf & .ndf), transaction log (.ldf) and tempdb data (.mdf & .ndf) are placed. If more drives are installed for one of these, all the drives should be included in the measurement.

Windows Server

LogicalDisk \ % Disk Read Time (data; translog; tempdb; backup )
LogicalDisk \ % Disk Write Time (data; translog; tempdb; backup )
LogicalDisk \ Avg. Disk Queue Length ( data; translog; tempdb; backup )
LogicalDisk \ Disk Reads/sec ( data; translog; tempdb; backup ) - read throughput (IOPS)
LogicalDisk \ Disk Writes/sec ( data; translog; tempdb; backup ) - write throughput (IOPS)
LogicalDisk \ Avg. Disk sec/Read ( data; translog; tempdb; backup ) - latency on read
LogicalDisk \ Avg. Disk sec/Write ( data; translog; tempdb; backup ) - latency on write
LogicalDisk \ % Disk Idle Time (translog, tempdb, data, backup)
LogicalDisk \ Split IO/sec (data, tempdb, backup) - fragmentation indicator, when one IO operation is split into multiple disk requests.
Memory \ Cache Bytes
Memory \ Available Mbytes
Memory \ Page Faults/sec
Memory \ Pages/sec
Memory \ Standby Cache Core Bytes
Memory \ Standby Cache Normal Priority Bytes
Memory \ Standby Cache Reserve Bytes
PhysicalDisk \ Avg. Disk Queue Length ( data; translog; tempdb; backup )
PhysicalDisk \ Avg. Disk sec/Read ( data; translog; tempdb; backup )
PhysicalDisk \ Avg. Disk sec/Write ( data; translog; tempdb; backup )
PhysicalDisk \ Disk Reads/sec ( data; translog; tempdb; backup )
PhysicalDisk \ Disk Writes/sec ( data; translog; tempdb; backup )
Processor \ % Privileged Time (All instances)
Processor \ % Processor Time (All instances)
NUMA \ total %
Network Interface \ Output Queue Length ( <physical interface> )
Paging File(_Total) \ % Usage
System \ Context Switches/sec
System \ Processor Queue Length

If you are looking for connectivity issues then you could add these counters:

TCPv4\Connections Established
TCPv4\Connection Failures
TCPv4\Connections Reset

Please take a look at the Microsoft documentation (link) for a description on the three counters.

Please notice that when you select <All instances> on Processor counters, you will get a total and not numbers on each core. You have to select the cores of the machine to get individual numbers on the cores.
This is also the case with the disk drives where each drive must be selected for collection.

Disk counters are nicely described in the article "Windows Performance Monitor Disk Counters Explained" (link).

SQL Server Database Engine

Also use the performance counters for Windows Server mentioned above.

Process(sqlservr) \ % Processor Time
Process(sqlservr) \ Page Faults/sec
Process(SQLAGENT) \ % Processor Time
Process(SQLAGENT) \ Page Faults/sec
MSSQL : Access Methods \ Forwarded Records/sec
MSSQL : Access Methods \ Full Scans/sec
MSSQL : Access Methods \ Index Searches/sec
MSSQL : Access Methods \ Page Splits/sec
MSSQL : Access Methods \ Table Lock Escalations/sec
MSSQL : Buffer Manager \ Buffer cache hit ratio
MSSQL : Buffer Manager \ Page life expectancy
MSSQL : Buffer Manager \ Page reads/sec
MSSQL : Buffer Manager \ Page writes/sec
MSSQL : General Statistics \ User Connections
MSSQL : Latches \ Average Latch Wait Time (ms)
MSSQL : Locks(_Total) \ Average Wait Time (ms)
MSSQL : Locks(_Total) \ Lock Timeouts/sec
MSSQL : Locks(_Total) \ Number of Deadlocks/sec
MSSQL : Locks(_Total) \ Lock Waits/sec
MSSQL : Memory Manager \ Total Server Memory (KB)
MSSQL : Memory Manager \ Target Server Memory (KB)
MSSQL : Plan Cache \ Cache Hit Ratio (_Total)
MSSQL : SQL Errors(_Total) \ *
MSSQL : SQL Statistics \ Batch Requests/sec
MSSQL : SQL Statistics \ SQL Compilations/sec
MSSQL : SQL Statistics \ SQL Re-Compilations/sec
MSSQL : Wait Statistics(Average wait time (ms)) \ Network IO waits
MSSQL : Wait Statistics(Average wait time (ms)) \ Page IO latch waits

SQL Server Integration Services

Also use the performance counters for Windows Server and SQL Server Database Engine mentioned above.

Process(MsDtsSrvr) \ % Processor Time
Process(MsDtsSrvr) \ Page Faults/sec
SQLServer : Databases \ Bulk Copy Rows/sec ( DW database )
SQLServer : Databases \ Bulk Copy Throughput/sec ( DW database ) - KB/sec
SQLServer : SSIS Pipeline 12.0 \ Buffer Memory
SQLServer : SSIS Pipeline 12.0 \ Buffers in use
SQLServer : SSIS Pipeline 12.0 \ Buffers spooled
SQLServer : SSIS Pipeline 12.0 \ Rows read
SQLServer : SSIS Pipeline 12.0 \ Rows written

SQL Server Master Data Services

Also use the performance counters for Windows Server and SQL Server Database Engine mentioned above.

Process(???) \ % Processor Time
Process(???) \ Page Faults/sec

SQL Server Analysis Services

Also use the performance counters for Windows Server mentioned above.

Process(msmdsrv) \ % Processor Time
Process(msmdsrv) \ Page Faults/sec
Process(msmdsrv) \ Private Bytes
MSAS12 : Proc Aggregations \ Current Partitions
MSAS12 : Proc Aggregations \ Rows created/sec
MSAS12 : Proc Aggregations \ Temp file bytes written/sec
MSAS12 : Proc Indexes \ Current partitions
MSAS12 : Proc Indexes \ Rows/sec
MSAS12 : Processing \ Rows read/sec
MSAS12 : Processing \ Rows written/sec
MSAS12 : Threads \ Processing pool busy I/O job threads
MSAS12 : Threads \ Processing pool busy non-I/O threads
MSAS12 : Threads \ Processing pool idle I/O job threads
MSAS12 : Threads \ Processing pool idle non-I/O threads
MSAS12 : Threads \ Processing pool job queue length

Thomas Kejser, John Sirmon & Denny Lee: „Microsoft SQL Server 2008 R2 Analysis Services Operations Guide“.

SQL Server Reporting Services

Also use the performance counters for Windows Server and SQL Server Database Engine mentioned above.

Process(ReportingServicesService) \ % Processor Time
Process(ReportingServicesService) \ Page Faults/sec

Analysis

A quick view of one baseline measure series is easily done with a User Defined Report in Performance Monitor.
When you want to do some statistics, a spreadsheet is a common tool.
To export the data from Performance Monitor to a spreadsheet you open the logged data in Performance Monitor and right-click in the graph area. Then you click Save Data As and save the data as tabulator separated values (.tsv).
To import the data into an Excel spreadsheet you create a new sheet and select the tab Data. Click the item From Text in the panel, and make the proper selections. In Denmark where I live we use comma as decimal separator and dot as thousand separator. This is one of the details that could trick you. The first time you do an import you might not get it right, but I am sure you will get it right after a few tries.

Another way to post-process PerfMon files is to use Relog. This tool can extract data, combine or split PerfMon files, and export to a database or other file formats. Using this tool can enable automation when handling more complex or long-running PerfMon runs.
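
A couple of hedged Relog examples (the file names are hypothetical):
# Convert a binary PerfMon log to CSV for the spreadsheet import described above.
relog 'SQLAdminBaseline_000001.blg' -f csv -o 'SQLAdminBaseline_000001.csv'
# Extract selected disk latency counters into a smaller binary file.
relog 'SQLAdminBaseline_000001.blg' -c '\LogicalDisk(*)\Avg. Disk sec/Read' '\LogicalDisk(*)\Avg. Disk sec/Write' -f bin -o 'DiskLatency.blg'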

To analyse PerfMon data together with Windows logs and other data you can use Performance Analysis of Logs (PAL), which is a free and open tool. It can generate some rather neat reports with impressive and useful graphs. But be aware that it tends to aggregate numbers, which can hide performance issues, as these usually show in spike values and not in aggregated values.

When you have more sheets from several measure series you can do some delta compares and produce some nice diagrams for management. Most diagrams I create as X-Y diagrams, but the type depends on what you are looking for or trying to show.

Some basic statistics that are useful are
  • Average = AVERAGE()
  • Median = MEDIAN()
  • Most Frequent Number = MODE.SNGL()
  • Standard Deviation = STDEV.S()
  • Min = MIN()
  • Max = MAX()
  • Percentile = PERCENTILE.EXC(). Whether the 95- or 98-percentile - or another - is preferred differs from one organization to another.
    Sometimes you have to look at the other end of the data. Then you should look at the 5- or 2-percentile matching the high percentile.
    I recommend you to use the same (high) percentile as the rest of the organization.
The calculations I usually add in rows at the top of the sheet.
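
As an example (the cell range is hypothetical), the 95-percentile of a counter logged in column B could be calculated with
=PERCENTILE.EXC(B2:B10001, 0.95)
Remember that the argument separators depend on the regional settings, as mentioned above.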

Discussion

Be aware when
(To Be Continued...)

Reference

Jonathan Kehayias: „The Accidental DBA (Day 21 of 30): Essential PerfMon counters“.
Microsoft Core Team blog: "Measuring Disk Latency with Windows Performance Monitor".
SmarterTools blog: "Windows Perfmon, Disk I/O and Possible Disk Issues".

History

2014-02-03  Blog post created with initial list of performance counters.
2015-08-06  Performance counters added: Table Lock Escalations/sec and Context Switches/sec.
2015-08-12  Performance counter on Logical Disk % Disk Time added and backup disk for all disk counters.
2017-05-15  Notes on Relog, Logman and PAL added.
2019-10-29  Counter on Disk Idle Time added on LogicalDisk.
2023-08-21  Percentile analysis added.

2014-01-19

DoD DISA SQL Server 2012 STIGs

The American (USA) Department of Defense (DoD) has, through the Defense Information Systems Agency (DISA), released some Security Technical Implementation Guides (STIGs) for Microsoft SQL Server 2012.
There are two principal documents:
  • "Microsoft SQL Server Database Instance Security Technical Implementation Guide"
  • "Microsoft SQL Server 2012 Database Security Technical Implementation Guide"
Also DISA has composed a general "Database Security Requirements Guide", which can be used across database management systems and versions.
In general the documents are a collection of discussions about issues like:
  1. Least Privilege.
  2. Separation of Duties (SoD).
  3. Least Service.
  4. Role Based Access Control (RBAC).
  5. Security Classification.

Least Privilege

In the STIG, Least Privilege is referred to as a concept, where other places refer to it as a principle. I will not discuss the difference here, as it might take this text off course.
This principle sounds simple and easy to implement, but if you really ask the same question of every access or right, it is quite complicated to construct a solution that implements this principle to the core.
If you are expected to implement a system built with a framework like SharePoint or Entity Framework, it might be rather difficult - like "impossible" - to figure out the privileges really needed.
A good sub-principle is to avoid the built-in server roles and database roles in SQL Server. This forces everybody to define the actual privileges.

Separation of Duties

As a DBA in a financial organisation this is very important, not only to comply with formal rules but also to ensure a robust installation. This concept (principle?) is very often tightly integrated with Role Based Access Control in the construction and implementation.

Least Service

To contribute to a stable and secure installation it is a good idea to install only the needed services. Other nice-to-have services increase the risk of security holes and also increase the need for upgrades. Such upgrades might reduce the service availability while restarting or otherwise stopping the installation.

Role Based Access Control

To ensure Separation of Duties and meet audit requirements it is usually expected that the Security Plan is based on Role Based Access Control. Mostly the roles are business functional roles, but they can also be based on duties in a system.
There is a lot of literature on this complex subject. If you are expected to enforce this principle I think you should spend some time on the theories behind it.
Also there is a huge administrative benefit of using Role Based Access Control, compared to individual user access and rights.
If you are asked to give a user "the same rights as <existing user>", it is a worrying and sure sign of no security plan and no use of Role Based Access Control. Such a request could make the day very long rather suddenly.
The subject of Role Based Access Control easily leads to the subject of Discretionary Access Control (DAC), which is also referred to several times in the documents. This is important when setting a policy of ownership.
Enforcing Role Based Access Control is not only implementing functional roles, but also handling the facets of SQL Server that might break this principle if handled without proper awareness. A classical situation is the database owner, where you might have a database owned by a subject that should have limited rights on some objects in the database.

Security Classification

This issue is mentioned in several of the documents, sometimes as security labels on the data.
Such a rule is very useful to implement, especially when dealing with sensitive data. But it requires the organisation to define both the sensitive data and the security levels shown by the labels. These definitions must be precise and unique enough to be implemented.

History

2014-01-19 Blog post created.
2017-05-29 Link for DISA database STIGs updated.

2014-01-08

Audit Log for Analysis Services

The SQL Server Database Engine has the feature Login Auditing, which can create a simple audit log. This log contains among other things service starts and stops and logins.
I was approached with a similar requirement on SQL Server Analysis Services (SSAS), but SSAS does not have something like Login Auditing.
SSAS does have tracing features and can be monitored with Extended Events. The documentation is mostly written for the Database Engine, but with some trial-and-error it can be implemented on SSAS with an XMLA statement.

The documentation from Microsoft is not good, but I found a great blogpost with a working example (http://byobi.com/blog/2013/06/extended-events-for-analysis-services/). To this example I have added some features to meet our requirements.

Auto Restart

Auto Restart is required to ensure that the trace is running even after a restart of the SSAS service or the server. This is implemented by adding an AutoRestart element to the XMLA statement.

Limited number of tracefiles

To ensure that the tracing does not run wild, we decided to limit the size of the logfiles and the number of logfiles. This is done by configuring the Event File Target.
The size of each logfile is defined with the option "max_file_size" with a value in MB.
The number of logfiles is defined by the option "max_rollover_files".

Implementation

The XMLA statement to implement the Audit Log:
<Create
  xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
  xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
  xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
  xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200"
  xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
  <ObjectDefinition>
    <Trace>
      <ID>SQLAdmin_Audit</ID>
      <Name>SQLAdmin_Audit_2014-W02</Name>
      <AutoRestart>true</AutoRestart>

      <ddl300_300:XEvent>
        <event_session name="xeas"
          dispatchLatency="1"
          maxEventSize="4"
          maxMemory="4"
          memoryPartitionMode="none"
          eventRetentionMode="allowSingleEventLoss"
          trackCausality="true">

        <event package="AS" name="AuditLogin" />
        <event package="AS" name="AuditLogout" />

        <event package="AS" name="ExistingConnection" />
        <event package="AS" name="ExistingSession" />
        <event package="AS" name="SessionInitialize" />

        <target package="Package0" name="event_file">
          <parameter name="filename" value="L:\MSSQL_AS_Log\SQLAdmin_Audit.xel" />
          <parameter name="max_file_size" value="1024" />
          <parameter name="max_rollover_files" value="3" />
        </target>
      </event_session>
    </ddl300_300:XEvent>
    </Trace>
  </ObjectDefinition>
</Create>

Operation

This DMX statement shows which traces are running:
SELECT *
FROM [$System].[DISCOVER_TRACES];


This XMLA statement deletes the trace. The logfiles (.xel) will not be deleted.
<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <TraceID>SQLAdmin_Audit</TraceID>
  </Object>
</Delete>


A logfile can be browsed with SQL Server Management Studio. The file can be accessed on a UNC path.
The original blog entry mentions that the logfile can be read with the T-SQL function sys.fn_xe_file_target_read_file, but this is a Database Engine function.
To use this function to get the contents of the logfile - in XML - the file must be moved or copied to a location where a Database Engine can access the file.
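
A sketch of that copy-and-read, with hypothetical paths and server names, driven from PowerShell:
# Copy the trace file from the SSAS server to a folder the Database Engine can read.
Copy-Item -Path '\\SSASHOST\MSSQL_AS_Log\SQLAdmin_Audit*.xel' -Destination 'D:\MSSQL\Import\'
$Sql = @"
SELECT [object_name], CONVERT(xml, [event_data]) AS [event_data]
FROM sys.fn_xe_file_target_read_file('D:\MSSQL\Import\SQLAdmin_Audit*.xel', NULL, NULL, NULL);
"@
Invoke-Sqlcmd -ServerInstance 'DBHOST' -Query $Sql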