Job history template using SQL Server Agent tokens

To log the execution and output of all steps in a SQL Server Agent job in a single log file, you can use SQL Server Agent tokens and macros to create the log file in the default SQL Server LOG folder, and let each job step append its output to the file. The default SQL Server LOG folder is also the folder where SQL Server places the Errorlog files (ERRORLOG.nn) and the trace files (.trc) from the default trace.

The first job step is a dummy step that only initializes the log file on a normal job execution.
The job step can be created with:
EXECUTE [msdb].[dbo].[sp_add_jobstep]
@job_name=N'MacroTest',
@step_name=N'Job Begin',
@subsystem=N'TSQL',
@command=N'DECLARE @errmsg nvarchar(2047);
SET @errmsg = LEFT(CONVERT(nvarchar(128), GETUTCDATE(), 127), 22) + N''Z : Job Begin...'';
RAISERROR(@errmsg, 0, 0) WITH NOWAIT;',
@output_file_name=N'$(ESCAPE_SQUOTE(SQLDIR))\LOG\MacroTest.$(ESCAPE_SQUOTE(STRTDT)).$(ESCAPE_SQUOTE(STRTTM)).log';

The output filename is placed in the default SQL Server LOG folder with "$(ESCAPE_SQUOTE(SQLDIR))\LOG\", and the name of the file is composed of the job name, a timestamp and the file type "log".
I miss two things as tokens: the job name and the UTC time when the job began. Right now I have to enter the job name manually and take care of the local time when we switch between summer and winter time (daylight saving time).
The log file's initial output is generated by a RAISERROR call. Please notice that I give the error message a UTC timestamp in ISO 8601 format.

An actual job step will then append its output to the log file. Such a job step could be created with:
EXECUTE [msdb].[dbo].[sp_add_jobstep]
@job_name=N'MacroTest',
@step_name=N'Job Execution 01',
@subsystem=N'TSQL',
@command=N'DECLARE @errmsg nvarchar(2047);
SET @errmsg = LEFT(CONVERT(nvarchar(128), GETUTCDATE(), 127), 22) + N''Z : Job Executing {01}...'';
RAISERROR(@errmsg, 0, 0) WITH NOWAIT;',
@output_file_name=N'$(ESCAPE_SQUOTE(SQLDIR))\LOG\MacroTest.$(ESCAPE_SQUOTE(STRTDT)).$(ESCAPE_SQUOTE(STRTTM)).log',
@flags=2;

The value 2 for the parameter @flags appends the output to the log file.

When the job has executed all (real) steps, the log file is finished by a dummy step that enters a final timestamp. This can be used for execution time comparisons. The job step can be created with:
EXECUTE [msdb].[dbo].[sp_add_jobstep]
@job_name=N'MacroTest',
@step_name=N'Job End',
@subsystem=N'TSQL',
@command=N'DECLARE @errmsg nvarchar(2047);
SET @errmsg = LEFT(CONVERT(nvarchar(128), GETUTCDATE(), 127), 22) + N''Z : Job End.'';
RAISERROR(@errmsg, 0, 0) WITH NOWAIT;',
@output_file_name=N'$(ESCAPE_SQUOTE(SQLDIR))\LOG\MacroTest.$(ESCAPE_SQUOTE(STRTDT)).$(ESCAPE_SQUOTE(STRTTM)).log',
@flags=2;

An execution of a job with three steps, where one is the actual job execution, could generate output like this:
Job 'MacroTest' : Step 1, 'Job Begin' : Began Executing 2012-11-28 08:21:51

2012-11-28T07:21:51.95Z : Job Begin... [SQLSTATE 01000]
Job 'MacroTest' : Step 2, 'Job Execution 01' : Began Executing 2012-11-28 08:21:51

2012-11-28T07:21:51.98Z : Job Executing {01}... [SQLSTATE 01000]
Job 'MacroTest' : Step 3, 'Job End' : Began Executing 2012-11-28 08:21:52

2012-11-28T07:21:52.01Z : Job End. [SQLSTATE 01000]

If the job is started from another step than the initial step, the output is still caught in a log file. In this case a new file is created implicitly by appending output to it. A partial execution could generate output like this:
Job 'MacroTest' : Step 2, 'Job Execution 01' : Began Executing 2012-11-28 09:50:21

2012-11-28T08:50:21.81Z : Job Executing {01}... [SQLSTATE 01000]
Job 'MacroTest' : Step 3, 'Job End' : Began Executing 2012-11-28 09:50:21

2012-11-28T08:50:21.84Z : Job End. [SQLSTATE 01000]

SQL Server Agent tokens and macros are documented in "Use Tokens in Job Steps" in the MSDN Library.

A more complex and dynamic creation of SQL Server Agent job steps can be studied in the SQL Server Maintenance Solution from Ola Hallengren.


SQL Server major version

When I automate installation or administration I often have to use the major version of the SQL Server installation.
Unfortunately SERVERPROPERTY('ProductVersion') returns a string with the complete version number, and there is no other property name to get only the major version.
I would like something similar to the SMO Server.VersionMajor property.

Some cutting and casting does the trick to get the left part before the first dot:
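The statement itself is simple; a sketch, reconstructed to mirror the CASE expression used later in this entry:

```sql
-- Take the left part of ProductVersion before the first dot.
DECLARE @version nvarchar(128) = CAST(SERVERPROPERTY(N'ProductVersion') AS nvarchar(128));
DECLARE @version_major int = CAST(LEFT(@version, CHARINDEX(N'.', @version) - 1) AS int);
SELECT @version_major AS [version_major];
```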

This gives me an integer that can be compared or be part of a calculation like
... ( @version_major - 1 ) ...

Some security configuration statements depend on the major version, and can be handled like this:
IF @version_major <= 8
  PRINT N'Not SQL Server 2005 or above.';
ELSE
  PRINT N'SQL Server 2005 or above.';

In my SQL Server repository I store the version information in the table [sqladmin].[version], where the version number is in the column [version].[version_number] as NVARCHAR(128).
Getting the major version as an integer in a SELECT statement can be done with something like this:
  N'ssdb_version_major' = CASE [version].[version_number]
    WHEN N'(unknown)' THEN 0
    ELSE CAST( LEFT([version].[version_number], ABS(CHARINDEX(N'.',[version].[version_number])-1)) AS INT)
  END

Please notice the ABS() around the CHARINDEX() result because of the subtraction (-1). It looks like the SQL Server optimizer evaluates the subtraction before CHARINDEX(). Without the ABS() the statement failed with this error:
Msg 537, Level 16, State 3, Line 1
Invalid length parameter passed to the LEFT or SUBSTRING function.

The CASE on the string "(unknown)" is there because the column is defined NOT NULL, and an unknown version number (usually because there is no connection between the repository collector and the database instance) is given the default value "(unknown)". Even when a subset is selected by a WHERE clause, all rows are evaluated for CAST(). Without the CASE the statement failed with this error:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the nvarchar value '(' to data type int.

(This is a running update from 2011-12-21)


VLF count on SQL Server 2012

Some time ago I posted the blog entry "VLF count", where the (undocumented) command "DBCC LOGINFO" is used to collect the VLF count of a database.
I have implemented this in my SQL Server repository, but with the first SQL Server 2012 installation the collection failed. The collection is done on a given database with this statement:
CREATE TABLE #stage (
 [file_id] INT
 ,[file_size] BIGINT
 ,[start_offset] BIGINT
 ,[f_seq_no] BIGINT
 ,[status] BIGINT
 ,[parity] BIGINT
 ,[create_lsn] NUMERIC(38)
);
INSERT INTO #stage EXECUTE (N'DBCC LogInfo WITH no_infomsgs');
SELECT COUNT(*) AS [vlf_count] FROM #stage;
DROP TABLE #stage;

And the error message is:
Msg 213, Level 16, State 7, Line 1
Column name or number of supplied values does not match table definition.

It turns out that the resultset of DBCC LOGINFO has changed with SQL Server 2012. SQL Server 2000 to 2008 R2 has these DBCC LOGINFO columns: FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN.
And SQL Server 2012 has these DBCC LOGINFO columns: RecoveryUnitId, FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN.

The column "RecoveryUnitId" is added at the beginning of the resultset.

On SQL Server 2012 this rewritten statement gives the VLF count of the current database:
CREATE TABLE #stage (
 [recovery_unit_id] INT
 ,[file_id] INT
 ,[file_size] BIGINT
 ,[start_offset] BIGINT
 ,[f_seq_no] BIGINT
 ,[status] BIGINT
 ,[parity] BIGINT
 ,[create_lsn] NUMERIC(38)
);
INSERT INTO #stage EXECUTE (N'DBCC LogInfo WITH no_infomsgs');
SELECT COUNT(*) AS [vlf_count] FROM #stage;
DROP TABLE #stage;

I am rewriting (refactoring) my script to depend on the major version number.
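A sketch of such a refactoring; the branch on major version 11 (SQL Server 2012) is my assumption, and the staging table reuses the column names above:

```sql
-- Stage DBCC LOGINFO depending on the SQL Server major version.
DECLARE @version nvarchar(128) = CAST(SERVERPROPERTY(N'ProductVersion') AS nvarchar(128));
DECLARE @version_major int = CAST(LEFT(@version, CHARINDEX(N'.', @version) - 1) AS int);
CREATE TABLE #stage (
 [recovery_unit_id] INT NULL  -- Only filled on SQL Server 2012 (major version 11) and later
 ,[file_id] INT ,[file_size] BIGINT ,[start_offset] BIGINT
 ,[f_seq_no] BIGINT ,[status] BIGINT ,[parity] BIGINT ,[create_lsn] NUMERIC(38)
);
IF @version_major >= 11
  INSERT INTO #stage
  EXECUTE (N'DBCC LogInfo WITH no_infomsgs');
ELSE
  INSERT INTO #stage ([file_id], [file_size], [start_offset], [f_seq_no], [status], [parity], [create_lsn])
  EXECUTE (N'DBCC LogInfo WITH no_infomsgs');
SELECT COUNT(*) AS [vlf_count] FROM #stage;
DROP TABLE #stage;
```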

BTW: Please take a look at the Microsoft Connect item "Document DBCC LOGINFO() or create new DMVs to view VLF info".


Check NetBackup file backup

I have been looking into ways to ensure that a SQL Server backup file is backed up by NetBackup, so that the backup file can be deleted from the local disk. I have not found an API for the local NetBackup client, but there are a lot of command-line tools.
In this case our backup vendor has pointed out the tool "bplist" (bplist.exe). Usually bplist is located in the folder "%ProgramFiles%\Veritas\NetBackup\bin\".
A bplist answer example looks like this:
-rwx------ root;SQL1 root;Doma      485888 Jul 29 20:00:01 E:\MSSQL\Backup\ReportServer_backup_2012_07_28_221142_2861771.bak
if bplist is called with the parameters "-l -b -Listseconds".

There is no year on a NetBackup backup.
This is also discussed in the forum thread "bplist does not display the year of a backup?".
It looks like there is an issue with the age of the backup, whether it is more than six months old or not.
Also, when I look in the documentation, there are some differences in the answer depending on whether the NetBackup server runs on Windows or Linux. Please notice that it is the NetBackup server, not the client operating system, that gives the answer.

bplist can be executed in PowerShell with the call operator (&):
& "<NetBackup folder>\bplist.exe" -l -b "X:\MSSQL\Backup\sqladmin_repository_backup_2012_12_24_172249_7006410.bak"
The full path must be provided for the file to examine in NetBackup.

To look at how to handle bplist and its answer in an automated way, I made this PowerShell spike script:
param (
  [ValidateScript({Get-ChildItem -Path $_})]
  [string]$NetBackup_Folder = 'C:\Program Files\Veritas\NetBackup\bin'
)

Set-StrictMode -Version 2.0

function Get-NetBackup_bplist {
  param (
    [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
    [System.IO.FileInfo]$File
  )
  BEGIN { Write-Verbose "OS Version = $([System.Environment]::OSVersion.VersionString)" }
  PROCESS {
    Write-Verbose "File name = '$($File.FullName)'."

    $BpList = New-Object -TypeName PSObject
    $BpList = $File

    # Get file backup status from NetBackup
    $bplist_answer = $(& "$NetBackup_Folder\bplist.exe" -l -b -Listseconds "$($File.FullName)") 2>&1 # Redirect bplist.exe error to $bplist_answer
    if ($bplist_answer.GetType().IsArray) { # Multiple backups in NetBackup
      Write-Verbose " $($bplist_answer.Length) backups found in NetBackup. Will continue on last backup."
      $_bplist = $bplist_answer[0] # Get last backup in NetBackup
    }
    else {
      $_bplist = $bplist_answer
    }
    Write-Verbose " $($_bplist)"

    # Evaluate file backup status from NetBackup. Add -PassThru to last added member.
    if ($_bplist.ToString() -ceq 'EXIT STATUS 227: no entity was found') { # File not in NetBackup
      Write-Verbose " --- NO backup in NetBackup ($($File.Name))."
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name HasBackup -Value $false
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name BackupLength -Value $null
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name BackupTime -Value $null
    }
    else {
      Write-Verbose " +++ Backup is in NetBackup ($($File.Name))."
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name HasBackup -Value $true

      # Get backup details
      $regex = [regex]"\w+"
      $Backup = $($regex.matches($_bplist)) # Returns [System.Text.RegularExpressions.Match]

      if ($Backup[5].Value.SubString($Backup[5].Length-1) -eq 'K') { # Is the rightmost char 'K'?
        $Backup_Size = [int]$Backup[5].Value.SubString(0,$Backup[5].Length-1) * 1024 # Convert from KB to bytes
      }
      else {
        $Backup_Size = [int]$Backup[5].Value
      }
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name BackupLength -Value $Backup_Size
      $DateParse = "$($Backup[7].Value) $($Backup[6].Value) $([System.DateTime]::Now.Year) $($Backup[8].Value):$($Backup[9].Value):$($Backup[10].Value)"
      Add-Member -InputObject $BpList -MemberType NoteProperty -Name BackupTime -Value $([System.DateTime]::Parse($DateParse))
    }
    Write-Output $BpList
  }
  END {}
}

$ComputerName = $env:COMPUTERNAME
switch -casesensitive ($ComputerName) {
  'SQL1' { # Hypothetical case label; the original server name is lost
  Get-ChildItem -Path 'D:\MSSQL_Backup' |
  Sort-Object -Property Length -Descending |
  Sort-Object -Property LastWriteTime -Descending |
  Select-Object -First 100 |
  Get-NetBackup_bplist | #-Verbose |
  Where-Object { $_.HasBackup -eq $true } |
  #Format-Table Name,Length,HasBackup,BackupTime,BackupLength -AutoSize
  ForEach-Object { Remove-Item $_.FullName -Confirm:$true }
  }
  default {
  Get-ChildItem -Path 'X:\MSSQL\Backup' |
  Sort-Object -Property Length -Descending |
  Select-Object -First 20 |
  Get-NetBackup_bplist |
  #Format-Table Name,Length,HasBackup,BackupTime -AutoSize
  ForEach-Object { Remove-Item $_.FullName -Confirm:$true }
  }
}
Again – a spike…

The general motivation for looking into this is to ensure that the restore chain is complete, also in the secondary backup in NetBackup.
A complete restore chain is necessary to ensure complete recovery.

(This is a running update on a post from 2012-08-27)


SQL Server 2008 on virtual Windows 8

As preparation for a study group at work for MCTS 70-448, "Microsoft SQL Server 2008 – Business Intelligence Development and Maintenance", I am installing SQL Server 2008 R2 Developer Edition on a virtual Windows 8 Enterprise Edition.
The virtualizer is VMware Player; not because I have evaluated the possibilities of other products, but because I am used to VMware products.

I chose not to use Easy Install.
I once tried to map the ISO file in the wizard, but even though the wizard recognizes Windows 8 and I entered the license key in the VMware Player Easy Install, I got this error message from Windows Setup several times:
I guess that the issue is that Windows 8 validates the license key against a license server, and I don't have one in my study.

The guest is defined with a larger virtual disk than the default, and in one file on the host.
Also the guest is given 5120 MB (5 GB) of guest memory.

After the guest is defined, the guest CD drive is mapped to the Windows 8 ISO file and the installation begins when the guest is started.

The Windows 8 Action Center tells me that the installation must be activated, but this fails with an error message about a DNS name:
The real issue is that I do not have a Key Management Server (KMS) in my study. This is described in KB929826, where some solutions are also given. In my case method 1 works fine.
Actually I started PowerShell as administrator, but it still works.
Microsoft talks a lot about PowerShell, but slmgr is a .vbs file executed by Windows Script Host (WSH). This is indicated by the success message, where the title of the message box is "Windows Script Host".
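For reference, method 1 in KB929826 boils down to changing the product key to a Multiple Activation Key (MAK) with slmgr and then activating; a sketch, where the key is of course a placeholder:

```cmd
slmgr.vbs /ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
slmgr.vbs /ato
```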

From then on it is a normal SQL Server 2008 installation with the components described in the MCTS 70-448 Training Kit.


Start shared HTA with PowerShell

I have some tools made as HTML Applications (HTAs). And I am not the only one to use such a tool, but HTAs run locally, and still the tool must be updated for all users.
A solution is to have a launch script; in my case made in PowerShell. The launch script is called from a shortcut that the user can either use from a file share or copy to a local place like the Windows Desktop.

The launch script is like this:
$Source_Folder = '\\filesrv42.sqladmin.lan\DBA\SqlAnchor\SqlAnchor_Hta'
$Destination_Folder = 'C:\SQLAdmin\SqlAnchor'
$SqlAnchor_Filenames = 'SqlAnchor.hta','anchor.ico','SqlAnchor.DetailPages.js'

# Make sure destination folder is available
if (!(Test-Path -path $Destination_Folder)) { New-Item $Destination_Folder -Type Directory }

# Copy HTA files
foreach ($Filename in $SqlAnchor_Filenames) {
  Copy-Item "$Source_Folder\$Filename" -Destination "$Destination_Folder\$Filename" -Force
}

# Start HTA
& "$env:SystemRoot\System32\mshta.exe" "$Destination_Folder\SqlAnchor.hta"

The shortcut for the launch script is
powershell.exe -WindowStyle "Hidden" -File "\\filesrv42.sqladmin.lan\DBA\SqlAnchor\SqlAnchor.Launch.ps1" -ExecutionPolicy "Bypass"

When the shortcut is activated by the user, PowerShell is started and the launch script is loaded. This can be done from a file share as the PowerShell execution policy is bypassed.
Disclaimer: If you use this solution, it is your responsibility to be compliant.

The launch script copies the files for the HTA to a local folder. If the folder does not exist, it is created.
If the files are already present in the local folder, they are replaced.

Finally the launch script starts the HTA host "mshta.exe" and loads the HTA script.
When the HTA script is loaded, the launch script finishes and PowerShell exits.


Back from vacation

What happened while I was away on vacation? A quick query on recently created databases gives a hint:
SELECT
  [databases].[name] AS [database_name],
  [server_principals].[name] AS [database_owner_name],
  [databases].[create_date]
FROM [master].[sys].[databases]
INNER JOIN [master].[sys].[server_principals] ON
  [databases].[owner_sid] = [server_principals].[sid]
WHERE
  [databases].[database_id] > 4 AND  -- Filter out system databases
  [databases].[create_date] >= '2012-07-27';


Readable seconds

When I restored a SQL Server database, the message ended with a status of the restore:
RESTORE DATABASE successfully processed 15025885 pages in 3372.932 seconds (34.803 MB/sec).

That is a lot of seconds, and I was asked by management about the restore time. Just giving a couple of thousand seconds was not good enough.
But PowerShell came to the rescue with the cmdlet New-TimeSpan:
(New-TimeSpan -Seconds 3372.932).ToString()
that gives

The format is hh:mm:ss, given by the ToString() method of the .NET TimeSpan structure.

As the cmdlet is based on the .NET TimeSpan structure, and PowerShell is integrated with .NET, it is also possible to use the static method FromSeconds() exposed by the structure:
([System.TimeSpan]::FromSeconds(3372.932)).ToString()

The result is exactly the same, as expected.


ISO 4217 Currency list

To maintain a list of currencies, I have been looking for a standard and an external resource.
The standard ISO 4217 is described on Wikipedia, and through that article I found the resource at iso.org as an XML document.

A quick way to read the currency list using PowerShell:
[xml]$ISO4217 = (New-Object System.Net.WebClient).DownloadString('http://www.currency-iso.org/dl_iso_table_a1.xml')

Write-Verbose -Message "Currency count = $($ISO4217.ISO_CCY_CODES.ISO_CURRENCY.count)" -Verbose

foreach($Currency in $ISO4217.ISO_CCY_CODES.ISO_CURRENCY) {
  $Currency
}

The output on a currency is like this:
CURRENCY        : Danish Krone
MINOR_UNIT      : 2

When you have the currency list, I think it is rather simple to update an internal list, e.g. in a database table.

You could add properties like a last-update timestamp or a private identifier to meet your own needs.

Also the Wikipedia article has a list of historical currency codes that could be added to the internal list.
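With the list in hand, an update of an internal table can be sketched like this in PowerShell; the table name [sqladmin].[currency] and its columns are my assumptions:

```powershell
# Sketch only: emit INSERT statements for an assumed internal table.
[xml]$ISO4217 = (New-Object System.Net.WebClient).DownloadString('http://www.currency-iso.org/dl_iso_table_a1.xml')
foreach ($Currency in $ISO4217.ISO_CCY_CODES.ISO_CURRENCY) {
  "INSERT INTO [sqladmin].[currency] ([currency_name], [minor_unit]) VALUES (N'$($Currency.CURRENCY.Replace("'","''"))', N'$($Currency.MINOR_UNIT)');"
}
```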

Logging in T-SQL scripts

I am working on a major version upgrade of a system that is using SharePoint and private databases, in both cases with 100+ GB of data. Most database parts are scripted, and some tasks run for hours.
Still, it is important for the planning to know the execution time of the tasks and their steps.

I have done this logging and timing by wrapping the steps in some messages:
DECLARE @start datetime2 = SYSDATETIME();
DECLARE @errmsg nvarchar(2047) = LEFT(CONVERT(nvarchar(128), @start, 127), 22) + N' : Start';
RAISERROR(@errmsg, 0, 0) WITH NOWAIT;

-- Do something
WAITFOR DELAY '00:00:23';  -- HH:mm:ss

DECLARE @finish datetime2 = SYSDATETIME();
DECLARE @duration int = DATEDIFF(ms,@start,@finish);
SET @errmsg = LEFT(CONVERT(nvarchar(128), @finish, 127), 22) + N' : Finish after ' + CAST(@duration AS nvarchar(128)) + N' ms.';
RAISERROR(@errmsg, 0, 0) WITH NOWAIT;

I do the LEFT( ..., 22 ) on the timestamp in the message to increase the readability of the output.
The duration of the step is in this case measured in milliseconds (ms), but could be measured in another unit. Unfortunately this unit can not be a parameter; as the documentation on DATEDIFF() says, "User-defined variable equivalents are not valid.".

To get a readable duration instead of a lot of milliseconds like 56135000, you can create a TimeSpan object in PowerShell:
PS> ([System.TimeSpan]::FromMilliseconds(56135000)).ToString()
This will give a formatted timespan
15:35:35
which is 15 hours, 35 minutes and 35 seconds.

If I have a larger script file with several steps, I would go for a PowerShell solution, as I would then have better possibilities to log and use general parameters.


2012-06-26 Entry created.
2015-02-05 Section about TimeSpan object added.



PowerShell Stopwatch

Some time ago I wrote about timestamp differences, and I have used that for measuring execution times.
But actually there is a much better way.

By using the Stopwatch class (System.Diagnostics.Stopwatch) I get the measurement from a single object instead of two DateTime (System.DateTime) objects.
Also I can take a look at the stopwatch while it is running through the Elapsed property.
The result is a TimeSpan (System.TimeSpan) object, and read with the ToString method the value is very useful.

The Stopwatch is created by calling the static method StartNew():
PS C:\> $mywatch = [System.Diagnostics.Stopwatch]::StartNew()
Stopping the stopwatch is just as simple:
PS C:\> $mywatch.Stop()
Reading the stopwatch without formatting the answer is also quite simple:
PS C:\> $mywatch.Elapsed

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 14
Milliseconds      : 975
Ticks             : 149755134
TotalDays         : 0,000173327701388889
TotalHours        : 0,00415986483333333
TotalMinutes      : 0,24959189
TotalSeconds      : 14,9755134
TotalMilliseconds : 14975,5134

Reading the stopwatch with the standard formatting [hh:mm:ss.fffffff] gives you a more readable result:
PS C:\> $mywatch.Elapsed.ToString()
00:00:14.9755134

I use it for execution times; on scripts for the entire script and for some subtasks.


Describe database role

When you script out a database role in SQL Server Management Studio (SSMS) or another tool like Red Gate SQL Compare, you will most likely get just the creation of the role, but not the rights.
Yesterday I needed the rights of a database role on tables and other database objects. This I got with a small T-SQL script:

:Setvar _RoleName "sqlanchor_user"

DECLARE @script nvarchar(MAX) = N'USE [' + DB_NAME() + N'];' + CHAR(13) + N'GO' + CHAR(13);

SELECT @script += N'CREATE ROLE [' + [db_role].[name] + N'] AUTHORIZATION [' + [owner].[name] + N'];' + CHAR(13)
FROM [sys].[database_principals] AS [db_role]
INNER JOIN [sys].[database_principals] AS [owner]
  ON [db_role].[owning_principal_id] = [owner].[principal_id]
WHERE [db_role].[type] = N'R' AND [db_role].[name] = N'$(_RoleName)';

-- Database object rights
SELECT @script += 'GRANT ' + [db_perm].[permission_name] + ' ON [' + USER_NAME(OBJECTPROPERTY([db_perm].[major_id], 'OwnerId')) + '].[' + OBJECT_NAME([db_perm].[major_id]) + '] TO [' + [db_role].[name] + N'];' + CHAR(13) COLLATE Latin1_General_CI_AS
FROM [sys].[database_permissions] AS [db_perm]
INNER JOIN [sys].[database_principals] AS [db_role]
  ON [db_perm].[grantee_principal_id] = [db_role].[principal_id]
WHERE db_perm.[class] = 1  -- class 1 : Db Object or Column
  AND [db_role].[name] = N'$(_RoleName)';

PRINT @script;

The script only shows how to create the database role and assign rights on database objects.
Rights on schemas, for example, are not covered by this script. Consider the script a spike solution that can be extended.
Take a look in the documentation for "sys.database_permissions" for the possibilities.


Get Backup Directory with ADO.NET

About 1½ years ago I made an entry on this blog on how to get the SQL Server Backup Directory by using the undocumented stored procedure "[master].[sys].[xp_instance_regread]".
Today I needed the path in a PowerShell script, and I also wanted to call the procedure correctly.
This I have done by calling the procedure through ADO.NET as a stored procedure, not in an EXECUTE statement as dynamic SQL.
$ServerName = '(local)'
$cnnStr = "Data Source=$ServerName;Integrated Security=SSPI;Application Name=SqlBackupFolder"
$Cnn = New-Object System.Data.SqlClient.SqlConnection $cnnStr
$Cmd = New-Object System.Data.SqlClient.SqlCommand
$Cmd.Connection = $Cnn
$Cmd.CommandText = '[master].[sys].[xp_instance_regread]'
$Cmd.CommandType = [System.Data.CommandType]::StoredProcedure
$Cmd.Parameters.Add("@rootkey", [System.Data.SqlDbType]::NVarChar, 128) | Out-Null
$Cmd.Parameters['@rootkey'].Direction = [System.Data.ParameterDirection]::Input
$Cmd.Parameters['@rootkey'].Value = 'HKEY_LOCAL_MACHINE'
$Cmd.Parameters.Add("@key", [System.Data.SqlDbType]::NVarChar, 128) | Out-Null
$Cmd.Parameters['@key'].Direction = [System.Data.ParameterDirection]::Input
$Cmd.Parameters['@key'].Value = 'SOFTWARE\Microsoft\MSSQLSERVER\MSSQLSERVER'
$Cmd.Parameters.Add("@value_name", [System.Data.SqlDbType]::NVarChar, 128) | Out-Null
$Cmd.Parameters['@value_name'].Direction = [System.Data.ParameterDirection]::Input
$Cmd.Parameters['@value_name'].Value = 'BackupDirectory'
$Cmd.Parameters.Add("@value", [System.Data.SqlDbType]::NVarChar, 128) | Out-Null
$Cmd.Parameters['@value'].Direction = [System.Data.ParameterDirection]::Output
$Cnn.Open()
$_RowCount = $Cmd.ExecuteNonQuery()
$SqlBackupFolder = $Cmd.Parameters['@value'].Value
$Cnn.Close()
":: SQL Server Backup Folder = '$SqlBackupFolder'."

The answer from the script is like
:: SQL Server Backup Folder = 'C:\MSSQL\Backup'.

It is possible to reduce the number of lines in the script, but this way I can use the call of the procedure for other information than the Backup Directory.

If you plan to reuse the Command object, you should consider removing the parameters ($Cmd.Parameters.Clear()) and resetting the CommandType ($Cmd.CommandType = [System.Data.CommandType]::Text).


Get SQL Server root folder with xp_instance_regread

I often use the standard folders of SQL Server, like the folder "Log" with the SQL Server Error Log files and my SQL Server Agent job log files, or the folder "JOBS" with my job script files.
These folders are subfolders of the SQL Server root folder, so when I have the root folder, I have the rest.
"One folder to hold them all"
Also I would like to be able to get the root folder on named instances and non-standard installations.
By using the undocumented stored procedure "[master].[dbo].[xp_instance_regread]" I have a useful solution to my need.
Actually the procedure is placed in the schema "sys" on newer SQL Server versions, but by calling it through the "dbo" schema the same statement works on both SQL Server 2000 and 2012.

DECLARE @root_folder nvarchar(128);
EXECUTE [master].[dbo].[xp_instance_regread]
  @rootkey = N'HKEY_LOCAL_MACHINE',
  @key = N'SOFTWARE\Microsoft\MSSQLSERVER\Setup',
  @value_name = N'SQLDataRoot',
  @value = @root_folder OUTPUT;
SELECT @root_folder AS [root_folder];

The answer could be

It does work on SQL Server 2000 and named instances.

If I place the statement in a PowerShell here-string in a SQL Server Agent PowerShell job step and use the cmdlet Invoke-SqlCmd, I get the root folder without knowing the name of the server or the instance.

EXECUTE [msdb].[dbo].[sp_add_jobstep]
@subsystem=N'PowerShell',
@command=N'$sql = @"
DECLARE @root_folder nvarchar(128);
EXECUTE [master].[dbo].[xp_instance_regread]
  @rootkey = N''HKEY_LOCAL_MACHINE'',
  @key = N''SOFTWARE\Microsoft\MSSQLSERVER\Setup'',
  @value_name = N''SQLDataRoot'',
  @value = @root_folder OUTPUT;
SELECT @root_folder AS [root_folder];
"@
Invoke-SqlCmd -Query $sql',

Very nice, I think, because it makes it possible to use the same job definition on all SQL Server database instances, e.g. in standard backup jobs.

Also I can use the root folder path directly in PowerShell:
Import-Module 'SQLPS'
$sql = @"
DECLARE @root_folder nvarchar(128);
EXECUTE [master].[dbo].[xp_instance_regread]
  @rootkey = N'HKEY_LOCAL_MACHINE',
  @key = N'SOFTWARE\Microsoft\MSSQLSERVER\Setup',
  @value_name = N'SQLDataRoot',
  @value = @root_folder OUTPUT;
SELECT @root_folder AS [root_folder];
"@
$RootFolder = $(Invoke-SqlCmd -Query $sql).root_folder
":: MSSQL Root Folder = '$RootFolder'."

In this case the answer is


CPU count from T-SQL

I would like to know the number of processors (cores) on a given computer, no matter what the CPU affinity is or how many database instances are running on the computer.

The answer is given by the DMV "sys.dm_os_schedulers":
SELECT MAX([cpu_id]) + 1 AS [cpu_count]
FROM [master].[sys].[dm_os_schedulers];

I could use WMI and the Win32_Processor class (MSDN Library), but I don't always have the necessary rights to read the value remotely.
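For comparison, the WMI alternative could look like this in PowerShell; a sketch, where the computer name is just an example and remote WMI rights are required:

```powershell
# WMI alternative (sketch): sum the cores over all CPU sockets.
$cores = (Get-WmiObject -Class Win32_Processor -ComputerName 'SANDY.sqladmin.lan' |
  Measure-Object -Property NumberOfCores -Sum).Sum
":: CPU core count = $cores."
```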


Log failure in a SQL Agent PowerShell job step

I wanted to have a template for a SQL Server Agent PowerShell job step. A key feature of the template is to make the job step fail in a controlled way with a customized and informative error message.
The job step is defined in a script file (.ps1) called by the SQL Server PowerShell job step. If an error occurs in the script file, it is caught, described and handled. The script exits, and the job step presents the error in the job step history and fails. By failure I also mean that the value of "run_status" in the table msdb.dbo.sysjobhistory is set to "0" (zero).
The output from the script and the job step is streamed to a log file defined in the job step. The log file should hold all output and messages of the job step execution, that is both customized output and standard output from components.

The name of the log file is defined together with other job step parameters in a script configuration file for the job step:
$LogName = 'JobStepFailure'

In the script file the log file is implemented and initialized
$ScriptStartTime = [System.DateTime]::Now
$LogFileName = "$LogFileFolder\$LogName.{0:yyyyddMMTHHmmss}.log" -f $ScriptStartTime

After the script and the log file are initialized, I move on to reset the error array, just in case:
$Error.Clear()
and then the script execution can start for real:
"{0:s}  Begin." -f $([System.DateTime]::Now) > $LogFileName

If an error occurs in the script execution, I usually handle the basics like this:
try { $rowsSource = $DataAdpt.Fill($DataSet) }
catch {
  "{0:s}  Exception: $_" -f $([System.DateTime]::Now) >> $LogFileName
  throw "Error in script. Check logfile '$LogFileName' for details."
}
finally { $CnnSource.Close() }

At the end of the script
"{0:s}  End." -f $([System.DateTime]::Now) >> $LogFileName
I check one more time for errors and exit the script:
if ($Error) {
  throw "One or more errors in script execution.`nCheck logfile '$LogFileName' for details."
}
else {
  "Script executed with success.`nCheck logfile '$LogFileName' for details." | Write-Output
}

One major argument for creating the log file from the script file, instead of using the parameter @output_file_name of the procedure msdb.dbo.sp_add_jobstep, is that if the PowerShell script throws an exception, the content of the log file is only the exception.
I do believe that this template can be used as a general solution.


PowerShell script configuration file

There are two major reasons for looking into how to handle configuration files for PowerShell scripts.
  1. It makes reuse of the script easy. Just make another configuration file, and you have a reuse.
  2. It makes test and deployment possible. With a configuration file your script can be executed in all environments with the right values.
I like to keep things simple, and by dot-sourcing (not Duck Sauce, sorry...) a PowerShell script file I have a simple solution. For convenience I name the file "*.config.ps1".
. ".\myScript.taskOne.config.ps1"
When I use the structure for a SQL Agent job with several steps, like importing data from various sources, I like the naming "{task}.{job step}.config.ps1".

Each value in the configuration file is defined as a normal PowerShell variable. If you want to tighten your solution, you can define scope or type:
[string]$script:SourceServer = 'SANDY.sqladmin.lan'

The configuration file can be tested in the script file on each run by checking the parameters and their values.

One thing I almost always define in the configuration file is the log file name for the current script execution, usually with a timestamp in the name.
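Such a definition could look like this in a "*.config.ps1" file; a sketch, where the folder and name are just examples:

```powershell
# Log file name for the current script execution, with a timestamp (example values).
[string]$script:LogFileName = 'C:\SQLAdmin\Log\myScript.taskOne.{0:yyyyMMddTHHmmss}.log' -f [System.DateTime]::Now
```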

I have used the solution in production, and it works fine. The simplicity makes it easy to use and maintain.

I tried some XML-formatted configuration files, like in Visual Studio projects, but I found them too complicated to use in a script, as I had to parse the XML file to get the values.
Also some old-style configuration files, also called initialization files, define sections with [] and assign values in the sections. Again I found them too complicated to use. Usually the initialization file is accessed with regular expressions, which I normally find to be a rather powerful tool, but when I can do a thing more simply, I do that. Occam's razor...


SQL Server Agent PowerShell job step

Some time ago I wrote about this in another blog entry, and I do it again while there are some major changes in what I want to use.
First of all I want the SQL Server Agent jobstep to be defined as a PowerShell job step, not a CmdExec job step.
Also I have some other issues I want to handle.
  1. The job step is defined in a script file, not a script in the job step command field.
  2. The script file takes input by named parameters.
  3. There are one or more spaces in the script file path, e.g. "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\JOBS".
When I looked around on the internet, two different solutions were used.
One using the PowerShell call operator ampersand (&).
One using the Invoke-Expression cmdlet. The latter should be avoided because it has some risks. These risks are described in the book "Windows PowerShell in Action, Second Edition" (ISBN-13 978-1-935182-13-9) by Bruce Payette in section 21.6.4. This book is highly recommended.

Actually it turned out to be quite simple, when I had the quotation marks in place.
& "D:\Microsoft SQL Server\MSSQL10.SANDY\MSSQL\JOBS\SQLPS_test.ps1" -param_00 'Zero' -param_01 'One'
The trick is to put the full name of the script file in quotation marks, and the parameter names and values outside the quotation marks.

When you define the job step using the stored procedure msdb.dbo.sp_add_jobstep, the value of the parameter @command has a lot of quotation marks.
@command=N'& "D:\Microsoft SQL Server\MSSQL10.SANDY\MSSQL\JOBS\SQLPS_test.ps1" -param_00 ''Zero'' -param_01 ''One'''
Please notice that some quotation marks are single and some are double.

The script file is like this.
"{0:s}  Hello." -f $([System.DateTime]::Now)
"{0:s}  Parameter value 00 = $param_00" -f $([System.DateTime]::Now)
"{0:s}  Parameter value 01 = $param_01" -f $([System.DateTime]::Now)
"{0:s}  Goodbye." -f $([System.DateTime]::Now)

and the output is like
2012-02-14T07:50:03  Hello.
2012-02-14T07:50:03  Parameter value 00 = Zero
2012-02-14T07:50:03  Parameter value 01 = One
2012-02-14T07:50:03  Goodbye.


Change database owner

I am about to join an internal course, and would like to prepare by installing the SQL Server Database Product Samples from CodePlex.
After the installation, the databases were owned by my login, and I would like to have the owner changed to "sa". In my case it is renamed, but that is another story.

Changing the owner of a database in SQL Server Management Studio is done on the properties of the database in the page "Files".
When I ask for the change script by clicking the Script drop-down, I get a script that uses the stored procedure "sp_changedbowner".
The article on this procedure in Books Online (MSDN Library) notes that this feature will be removed, and that I should use ALTER AUTHORIZATION instead.
Then I rewrote the statement to use the recommendation.
Please recall that I have renamed "sa".

The samples generate several databases. Six in my case.
Instead of generating a script for each database, I made a script to generate these scripts
SELECT N'ALTER AUTHORIZATION ON DATABASE::[' + [databases].[name] + N'] TO [sql_sa];' AS [DCL]
FROM [master].[sys].[databases]
WHERE [databases].[owner_sid] != (
  SELECT [server_principals].[sid]
  FROM [master].[sys].[server_principals]
  WHERE [server_principals].[name] = N'sql_sa'
);
I could have generated the scripts for the first principal, as "sa" is always the first, but maybe another day I would like to use another principal.

The generated scripts are like this
ALTER AUTHORIZATION ON DATABASE::[AdventureWorks2008R2] TO [sql_sa];
ALTER AUTHORIZATION ON DATABASE::[AdventureWorksDW2008R2] TO [sql_sa];
ALTER AUTHORIZATION ON DATABASE::[AdventureWorksLT2008R2] TO [sql_sa];
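As a sketch beyond the original scripts, the generation and the execution could even be combined from PowerShell with the cmdlet Invoke-Sqlcmd; the instance name 'SANDY' is a hypothetical placeholder.

```powershell
# Generate the ALTER AUTHORIZATION statements and execute each one.
# 'SANDY' is a placeholder instance name - adjust to your environment.
$query = @"
SELECT N'ALTER AUTHORIZATION ON DATABASE::[' + [name] + N'] TO [sql_sa];' AS [DCL]
FROM [master].[sys].[databases]
WHERE [owner_sid] != (SELECT [sid] FROM [master].[sys].[server_principals] WHERE [name] = N'sql_sa');
"@
Invoke-Sqlcmd -ServerInstance 'SANDY' -Query $query |
  ForEach-Object { Invoke-Sqlcmd -ServerInstance 'SANDY' -Query $_.DCL }
```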


Backup filesize on collections of databases

This morning I had to find the total file size of backup files on a collection of databases for a given system.
Actually the answer can be generated by a single PowerShell statement :-)
$(ls MySystemName*.* | measure -s -pr Length).Sum / 1gb

The answer is the number of gigabytes, given as a Double number.

A more readable version of the statement without aliases is
$(Get-ChildItem MySystemName*.* | Measure-Object -Sum -Property Length).Sum / 1GB

The statement is invoked with the location in the backup folder.

If a sum of sizes from more than one location is needed, the amounts can be taken from UNC paths and added up
($(ls '\\SERVER01.sqladmin.lan\Y$\SQL Server Backup\SystemOne*.bak' | measure -s -pr Length).Sum + $(ls '\\SERVER02.sqladmin.lan\Y$\SQL Server Backup\SystemOne*.bak'| measure -s -pr Length).Sum) / 1gb
In this example the default shares are used in the UNC path.
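With more than two locations, a loop over the paths keeps the statement readable. A small sketch, with the same hypothetical share names as above:

```powershell
# Sum backup file sizes over several locations; the paths are example placeholders.
$paths = '\\SERVER01.sqladmin.lan\Y$\SQL Server Backup\SystemOne*.bak',
         '\\SERVER02.sqladmin.lan\Y$\SQL Server Backup\SystemOne*.bak'
$totalBytes = ($paths | ForEach-Object { Get-ChildItem $_ } |
  Measure-Object -Sum -Property Length).Sum
# Round to two decimals for a readable gigabyte figure
[Math]::Round($totalBytes / 1GB, 2)
```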


PowerShell WMI - The RPC server is unavailable

I have an ongoing task of refining an automated collection (instrumentation) of computers in the organisation running a SQL Server database installation.
Most of the data are collected by WMI, and the automation is done by PowerShell.
The other day I ran into a challenge where the computer is available, but the rights are insufficient. The WMI call by the cmdlet Get-WmiObject is initialised as the computer exists, but the data request fails due to the lack of rights. Actually the error does not generate an exception that is caught by the PowerShell try-catch exception handling. Judging by various forums, it looks like a remoting issue.

A quick workaround is to take a look at the most recent element of $Error ($Error[0]) with a match, and suppress the error message with the cmdlet parameter "-ErrorAction SilentlyContinue".
I know that the error message contains "HRESULT: 0x800706BA", and a match on this will catch the error in a robust way.

$DebugPreference = 'continue'

$ComputerName = 'SQL30.test.dn.ext'

"{0:s}  `"$ComputerName`"." -f ([System.DateTime]::Now) | Write-Debug

$objComputer = New-Object System.Object
Add-Member -InputObject $objComputer -MemberType NoteProperty -Name Name -Value $ComputerName

$WQL = "SELECT Manufacturer,Model,TotalPhysicalMemory,NumberOfProcessors,SystemType FROM Win32_ComputerSystem"
try {
  $_ComputerSystem = Get-WmiObject -Query $WQL -ComputerName $ComputerName -ErrorAction SilentlyContinue
}
catch [System.UnauthorizedAccessException] {
  ":: ERROR: WMI access denied."
}

# The error "The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)" is not caught but still generates an error
if ($Error[0].Exception -match "HRESULT: 0x800706BA") {
  ":: ERROR: WMI COM (RPC) not available."
}

Add-Member -InputObject $objComputer -MemberType NoteProperty -Name Manufacturer -Value $_ComputerSystem.Manufacturer
Add-Member -InputObject $objComputer -MemberType NoteProperty -Name Model -Value $_ComputerSystem.Model
Add-Member -InputObject $objComputer -MemberType NoteProperty -Name MemoryInKb -Value $(if ($_ComputerSystem) { $_ComputerSystem.TotalPhysicalMemory / 1KB } else { $null })
Add-Member -InputObject $objComputer -MemberType NoteProperty -Name CpuCoreCount -Value $_ComputerSystem.NumberOfProcessors
Add-Member -InputObject $objComputer -MemberType NoteProperty -Name SystemType -Value $_ComputerSystem.SystemType



Looking at PowerShell exception handling

With PowerShell v2 we were given a more complete exception handling than the trap-handling in PowerShell v1.
This is a quick spike on throwing an exception, catching it and looking at the $Error variable.
Feel free to continue :-)


try {
  #throw "MyException"
  #throw [System.DivideByZeroException]
  #throw [IO.PathTooLongException]
  throw [System.Data.NoNullAllowedException]
}
catch {
  $_.CategoryInfo | Format-List Category, Activity, Reason, TargetName, TargetType
}
finally {
  ":: Final"
}

if ($Error) {
  "{0:s}  Error in script execution. See log for details." -f $([System.DateTime]::Now)
}

The output of the script above is
Category   : OperationStopped
Activity   : 
Reason     : RuntimeException
TargetName : System.Data.NoNullAllowedException
TargetType : RuntimeType

:: Final
2012-01-02T22:15:20  Error in script execution. See log for details.

More details and links are in the article "Windows PowerShell Error Records" on MSDN Windows Dev Center.
Please notice that the returned objects are not of the System.Exception class but of the ErrorRecord class.
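To see for yourself that an ErrorRecord and not a plain exception comes back, the members can be inspected in the catch block. A small sketch:

```powershell
try {
  throw "MyException"
}
catch {
  # $_ is an ErrorRecord, not a System.Exception
  $_.GetType().FullName            # System.Management.Automation.ErrorRecord
  # The wrapped exception (a RuntimeException for a thrown string)
  $_.Exception.GetType().FullName
  $_.Exception.Message             # MyException
}
```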