2015-12-25

SQL Server 2016 installation notes

These are some personal notes I have made on installing SQL Server 2016. I intend to update the notes when there are new releases or when I have looked deeper into a detail.

Choices

I have selected these SQL Server features:
  • Database Engine Services, no sub-features (FEATURE=SQLENGINE).
  • Client Tools Connectivity (FEATURE=CONN).
  • Client Tools Backwards Compatibility (FEATURE=BC).
  • Client Tools SDK (FEATURE=SDK).
  • Management Tools - Complete (FEATURE=ADV_SSMS).
  • SQL Client Connectivity SDK (FEATURE=SNAC_SDK).
With RC0, SQL Server Management Studio (SSMS) is not part of the general setup. I look more into this in a later section of this post.
The installation is with these products:
  • Windows Server 2012 R2 virtualized on VMware Workstation 12 Pro.
  • SQL Server 2016 RC1.
  • Windows 10 Pro virtualized on VMware Workstation 12 Pro.

Preparations

Before installation I make some notes on what is installed. How to do this is nicely described by Marc Carter in the blog entry „Use PowerShell to Find Installed Software“.
Also I make notes on which services are running. This can be collected with the cmdlet Get-Service.
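A minimal sketch of how the running services could be noted in a file for later comparison; the file path is just an example:
Get-Service | Select-Object -Property Name, DisplayName, Status |
  Export-Csv -LiteralPath 'D:\SQLAdmin\Services_Before.csv' -NoTypeInformation
Afterwards the same snapshot can be taken again and the two files compared with Compare-Object.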

Then I check if .NET 3.5 is enabled on the computer, as this is a requirement for my SQL Server installation. If .NET 3.5 is not enabled, I have to enable it first.
I have to describe this in more detail...
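Until then, a sketch of how the check and the installation could be done with the ServerManager cmdlets on Windows Server; the -Source path is only an example and must point to the SxS folder of the installation media:
Get-WindowsFeature -Name NET-Framework-Core
# If the install state is not Installed, enable the feature (example source path):
Install-WindowsFeature -Name NET-Framework-Core -Source 'D:\sources\sxs'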

Storage

There are several new features on storage configuration with SQL Server 2016, like configuring multiple datafiles for tempdb on installation. To test a SQL Server installation with multiple drives I have made these drives for the Windows Server installation:
  • C-drive: System drive with Windows Server and basic application files.
  • D-drive: Program drive with application files. Also the SQLAdmin folder for local DBA files.
    • GPT partition.
  • E-drive: Data drive with data files for user databases.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB (KB).
  • F-drive: Transaction log drive with transaction log files for user databases and tempdb.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB (KB).
  • G-drive: tempdb drive with data files for tempdb database.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB (KB).
  • H-drive: Backup drive for any backup file.
    • GPT partition.
The Z-drive is reserved for removable drives. Below is a sketch of how one of the data drives could be prepared with PowerShell.
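The disk number and drive letter in this sketch are examples and must be checked with Get-Disk first:
# Assumption: disk 2 is the new, empty drive that will become the E-drive (data drive).
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -DriveLetter E -UseMaximumSize |
  Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'Data'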

Installation

The SQL Server installation is usually done by the wizard or by command-line. I will look into the details of this later.

In this case I use the default service accounts given by the SQL Server installation, but I change the SQL Server Agent Startup Type to Automatic. These issues are subjects for many great discussions, but not here and now.
SQL Server 2016 gives the possibility to enable Database Instant File Initialization on installation, but I have chosen not to. The subject is greatly described by Kimberly Tripp in the blog post „Instant Initialization – What, Why and How?“.

The default collation for the SQL Server installation is still the SQL collation „SQL_Latin1_General_CP1_CI_AS“, where the recommendation for some years has been a Windows collation. Usually I recommend the SharePoint collation „Latin1_General_CI_AS_KS_WS“ to ensure a better information integration. In this case I changed to the collation „Latin1_General_100_CI_AS_KS_WS_SC“ to get Supplementary Characters (SC) available.

With SQL Server 2016 it is now possible to configure tempdb on installation. This is a really great feature – that I have not used in this case.

The installation took about 44 minutes according to the summary log file.

Control

As before the installation, I note what is installed and which services are running.
I also take a look into the summary log file.
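The summary log can be opened directly with Get-Content; the path below is the typical default location for SQL Server 2016 and is an assumption here:
Get-Content -LiteralPath 'C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\Summary.txt'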

SQL Server Management Studio

As mentioned before, SQL Server Management Studio (SSMS) is with RC0 removed from the general setup and placed in "Tools\SSMS-Full-Setup.exe". When running the GUI setup, there is no selection of features.
With RC1, SSMS is removed from the installation set but can be downloaded as described in BOL „Download SQL Server Management Studio (SSMS)“. There is an item in the setup GUI called "Install SQL Server Management Tools", but it just opens the webpage I just mentioned.
As SSMS is removed from the SQL Server setup, the release notes for SSMS are also removed from the general SQL Server release notes. You might like to take a quick look at the SSMS Release Notes in "SQL Server Management Studio - Release Notes".

Post-Configuration

Usually I stop and disable the SQL Server VSS Writer service as I take backup with T-SQL through the Database Engine:
Get-Service -Name SQLWriter | Set-Service -StartupType Disabled -Status Stopped

Finally I run Windows Update to get the latest patches for the new components on the computer. In this case there were a Security Update and a regular Update for Visual Studio 2010 SP1 (KB2645410 & KB2635973).

2015-12-04

SQL Server Errorlog parsing with PowerShell

In a recent incident I browsed the SQL Server Errorlog, but with a daily cycle of the Errorlog there were quite a few logfiles to browse. This is a description of some ways to parse the SQL Server Errorlog for specific messages.

Ways to read Errorlog

There are several more or less common ways to read the Errorlog. I have collected some with a short description.

  • T-SQL: The stored procedure master.sys.sp_readerrorlog uses the extended stored procedure master.sys.xp_readerrorlog to search for a somewhat simple string. This can be enough in simple situations, but not when more detailed information is needed. Also these procedures are not documented by Microsoft, and therefore not supported.
  • SMO: The method Server.ReadErrorLog() reads an Errorlog file and returns an array of Errorlog lines. This is rather useful in most cases, but sometimes I need to combine several lines from the Errorlog.
  • SQLPSX: This CodePlex project "SQL Server PowerShell Extensions" has defined the cmdlet Get-SqlErrorLog. The cmdlet is using the SMO method Server.ReadErrorLog() described above. Unfortunately it looks like the project has been dormant since 2011.
  • .NET: The method [System.IO.File]::ReadAllText() reads the entire contents of the Errorlog file into one String. This makes it convenient for combining several lines in the Errorlog in one search filter.
  • PowerShell: The cmdlet Get-Content, especially with the -Raw parameter set, gives direct access to the contents of an Errorlog file. I prefer the parameter -LiteralPath over the parameter -Path as it takes the path exactly as typed, with casing, spaces and everything else.
In general I prefer to use the PowerShell cmdlet Get-Content. Get-Content -Raw can be used to get one string instead of an array of SQL Server Errorlog lines. This makes it easy to match the entire event with one regular expression (regex) pattern.
To match across several lines in the Errorlog, the regular expression pattern includes \s to match both CR+LF and LF. In this case both kinds of line breaks actually occur in the Errorlog to generate a new line in the presentation of the Errorlog.

When the parsing is implemented in PowerShell advanced functions, the input can be either one SQL Server Errorlog file or a folder holding several Errorlog files. The function will implicitly traverse the files in the pipeline through the Process block of the function.
Calling a function with one Errorlog file can be like this
Get-FlushCache -ErrorLogFile 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1'
while calling the function on all Errorlog files in a folder can be like this
Get-ChildItem -Path 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG*' |
Get-FlushCache


A PowerShell advanced function has this general structure
function Get-FlushCache {
<#
.DESCRIPTION
Find FlushCache entries in SQL Server Errorlog and get metrics from the entry.
.PARAMETER ErrorLogFile
Name of SQL Server Errorlog file with full path.
.INPUTS
SQL Server Database Engine Errorlog file by full path.
.OUTPUTS
(none)
.RETURNVALUE
Collection of FlushCache events
.EXAMPLE
#>
[CmdletBinding()]
param(
  [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
  [System.IO.FileInfo]$ErrorLogFile
)
Begin {
  $FlushCaches = @() # Array to hold all FlushCache objects that are created on the matches
  $TotalWatch = [System.Diagnostics.Stopwatch]::StartNew()
}

Process {
  "Check file '$($ErrorLogFile.Name)'..." | Write-Verbose
  if ($ErrorLogFile.Exists -eq $false) {
    "File '$($ErrorLogFile.FullName)' does not exist" | Write-Error -Category ObjectNotFound
    return
  }

  "Matching pattern (File size = $($ErrorLogFile.Length) B)..." | Write-Verbose
  [String]$ErrorLog = Get-Content -LiteralPath $ErrorLogFile.FullName -Raw
  [String]$regex = ...

  $MatchWatch = [System.Diagnostics.Stopwatch]::StartNew()
  [Microsoft.PowerShell.Commands.MatchInfo]$FlushCacheMatches = $ErrorLog | Select-String -Pattern $regex -AllMatches
  $MatchWatch.Stop()
  if ($FlushCacheMatches -eq $null) {
    "No matches on pattern in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
  }
  else {
    "$($FlushCacheMatches.Matches.Count) matches found in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
    foreach($Match in $FlushCacheMatches.Matches) {
      $TimeStamp = [System.DateTime]$Match.Groups['timestamp'].Value
      $BufCount = [Int]$Match.Groups['bufs'].Value
      $WriteCount = [Int]$Match.Groups['writes'].Value
      $WriteTime = [Int]$Match.Groups['writetime'].Value
      $BufAvoidCount = [Int]$Match.Groups['bufsavoided'].Value
      $DbId = [String]$Match.Groups['db'].Value
      $AvgWritesPerSecond = [Double]$Match.Groups['avgwrites'].Value
      $AvgThroughput = [Double]$Match.Groups['avgthroughput'].Value
      $IoSaturation = [Int]$Match.Groups['iosaturation'].Value
      $ContextSwitchCount = [Int]$Match.Groups['contxtsw'].Value
      $LastTarget = [Int]$Match.Groups['lasttarget'].Value
      $AvgWriteLatency = [Int]$Match.Groups['avgwritelat'].Value

      $FlushCacheProperties = @{
        TimeStamp = $TimeStamp
        BufCount = $BufCount
        WriteCount = $WriteCount
        WriteTimeMs = $WriteTime
        BufAvoidCount = $BufAvoidCount
        DbId = $DbId
        AvgWritesPerSecond = $AvgWritesPerSecond
        AvgThroughput = $AvgThroughput
        IoSaturation = $IoSaturation
        ContextSwitchCount = $ContextSwitchCount
        LastTarget = $LastTarget
        AvgWriteLatency = $AvgWriteLatency
      }
      $FlushCache = New-Object -TypeName PSObject -Property $FlushCacheProperties
      $FlushCache.PSObject.TypeNames.Insert(0, 'SqlAdmin.FlushCache')
      $FlushCaches += $FlushCache
    }
  }
}

End {
  $TotalWatch.Stop()
  "$($FlushCaches.Count) matches in extract collection (Total watch = $($TotalWatch.Elapsed.ToString()))." | Write-Verbose
  $FlushCaches
}
} # Get-FlushCache()

The content of the string $regex is the regular expression pattern. Each pattern defines different output groups, so the function must build the custom object with matching properties. Some examples of regular expression patterns are shown below.

How to match an event

When the content of an Errorlog file is read, the next step is to find the event(s) in the file. In PowerShell this can be done rather directly with the cmdlet Select-String, using the -Pattern parameter with a regular expression as value. The individual parts of the match can be isolated by defining matching groups in the regular expression.
Select-String returns a MatchInfo object that contains a collection of Matches. This collection can be traversed with a ForEach statement on a variable that will be a Match object. The value of a match group element is in the Value property of each Group object in the Groups collection.
I put the group values of a match in a custom object (PSObject), and the objects are put in a collection. This collection is the basic output of the script.
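As a minimal sketch, the single-line error entries shown later in this post (Error: 18456) could be matched like this; the pattern is only an illustration:
[String]$ErrorLog = Get-Content -LiteralPath 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1' -Raw
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).+?' `
+ 'Error: (?<error>\d+), Severity: (?<severity>\d+), State: (?<state>\d+)\.'
$ErrorMatches = $ErrorLog | Select-String -Pattern $regex -AllMatches
foreach ($Match in $ErrorMatches.Matches) {
  '{0} : Error {1}, State {2}' -f $Match.Groups['timestamp'].Value, $Match.Groups['error'].Value, $Match.Groups['state'].Value
}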

FlushCache

This event is logged in the SQL Server Errorlog like this example:
2013-01-13 12:13:14.37 spid16s     FlushCache: cleaned up 70601 bufs with 2731 writes in 130454 ms (avoided 13308 new dirty bufs) for db 42:0
2013-01-13 12:13:14.37 spid16s                 average writes per second:  20.93 writes/sec
            average throughput:   4.22 MB/sec, I/O saturation: 5536, context switches 12391
2013-01-13 12:13:14.37 spid16s                 last target outstanding: 498, avgWriteLatency 39

This is an example of an event that is logged across several lines in the Errorlog file.

The challenge in this case is immediately two-fold:
  1. Multiple lines in SQL Server Errorlog in one event entry.
  2. Multiple events in one SQL Server Errorlog.
The regular expression pattern used in this case is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'FlushCache: cleaned up (?<bufs>\d+) bufs ' `
+ 'with (?<writes>\d+) writes ' `
+ 'in (?<writetime>\d+) ms ' `
+ '\(avoided (?<bufsavoided>\d+) new dirty bufs\) ' `
+ 'for db (?<db>\d+:\d+)' `
+ '\s+.+average writes per second: (?<avgwrites>\d+\.\d+) writes/sec' `
+ '\s+.+average throughput: (?<avgthroughput>\d+.\d+) MB/sec, ' `
+ 'I/O saturation: (?<iosaturation>\d+), ' `
+ 'context switches (?<contxtsw>\d+)' `
+ '\s+.+last target outstanding: (?<lasttarget>\d+), ' `
+ 'avgWriteLatency (?<avgwritelat>\d+)'

The pattern string is in the script code spread over several lines with one line per regex group. This gives me the opportunity to describe each group's pattern.

I/O requests taking longer than 15 seconds

2013-01-13 12:13:14.39 spid17s     SQL Server has encountered 1 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database id 2. The OS file handle is 0x0000000000001178. The offset of the latest long I/O is: 0x000002dc510000
This issue is described in detail in the Microsoft Support article „Diagnostics in SQL Server help detect stalled and stuck I/O operations“.
The regular expression pattern to find these events is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'SQL Server has encountered (?<occurrence>\d+) occurrence\(s\) of I/O requests taking longer than 15 seconds to complete ' `
+ 'on file \[(?<file>.+)\] ' `
+ 'in database id (?<db>\d+). ' `
+ 'The OS file handle is (?<handle>0x[0-9a-f]+). ' `
+ 'The offset of the latest long I/O is: (?<offset>0x[0-9a-f]+)'


Login failed

This is something I want to look more into later. I mention the event now in a rather incomplete form as a teaser - to myself.
2013-01-13 12:13:14.64 Logon Error: 18456, Severity: 14, State: 38.
2013-01-13 12:13:14.16 Logon Login failed for user 'SQLAdmin\Albert'. Reason: Failed to open the explicitly specified database 'Adventureworks'. [CLIENT: 192.168.0.42]


2013-01-13 12:13:14.24 Logon Error: 18456, Severity: 14, State: 40.
2013-01-13 12:13:14.24 Logon Login failed for user 'Bobby'. Reason: Failed to open the database 'Adventureworks' specified in the login properties. [CLIENT: 192.168.0.66]


This error is described in the blog entry „Understanding "login failed" (Error 18456) error messages in SQL Server 2005“.

Anonymous login

When using Windows Authentication through a double hop, like a linked server or SharePoint, and Kerberos delegation is not implemented, it will fail with an entry in the SQL Server Errorlog like:
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

This situation is very well described in SQL Server Protocols Blog: "SQL Linked Server Query failed with "Login failed for user...""

Memory page size - Large Pages

This is not an error, but an Errorlog entry on memory configuration.
By default Large Pages are not set, and this is logged in the Errorlog when SQL Server starts:
2013-01-13 12:13:14.15 Server Using conventional memory in the memory manager.
When Large Pages are enabled and SQL Server is restarted, the Errorlog has an entry like:
???
The regular expression pattern to detect Large Pages is like
[String]$regex = 'TBD'

Discussion

The PowerShell documentation in the TechNet Library is not up to date. Some of it is even wrong, like the documentation for Get-Content. Use the Get-Help cmdlet instead.

A more general function that takes regular expression pattern as parameter value could be nice. One thing I have to figure out is how to handle the groups in a regular expression pattern.

As the PowerShell advanced functions can take a folder content as input by the pipeline, it might speed up handling several files if the pattern matching is done in parallel. This might require another structure of the function with some thread handling.

This post will be updated when I have news or better ways to parse SQL Server Errorlog.

Reference

Happy SysAdmin: „Reading large text files with Powershell“

2015-11-05

SQL Server Native Client WMI name

I am developing a SQL Server deployment package. And again I do not hit the nail with the first stroke.
This makes me install - and uninstall - SQL Server components several times. Rather quickly I get tired of clicking through Uninstall in Windows Control Panel. This can usually be fixed with a PowerShell script, but uninstalling SQL Server Native Client (SNAC) gave me some trouble.

To get the metadata on the installed programs through CIM I created a variable to work on
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client'"
But the result in $CimProduct was empty.

A general query showed me that SNAC is installed with the name that I filtered on
Get-CimInstance -ClassName Win32_Product -CimSession $CimSession
gave a long list which I have narrowed down here
Name             Caption                                              Vendor                                               Version
----             -------                                              ------                                               -------
...
Microsoft SQL... Microsoft SQL Server 2012 Native Client              Microsoft Corporation                                11.0.2100.60
...

Then I tried a more general filter
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE '%Native Client%'"
"Name = '$($CimProduct.Name)'."

with success
Name = 'Microsoft SQL Server 2012 Native Client '.
Immediately it looked like the text that I filtered on in the beginning.
But notice the trailing space!
If I some day meet the Product Manager we have something to talk about ;-)

With the new knowledge I got the search right
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client '"

Now I was able to uninstall SNAC in one line
$Uninstall = Invoke-CimMethod -Query "SELECT * FROM Win32_Product WHERE Name = 'Microsoft SQL Server 2012 Native Client '" -MethodName Uninstall
$Uninstall.ReturnValue

And with success
0

2015-10-22

PowerShell script configuration file

I have some PowerShell scripts that I execute in various landscapes that are not connected. To control different values for the same configuration parameters I put them in a configuration file. Then I can create different configuration files for the landscapes, and have the same script file for all landscapes. This makes it easier to manage and maintain.
I try to keep the structure of the configuration file as simple as possible, and then add the "smart" features to the general script file.
There are several solutions to the structure of a configuration file. Actually I think that there is at least one solution per product.

After looking around and trying some ideas I went back to PowerShell and looked at how it can get values from files.
There are two ways to export data from PowerShell to a file:
  1. Export-Csv
  2. Export-Clixml
The first cmdlet creates a simple text file, whereas the second cmdlet generates a more complex XML file. I prefer the simple solution, which creates a file like this:
#TYPE SQLAdmin
"SecondValue","FirstValue"
"B2","A1"


The cmdlet Import-Csv is used to read the data from the file into a custom object.
$SQLAdmin = Import-Csv -LiteralPath 'D:\_temp\SQLAdmin.csv'
When the custom object is created it can be expanded with new properties:
$SQLAdmin | Add-Member -MemberType NoteProperty -Name 3rdValue -Value "$($SQLAdmin.FirstValue) : $($SQLAdmin.SecondValue)"
When the data is in the script, it is rather easy to work with them like any other PowerShell object.

2015-09-22

Move table to filegroup


Why?

Dealing with a database that has been around for some time may require some basic reconfiguration. In this case I had one (or more...) database that had some performance issues. One major cause was that the database only had one datafile (mdf), so the I/O was running serially. I could add some extra datafiles, but to benefit from more datafiles they should be of equal size. This would require either a massive extra amount of storage or a trade-off.
I went for a solution with one new filegroup that has several datafiles for parallel I/O. To use the new filegroup I had to move data to the new filegroup, usually by moving clustered indexes. The data to move was chosen together with the developers and expert users, based on what was used most for central activities.

Prepare storage

Due to the IT infrastructure I work in, I added some extra drives to the database server. On each drive I created a folder in the root to hold the datafiles.
To make the folders useful to the database service, the service account was made owner of the folders. As we use Active Directory Managed Service Accounts (MSA) as service accounts for the SQL Server services, the service account name must be entered with a trailing dollar-sign.
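A sketch of how the ownership could be set from the command line with icacls; the folder and the MSA name are hypothetical examples:
# Hypothetical folder and service account name; note the trailing dollar-sign on the MSA.
icacls 'E:\MSSQL.Data' /setowner 'SQLADMIN\SqlDbEngine$'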

Get index info

This statement I used to get some basic info about the indexes on a given table, so I could have an idea on what I was about to touch.
SELECT
  [tables].[name] AS [table_name],
  [schemas].[name] AS [schema_name],
  [indexes].[name] AS [index_name],
  [idx_phys_stats].[index_type_desc],
  [idx_phys_stats].[page_count]*8 AS [index_size_in_kb],
  ([idx_phys_stats].[page_count]*8.0)/(1024*1024) AS [index_size_in_mb]
FROM [master].[sys].[dm_db_index_physical_stats](DB_ID('BoPind'),OBJECT_ID('STOCKFLOW'),NULL,NULL,'LIMITED') AS [idx_phys_stats] -- filter by parameter values
  INNER JOIN [sys].[tables] ON [idx_phys_stats].[object_id] = [tables].[object_id]
  INNER JOIN [sys].[schemas] ON [tables].[schema_id] = [schemas].[schema_id]
  INNER JOIN [sys].[indexes] ON [idx_phys_stats].[index_id] = [indexes].[index_id] AND [idx_phys_stats].[object_id] = [indexes].[object_id];

The statement is inspired by the answer by Mikael Eriksson to the question „T-SQL query to get Index fragmentation information“ on stackoverflow (StackExchange).

Create Filegroup

ALTER DATABASE [my_database] ADD FILEGROUP [standard];
ALTER DATABASE [my_database] ADD FILE (
NAME = N'standard_0',
FILENAME = N'D:\MSSQL.Data\my_database.standard_0.mdf',
SIZE = 42GB,
FILEGROWTH = 1GB ) TO FILEGROUP [standard];
ALTER DATABASE [my_database] ADD FILE (
NAME = N'standard_1',
FILENAME = N'D:\MSSQL.Data\my_database.standard_1.ndf',
SIZE = 42GB,
FILEGROWTH = 1GB ) TO FILEGROUP [standard];

etcetera...

Setting the filegroup as default
ALTER DATABASE [my_database] MODIFY FILEGROUP [standard] DEFAULT;

The existence of the filegroup can be verified immediately with
SELECT * FROM [my_database].[sys].[filegroups];

Move table

The data are in a clustered index – if such is defined on the table. The article „Move an Existing Index to a Different Filegroup“ has a central remark about using the SQL Server Management Studio GUI to move a table to another filegroup:
You cannot move indexes created using a UNIQUE or PRIMARY KEY constraint using Management Studio.
To move these indexes use the CREATE INDEX statement with the (DROP_EXISTING=ON) option in Transact-SQL.
This makes sense as uniqueness is a central property of a primary key. Remember that a primary key does not require a clustered index and vice versa.
Move clustered index
Generating an initial script using Management Studio is a quick way to get a good start.
USE [my_database];
CREATE CLUSTERED INDEX [IX_1st_table] ON [dbo].[1st_table] (
  [1st_table_id] ASC,
  [1st_table_col2] ASC,
  [1st_table_col3] ASC
)WITH (DROP_EXISTING = ON, ONLINE = OFF) ON [standard];

The ONLINE = OFF part is to be sure not to collide with KB2969896 on installations not fully patched.
Move non-clustered index
Generate initial script using Management Studio...
USE [my_database];
CREATE NONCLUSTERED INDEX [ix_2nd_table] ON [dbo].[2nd_table] (
  [2nd_table_code] ASC
)
INCLUDE (
  [2nd_table_col2],
  [2nd_table_col3]
) WITH (DROP_EXISTING = ON, ONLINE = OFF) ON [standard];

This is also an example of moving an index with included columns, but that does not make a significant difference.
Move unique (clustered) index
In this case I didn't get much help from Management Studio script generation. Most of the script is done by hand.
USE [my_database];
CREATE UNIQUE CLUSTERED INDEX [PK_3rd_table] ON [dbo].[3rd_table]([3rd_table_id])
WITH (DROP_EXISTING =  ON )
ON [standard];

Move Primary Key index
Also in this case I didn't get much help from Management Studio script generation.
Please notice that the activity is by several statements. To ensure the integrity of the data the statements are put in a transaction.
BEGIN TRANSACTION [move_pk_idx] WITH MARK 'Change 42';
USE [my_database];
ALTER TABLE [dbo].[4th_table] DROP CONSTRAINT [PK_4th_table];
ALTER TABLE [dbo].[4th_table] ADD CONSTRAINT [PK_4th_table] PRIMARY KEY NONCLUSTERED ([4th_table_id] ASC)
WITH (ONLINE = OFF) ON [standard];
COMMIT TRANSACTION [move_pk_idx];

This solution can be used for other constraints like foreign keys.
I was inspired by this discussion on StackExchange: „Move Primary Key to Filegroup (SQL Server 2012)“.

Discussion

When moving large amounts of data I wrapped the T-SQL statements in a timing structure as described in „Job history template using SQL Server Agent tokens“.

2015-09-02

Code Region in Management Studio

In PowerShell ISE I am used to putting the code in collapsible/expandable blocks with #region, but that is not directly possible in SQL Server Management Studio.

But this entry in stackoverflow gives a workaround: "sql server #region".
The precise details on how to configure Management Studio is given by Buck Woody in the elder blog post "Code Collapse and Expand Feature in SQL Server Management Studio 2008".
Actually Buck gives the whole workaround, but my search gave me stackoverflow first...

A short example:
BEGIN -- Region Title
--several
--lines
--of
--T-SQL
--and
--comments
END -- Region Title

If you put a space or two at the end of the region title, the region collapses in a visually pleasant way.
I repeat the region title at the end of the block. Sometimes a block spans more than one screenful, and this way I am reminded about the context of the END line.

And it works fine in Management Studio 2014, also with SQL Prompt installed.

2015-08-18

CheckDB error because snapshot file already exists

An ordinary Maintenance Plan with an ordinary Check Database Integrity task failed with this error in the Maintenance Plan log:
A database snapshot cannot be created because it failed to start.
Cannot create file 'D:\\Data\\MyDatabase_Data.mdf_MSSQL_DBCC75' because it already exists. Change the file path or the file name, and retry the operation.

The error text indicates that the file exists, which is true.
The text also indicates that the file path or name can be changed. This is not true in this case, as one can't define the physical database snapshot file when running a Check Database Integrity task; it can't be defined when running the DBCC CHECKDB statement either.
In the SQL Server Errorlog the failure is also logged:
2015-08-16 03:11:19.77 spid42      Error: 5170, Severity: 16, State: 1.
2015-08-16 03:11:19.77 spid42      Cannot create file 'D:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75' because it already exists. Change the file path or the file name, and retry the operation.

As the information is similar, the failure is confirmed down to the file details.

In this case the server had a hard shutdown (oops!!!) which caused the CheckDB failure.
To check whether a database snapshot on the file is still registered, I used this statement:
SELECT *
FROM [master].[sys].[databases]
WHERE
  [databases].[name] = 'MyDatabase' AND
  [databases].[source_database_id] IS NOT NULL;

The result is empty which is confirmed by the message:
(0 row(s) affected)

The snapshot file can be inspected on the metadata through PowerShell using the cmdlet Get-Item:
Get-Item -LiteralPath 'E:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75'

As I found no other reference on this error I preserved the database snapshot file by moving it to another location.
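A sketch of the move with PowerShell; the destination folder is just an example:
Move-Item -LiteralPath 'E:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75' -Destination 'H:\Backup\'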

Testing the solution is quite simple as I only have to run a CheckDB on the database:
DBCC CHECKDB ('MyDatabase');
As planned the run is good:
CHECKDB found 0 allocation errors and 0 consistency errors in database 'MyDatabase'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

2015-08-14

Attach database

I had to get some data from an old database whose server has been recycled. But at that time I was present enough to get a copy of the data files of the database (mdf & ndf).
The quick solution is to attach the database on my workstation where I have a SQL Server Developer Edition running. This statement should do the trick:
CREATE DATABASE [Repository_old] ON PRIMARY
  (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository.mdf'),
  (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository_2.ndf')
FOR ATTACH_REBUILD_LOG


But when I tried to attach the database I got an error on file access.
Msg 5120, Level 16, State 101, Line nn
Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository.mdf". Operating system error 5: "5(Access denied.)".

A quick look at the other SQL Server files in the same folder indicated that the service account should be owner of the files. The error message is on the first file in the statement, but this goes for all files of the database.
This can be done in Windows Explorer through the properties of the file (right-click > Properties). On the Security tab you click the Advanced button. Then you get another window named Advanced Security Settings for ... In this window's Owner tab you click the Edit button. Now you might have to log on, as you are required to be a local administrator.
Click the button Other users or groups to enter the service account. This is not straightforward if you are using a virtual service account. First you have to pick the computer as location. Then you can limit the object type to Builtin service accounts.
Finally, entering the name of the service account is a little tricky - in my case I entered "nt service\mssql$mssql2014". Notice the space after "nt"!
I am working on the named instance MSSQL2014 which gives the name of the account after the backslash.
Before clicking the OK button I recommend that you verify the account name by clicking Check names. If the name is changed to capitals, the "nt service\" part is removed and the name is underlined, it is OK.
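The change of owner could also be scripted; a sketch with icacls, using the file and service account names from above:
icacls 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository.mdf' /setowner 'NT SERVICE\MSSQL$MSSQL2014'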

The service account is (default) a Virtual Service Account. This is a local edition of a Managed Service Account. To get the name of the service account you can look in the Server Facet of the instance with Management Studio.

But still I got error on file access.
Msg 5120, Level 16, State 101, Line nn
Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository.mdf". Operating system error 5: "5(Access denied.)".

I usually work on installation where DBAs are isolated from other admins. In this case the database files are moved/copied around using one account and the attach statement is executed with another account. When I then added the user of SQL Server Management Studio (SSMS) who executes the statement to the files with Full Control permissions everything went smooth.
I first tried to limit the rights for the SSMS user to Read permissions, but that didn't work as I got this error
Msg 1813, Level 16, State 2, Line nn
Could not open new database 'Repository'. CREATE DATABASE is aborted.


In this case I did not have the transaction log file (ldf), but I really didn't need it. This is why the statement is with FOR ATTACH_REBUILD_LOG. There are some issues you have to consider like a clean shutdown of the database. Please take a look at the documentation for CREATE DATABASE.
The original transaction log file could not be found by the instance, as expected. This generates a failure message like
File activation failure. The physical file name "H:\MSSQL\Data\Repository_log.ldf" may be incorrect.
But right after, the new transaction log file is created automatically
New log file 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQL2014\MSSQL\DATA\Repository_old_log.ldf' was created.

2015-01-09

Leap Second and Day

Leap Second

A positive leap second on 2015-06-30 has been announced by the International Earth Rotation & Reference Systems Service:
hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat
SQL Server gets the time from Windows, which knows how to handle a leap second (KB909614).
The leap second is „smeared“ so that the timestamps are not affected.
This means that the leap second cannot be registered.

In T-SQL it will look like this
(statement)
DECLARE @before_leap_second DATETIME = '2015-06-30 23:59:59'
SELECT @before_leap_second AS [normal_time];

(result)
normal_time
-----------------------
2015-06-30 23:59:59.000


(statement)
DECLARE @leap_second DATETIME = '2015-06-30 23:59:60';
SELECT @leap_second AS [leap_second];

(result)
Msg 242, Level 16, State 3, Line 1
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

leap_second
-----------------------
NULL


Leap Day

A leap day, on the other hand, is precisely defined. This means that it can be represented in a data type, and calculations can be done correctly. The leap day is handled nicely by T-SQL like this
(statement)
DECLARE @leap_day DATETIME = '2012-02-29';
SELECT @leap_day AS [leap_day];

(result)
leap_day
-----------------------
2012-02-29 00:00:00.000


Also a more tricky leap day is handled correctly by T-SQL like this
(statement)
DECLARE @leap_day DATETIME = '2000-02-29';
SELECT @leap_day AS [leap_day];

(result)
leap_day
-----------------------
2000-02-29 00:00:00.000


An attempt to use an invalid leap day will give an error like the one above
(statement)
DECLARE @not_leap_day DATETIME = '2013-02-29'
SELECT @not_leap_day AS [not_leap_day];

(result)
Msg 242, Level 16, State 3, Line 11
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

not_leap_day
-----------------------
NULL


History

2015-01-09 Blog entry about Leap Second.
2015-02-10 Blog entry added section about Leap Day.