SQL Server Native Client WMI name

I am developing a SQL Server deployment package. And once again I did not hit the nail on the head with the first stroke.
This makes me install - and uninstall - SQL Server components several times. Rather quickly I got tired of clicking through Uninstall in the Windows Control Panel. This can usually be fixed with a PowerShell script, but uninstalling SQL Server Native Client (SNAC) gave me some trouble.

To get metadata on the installed programs through CIM, I created a variable to work with:
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client'"
But the result in $CimProduct was empty.

A general query showed me that SNAC is installed with the name that I filtered on:
Get-CimInstance -ClassName Win32_Product -CimSession $CimSession
gave a long list, which I have narrowed down here:
Name             Caption                                              Vendor                                               Version
----             -------                                              ------                                               -------
Microsoft SQL... Microsoft SQL Server 2012 Native Client              Microsoft Corporation                                11.0.2100.60

Then I tried a more general filter
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE '%Native Client%'"
"Name = '$($CimProduct.Name)'."

with success
Name = 'Microsoft SQL Server 2012 Native Client '.
At first glance it looks just like the name I filtered on in the beginning.
But notice the trailing space!
If I some day meet the Product Manager we have something to talk about ;-)

With this new knowledge I got the search right:
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client '"

Now I was able to uninstall SNAC in one line
$Uninstall = Invoke-CimMethod -Query "SELECT * FROM Win32_Product WHERE Name = 'Microsoft SQL Server 2012 Native Client '" -MethodName Uninstall

And with success
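The object returned by Invoke-CimMethod holds the Windows Installer result in the property ReturnValue, where 0 means success. A minimal check could look like this:
if ($Uninstall.ReturnValue -eq 0) {
  'SNAC was uninstalled.'
}
else {
  "Uninstall failed with return value $($Uninstall.ReturnValue)."
}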


PowerShell script configuration file

I have some PowerShell scripts that I execute in various landscapes that are not connected. To control different values for the same configuration parameters I put them in a configuration file. Then I can create different configuration files for the landscapes, and have the same script file for all landscapes. This makes it easier to manage and maintain.
I try to keep the structure of the configuration file as simple as possible, and then add the "smart" features to the general script file.
There are several solutions to the structure of a configuration file. Actually I think that there is at least one solution per product.

After looking around and trying some ideas I went back to PowerShell and looked at how it can get values from files.
There are two ways to export data from PowerShell to a file:
  1. Export-Csv
  2. Export-Clixml
The first cmdlet creates a simple text file, whereas the second generates a more complex XML file. I prefer the simple solution, which creates a file like the one sketched below.
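The original file is not shown here, but assuming two hypothetical parameters FirstValue and SecondValue (the names used in the Add-Member example below), the file could be created and would look something like this:
# Create the configuration file; the parameter names and values are only examples.
[PSCustomObject]@{ FirstValue = 'Alpha'; SecondValue = 'Beta' } |
  Export-Csv -LiteralPath 'D:\_temp\SQLAdmin.csv' -NoTypeInformation

# Resulting file content:
# "FirstValue","SecondValue"
# "Alpha","Beta"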

The cmdlet Import-Csv is used to read the data from the file into a custom object.
$SQLAdmin = Import-Csv -LiteralPath 'D:\_temp\SQLAdmin.csv'
When the custom object is created it can be expanded with new properties:
$SQLAdmin | Add-Member -MemberType NoteProperty -Name 3rdValue -Value "$($SQLAdmin.FirstValue) : $($SQLAdmin.SecondValue)"
Once the data is in the script, it is rather easy to work with it like any other PowerShell object.
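For example, the imported values and the added property can be read like ordinary properties; quoting the property name is the safe way when it starts with a digit:
$SQLAdmin.FirstValue
$SQLAdmin.'3rdValue'
With the example values above this returns 'Alpha' and 'Alpha : Beta'.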


Move table to filegroup


Dealing with a database that has been around for some time may require some basic reconfiguration. In this case I had one (or more...) database with some performance issues. One major cause was that the database only had one datafile (mdf), so the I/O was running serially. I could add some extra datafiles, but to benefit from more datafiles they should be of equal size. This would require either a massive amount of extra storage or a trade-off.
I went for a solution with one new filegroup that had several datafiles for parallel I/O. To use the new filegroup I had to move data to it, usually by moving clustered indexes. The data to move was chosen together with the developers and expert users, based on what was used most for central activities.

Prepare storage

Due to the IT infrastructure I work in, I added some extra drives to the database server. On each drive I created a folder in the root to hold the datafiles.
To make the folders useful to the database service, the service account was made owner of the folders. As we use Active Directory Managed Service Accounts (MSA) as service accounts for the SQL Server services, the service account name must be entered with a trailing dollar sign.
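A sketch of granting the rights with PowerShell; the domain CONTOSO and the account name SqlDbEngine$ are made-up examples, and the folder is the one used for the datafiles below:
# Grant the Managed Service Account full control on the datafile folder.
# Note the trailing dollar sign on the account name.
$Acl = Get-Acl -LiteralPath 'D:\MSSQL.Data'
$Rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
  'CONTOSO\SqlDbEngine$', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
$Acl.AddAccessRule($Rule)
Set-Acl -LiteralPath 'D:\MSSQL.Data' -AclObject $Acl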

Get index info

I used this statement to get some basic info about the indexes on a given table, so I could get an idea of what I was about to touch:
SELECT
  [tables].[name] AS [table_name],
  [schemas].[name] AS [schema_name],
  [indexes].[name] AS [index_name],
  [idx_phys_stats].[page_count]*8 AS [index_size_in_kb],
  ([idx_phys_stats].[page_count]*8.0)/1024 AS [index_size_in_mb]
FROM [master].[sys].[dm_db_index_physical_stats](DB_ID('BoPind'),OBJECT_ID('STOCKFLOW'),NULL,NULL,'LIMITED') AS [idx_phys_stats] -- filter by parameter values
  INNER JOIN [sys].[tables] ON [idx_phys_stats].[object_id] = [tables].[object_id]
  INNER JOIN [sys].[schemas] ON [tables].[schema_id] = [schemas].[schema_id]
  INNER JOIN [sys].[indexes] ON [idx_phys_stats].[index_id] = [indexes].[index_id] AND [idx_phys_stats].[object_id] = [indexes].[object_id];

The statement is inspired by the answer by Mikael Eriksson to the question „T-SQL query to get Index fragmentation information“ on stackoverflow (StackExchange).

Create Filegroup

ALTER DATABASE [my_database] ADD FILEGROUP [standard];

ALTER DATABASE [my_database] ADD FILE (
  NAME = N'standard_0',
  FILENAME = N'D:\MSSQL.Data\my_database.standard_0.mdf',
  SIZE = 42GB
), (
  NAME = N'standard_1',
  FILENAME = N'D:\MSSQL.Data\my_database.standard_1.ndf',
  SIZE = 42GB
) TO FILEGROUP [standard];


Setting the filegroup as default

The existence of the filegroup can be verified immediately with:
SELECT * FROM [my_database].[sys].[filegroups];
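The new filegroup can then be made the default, so that new tables and indexes are created on it unless another filegroup is specified explicitly:
ALTER DATABASE [my_database] MODIFY FILEGROUP [standard] DEFAULT;
The change shows up in the column is_default of sys.filegroups.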

Move table

The data is in the clustered index – if one is defined on the table. The article „Move an Existing Index to a Different Filegroup“ has a central remark about using the SQL Server Management Studio GUI to move a table to another filegroup:
You cannot move indexes created using a UNIQUE or PRIMARY KEY constraint using Management Studio.
To move these indexes use the CREATE INDEX statement with the (DROP_EXISTING=ON) option in Transact-SQL.
This makes sense, as uniqueness is a central property of a primary key. Remember that a primary key does not require a clustered index, and vice versa.
Move clustered index
Generating an initial script using Management Studio is a quick way to get a good start.
USE [my_database];
CREATE CLUSTERED INDEX [IX_1st_table] ON [dbo].[1st_table] (
  [1st_table_id] ASC,
  [1st_table_col2] ASC,
  [1st_table_col3] ASC
) WITH (DROP_EXISTING = ON, ONLINE = OFF) ON [standard];

The ONLINE = OFF part is there to be sure not to collide with KB2969896 on installations that are not fully patched.
Move non-clustered index
Generate an initial script using Management Studio...
USE [my_database];
CREATE NONCLUSTERED INDEX [ix_2nd_table] ON [dbo].[2nd_table] (
  [2nd_table_code] ASC
)
INCLUDE ([2nd_table_col2]) -- the included column name is only an illustration
WITH (DROP_EXISTING = ON, ONLINE = OFF) ON [standard];

This is also an example of moving an index with included columns, but that does not make a significant difference.
Move unique (clustered) index
In this case I didn't get much help from Management Studio script generation. Most of the script is done by hand.
USE [my_database];
CREATE UNIQUE CLUSTERED INDEX [PK_3rd_table] ON [dbo].[3rd_table] ([3rd_table_id])
WITH (DROP_EXISTING = ON, ONLINE = OFF)
ON [standard];

Move Primary Key index
Also in this case I didn't get much help from Management Studio script generation.
Please notice that the activity requires several statements. To ensure the integrity of the data, the statements are put in a transaction.
USE [my_database];
BEGIN TRANSACTION [move_pk_idx] WITH MARK 'Change 42';
ALTER TABLE [dbo].[4th_table] DROP CONSTRAINT [PK_4th_table];
ALTER TABLE [dbo].[4th_table] ADD CONSTRAINT [PK_4th_table] PRIMARY KEY NONCLUSTERED ([4th_table_id] ASC)
  WITH (ONLINE = OFF) ON [standard];
COMMIT TRANSACTION [move_pk_idx];

This solution can be used for other constraints, like foreign keys.
I was inspired by this discussion on StackExchange: „Move Primary Key to Filegroup (SQL Server 2012)“.


When moving large amounts of data I wrapped the T-SQL statements in a timing structure as described in „Job history template using SQL Server Agent tokens“.


Code Region in Management Studio

In PowerShell ISE I am used to putting the code in collapsible/expandable blocks with #region, but that is not directly possible in SQL Server Management Studio.

But this entry on stackoverflow gives a workaround: "sql server #region".
The precise details on how to configure Management Studio are given by Buck Woody in the older blog post "Code Collapse and Expand Feature in SQL Server Management Studio 2008".
Actually Buck gives the whole workaround, but my search gave me stackoverflow first...

A short example:
BEGIN -- Region Title
  PRINT N'Statements in the region go here.';
END -- Region Title

If you put a space or two at the end of the region title, the region collapses in a visually pleasant way.
I repeat the region title at the end of the block. Sometimes a block is more than one screen long, and this way I am reminded of the context of the END line.

And it works fine in Management Studio 2014, also with SQL Prompt installed.


CheckDB error because snapshot file already exists

An ordinary Maintenance Plan with an ordinary Check Database Integrity task failed with this error in the Maintenance Plan log:
A database snapshot cannot be created because it failed to start.
Cannot create file 'D:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75' because it already exists. Change the file path or the file name, and retry the operation.

The error text indicates that the file exists, which is true.
The text also indicates that the file path or name can be changed. This is not true in this case, as one can't define the physical database snapshot when running a Check Database Integrity task; the task runs a plain DBCC CHECKDB statement, and the internal snapshot can't be defined there either.
In the SQL Server Errorlog the failure is also logged:
2015-08-16 03:11:19.77 spid42      Error: 5170, Severity: 16, State: 1.
2015-08-16 03:11:19.77 spid42      Cannot create file 'D:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75' because it already exists. Change the file path or the file name, and retry the operation.

As the information is the same, the failure is confirmed down to the file details.

In this case the server had a hard shutdown (oops!!!), which caused the CheckDB failure.
To confirm whether the database snapshot on the file is still active, I used this statement:
SELECT *
FROM [master].[sys].[databases]
WHERE
  [databases].[name] = 'MyDatabase' AND
  [databases].[source_database_id] IS NOT NULL;

The result is empty, which is confirmed by the message:
(0 row(s) affected)

The metadata of the snapshot file can be inspected through PowerShell using the cmdlet Get-Item:
Get-Item -LiteralPath 'D:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75'
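For example, the file's timestamps can tell when the orphaned snapshot file was left behind:
Get-Item -LiteralPath 'D:\Data\MyDatabase_Data.mdf_MSSQL_DBCC75' |
  Select-Object -Property FullName, CreationTime, LastWriteTime, Length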

As I found no other reference on this error I preserved the database snapshot file by moving it to another location.

Testing the solution is quite simple as I only have to run a CheckDB on the database:
DBCC CHECKDB ('MyDatabase');
As planned the run is good:
CHECKDB found 0 allocation errors and 0 consistency errors in database 'MyDatabase'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.