
2024-02-03

PowerShell Advanced Function Template

I have made this simple template for starting a new PowerShell script file with an advanced function. It helps me get the basics right from the beginning, and it supports my effort to write all scripts as advanced functions.

The code for the advanced function template is on SQLAdmin GitHub (link).

PowerShell script documentation

PowerShell defines a syntax for comment-based help in scripts (about_Comment_Based_Help). There are many more keywords than I have used above, but my personal PowerShell style is covered by these keywords. Feel free to create your own script template with your own personal selection of standard keywords.
Using only standard keywords ensures that your scripts can be presented in a central place with nice and useful documentation. And the presentation can be updated automatically without redundant documentation tasks.
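
As a minimal sketch, a help block using only standard keywords could look like this; the keyword selection here is my illustration, not the full template:

<#
.SYNOPSIS
Short description of what the function does.
.DESCRIPTION
Longer description, shown by Get-Help with the parameter -Full.
.PARAMETER MyName
Description of the parameter MyName.
.EXAMPLE
Get-Something -MyName 'SQLAdmin'
.NOTES
2024-02-03 (me) Function created.
#>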

Michael Sorens has posted a series of great posts on SimpleTalk about automatic documentation of a library of PowerShell scripts. The first post is called "How To Document Your PowerShell Library", where the basics are presented in a brilliant way.

Advanced function

If a script is evolving to become a central part of a process then you should elevate it by refactoring it to an advanced function.
The template above is for an advanced function, but not all possibilities are included. The remaining possibilities are covered by the documentation on advanced functions (about_Functions_Advanced) and on parameters for advanced functions (about_Functions_Advanced_Parameters).

A very important part of an advanced function is the CmdletBinding attribute (about_Functions_CmdletBindingAttribute), which makes a function work like a compiled CmdLet. In most cases you can just use it blank and it will make a function behave really nicely as a CmdLet, but if you take some time to read and experiment you can create some awesome advanced functions. If you are working with very large amounts of data you should take a look at the possibilities of CmdletBinding. More technically, CmdletBinding is a PowerShell accelerator for the .NET class CmdletBindingAttribute, which is part of the namespace System.Management.Automation.

One thing that you will benefit from rather quickly is the use of -Verbose or -Debug, which will be passed on through advanced functions. Then you get extended information not only from your own code but also from compiled CmdLets and .NET objects.
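
A quick sketch to see this for yourself; calling the function below with -Verbose shows verbose output from both my own code and the CmdLet called inside the function (the function name and path are just for illustration):

function Test-VerbosePassing {
    [CmdletBinding()]
    param ()
    'Hello from my own code.' | Write-Verbose
    # The verbose preference is passed on to CmdLets called inside the function:
    New-Item -Path $env:TEMP -Name 'VerboseTest' -ItemType Directory | Out-Null
}

Test-VerbosePassing -Verbose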

There are several articles that cover various facets of advanced functions well, but my personal reference is, besides the documentation, the book „PowerShell In Depth“ (e.g. chapter 32) by Don Jones (@concentrateddon), Richard Siddaway (@RSiddaway) and Jeffery Hicks (@JeffHicks). Look for the latest edition.

PowerShell script module

When you have refined the script, I think it is time to move the functionality to one or more PowerShell script modules. The general concepts of PowerShell modules are listed in the article „Windows PowerShell Module Concepts“ and described briefly in the MSDN article „Understanding a Windows PowerShell Module“. Right now I will focus on script modules, as this text is about PowerShell scripting. There is a short and nice introduction in the MSDN article „How to Write a PowerShell Script Module“.

In the script template above I have indicated how to dynamically import a script module placed in a given path. If the script module is not signed and the PowerShell Execution Policy is somehow restricted – which it should be! – you can't import a module placed outside the local computer, e.g. on a file share through a UNC path.
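
A minimal sketch of such a dynamic import under these restrictions; the path and module name are my examples, not from the template:

[string]$ModuleFile = 'C:\SQLAdmin\Modules\SqlAdminTools.psm1'
if (Test-Path -LiteralPath $ModuleFile) {
    Import-Module -Name $ModuleFile -ErrorAction Stop
}
else {
    throw "The module file '$ModuleFile' was not found."
}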

PowerShell script modules are defined and stored in psm1-files, but look a lot like plain script files. If you use the script template above for a script module, the module import and the final lines with the invocation should be removed. The invocation should be in the script file importing the script module.
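
A script module file could then be boiled down to something like this sketch, where the file and function names are my illustration:

# SqlAdminTools.psm1
function Get-SqlAdminInfo {
    [CmdletBinding()]
    param ()
    'Hello from the script module.' | Write-Output
}

Export-ModuleMember -Function Get-SqlAdminInfo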

There are several benefits of using PowerShell script modules. Many of them you will see when you start using them...
One thing that surprised me was the value of unit testing the script code.

Michael Sorens (again) has written the great SimpleTalk article „Further Down the Rabbit Hole: PowerShell Modules and Encapsulation“ on PowerShell modules and their surroundings, with some very valuable guidelines.

History

2017-03-15 : Blog post created on SQLAdmin blog.
2017-05-17 : Section about script documentation added. Inspired by comment from my old friend and colleague Jakob Bindslet.
2017-06-14 : Section about PowerShell script module added.
2017-07-04 : Section about PowerShell advanced function added.
2017-11-10 : Blog post text migrated from SQLAdmin blog and code to Github.
2024-02-03 : Blog post migrated (back) to SQLAdmin. Code migrated to SQLAdmin Github.

2024-01-23

PowerShell SMO by SqlServer module

Install SqlServer module:

Install-Module -Name sqlserver

Typical warning:

Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from 'PSGallery'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y


See installed module:

Get-Module -Name SqlServer -ListAvailable


Import SqlServer module in PowerShell session:

Import-Module -Name SqlServer


Create SMO Server object:

$smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server '(local)\SSDB2019'

See SMO Server object information:

$smoServer.Information
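
From here the SMO Server object can be used for further exploration, e.g. a quick look at the databases on the instance:

$smoServer.Databases | Select-Object -Property Name, RecoveryModel, Size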

2023-05-17

Pythagoras hypotenuse with PowerShell

Getting the hypotenuse of a right-angled triangle with the Pythagorean formula is possible in just one line of PowerShell, using the two methods Sqrt and Pow of the .NET class System.Math:

[System.Math]::Sqrt( [System.Math]::Pow(11.68, 2) + [System.Math]::Pow(7.24, 2) )

This gives the implicitly formatted result

13.7419067090415

Actually you do not have to spell out the namespace System, as it is implicit in PowerShell. But I like to spell it out as a kind of documentation.
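
The same line with the implicit namespace is a bit shorter:

[Math]::Sqrt( [Math]::Pow(11.68, 2) + [Math]::Pow(7.24, 2) )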

This is an example of using .NET features directly from PowerShell. I also think it shows that you have to know .NET as a PowerShell user.

The numbers are from the dimensions in mm of a RJ45 plug. So now you know the inner size of the hole to drill to make it fit.

2022-01-12

Using sp_executesql with PowerShell

The system extended stored procedure sp_executesql is recommended in general to execute dynamic SQL to protect from SQL injection attacks. Using sp_executesql in PowerShell is like using any other stored procedure in PowerShell. But I prefer to do it with ADO.NET instead of using cmdlets like Invoke-SqlCmd. The reason is that I have rather tight control over the connectivity and can make the call really fast.

The example below is a test of whether a database with the given name exists. I have chosen to implement the test with a COUNT() function call so that the result is always the same type (integer) whether or not the database exists. Also COUNT() is usually very efficient in the statement execution.

First I give the database name to look for:

$DbName = 'SSISDB'

Then I prepare the call and set up the connection to the SQL Server installation:

$SqlInstanceName = '(local)\MSSQL2019'
$CnnStr = "Server=$SqlInstanceName;Integrated Security=SSPI"
$SqlStatement = 'SELECT COUNT(*) AS [db_count] FROM [master].[sys].[databases] WHERE [name]=@dbname;'
$SqlParameters = '@dbname sysname'
$CnnSql = New-Object System.Data.SqlClient.SqlConnection
$CnnSql.ConnectionString = $CnnStr
$CnnSql.Open()

Then I call sp_executesql with the query and get the result. I close the connection as soon as possible to release the session resources:

$CmdSql = New-Object System.Data.SqlClient.SqlCommand
$CmdSql.Connection = $CnnSql
$CmdSql.CommandText = '[master].[sys].[sp_executesql]'
$CmdSql.CommandType = [System.Data.CommandType]::StoredProcedure
$CmdSql.Parameters.Add('@stmt', [System.Data.SqlDbType]::NVarChar, -1).Value = $SqlStatement
$CmdSql.Parameters.Add('@params', [System.Data.SqlDbType]::NVarChar, 500).Value = $SqlParameters
$CmdSql.Parameters.Add('@dbname', [System.Data.SqlDbType]::NVarChar, 128).Value = $DbName
$Rdr = $CmdSql.ExecuteReader()
while ($Rdr.Read()) { $DbCount = $Rdr['db_count'] }
$Rdr.Close()
$CnnSql.Close()

Afterwards I can work with the result from the query:

"Db Count: $DbCount"

This example can be used for calling any stored procedure.

2022-01-01

Top-ten posts 2021

Looking back at 2021, it was not only another year with the Covid-19 pandemic. For various reasons it was also a year with fewer posts than planned.

The top-ten from 2021 is very much like the top-ten list from 2020 (link).

  1. SqlBulkCopy with PowerShell (2017-07-30)
  2. ISO 8601 date formatting using PowerShell (2008-08-14)
  3. DBCC CHECKDB with PowerShell (2014-10-09)
  4. xp_instance_regwrite syntax (2013-09-10)
  5. T-SQL formatted duration (2017-04-29)
  6. Audit Log for Analysis Services (2014-01-08)
  7. Start shared HTA with PowerShell (2012-09-13)
  8. SQL Server Agent schedule owner (2010-02-08)
  9. PowerShell MessageBox (2014-03-15)
  10. Windows Storage Spaces (2019-03-27)
The date on each post is the creation date. Some posts I have updated since creation, and then the history is described in the post itself. The oldest post is more than ten years old, which I personally find quite satisfying.

Still I do not have many readers. That is fine with me, as the purpose of this blog is more a personal collection of refined notes. Looking at the number of views on each post there is a rather big difference: the #1 post has about 20 times more views than post #10.

2021-11-21

PowerShell Class

With PowerShell 5 we were given the possibility to create more "real" custom classes.

I have some notes, trials and personal examples in SQLAdmin Github (link).
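
As a tiny sketch of what such a class can look like, with a constructor and a method; the names are my own illustration:

class Server {
    [string]$Name
    [int]$CpuCount

    Server([string]$Name) {
        $this.Name = $Name
    }

    [string] Describe() {
        return "Server '$($this.Name)' with $($this.CpuCount) logical CPUs."
    }
}

$MyServer = [Server]::new('SANDBOX42')
$MyServer.CpuCount = 8
$MyServer.Describe()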

2021-11-20

Short name of file or directory

Some situations still require a short name, like an 8.3 name, of a file. This will often be needed not only for the name of the object but for the entire path. Back in the days of the Windows Shell we would use the command dir /x.

I have looked into getting the short name in a PowerShell script to make it available in a more complex script. 

Unfortunately .NET does not have such a method on a FileInfo or DirectoryInfo object. So I have used the Win32 function GetShortPathName().

The script is in SQLAdmin Github (link) with some technical notes in the associated markdown file.
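
As a minimal sketch of the technique, calling the Win32 function GetShortPathName() in kernel32.dll through Add-Type; the type and namespace names are my own:

$Signature = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern uint GetShortPathName(string lpszLongPath, System.Text.StringBuilder lpszShortPath, uint cchBuffer);
'@
$Win32 = Add-Type -MemberDefinition $Signature -Name 'ShortPath' -Namespace 'SqlAdmin' -PassThru
$Buffer = New-Object -TypeName System.Text.StringBuilder -ArgumentList 260
[void]$Win32::GetShortPathName('C:\Program Files', $Buffer, $Buffer.Capacity)
$Buffer.ToString()  # Typically 'C:\PROGRA~1'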

2020-01-03

High performance script timer

When you want to time something in a PowerShell script it is usually not high-frequency timing you are looking for. It could be how long it took to run the script. In a simple situation like this it is common to see a solution where a DateTime object is created at the beginning, another is created at the end, and finally the duration is calculated. Usually the DateTime objects are created with the Get-Date cmdlet.
But this is rather inefficient, as at least three objects are created: two DateTime objects for the measurement and one TimeSpan object for the calculation.

A general recommendation in books and articles is to use a Stopwatch object and read the elapsed time from there. This solution is far more efficient, as only one object is used for the measurement.

But...
There is a small penalty when asking for the elapsed time from the Stopwatch object using the property Elapsed. The Stopwatch object actually works with processor ticks internally, so when the object is asked for the elapsed time it has to calculate this time and create a TimeSpan object to return the result.

Actually there is another property on a Stopwatch object that returns elapsed milliseconds (ElapsedMilliseconds), but this returns an Int64 value, so the precision is not good enough for very fast activities.

You can get even closer to the inner workings of the Stopwatch object by asking for the ticks with the property ElapsedTicks. This still creates a new object to return the result, but that object is an Int64, which is a much simpler object and therefore much cheaper to create and handle.
When the activity is done you can convert the difference between these integers to a TimeSpan object. And this conversion you can move away from the high-performance part of your script.
Actually it is quite simple to convert ticks, or a difference in ticks, to a TimeSpan in PowerShell. All you have to do is ask PowerShell for the conversion like this:
$Timespan = [System.Timespan]$Stopwatch.ElapsedTicks
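
One caution on this cast that I would add: a Stopwatch counts ticks in units of the static property Frequency, and the cast above only gives correct wall-clock time when that frequency is 10,000,000, matching TimeSpan ticks of 100 ns. A frequency-safe conversion scales explicitly:

$TicksPerSecond = [System.Diagnostics.Stopwatch]::Frequency
$Timespan = [System.TimeSpan]::FromSeconds($Stopwatch.ElapsedTicks / $TicksPerSecond)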

To make a simple comparison of the performance of the different solutions described above I have created four small test functions:
  1. DateTime by Get-Date
  2. Stopwatch.Elapsed
  3. Stopwatch Milliseconds
  4. Stopwatch Ticks
The general structure of each test function is to measure time over a simple loop. And this test is repeated over several iterations.
I have created a small function to calculate the minimum, maximum and average values of each test run, except for test function three on Stopwatch Milliseconds, as it generates Int64 and not TimeSpan results.
[double]$Time = 0.0
[double]$sum = 0.0
[double]$max = 0.0
[double]$min = [double]::MaxValue
for ($i=0; $i -lt $Timespans.Length; $i++) {
  $Time = $Timespans[$i].TotalMilliseconds
  $sum += $Time
  if ($Time -gt $max) { $max = $Time }
  if ($Time -lt $min) { $min = $Time }
}
[double]$avg = $sum / $Timespans.Length

"Avg = $avg ms"
"Max = $max ms"
"Min = $min ms"


The first test function is rather straightforward, using the Get-Date cmdlet:
[System.TimeSpan[]]$TimeSpans = [System.TimeSpan[]]::new($Iterations)
foreach ($n in (0..($TimeSpans.Length-1))) {
  $Start = Get-Date
  foreach ($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $End = Get-Date
  $TimeSpans[$n] = $End - $Start
}
Get-TimeStats -TimeSpans $TimeSpans


The second test function looks like many other examples on Stopwatch.Elapsed:
[System.TimeSpan[]]$TimeSpans = [System.TimeSpan[]]::new($Iterations)
foreach ($n in (0..($TimeSpans.Length-1))) {
  $Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
  foreach ($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $Stopwatch.Stop()
  $TimeSpans[$n] = $Stopwatch.Elapsed
}
Get-TimeStats -Timespans $TimeSpans


As mentioned earlier, the third test function on Stopwatch milliseconds is a bit longer, as it has its own calculations on statistics:
[Int64[]]$Elapsed = [Int64[]]::new($Iterations)
[Int64]$sum = 0
[Int64]$min = [Int64]::MaxValue
[Int64]$max = 0

[System.Diagnostics.Stopwatch]$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($n in (0..($Elapsed.Length-1))) {
  [Int64]$Start = $Stopwatch.ElapsedMilliseconds
  foreach($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  [Int64]$End = $Stopwatch.ElapsedMilliseconds

  $Elapsed[$n] = $End - $Start
  $sum += $Elapsed[$n]
  if ($Elapsed[$n] -lt $min) { $min = $Elapsed[$n] }
  if ($Elapsed[$n] -gt $max) { $max = $Elapsed[$n] }
}
[double]$avg = $sum / $Elapsed.Length

"Avg = $avg ms"
"Max = $max ms"
"Min = $min ms"


The fourth and last test function, on Stopwatch ticks, is not seen in common examples, but I still think it is simple enough to be useful:
[System.TimeSpan[]]$Elapsed = [System.TimeSpan[]]::new($Iterations)

$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($n in (0..($Elapsed.Length-1))) {
  $Start = $Stopwatch.ElapsedTicks
  foreach($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $End = $Stopwatch.ElapsedTicks
  $Elapsed[$n] = [System.TimeSpan]($End - $Start)
}

Get-TimeStats -TimeSpans $Elapsed


I have completed some test runs and my general impression is that

  • The average test time is about the same for all test functions.
  • The DateTime test function (#1) has a much higher max test time. Of the other three test functions, the one on Stopwatch Ticks (#4) has the lowest max test time.
  • As the precision of Stopwatch Milliseconds (#3) is very bad on very low values, its min test time value is useless. Of the remaining test functions, Stopwatch Ticks (#4) has a slightly lower min test time than the other two, but the difference is not significant.

My general and personal (subjective) conclusion is that I will usually go with Stopwatch Elapsed, as it gives a nice blend of performance and convenience.
One situation where I would consider using Stopwatch Ticks is a sequence of several small activities where I want to measure each activity. In that case I would start one Stopwatch and then just read ElapsedTicks before and after each activity. Only when all activities are finished would I shut down the Stopwatch.
In real high-performance scenarios I would also consider using Stopwatch Ticks. Or switch to a high-performance platform like C# or C++.

2019-10-20

VMware VIX API

To automate virtualization, the primary API from VMware is VIX. The API is released for VMware Fusion, VMware Workstation Pro and VMware Player, but it is not included in all products. In that case you have to download and install the API manually.

The VIX API documentation is available on the VMware support (link). Here there is also a reference to the tool vmrun. Maybe I will get back to this later...

Unfortunately the API is not really maintained any longer. The last version (1.17) is from 2017-09-26. That is too long ago, especially when you look at all the nice things you can do on Hyper-V with PowerShell CmdLets (link).
One answer could be the new REST API, but that is too limited to be useful. I have taken a brief look at this API earlier (link).
Another API from VMware is PowerCLI, which is actually a PowerShell interface. But this API does not work with VMware Workstation. That is a really bad decision by VMware.
Not that PowerCLI is fantastic; I think they got the basics of PowerShell somewhat wrong.
On the PowerShell Gallery VMware has the module VMware.VimAutomation.Core (link). But like PowerCLI this module is only for vSphere and can therefore not be used with VMware Workstation.

With VMware Workstation Pro, VIX is installed in the path "%ProgramFiles(x86)%\VMware\VMware VIX\". In this folder there are three folders with documentation (doc), examples (samples) and implementations for VMware Workstation 15 (Workstation-15.0.0). The implementation for VMware Workstation is both 32- and 64-bit.
Please notice that the documentation in the local doc folder is not complete. You should use the online documentation (link).
The folder also holds C++ code files. Actually VIX is a C++ SDK with a COM implementation. The code files are in the VIX folder and the Workstation subfolder.

VixCOM

There is a wrapper for the VIX COM interface on GitHub in the project VMWareTasks. The wrapper is written in and for C#. The code is 32-bit only, which might give some issues on a modern 64-bit platform.
Unfortunately this project has not been updated since 2015-11-30, which is more than a year before the last update to VIX. From my point of view this project looks somewhat cold.

COM

I can add a type in PowerShell on a VIX DLL file like
Add-Type -LiteralPath $VixFile.FullName -PassThru
But even when I use the PassThru option there is neither an error nor type info.
After I have added a type and try to address it with
$Vix = New-Object -ComObject 'Vix.COM'
things go wrong deep inside .NET with the error "Bad IL format".

Still I have not been able to figure out how to connect PowerShell to VIX using COM.

There are a lot of Perl examples, but using Perl integrates badly with my general infrastructure automation, which is based on PowerShell.

I could consider using C++ code with Python, and that is an interesting option. I might look into it if we introduce Python in our infrastructure.

2019-09-13

PowerShell function switch parameter

I played a little with switch parameters to a PowerShell function. The initial purpose was to see how to use default values on switch parameters.
A quick (and dirty) function to test various usages of a switch parameter:

function Test-Switch {
    param (
        [Parameter(Mandatory=$true)][switch]$SwitchOne,
        [switch]$SwitchTwo = $false
    )
 
    if($SwitchOne) { 'One is True' }
    else { 'One is False' }

    if($SwitchTwo) { 'Two is True' }
    else { 'Two is False' }
}


I call the function in different ways like this:

'--- (true) (none) ---'
Test-Switch -SwitchOne
'--- false (none) ---'
Test-Switch -SwitchOne:$false
'--- (true) (false) ---'
Test-Switch -SwitchOne -SwitchTwo
'--- false true ---'
Test-Switch -SwitchOne:$false -SwitchTwo:$true
'--- (true) false ---'
Test-Switch -SwitchOne -SwitchTwo:$false


And the output is:

--- (true) (none) ---
One is True
Two is False
--- false (none) ---
One is False
Two is False
--- (true) (false) ---
One is True
Two is True
--- false true ---
One is False
Two is True
--- (true) false ---
One is True
Two is False


When I take a step back, the functionality of a binary switch parameter on a PowerShell function is a nice feature.
But it should be used with caution.
Remember the principle of checking for positives. So a usage of a false default is something I would consider more than twice.

2019-08-17

VMware Workstation REST API

With VMware Workstation Pro there is a REST API where you can manage virtual machines. This API has a standard protocol that can be handled by the PowerShell CmdLet Invoke-RestMethod.
I have looked into this with PowerShell Core 6 and it basically works fine.

To set up the REST API on your local VMware Workstation Pro there is a fine guide by VMware: "Use the VMware Workstation Pro REST API Service". You should notice that you have to initialize the API before use, and that the REST API is only active while you have it running through the command-line tool vmrest.exe.

There are several independent writings on the VMware REST API, but most are in relation to vCenter, where you are working over a remote connection and the security is tighter. When you are working on a local VMware Workstation you can omit most encryption and work over plain HTTP.

After initializing the REST API there are some PowerShell workarounds to get the credential to work. This is described in detail by others, like in the blog entry "Introduction to PowerShell REST API authentication". Most are against vCenter but work on a local VMware Workstation. In my case this works fine:

  $Credential = Get-Credential
  $Auth = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($Credential.UserName + ':' + $Credential.GetNetworkCredential().Password))
  $head = @{
    'Authorization' = "Basic $Auth"
  }

  $RestApiServer = '127.0.0.1:8697'
  $BaseUri = "http://$RestApiServer"


To get a list of defined virtual machines one call is enough:

  $vmListUri = $BaseUri + '/api/vms'
  "URI = $vmListUri" | Write-Debug
  $vmList = Invoke-RestMethod -Method Get -Uri $vmListUri -Headers $head
  $vmList


Unfortunately it looks like the REST API is rather limited in functionality. Calling the REST API in a web browser presents the functionality on virtual machine management like this:



I can create a copy of an existing virtual machine, but not create a new one from scratch. This is just one example of a feature that I miss.

Maybe I should take another look at the VMware VIX API, even though it is based on older technology that fits poorly with PowerShell Core.

2019-03-27

Windows Storage Spaces

To look into if and how to use Windows Storage Spaces on a SQL Server database server, I have added 5 SCSI disks to a virtual server running Windows Server 2016 on VMware Workstation.
Please notice that I am working with the local Windows Storage Spaces and not the networked Storage Spaces Direct (S2D). The physical disk on the host is a Samsung 970 Evo NVMe SSD, also hosting the host OS Windows 10 Pro.

On these 5 disks I create a storage pool, in which I create 2 Virtual Disks of different sizes and with different allocation unit sizes, chosen with a SQL Server installation in mind.
Also the logical sector size in the storage pool is set to 4KB, where the default is 512B. This is to see how easy it is to change, and maybe later experiment further with the impact of a larger logical sector size.

The storage pool is created with the CmdLet New-StoragePool:

New-StoragePool -FriendlyName Pool_00 -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -LogicalSectorSizeDefault 4KB

The CmdLet returns

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly
------------ ----------------- ------------ ------------ ----------
Pool_00      OK                Healthy      False        False   


Then the first Virtual Disk is created with the CmdLet New-VirtualDisk:

New-VirtualDisk -StoragePoolFriendlyName Pool_00 -FriendlyName ProgramDisk -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size (42GB) -ProvisioningType Thin

This CmdLet returns

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach  Size
------------ --------------------- ----------------- ------------ --------------  ----
ProgramDisk  Mirror                OK                Healthy      False          42 GB


To create a partition on the Virtual Disk and format the volume with ReFS a small PowerShell script does it:

Get-VirtualDisk -FriendlyName ProgramDisk | Get-Disk |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter F -UseMaximumSize |
Format-Volume -NewFileSystemLabel Program -FileSystem ReFS


This small script returns

DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining     Size
----------- --------------- ---------- --------- ------------ ----------------- -------------     ----
F           Program         ReFS       Fixed     Healthy      OK                     40.93 GB 41.81 GB


When the disk is created, a dialog about formatting the drive might be shown like this


To check the volume I use the command-line tool fsutil:

.\fsutil.exe fsinfo refsinfo F:

The tool returns

REFS Volume Serial Number :       0xee841e1a841de63d
REFS Version   :                  3.1
Number Sectors :                  0x0000000000a74000
Total Clusters :                  0x0000000000a74000
Free Clusters  :                  0x0000000000a3bb69
Total Reserved :                  0x0000000000018510
Bytes Per Sector  :               4096
Bytes Per Physical Sector :       4096
Bytes Per Cluster :               4096
Checksum Type:                    CHECKSUM_TYPE_NONE


The second Virtual Disk is created with

New-VirtualDisk -StoragePoolFriendlyName Pool_00 -FriendlyName DataDisk -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size (123GB) -ProvisioningType Thin

The return is similar to last time

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach   Size
------------ --------------------- ----------------- ------------ --------------   ----
DataDisk     Mirror                OK                Healthy      False          123 GB


The volume is created and formatted with the small script:

Get-VirtualDisk -FriendlyName DataDisk | Get-Disk |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter H -UseMaximumSize |
Format-Volume -NewFileSystemLabel Data -FileSystem ReFS -AllocationUnitSize 64KB


And the return is

DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining      Size
----------- --------------- ---------- --------- ------------ ----------------- -------------      ----
H           Data            ReFS       Fixed     Healthy      OK                    121.77 GB 122.81 GB


Check the volume

.\fsutil.exe fsinfo refsinfo H:

The tool returns

REFS Volume Serial Number :       0x2620bb5220bb27a7
REFS Version   :                  3.1
Number Sectors :                  0x0000000001eb4000
Total Clusters :                  0x00000000001eb400
Free Clusters  :                  0x00000000001e710d
Total Reserved :                  0x00000000000027c8
Bytes Per Sector  :               4096
Bytes Per Physical Sector :       4096
Bytes Per Cluster :               65536
Checksum Type:                    CHECKSUM_TYPE_NONE


Discussion

This was just a look at the creation of a storage pool with different Virtual Disks in Windows Storage Spaces. Performance and optimal configuration for SQL Server are subjects for other and later writings.

Please notice that Windows can't boot from a storage pool. This means that you have to configure more traditional storage for booting and running Windows. Here there is no difference between Windows Server and a Windows desktop OS.

A quick thought, though, is that I would avoid tiered storage, just as in a SAN configuration for SQL Server.

2018-10-04

Access to vmware shared folders in PowerShell administrator session

When you define shared folders to a VMware guest as a drive, they are also available to a PowerShell session as a drive, usually the Z-drive. This you can see with the command "net use":

PS> net use
New connections will be remembered.

Status       Local     Remote                    Network
-------------------------------------------------------------------------------
             Z:        \\vmware-host\Shared Folders
                                                VMware Shared Folders
The command completed successfully.


But if the PowerShell session is an administrator session, you get something like a network error. This is because the share is not defined in the administrator session. If "net use" is run and the result compared with the one from the normal user session, you can see that the Z-drive is missing.

PS> net use
New connections will be remembered.

There are no entries in the list.


The drive can then be mapped in the session. A new administrator session will also have the Z-drive mapped.

PS> net use Z: '\\vmware-host\shared folders'
The command completed successfully


It is possible to persist the mapping with the parameter "persistent", but you might experience that the persistence varies. I have not looked deeper into the details of this.
/persistent:{yes | no}

The short help on "net use" is available with the command:
net use /?
The longer help is available with the command:
net use /help

This is not only a PowerShell thing, but a general Windows UAC thing. You will have much the same experience in other tools like the Windows command shell (cmd) or Windows Scripting Host (WSH).

Mapping the Z-drive with the PowerShell cmdlet New-PSDrive does not make the VMware shared folders available to administrator sessions.

2017-08-09

Getting and using PowerShell AzureRM

PowerShell


Check if PowerShell is available
powershell.exe
and what version is installed
$Host
or
$Host.Version

If PowerShell is an old version or not installed, then the installation is done by installing Windows Management Framework (WMF) in the latest version. Take a look at the PowerShell site (link) on MSDN or use your favorite search engine to find the latest version of WMF.

PowerShellGet

PowerShellGet is a PowerShell module that is used to get modules from a repository like the PowerShell Gallery.

Check if the module PowerShellGet is installed, and if so in what version:
Get-Module -Name PowerShellGet -ListAvailable
The output is a ModuleInfoGrouping object, where the Version property holds a Version object.

Directory: C:\Program Files\WindowsPowerShell\Modules

ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Script 1.0.0.1 PowerShellGet {Install-Module, Find-Module, Save-Module, Update-Module...}


Check what version that is the latest in the repository:
Find-Module -Name PowerShellGet -Repository PSGallery

Version Name Repository Description
------- ---- ---------- -----------
2.0.1 PowerShellGet PSGallery PowerShell module with commands for discovering,...


Compare the Version elements in the outputs.
The output is a ModuleInfoGrouping object, where the Version property holds a Version object like the output from the Cmdlet Get-Module.
You might have several versions of the same module installed. Then you should compare with the latest version.

Install the latest version of the PowerShellGet module:
Install-Module -Name PowerShellGet -AllowClobber -Force
I have seen this warning (error?) sometimes:
WARNING: The version '1.1.4.0' of module 'PackageManagement' is currently in use. Retry the operation after closing the applications.
A PowerShell restart does not help. I have not looked deeper into this issue.

A new compare shows two versions of PowerShellGet installed side-by-side.

Directory: C:\Program Files\WindowsPowerShell\Modules

ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Script 2.0.1 PowerShellGet {Find-Command, Find-DSCResource, Find-Module, Find-RoleCap...
Script 1.0.0.1 PowerShellGet {Install-Module, Find-Module, Save-Module, Update-Module...}

In general PowerShell will use the highest version number available when called without a specific version number.
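If a session needs a specific installed version, it can be requested explicitly; the version number here is one of those listed above:

Import-Module -Name PowerShellGet -RequiredVersion 1.0.0.1
(Get-Module -Name PowerShellGet).Version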

Update the PowerShellGet module to the latest version:
Update-Module -Name PowerShellGet
I have seen that version 1.0.0.1 of PowerShellGet can't be updated, but the update doesn't fail.

Uninstall the module in all versions installed:
Uninstall-Module -Name PowerShellGet -AllVersions -Force
Again PowerShellGet in version 1.0.0.1 stands out, as it is not uninstalled, but the uninstall doesn't fail.
If I try a version specific uninstall
Uninstall-Module -Name PowerShellGet -RequiredVersion '1.0.0.1'
then there is an error
PackageManagement\Uninstall-Package : No match was found for the specified search criteria and module names 'PowerShellGet'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.1.3.2\PSModule.psm1:2252 char:21
+ ... $null = PackageManagement\Uninstall-Package @PSBoundParameters
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Microsoft.Power...ninstallPackage:UninstallPackage) [Uninstall-Package], Exception
+ FullyQualifiedErrorId : NoMatchFound,Microsoft.PowerShell.PackageManagement.Cmdlets.UninstallPackage

where it looks like a newer version is installed, but that does not show through Get-Module -ListAvailable. It might be that the installation on my computer has been broken during several Install-Update-Uninstall sessions. I have not looked deeper into this issue.

Import the PowerShellGet module in the current PowerShell session with this statement:
Import-Module -Name PowerShellGet
This is usually not needed as the module is loaded when installed.

List functions available in module:
Get-Command -Module PowerShellGet

Remove the PowerShellGet module with
Remove-Module -Name PowerShellGet
but why???

AzureRM

It is the same Cmdlets as used with PowerShellGet above, but I go through the phases anyway to see the differences.

Check the installed and available version:
Get-Module -Name AzureRm -ListAvailable
Find-Module -Name AzureRM -Repository PSGallery


Install the latest version:
Install-Module -Name AzureRM -AllowClobber -Force
It will usually take several seconds before the installation begins. Until then no activity is shown in the PowerShell console - you just have to be a little patient.

Update to the latest version:
Update-Module -Name AzureRM

Uninstall the module:
Uninstall-Module -Name AzureRM -AllVersions -Force

Import the module in the current PowerShell session:
Import-Module -Name AzureRM

List functions available in module:
Get-Command -Module AzureRM

Remove the module from the current PowerShell session:
Remove-Module -Name AzureRM

Context

Windows 7 (Enterprise) and 10 (Pro), WMF 5

2017-07-30

SqlBulkCopy with PowerShell

The challenge

A rather common task is to copy many (or all) rows from one SQL Server database table to another as smoothly and fast as possible. There is some tuning on the platform that is important, but the copy itself can be done in several very different ways.

A copy directly from table to table can be done with the .NET SqlBulkCopy class using PowerShell.

Alternatively, T-SQL statements can be used, with reduced possibilities and (maybe) better performance:
INSERT INTO ... SELECT FROM, where the target table must be created beforehand, but can be in any filegroup.
Or SELECT INTO, where the target table will be created in the primary filegroup if the table does not already exist.

Copying data between a table and a file can also be done with the SQL Server utility bcp.exe.
Copying data from a file to a database table can be done with the T-SQL statement BULK INSERT.

But for now I will focus on SqlBulkCopy with PowerShell.
Yan Pan wrote the great post „Use PowerShell to Copy a Table Between Two SQL Server Instances“ on the Hey, Scripting Guy! blog, but I will go through the details on my own to get a deeper understanding.

Using SqlBulkCopy

There is a special case with identity insert, as key constraints and NOT NULL are not checked by default. This can be changed using the SqlBulkCopyOptions enumeration value CheckConstraints.
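
A small sketch of how such options could be combined on the constructor that is also used in the function below; $CnnStrTarget is the target connection string:

$BulkOptions = [System.Data.SqlClient.SqlBulkCopyOptions]::KeepIdentity -bor
    [System.Data.SqlClient.SqlBulkCopyOptions]::CheckConstraints
$SqlBulkCopy = New-Object -TypeName System.Data.SqlClient.SqlBulkCopy($CnnStrTarget, $BulkOptions)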

With large amounts of data it is important to use streaming with a SqlDataReader object instead of a static DataSet object, as the DataSet object will hold all data in memory. This can really stress a server and might bring it down.

There are some interesting articles in the MSDN Library and a good discussion on Stack Overflow about SqlBulkCopy specifically and SQL Server bulk operations in general.

I made the PowerShell function Copy-SqlTable to handle the copy of the data:
function Copy-SqlTable {
<#
.DESCRIPTION
  Copy single table from source database to target database on same SQL Server Database Engine instance.
#>
[CmdletBinding()]
[OutputType([void])]
Param()

Begin {
  $mywatch = [System.Diagnostics.Stopwatch]::StartNew()
  "{0:s}Z :: Copy-SqlTable()" -f [System.DateTime]::UtcNow | Write-Verbose

  [string]$ApplicationName = 'SqlBulkCopy.ps1'

  #Candidates for function parameters:
  [string]$SourceInstanceName = '(local)\SQL2016A'
  [string]$SourceDatabaseName = 'source'
  [string]$SourceTableName = '[test].[business]'

  [string]$TargetInstanceName = $SourceInstanceName
  [string]$TargetDatabaseName = 'target'
  [string]$TargetTableName = $SourceTableName
}

Process {
  'Connect to source...' | Write-Verbose
  [string]$CnnStrSource = "Data Source=$SourceInstanceName;Integrated Security=SSPI;Initial Catalog=$SourceDatabaseName;Application Name=$ApplicationName"
  "Source connection string: '$CnnStrSource'" | Write-Debug
  $SqlCnnSource = New-Object -TypeName System.Data.SqlClient.SqlConnection $CnnStrSource
  $SqlCommand = New-Object -TypeName System.Data.SqlClient.SqlCommand("SELECT * FROM $SourceTableName;", $SqlCnnSource)
  $SqlCnnSource.Open()
  [System.Data.SqlClient.SqlDataReader]$SqlReader = $SqlCommand.ExecuteReader()

  'Copy to target...' | Write-Verbose
  [string]$CnnStrTarget = "Data Source=$TargetInstanceName;Integrated Security=SSPI;Initial Catalog=$TargetDatabaseName;Application Name=$ApplicationName"
  "Target connection string: '$CnnStrTarget'" | Write-Debug
  try {
    $SqlBulkCopy = New-Object -TypeName System.Data.SqlClient.SqlBulkCopy($CnnStrTarget, [System.Data.SqlClient.SqlBulkCopyOptions]::KeepIdentity)
    $SqlBulkCopy.EnableStreaming = $true
    $SqlBulkCopy.DestinationTableName = $TargetTableName
    $SqlBulkCopy.BatchSize = 1000000 # Another candidate for function parameter
    $SqlBulkCopy.BulkCopyTimeout = 0 # seconds, 0 (zero) = no timeout limit
    $SqlBulkCopy.WriteToServer($SqlReader)
  }
  catch [System.Exception] {
    $_.Exception | Write-Output
  }
  finally {
    'Copy complete. Closing...' | Write-Verbose
    $SqlReader.Close()
    $SqlCnnSource.Close()
    $SqlCnnSource.Dispose()
    $SqlBulkCopy.Close()
  }
}

End {
  $mywatch.Stop()
  [string]$Message = "Copy-SqlTable finished with success. Duration = $($mywatch.Elapsed.ToString()). [hh:mm:ss.ddd]"
  "{0:s}Z $Message" -f [System.DateTime]::UtcNow | Write-Output
}
} # Copy-SqlTable


Measure

Execution time was measured on each run. For the PowerShell function I used a .NET Stopwatch object. The T-SQL statements are clocked by default in Management Studio.

Also I kept an eye on memory and CPU usage in Windows Performance Monitor on the Process object with all counters (*) on the processes sqlservr.exe and powershell/powershell_ise during each run.

Finally I caught the actual execution plan of the T-SQL statements and kept an eye on the SQL Server Activity Monitor, e.g. the wait statistics. Actually I enabled the SQL Server Query Store on both databases at creation to have some extra figures to look at.

Create test data

Using the batch delimiter "GO" in Management Studio or SQLCMD with the number of rows as the count parameter value to generate several rows of source data:
USE [source];
GO
SET NOCOUNT ON;
GO
INSERT INTO [test].[business] ([test_str],[test_nr])
VALUES (NEWID(), CONVERT(int, RAND()*2147483647));

GO 1000000000


Define SQL Server objects

The source database and table is created simple but still with parallel I/O in mind:
CREATE DATABASE [source]
ON PRIMARY
  (NAME = N'source_data', FILENAME = N'C:\MSSQL_data\source.primary.mdf',
  SIZE = 8MB, FILEGROWTH = 10MB ),
FILEGROUP [user_data]
  ( NAME = N'user_data00',
   FILENAME = N'C:\MSSQL_data\source.user_data00.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data01',
   FILENAME = N'C:\MSSQL_data\source.user_data01.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data02',
   FILENAME = N'C:\MSSQL_data\source.user_data02.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data03',
   FILENAME = N'C:\MSSQL_data\source.user_data03.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data04',
   FILENAME = N'C:\MSSQL_data\source.user_data04.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data05',
   FILENAME = N'C:\MSSQL_data\source.user_data05.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data06',
   FILENAME = N'C:\MSSQL_data\source.user_data06.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data07',
   FILENAME = N'C:\MSSQL_data\source.user_data07.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB)
LOG ON
  ( NAME = N'source_log',
   FILENAME = N'C:\MSSQL_translog\source_log.ldf',
   SIZE = 56MB, FILEGROWTH = 10MB);
GO

ALTER DATABASE [source] SET QUERY_STORE = ON;
ALTER DATABASE [source] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO

USE [source];
GO
CREATE SCHEMA [test];
GO
CREATE TABLE [test].[business] (
  [test_id] bigint NOT NULL IDENTITY (1, 1),
  [test_str] nvarchar(256) NOT NULL,
  [test_nr] int NOT NULL
  ) ON [user_data];
GO

USE [master];
GO


The target database and table is created in similar way but on another drive to further optimize I/O:
CREATE DATABASE [target]
ON PRIMARY
  (NAME = N'taget_data', FILENAME = N'M:\MSSQL_data\target.primary.mdf',
  SIZE = 8MB, FILEGROWTH = 8MB ) ,
FILEGROUP [user_data]
  ( NAME = N'user_data00',
   FILENAME = N'M:\MSSQL_data\target.user_data00.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data01',
   FILENAME = N'M:\MSSQL_data\target.user_data01.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data02',
   FILENAME = N'M:\MSSQL_data\target.user_data02.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data03',
   FILENAME = N'M:\MSSQL_data\target.user_data03.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data04',
   FILENAME = N'M:\MSSQL_data\target.user_data04.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data05',
   FILENAME = N'M:\MSSQL_data\target.user_data05.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data06',
   FILENAME = N'M:\MSSQL_data\target.user_data06.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data07',
   FILENAME = N'M:\MSSQL_data\target.user_data07.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB)
LOG ON
  ( NAME = N'source_log',
   FILENAME = N'C:\MSSQL_translog\target_log.ldf',
   SIZE = 56MB, FILEGROWTH = 16MB);
GO

ALTER DATABASE [target] SET QUERY_STORE = ON
ALTER DATABASE [target] SET QUERY_STORE (OPERATION_MODE = READ_WRITE)
GO

USE [target];
GO
CREATE SCHEMA [test];
GO
CREATE TABLE [test].[business] (
  [test_id] bigint NOT NULL IDENTITY (1, 1),
  [test_str] nvarchar(256) NOT NULL,
  [test_nr] int NOT NULL
) ON [user_data];
GO

USE [master];
GO


Evaluation

The first measure is a basic run with default batch size.

Copy-SqlTable finished with success. Duration = 00:13:28.0940294. [hh:mm:ss.ddd]; 134 308 637 rows; 12.3 GB data
Copy-SqlTable finished with success. Duration = 00:16:12.9162091. [hh:mm:ss.ddd]; BatchSize = 1 000
Copy-SqlTable finished with success. Duration = 00:11:34.3647701. [hh:mm:ss.ddd]; BatchSize = 10 000
Copy-SqlTable finished with success. Duration = 00:10:15.7085043. [hh:mm:ss.ddd]; BatchSize = 100 000
Copy-SqlTable finished with success. Duration = 00:10:00.1098163. [hh:mm:ss.ddd]; BatchSize = 1 000 000

2017-07-06

COM objects with PowerShell

COM (Component Object Model) is a rather old technology from Microsoft. But it is still relevant in practice and recognized by Microsoft. Actually there is a new COM implementation in Windows Management Framework (WMF) 5.0 with significant performance improvements, as described in „What's New in Windows PowerShell“. Also some issues are fixed with WMF 5.1 (Bug Fixes).

To get a list of the COM components available on the computer you can use this PowerShell statement:
Get-ChildItem HKLM:\Software\Classes -ErrorAction SilentlyContinue |
Where-Object {
$_.PSChildName -match '^\w+\.\w+$' -and (Test-Path -Path "$($_.PSPath)\CLSID")
  } |
Select-Object -ExpandProperty PSChildName | Out-GridView

On the computer I am writing this on there are 1219 COM components…
The statement above is part of the short and fine article „Get a list of all Com objects available“ by Jaap Brasser.

On MSDN there are some nice introductions:
MSDN Library: „Creating .NET and COM Objects (New-Object)“.

A COM object is created with the PowerShell CmdLet New-Object using the parameter -Strict to stabilize the script when the COM component is using an Interop Assembly.

The integration between COM and .NET is based on COM Interop.

Microsoft Office

Each product in Microsoft Office has at least one COM component. Usually they are used inside an Office product with Visual Basic for Applications (VBA), but VBA code can very easily be refactored to VBScript.

Excel

The central COM component when working with Excel is the „Application Object“. This is a very short and incomplete example:
$Excel = New-Object -ComObject Excel.Application -Property @{Visible = $true} -Strict -ErrorAction SilentlyContinue

$Workbook = $Excel.Workbooks.Add()
$Workbook.Author = 'Niels Grove-Rasmussen'
$Workbook.Title = 'PowerShell COM sandbox'
$Workbook.Subject = 'Sandbox on using COM objects in PowerShell'

$Workbook.Sheets.Item(3).Delete()
$Workbook.Sheets.Item(2).Delete()

$Sheet1 = $Workbook.Sheets.Item(1)
$Sheet1.Name = 'COM'

$Sheet1.Range('A1:A1').Cells = 'Cell A1'
$Sheet1.Range('B1:B1').Cells = 2
$Sheet1.Range('B2:B2').Cells = 3.1
$Sheet1.Range('B3:B3').Cells.Formula = '=SUM(B1:B2)'

$Excel.Quit()

Using -Strict on the CmdLet New-Object also generates 5+ lines of output about IDispatch. This is suppressed with -ErrorAction SilentlyContinue.

Working with Excel through COM will often - if not always - end up using Application.Sheets. In the example above it is used to give the sheet a custom name and add values and a formula to cells.

The Excel object does not close nicely. So some COM cleanup is required to reset the COM environment after an Excel object is closed.
while( [System.Runtime.Interopservices.Marshal]::ReleaseComObject($Excel) ) { 'Released one Excel COM count.' | Write-Verbose }
Remove-Variable Excel
[System.GC]::Collect()

As this is an old challenge, there is a Microsoft article on the more general subject: „Getting Rid of a COM Object (Once and For All)“. One interesting thing about Excel in the article is that when Excel is closed manually, an EXCEL.EXE process is still active.

Word

This is another example on a COM component in Microsoft Office. The naming is similar to Excel, so the central class is the Application Class.

$Word = New-Object -ComObject Word.Application -Property @{Visible = $true} -Strict -ErrorAction SilentlyContinue

$Word.Quit()


Internet Explorer

The old Microsoft web browser has a COM component that used to be popular as an automation GUI. Some naming is aligned with the Microsoft Office COM components - or the other way around. The central class is the Application Class.

$IE = New-Object -ComObject InternetExplorer.Application -Strict
$IE.Visible = $true

$IE = $null


Windows Scripting Host

The „old“ Windows automation framework Windows Scripting Host (WSH) predates PowerShell. By default there are script engines for Visual Basic Scripting Edition (VBScript) and JScript. Other languages are implemented to integrate with WSH, e.g. Perl and REXX. There are many old but good books, articles and examples around. Personally I prefer JScript, as it has more robust error handling than VBScript - and other subjective benefits.

WScript

In this part of WSH the central COM class is WshShell.

$vbInformation = 64
$vbOKCancel = 1

$Timeout = 0
$Wsh = New-Object -ComObject WScript.Shell -Strict
[int]$Button = $Wsh.Popup('Hello from PowerShell!', $Timeout, 'PowerShell', $vbInformation + $vbOKCancel)
$Wsh = $null
''
switch($Button) {
  -1 { '???' }
  1 { ':-)' }
  2 { ':-(' }
}

The Popup method used above is a less nice way to put a popup window in a PowerShell script. I have elsewhere written a nicer solution for a PowerShell MessageBox.
I think that if you need GUI elements you should look at .NET Windows Forms before WSH.

Controller

WshController

Network

WshNetwork

Windows Shell

The central COM component to Windows Shell is the Shell object.

Actually the name „Windows Shell“ is somewhat misleading, as "Windows Shell" is usually used for the Windows command-line shell cmd.exe. So if you are talking with others about the COM class Shell.Application you should make sure to distinguish it from the cmd.exe Windows Shell.

$Shell = New-Object -ComObject Shell.Application -Strict
$Shell.Open('C:\')

$Shell = $null


ScriptControl

.

$Script = New-Object -ComObject MSScriptControl.ScriptControl -Strict
$Script.Language = 'vbscript'
$Script.AddCode('function getInput() getInput = inputbox(...')
$Input = $Script.eval('getInput')

This is a rather poisonous way to call a piece of VBScript code, as the eval method is used. The method opens up for injection attacks on the script.

Disclaimer: The example above should not be used in production!

When I have a more suitable example on using ScriptControl this example will be replaced.

History

2017-06-06 Blog entry started as placeholder for ideas and old notes. Not released.
2017-07-06 Notes rewritten, restructured and released.

2017-05-30

PowerShell accelerators

PowerShell is object oriented and based on the .NET framework. This implies that there are no types, only classes, but to make daily life easier for the PowerShell user Microsoft has added some accelerators to PowerShell. So instead of being forced to go the long way and write [System.Management.Automation.PSObject] you can settle for [psobject].

To get a list of accelerators through the Assembly class you can use the statement
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)::Get
and on my current workstation with PowerShell 4.0 there are 80 accelerators.
The method GetType() has three implementations, and the one with three parameters is used to get the most stable and predictable invocation.
The class TypeAccelerators is private, but accessed through a PSObject object.

Some accelerators are named in lower-case and other in mixed casing (CamelCase). Generally I prefer to use the casing that the accelerator has in the definition.

I have not seen any indications of a performance hit when using accelerators. And as the code is so much more readable with accelerators, I go with them where possible.
When it is not possible I use the full class name, e.g. "System.Data.SqlClient.SqlConnection".

It is possible to create custom accelerators like an accelerator for accelerators:
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators')::Add('accelerators',[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true))
The accelerator is then available like any other accelerator:
[accelerators]::Get
This accelerator is part of some PowerShell extensions.
Personally I prefer to name custom accelerator in lower-case.

Reference

RAVN Systems: PowerShell Type Accelerators – shortcuts to .NET classes

2015-12-04

SQL Server Errorlog parsing with PowerShell

In a recent incident I browsed the SQL Server Errorlog, but with a daily cycle of the Errorlog there were quite a few log files to browse. This is a description of some ways to parse the SQL Server Errorlog for specific messages.

Ways to read Errorlog

There are several more or less common ways to read the Errorlog. I have collected some with a short description.

  • T-SQL: The stored procedure master.sys.sp_readerrorlog uses the extended stored procedure master.sys.xp_readerrorlog to search for a somewhat simple string. This can be enough in simple situations, but not when you need more detailed information. Also, these procedures are not documented by Microsoft, and therefore not supported.
  • SMO: The method Server.ReadErrorLog() reads an Errorlog file and returns an array of Errorlog lines. This is rather useful in most cases, but sometimes I need to combine several lines from the Errorlog.
  • SQLPSX: This CodePlex project "SQL Server PowerShell Extensions" has defined the cmdlet Get-SqlErrorLog. The cmdlet uses the SMO method Server.ReadErrorLog() described above. Unfortunately it looks like the project has been dormant since 2011.
  • .NET: The method [System.IO.File]::ReadAllText() reads the entire contents of the Errorlog file into one String. This makes it convenient for combining several lines of the Errorlog in one search filter.
  • PowerShell: The cmdlet Get-Content, especially with the -Raw parameter set, gives direct access to the contents of an Errorlog file. I prefer the parameter -LiteralPath over the parameter -Path, as it ensures the precise path on casing, spaces and everything else.
In general I prefer to use the PowerShell cmdlet Get-Content. Get-Content -Raw can be used to get one string instead of an array of SQL Server Errorlog lines. This makes it easy to match the entire event with one regular expression (regex) pattern.
To match across several lines in the Errorlog, the regular expression pattern includes \s to match both CR&LF and LF. Actually in this case both symbols are in the Errorlog to generate a NewLine in the presentation of the Errorlog.

When the parsing is implemented in PowerShell advanced functions, the input can be either one SQL Server Errorlog file or a folder holding several Errorlog files. The function will implicitly traverse the files in the pipeline through the Process part of the function.
Calling the function with one Errorlog file can be like this:
Get-FlushCache -ErrorLogFile 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1'
while calling the function on all Errorlog files in a folder can be like this:
Get-ChildItem -Path 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG*' |
Get-FlushCache


A PowerShell advanced function has this general structure
function Get-FlushCache {
<#
.DESCRIPTION
Find FlushCache entries in SQL Server Errorlog and get metrics from the entry.
.PARAMETER
ErrorlogFileName Name of SQL Server errorlog file with full path
.INPUTS
SQL Server Database Engine Errorlog file by full path.
.OUTPUTS
(none)
.RETURNVALUE
Collection of FlushCache events
.EXAMPLE
#>
[CmdletBinding()]
param(
  [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
  [System.IO.FileInfo]$ErrorLogFile
)
Begin {
  $FlushCaches = @() # Array to hold all FlushCache objects that are created on the matches
  $TotalWatch = [System.Diagnostics.Stopwatch]::StartNew()
}

Process {
  "Check file '$($ErrorLogFile.Name)'..." | Write-Verbose
  if ($ErrorLogFile.Exists -eq $false) {
    "File '$($ErrorLogFile.FullName)' does not exist" | Write-Error -Category ObjectNotFound
    return
  }

  "Matching pattern (File size = $($ErrorLogFile.Length) B)..." | Write-Verbose
  [String]$ErrorLog = Get-Content -LiteralPath $ErrorLogFile.FullName -Raw
  [String]$regex = ...

  $MatchWatch = [System.Diagnostics.Stopwatch]::StartNew()
  [Microsoft.PowerShell.Commands.MatchInfo]$FlushCacheMatches = $ErrorLog | Select-String -Pattern $regex -AllMatches
  $MatchWatch.Stop()
  if ($null -eq $FlushCacheMatches) {
    "No matches on pattern in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
  }
  else {
    "$($FlushCacheMatches.Matches.Count) matches found in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
    foreach($Match in $FlushCacheMatches.Matches) {
      $TimeStamp = [System.DateTime]$Match.Groups['timestamp'].Value
      $BufCount = [Int]$Match.Groups['bufs'].Value
      $WriteCount = [Int]$Match.Groups['writes'].Value
      $WriteTime = [Int]$Match.Groups['writetime'].Value
      $BufAvoidCount = [Int]$Match.Groups['bufsavoided'].Value
      $DbId = [String]$Match.Groups['db'].Value
      $AvgWritesPerSecond = [Double]$Match.Groups['avgwrites'].Value
      $AvgThroughput = [Double]$Match.Groups['avgthroughput'].Value
      $IoSaturation = [Int]$Match.Groups['iosaturation'].Value
      $ContextSwitchCount = [Int]$Match.Groups['contxtsw'].Value
      $LastTarget = [Int]$Match.Groups['lasttarget'].Value
      $AvgWriteLatency = [Int]$Match.Groups['avgwritelat'].Value

      $FlushCacheProperties = @{
        TimeStamp = $TimeStamp
        BufCount = $BufCount
        WriteCount = $WriteCount
        WriteTimeMs = $WriteTime
        BufAvoidCount = $BufAvoidCount
        DbId = $DbId
        AvgWritesPerSecond = $AvgWritesPerSecond
        AvgThroughput = $AvgThroughput
        IoSaturation = $IoSaturation
        ContextSwitchCount = $ContextSwitchCount
        LastTarget = $LastTarget
        AvgWriteLatency = $AvgWriteLatency
      }
      $FlushCache = New-Object -TypeName PSObject -Property $FlushCacheProperties
      $FlushCache.PSObject.TypeNames.Insert(0, 'SqlAdmin.FlushCache')
      $FlushCaches += $FlushCache
    }
  }
}

End {
  $TotalWatch.Stop()
  "$($FlushCaches.Count) matches in extract collection (Total watch = $($TotalWatch.Elapsed.ToString()))." | Write-Verbose
  $FlushCaches
}
} # Get-FlushCache()
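The CmdletBinding attribute makes the common parameters available, so the Write-Verbose messages in the function can be switched on directly at the call:
Get-FlushCache -ErrorLogFile 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1' -Verbose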

The content of the String regex is the regular expression pattern. Each pattern defines different output values, and the function must be adjusted with matching properties on the custom object. Some examples of regular expression patterns are shown below.

How to match an event

When the content of an Errorlog file is read, the next step is to find the event(s) in the file. In PowerShell this can be done quite directly with the cmdlet Select-String, using the -Pattern parameter with a regular expression as value. The individual parts of the match can be isolated by defining matching groups in the regular expression.
Select-String returns a MatchInfo object that contains a collection of Matches. This collection can be traversed with a foreach statement, where the loop variable is a Match object. The value of a match group is in the Value property of each Group object in the Groups collection.
I put the group values of a match in a custom object (PSObject), and the objects are put in a collection. This collection is the basic output of the script.
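A small sketch of the mechanics on a single sample line, with a shortened pattern:
$Line = '2013-01-13 12:13:14.37 spid16s     FlushCache: cleaned up 70601 bufs with 2731 writes in 130454 ms'
$Found = $Line | Select-String -Pattern 'cleaned up (?<bufs>\d+) bufs with (?<writes>\d+) writes' -AllMatches
foreach ($Match in $Found.Matches) {
  # The value of a group is in the Value property of the Group object.
  [Int]$Match.Groups['bufs'].Value
  [Int]$Match.Groups['writes'].Value
}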

FlushCache

This event is logged in the SQL Server Errorlog like this example:
2013-01-13 12:13:14.37 spid16s     FlushCache: cleaned up 70601 bufs with 2731 writes in 130454 ms (avoided 13308 new dirty bufs) for db 42:0
2013-01-13 12:13:14.37 spid16s                 average writes per second:  20.93 writes/sec
            average throughput:   4.22 MB/sec, I/O saturation: 5536, context switches 12391
2013-01-13 12:13:14.37 spid16s                 last target outstanding: 498, avgWriteLatency 39

This is an example of an event that is logged across several lines in the Errorlog file.

The challenge in this case is two-fold:
  1. Multiple lines in SQL Server Errorlog in one event entry.
  2. Multiple events in one SQL Server Errorlog.
The regular expression pattern used in this case is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'FlushCache: cleaned up (?<bufs>\d+) bufs ' `
+ 'with (?<writes>\d+) writes ' `
+ 'in (?<writetime>\d+) ms ' `
+ '\(avoided (?<bufsavoided>\d+) new dirty bufs\) ' `
+ 'for db (?<db>\d+:\d+)' `
+ '\s+.+average writes per second: (?<avgwrites>\d+\.\d+) writes/sec' `
+ '\s+.+average throughput: (?<avgthroughput>\d+\.\d+) MB/sec, ' `
+ 'I/O saturation: (?<iosaturation>\d+), ' `
+ 'context switches (?<contxtsw>\d+)' `
+ '\s+.+last target outstanding: (?<lasttarget>\d+), ' `
+ 'avgWriteLatency (?<avgwritelat>\d+)'

The pattern string is spread over several lines in the script code, with one line per regex group. This gives me the opportunity to comment on each grouping pattern.
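While developing the pattern it can be verified against a copy of a sample event; a sketch, assuming the example above is saved in the hypothetical file 'D:\_temp\FlushCacheSample.txt':
$Sample = Get-Content -LiteralPath 'D:\_temp\FlushCacheSample.txt' -Raw
$Test = $Sample | Select-String -Pattern $regex -AllMatches
$Test.Matches[0].Groups['db'].Value   # '42:0' with the example above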

I/O requests taking longer than 15 seconds

2013-01-13 12:13:14.39 spid17s     SQL Server has encountered 1 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database id 2. The OS file handle is 0x0000000000001178. The offset of the latest long I/O is: 0x000002dc510000
This issue is described in detail in the Microsoft Support article „Diagnostics in SQL Server help detect stalled and stuck I/O operations“.
The regular expression pattern to find these events is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'SQL Server has encountered (?<occurrence>\d+) occurrence\(s\) of I/O requests taking longer than 15 seconds to complete ' `
+ 'on file \[(?<file>.+)\] ' `
+ 'in database id (?<db>\d+)\. ' `
+ 'The OS file handle is (?<handle>0x[0-9a-f]+)\. ' `
+ 'The offset of the latest long I/O is: (?<offset>0x[0-9a-f]+)'
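The skeleton from Get-FlushCache can be reused with this pattern; a sketch that maps the groups to a custom object (the property names are my own):
$ErrorLog = Get-Content -LiteralPath 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG' -Raw
$Found = $ErrorLog | Select-String -Pattern $regex -AllMatches
foreach ($Match in $Found.Matches) {
  New-Object -TypeName PSObject -Property @{
    TimeStamp  = [System.DateTime]$Match.Groups['timestamp'].Value
    Occurrence = [Int]$Match.Groups['occurrence'].Value
    File       = [String]$Match.Groups['file'].Value
    DbId       = [Int]$Match.Groups['db'].Value
  }
}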


Login failed

This is something I want to look more into later. I mention the event now in a rather incomplete form as a teaser - to myself.
2013-01-13 12:13:14.64 Logon Error: 18456, Severity: 14, State: 38.
2013-01-13 12:13:14.16 Logon Login failed for user 'SQLAdmin\Albert'. Reason: Failed to open the explicitly specified database 'Adventureworks'. [CLIENT: 192.168.0.42]


2013-01-13 12:13:14.24 Logon Error: 18456, Severity: 14, State: 40.
2013-01-13 12:13:14.24 Logon Login failed for user 'Bobby'. Reason: Failed to open the database 'Adventureworks' specified in the login properties. [CLIENT: 192.168.0.66]


This error is described in the blog entry „Understanding "login failed" (Error 18456) error messages in SQL Server 2005“.
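As a first, untested sketch for the two-line entries above: the column width after the timestamp may differ from the .{13} used earlier, so a looser \s+Logon\s+ is used here:
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2})\s+Logon\s+' `
+ 'Error: (?<error>\d+), Severity: (?<severity>\d+), State: (?<state>\d+)\.' `
+ '\s+.+Login failed for user ''(?<user>[^'']+)''\. ' `
+ 'Reason: (?<reason>.+) \[CLIENT: (?<client>[^\]]+)\]'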

Anonymous login

When Windows Authentication is used through a double-hop, like a linked server or SharePoint, and Kerberos delegation is not implemented, the login will fail with an entry in the SQL Server Errorlog like:
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

This situation is very well described in SQL Server Protocols Blog: "SQL Linked Server Query failed with "Login failed for user...""
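This event fits on a single Errorlog line, so a literal match is sufficient; a sketch:
Select-String -LiteralPath 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1' `
  -Pattern 'NT AUTHORITY\ANONYMOUS LOGON' -SimpleMatch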

Memory page size - Large Pages

This is not an error, but an Errorlog entry on the memory configuration.
By default Large Pages are not used, and this is logged in the Errorlog when SQL Server starts:
2013-01-13 12:13:14.15 Server Using conventional memory in the memory manager.
When Large Pages is enabled and SQL Server is restarted, the Errorlog has an entry like:
???
The regular expression pattern to detect Large Pages is like
[String]$regex = 'TBD'
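Until the Large Pages entry is confirmed, the default entry can at least be detected with a literal match; a sketch, with my assumption that the absence of the entry indicates another memory model:
Select-String -LiteralPath 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG' `
  -Pattern 'Using conventional memory in the memory manager' -SimpleMatch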

Discussion

The PowerShell documentation in the TechNet Library is not kept up to date. Some of it is actually wrong, like the documentation for Get-Content. Use the Get-Help cmdlet instead.

A more general function that takes a regular expression pattern as a parameter value could be nice. One thing I have to figure out is how to handle the groups in a regular expression pattern.
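The named groups of an arbitrary pattern can be enumerated through the .NET Regex class, which could be a way for such a generic function to build the custom object; a sketch:
[RegEx]$Pattern = 'cleaned up (?<bufs>\d+) bufs with (?<writes>\d+) writes'
# GetGroupNames() also returns the numbered groups ('0', '1', ...); filter them out.
$Pattern.GetGroupNames() | Where-Object { $_ -notmatch '^\d+$' }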

As a PowerShell advanced function can take the contents of a folder as input through the pipeline, handling several files might be sped up if the pattern matching is done in parallel. This would probably require another structure of the function, with some thread handling.

This post will be updated when I have news or better ways to parse SQL Server Errorlog.

Reference

Happy SysAdmin: „Reading large text files with Powershell“

2015-10-22

PowerShell script configuration file

I have some PowerShell scripts that I execute in various landscapes that are not connected. To control different values for the same configuration parameters, I put them in a configuration file. Then I can create different configuration files for the landscapes and use the same script file in all of them. This makes the scripts easier to manage and maintain.
I try to keep the structure of the configuration file as simple as possible, and then add the "smart" features to the general script file.
There are several solutions to the structure of a configuration file. Actually I think that there is at least one solution per product.

After looking around and trying some ideas I went back to PowerShell and looked at how it can get values from files.
PowerShell has two cmdlets dedicated to exporting data to a file:
  1. Export-Csv
  2. Export-Clixml
The first cmdlet creates a simple text file, whereas the second generates a more complex XML file. I prefer the simple solution, which creates a file like this:
#TYPE SQLAdmin
"SecondValue","FirstValue"
"B2","A1"


The cmdlet Import-Csv is used to read the data from the file into a custom object.
$SQLAdmin = Import-Csv -LiteralPath 'D:\_temp\SQLAdmin.csv'
When the custom object is created it can be expanded with new properties:
$SQLAdmin | Add-Member -MemberType NoteProperty -Name 3rdValue -Value "$($SQLAdmin.FirstValue) : $($SQLAdmin.SecondValue)"
When the data is in the script, it is rather easy to work with it like any other PowerShell object.