Add GPU to Folding@home

In my old rig I have two identical GPUs, but at first the Folding@home client could only use one. The second GPU was not present in either the Web Control or the Advanced Control.

Adding the second GPU turned out to be rather simple:

  1. Open Folding@home Client Advanced Control.
  2. Open the Configure window.
  3. Pick the Slots tab.
  4. Click Add.
  5. Click the GPU radio button. Go with the default values, typically -1.
  6. Click OK to save.

In both Web Control and Advanced Control the second GPU is now present.

Each folding slot works on its own work unit in its own Work Queue, so you can see a mixture of different activities.

When all CPUs and GPUs are working, your power consumption will be high for a significant time. You should keep an eye out for hardware errors and keep the room well ventilated.

Now all that is left is to hope that this can contribute to a solution to the current Covid-19 pandemic.



To read the SQL Server errorlog through T-SQL, one usually uses the system stored procedure sys.sp_readerrorlog. Unfortunately, you cannot grant execute rights on it to anyone other than members of the server roles sysadmin and securityadmin, or logins with the VIEW SERVER STATE right. It would be nice to grant this right to trainees or others who do not have these administrative rights.
But when you look at the definition of the stored procedure, you see an IF statement that filters on the mentioned rights. You also see that the extended stored procedure sys.xp_readerrorlog is called with various parameters.
So I went to the source and took a deeper look at sys.xp_readerrorlog. It turns out that this procedure has more parameters, and thereby more possibilities, than sys.sp_readerrorlog. These parameters are unfortunately not documented by Microsoft – actually the procedure sys.xp_readerrorlog is not documented at all by Microsoft.
But using my favorite internet search engine I found eight different parameters.

xp_readerrorlog syntax

The procedure can be called with two different sets of parameter names: either long and descriptive names, or short names that just indicate the position in the parameter list.

sys.xp_readerrorlog @ArchiveID[, @LogType[, @FilterText1[, @FilterText2[, @FirstEntry[, @LastEntry[, @SortOrder[, @InstanceName]]]]]]]

sys.xp_readerrorlog @p1[, @p2[, @p3[, @p4[, @p5[, @p6[, @p7[, @p8]]]]]]]

  1. @ArchiveID (@p1) [int]: Number of the error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
  2. @LogType (@p2) [int]: Log file type: 1 or NULL = error log, 2 = SQL Agent log.
  3. @FilterText1 (@p3) [nvarchar]: First string you want to search for.
  4. @FilterText2 (@p4) [nvarchar]: Second string you want to search for, to further refine the results.
  5. @FirstEntry (@p5) [datetime]: Search from this start time.
  6. @LastEntry (@p6) [datetime]: Search to this end time.
  7. @SortOrder (@p7) [nchar]: Sort order for results: N'asc' = ascending, N'desc' = descending.
  8. @InstanceName (@p8) [nvarchar]: Instance name. This parameter I actually don't get; a database instance name has no influence.

When a string is used for a parameter value, mark the string as a Unicode string with N and single quotes (N'<value>').

You can call the procedure without giving parameter names, but once you use a parameter name somewhere in the execution string, all subsequent parameters must also be passed by name, or you will get an error like this:
Msg 119, Level 15, State 1, Line 18
Must pass parameter number 2 and subsequent parameters as '@name = value'. After the form '@name = value' has been used, all subsequent parameters must be passed in the form '@name = value'.
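To illustrate this, here are two made-up calls. In the first, a positional value follows a named parameter and the call fails; in the second, the remaining parameters are named as well and the call works:

```sql
-- Fails with Msg 119: a positional value follows a named parameter
EXECUTE master.sys.xp_readerrorlog 0, @p2 = 1, NULL;

-- Works: once @p2 is named, the remaining parameters are also named
EXECUTE master.sys.xp_readerrorlog 0, @p2 = 1, @p3 = NULL;
```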


Read current errorlog file

EXECUTE master.sys.xp_readerrorlog;

Read current errorlog file in descending order

EXECUTE master.sys.xp_readerrorlog 0, 1, NULL, NULL, NULL, NULL, N'desc';

Read previous errorlog file

EXECUTE master.sys.xp_readerrorlog 1;

Read messages on failed backup in current errorlog file

EXECUTE master.sys.xp_readerrorlog 0, 1, N'backup', N'failed';

Read messages on failed login in current errorlog file

EXECUTE master.sys.xp_readerrorlog 0, 1, N'login', N'failed';

Read messages after a given time

EXECUTE master.sys.xp_readerrorlog 0, 1, @p3 = NULL, @p4 = NULL, @p5 = '2020-01-07 14:36:29';

Read all messages between two given times

EXECUTE master.sys.xp_readerrorlog 0, 1, NULL, NULL, @FirstEntry = '2020-01-07 14:36:29', @LastEntry = '2020-01-07 18:00';

Read messages with newest first

EXECUTE master.sys.xp_readerrorlog 0, 1, NULL, NULL, @FirstEntry = '2020-01-07 20:56:29', @LastEntry = NULL, @SortOrder = N'desc';

Reading all errorlog files

Some solutions use xp_dirtree to get the number of errorlog files, but its output needs some processing before it is usable.
Using the procedure xp_instance_regread instead gives directly usable output.

DECLARE @search_string1 nvarchar(256) = N'backup';
DECLARE @search_string2 nvarchar(256) = N'failed';

DECLARE @num_errorlogs int;
EXECUTE master.sys.xp_instance_regread
  @rootkey = N'HKEY_LOCAL_MACHINE',
  @key = N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
  @value_name = N'NumErrorLogs',
  @value = @num_errorlogs OUTPUT;
-- NumErrorLogs only exists in the registry if the default number of log files (6) has been changed
SET @num_errorlogs = ISNULL(@num_errorlogs, 6);

DECLARE @errors TABLE (
  LogDate datetime,
  ProcessInfo nvarchar(64),
  [Text] nvarchar(4000)
);

DECLARE @i int = 0;
WHILE @i <= @num_errorlogs
BEGIN
  INSERT INTO @errors EXECUTE master.sys.xp_readerrorlog @i, 1, @search_string1, @search_string2;
  SET @i = @i + 1;
END;
SELECT * FROM @errors;


mssqltips.com: „Reading the SQL Server log files using TSQL“

sqlserver-help.com: „SQL Internals : Useful Parameters for xp_readerrorlog“


High performance script timer

When you want to time something in a PowerShell script, it is usually not high-frequency timing you are after – for example, how long the script took to run. In a simple situation like this, it is common to see a solution where a DateTime object is created at the beginning, another is created at the end, and finally the duration is calculated. Usually the DateTime objects are created with the Get-Date cmdlet.
But this is rather inefficient, as at least three objects are created: two DateTime objects for the measurement and one TimeSpan object for the calculation.

A general recommendation in books and articles is to use a Stopwatch object and read the elapsed time from there. This solution is way more efficient, as only one object is used for the measurement.

There is a small penalty when asking the Stopwatch object for the elapsed time through the Elapsed property. The Stopwatch actually works with processor ticks internally, so when the object is asked for the elapsed time, it has to calculate that time and create a TimeSpan object to return the result.

Actually there is another property on the Stopwatch object that returns elapsed milliseconds (ElapsedMilliseconds), but this returns an Int64 value, so the precision is not good enough for very fast activities.

You can get even closer to the Stopwatch object's inner workings by asking for the ticks with the ElapsedTicks property. This still creates a new object to return the result, but that object is an Int64, which is a much simpler object and therefore much cheaper to create and handle.
When the activity is done, you can convert the difference between these integers to a TimeSpan object. And this conversion you can move away from the high-performance part of your script.
Actually it is quite simple to convert ticks, or a difference in ticks, to a TimeSpan in PowerShell. All you have to do is ask PowerShell for the conversion like this:
$Timespan = [System.TimeSpan]$Stopwatch.ElapsedTicks

To make a simple comparison of the performance with the different solutions described above I have created four small test functions:
  1. DateTime by Get-Date
  2. Stopwatch.Elapsed
  3. Stopwatch Milliseconds
  4. Stopwatch Ticks
The general structure of each test function is to measure the time over a simple loop, and this test is repeated over several iterations.
I have created a small function to calculate minimum, maximum and average for each test run, except in test function three (Stopwatch Milliseconds), as it generates Int64 and not TimeSpan results.
[double]$Time = 0.0
[double]$sum = 0.0
[double]$max = 0.0
[double]$min = [double]::MaxValue
for ($i=0; $i -lt $Timespans.Length; $i++) {
  $Time = $Timespans[$i].TotalMilliseconds
  $sum += $Time
  if ($Time -gt $max) { $max = $Time }
  if ($Time -lt $min) { $min = $Time }
}
[double]$avg = $sum / $Timespans.Length

"Avg = $avg ms"
"Max = $max ms"
"Min = $min ms"

The first test function is rather straightforward, using the Get-Date cmdlet:
[System.TimeSpan[]]$TimeSpans = [System.TimeSpan[]]::new($Iterations)
foreach ($n in (0..($TimeSpans.Length-1))) {
  $Start = Get-Date
  foreach ($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $End = Get-Date
  $TimeSpans[$n] = $End - $Start
}
Get-TimeStats -TimeSpans $TimeSpans

The second test function looks like many other examples on Stopwatch.Elapsed:
[System.TimeSpan[]]$TimeSpans = [System.TimeSpan[]]::new($Iterations)
foreach ($n in (0..($TimeSpans.Length-1))) {
  $Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
  foreach ($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $TimeSpans[$n] = $Stopwatch.Elapsed
}
Get-TimeStats -Timespans $TimeSpans

As mentioned earlier, the third test function, on Stopwatch milliseconds, is a bit longer, as it has its own calculations on statistics:
[Int64[]]$Elapsed = [Int64[]]::new($Iterations)
[Int64]$sum = 0
[Int64]$min = [Int64]::MaxValue
[Int64]$max = 0

[System.Diagnostics.Stopwatch]$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($n in (0..($Elapsed.Length-1))) {
  [Int64]$Start = $Stopwatch.ElapsedMilliseconds
  foreach($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  [Int64]$End = $Stopwatch.ElapsedMilliseconds

  $Elapsed[$n] = $End - $Start
  $sum += $Elapsed[$n]
  if ($Elapsed[$n] -lt $min) { $min = $Elapsed[$n] }
  if ($Elapsed[$n] -gt $max) { $max = $Elapsed[$n] }
}
[double]$avg = $sum / $Elapsed.Length

"Avg = $avg ms"
"Max = $max ms"
"Min = $min ms"

The fourth and last test function, on Stopwatch ticks, is not found in common examples, but I still think it is simple enough to be useful:
[System.TimeSpan[]]$Elapsed = [System.TimeSpan[]]::new($Iterations)

$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($n in (0..($Elapsed.Length-1))) {
  $Start = $Stopwatch.ElapsedTicks
  foreach($i in [int[]](0..$TimeLoops)) {
    #Burn Time
  }
  $End = $Stopwatch.ElapsedTicks
  $Elapsed[$n] = [System.TimeSpan]($End - $Start)
}

Get-TimeStats -TimeSpans $Elapsed

I have completed some test runs, and my general impression is that:

  • The average test time is about the same on all test functions.
  • The DateTime test function (#1) has a much higher max test time. Of the other three test functions, the one on Stopwatch ticks (#4) has the lowest max test time.
  • As the precision of Stopwatch Milliseconds (#3) is very bad for very low values, its min test time is useless. Of the remaining test functions, Stopwatch ticks (#4) has a slightly lower min test time than the other two, but the difference is not significant.

My general and personal (subjective) conclusion is that I will usually go with Stopwatch.Elapsed, as it gives a nice blend of performance and convenience.
One situation where I would consider using Stopwatch ticks is a sequence of several small activities where I want to measure each activity. Here I would start one Stopwatch and then read ElapsedTicks before and after each activity, and only shut the Stopwatch down when all the activities are finished.
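A minimal sketch of that pattern could look like this; the activities and their names are just placeholders for illustration:

```powershell
$Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
$Ticks = [ordered]@{}

# Read ElapsedTicks before and after each activity; store only the integer difference.
$Start = $Stopwatch.ElapsedTicks
Start-Sleep -Milliseconds 10   # placeholder for activity 1
$Ticks['Activity 1'] = $Stopwatch.ElapsedTicks - $Start

$Start = $Stopwatch.ElapsedTicks
Start-Sleep -Milliseconds 20   # placeholder for activity 2
$Ticks['Activity 2'] = $Stopwatch.ElapsedTicks - $Start

$Stopwatch.Stop()

# Convert the tick differences to TimeSpan objects outside the measured part.
foreach ($Key in $Ticks.Keys) {
  '{0}: {1}' -f $Key, [System.TimeSpan]$Ticks[$Key]
}
```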
In real high-performance scenarios I would also consider using Stopwatch ticks, or switch to a high-performance platform like C# or C++.


2019 top-10 posts

When I look at the statistics from blogger.com, these are the posts you have visited the most during 2019:

10. IOmeter execution (2016-09-04)
9. Could not load file or assembly Microsoft.AnalysisServices (2011-03-14)
8. xp_instance_regwrite syntax (2013-09-10)
7. Installing Ola Hallengren Maintenance Solution (2019-02-03)
6. DBCC CHECKDB with PowerShell (2014-10-09)
5. CheckDB error because snapshot file already exists (2015-08-18)
4. PowerShell MessageBox (2014-03-15)
3. Audit Log for Analysis Services (2014-01-08)
2. ISO 8601 date formatting using PowerShell (2008-08-14)
1. SqlBulkCopy with PowerShell (2017-07-30)

Some posts are several years old, but I think it is great that you still find them interesting.


VMware vmrun command

The file vmrun.exe is usually placed in the folder C:\Program Files (x86)\VMware\VMware Workstation. From there you can call it as a normal command-line tool.
How to use the tool and a description of its features is in the document "Syntax of vmrun Commands". The document is part of the VMware Fusion documentation, but to me it also covers vmrun on VMware Workstation Pro.
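As a quick illustration of calling it from a Windows command prompt (the .vmx path below is made up), starting a machine headless and listing the running machines could look like this:

```
cd "C:\Program Files (x86)\VMware\VMware Workstation"

:: Start a virtual machine without opening the Workstation window
vmrun -T ws start "C:\VMs\Test\Test.vmx" nogui

:: List the virtual machines that are currently running
vmrun -T ws list
```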

There are many nice features in the vmrun tool.
But it is not possible to create a virtual machine from the ground up from an ISO file.
I think this is a severe gap in the feature set, as it should be possible to automate all tasks in a virtualized setup.