The file vmrun.exe is usually placed in the folder C:\Program Files (x86)\VMware\VMware Workstation. From there you can call it as a normal command-line tool.
How to use the tool and a description of its features is in the document "Syntax of vmrun Commands". The documentation is found in the VMware Fusion documentation, but to me it also covers vmrun on VMware Workstation Pro.
There are many nice features in the vmrun tool.
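For example, a couple of basic calls could look like this; the vmx path is a made-up example:
& 'C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe' -T ws list
& 'C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe' -T ws start 'D:\VMs\Test\Test.vmx' nogui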
But it is not possible to create a virtual machine from the ground up on an ISO file.
This I think is a severe lack of features, as it should be possible to automate all tasks in a virtualized setup.
2019-12-04
2019-10-20
VMware VIX API
To automate virtualization the primary API from VMware is VIX. This API is released for VMware Fusion, VMware Workstation Pro, and VMware Player. But it is not included in all products, so you may have to download and install the API manually.
The VIX API documentation is available on the VMware support site (link). Here there is also a reference to the tool vmrun. Maybe I will get back to this later...
Unfortunately the API is not really maintained any longer. The last version (1.17) is from 2017-09-26. This is too long ago, especially when you look at all the nice things you can do on Hyper-V with PowerShell CmdLets (link).
One answer could be the new REST API, but that is too limited to be useful. I have taken a brief look at this API earlier (link).
Another API from VMware is PowerCLI, which is actually a PowerShell interface. But this API does not work with VMware Workstation. That is a really bad decision from VMware.
Not that PowerCLI is fantastic. I think that they got the basics on PowerShell somewhat wrong.
At the PowerShell Gallery VMware has the module VMware.VimAutomation.Core (link). But like PowerCLI this module is only for vSphere and can therefore not be used with VMware Workstation.
With VMware Workstation Pro, VIX is installed in the path "%ProgramFiles(x86)%\VMware\VMware VIX\". In this folder there are three folders with documentation (doc), examples (samples) and implementations for VMware Workstation 15 (Workstation-15.0.0). The implementation on VMware Workstation is both 32- and 64-bit.
Please notice that the documentation in the local doc folder is not complete. You should use the online documentation (link).
Also the folder holds C++ code files. Actually VIX is a C++ SDK with a COM implementation. They are in the VIX folder and the Workstation subfolder.
VixCOM
There is a wrapper for the VIX COM interface on GitHub in the project VMWareTasks. The wrapper is written in and for C#. The code is 32-bit only, which might give some issues on a modern 64-bit platform.
But unfortunately this project has not been updated since 2015-11-30. This is more than a year before the last update to VIX. From my point of view this project looks somewhat cold.
COM
I can add a type in PowerShell on a VIX DLL file like
Add-Type -LiteralPath $VixFile.FullName -PassThru
But even when I use the PassThru option there is neither an error nor type info.
After I have added a type and try to address it with
$Vix = New-Object -ComObject 'Vix.COM'
things go wrong deep inside .NET with the error "Bad IL format".
Still I have not been able to figure out how to connect PowerShell to VIX using COM.
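A sketch of another attempt could use the ProgID "VixCOM.VixLib" that the VBScript samples in the VIX documentation use; I have not confirmed this as working:
# The ProgID "VixCOM.VixLib" is taken from the VBScript samples in the VIX documentation.
$VixLib = New-Object -ComObject 'VixCOM.VixLib'
# Inspect the COM object if the creation succeeds.
$VixLib | Get-Member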
There are a lot of Perl examples, but using Perl integrates badly with my general infrastructure automation, which is based on PowerShell.
2019-09-13
PowerShell function switch parameter
I played a little with switch parameters to a PowerShell function. The initial purpose was to see how to use default values on switch parameters.
A quick (and dirty) function to test various usages of a switch parameter:
function Test-Switch {
param (
[Parameter(Mandatory=$true)][switch]$SwitchOne,
[switch]$SwitchTwo = $false
)
if($SwitchOne) { 'One is True' }
else { 'One is False' }
if($SwitchTwo) { 'Two is True' }
else { 'Two is False' }
}
I call the function in different ways like this:
'--- (true) (none) ---'
Test-Switch -SwitchOne
'--- false (none) ---'
Test-Switch -SwitchOne:$false
'--- (true) (true) ---'
Test-Switch -SwitchOne -SwitchTwo
'--- false true ---'
Test-Switch -SwitchOne:$false -SwitchTwo:$true
'--- (true) false ---'
Test-Switch -SwitchOne -SwitchTwo:$false
And the output is:
--- (true) (none) ---
One is True
Two is False
--- false (none) ---
One is False
Two is False
--- (true) (true) ---
One is True
Two is True
--- false true ---
One is False
Two is True
--- (true) false ---
One is True
Two is False
When I take a step back, a binary switch parameter to a PowerShell function is a nice feature.
But it should be used with caution.
Remember the principle of checking for positives. So I would consider a usage of a false default more than twice.
2019-08-17
VMware Workstation REST API
With VMware Workstation Pro there is a REST API where you can manage virtual machines. This API has a standard protocol that can be handled by the PowerShell CmdLet Invoke-RestMethod.
I have looked into this with PowerShell Core 6 and it basically works fine.
To set up the REST API on your local VMware Workstation Pro there is a fine guide on this by VMware: "Use the VMware Workstation Pro REST API Service". You should notice that you have to initialize the API before use, and that the REST API is only active while you have the command-line tool vmrest.exe running.
There are several independent writings on the VMware REST API, but most are in relation to vCenter, where you are working on a remote connection and the security is tighter. When you are working on a local VMware Workstation you can omit most encryption and work directly over HTTP.
After initializing the REST API there are some PowerShell workarounds to get the credential to work. This is described in detail by others, like in the blog entry "Introduction to PowerShell REST API authentication". Most are against vCenter but work on a local VMware Workstation. In my case this works fine:
$Credential = Get-Credential
$Auth = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($Credential.UserName + ':' + $Credential.GetNetworkCredential().Password))
$head = @{
'Authorization' = "Basic $Auth"
}
$RestApiServer = '127.0.0.1:8697'
$BaseUri = "http://$RestApiServer"
To get a list of defined virtual machines one call is enough:
$vmListUri = $BaseUri + '/api/vms'
"URI = $vmListUri" | Write-Debug
$vmList = Invoke-RestMethod -Method Get -Uri $vmListUri -Headers $head
$vmList
Unfortunately it looks like the REST API is rather limited in functionality. Calling the REST API in a web browser presents the functionality on virtual machine management.
I can create a copy of an existing virtual machine, but not create a new one from scratch. This is just one example of features that I miss.
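A sketch of the clone call, assuming the copy operation is a POST on /api/vms as presented in the browser overview; the name is a made-up example and the parent id is taken from the list above:
# Clone an existing virtual machine through the REST API.
$Body = @{ name = 'ClonedVM'; parentId = $vmList[0].id } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri ($BaseUri + '/api/vms') -Headers $head -ContentType 'application/vnd.vmware.vmw.rest-v1+json' -Body $Body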
Maybe I should take another look at the VMware VIX API, even though it is based on older technology that fits poorly with PowerShell Core.
2019-08-10
AMD EPYC Rome
AMD has launched the new generation of their server processor EPYC with the Zen 2 design. It looks really promising, and I look forward to seeing some performance figures from the real world.
I will not go into the technical details on the processor, as there are others who have already done that, and far better than I would be able to:
- ars technica: A detailed look at AMD’s new Epyc “Rome” 7nm server CPUs (link)
- ServeTheHome: AMD EPYC 7002 Series Rome Delivers a Knockout (link)
- AnandTech: AMD Rome Second Generation EPYC Review (link)
- Storage Review: AMD EPYC Rome 7002 Series Launched (link)
- tom's hardware: AMD Unveils 7nm EPYC Rome Processors (link)
tom's hardware has the very interesting article "AMD's 64-Core EPYC and Ryzen CPUs Stripped: A Detailed Inside Look" (link).
2019-03-27
Windows Storage Spaces
To look into if and how to use Windows Storage Spaces on a SQL Server database server, I have added 5 SCSI disks to a virtual server running Windows Server 2016 on VMware Workstation.
Please notice that I am working with the local Windows Storage Spaces and not the networked Storage Spaces Direct (S2D). The physical disk on the host is a Samsung 970 Evo NVMe SSD also hosting the host OS Windows 10 Pro.
On these 5 disks I create a storage pool, in which I create 2 Virtual Disks of different sizes and with different allocation unit sizes. The allocation unit sizes are chosen with a SQL Server installation in mind.
Also the logical sector size in the storage pool is set to 4KB where the default is 512B. This is to see how easy it is to change this and maybe later experiment further with the impact of a larger logical sector size.
The storage pool is created with the CmdLet New-StoragePool:
New-StoragePool -FriendlyName Pool_00 -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -LogicalSectorSizeDefault 4KB
The CmdLet returns:
FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly
------------ ----------------- ------------ ------------ ----------
Pool_00 OK Healthy False False
Then the first Virtual Disk is created with the CmdLet New-VirtualDisk:
New-VirtualDisk -StoragePoolFriendlyName Pool_00 -FriendlyName ProgramDisk -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size (42GB) -ProvisioningType Thin
This CmdLet returns:
FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
------------ --------------------- ----------------- ------------ -------------- ----
ProgramDisk Mirror OK Healthy False 42 GB
To create a partition on the Virtual Disk and format the volume with ReFS a small PowerShell script does it:
Get-VirtualDisk -FriendlyName ProgramDisk | Get-Disk |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter F -UseMaximumSize |
Format-Volume -NewFileSystemLabel Program -FileSystem ReFS
This small script returns:
DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
----------- --------------- ---------- --------- ------------ ----------------- ------------- ----
F Program ReFS Fixed Healthy OK 40.93 GB 41.81 GB
When the disk is created, a dialog about formatting the drive might be shown.
To check the volume I use the command-line tool fsutil:
.\fsutil.exe fsinfo refsinfo F:
The tool returns:
REFS Volume Serial Number : 0xee841e1a841de63d
REFS Version : 3.1
Number Sectors : 0x0000000000a74000
Total Clusters : 0x0000000000a74000
Free Clusters : 0x0000000000a3bb69
Total Reserved : 0x0000000000018510
Bytes Per Sector : 4096
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 4096
Checksum Type: CHECKSUM_TYPE_NONE
The second Virtual Disk is created with:
New-VirtualDisk -StoragePoolFriendlyName Pool_00 -FriendlyName DataDisk -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size (123GB) -ProvisioningType Thin
The return is similar to last time:
FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
------------ --------------------- ----------------- ------------ -------------- ----
DataDisk Mirror OK Healthy False 123 GB
The volume is created and formatted with the small script:
Get-VirtualDisk -FriendlyName DataDisk | Get-Disk |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter H -UseMaximumSize |
Format-Volume -NewFileSystemLabel Data -FileSystem ReFS -AllocationUnitSize 64KB
And the return is
DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
----------- --------------- ---------- --------- ------------ ----------------- ------------- ----
H Data ReFS Fixed Healthy OK 121.77 GB 122.81 GB
Check the volume:
.\fsutil.exe fsinfo refsinfo H:
The tool returns:
REFS Volume Serial Number : 0x2620bb5220bb27a7
REFS Version : 3.1
Number Sectors : 0x0000000001eb4000
Total Clusters : 0x00000000001eb400
Free Clusters : 0x00000000001e710d
Total Reserved : 0x00000000000027c8
Bytes Per Sector : 4096
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 65536
Checksum Type: CHECKSUM_TYPE_NONE
Discussion
This is just a look into the creation of a storage pool with different Virtual Disks in Windows Storage Spaces. Performance and optimal configuration for SQL Server are subjects for other and later writings.
Please notice that Windows can't boot from a storage pool. This means that you have to configure more traditional storage for booting and running Windows. Here there is no difference between Windows Server and a Windows desktop OS.
A quick thought, though, is that I would avoid tiered storage, just as in a SAN configuration for SQL Server.
2019-03-06
vmxnet3 network adapter
What
When creating a virtual machine in VMware Workstation the default virtual network adapter is "e1000". This adapter works fine in most cases, but if you are working with heavy network load like iSCSI this adapter is not the right choice.
There are several virtual network adapters from VMware, as described in the document "VMware Virtual Networking Concepts", where the latest vmxnet paravirtualised network adapter, "vmxnet3", usually does the trick.
How
Stop the virtual machine. You can edit the vmx-file while the virtual machine is running, but the running configuration with the e1000 adapter will be written to the vmx-file the next time the virtual machine is shut down or restarted.
Edit the vmx-file, which normally is in the folder of each virtual machine, from
ethernet0.virtualDev = "e1000"
to
ethernet0.virtualDev = "vmxnet3"
If you have defined multiple network adapters you should consider changing the type on each adapter.
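A sketch in PowerShell that changes all defined adapters in one go; the vmx path is a made-up example:
# Change every ethernetN.virtualDev line from e1000 to vmxnet3 in the vmx-file.
$VmxFile = 'D:\VMs\MyVM\MyVM.vmx'
(Get-Content -LiteralPath $VmxFile) -replace '^(ethernet\d+\.virtualDev = )"e1000"$', '$1"vmxnet3"' | Set-Content -LiteralPath $VmxFile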
And then start the virtual machine.
Please notice that when changing the network adapter type on a virtual Windows machine, it will be handled as a new network adapter, not a replacement. Among other things, this has the effect that your network configuration will be reset to defaults on the new network adapter.
This is why I would recommend setting the virtual network adapter type as one of the first things when configuring a virtual Windows machine.
I can't tell how other operating systems like FreeBSD or Linux will act, as I have not tried.
Why
As mentioned, using the vmxnet3 paravirtualised network adapter will give you extra network performance. There might be an increased memory pressure on the virtual machine. Personally I haven't experienced a severe penalty using the vmxnet3 adapter.
History
2019-03-06 Text moved from AzureAdmin blog.
2017-03-12 Blog entry created.
Rename TFS Collection databasefiles
During a TFS (2017) restore on a new server, the wizard might rename datafiles.
If the TFS Collection database is defined with multiple datafiles (mdf & ndf), where the files are named with a suffix like "_<number>", e.g. "TFS_SQLAdmin_Collection_2.ndf", the datafiles are renamed so that the number at the end of the name is replaced with a GUID. This puts you in an unpleasant situation with unnecessarily long file names.
To fix this you have to detach the database, rename the files, and attach the database with the new file names.
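A rough sketch of the fix in PowerShell; the instance, the path, and the GUID in the file name are made-up examples:
# Requires the SqlServer module for Invoke-Sqlcmd.
Import-Module SqlServer
$Instance = '(local)'
$Db = 'TFS_SQLAdmin_Collection'
# Detach the collection database.
Invoke-Sqlcmd -ServerInstance $Instance -Query "EXECUTE [master].[dbo].[sp_detach_db] @dbname = N'$Db';"
# Rename the datafile; repeat for each renamed file.
Rename-Item -Path "D:\Data\${Db}_8f2e4c6a-1b3d-4e5f-9a7b-0c1d2e3f4a5b.ndf" -NewName "${Db}_2.ndf"
# Attach the database again with the new file names.
Invoke-Sqlcmd -ServerInstance $Instance -Query "CREATE DATABASE [$Db] ON (FILENAME = N'D:\Data\$Db.mdf'), (FILENAME = N'D:\Data\${Db}_2.ndf') FOR ATTACH;"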
2019-02-19
Installing SQL Server 2017 on Windows 10
Installing SQL Server Database Engine 2017 together with other major versions can give an error.
The entire installation will fail, and looking into the summary log file reveals an issue installing Microsoft Visual C++ 2015 Redistributable. This is thoroughly described in a StackExchange question (link), but the solution details differ in various situations according to the comments. The Microsoft CSS SQL Server Engineers blog entry (link) gives another solution, repairing the Visual C++ 2015 Redistributable, but that did not work in my case as the component was not installed.
The existing Visual C++ 2017 Redistributable was version 14.16.27012.6, where the link in the StackOverflow answer goes to 14.11.25325.0. The latest version can be found through the support article 2977003 (link). Today the given version is 14.16.27027.1. The version is shown in the file details.
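A quick sketch to list the installed Visual C++ Redistributable versions from the registry:
# List installed Visual C++ Redistributables, both 64- and 32-bit registrations.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*', 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*' |
Where-Object { $_.DisplayName -like 'Microsoft Visual C++*Redistributable*' } |
Sort-Object DisplayName |
Select-Object DisplayName, DisplayVersion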
The name of the installation file is always "VC_redist.x64.exe".
After removing the two Visual C++ 2017 Redistributable installations the SQL Server installation completes with success - partly.
I am actually not using the failed components, but this will be fixed later in this text with an update.
After the SQL Server 2017 installation Visual C++ 2015 Redistributable is on the computer in both x86 and x64.
Installing Visual C++ 2017 Redistributable requires a computer restart on both x64 and x86 installations. This gives two restarts.
After all this, SQL Server 2017 runs nicely.
And then the latest CU can be installed. I notice that the CU installation also updates the two shared features that failed in the SQL Server 2017 installation above.
The update completes with success. Please notice that a final reboot is required as mentioned in the summary log file:
Instance SSDB2017 overall summary:
Final result: Passed but reboot required, see logs for details
Exit code (Decimal): 3010
...
This is not raised in the GUI.
2019-02-03
Installing Ola Hallengren Maintenance Solution
Ola Hallengren Maintenance Solution (OHMS) is with good reason a well-known and widely used solution for SQL Server database backups and maintenance of indexes and statistics together with database integrity check. The solution is recommended by many great DBAs and SQL Server MVPs. Personally I have used the solution with great success in several financial organisations.
Installing OHMS is quite simple. In this text I will look into some details when installing OHMS in a scalable SQL Server infrastructure. The installation is here done in five steps:
- Create DBA database
- Configure OHMS installation
- Run OHMS installation
- Configure OHMS
- Test OHMS
Create DBA Database
Create a separate user database on each instance for SQL Server administration, e.g. [sqladmin] with a prefix for the organisation. Use underscore for space in the name, like [nielsgrove_sqladmin].
This is a general solution that can be used for other database administration solutions.
This database does not require OHMS, but you should still do backups and maintain indexes.
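A minimal sketch of the creation; the instance name is an example:
# Requires the SqlServer module for Invoke-Sqlcmd.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance '(local)\SQLSERVER' -Query 'CREATE DATABASE [nielsgrove_sqladmin];'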
Configure installation
Put OHMS in a separate filegroup, so the database can be used for other purposes now and in the future.
ALTER DATABASE [sqladmin] ADD FILEGROUP [OHMS];
GO
ALTER DATABASE [sqladmin] ADD FILE ( NAME = N'ohms_data0', FILENAME = N'C:\Data\sqladmin_ohms_data0.ndf' , SIZE = 8MB , FILEGROWTH = 64MB ) TO FILEGROUP [OHMS];
GO
ALTER DATABASE [sqladmin] MODIFY FILEGROUP [OHMS] DEFAULT;
GO
The filegroup [OHMS] is marked as default during the OHMS installation.
OHMS is using the schema [dbo] by default. You could change this, but it would require a rather detailed rewrite of the code. As the code in the scriptfile MaintenanceSolution.sql is more than 8000 lines, I would not recommend changing the schema.
Download the script from ola.hallengren.com. Define the database name from above in the USE-statement around line 22.
Create a cmd- or PowerShell-file to implement the configuration. The simple solution used here is a one-line cmd-script, as in the next section. You should always script even simple things, as you are then prepared to scale out with the same quality.
If you are building an automatic installation and upgrade, you should get the precise URLs from the general .com-URL. That might require a more complex configuration.
If you want a longer or shorter history in backuphistory, jobhistory or CommandLog than the default 30 days, you can edit the file MaintenanceSolution.sql by searching for DATEADD(dd,-30,GETDATE()) and changing the value. You should change the history length before running the OHMS installation, as that is more simple than altering the jobs afterwards. I like to have 42 days of history...
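A sketch of such an edit in PowerShell, using the script path from the sqlcmd call below:
# Change the default 30 days of history to 42 days everywhere in the script.
$File = 'C:\SysAdmin\MaintenanceSolution.sql'
(Get-Content -LiteralPath $File) -replace 'DATEADD\(dd,-30,GETDATE\(\)\)', 'DATEADD(dd,-42,GETDATE())' | Set-Content -LiteralPath $File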
Run OHMS installation
Run sqlcmd on the command-line using the configuration above. If you create a more complex configuration, the execution itself will have to be considered in detail.
C:\>sqlcmd.exe -S "(local)\SQLSERVER" -E -d "sqladmin" -i "C:\SysAdmin\MaintenanceSolution.sql"
Changed database context to 'sqladmin'.
And change the default filegroup back to [PRIMARY]:
ALTER DATABASE [sqladmin] MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO
Configure OHMS
The sql-script does create SQL Agent jobs, but the jobs have no schedule. Add a schedule to each job with the stored procedure
[msdb].[dbo].[sp_add_jobschedule]
. This is a quick example:
DECLARE @schedule_id int;
EXECUTE [msdb].[dbo].[sp_add_jobschedule]
@job_name=N'CommandLog Cleanup',
@name=N'OHMS CommandLog Cleanup',
@enabled=1,
@freq_type=8,
@freq_interval=1,
@freq_subday_type=1,
@freq_subday_interval=0,
@freq_relative_interval=0,
@freq_recurrence_factor=1,
@active_start_date=20190203,
@active_end_date=99991231,
@active_start_time=190000,
@active_end_time=235959, @schedule_id = @schedule_id OUTPUT;
SELECT @schedule_id;
GO
OHMS creates these 11 jobs in SQL Agent:
- CommandLog Cleanup
- DatabaseBackup - SYSTEM_DATABASES - FULL
- DatabaseBackup - USER_DATABASES - DIFF
- DatabaseBackup - USER_DATABASES - FULL
- DatabaseBackup - USER_DATABASES - LOG
- DatabaseIntegrityCheck - SYSTEM_DATABASES
- DatabaseIntegrityCheck - USER_DATABASES
- IndexOptimize - USER_DATABASES
- Output File Cleanup
- sp_delete_backuphistory
- sp_purge_jobhistory
You can see the jobs in the table [msdb].[dbo].[sysjobs]:
SELECT * FROM msdb.dbo.sysjobs
WHERE [description]=N'Source: https://ola.hallengren.com'
ORDER BY [name];
Glenn Berry has written a script that creates the jobs with a good reference schedule. The script is introduced in the blog post "Creating SQL Server Agent Job Schedules for Ola Hallengren’s Maintenance Solution".
All OHMS jobs are placed in the SQL Agent job category Database Maintenance. You could change the category with the stored procedure [msdb].[dbo].[sp_update_job], but I think that the benefits will be rather limited.
Log files are placed in the SQL Server ErrorLogPath, where the SQL Server ErrorLog and default traces are placed by default. If you want the OHMS log files in another place, then change the job steps with the stored procedure [msdb].[dbo].[sp_update_jobstep]. You should only do this if you are forced to by stupid policies.
Put the configuration in script file(-s) and make it idempotent. Also put everything in version control, together with the documentation.
If you want to remove the OHMS jobs then this query on the table [msdb].[dbo].[sysjobs] can generate the statements:
SELECT N'EXECUTE [msdb].[dbo].[sp_delete_job] @job_name=N''' + [name] + N''', @delete_unused_schedule=1;'
FROM [msdb].[dbo].[sysjobs]
WHERE [description]=N'Source: https://ola.hallengren.com';
Test OHMS
Run each job, starting with the database integrity checks, then full data backup, moving to differential backup and closing with transaction log backup, spiced up with the index and statistics maintenance jobs.
Check the output of each job, and check the SQL Server ErrorLog after each SQL Server Agent job execution.
You can generate the statements to start the OHMS jobs, referring to the stored procedure [msdb].[dbo].[sp_start_job], in a query on the table [msdb].[dbo].[sysjobs] like above:
SELECT N'EXECUTE [msdb].[dbo].[sp_start_job] @job_name=N''' + [name] + N''';'
FROM [msdb].[dbo].[sysjobs]
WHERE [description]=N'Source: https://ola.hallengren.com';
Improvements
- The installation script should handle both installation and update, with some identification of which version is installed and which version is about to be deployed.
- Integrate the installation with your Definitive Software Library (DSL). Put the complete installation set, with Ola's and your own scripts, in a combined installation set.
- More robust error handling, especially in the tests.
History
2019-02-03 : First text.
2021-06-15 : Section on changed history added.