2018-09-19

Team Foundation Server on a single server

Preparations

This is a description of installing Microsoft Team Foundation Server (TFS) 2017 or 2018 on a single server. I am doing it on a virtual machine running Microsoft Windows Server 2016 in VMware Workstation version 14.

Microsoft has provided the document "Install TFS on a single server" (TFS single server), which describes several activities and hints quite nicely. I am quite new to TFS and similar platforms, which gives me some experiences that are not described in the Microsoft document. This is the motivation for this text and the work behind it.

There is also a document from Microsoft on "Install and configure Team Foundation Server", but at the time of writing it covers TFS 2015 and not TFS 2018.

Microsoft SQL Server

The Microsoft document on a TFS single server installation describes a TFS installation using Microsoft SQL Server Express as the database platform. But that solution is rather limited, so I have decided to use a "real" Microsoft SQL Server installation in this scenario.

Besides SQL Server Database Engine - with Full-Text Search - the SQL Server installation also includes SQL Server Analysis Services (SSAS) with a Multidimensional database model, SQL Server Reporting Services (SSRS) in a Native setup and SQL Server Client Connectivity Tools.

If you are using SQL Server 2017, then Reporting Services is a separate download and installation, but you will still need the product key from the SQL Server installation.

The trick is
  1. Install SQL Server Database Engine first. During the installation copy the product key. You can put it in a text file for future reference. Do not patch to the required CU level yet.
  2. Install SQL Server Reporting Services with the product key from the general SQL Server installation. During the installation you are required to restart the server. Do that and configure Reporting Services with the Report Server Configuration Manager. You can name the Reporting Services databases the same way as the databases you plan for TFS, e.g. "Tfs_SQLAdmin_*".
  3. Patch the SQL Server platform with the latest approved Cumulative Update (CU). The CU installation set will also patch Reporting Services.

To make TFS backup and restore work, a network path to a share is required. This requirement is there to (try to) ensure that the backup files are placed on another machine than TFS.
To make TFS backup and restore work on a stand-alone sandbox server like this one, a local share can be created.
DO NOT DO THIS IN PRODUCTION!!!

If you want the database files placed in another path than the default location, you should change the database default locations on the SQL Server Database Engine instance.
You might run into some difficulties if you create the path in your own context. To get the ownership of the folders correct you can use the (undocumented) extended stored procedure master.sys.xp_create_subdir. I would not recommend enabling xp_cmdshell as described in some "solutions".
Or you could move the databases right after completion of the TFS installation. Still, you should do it through SQL Server with the xp's.
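
A minimal sketch of creating the folders through SQL Server could look like this - the paths are hypothetical and must be changed to match your own standard:
EXECUTE master.sys.xp_create_subdir N'D:\MSSQL\Data';
EXECUTE master.sys.xp_create_subdir N'D:\MSSQL\TransLog';
The folders are then created in the context of the Database Engine service account, which is what gets the ownership right.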

Disclaimer: I have worked with SQL Server as my primary field for 10+ years, so this description might be too thin for some. If you can't find what you need in this text or the documentation from Microsoft on installing SQL Server, then please write a comment to this text.

Oracle JRE

TFS uses the third-party product Elasticsearch for Code Search, and Elasticsearch requires a local Java runtime installation. The documentation from Microsoft specifies the requirement to be Oracle Java Runtime Environment (JRE) 8 update 20 or later. I usually prefer the latest version, and at the time of writing that is JRE 8 update 172.

Get the Server JRE from Oracle Java downloads (link). You might have to scroll a few screens down to find Java 8.

Unlike other Java installations it is packed several times into a tar.gz-file. This is how I unpacked the installation using 7-Zip before putting it on the server:
  1. Unpack the tar.gz-file to the same folder.
  2. Unpack the tar-file to a subfolder, e.g. "\server-jre-8u172-windows-x64\".
  3. The subfolder then holds only one element, a JDK subfolder, e.g. "\jdk1.8.0_172\".
  4. Copy this folder to the server in the path you see fit, e.g. %ProgramFiles%\Java.
It looks like we actually get a JRE plus some JDK elements...

After installing the JRE, the Windows environment variable JAVA_HOME must be set, as it is required by Elasticsearch.

Set the value of the variable to the full path of the JRE folder, e.g. "%ProgramFiles%\Java\jdk1.8.0_172":

Please notice that the full path points to the JRE installation at the given update level. When the JRE is updated, the environment variable JAVA_HOME must also be updated.

Service accounts

The SQL Server components are installed with local virtual accounts as service accounts. If the installation is in an Active Directory (AD) domain, I would consider using AD Managed Service Accounts (MSA) for the SQL Server components.

For the TFS components these service accounts are created locally:
  • TFS Service (tfsservice); Log on as a service
  • TFS Reports (tfsreports)
  • TFS Build (tfsbuild); Log on as a service
  • TFS Search (tfssearch); Log on as a service
Most of the services require the Windows user right "Log on as a service". It can be granted like this:
  1. Open "Local Security Policy" (secpol.msc) from "Administrative Tools".
  2. Expand "Local Policies".
  3. Click "User Rights Assignment".
  4. Right-click "Log on as a service" in the right pane and select Properties.
  5. Click "Add User or Group...".
  6. Enter user to add and click OK.
  7. Click OK.
  8. Close "Local Security Policy".
If the installation is in an AD domain, I would create the service accounts as domain accounts.

When preparing for a TFS installation in Production I would definitely ensure that the service accounts are hardened and secured. Generally I think that source code and development documentation are core business information and must be secured as much as possible.

Installation

The installation set can be downloaded as either an EXE- or an ISO-file. I prefer the ISO-file as I like the way it can be mounted on the virtual server and used without much ado.

TFS installation

The installation is started with the Tfs exe-file in the root of the installation set, e.g. "Tfs2018.exe". In the first screen the installation path can be entered, which I think should be considered if your server standard aims to protect the system drive by separation of system and applications.

The basic TFS installation takes several minutes:


When the basic TFS installation is completed the machine must be restarted.

After the restart the TFS installation automatically starts the TFS Configuration Wizard:
The TFS Configuration Wizard collects configuration data, verifies the configuration and executes the configuration.

As this is an isolated sandbox installation I choose to participate in the Microsoft Improvements Program:
This I would not do in Production as it requires the server to be connected to the internet, and I really don't know or control what is sent to Microsoft and how it is sent.

This is a new TFS installation and I don't have databases or any other configuration items on TFS:

With TFS 2018 Update 3 you are asked if the installation is a trial. The default answer is a trial installation. If you have a Visual Studio or MSDN subscription, I recommend not installing as a trial. One thing is that the trial installation will expire after 60 days:

At first it would seem obvious to do a basic deployment of TFS, but as I would like integration with SSRS I have to select an Advanced TFS deployment:

The TFS language selection is English. While I am in a very small language area with Danish, I generally prefer servers to "speak" English. This removes a lot of communication issues - also with consultants not speaking Danish:

The SQL Server Database Engine is here installed with a named instance. I have not created empty and pre-sized databases. This I will get back to later:

The default service account for the TFS service is Local Service, but as mentioned earlier I have created dedicated service accounts. Please notice the possibility to test the service account:

The TFS Application Tier is configured with defaults:

The TFS Search is also configured with a dedicated service account and not the default Local Service. As this installation is a single server installation there is no existing Search Service. When installing TFS in Production a separate TFS Search server is highly recommended:

With Update 3 for TFS 2018 Microsoft has introduced basic authentication to secure the communication with TFS Code Search. There is a note on the Microsoft DevOps blog "Introducing Search service authorization to make communications with TFS more secure" with some background and an upgrade path.

This is where the integration with SSRS is indicated:

When the SSRS installation is configured and running before the TFS installation, the TFS Configuration Wizard can detect not only the SSRS instance but also the SSRS URLs:

The TFS Reporting also uses SSAS, and when the SSAS installation is configured and running before the TFS installation, the TFS Configuration Wizard can detect the SSAS instance:

To segregate the TFS Reporting from the rest of the TFS installation it is highly recommended to use a dedicated service account for TFS Reporting:

I do not go with the default TFS project collection as this will give some nasty restrictions on future consolidations:

When the TFS configuration definition is completed, the wizard gives you the possibility to manually review the complete TFS configuration:

Finally the TFS Configuration Wizard verifies the complete configuration. It is at this point that things like the SSAS database model are verified. The verification takes several minutes:

When the verification is complete you get a picture like this. I had to do some reruns of the verification before getting it right:

Then it is time for the actual configuration of the TFS installation. This will take some minutes:

Again you will get a nice status picture when the TFS configuration is complete:

Finally the TFS Configuration Wizard shows a link to the log file, a link to TFS and other useful configuration details.

The TFS site is running on Microsoft Internet Information Server (IIS) in a dedicated application pool. The default application pool is also running, but in a Production installation I would really consider stopping the default application pool to comply with the Principle of Least Service:

And the scrolling text is something I think should be in the general documentation:

When the TFS Configuration Wizard is closed the installation automatically starts the TFS Administration Console:

Configurations

After the TFS installation I pinned some tools to the Windows Start:

Also I created these shortcuts on the Windows Desktop:

  • Report Server (http://localhost/reportserver)
  • Reports (http://localhost/reports)
  • TFS (http://localhost:8080/tfs)
  • TFS Logs Folder (C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs)

TFS Scheduled Backup

Even in a TFS sandbox a backup should be scheduled. See more below in the section about Database Configuration.

The TFS Scheduled Backups Wizard requires a Network Backup Path that must be a UNC path. Please see the section in the beginning of this text about SQL Server preparations.

A simple TFS backup schedule could look like this:
Please notice that this is a TFS backup schedule for a sandbox installation. In Production or another more important environment I would really consider changing the retention to a higher value.

Database Configuration

The TFS Configuration Wizard is running in the user context of the administrator who started the TFS installation. This means that the user is the owner of the databases. Such a configuration should be changed to your standard, as it could otherwise generate problems when the user is disabled or deleted in the AD domain:
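
A minimal sketch of changing the owner on a TFS database could look like this, where [sa] is a hypothetical choice of standard owner - repeat for each TFS database:
ALTER AUTHORIZATION ON DATABASE::[Tfs_SQLADMIN_Collection] TO [sa];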

The database recovery model on the created TFS databases is Full - even if the model database is configured for Recovery Model Simple before the TFS installation. Usually I would change the database Recovery Model to Simple in non-Production installations, but scheduling TFS database backup in the TFS Administration Console will change the Recovery Model to Full on all databases, also ReportingServicesTempDB. And actually this makes sense, as TFS uses marked transactions in the backup to ensure consistency across the databases.
Actually the change of Recovery Model is done when the Backup Schedule is configured and in the context of the user working in the TFS Administration Console. The backup itself is taken in the context of the TFS service account (tfsservice). Please remember that the backup files are created in the context of the Database Engine instance service account, and that this account must have create and write access to the shared storage on the backup share.
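
The current Recovery Model on the TFS databases can be checked with a simple query like this sketch, assuming the databases are named with a "Tfs_" prefix as above:
SELECT [name], [recovery_model_desc]
FROM [sys].[databases]
WHERE [name] LIKE N'Tfs_%';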

If the TFS databases are created on SQL Server 2017 then the Compatibility Level of the TFS databases will not match the Compatibility Level of the Database Engine instance. To fix this the Compatibility Level of the TFS databases should be raised from 130 to 140. The TFS installation will still be supported as SQL Server 2017 is on the list of requirements for TFS 2018.
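
A sketch of raising the Compatibility Level on one of the TFS databases - repeat for each TFS database:
ALTER DATABASE [Tfs_SQLADMIN_Collection] SET COMPATIBILITY_LEVEL = 140;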

The initial size of the databases is defined by the size of the system database model. During the TFS installation the databases grow several times. The databases should be re-sized with additional space to minimize autogrow events. When installing TFS in Production I would really go with empty pre-defined databases to avoid database autogrows:
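
A re-size of a data file could be done with a statement like this sketch, where both the logical file name and the size are examples that must be adjusted to the actual installation:
ALTER DATABASE [Tfs_SQLADMIN_Collection] MODIFY FILE ( NAME = N'Tfs_SQLADMIN_Collection', SIZE = 1024MB );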

Also I would reconfigure the database autogrow size - especially the TFS Collection database, which is initially defined with exponential growth:
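
A fixed autogrow size can be set with a statement like this sketch - again the logical file name and the growth size are examples:
ALTER DATABASE [Tfs_SQLADMIN_Collection] MODIFY FILE ( NAME = N'Tfs_SQLADMIN_Collection', FILEGROWTH = 512MB );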

To optimise performance on the TFS Collection database I added some extra data files to allow for parallel I/O:
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_01', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_01.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_02', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_02.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_03', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_03.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_04', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_04.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_05', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_05.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_06', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_06.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_07', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_07.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];

Discussion

Continuity

When installing a more permanent TFS installation you must plan and test full or partly recovery of the TFS installation. There is a possibility to configure Scheduled Backups in TFS Administration Console:

If you choose to go with this possibility, you should still not reduce the recovery tests.

Security

The default authentication on TFS is NTLM, but according to the documentation it is possible to use Kerberos authentication. This I would like to look into before building a TFS installation in Production.

History

2018-09-19 Moved to SQLAdmin blog.
2018-02-13 First published on AzureAdmin blog.

2018-09-11

Multiple data files for TFS Collection database

Unfortunately Microsoft Team Foundation Server (TFS) creates a collection database with only one data file. Like other products with a high-activity database, such as SharePoint, the database should have multiple data files to allow for parallel I/O.
Paul Randal has written a fine piece, "Benchmarking: do multiple data files make a difference?", that you should also read.

Findings

1111 autogrow events on the data file in 22 days. This is partly because the growth rate is 50 MiB.
The data file is about 176 GiB.
Unfortunately I can't get the file fragmentation as I don't have access to the OS. But worst case could be the difference between the current size and the model database size divided by 50 MiB, which is about 3600 fragments. Even though I have to expand the extra data files in small steps of 512 MiB, the final fragmentation on each extra data file will only be about 48 fragments. The total fragmentation after this activity will theoretically be around 830 fragments, which is a significant reduction (77 %) in external fragmentation.
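
One way to count the autogrow events is a query on the default trace, like this sketch (assuming the default trace is enabled on the instance):
DECLARE @TracePath nvarchar(260) = (SELECT [path] FROM sys.traces WHERE is_default = 1);
SELECT
  [trace_events].[name] AS [event_name],
  COUNT(*) AS [event_count],
  MIN([trace].[StartTime]) AS [first_event],
  MAX([trace].[StartTime]) AS [last_event]
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS [trace]
INNER JOIN sys.trace_events AS [trace_events]
  ON [trace_events].[trace_event_id] = [trace].[EventClass]
WHERE [trace_events].[name] IN (N'Data File Auto Grow', N'Log File Auto Grow')
  AND [trace].[DatabaseName] = N'TFS_SQLADMIN_Collection'
GROUP BY [trace_events].[name];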

General solution

  1. Create extra data files (ndf).
  2. Move data to extra data files from first data file (mdf).
  3. Shrink the first data file to the same size as the extra data files.

Sounds easy - but...

  • Mind the transaction log. It should not autogrow as a result of this activity. And the movement of data pages is registered in the transaction log.
  • Data pages are not moved from the end of the first data file automatically by SQL Server.
  • Limited storage...

Testing in a sandbox with sufficient storage showed some challenges.
It looks like the target size on the extra data files is 24 GiB. With increments of 512 MiB there are about 48 rounds.

The real deal

  • Automated translog backup once an hour, scheduled and managed by an external operator. 48 rounds times one hour gives 48 hours. At least - but on the other hand each round takes only about 10-15 minutes on the given hardware.
  • sysadmin. But not local administrator on the Windows Server. Outsourced operations...
  • Trace Flag 1117 set as default by the outsourcing vendor. Otherwise this would have been a different challenge.
All this considered the activity can be described as two major activities where the second will have several rounds:
  1. Create the extra data files (ndf) with small initial size, e.g. 512 MiB. You could disable autogrow on the first data file (mdf) to avoid further unwanted autogrows.
  2. Repeat these steps until the first data file (mdf) has equal size compared to the extra data files (ndf).
    1. Disable automatic filegrowth on extra data files (ndf). This is to avoid the extra data files (ndf) growing wild and consuming storage out of control while data pages are moved from the first data file (mdf).
    2. Move data pages to the extra data files using DBCC SHRINKFILE (logical filename, EMPTYFILE).
    3. Enable autogrow on the extra data files, e.g. 512 MiB.
    4. Move allocated data pages in the first data file from the end of the file to the front of the file using DBCC SHRINKFILE (logical filename, NOTRUNCATE).
    5. Truncate the first data file (mdf) using DBCC SHRINKFILE (logical filename, targetsize in MiB).
    6. Expand the extra data files in a small increment, e.g. 512 MiB. This step can be omitted the first time, right after the extra data files (ndf) are created.
    7. Check data space in each data file through the system view sys.database_files or the SSMS report "Disk Usage".
    8. Wait for translog backup.
    9. Confirm translog backup and free space in translog.
The steps above are described and discussed in detail in the following. I have not created a complete turn-key script as the naming will differ from one TFS installation to another. But you should be able to create your own script by copying and altering the statements.

    First - Create extra data files

    In this case I add seven extra data files as the database server has more than eight processors (cores). This gives a total of eight data files which I find a fine compromise between multiple data files and simple administration.
    I add each extra data file in a separate batch to minimise the impact on the running database installation:
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_1',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_1.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_2',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_2.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_3',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_3.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_4',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_4.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_5',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_5.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_6',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_6.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
      NAME = N'TFS_SQLADMIN_Collection_7',
      FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_7.ndf',
      SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
    GO


    After adding the extra data files the autogrowth is disabled on the first data file (mdf):
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection', FILEGROWTH = 0);

    Second - Balance data files

    The second activity could be automated, but in my case the server hosts other databases, and maybe some developers were working late... So I chose to do this manually step-by-step while keeping a sharp eye on the general behaviour of the server.

    Filegrowth on extra data files

    Automatic filegrowth on the extra data files is disabled during movement of data pages from the first data file (mdf) to the extra data files (ndf). This is to avoid autogrow events and keep the storage usage in control during the entire operation.

    If you set all data files, both the first and the newly added, to zero filegrowth, then you can reduce the fragmentation of the files and spare some time by not completely emptying the first data file.

    The filegrowth on the extra data files (ndf) is handled manually during the operation. In this case I ended up with a growth of 512 MiB in each step, based on several experiments in a sandbox.

    The growth rate should be small enough to make it possible to handle the amount of data within one log backup without autogrow events on the transaction log file (ldf). This is again to keep the storage usage in control during the operation.
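
    A sketch of disabling autogrow on the extra data files (ndf) - the logical file names match the extra data files created above:
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_1', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_2', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_3', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_4', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_5', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_6', FILEGROWTH = 0);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_7', FILEGROWTH = 0);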

    Move data pages to extra files

    Data pages are moved in the database "TFS_SQLADMIN_Collection" from the first data file (mdf) to the extra data files (ndf) with the command
    DBCC SHRINKFILE (TFS_SQLADMIN_Collection, EMPTYFILE);
    It is the logical name of the data file that is required as the first parameter. Here, somewhat unfortunately, the database and the first data file have the same name.
    The string holding the logical filename is not wrapped in apostrophes like a string usually is in T-SQL.
    The parameter EMPTYFILE will move all data pages from the given data file and prepare the file for deletion. In this case I only want the file reduced to a size equal to the other data files. According to the documentation on DBCC SHRINKFILE, an ALTER statement on the data file will remove the emptyfile-flag from the file.

    While the data are moved to the extra data files you can see a session with your user running the command "DbccFilesCompact". Also you can see heavy I/O activity on all data files; reads on the first data file and writes on the extra data files.

    Please notice that the moved data pages are also registered in the transaction log together with their data.

    Move allocated data pages to front of data file

    This can be visualised as a movement of allocated data pages from right to left on a line illustrating the data file, with the beginning of the file to the left.
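
    As in the step list above, the movement is done with DBCC SHRINKFILE and the NOTRUNCATE parameter, again with the logical name of the first data file:
    DBCC SHRINKFILE (TFS_SQLADMIN_Collection, NOTRUNCATE);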

    Truncate data files

    When the end of the data file is liberated of allocated data pages, the data file can be truncated down to the last extent holding allocated data pages with the command
    DBCC SHRINKFILE (TFS_SQLADMIN_Collection, 24576);

    The command will truncate the file to 24 GiB (24576 MiB), which is my initial qualified guess on the final size of all data files. If the file can't be truncated to this size because the file holds more allocated data pages than the limit, then the shrink operation will stop at the last extent holding allocated data pages. But the command will not throw an error - it just finishes gracefully as documented.

    Now you can set the desired autogrow rate on all the data files. The growth rate must be the same on all the data files.
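
    A sketch of setting the same autogrow rate on the first data file (mdf) and the extra data files (ndf) - repeat the last statement for each of the remaining extra data files:
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection', FILEGROWTH = 512MB);
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection_1', FILEGROWTH = 512MB);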

    Expand the extra datafiles

    The extra datafiles ndf) are then expanded with the increment described above with a statement like
    USE [master]
    GO
    DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z]  Expand extra data files (ndf) to 6 GiB each...';
    RAISERROR( @msg, 0, 0);
    GO
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_1', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_2', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_3', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_4', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_5', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_6', SIZE = 6GB );
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_7', SIZE = 6GB );
    GO
    DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Extra data files (ndf) expanded to 6 GiB each.';
    RAISERROR( @msg, 0, 0);
    GO

    The size can only be defined in integers on the given scale like "GB" or "MB". So the next expansion of the extra data files (ndf) will have the final size defined in a lower scale but with a larger number, like
    ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_1', SIZE = 6656MB );
    as 6656 MiB equals 6.5 GiB.

    Before and after I have put a small message with a timestamp. This makes it possible to log the execution and get some durations.
    During the entire activity I have copied the output, like these messages, to a Word document. As the activity is spread over several days I think it is useful to compile the output and take a look at it when the dust has settled.

    As there are several rounds with expansion statements like above I have created a PowerShell script to generate the statements.
    function Get-SqlExpandNdf {
      for ([int]$Size = 512; $Size -ile (24*1024); $Size += 512) {
        if ($Size % 1024 -ieq 0) {
          [string]$FileSize = "$($Size/1024)GB"
        }
        else {
          [string]$FileSize = "$($Size)MB"
        }

        [string]$SizeLabel = "$(([System.Math]::Round($Size/1024,1)).ToString().Replace(',','.')) GiB"

        'USE [master];'
        'GO'
        "DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Expand extra data files (ndf) to $SizeLabel each...';"
        'RAISERROR( @msg, 0,0 );'
        'GO'

        (1..7) |
        ForEach-Object {
          "ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_$_', SIZE = $FileSize );"
        }

        'GO'
        "DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Extra data files (ndf) expanded to $SizeLabel each.';"
        'RAISERROR( @msg, 0,0 );'
        'GO'
        ''
      }
    }


    The output can be streamed to a sql-file with the CmdLet Out-File
    Get-SqlExpandNdf | Out-File -LiteralPath 'C:\temp\ExpandNdf.sql' -NoClobber
    and added to a general script file for the activity.

    Check data space

    The data space and the distribution of data on the data files can be seen with a query like this
    USE [TFS_SQLADMIN_Collection];
    GO
    SELECT
      [database_files].[name],
      [database_files].[file_id],
      [database_files].[physical_name],
      ([database_files].[size] * 8.0/1024/1024) AS [size_in_gb],
      (([database_files].[size] * 8.0/1024) - (FILEPROPERTY([database_files].[name], 'SpaceUsed') * 8.0/1024)) AS [free_space_in_mb],
      (([database_files].[size] * 8.0/1024/1024) - (FILEPROPERTY([database_files].[name], 'SpaceUsed') * 8.0/1024/1024)) AS [free_space_in_gb]
    FROM [sys].[database_files];
    GO
    USE [master];
    GO

    The query will return a result set with six columns and as many rows as there are files in the database, including transaction log files (ldf). A result set could look like this
    name                        | file_id | physical_name                                 | size_in_gb | free_space_in_mb | free_space_in_gb
    TFS_SQLADMIN_Collection     | 1       | G:\MSSQL\data\TFS_SQLADMIN_Collection.mdf     | 132.54     | 22.50            | 0.02
    TFS_SQLADMIN_Collection_log | 2       | T:\MSSQL\data\TFS_SQLADMIN_Collection_log.ldf | 28.78      | 22171.10         | 21.65
    TFS_SQLADMIN_Collection_1   | 3       | G:\MSSQL\data\TFS_SQLADMIN_Collection_1.ndf   | 6.50       | 502.06           | 0.49
    TFS_SQLADMIN_Collection_2   | 4       | G:\MSSQL\data\TFS_SQLADMIN_Collection_2.ndf   | 6.50       | 509.56           | 0.49
    TFS_SQLADMIN_Collection_3   | 5       | G:\MSSQL\data\TFS_SQLADMIN_Collection_3.ndf   | 6.50       | 509.37           | 0.49
    TFS_SQLADMIN_Collection_4   | 6       | G:\MSSQL\data\TFS_SQLADMIN_Collection_4.ndf   | 6.50       | 510.25           | 0.49
    TFS_SQLADMIN_Collection_5   | 7       | G:\MSSQL\data\TFS_SQLADMIN_Collection_5.ndf   | 6.50       | 509.62           | 0.49
    TFS_SQLADMIN_Collection_6   | 8       | G:\MSSQL\data\TFS_SQLADMIN_Collection_6.ndf   | 6.50       | 510.62           | 0.49
    TFS_SQLADMIN_Collection_7   | 9       | G:\MSSQL\data\TFS_SQLADMIN_Collection_7.ndf   | 6.50       | 510.43           | 0.49

    Discussion

    This activity should have been done when the TFS Collection database was created. But when this is not the case one might as well get on with it as soon as possible, as the technical debt will only increase.

    I could have expanded the extra data files in each run just after autogrow was re-enabled on the files, but this order gave me a better script file structure.

    I have seen a few recommendations on moving the table "tbl_Content" to a custom filegroup. This filegroup should also be configured with multiple data files. I have not gone further with this configuration as the organisation has only about 1100 employees with around 300 active TFS users, so I figure the TFS collection will not be big enough for this to really matter.