Michelle Ufford (SQLFool) wrote a blog entry that is the inspiration for this entry. Ufford's script is changed to fit my present needs.
The script is in three parts, where one part comes in two editions: one to handle a single database and one to handle all databases in an instance.
Part I (temporary data)
SET NOCOUNT ON;
CREATE TABLE #results(
[database_name] sysname
,[vlf_count] INT
,[log_file_count] INT
,[log_size_mb] numeric(15,1)
);
CREATE TABLE #stage(
[file_id] INT
,[file_size] BIGINT
,[start_offset] BIGINT
,[f_seq_no] BIGINT
,[status] BIGINT
,[parity] BIGINT
,[create_lsn] NUMERIC(38)
);
The temporary table #stage holds the output of DBCC LOGINFO as a staging area. The data is then aggregated into the temporary table #results.
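A note on newer versions: to the best of my knowledge, SQL Server 2012 and later return an additional RecoveryUnitId column before file_id from DBCC LOGINFO, so the INSERT ... EXECUTE below fails with a column count mismatch unless #stage gets an extra leading column, roughly like this:
CREATE TABLE #stage(
[recovery_unit_id] INT
,[file_id] INT
,[file_size] BIGINT
,[start_offset] BIGINT
,[f_seq_no] BIGINT
,[status] BIGINT
,[parity] BIGINT
,[create_lsn] NUMERIC(38)
);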
Part IIa (Single database)
INSERT INTO #stage EXECUTE (N'DBCC LogInfo WITH no_infomsgs');
INSERT INTO #results
SELECT
DB_NAME()
,COUNT(*)
,COUNT(DISTINCT [file_id])
,(SELECT SUM([size]*8/1024.) FROM sys.database_files WHERE [type_desc] = 'LOG')
FROM #stage;
DROP TABLE #stage;
The value [size] is the number of 8 KiB database pages in the log file, and it is recalculated to MiB in the column #results.log_size_mb.
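As a quick sanity check of the conversion (the page count here is just an example value): 131072 pages of 8 KiB each is 1 GiB, and the expression reports it as 1024.0 MB.
SELECT 131072 * 8 / 1024. AS [log_size_mb];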
Part IIb (All databases)
DECLARE csr_db CURSOR FAST_FORWARD READ_ONLY FOR SELECT [name]
FROM master.sys.sysdatabases;
DECLARE @dbname nvarchar(128);
OPEN csr_db;
FETCH NEXT FROM csr_db INTO @dbname;
WHILE (@@fetch_status <> -1)
BEGIN
INSERT INTO #stage
EXECUTE (N'DBCC LogInfo([' + @dbname + N']) WITH no_infomsgs');
INSERT INTO #results
SELECT
@dbname
,COUNT(*)
,COUNT(DISTINCT [file_id])
,(SELECT SUM([mf].[size]*8/1024.)
FROM master.sys.master_files AS [mf]
INNER JOIN master.sys.databases AS [db]
ON [mf].[database_id] = [db].[database_id]
WHERE [mf].[type_desc] = N'LOG' AND [db].[name] = @dbname)
FROM #stage;
TRUNCATE TABLE #stage;
FETCH NEXT FROM csr_db INTO @dbname;
END
CLOSE csr_db;
DEALLOCATE csr_db;
DROP TABLE #stage;
Ufford used the (undocumented) stored procedure master.sys.sp_MSforeachdb, whereas I use a cursor to avoid an undocumented object; a rough sketch of the sp_MSforeachdb variant is shown below.
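For comparison, this is a rough sketch (my adaptation, not Ufford's exact code, and the procedure is undocumented and may change) of how the same work could be done with sp_MSforeachdb:
EXECUTE master.sys.sp_MSforeachdb N'USE [?];
INSERT INTO #stage EXECUTE (N''DBCC LogInfo WITH no_infomsgs'');
INSERT INTO #results
SELECT DB_NAME(), COUNT(*), COUNT(DISTINCT [file_id])
,(SELECT SUM([size]*8/1024.) FROM sys.database_files WHERE [type_desc] = N''LOG'')
FROM #stage;
TRUNCATE TABLE #stage;';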
Part III (Analysis)
DECLARE @vlfPerGBHighLimit int;
SET @vlfPerGBHighLimit = 50;
DECLARE @vlfPerGBLowLimit int;
SET @vlfPerGBLowLimit = 2;
SELECT
[database_name]
,[vlf_count]
,[log_file_count]
,[log_size_mb]
,([log_size_mb]/1024) AS N'log_size_gb'
,N'maintenance' =
CASE
WHEN [log_size_mb] < 1024 THEN N'Log too small to evaluate.'
WHEN [vlf_count]/([log_size_mb]/1024) <= @vlfPerGBLowLimit THEN N'Too few VLFs. Shrink and grow log in 8 GB chunks.'
WHEN [vlf_count]/([log_size_mb]/1024) >= @vlfPerGBHighLimit THEN N'Too many VLFs. Shrink and grow log in 8 GB chunks.'
ELSE N'VLF count OK :-)'
END
FROM #results
ORDER BY [vlf_count] DESC;
DROP TABLE #results;
This is inspired by a comment by Kendra Little on Michelle Ufford's blog entry. I rewrote it to use a CASE expression instead of IF statements.
There was an error in SQL Server when growing the log in multiples of 4 GB, and this is why Tripp and others recommend growing by 8000 MB instead of 8192 MB. A sketch of such a shrink and regrow is shown below.
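To follow the maintenance advice above, the principle is to shrink the log file and then grow it back in steps of 8000 MB up to the wanted size. A minimal sketch, assuming a database named MyDatabase with the logical log file name MyDatabase_log (both names are placeholders):
USE [MyDatabase];
DBCC SHRINKFILE (N'MyDatabase_log', 1);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8000MB);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 16000MB);
Depending on the recovery model and the position of the active VLFs, the shrink may need a log backup and a second pass before it releases the space.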
Reference
Michelle Ufford (SQLFool): „Check VLF Counts“
Kimberly L. Tripp: „Transaction Log VLFs - too many or too few?“
Linchi Shea: „Performance impact: a large number of virtual log files“ - Part I and Part II
Paul S. Randal: „Understanding Logging and Recovery in SQL Server“
David Levy (Adventures in SQL): „A Busy/Accidental DBA’s Guide to Managing VLFs“