Wednesday, March 30, 2016

Only a Transaction Log Backup Can Free Space in the Transaction Log in the Full and Bulk-Logged Recovery Models

This post is about the well-known fact that the only way to reclaim space in the Transaction Log in the Full and Bulk-Logged Recovery Models is to perform a Transaction Log Backup.

Paul Randal has a wonderful post on that topic: "Misconceptions around the log and log backups: how to convince yourself".
To quote him: "There’s still a widely held misconception that when properly in the FULL or BULK_LOGGED recovery models that full or differential backups can truncate the log. No. It *NEVER* happens."

His post was published almost SEVEN years ago, but there are still a lot of people who think differently. There are thousands of databases where the transaction log is a hundred times bigger than the data itself.
Petabytes of unnecessary log data around the world sit uselessly on companies' hard drives, bringing revenue to SAN vendors.

As always, I will go through it in steps:

What is the problem?

I will repeat it again and again: if your database is in the Full or Bulk-Logged Recovery Model, the transactions recorded in the Log file won't go away and will sit there forever, until a Transaction Log Backup is performed.

What is wrong with that?

First, the growing log file will eat your disk space. If your database is small and has almost no activity, you can live without even knowing about the problem for a couple of years; then, when your database stops accepting new transactions because of a full disk, you will be running around screaming, trying to figure out how to fix the issue.
Second, the Log file grows in small increments. By default, the Log file auto-growth increment is 10%. It takes about 100 file-growth events to reach a size of 10 GB from the initial size of 0.75 MB. That creates huge file fragmentation, not only on the drive, but also internal Log file fragmentation.
Your first 10 GB will contain about 800 Virtual Log Files (VLFs). I've seen a case where a database's Log file had 200,000 VLFs!
Third, in case of a disaster you would have to back up your whole Transaction Log, which takes a lot of extra time for the gigabytes of old log data while your production system is down.
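To check whether your databases already suffer from this, a quick way is DBCC SQLPERF(LOGSPACE), which reports log size and the percentage in use for every database on the instance. A minimal sketch (the DMV alternative assumes SQL Server 2012 or later):

```sql
-- Report Log file size (MB) and percent of the log currently in use
-- for every database on the instance:
DBCC SQLPERF(LOGSPACE);

-- A newer alternative (SQL Server 2012+): the same information
-- for the current database only.
SELECT total_log_size_in_bytes / 1048576. AS LogSizeMB,
       used_log_space_in_bytes / 1048576. AS UsedLogSpaceMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

If a log is many times bigger than its data file and almost fully "in use", that database is a candidate for the fix described below.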

Why did that happen?

Every new database you create is just a copy of the Model database. The Model database, by default, is in the Full Recovery Model. Very few people change the recovery model to Simple or schedule a log backup job right after database creation.

How to prevent that?

The easiest thing is to switch the Model database on the server to the Simple Recovery Model; then all new databases will be created in Simple mode.
In the case when a customer requires you to have a database in the Full Recovery Model, everybody must understand what that means, why it is necessary, and how to use it to your advantage.
You also have to develop a RESTORE strategy, which will include Transaction Log backups, and then you won't have a problem with a growing transaction log at all.
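The Model database switch mentioned above is a one-line change; a sketch with a verification query:

```sql
-- Switch the Model database to the Simple Recovery Model;
-- every database created afterwards inherits this setting.
-- (Existing databases are NOT affected.)
ALTER DATABASE model SET RECOVERY SIMPLE;
GO
-- Verify the change:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';
```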

To justify my post, here is a little demo:

Script #1. Create a test database. Switch it to the Full Recovery Model and do an initial Full Database Backup.
USE master;
GO
IF EXISTS (SELECT TOP 1 1 FROM sys.databases WHERE name = 'TestLogBackup')
BEGIN
  ALTER DATABASE TestLogBackup SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE 
  DROP DATABASE TestLogBackup;
END
GO
CREATE DATABASE TestLogBackup;
GO
ALTER DATABASE TestLogBackup SET RECOVERY FULL;
GO
BACKUP DATABASE [TestLogBackup] TO  DISK = N'C:\SS2014\Backup\TestLogBackup_1_Full.bak' 
WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Script #2. Reporting script. We will use it multiple times during the demo:
USE TestLogBackup;
GO
SELECT name AS [Database Name], create_date, recovery_model_desc FROM sys.databases 
WHERE name = 'TestLogBackup';
GO
;WITH DbData as (
SELECT name, 
 file_id as "File Id", 
 is_percent_growth,
 growth,
 physical_name as "Physical Name",
 CAST(CAST(ROUND(Size/128.,3) as DECIMAL(16,3)) as VARCHAR(16)) AS SizeMB,
 CAST(CAST(ROUND(FILEPROPERTY(name, 'SpaceUsed')/128.,3) as DECIMAL(16,3)) as VARCHAR(16)) AS UsedSpaceMB
FROM sys.database_files with (nolock) 
)
SELECT name AS [File Name], [File Id],
 RIGHT(SPACE(16) + CASE WHEN Len(SizeMB) > 7 
  THEN CASE WHEN Len(SizeMB) > 10 
  THEN LEFT(SizeMB, LEN(SizeMB) - 10) + ',' + SUBSTRING(SizeMB, LEN(SizeMB) - 10, 3) + ',' + RIGHT(SizeMB, 7)
  ELSE LEFT(SizeMB, LEN(SizeMB) - 7) + ',' + RIGHT(SizeMB, 7) END ELSE SizeMB END, 16) as SizeMB,
 RIGHT(SPACE(16) + CASE WHEN Len(UsedSpaceMB) > 7 
  THEN CASE WHEN Len(UsedSpaceMB) > 10 
  THEN LEFT(UsedSpaceMB, LEN(UsedSpaceMB) - 10) + ',' + SUBSTRING(UsedSpaceMB, LEN(UsedSpaceMB) - 10, 3) + ',' + RIGHT(UsedSpaceMB, 7)
  ELSE LEFT(UsedSpaceMB, LEN(UsedSpaceMB) - 7) + ',' + RIGHT(UsedSpaceMB, 7) END ELSE UsedSpaceMB END, 16) as UsedSpaceMB,
 CASE is_percent_growth WHEN 0 THEN CAST(growth/128 as VARCHAR) + ' Mb'
    ELSE CAST(growth as VARCHAR) + ' %' END as AutoGrowth
FROM DbData
OPTION (RECOMPILE);
GO  
EXEC sp_executesql N'DBCC LOGINFO() WITH NO_INFOMSGS'; 
GO
That script produces the following result sets:
1. Verifies that the database is in the Full Recovery Model;
2. Provides file size and used-space size for the data and Log files;
3. Gives the current list of VLFs in the Log file.
Here is what I have:

Script #3. Generating some data and transaction activity:
SELECT TOP 1000 REPLICATE('A',8000) AS Column_1
INTO tbl_Test_01 FROM sys.messages;
GO
Run Script #2 to see how the files have changed:
As you can see, Used Space has increased in both files; the Log file has grown and the number of VLFs has increased.

Script #4. First test: try to decrease Used Space in the Transaction Log by doing a Differential Database Backup:
BACKUP DATABASE [TestLogBackup] TO DISK = N'C:\SS2014\Backup\TestLogBackup_2_Diff.bak' 
WITH  DIFFERENTIAL, NOFORMAT, INIT, SKIP, NOUNLOAD, STATS = 10;
GO
By running Script #2 you can verify that Used Space in the Transaction Log has not shrunk; it has even increased a little bit:

Script #5. Second test: try to decrease Used Space in the Transaction Log by doing a Full Database Backup:
BACKUP DATABASE [TestLogBackup] TO  DISK = N'C:\SS2014\Backup\TestLogBackup_3_Full.bak' 
WITH NOFORMAT, INIT, SKIP, NOUNLOAD,  STATS = 10;
GO
Run Script #2 again and verify that this does not help either:

Script #6. Third test: try to decrease Used Space in the Transaction Log by doing a Transaction Log Backup:
BACKUP LOG [TestLogBackup] 
TO DISK = N'C:\SS2014\Backup\TestLogBackup_4_Log.bak' 
WITH NOFORMAT, INIT, SKIP, STATS = 10;
GO

That worked! Used Space in the Transaction Log has shrunk:
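Note that the log backup only frees space *inside* the file (it marks VLFs as reusable); the physical file on disk keeps its size. If the file has already ballooned, you can shrink it once after the backup. A sketch (the logical name 'TestLogBackup_log' is the default for this demo database; check sys.database_files if yours differs):

```sql
-- A log backup truncates the log internally, but the file on disk
-- keeps its size. To release the space back to the OS, shrink the
-- log file once after the backup:
USE TestLogBackup;
GO
DBCC SHRINKFILE (TestLogBackup_log, 1);  -- target size in MB
GO
```

Repeated shrinking defeats the purpose; shrink once, then let regular log backups keep the file at a sensible size.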

So, if you did not know it before, now you know: the first line of defense against a growing Log file is having the Model database in the Simple Recovery Model.
The second line of defense is implementing a well-planned RESTORE strategy with regular Transaction Log backups.
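A minimal sketch of such a regular log backup (the path and the naming pattern are illustrative only; in practice this statement would run from a SQL Server Agent job on a short schedule, e.g. every 15 minutes):

```sql
-- Regular log-backup sketch: build a timestamped file name and back
-- up the log to it. Path and naming here are hypothetical examples.
DECLARE @FileName NVARCHAR(260) =
    N'C:\SS2014\Backup\TestLogBackup_Log_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112)                           -- yyyymmdd
    + N'_' + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', '')  -- hhmmss
    + N'.trn';

BACKUP LOG [TestLogBackup] TO DISK = @FileName
WITH INIT, SKIP, STATS = 10;
```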

