ColdFusion Muse

MS SQL and the Tragedy of Log File Obesity

Mark Kruger | June 30, 2005 9:59 PM | MS SQL Server | Comments (9)

Did you ever have this annoying problem? About once every 2 months a customer will call, panicky because the server is generating errors that say the "transaction log is full". Many (maybe most) customers don't know what it is or what it's for - but they suddenly have a problem with it. The transaction log is the history of all the database changes since the last complete backup. Its purpose is to provide recovery for you AFTER the last complete backup. That is to say, if you have a complete backup from 1 a.m. and a transaction log, you could restore your database back to "1:00 a.m." status - and then restore it "forward" to a particular point in time.
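
To make that concrete, a point-in-time restore looks roughly like this (the backup file names here are made up for illustration):

-- restore the 1 a.m. full backup, but leave the db able to accept more restores
RESTORE DATABASE yourDatabaseName
   FROM DISK = 'c:\backups\yourDatabaseName_full.bak'
   WITH NORECOVERY

-- then roll the transaction log "forward" to a particular moment
RESTORE LOG yourDatabaseName
   FROM DISK = 'c:\backups\yourDatabaseName_log.trn'
   WITH RECOVERY, STOPAT = 'June 30, 2005 6:45 AM'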

This useful feature comes at a cost: everything must be recorded in the transaction log. On a database with many updates, deletes and inserts the log can grow ginormous (thanks Elf)! I saw one today that was 9 gigabytes for a 12 megabyte database - a logging and tracking database. Unless you've had this problem you may not know how to fix it. Here are two ways to do it.

Truncate and Shrink

Using the "sa" user and the "master" database, run the following commands.

EXEC sp_dboption 'yourDatabaseName', 'trunc. log on chkpt.', 'TRUE'

This effectively causes the server to truncate the log the next time it runs a "checkpoint". A checkpoint is a synchronization task that flushes "dirty pages" to the disk. That sounds kind of naughty, but it really means that the server is periodically saving your data from the cache - where much of it is kept for speed reasons - to the disk, where it is stored "permanently". By setting the "trunc. log on chkpt." option you are telling the db to go ahead and clear the transaction log the next time it is safe to do so.
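
If you want to double check that the setting took, you can ask the server which recovery model the database is using - the old "trunc. log on chkpt." switch is really just another name for the simple recovery model:

-- returns SIMPLE once 'trunc. log on chkpt.' is TRUE
SELECT DATABASEPROPERTYEX('yourDatabaseName', 'Recovery')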

Next you can "force" a checkpoint for your db by running the following:

USE yourdatabasename
CHECKPOINT
GO

You are now ready to "shrink" the database. MS SQL actually gives you a couple ways to do this, but the DBCC command is probably the easiest.

DBCC SHRINKDATABASE (yourdatabasename, 10)

What's that number 10 for? It's the "percent free" number - it tells the server to leave a certain amount of free space (as a percentage) in the file. This is useful because it keeps the server from having to "resize" the file too often. This command will return a record set with some size figures (in megabytes) showing you how successful you have been.
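
If you want a before-and-after picture, this command lists every log file on the server along with how much of it is actually in use:

-- one row per database: log size (in megabytes) and percent of the log in use
DBCC SQLPERF(LOGSPACE)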

Detach and Reattach

This is more complicated, but there are times when the DBCC command won't do because the database is too large or the process is failing for some reason (like disk space). Take care when using this method. Again, in Query Analyzer, run the following:

EXEC sp_detach_db 'yourdatabasename', 'true'

Note - at this point your database is no longer a member of the server - it's gone. The MDF and LDF files are still in the directory, but the db is no longer "online" - so be prepared. Now, go into the data directory (usually something like c:\mssql\data\) and find the LDF file associated with the DB. The LDF file is the log file; the MDF file is the actual data file. If you delete the MDF you are out of luck. Let me say it one more time: don't delete the MDF file. The LDF file is usually the name of your db + _log.ldf - for example, "pubs_log.ldf". Once you locate it, delete it. That's right, send it to the showers.
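
By the way, if you are not sure which physical files belong to the database, you can ask the server - just be sure to do it BEFORE the detach, since it only works while the db is still attached:

USE yourdatabasename
-- lists the logical name, physical path and size of each file
EXEC sp_helpfile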

You now have a valid db file with no log file. Your next task is to "attach" it back to the server. Here's the code:

EXEC sp_attach_db @dbname = N'yourdatabasename',
@filename1 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\yourdatabasename_data.mdf'

If you look at the docs you will notice that it usually calls for two file names - the data and the log - like this:

EXEC sp_attach_db @dbname = N'yourdatabasename',
@filename1 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\yourdatabasename_data.mdf',
@filename2 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\yourdatabasename_log.ldf'

If you leave off the log file, however, you only get a warning. The stored proc creates a new log file for you and you are good to go. Remember, to use this method you are taking your DB offline until it is reattached, and you are deleting the log file. That makes it a more drastic approach than the one above.
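
Once it's reattached, a quick sanity check will show the database back online along with its brand new (and much smaller) log file:

-- shows the db, its files and their sizes
EXEC sp_helpdb 'yourdatabasename'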


9 Comments

  • Posted By Ryan Guill | 6/30/05 8:56 PM
    Wouldn't the best bet be to just do a new update?
  • Posted By thomaxin | 7/1/05 1:49 AM
    If you set a limit on the size of your Transaction Log in the DB properties AND back up the log regularly (using SQL Server Agent Jobs) it'll automatically rotate and you don't need to worry about it any longer...

    I found this when I had the same problem:
    http://www.experts-exchange.com/Databases/Microsof...
  • Posted By mkruger | 7/1/05 8:20 AM
    Yes - but if you read carefully there are some trade-offs. You have to back up the log files regularly - say 1 to 3 times a day (a slight performance drag). In addition, backing up the logs regularly doesn't in itself help you if you have a space issue. And because you have the "maxsize" set for the log file you still run the risk of getting a "log full" error message - it will take some fine tuning to get it right. In fact, this is the approach we often take when solving this problem. It DOES give you the benefit of full logging, and it will prevent the error from occurring.


    What often happens is that a small database with little activity will suddenly become quite important because of business changes - but fine tuning that database is not on anyone's radar, so it suddenly "pops up" with an error. It may even be configured as you suggest, but it is suddenly generating log file data in the gigabytes instead of megabytes - which causes the error to show up.

    I would add that full logging is not necessary on every DB. I always try to get the customer to consider what "worst case" they can tolerate. Some of the worst offenders are "logging" databases whose sole task is to track page views or banners. Losing a maximum of 24 hours of data is not critical in MOST cases, so I suggest setting the truncate on checkpoint option to true and minimizing the log file size.
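
    For reference, the kind of scheduled log backup thomaxin mentions boils down to something roughly like this (the path here is made up for illustration):

    -- each log backup truncates the inactive portion of the log,
    -- which is what keeps it from growing without bound
    BACKUP LOG yourdatabasename
       TO DISK = 'c:\backups\yourdatabasename_log.trn'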
  • Posted By Ryan Guill | 7/1/05 8:40 AM
    I'm sorry - I meant backup, not update, last night. It was getting late and I didn't really have any business trying to write anything at all. I see what you are saying though: it's getting too full too quickly for you to do a backup, because you can't be running backups all day.
  • Posted By mkruger | 7/1/05 9:25 AM
    Yes - you could do a full backup and then shrink the db - but that will doubtless take longer than solution 1 - and it will be a drain on the resources of the server. Backups are usually done during lulls in server activity - while this error usually occurs at a peak. Consider a 1 or 2 gig database with a 10 gig log file: backups could slow you down for a lot longer than it will take to truncate the log and shrink it.

    The other issue is space. Often this error comes to light because a server is running out of space - due to an enormous log file. Backups don't help (unless they are to tape - which takes even longer). There are many "emergency situations" where this is the way to go. It should not be standard practice, but when you are in a bind, you may be out of options.
  • Posted By PeteC | 8/22/05 9:23 AM
    Hi there.
    Had a similar problem recently with a non-CF app called WebTime. I lost most of my hair before I realised that the DB was set for single user permissions.
    This is typical of a database set up and used by 3rd party software.
    With the single user permissions, DBCC SHRINKDATABASE won't work if the website is connecting to the SQL server, as it effectively locks the DB.
    Of note was the fact that the sa account couldn't complete backups scheduled in the maintenance plan either, as the web application account had it locked.
    You will need to KEEP single user disabled for this to be done. Fortunately in my case, it didn't affect the operation of the application.

    Keep an eye out for that gotcha.
  • Posted By Mark | 8/22/05 9:27 AM
    Pete - thanks for the insight. That's a good one to look out for. I've also had to wrangle with the user status to get some things to work (or NOT work) in the past. It's not easy to wrest the db status away from the server at times :)
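
    For anyone fighting the same setting, flipping single user mode off in SQL 2000 looks something like this:

    -- turn OFF single user mode so other connections (and DBCC) can get in
    EXEC sp_dboption 'yourdatabasename', 'single user', 'FALSE'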
  • Posted By F. Steffensen | 1/25/07 9:35 AM
    First I must comment on the article, which I find to be one of the most quintessential pieces of mssql information around. No doubt the subject has resulted (and will continue to result) in many frustrations for database administrators. Having said that, I will allow myself to elaborate:

    Whether the checkpoint operation runs implicitly or explicitly, the automated truncation of the transaction log will only occur when the database is using the simple recovery model. Hence, if the database is using the full recovery model - which is the standard setting for most mssql versions - truncating the transaction log using checkpoint will have no effect.

    As an alternative to relying on the checkpoint operation, I have found the "backup" approach useful:

    BACKUP LOG Pubs WITH TRUNCATE_ONLY

    This will result in a truncation of the log file without any new backup file being created. And as an alternative to shrinking the entire database, I have found shrinking just the log file useful:

    DBCC SHRINKFILE (Pubs_Log, 10)

    Here the number indicates the desired size in MB. Hope this helps you guys out there. More power to mssql server :)
  • Posted By Jeremy | 10/20/08 12:28 PM
    This post solved my problem. I had a 20 MB database with a 24 GB log file!