
Helpful tips and tricks for SSMS

I have been busy working on another project and haven’t been able to get a post together in a while. So, I wanted to put something quick together for everyone. These are some of the tips and tricks that I have learned over the years and find uses for almost every day.

Adding line numbers

Line numbers are key to debugging code, which is why most code editors have them on by default. That is not the case with Management Studio. To add them, select the Tools -> Options menu item. In the dialog box that appears, expand Text Editor and then Transact-SQL. In the Display section, select the check box for line numbers.

Inserting a comma separated list

Every select statement needs a comma separated list of the columns you would like to return. This can be a pain to type, especially for many columns. So, what I tend to do is grab all the columns in the table and delete the blocks I do not need. Now some of you are asking: why in the world would you write those all out?

The trick is not to write them out to begin with. Drag the ‘Columns’ item in Object Explorer and drop it onto a query window. This generates a comma separated list in your query window. Just add the select, from, and where clauses.

Selecting a block of text

Now that you have all the columns from your tables, you need to remove some. Instead of just highlighting the text and deleting the rows, try holding the ‘Alt’ key while you highlight. This lets you select a rectangular block of text and remove it. That sounds pointless; why would anyone need this?

The best use case for this is to remove the schema/table references at the front of your columns.

Inserting a block of text

Now that you can select a block of text, one question always comes up: how do you insert text into it?

To do this, hold the ‘Shift’ key along with your ‘Alt’ key. Once both are held down, use the arrow keys to extend the selection up and down; anything you type is inserted on every selected line. This makes aliasing tables easy. A great GIF can be found on Brent Ozar’s website.

Color coding connections

Finally, many of us tend to work on several servers at once. This can lead to confusion when trying to execute queries on a specific box, so it helps to label different boxes with specific colors. SSMS has the ability to color code each connection, and the color displays in the status bar at the bottom of the query window.

E.g. green for test or red for production.

When connecting, select the Options button in the Connect to Database Engine window. Then select the Connection Properties tab, select the custom color check box towards the bottom of the window, and use the ‘Select…’ button to choose a color.

Transaction log Q&A

This week I am going to take a break from the database modeling series. Instead, I want to clear up some of the confusion around the transaction log. The transaction log is one of the most important things to understand in SQL Server, especially when working with high availability or disaster recovery, since those features use the transaction log as a key component. After all, without your transaction log your database is unusable.

What is a transaction?

A transaction is a single unit of work that complies with A.C.I.D. standards. A.C.I.D stands for:

  • Atomic – It either all works or none of it does
  • Consistent – It complies with the “Rules”, meaning constraints, triggers, datatype adherence, etc.
  • Isolated – It will not affect other transactions that are running at the same time
  • Durable – Changes are permanent and will survive a power outage, crash, and/or memory dump

Additionally, SQL Server runs in autocommit mode by default. This means that it will apply the begin and commit/rollback for you if you do not specify them. The line shown below is a single transaction.

DELETE FROM person WHERE lastname='Billy'

Whereas, this line is two transactions.

DELETE FROM person WHERE lastname='Billy'
DELETE FROM person WHERE firstname='Timmy'

To make these a single unit of work, wrap them with a begin transaction and end with a commit or rollback, otherwise known as an explicit transaction.
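Wrapping those same two deletes as one explicit transaction might look like this (table and column names carry over from the earlier examples):

BEGIN TRANSACTION;
DELETE FROM person WHERE lastname = 'Billy';
DELETE FROM person WHERE firstname = 'Timmy';
COMMIT TRANSACTION;
-- Use ROLLBACK TRANSACTION instead of COMMIT to undo both deletes as one unit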

Transaction log Operation

To explain the transaction log operation, I have always found it easiest to walk through the steps:

  1. User executes a transaction
  2. Pages that are going to change go to the SQL cache on RAM
  3. The log file records the details of the transaction. This process also assigns the transaction a Log Sequence Number (LSN).
  4. After storing the intent of the transaction, SQL server then modifies the pages on RAM.
  5. The pages are, at some point, written back to the disk.

So now that we understand the basics of the transaction log, we can answer one of the first questions I receive.

What happens during a crash?

If your server crashes, it is the responsibility of the log file to help you recover. The dirty pages (pages that were in RAM that did not make it to disk) are now lost. But, the transaction log contains a full description of the intent of each transaction.

Upon restart, SQL Server checks integrity and consistency using the log. To do this, it compares the LSNs in the log to those on the pages on disk. Committed changes that never made it to disk are replayed, known as the REDO phase. Changes from transactions that never committed are rolled back, known as the UNDO phase. Together these phases make sure that when the database comes online, it is consistent.

This segues into the next question I often receive.

Why does my log file keep growing?

To understand this, you need to first understand how the log file handles information. To start, the transaction log is a circular file. This means that it has a set amount of space and writes new log records sequentially until it reaches the end of the file. At that point SQL Server faces two options.

  1. Circle back to the beginning and overwrite older log records.
  2. Grow the bounds of the log file and continue to consume space.

Option 2 requires extra space and the overhead of growing the file. When the log file grows, it does so by creating new virtual log files (VLFs) within the log. You can think of these as groups of log records within the file. Thus, option 1 is preferable.
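If you want to see the VLFs inside your own log file, SQL Server exposes them through the sys.dm_db_log_info function (SQL Server 2016 SP2 and later); on older versions, DBCC LOGINFO returns one row per VLF:

-- Count the VLFs in the current database's log (SQL Server 2016 SP2+)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- Older versions: one row per VLF; a status of 2 means the VLF is still active
DBCC LOGINFO;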

So if option 1 is preferable, why does SQL Server keep growing the file? It is simple: SQL Server needs to keep log records around until no purpose remains for their storage. These purposes vary, but the main one is that the records are critical to the REDO and UNDO phases. SQL Server tracks this by marking them as “active”, and it is not allowed to reuse the space in a VLF until all the transactions in that VLF are “inactive”. Then, and only then, can SQL Server reuse the space allocated to the VLF.

So in short, your SQL Server is leaving the transactions active. But don’t freak out; this is normal for the FULL recovery model. In this model you need to take transaction log backups to make that space reusable.
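You can ask SQL Server directly why it is keeping the log records around. The log_reuse_wait_desc column in sys.databases names the reason log space cannot be reused; in the FULL recovery model, LOG_BACKUP is the usual answer:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;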

Unfortunately, this conversation usually doesn’t happen right away. This comes up when the log file is using a massive amount of space on the drive. This leads to the next question…

How do I Manage the log file?

There are many different articles about this, but they do not always have the best or correct answer. Most people recommend one of the following:

  • Switching the database to SIMPLE, shrinking the file, then switching back to FULL
  • Truncating the log with NO_LOG/TRUNCATE_ONLY

Both of these solutions kill me. Luckily, starting in SQL Server 2008, option 2 is no longer possible. Yet option 1 is still TERRIBLE. When you look at what we talked about earlier, I hope it makes you cringe.

To explain:

When you shrink the file this way, you completely discard everything in the log file. That means all the work since the last backup is gone if your database crashes or fails for whatever reason. This defeats the whole point of the FULL/BULK_LOGGED recovery models, which is to preserve the transaction log so a database can recover to a specific point in time.

Besides, if you shrink the file, it will grow again. This growth is likely to create fragmentation in the VLFs, which is a performance concern much like disk fragmentation. It is less of a problem in SQL Server 2014 and above, but it should still be on your mind. Paul Randal has a good article about the creation of VLFs listed here.

Additionally, your workload will pause while the log file is growing, because log files cannot use instant file initialization.

There are only 2 solutions that are acceptable in my opinion:

  1. Run in the SIMPLE recovery model – if you don’t mind the possibility of losing data in the event of a disaster, this is a good option.
  2. Run regular log backups – especially if you are looking for point-in-time recovery.
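A regular log backup can be as simple as the statement below, scheduled through a SQL Server Agent job; the database name and backup path here are placeholders:

-- Run on a schedule (e.g. every 15 minutes) via SQL Server Agent
BACKUP LOG databasename
TO DISK = 'D:\Backups\databasename_log.trn';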

Finally, after I explain this to them, I always get…

How do I get it back to a reasonable size?

It is always tricky to give the “reasonable” size answer. The truth is that it varies based on your workload and the size of your database. I would suggest the following:

  • If it is your company’s proprietary database, speak to the developers. They should have a staging environment where you can track the log growth.
  • If you are using another company’s database, reach out to their support. They might have an answer, but they are more likely to give you the “it depends” response.
  • Guess… I have seen guesstimates from 20-25% of the MDF down to 1-2% on multi-terabyte databases.
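Before guessing, it helps to see how much of the current log is actually in use. DBCC SQLPERF(LOGSPACE) reports this for every database, and on SQL Server 2012 and later sys.dm_db_log_space_usage does the same for the current database:

-- Log size and percent used for every database
DBCC SQLPERF(LOGSPACE);

-- Current database only (SQL Server 2012+)
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;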

The good news is that it is easy to change. To resize the log file I always use the following steps:

Wait for an inactive time of day. It is best to put the database into single user mode first, but it is not required.

ALTER DATABASE databasename
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;

Run a transaction log backup if you are in the FULL recovery model. If you are using the SIMPLE recovery model, just clear the inactive portion of the transaction log by running a checkpoint.

BACKUP LOG databasename
TO DISK = 'File path and extension (.TRN)';
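For a database in the SIMPLE recovery model, the checkpoint alternative is a single statement run in the context of that database:

-- SIMPLE recovery model only: flush dirty pages and allow the log to truncate
CHECKPOINT;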

Shrink the log to the smallest possible amount.

DBCC SHRINKFILE(TlogLogicalFileName, TRUNCATEONLY);

You will need the logical filename of the log to complete this step. Run the following in the context of the database if you do not know it.

EXEC sp_helpfile;

Alter the database to resize the transaction log file.

ALTER DATABASE databasename
MODIFY FILE
(
NAME = TlogLogicalFileName
, SIZE = NewSize
);

If you set it to single user mode, change it back.

ALTER DATABASE databasename
SET MULTI_USER;

SQLSkills suggests growing the log in 8GB chunks, using 8000MB as the NewSize variable. This creates VLFs of roughly 512MB, which keeps the VLF count reasonable and makes the log file size easier to maintain.
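Following that advice, a log that should end up at roughly 24GB would be grown in three steps rather than one jump, repeating the MODIFY FILE statement with increasing sizes (databasename and TlogLogicalFileName come from the earlier scripts):

ALTER DATABASE databasename
MODIFY FILE (NAME = TlogLogicalFileName, SIZE = 8000MB);

ALTER DATABASE databasename
MODIFY FILE (NAME = TlogLogicalFileName, SIZE = 16000MB);

ALTER DATABASE databasename
MODIFY FILE (NAME = TlogLogicalFileName, SIZE = 24000MB);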