Wednesday, July 22, 2015

Guest Post: Shrink the SQL Database Log File!

For the first time I'm introducing a guest blogger through this post.
Priyanka Chouhan has gladly volunteered to be the guest blogger for us. She has come up with an informative article on shrinking the log file, which you can read below.


SQL Server log files can often grow humongous and occupy a lot of space on your hard disk. Factors such as a long-running transaction or a paused database mirroring session can cause a transaction log to fill up quickly. As such, it is good practice to keep the size of these log files under control by backing them up and shrinking them regularly. Failing to do so could eventually put your database at risk of data loss due to lack of space and might necessitate the use of a SQL database recovery tool.
Shrinking the transaction log file basically involves reclaiming the unused space within the log file by reducing it in size. The precious disk space thus salvaged can be utilized for other purposes. This guide will be covering the basic steps and also some good practices that you can follow while attempting to shrink the transaction log file.

Shrinking the log file – How does it work?

Let’s start with the basics. When a transaction log file is shrunk, its physical size on the disk is reduced since one or more inactive virtual log files associated with it are removed. The unit of the size reduction is always in terms of the virtual log file. For example, if you have an 800 MB log file that’s been divided into eight 100 MB virtual log files, the size of the log file can only be reduced in increments of 100 MB. In other words, the size of the file can be reduced to 600 MB or 700 MB but not 634 MB or 782 MB. Any virtual log file that does not hold any active log records is considered to be an inactive virtual log file and thus deemed fit to be removed via shrinking.
Like all other file shrinking operations, a transaction log shrinking operation frees space from the end of the file. Enough inactive virtual log files are removed to bring the transaction log file down to the size requested by the user (see the handy tips section).

Shrinking the log file – Command and Syntax

To shrink the transaction log file, the DBCC SHRINKFILE command is used with the following syntax:
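A minimal sketch of the command, using placeholder names that match the argument descriptions below:

```sql
USE [database];
GO
-- 'log_file_name_Log' is a placeholder for the logical name of the log file
DBCC SHRINKFILE (log_file_name_Log, TRUNCATEONLY);
GO
```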


Note: This is the most basic form of the DBCC SHRINKFILE command.
Many flags can be applied to this command to make it achieve more tasks. However, for our current illustration, the basic syntax will suffice.
Here is the detailed explanation of what each argument means.
log_file_name_Log – the logical name of the log file that is to be shrunk
database – the name of the database to which the log file belongs
TRUNCATEONLY – this option does the actual task of releasing all free space at the end of the transaction log file. It does not, however, perform any page movement within the file; the file is shrunk only to the last allocated extent.

Shrinking the log file – Complete step-wise procedure

Now that we’ve gone through the command that does the trick, here’s a summarized step-wise account of how to shrink a large SQL Server transaction log file:
1.    As the very first step, back up your database. This is of utmost importance.
2.    Launch SQL Server Management Studio.
3.    Right-click on the database with the large transaction log and choose “New Query” to open a query window.
4.    Right-click on the database name again, select “Properties”, and then in the “Files” screen copy the Logical Name of the log file (it usually ends in “_log”).
5.    Execute the DBCC SHRINKFILE command as shown in the previous section.
6.    At the end, perform a full backup of the database.
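The steps above can be sketched end to end in T-SQL; the database name, logical log file name, and backup path here are illustrative:

```sql
USE MyDatabase;
GO
-- Step 4 alternative: look up the logical name of the log file
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
GO
-- Steps 1 and 6: back up the database before and after the shrink
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak';
GO
-- Step 5: shrink the log file (logical name assumed to be MyDatabase_log)
DBCC SHRINKFILE (MyDatabase_log, TRUNCATEONLY);
GO
```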

A few handy tips to prevent disasters

1.    Although the TRUNCATEONLY operation shrinks the log file size dramatically, it might inadvertently lead you to lose some valuable log data at the end of the file, because it shrinks the log file to the minimum size possible. A safer way is to explicitly specify the target size for the file after shrinking. This can be done through the following parameter:
target_size – the desired transaction log file size (after shrinking) in MB, expressed as an integer. If it is not supplied to the command, the file is reduced to its default size (the size specified when the file was created).
Note: The target_size parameter is ignored if specified together with TRUNCATEONLY, so use only one of them at a time.
2.    Some DBAs make the mistake of executing a chain of commands wherein they first set the database to the simple recovery model, then shrink the log file, and then set the database back to the full recovery model. While this may theoretically seem correct, it can in fact cause more damage than help. Switching to the simple recovery model breaks the log backup chain: you might end up losing valuable log data, might not be able to perform a point-in-time restore, and might not be able to use subsequent log backups until a new full or differential backup is taken.
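Tip 1's safer, explicit-target form can be sketched as below; the logical file name and the 200 MB target are illustrative:

```sql
-- Shrink the log file down to roughly 200 MB instead of the minimum possible;
-- the actual result is rounded to a virtual-log-file boundary
DBCC SHRINKFILE (MyDatabase_log, 200);
```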


So now that you’ve learnt how to shrink the transaction log file, you can use the above-mentioned steps to reduce the log file to a size that’s much smaller than its original size. However, as noted above, the best practice is to back up the log regularly to avoid oversized file growth.

About Author

Priyanka Chouhan is a technical writer at Stellar Data Recovery with 5 years of experience and has written several articles on SQL. She has in-depth knowledge of SharePoint and SQL Server. In her spare time she loves reading and gardening.

Saturday, June 6, 2015

SSMS Tips: How to Get This Debug Button Away From My Execute Button?

This has been an annoying issue for me many times.
The Debug button's position so close to the Execute button in the SQL Editor toolbar has caused me to hit Debug by mistake many times, launching the debugger when all I wanted was to execute a query.
Of late I've also had multiple friends asking for help with the same issue, as they too seem to have been bitten by this problem more than once. This prompted me to write this blog post so that it saves the wider audience some effort in avoiding issues like the above.

The toolbar needs to be customized to move the Debug button away from the Execute button, or even to remove it from the toolbar altogether.

This can be done as below

Go to Tools ->  Customize from the top menu

Go to the Commands tab and select the SQL Editor toolbar

Select the Debug button in the options and choose Move Up or Delete to change its position or remove it altogether.

The toolbar will look like below once you move it.

This will spare you from accidentally launching the debugger instead of executing the query.

Saturday, April 25, 2015

ANSI NULLS Behavioral Difference - Variables vs Literals

The impetus for writing this blog comes from a recent interesting discussion in one of the groups.
The topic of discussion was the behavior of the ANSI NULLS setting. There is a difference in the way the ANSI NULLS setting works when dealing with variables versus literal values. This blog illustrates that difference in behavior.


Consider the below illustration

For this example let's consider a table as below

declare @t table
(
ID int IDENTITY(1,1),
Val varchar(100)
)

Now let's populate the table with some values

INSERT @t(Val)
VALUES (NULL),('Test Val'),('foo'),('foobar')

Now let's see how ANSI NULLS works in the case of a variable comparison


SET ANSI_NULLS OFF

declare @searchword varchar(10) = 'test'

SELECT * FROM @t
WHERE Val != @searchword

The result will be as below

As you see, it also considers NULL as a value and returns the NULL row in the result set, as we would expect with ANSI NULLS off.

Now let's see how it behaves when the value is passed as a literal

Taking the same table, use a query as below

SELECT * FROM @t
WHERE Val != 'test'

We will get the output as below

Compare this output with what we got earlier and you can see that the row with the NULL value is ignored even though ANSI NULLS is set to OFF. This is different from how we might expect ANSI NULLS to work.


The explanation given by Microsoft for this discrepancy is that this behavior is by design.
This is documented in their official documentation as below

SET ANSI_NULLS ON affects a comparison only if one of the operands of the comparison is either a variable that is NULL or a literal NULL. If both sides of the comparison are columns or compound expressions, the setting does not affect the comparison.
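This documented rule can be verified with a quick sketch: with the setting OFF a literal NULL compares equal to NULL, while with the setting ON the same comparison evaluates to UNKNOWN.

```sql
SET ANSI_NULLS OFF;
-- One operand is a literal NULL, so the setting applies: returns 'equal'
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;

SET ANSI_NULLS ON;
-- ANSI behavior: NULL = NULL evaluates to UNKNOWN, so this returns 'not equal'
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;
```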

Many thanks to Adam Machanic for sharing this useful piece of information.

Hopefully this clears up the ambiguity around how ANSI NULLS functions when dealing with variables and literals.



Saturday, February 21, 2015

In Memory Tables vs Table Variables - Comparison

Recently there was a casual discussion with one of my colleagues, who was asking me about the newly introduced memory-optimized tables in SQL Server 2014. The discussion went through a few aspects of memory-optimized tables and their comparison against table variables. This blog summarizes the points that came out of the discussion; I thought of sharing it for the benefit of a larger audience.

What are memory optimized tables?
Memory-optimized tables are a special type of table that resides in main memory. They were introduced in SQL Server 2014 as a way of implementing in-memory OLTP. They are transactional in nature and can preserve durability if required. Another important aspect of memory-optimized tables is that rows support versioning: at any point in time the table preserves multiple earlier versions of its rows.
You can also choose to make them durable, which is the default behavior, or keep them non-durable, in which case the data doesn't get persisted to disk. This saves some disk IO, but at the expense of data loss in case of a server crash or restart.
Memory-optimized tables can be accessed using normal ad hoc T-SQL statements or using natively compiled stored procedures.
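As a sketch, creating a durable memory-optimized table looks like the below; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table and column names are illustrative:

```sql
CREATE TABLE dbo.OrdersInMemory
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    OrderDesc VARCHAR(100) NOT NULL
)
-- SCHEMA_AND_DATA is the durable default; SCHEMA_ONLY makes the table non-durable
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```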
What is a table variable?
Table variables are variables of the table data type. They reside in main memory until they get swapped out to disk for space. They do not store distribution statistics, and they cause no recompiles. Table variables also do not allow modification using ALTER TABLE, and explicit indexes could not be created on them until SQL Server 2014, which introduced inline index definitions.
Now that we have seen what memory-optimized tables and table variables are, let's look at some differences in behavior between them.

Differences between memory-optimized tables and table variables

When you consider scope, memory-optimized tables have the same scope as regular tables, with a slight difference: they are available across all connections. They come in two flavors - durable (the default) and non-durable. Durable tables preserve the schema as well as the data, since both are persisted to disk, whereas for non-durable tables only the schema is persisted. This means that after a server restart the table gets recreated, but without any data in it. Durability can also be full or delayed, with the latter carrying a risk of some data loss in case of an intermediate server restart or failure.
In contrast, table variables have scope only within the batch in which they're declared. This is evident from the below example

declare @t table
(
id int,
val varchar(10)
)
insert @t
values (1,'Test')

select * from @t
GO
select * from @t


The first select works whereas the second one does not, as it's outside the batch in which the table variable is declared (GO marks the end of that batch).

Tempdb Usage
Memory-optimized tables reside in main memory itself, so they won't use up any tempdb resources at all!
On the other hand, table variables stay in main memory only while their space requirements fit within the available cache. Once there is a requirement for extra space, they spill to tempdb. Hence, while using table variables for large datasets you will see higher usage of tempdb.

Transaction Support
Memory-optimized tables provide transaction support similar to normal tables. They have two ways of handling transactions based on whether we want them to be fully durable or delayed durable: the former causes the changes made by the transaction to be persisted immediately, whereas the latter is durable only after the in-memory transaction log is persisted to disk.
Table variables, on the other hand, don't support transactions. This is very evident from the below example snippet

begin tran testtemptabletran
declare @t table
(
id int,
val varchar(10)
)
insert @t
values (1,'Test')

rollback tran testtemptabletran
select * from @t

Table Statistics
Memory-optimized tables store statistics info just like normal tables, but there is a difference in the way statistics get updated compared with normal disk-persisted tables. Whilst disk-based tables update statistics automatically, in the case of memory-optimized tables we have to do this manually, using the CREATE STATISTICS statement. We need to specify the NORECOMPUTE clause while creating/updating statistics for memory-optimized tables, as they do not update the statistics themselves. We should also specify FULLSCAN mode, as memory-optimized tables only support fullscan statistics, which is a time-consuming process compared with disk-based tables, which use sampled statistics as the default option.
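The manual statistics maintenance described above can be sketched as below (table, column, and statistics names are illustrative):

```sql
-- Memory-optimized tables in SQL Server 2014 require FULLSCAN and NORECOMPUTE
CREATE STATISTICS stat_OrderDesc
ON dbo.OrdersInMemory (OrderDesc)
WITH FULLSCAN, NORECOMPUTE;
```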
In comparison, table variables store no distribution statistics at all. This is why you quite often find wrong cardinality estimates in the execution plan when a query involves table variables.

So, as seen from the points above, both have their own pros and cons.
The main advantage of memory-optimized tables is that they live in main memory itself, unless you also want to persist data to disk. So the question is whether these advantages can be coupled with table variables for even more benefit. Fortunately we have a way of doing this.

The Hybrid Approach - Using Memory Optimized Table Type Variables
To create memory-optimized table variables, what we can do is create a table type which is memory-optimized. Once this is done we can declare a variable of this table type, which will be memory-optimized in nature. This merges the advantages of table variables with those of memory-optimized objects.
The main ones are

  • The memory-optimized table type variables never get swapped to disk, since they are memory-optimized. As such there is no tempdb usage and no associated IO cost
  • The memory-optimized table type variables also take advantage of memory-optimized algorithms and can be accessed from natively compiled stored procedures for more efficiency
  • One more advantage of memory-optimized table type variables over memory-optimized tables is that, since they're non-transactional in nature, they support cross-database queries, whereas normal memory-optimized tables do not allow cross-database access. This comes in handy especially when you want data copied across tables in multiple databases, where memory-optimized table type variables can be used as a medium for data transfer
We can create a memory-optimized table type using code like the below

CREATE TYPE dbo.MemoryOptimizedTableType AS TABLE  -- type name is illustrative
(
      Col1 [int] NOT NULL,

      Col2 varchar(100) NOT NULL,

      INDEX [IX_Col1] HASH (Col1)

            WITH ( BUCKET_COUNT = 8)
)
WITH ( MEMORY_OPTIMIZED = ON )



Now you can declare a variable of this type and use it in DML operations like below

DECLARE @MemTableVar dbo.MemoryOptimizedTableType  -- type name is illustrative

INSERT @MemTableVar (Col1, Col2)
VALUES (345,'Blah blah')

From the above points we can clearly see the advantages the new memory-optimized table type variables have over conventional table variables. This also gives us some insight into how the two concepts can be merged into a hybrid solution that combines the advantages of each.


Tuesday, January 13, 2015

Implementing Automatically Tidying-Up Adhoc Jobs in SQL Server Agent

Business Case
Recently I had an opportunity to inherit a system from a client for doing some enhancements. One of the tasks involved in the project was to clean up the system, which had lots of leftovers like unused tables, stored procedures etc. Along with them, one thing I noticed was the presence of lots of jobs which were never executed but still existed in the production environment. On inquiring about this, I was told that these were created by developers for performing activities which required one-time execution of scripts - for example, flushing out old data or applying quarter-level financial calculations. As a policy, the DB admin team were creating jobs with a one-time schedule so these could run unattended, as most of the activities had to be performed on weekends or out of office hours.

The issue with the above approach was that the jobs were left in SQL Server Agent even after the required task was performed, and it was left to someone to go and tidy them up from the server. When I inherited the system it was a big mess, with lots of unwanted jobs retained in the instance, and it was a time-consuming process to identify and remove them from the system.

There is an easier solution for scenarios like the above: create auto-cleanup jobs which take care of the tidy-up part by themselves once the required task is performed. This blog explains how we can create jobs with this auto-cleanup capability.

Consider the case of deleting the oldest quarter's data from a table at the beginning of each quarter. This would usually be done over the weekend, as deleting bulk data takes a considerable amount of time and there are usually follow-up tasks like index reorganization.
We can utilize a one-time scheduled job for this purpose. There's a seldom-used option in the job properties which enables the job to tidy itself up once the task is performed. The property is set as below in the job's notification properties

Once you set this property, the job gets automatically deleted once it runs successfully as per the set schedule.
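The same option can also be set from T-SQL when creating the job: msdb's sp_add_job procedure exposes it as the @delete_level parameter, where 1 means the job is deleted when it succeeds. The job name below is illustrative.

```sql
EXEC msdb.dbo.sp_add_job
    @job_name = N'OneTime_PurgeOldQuarter',  -- illustrative name
    @delete_level = 1;                       -- 1 = auto-delete the job on success
```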

One caveat to note here is that since the job gets deleted upon successful execution, you will not be able to check the job history to see if it executed successfully.
So if you need a mechanism to ensure the job executed successfully before it gets deleted, what you could do is add a dummy step at the end with just a print statement as below

And then configure the step to log its output to a file
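Adding such a dummy final step and routing its output to a file can be sketched in T-SQL as below; the job name, step name, and log path are illustrative:

```sql
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'OneTime_PurgeOldQuarter',  -- illustrative name
    @step_name = N'Log success',
    @subsystem = N'TSQL',
    @command   = N'PRINT ''Job completed successfully''',
    @output_file_name = N'D:\JobLogs\PurgeOldQuarter.log';  -- illustrative path
```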

Once the job has executed and been removed, you can go and check the output file to see the logged message indicating successful job execution.

This method can be used to create auto-tidying jobs with notification, which will help you automate adhoc one-time tasks based on a predefined schedule. It ensures the jobs get cleaned up automatically after performing the requested tasks and will certainly save some developer effort in tidying them up.