Sunday, August 9, 2015

SQL Server Tips: Distributed Query Issues With the ACE Provider on 64-bit

Here is another quick blog on the solution to an issue you may face while working with the ACE provider on a 64-bit Windows server.


A colleague of mine came to me this morning with an issue. He was trying out a few distributed query options using OPENROWSET to export some data from Excel 2007 for an ad hoc requirement, utilizing the ACE provider. Suddenly he got a transport error and couldn't connect to the instance anymore.
I started a remote desktop session to the server and, to my surprise, found both the SQL Server and SQL Server Agent services down. I restarted them, asked him to try again, and monitored it myself this time. The OPENROWSET query fetched the results and immediately afterwards gave the same transport error. At the same time the SQL Server and SQL Agent services went down again. So it was obvious that the ACE provider distributed query was bringing the services down.
I tried the same query on my server and it worked fine, but we were consistently able to reproduce the issue on the problem server. My server was running Windows Server 2008 R2 as the OS, whereas the one with the problem was running Windows Server 2012.
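For context, the query he was running was of this general form (the file path and sheet name here are hypothetical placeholders, not the actual ones from the incident):

```sql
-- Ad hoc distributed query against an Excel 2007 (.xlsx) workbook
-- via the ACE OLE DB provider. Path and sheet name are placeholders.
SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml;Database=C:\Data\Export.xlsx;HDR=YES',
    'SELECT * FROM [Sheet1$]'
);
```

Note that the instance must have Ad Hoc Distributed Queries enabled via sp_configure for OPENROWSET to work at all.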


I was not sure what would solve this, but since the steps below are also related to known ACE provider issues, I decided to apply them to see if they fixed the problem.

As I expected, doing those steps fixed the issue.
The steps that were of significant relevance in this case were the following:

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0'
    , N'AllowInProcess', 1
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0'
    , N'DynamicParameters', 1


Once this was applied, the distributed queries started working fine without affecting the SQL Server services.
Something to look out for the next time you experience similar problems!

Tuesday, August 4, 2015

SQL Server Data Tools BI Installation Issues on Windows 10

This is a short blog to share the details of an issue that you may face while installing SQL Server Data Tools for BI (SSDT-BI) on Windows 10


You have a laptop or desktop which you recently upgraded to Windows 10, and you're trying to install SSDT-BI on it. You may get an error during the installation, and the log files will have an entry similar to the one below:
 Stop Block: CompatibilityMode : Windows Program Compatibility mode is on. Turn it off and then try Setup again.

Please note that this error happens only when you're attempting a fresh install of SSDT-BI on Windows 10. If you already had SSDT installed before the upgrade, it should still work fine.
Microsoft has acknowledged this as a bug and the fix is expected to be out soon.

Meanwhile, you can apply the workaround below if you want to get things sorted out right away.


The workaround is to follow this sequence:

  • Start with the full installation of Visual Studio on Windows 10
  • Subsequently do the installation of SSDT-BI on top of it
Once this sequence is followed, the installation will complete successfully and you will be able to use SSDT-BI for your development.

If you don't want to do this, you can wait for the patch to be released through Windows Update to avoid the above issue.

Hope this quick workaround helps anyone facing similar issues until the patches are out.

Wednesday, July 22, 2015

Guest Post: Shrink the SQL Database Log File!

For the first time I'm introducing a guest blogger through this post.
Priyanka Chouhan has gladly volunteered to be the guest blogger for us. She has come up with an informative article on shrinking the log file, which you can read below.


SQL Server log files can often grow humongous in size and occupy a lot of space on your hard disk. Factors such as a long-running transaction or a paused database mirroring session can cause a transaction log to fill up quickly. As such, it is good practice to keep the size of these log files under control by backing them up and shrinking them regularly. Failing to do so could eventually put your database at risk of data loss due to lack of space, and might even necessitate a SQL database recovery tool.
Shrinking the transaction log file basically involves reclaiming the unused space within the log file by reducing it in size. The precious disk space thus salvaged can be utilized for other purposes. This guide will be covering the basic steps and also some good practices that you can follow while attempting to shrink the transaction log file.

Shrinking the log file – How does it work?

Let’s start with the basics. When a transaction log file is shrunk, its physical size on the disk is reduced since one or more inactive virtual log files associated with it are removed. The unit of the size reduction is always in terms of the virtual log file. For example, if you have an 800 MB log file that’s been divided into eight 100 MB virtual log files, the size of the log file can only be reduced in increments of 100 MB. In other words, the size of the file can be reduced to 600 MB or 700 MB but not 634 MB or 782 MB. Any virtual log file that does not hold any active log records is considered to be an inactive virtual log file and thus deemed fit to be removed via shrinking.
Like all other file shrinking operations, a transaction log shrinking operation frees space from the end of the file. Enough inactive virtual logs are removed to make the transaction file size fit the size requested by the user (see handy tips section).
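To see how a log file is divided into virtual log files, you can query the log metadata. As a sketch: sys.dm_db_log_info is available from SQL Server 2016 SP2 onwards, while the older DBCC LOGINFO works on earlier versions:

```sql
-- Inspect the virtual log files (VLFs) of the current database.
-- Each row is one VLF; vlf_active = 1 means it still holds active
-- log records and so cannot be removed by a shrink.
SELECT vlf_begin_offset, vlf_size_mb, vlf_active
FROM sys.dm_db_log_info(DB_ID());

-- On versions prior to SQL Server 2016 SP2 (undocumented but widely used):
-- DBCC LOGINFO;
```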

Shrinking the log file – Command and Syntax

To shrink the transaction log file, the DBCC SHRINKFILE command is used with the following syntax:
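Based on the arguments described below, the command takes this general form (the names `database` and `log_file_name_Log` stand in for your actual database and logical log file names):

```sql
-- Switch to the database that owns the log file, then shrink the file.
USE database;
GO
DBCC SHRINKFILE (log_file_name_Log, TRUNCATE_ONLY);
GO
```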


Note: This is the most basic form of the DBCC SHRINKFILE command.
A lot of flags can be applied to this command to make it achieve more tasks. However, for our current illustration, the basic syntax will suffice.
Here is the detailed explanation of what each argument means.
log_file_name_Log – this indicates the logical name of the file that is to be shrunk
database – this indicates the name of the database to which the log file belongs
TRUNCATE_ONLY – this flag does the actual task of releasing all free space at the end of the transaction log file. It does not, however, perform any page movement within the file. The file is shrunk only to the last allocated extent.

Shrinking the log file – Complete step-wise procedure

Now that we’ve gone through the command that does the trick, here’s a summarized step-wise account of how to shrink a large SQL Server transaction log file:
1.    As the very first step, back up your database. This is of utmost importance.
2.    Launch SQL Server Management Studio.
3.    Right-click on the name of the database with the large transaction log and choose “New Query” to open a query window.
4.    Right-click on the database name, select “Properties”, and then in the “Files” screen copy the Logical Name (probably ends in <_log>.)
5.    Execute the DBCC SHRINKFILE command as shown in the previous section.
6.    At the end, perform a full backup of the database.
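Putting the steps above together, the sequence might look like this (the database name `SalesDB`, the logical file name `SalesDB_log`, the target size, and the backup paths are all hypothetical):

```sql
-- 1. Back up the database first (path is a placeholder).
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_before_shrink.bak';
GO

-- 4./5. Shrink the log file using its logical name, specifying an
--       explicit target size in MB rather than TRUNCATE_ONLY,
--       as the tips that follow recommend.
USE SalesDB;
GO
DBCC SHRINKFILE (SalesDB_log, 200);  -- shrink the log to roughly 200 MB
GO

-- 6. Finish with a full backup of the database.
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_after_shrink.bak';
GO
```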

Few handy tips to prevent disasters

1.    Though the TRUNCATE_ONLY operation shrinks the log file size dramatically, it might inadvertently lead you to lose some valuable log file data at the end of the file, because it shrinks the log file to the minimum size possible. Hence, a safer way is to explicitly specify the target size for the file after shrinking. This is done through the following parameter:
target_size – indicates the desired transaction log file size (after shrinking) in MB. The target size is expressed as an integer; if it is not supplied, the command reduces the file to its default size (the size specified at the time the file was created).
Note: The “target_size” parameter is ignored if specified along with TRUNCATE_ONLY, hence use only one of them at a time.
2.    Some DBAs make the mistake of executing a chain of commands wherein they first set the database to the simple recovery model, then shrink the log file, and then set the database back to the full recovery model. While this may theoretically seem correct, it can in fact cause more damage than good. Switching to the simple recovery model breaks the log backup chain: you lose valuable log data, you may no longer be able to restore to a point in time, and subsequent log backups cannot be used until the chain is re-established.


So now that you’ve learnt how to shrink the transaction log file, you can use the above-mentioned steps to reduce the log file to a size much smaller than its original size. However, as noted above, the best practice is to back up the log file regularly to avoid oversized file growth.

About Author

Priyanka Chouhan is a technical writer at “Stellar Data Recovery” with 5 years of experience and has written several articles on SQL. She has in-depth knowledge of SharePoint and SQL Server. In her spare time she loves reading and gardening.

Saturday, June 6, 2015

SSMS Tips: How to Get This Debug Button Away From My Execute Button?

This has been an annoying issue for me many times.
The positioning of the Debug button so close to the Execute button in the SQL Editor toolbar has caused me to hit Debug by mistake, launching the debugger when all I needed was to execute a query.
Of late I've also had multiple friends asking me for help with the same thing, as they too seem to have been bitten by this problem more than once. That prompted me to write this blog post in the hope that it saves the wider audience some effort in avoiding issues like the above.

The toolbar needs to be customized to move the Debug button away from the Execute button, or even to remove it from the toolbar altogether.

This can be done as below

Go to Tools ->  Customize from the top menu

 Go to Commands tab and select SQL Toolbar

Select the Debug button in the list and choose Move Up or Delete to change its position or remove it altogether.

The toolbar will look like below once you move it

This will spare you from accidentally launching the debugger when all you wanted was to execute the query.

Saturday, April 25, 2015

ANSI NULLS Behavioral Difference - Variables vs Literals

The impetus for writing this blog comes from a recent interesting discussion that happened in one of the groups.
The topic of discussion was the behavior of the ANSI_NULLS setting. There is a difference in the way the ANSI_NULLS setting works when dealing with variables versus literal values, and this blog illustrates that difference in behavior.


Consider the below illustration

For this example let's consider a table as below

declare @t table
(
ID int IDENTITY(1,1),
Val varchar(100)
)

Now let's populate the table with some values

INSERT @t(Val)
VALUES (NULL),('Test Val'),('foo'),('foobar')

Now let's see how ANSI_NULLS works in the case of a variable comparison


SET ANSI_NULLS OFF

declare @searchword varchar(10) = 'test'

SELECT *
FROM @t
WHERE Val != @searchword

The result will be as below

As you can see, it also treats NULL as a value and returns that row in the resultset, as we would expect with ANSI_NULLS off.

Now let's see how it behaves when the value is passed as a literal

Taking the example of the same table, using a query as below

SELECT *
FROM @t
WHERE Val != 'test'

We will get the output as below

Compare this output with what we got earlier and you can see that the record with the NULL value is ignored, even though ANSI_NULLS is set to OFF. This is different from how we would expect ANSI_NULLS to work.
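For convenience, the whole illustration can be run as one script. The comments describe the behavior observed above; note that ANSI_NULLS OFF is non-standard (and deprecated), so comparison behavior involving NULLs should not be relied upon across versions:

```sql
SET ANSI_NULLS OFF;

DECLARE @t TABLE
(
    ID int IDENTITY(1,1),
    Val varchar(100)
);

INSERT @t (Val)
VALUES (NULL), ('Test Val'), ('foo'), ('foobar');

DECLARE @searchword varchar(10) = 'test';

-- Variable comparison: the NULL row comes back along with the others
SELECT * FROM @t WHERE Val != @searchword;

-- Literal comparison: the NULL row is filtered out despite ANSI_NULLS OFF
SELECT * FROM @t WHERE Val != 'test';
```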


The explanation given by Microsoft for this discrepancy is that the behavior is by design.
It is documented in their official documentation as below:

SET ANSI_NULLS ON affects a comparison only if one of the operands of the comparison is either a variable that is NULL or a literal NULL. If both sides of the comparison are columns or compound expressions, the setting does not affect the comparison.

Many thanks to Adam Machanic for sharing this useful piece of information.

Hopefully this clears up the ambiguity around how ANSI_NULLS functions when dealing with variables and literals.