Sunday, May 3, 2020

Fun with SQL – Find out numbers where adjacent digit differs by 1

My friend and mentor Madhivanan had posted an interesting scenario on his blog

https://exploresql.com/2020/05/01/fun-with-sql-find-out-numbers-where-adjacent-digit-differs-by-1/

and posed it as a challenge to others.

The question was: for a given input number, list all the numbers below it in which the adjacent digits differ by 1.



I decided to have a go at it myself.
Here's my version of the query for the scenario:


--Provide input value
declare @number int
set @number = 105

;With numbers
as
(
 -- six seed rows for the tally below
 select 1 as N UNION ALL
 SELECT 1 UNION ALL
 SELECT 1 UNION ALL
 SELECT 1 UNION ALL
 SELECT 1 UNION ALL
 SELECT 1
),
Num_matrix
AS
(
 -- cross joining the six rows four times gives 6^4 = 1296 values, numbered 0 to 1295
 SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) - 1 AS Seq
 FROM numbers n1
 cross join numbers n2
 cross join numbers n3
 cross join numbers n4
)
select m.Seq
from
(
 -- candidate numbers below the input value
 select Seq
 from Num_matrix
 where Seq < @number
) m
cross join Num_matrix n
-- n.Seq acts as the digit position within the candidate number
where n.Seq between 1 and len(m.Seq)
group by m.Seq
having count(*) = 1 -- single-digit numbers qualify by themselves
or
-- every digit must differ by 1 from a neighbouring digit
-- (nullif turns the missing neighbour at either end into NULL)
sum(case when abs(substring(cast(m.Seq as varchar(100)), n.Seq, 1) * 1 - nullif(substring(cast(m.Seq as varchar(100)), n.Seq + 1, 1), '') * 1) = 1
         or abs(substring(cast(m.Seq as varchar(100)), n.Seq, 1) * 1 - nullif(substring(cast(m.Seq as varchar(100)), n.Seq - 1, 1), '') * 1) = 1
         then 0 else 1 end) = 0
order by m.Seq



And the output is as below: for input 105, that is the single-digit numbers 0 to 9, the two-digit numbers whose digits differ by 1 (10, 12, 21, 23, ..., 89, 98), and 101.



Try your own version and post your query back in the comments.

Happy coding!

Monday, February 3, 2020

SQLServer : Real Time Streaming RDBMS Data Using Apache Kafka Platform


Problem Statement 

There is an application database hosted on a SQLServer instance, capturing data from automation software: readings as well as general health-related details of multiple machines.
The requirement asks for a close-to-real-time reporting platform which can be used as a monitoring and health check mechanism for the machines, as well as to perform time-series analysis on the data.

Solution

Requirements like the above call for a real-time streaming platform capable of transmitting huge amounts of data from the database in close to real time. There are many such streaming platforms available.
One popular streaming tool which fits the above case is Apache Kafka. Kafka is capable of connecting to a variety of heterogeneous sources and transmitting data from them as streams in close to real time.
So in our case we make use of Kafka to connect to the SQLServer database, extract data from the tables, and stream it in real time.
You can learn more about Kafka from the below link


Kafka Connect is an open source component of Kafka which provides the framework required to connect Kafka to a variety of systems acting as sources or sinks.


For connecting to SQL Server we shall make use of either of the below connectors

1. JDBC based connector


This connects directly to the tables and extracts data from them, typically by polling on an incrementing key column and/or a last-modified timestamp column.
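As an illustration, a source table designed with that polling in mind might look like the below sketch (the table and column names are hypothetical stand-ins for the machine readings scenario):

-- hypothetical source table laid out for the JDBC connector's
-- timestamp + incrementing polling mode
CREATE TABLE dbo.MachineReadings
(
    ReadingID  int IDENTITY(1,1) PRIMARY KEY,              -- incrementing key the connector can track
    MachineID  int NOT NULL,
    Reading    decimal(18,4) NOT NULL,
    ModifiedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME() -- last-modified timestamp the connector can track
);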

2. Debezium connector for SQLServer


Debezium is a distributed platform which converts RDBMS data into event streams by means of a set of source and sink connectors. The source connectors enable Kafka Connect to attach to systems as sources, extract the data, and stream it to Kafka topics close to real time. Debezium also includes sink connectors, which enable data from Kafka streams to be stored back into RDBMS tables.
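One point worth noting: the Debezium connector for SQLServer relies on the native Change Data Capture (CDC) feature, so CDC has to be enabled on the source database and on each table to be streamed. A minimal sketch (database, schema and table names are illustrative):

-- enable CDC at the database level (requires sysadmin)
USE MachineDB;
GO
EXEC sys.sp_cdc_enable_db;
GO

-- enable CDC on each table that should be streamed
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MachineReadings',
     @role_name     = NULL; -- no gating role in this sketch
GO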


Illustration

A very detailed step-by-step illustration of the above scenario is given in the latest article I've contributed to the Technet community, whose link is given below.


The article explains everything from setting up the required systems (SQLServer, Kafka, Kafka Connect, the Debezium connector) to streaming the data and analyzing the stream through the command line and through the KSQL server.

The article should help anyone who is starting out on the above mentioned systems and needs to implement a use case similar to the one specified above.

Wish you happy reading.

As always, feel free to give your valuable feedback.

Thursday, August 1, 2019

Graph Table Enhancements : Edge Constraints, Utility Functions


The second part of the article https://visakhm.blogspot.com/2017/05/graph-tables-in-sql-2017.html can be found here

https://social.technet.microsoft.com/wiki/contents/articles/53162.graph-table-enhancements-edge-constraints-system-utility-functions.aspx

Here I have explained the concept of Edge Constraints and how they can be used to enforce relationships between Node tables within an Edge table.

The article also explains certain system functions which can be used in conjunction with Node and Edge tables to extract information from the pseudo columns in graph tables, which can then be utilized in the filter conditions of queries.
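As a quick taste of both features, below is a minimal sketch (the table, column and constraint names are illustrative, and a SQL 2019 instance is assumed):

-- two node tables and an edge table; the CONNECTION clause is the
-- edge constraint restricting which node types this edge may link
CREATE TABLE Person (PersonID int PRIMARY KEY, PersonName varchar(100)) AS NODE;
CREATE TABLE City   (CityID int PRIMARY KEY, CityName varchar(100)) AS NODE;

CREATE TABLE livesIn
(
    CONSTRAINT EC_livesIn CONNECTION (Person TO City)
) AS EDGE;

-- GRAPH_ID_FROM_NODE_ID is one of the utility functions: it extracts
-- the internal graph id from the $node_id pseudo column
SELECT PersonName, GRAPH_ID_FROM_NODE_ID($node_id) AS GraphID
FROM Person;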

Wednesday, April 10, 2019

T-SQL Tips: Generating Nested XML Structures Efficiently Using FLWOR Expressions

Introduction

I have been thinking about publishing an article on this for quite a while now. Of late I have been too busy (or maybe lazy is a better word!) to write on this. Finally I thought of breaking the inertia and putting it down.
Previously I had blogged about how FOR XML PATH can be used to generate nested XML structures in the below two articles



There is an alternate method to generate these nested XML structures using FLWOR expressions. For large data volumes this is much more efficient than using nested subqueries.

 Illustration

Using the same example as in the previous article, we can see how a FLWOR expression can be applied to get the same result.

CREATE TABLE [dbo].[Orders](
[OrderID] [int] IDENTITY(1,1) NOT NULL,
[CustName] [varchar](100) NULL,
[OrderDate] [date] NULL,
[ReferredBy] [varchar](100) NULL,
[AgentGrp] [varchar](30) NULL,
PRIMARY KEY CLUSTERED 
(
[OrderID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
SET IDENTITY_INSERT [dbo].[Orders] ON
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (1, N'ABC Corp', CAST(0xB8350B00 AS Date), N'Agent 1', N'110')
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (12353, N'R & K Associates', CAST(0x7A370B00 AS Date), N'Agent 5', N'105')
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (13345, N'Zyng Enterprises', CAST(0x5D370B00 AS Date), N'Agent 3', N'110')
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (15789, N'Maxim Bailey', CAST(0x7A370B00 AS Date), N'Agent 1', N'120')
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (22345, N'Kyzer', CAST(0xA5370B00 AS Date), N'Agent 2', N'120')
INSERT [dbo].[Orders] ([OrderID], [CustName], [OrderDate], [ReferredBy], [AgentGrp]) VALUES (29398, N'ABC Corp', CAST(0x54370B00 AS Date), N'Agent 4', N'105')
SET IDENTITY_INSERT [dbo].[Orders] OFF
GO
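Here's a minimal sketch of the query (the element and attribute names are illustrative): the inner FOR XML first flattens the rows into a simple XML, and the query() function then reshapes it with a FLWOR expression.

SELECT (
    SELECT OrderID, CustName, OrderDate, ReferredBy, AgentGrp
    FROM dbo.Orders
    FOR XML PATH('Order'), ROOT('Orders'), TYPE
).query('
<AgentGroups>
{
  (: one element per distinct agent group :)
  for $g in distinct-values(/Orders/Order/AgentGrp)
  return
    <AgentGroup id="{$g}">
    {
      (: nest the orders belonging to this group :)
      for $o in /Orders/Order[AgentGrp = $g]
      return
        <Order id="{$o/OrderID}" Customer="{$o/CustName}"
               Date="{$o/OrderDate}" ReferredBy="{$o/ReferredBy}"/>
    }
    </AgentGroup>
}
</AgentGroups>
') AS NestedXML;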







The result will be as below


As seen from the result above, the FLWOR expression helps us build the XML in the structure we want.
The first part of the query, using FOR XML, builds a simple XML structure with all the attributes involved. Then we apply the query() function on top of that simple XML and use FLWOR expressions inside it to shape the XML into the format we desire.

Another illustration is given below

declare @catalog table
(
catalogid int identity(1001,1),
catalogdescription nvarchar(100),
catalogcreationdate datetime
)


insert @catalog (catalogdescription,catalogcreationdate)
values
(N'mens wear',getdate()-120),
(N'womens wear',getdate()-35),
(N'sports wear',getdate()-90),
(N'accessories',getdate()-25),
(N'Beauté',getdate()-20)


declare @products table
(
ProductID  int identity(10001,1),
ProductDesc nvarchar(100),
CatalogID int
)

insert @products (ProductDesc,CatalogID)
values ('Crop Tops',1002),
('Sweat Shirts',1002),
('Bodyfit Jeans',1001),
('Golden Perfume',1005),
('Jerseys',1003),
('Pendant with Earstuds',1004),
('Anklet',1004),
('Shorts',1001)


declare @productattributeTypes table
(
AttributeTypeID int identity(10001,1),
AttributeTypeDesc  nvarchar(100)
)

insert @productattributeTypes (AttributeTypeDesc)
values ('Shoe Size'),('Belt Size'),('Base Material'),('Color'),('Pattern'),('Size')

declare @productattributevalues table
(
AttributeID int identity(10001,1),
ProductID  int ,
AttributeTypeID int,
AttributeValue nvarchar(100)
)

insert @productattributevalues (AttributeTypeID,ProductID,AttributeValue)
values (10003,10001,'Cotton'),
(10003,10002,'Polyester'),
(10006,10002,'XL'),
(10006,10003,'32'),
(10005,10003,'Slim fit'),
(10003,10004,'Cologne'),
(10006,10004,'100 ml'),
(10003,10005,'Polyester'),
(10003,10006,'Black Metal'),
(10003,10007,'White metal with stones'),
(10006,10008,'Cotton')

Here's the query
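Below is a minimal sketch of it (the element and attribute names are illustrative, and it must run in the same batch as the table variable declarations above): it flattens all four tables with FOR XML, then nests catalogs, products and attributes with a FLWOR expression.

SELECT (
    SELECT c.catalogid, c.catalogdescription,
           p.ProductID, p.ProductDesc,
           t.AttributeTypeDesc, v.AttributeValue
    FROM @catalog c
    LEFT JOIN @products p ON p.CatalogID = c.catalogid
    LEFT JOIN @productattributevalues v ON v.ProductID = p.ProductID
    LEFT JOIN @productattributeTypes t ON t.AttributeTypeID = v.AttributeTypeID
    FOR XML PATH('row'), ROOT('rows'), TYPE
).query('
<Catalogs>
{
  for $c in distinct-values(/rows/row/catalogid)
  return
    <Catalog id="{$c}" description="{(/rows/row[catalogid = $c]/catalogdescription)[1]}">
    {
      for $p in distinct-values(/rows/row[catalogid = $c]/ProductID)
      return
        <Product id="{$p}" description="{(/rows/row[ProductID = $p]/ProductDesc)[1]}">
        {
          (: attribute rows exist only where a value was captured :)
          for $r in /rows/row[ProductID = $p][AttributeValue]
          return <Attribute type="{$r/AttributeTypeDesc}" value="{$r/AttributeValue}"/>
        }
        </Product>
    }
    </Catalog>
}
</Catalogs>
') AS NestedXML;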


with the result as below




Conclusion

From the two illustrations above, it's evident that the FLWOR expression based method can be used effectively to generate nested XML structures.

Artifacts

The full code for generating the XML structures can be found below


Monday, October 29, 2018

SQL Tips: String Or Binary Data Truncated Error Message Enhancement

Introduction

I'm sure lots of us who have been developing in SQLServer for ages would agree that the error

String or binary data would be truncated.
The statement has been terminated.

is one of the most frustrating errors you could ever come across in T-SQL. Especially in long stored procedures with lengthy INSERT...SELECT statements, it was always a daunting task to find the column which was the root cause of the above error. And most times the error surfaces only at a later stage, due to the absence of any breaking data at the time of implementation.

A good majority of senior developers had complained about this ambiguity, and multiple Connect items were logged for the issue, which got pretty good support as well.

Implementation

When the feedback platform was moved to feedback.azure.com, Microsoft opened a request there covering the Connect items, and this too got a good number of votes.


Accordingly, MS started background work to fix this, and the fix was finally released in SQL 2019, enhancing the error message to include more information. Based on this, the error message has been modified to the below:

String or binary data would be truncated in table 'XXXXXXXXXX', column 'YYYYYYY'. Truncated value: 'ZZZZZZ'.
The statement has been terminated.

This was really good news for all of us, but there was still a small concern that we would have to wait a while to see it in action, as most of the currently implemented instances were on SQL 2016 and SQL 2017.
But it seems MS read our minds on this, and now I'm really happy to see the announcement that this enhancement has been backported to SQL 2017 CU 12 and to an upcoming SQL 2016 SP2 CU.
To enable it for now, a trace flag (460) has been introduced, which can be set at session level or at server level. Once set, it replaces the older error message with the new one above for truncation exceptions. Future SQL 2019 releases should have the new message as the default and won't require setting the trace flag explicitly.
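For reference, the flag can be set at either scope (DBCC TRACEON accepts -1 for a global flag):

-- session scope: affects only the current connection
DBCC TRACEON (460);

-- server (global) scope: affects all sessions
DBCC TRACEON (460, -1);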

Illustration

Let's see an illustration of the above error message on a sample table.

The code will look like below

--setting the trace flag for the session
DBCC TRACEON  (460);  
GO  

--sample table
declare @t table(
v varchar(10)
)

-- insert values
insert @t 
values
('test'),
('this is a long value to check for truncation error') -- this raises the error

Now let's check the result



As expected, we get the new error message, which gives a clear indication of the table, column and value which caused the truncation exception. This makes it much easier for someone to debug and fix the issue.

Now if you turn off the trace flag and try again, you can see that it reverts to the old error message.
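Turning it off at session scope looks like the below:

-- turn the flag back off for the session; the old message returns
DBCC TRACEOFF (460);
GO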



Conclusion

As seen from the above illustration, this new enhancement to the truncation error is a real life saver for anyone developing or debugging Transact-SQL code, and is sure to save many hours of development effort on truncation issues, one of the most frequent issues we come across in ETL and data warehousing projects.

The official announcement regarding the release can be found in the below link


Let me end this article by conveying a big thanks to Pedro Lopes (@SQLPedro), Senior Program Manager, and the entire MS team for the help and support provided in addressing this issue and coming up with the release.