Wednesday, August 31, 2016

Parsing "DBCC MEMORYSTATUS" without using PowerShell

I wanted to research a memory problem on a server, and started to dig deeper into the "DBCC MEMORYSTATUS" command. Very useful links for understanding that command come from Microsoft:
https://support.microsoft.com/en-us/kb/271624
https://support.microsoft.com/en-us/kb/907877

During the research I faced two problems:
1. I had to wait several seconds to get the full result set.
2. I had to scroll down and shuffle 115 different data sets to find the counter I wanted.

To eliminate both of these problems, all of the different counters have to be in the same table/data set.
That makes research easier, and the data can be stored or compared against a baseline.

So in order to achieve that we have to parse results of "DBCC MEMORYSTATUS".

I did a quick search over the internet and found several parsing solutions, but all of them are based on PowerShell.
I have no problem with PowerShell, but the whole idea seems wrong.
It is as if I asked for driving directions and could only get the answer in Chinese.

Out of frustration, I decided to make it available in native SQL.
So, here is the script:
SET NOCOUNT ON
GO
IF OBJECT_ID('tempdb..#tbl_MemoryStatusDump') IS NOT NULL
DROP TABLE #tbl_MemoryStatusDump;
GO
IF OBJECT_ID('tempdb..#tbl_MemoryStatus') IS NOT NULL
DROP TABLE #tbl_MemoryStatus;
GO
CREATE TABLE #tbl_MemoryStatusDump(
 ID INT IDENTITY(1,1) PRIMARY KEY
 , Dump VARCHAR(100));
GO
CREATE TABLE #tbl_MemoryStatus(
 ID INT IDENTITY(1,1), 
 [DataSet] VARCHAR(100), 
 [Measure] VARCHAR(20), 
 [Counter] VARCHAR(100), 
 [Value] MONEY);
GO
INSERT INTO #tbl_MemoryStatusDump(Dump)
EXEC ('xp_cmdshell ''sqlcmd -E -S localhost -Q "DBCC MEMORYSTATUS" ''');
GO
DECLARE @f BIT = 1          -- flag: 1 = next parsed line is a data-set header
 , @i SMALLINT = 1          -- current row in the dump
 , @m SMALLINT = (SELECT MAX(ID) FROM #tbl_MemoryStatusDump)
 , @CurSet VARCHAR(100)     -- current data-set name
 , @CurMeasure VARCHAR(20)  -- current measure (KB, Value, etc.)
 , @Divider TINYINT         -- position of the last space in the line
 , @CurCounter VARCHAR(100)
 , @CurValue VARCHAR(20);

WHILE @i < @m
BEGIN
 SELECT @Divider = PATINDEX('% %',REVERSE(RTRIM(Dump)))
  , @CurCounter = LEFT(Dump, LEN(Dump) - @Divider)
  , @CurValue = RIGHT(RTRIM(Dump), @Divider - 1)
 FROM #tbl_MemoryStatusDump WHERE ID = @i;

 IF @f = 1 
  SELECT @CurSet = @CurCounter, @CurMeasure = @CurValue, @f = 0 
  FROM #tbl_MemoryStatusDump WHERE ID = @i;
 ELSE IF LEFT(@CurCounter,1) = '(' SET @f = 1;
 ELSE IF @CurCounter != 'NULL' and LEFT(@CurCounter,1) != '-'
  INSERT INTO #tbl_MemoryStatus([DataSet], [Measure], [Counter], [Value])
  SELECT @CurSet, @CurMeasure, @CurCounter, CAST(@CurValue as MONEY)
  FROM #tbl_MemoryStatusDump WHERE ID = @i;
 SET @i += 1;
END
GO
SELECT * FROM #tbl_MemoryStatus
GO
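Once all the counters are in a single table, the second problem goes away: finding a counter is a simple filter. For example (the counter name below is just an illustration; match it against your own MEMORYSTATUS output):

SELECT [DataSet], [Measure], [Counter], [Value]
FROM #tbl_MemoryStatus
WHERE [Counter] LIKE '%Available Physical Memory%';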


Side notes:


1. The script uses the "xp_cmdshell" extended procedure, which can be disabled on your server. If you've gotten to the point where you are using the "DBCC MEMORYSTATUS" command, you should have enough privileges to turn it on/off. There are also plenty of resources on the internet on "how to enable xp_cmdshell".
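For reference, the standard way to enable it (requires the ALTER SETTINGS permission, which sysadmin has by default):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;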

2. Within the "xp_cmdshell" command I use a trusted connection ("-E") and the default local server ("-S localhost"). These parameters can be adjusted for your instance/environment.

3. I've tested it on SQL Server 2014, but I'm pretty sure the script should work on versions from 2008 to 2016. For SQL Server 2005 it would require some adjustments.

4. For the Value column I've used the "MONEY" data type, which limits the amount of "Available Virtual Memory" the column can hold to about 838 TB. It is possible to use BIGINT instead, but "Memory Broker Clerk (Buffer Pool)" returns decimal values, and that might be a problem. If you'd like, you can use "DECIMAL(38,6)" as the value data type, but I think that would only matter in an extreme case, which you might actually want to catch.
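If you decide to go with DECIMAL, only the table definition and the CAST in the loop need to change (a sketch; everything else in the script stays the same):

CREATE TABLE #tbl_MemoryStatus(
 ID INT IDENTITY(1,1),
 [DataSet] VARCHAR(100),
 [Measure] VARCHAR(20),
 [Counter] VARCHAR(100),
 [Value] DECIMAL(38,6));
-- ...and in the parsing loop: CAST(@CurValue as DECIMAL(38,6))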

Saturday, August 27, 2016

SQL Saturday in Charlotte, NC. Sessions I want to visit.

Usually, I write my blog posts after visiting SQL Saturday events, but I want to change that tradition and talk about one before the actual event.

In this post I want to talk about the sessions I have a personal interest in, and I understand that it might draw people away from other sessions. However, it will definitely draw some audience in, and I hope those people will thank me for the additional information/suggestions/guidance.

So, let's start:
SQL Saturday #560, Charlotte, NC

Event date: September 17th, 2016.
Session schedule: http://www.sqlsaturday.com/560/Sessions/Schedule.aspx

Slot #1. 9:00 A.M. - 10:00 A.M.

In that slot there is ONLY ONE session I'm going to visit. That is my own session, "Inside of Indexes". I'll be talking about:
- What indexes are;
- How SQL Server uses different types of indexes to retrieve data in the best possible way;
- What Page Splits are and how they impact performance;
and some more.
I hope my session will be very interesting, but I'm afraid that the people interested in indexing will be split between my session and Jeffrey Garbus' session "Indexing for performance".

If I didn't have my session in the very first slot, I'd definitely go to Aaron Nelson's session "Achieve More in Less Time by Using the New SQL PowerShell".

Slot #2. 10:15 A.M. - 11:15 A.M.

In the second slot, the winner, without a doubt, is Adam Machanic's "SQL Server 2016 Hidden Gems". The room has to be big enough to accommodate all attendees. If you desperately want to attend his session, I'd suggest coming to the previous session in that slot's room, Angela Henry's "Lightning Talks", to secure your seat.

Even though Adam Machanic will be the shining star of that slot, I won't go to his session because I've already attended it at another event.

The second session in priority is Rick Heiges' "Winning with Columnstore - A Customer Story". Since Microsoft nailed down Columnstore in SQL Server 2016, and because of Rick's HIP (High Influential Power), that would be the session to visit.

Despite all the advantages of Rick's presentation, I won't go there. The trick is: I want to attend his session two weeks later, at SQL Saturday #545 in Pittsburgh.
So, I'll have a tough choice between Rushabh Mehta's "Mastering and Enhancing Company Data" and William Assaf's "SQL Server Permissions and Security Principals".

Slot #3. 11:30 A.M. - 12:30 P.M.

The best, "Must Go" session in the third slot is Geoff Hiten's "What’s your Imperative doing in my Declarative?". The name is not very descriptive, but that session can have an inventor's-light-bulb effect on any beginner SQL developer who just graduated from college with most of their experience in Java and C#. I guarantee there will be an "AHA! moment" during that session, one that will shift the paradigm of your programming experience for life.

So, because I already know all of that, I won't go to that session.
Next in line is Tom Norman's "Encrypting Data within Sql Server". I'm from the DC region, and "Security" here is always the #1 priority.

Slot #4. 1:45 P.M. - 2:45 P.M.

The after-lunch session. There will be a lot of other performance tuning topics I like, but my choice will be Brian Carrig's session "Managing Very Large Databases with SQL Server".

Slot #5. 3:00 P.M. - 4:00 P.M.


Kevin Feasel's "APPLY Yourself", about the art of using the APPLY operator, should be a very good session.
Jonathan Buller's "Tips and Tricks for the SSIS Catalog" should also be wonderful, covering the new abilities for managing SSIS packages directly from SQL. That topic is not yet fully covered in the SQL community, and I'm waiting for the time when we will discuss parallel execution of multiple queries using the SSIS catalog, and SSIS packages addressing their own scheduling by manipulating SSISDB.

Because it will be the last slot of sessions and everybody, including me, will feel pretty tired, I'll probably go and relax listening to "Replication v. AlwaysOn, the great debate", hosted by Chris Skorlinski and Curt Mathews. They encourage attendees to bring their own questions to the discussion, so I assume several very good blog posts might be delivered based only on that conversation.

See you in Charlotte!
And don't hesitate to smile when I take your picture.

Wednesday, August 24, 2016

Four types of SQL Server Authentication

SQL Server has four types of user authentication.
To illustrate this I'll run the following query on my test box:
SELECT authentication_type, name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE principal_id between 5 and 16383;

It gives me the following result:

So, there are four authentication types:
- 0 - NONE;
- 1 - INSTANCE;
- 2 - DATABASE;
- 3 - WINDOWS;

#1 - INSTANCE and #3 - WINDOWS are the most common authentication types: they are simply SQL and Windows authentication.

Let's review the more exotic ones: #0 - NONE and #2 - DATABASE.

SQL Server User with "NONE" authentication:

"TestUser0" with NONE authentication is just a database user without SQL Server Login.
That means that user can't authenticate into a SQL Server. Because of that it can't be assigned to any person or an application using SQL Server.

That user account can be used within "EXECUTE AS" in functions, stored procedures and triggers.
That user account can own schemas and have privileges and permissions, which will allow restricted user to have an access to objects and actions via stored procedures or functions, which she/he is not granted to see or perform.

For example, I can try to select from a newly created table as user "TestUser1".
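(The example assumes a low-privileged database user "TestUser1" already exists; a minimal setup for the demo, also created without a login, might be:)

USE TestDB;
GO
CREATE USER [TestUser1] WITHOUT LOGIN;
GO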

USE TestDB;
GO
SELECT 1 as ID INTO tbl_TestTable;
GO
EXECUTE AS USER = 'TestUser1' ;
GO 
SELECT * FROM tbl_TestTable;
GO
REVERT ;
GO 

I'll get the following error:
Msg 229, Level 14, State 5, Line 417
The SELECT permission was denied on the object 'tbl_TestTable', database 'TestDB', schema 'dbo'.

However, if I create a stored procedure with EXECUTE AS for user "TestUser0" and grant access to that procedure to "TestUser1", then that user will be able to successfully see the contents of the new table:
GRANT SELECT ON tbl_TestTable TO TestUser0;
GO
CREATE PROCEDURE usp_ViewTestTable
WITH EXECUTE AS 'TestUser0' AS
SELECT * FROM tbl_TestTable;
GO
GRANT EXECUTE ON usp_ViewTestTable TO TestUser1;
GO
EXECUTE AS USER = 'TestUser1';
GO 
EXEC usp_ViewTestTable;
GO
REVERT ;
GO 

Creating a user with NONE authentication is pretty simple; you just specify that you want to create a user without a SQL Server login:

USE TestDB;
GO
CREATE USER [TestUser0]  WITHOUT LOGIN;
GO

SQL Server User with "DATABASE" authentication:

That is an absolutely unique type of authentication.
At first, when I tried to connect to SQL Server as "TestUser2", I got an error:

Then I went to the additional connection "Options":

And explicitly specified the name of the database I want to connect to:

That did the trick. I was able to connect and see my database:

Wow! I can see ONLY the one database I'm assigned to.

I tried to check what else I could see:
SELECT * FROM sys.databases;

And here is the list of databases I can really see:
Along with "TestDB" I'm exposed to "master" and "tempdb".
I can see data from tables in "master", I can create "#Temp" tables, but can write directly only to "TestDB".
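The same rule applies outside SSMS: command-line tools also have to name the database explicitly. A sketch with sqlcmd (the server name is a placeholder):

sqlcmd -S <ServerName> -U TestUser2 -P TestUser2 -d TestDB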

Now, here is how to set up a "DATABASE"-authenticated user. It is pretty simple; just specify a user name in the current database, with a password:
USE TestDB;
GO
CREATE USER [TestUser2]  WITH PASSWORD = 'TestUser2';
GO

Actually, that is not quite all. Before you are able to set up a database-authenticated user, you have to switch your database into partial containment mode, and to do so, you have to enable contained database authentication at the server level first:
USE master
GO
sp_configure 'show advanced options', 1;
GO
RECONFIGURE
GO
sp_configure 'CONTAINED DATABASE AUTHENTICATION', 1;
GO
RECONFIGURE
GO
sp_configure 'show advanced options', 0 ;
GO
RECONFIGURE
GO
ALTER DATABASE TestDB SET CONTAINMENT = PARTIAL;
GO
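To verify that the change took effect, check the containment status in sys.databases (TestDB should show PARTIAL):

SELECT name, containment_desc FROM sys.databases WHERE name = 'TestDB';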

Looks cool?
Yep, it is really cool that you can COMPLETELY separate a user from ANY OTHER database on your server!!!

BUT. There is always a "BUT". There is no "Silver Bullet" and no "Free Cheese".
Partially contained databases cannot use replication, change data capture, change tracking, and some other features.

The good news: you are good to go with AlwaysOn.

Conclusion:

Because contained database users can easily coexist with all other types of users, and they are highly restricted within the boundaries of one database, they can be (since SQL Server 2012) a very good addition to your tool set of other security features.

MSDN links for further research:

Contained Databases: https://msdn.microsoft.com/en-us/library/ff929071.aspx
Contained Database Users: https://msdn.microsoft.com/en-us/library/ff929188.aspx
Create a Database User: https://msdn.microsoft.com/en-us/library/aa337545.aspx

Tuesday, August 23, 2016

Shrinking database File with a help of "Spatial Fragmentation Report"

There are more than "many" questions on the internet from people who can't reclaim free unused space from their data files.

In this post I'll look at that problem using a "Spatial Fragmentation Report".

First, let's identify the problem:
USE [TestFF];
GO
SELECT
Name, physical_name,
    size/128. as "File Size, Mb",
    FILEPROPERTY(name, 'SpaceUsed')/128. as "Space Used, Mb"
FROM sys.database_files
WHERE file_id = 1;

As you can see, my database is 10Gb and only 1.6Gb is in use.

Let's try to fix that problem the conventional way, by trying to shrink the database to 20% of its current size:
DBCC SHRINKDATABASE('TestFF',20);

As a result, I got the following message:

DBCC SHRINKDATABASE: File ID 1 of database ID 5 was skipped because the file does not have enough free space to reclaim.
DBCC SHRINKDATABASE: File ID 2 of database ID 5 was skipped because the file does not have enough free space to reclaim.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

What is that??? I have about 85% of free space in File 1!!! What is the problem?

Let's try to target the individual file and reduce its size to ~2Gb:
DBCC SHRINKFILE('TestFF',2000);

After waiting for almost 2 minutes, I got the following error:
Msg 0, Level 11, State 0, Line 144
A severe error occurred on the current command.  The results, if any, should be discarded.

Then I tried to reduce the file size by whatever amount was possible:
DBCC SHRINKFILE('TestFF');
This time, almost immediately, I got the following result:

Checking the size again shows very strange behavior: the size of the file has not been reduced, but the amount of USED space went down by 300Mb:

Have you ever been in this situation before, when no matter what you do, you can't shrink your database?

So, now it is time to see WHY SQL Server can't shrink our file.
I use the following script to see object allocations/fragmentation in a data file:

IF Object_ID('tempdb..#Used_Pages_List') Is not Null
 DROP TABLE #Used_Pages_List;
GO
SELECT object_id, allocated_page_page_id as page_id INTO #Used_Pages_List
FROM sys.dm_db_database_page_allocations(DB_ID(), NULL, NULL, NULL, NULL) as a
WHERE partition_id = 1
GO
CREATE CLUSTERED INDEX #CLIX_Used_Pages_List ON #Used_Pages_List(object_id,page_id)
GO
DECLARE @i INT = 0;  -- current object_id while looping
DECLARE @m INT = (SELECT MAX(object_id) FROM #Used_Pages_List);
DECLARE @g TABLE(Alloc GEOMETRY, Table_Object SYSNAME, object_id INT);
DECLARE @LineMegabytes INT = (SELECT Size/(64000)+1 FROM sys.database_files WHERE file_id = 1); -- Mb represented by one diagram row
DECLARE @d INT = 128*@LineMegabytes; -- pages per diagram row (128 pages = 1 Mb)
DECLARE @DBSize INT = (SELECT Size FROM sys.database_files WHERE file_id = 1) / @d + 1;
DECLARE @t TABLE(ID INT, MessageText VARCHAR(100));

INSERT INTO @t(ID,MessageText) VALUES
(1,'Scale: 1 Square 10x10 = 1 Page (8 Kb)'),
(2,'Row: ' + CAST(@LineMegabytes as VARCHAR) + ' Mb'),
(3,'Vertical: 100 points = ' + CAST(10*@LineMegabytes as VARCHAR) + ' Mb')

SELECT * FROM @t ORDER BY ID;

WHILE @i < @m
BEGIN
 SELECT @i = MIN(object_id) FROM #Used_Pages_List
 WHERE @i < object_id

 PRINT CONVERT(VARCHAR,@i) 

 INSERT INTO @g(object_id, Table_Object, Alloc) 
 SELECT @i, '[' + Object_Schema_Name(@i)+ '].[' + Object_Name(@i) + ']'
  , CONVERT(GEOMETRY,'POLYGON(' + SUBSTRING( (
  SELECT 
   ',(' 
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ','
   + CONVERT(VARCHAR, (page_id % @d+1) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ','
   + CONVERT(VARCHAR, (page_id % @d+1) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 - 10) + ','
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 - 10) + ','
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ')'
  FROM #Used_Pages_List
  WHERE @i = object_id
  FOR XML PATH ('')
 ),2,@@TEXTSIZE) + ')');
END

SELECT object_id, Table_Object, Alloc FROM @g
UNION ALL
SELECT 0, 'Database Size', CONVERT(GEOMETRY,'LINESTRING(0 ' 
 + CONVERT(VARCHAR, @DBSize * -10) + ', ' + CONVERT(VARCHAR, @d * 10) + ' ' + CONVERT(VARCHAR, @DBSize * -10) + ')')
ORDER BY object_id
GO

Here is the result for my database:
As you can see, I definitely have a lot of free space, but my data is so spread across the file, especially out toward its border, that there is no way to make the file smaller.

If we zoom in at the very tail, we can figure out the names of the tables at the very end of the file, which prevent the file from shrinking:

After I performed an "Index Rebuild" for all user tables, the page allocation picture changed to this:
As you can see, most of the tables were moved to the beginning of the database, but some are still wandering somewhere in the middle. Let's zoom in and figure out which tables might still prevent shrinking of the file:
Unfortunately, it is a system table, and we can do nothing about it.
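(For reference, the "Index Rebuild" step above can be scripted in one line with the undocumented sp_MSforeachtable procedure; treat this as a convenience sketch, not a production-grade maintenance routine:)

EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD;';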

However, let's try to shrink the file again:
DBCC SHRINKFILE('TestFF',2000);

This time I was luckier: not only did I not get an error, but my file actually shrank:


Here is how fragmentation looked after the Shrink:

The main conclusion of this exercise: if you struggle with shrinking your database, look at your index fragmentation; it can cause errors and the inability to shrink files.

Monday, August 22, 2016

How to Copy a file using SQL Server in three easy steps

It might happen that you can't access SQL Server's shared drives, but you desperately need to copy a file from the server to another location.

Here is the solution I have:

Step 1. Reading a file.

First, you have to read the file you want to copy into SQL Server, which means you have to choose a database to hold it. It can be a test database, a newly created database, or even TempDB. There are only two requirements for the database:
- It must not be a production database;
- It should have enough space to accommodate the file you want to copy.

So, as the first step in copying, we create a table in TempDB and read the backup file into that table:

USE tempdb
GO
DROP TABLE IF EXISTS tbl_StoredFile;
GO
CREATE TABLE
tbl_StoredFile(StoredFile VARBINARY(MAX));
GO
INSERT INTO tbl_StoredFile(StoredFile) SELECT BulkColumn
FROM OPENROWSET(BULK'C:\Backup\AdventureWorks2014.bak', SINGLE_BLOB) AS x;
GO
sp_spaceused tbl_StoredFile;
GO 


Step 2. Extract a File.

From this point you step out of SSMS and run shell commands.
First, press Win+R, type "CMD", and press Enter.
That will open a new "CMD.EXE" window.
You have to choose a folder on your computer where you will extract your file.
You also have to provide the SQL Server name and credentials:
bcp "SELECT StoredFile FROM tempdb.dbo.tbl_StoredFile;" queryout "L:\Temp\AdventureWorks2014.bak" -S <ServerName> -U <UserName>
- After you fill in the server name and user name and press Enter, the BCP utility will ask you for your password.
- Then it will ask you to "Enter prefix-length of field StoredFile [8]:" - the default is "8", but you have to enter "0" and press Enter.
- Then it will ask you to "Enter length of field StoredFile [0]:" - just press Enter;
- Then it will ask you to "Enter field terminator [none]:" - just press Enter;
- Then it will ask you whether you want to save this format information - answer "n" (or "y", see below) and press Enter.
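(Two variations worth knowing, assuming your Windows account has access: "-T" uses a trusted connection instead of "-U"; and if you answer "y" to the last prompt and save the format file, later runs can skip all the prompts by passing it back with "-f":)

bcp "SELECT StoredFile FROM tempdb.dbo.tbl_StoredFile;" queryout "L:\Temp\AdventureWorks2014.bak" -S <ServerName> -T
bcp "SELECT StoredFile FROM tempdb.dbo.tbl_StoredFile;" queryout "L:\Temp\AdventureWorks2014.bak" -S <ServerName> -T -f bcp.fmt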

As a result, the file will be extracted to your file system.

Step 3. Cleanup.

Just clean up the space in your database by dropping the table:
USE tempdb
GO
DROP TABLE tbl_StoredFile;

As you can see, it is extremely easy to copy files using SQL Server; you just need to have enough free space in a database to load the file.

Tuesday, August 16, 2016

Redirecting Linked Server

Have you ever been in a situation where all your SQL code refers to linked server XYZ, and now you have to migrate it to linked server ABC?

What do you have to do? Rewrite all the stored procedures, functions, views, and triggers that use the old server name?

Not necessarily. You can redirect your old linked server name to another location.

Let's say you have the following linked server:
SELECT * FROM sys.servers
WHERE server_id > 0;

In my example, my linked server has an IP-address name, "192.168.1.103", and points to the same SQL Server host.

If we want to redirect that linked server to another SQL Server host, we can do the following:
EXEC sp_setnetname
    @server = N'SQL2016_01', -- Original Linked Server Name
    @netname = N'SQL2016_02'; -- Newly redirected SQL Server

After running that statement, we will see the following picture:

That means ALL of our code that uses the OLD linked server name now points to the new SQL Server.

Be very careful with that tool. Potentially, you can create a cross-reference like this:

Linked Server "SQL2016_01" is now referring to actual SQL Server "SQL2016_02" and Linked Server "SQL2016_02" is now referring to actual SQL Server "SQL2016_01".
That is a complete mess, which can lead not only to loss of or discrepancies in the data, but easily to the loss of a job.

Monday, August 15, 2016

Seven steps to Welcome a Consultant.

In this post I want to talk about the hard life of SQL Server consultants, who are hired by companies to perform SQL Server performance troubleshooting, and about their interactions with the SQL DBA and sysadmin.

This should be a guideline for SQL DBAs on how to set up access for temporarily hired contractors.

Step #1. Mr. Consultant comes to the office. The sysadmin creates an AD account so he can log in to the network. The SQL DBA creates a login for the consultant to log in to the server.

If the SQL Server allows "SQL Server Authentication", then the DBA can create a local SQL login for the new user:
USE [master]
GO
CREATE LOGIN [MrConsultant] WITH PASSWORD=N'MrConsultant20160815' MUST_CHANGE,
DEFAULT_DATABASE=[tempdb], CHECK_EXPIRATION=ON, CHECK_POLICY=ON 
GO
Just note that the default database is "tempdb", not "master". That is good practice for any new SQL login account.

If "SQL Server Authentication" is not allowed, then DBA can create new Login based on AD account:
USE [master]
GO
CREATE LOGIN [Domain\MrConsultant] FROM WINDOWS WITH DEFAULT_DATABASE=[tempdb]
GO

First, Mr. Consultant wants to look through the SQL Server log:
EXEC sys.xp_readerrorlog;
However, it generates the following error:
Msg 229, Level 14, State 5, Procedure xp_readerrorlog, Line 1 [Batch Start Line 39]
The EXECUTE permission was denied on the object 'xp_readerrorlog', database 'mssqlsystemresource', schema 'sys'.


In order to grant the right to see the error log, the DBA has to create a user in the "master" database and give rights to execute "xp_readerrorlog".

Step #2. Giving rights to execute "xp_readerrorlog".

USE [master]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant]
GO
GRANT EXECUTE ON xp_readerrorlog TO [MrConsultant]
GO

Then the consultant tries to execute several commands to get an idea of what is going on on the server:
SELECT * FROM sys.dm_exec_connections
SELECT TOP 10 * FROM sys.dm_exec_cached_plans
SELECT TOP 10 * FROM sys.dm_exec_query_stats
SELECT * FROM sys.dm_io_virtual_file_stats(NULL, NULL)
SELECT * FROM sys.dm_server_services
SELECT * FROM sys.dm_os_sys_info
SELECT * FROM sys.dm_os_windows_info
SELECT * FROM sys.dm_os_buffer_descriptors
SELECT * FROM sys.dm_os_sys_memory
SELECT * FROM sys.dm_os_process_memory
SELECT * FROM sys.dm_os_performance_counters
However, he/she immediately gets the following error for each of these selects:
Msg 300, Level 14, State 1, Line 3
VIEW SERVER STATE permission was denied on object 'server', database 'master'.
Msg 297, Level 16, State 1, Line 3
The user does not have permission to perform this action.


So, the SQL DBA has to solve the problem.

Step #3. Giving "VIEW SERVER STATE" permissions.

To give "VIEW SERVER STATE" permissions DBA has to execute following statement:
USE [master]
GO
GRANT VIEW SERVER STATE TO [MrConsultant];
GO

To see the list of databases, their sizes, file names, and locations, the consultant has to run the following query:

SELECT * FROM sys.master_files
However, there is nothing in return.
The DBA has to grant permissions to see the databases' definitions:

Step #4. Granting databases' definition access.

USE [master]
GO
GRANT VIEW ANY DEFINITION TO [MrConsultant];
GO

The main production database in the system is "AdventureWorks2014". The consultant tries to access that database but gets the following error:
Msg 916, Level 14, State 1, Line 20
The server principal "MrConsultant" is not able to access the database "AdventureWorks2014" under the current security context.


So, the DBA has to permit the consultant to access the database.

Step #5. Giving database access.

The administrator has to create a user for the consultant in the database:

USE [AdventureWorks2014]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant]
GO

In order to troubleshoot a query, Mr. Consultant runs the following statement:
DBCC SHOW_STATISTICS('Person.Person','PK_Person_BusinessEntityID');
And gets the following errors:
Msg 229, Level 14, State 5, Line 34
The SELECT permission was denied on the object 'Person', database 'AdventureWorks2014', schema 'Person'.
Msg 2557, Level 16, State 7, Line 34
User 'MrConsultant' does not have permission to run DBCC SHOW_STATISTICS for object 'Person.Person'.


The consultant needs to be able to read objects within the database.

Step #6. Granting SELECT access.

The administrator has to grant "SELECT" rights to the consultant in the database:
USE [AdventureWorks2014]
GO
GRANT SELECT TO [MrConsultant]
GO
SORRY, the contractor has to be able to see your underlying data. If you need to keep the data secure, encrypt it.

Now Mr. Consultant wants to see the actual query plan generated by a query:
SELECT * FROM Person.Person;

And gets an error:
Msg 262, Level 14, State 4, Line 30
SHOWPLAN permission denied in database 'AdventureWorks2014'.

The DBA has to allow him/her to see an execution plan.

Step #7. Granting Showplan access.

USE [AdventureWorks2014]
GO
GRANT SHOWPLAN TO [MrConsultant]
GO


From this point, the consultant has most of the rights needed for performance and error troubleshooting.
He/she does not have any rights to:
- Insert/Update/Delete any record;
- Create/Alter/Drop any database object;
- Create/Drop a login or user;
- Grant/Deny/Revoke any permissions;
- Run DBCC commands such as "CHECKDB", "FREEPROCCACHE", "DROPCLEANBUFFERS", "PAGE", etc.
A quick way to double-check the resulting rights is sketched below.
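To verify what the consultant can actually do, the DBA can impersonate the login and list its effective server-level permissions (a quick sketch using fn_my_permissions):

EXECUTE AS LOGIN = 'MrConsultant';
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
REVERT;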

That is the minimal level of trust you can maintain as a DBA with an external contractor, while still allowing him/her to perform the required research.

In case the contractor is asked to make any changes, the DBA can assist with object creation or grant individual permissions on the specific objects.

At the end, let's combine all the scripts into one. If the contractor needs access to more than one database, you have to repeat the last three steps for every database he/she needs access to:

USE [master]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant];

GO
GRANT EXECUTE ON xp_readerrorlog TO [MrConsultant];

GO
GRANT VIEW SERVER STATE TO [MrConsultant];
GO
GRANT VIEW ANY DEFINITION TO [MrConsultant];
GO
USE [AdventureWorks2014]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant];
GO
GRANT SELECT TO [MrConsultant];
GO
GRANT SHOWPLAN TO [MrConsultant];
GO

Step #Last One. After the Consultant is done...

After the consultant has helped your company solve the problem, do not forget to clean up after him/her:
USE [AdventureWorks2014]
GO
DROP USER [MrConsultant];
GO
USE [master]
GO
DROP USER [MrConsultant];
GO
DROP LOGIN [MrConsultant];
GO



Monday, August 8, 2016

Four ways to filter results of the fn_dblog function.

Some of you might know/use the undocumented SQL Server function "fn_dblog" to look at a SQL Server database log.

The syntax of its usage is

SELECT * FROM fn_dblog (NULL, NULL);

That is pretty easy to use if you are in the SIMPLE recovery model and just doing some testing/research, but what if you are looking at a production database with a huge log file?

Here are four ways to filter the results to make research and troubleshooting more manageable:

Solution #1. Copy the log file content into a tempdb table or into another database:

SELECT * INTO #tbl_DBLog
FROM fn_dblog (NULL, NULL);
GO
SELECT * INTO tempdb.dbo.tbl_DBLog
FROM fn_dblog (NULL, NULL);
GO
SELECT * INTO TestDB.dbo.tbl_DBLog
FROM fn_dblog (NULL, NULL);
GO

In this sample code I copy the content into a temp table, a permanent table in tempdb, and a permanent table in TestDB. By doing this you have to be aware of the following:
1. Never copy transaction log data into the same database you are researching. By doing that you will change the log.
2. Be aware of the size of the data. It might take a long while to extract the whole log, and the amount of space required for the table will exceed the amount of space used by the log file.

If you go down this path, you can then easily select the needed data using a WHERE clause, for example as shown below.
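A sketch of such a filter against the copied table (the column names are the same ones fn_dblog returns):

SELECT [Current LSN], [Operation], [Transaction Name], [Begin Time]
FROM tempdb.dbo.tbl_DBLog
WHERE [Transaction Name] IN ('INSERT','DELETE','UPDATE');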

Solution #2. Extract only needed columns.

Function "fn_dblog" returns about 130 columns. It is obvious you do not need all of them.
You can limit an output by selecting only certain columns:
SELECT [Current LSN]
      ,[Operation]
      ,[Context]
      ,[AllocUnitName]
      ,[Transaction Name]
      ,[Description]
      ,[Begin Time]
      ,[End Time]
      ,[Prepare Time]
      ,[Lock Information]
      ,[Checkpoint Begin]
      ,[Checkpoint End]
      ,[RowLog Contents 0]
      ,[RowLog Contents 1]
      ,[RowLog Contents 2]
      ,[RowLog Contents 3]
      ,[RowLog Contents 4]
      ,[RowLog Contents 5] 
FROM fn_dblog (NULL, NULL);

However, if you have millions and millions of records in the log file, this still might not be an appropriate solution.

Solution #3. Search for particular operation.

For instance, if you want to search for any changes in your database, you can run the following script:
SELECT [Current LSN]
      ,[Transaction Name]
      ,[Begin Time]
FROM fn_dblog (NULL, NULL) 
WHERE [Transaction Name] in ('INSERT','DELETE','UPDATE');

In my test DB I got the following:
In that smaller data set you can locate a possible transactional troublemaker, but in order to get the details, you would still need to extract the WHOLE log file content.

Here comes my last script, which extracts log records for ONLY the needed transaction:

Solution #4. Extract only needed LSNs.

Let's say we need to extract the information associated with an "UPDATE" for LSNs starting at "0000004f:00000087:0001". You can just specify the starting and ending LSNs as "fn_dblog" parameters:
DECLARE
    @SLSNA BIGINT = 0x4f, -- start LSN, part 1 (hex "0000004f")
    @SLSNB BIGINT = 0x87, -- start LSN, part 2 (hex "00000087")
    @SLSNC BIGINT = 0x1,  -- start LSN, part 3 (hex "0001")
    @ELSNA BIGINT = 0x4f, -- end LSN parts, same structure
    @ELSNB BIGINT = 0x88,
    @ELSNC BIGINT = 0x1
-- Pack the three parts into the decimal format fn_dblog expects:
-- part 2 is padded to 10 decimal digits, part 3 to 5.
DECLARE
    @SLSN BIGINT = (@SLSNA * 10000000000 + @SLSNB) * 100000 + @SLSNC,
    @ELSN BIGINT = (@ELSNA * 10000000000 + @ELSNB) * 100000 + @ELSNC
SELECT [Current LSN]
      ,[Operation]
      ,[Context]
      ,[AllocUnitName]
      ,[Transaction Name]
      ,[Begin Time]
      ,[End Time]
      ,[Lock Information]
FROM fn_dblog (@SLSN,@ELSN);

That portion of code will return ONLY the log records between LSNs "0000004f:00000087:0001" and "0000004f:00000088:0001".

That means you do not have to read the entire log into a temp or staging table, and you do not have to scan the entire log again and again to do your research.

Friday, August 5, 2016

Top 10 SQL Server queries in Graphical form


My latest spatial-data post, SQL Server CPU utilization in Graphical form, was pretty popular, and I've decided to continue doing SQL Server reporting using the spatial capabilities of SSMS.

This time I want to show a query that reports the top 10 SQL Server queries in your system.
Let's start with a diagram:


At first sight there is nothing interesting, until you learn how to interpret the diagram.
The diagram reports three main parameters of the SQL Server queries stored in the query plan cache:

1. The horizontal coordinate reports the amount of CPU used by a particular query. It goes from left to right. On this diagram the winner is query #1; its execution time is way higher than for any other query. Obviously, query #1 is my CPU eater.

2. The vertical coordinate reports the amount of I/O used by a particular query. It goes from bottom to top. On this diagram the winner by I/O is query #2; its I/O usage is higher than for any other query. That is my biggest I/O consumer.

3. The size of a circle indicates the number of query executions. Here my winner is query #3. That parameter might not represent my resource usage well, but it is something I have to be aware of.

Now, here is the query that produces that diagram:

;WITH total_worker_time as (
 SELECT TOP 10 WITH TIES query_hash
  , execution_count
  , total_worker_time
  , total_logical_reads + total_logical_writes as Total_IO
  , total_elapsed_time
  , total_logical_reads
  , total_logical_writes
  , total_physical_reads
  , max_worker_time
  , max_logical_reads
  , max_logical_writes
  , max_physical_reads
  , max_elapsed_time
  , max_rows
 FROM sys.dm_exec_query_stats qs WITH (NOLOCK)
 ORDER BY total_worker_time DESC
), Total_IO as (
 SELECT TOP 10 WITH TIES query_hash
  , execution_count
  , total_worker_time
  , total_logical_reads + total_logical_writes as Total_IO
  , total_elapsed_time
  , total_logical_reads
  , total_logical_writes
  , total_physical_reads
  , max_worker_time
  , max_logical_reads
  , max_logical_writes
  , max_physical_reads
  , max_elapsed_time
  , max_rows
 FROM sys.dm_exec_query_stats qs WITH (NOLOCK)
 ORDER BY total_logical_reads + total_logical_writes DESC
), CPlans AS (
 SELECT *, ROW_NUMBER() over(order by total_worker_time ) as Weight FROM total_worker_time  UNION ALL
 SELECT *, ROW_NUMBER() over(order by Total_IO ) as Weight FROM Total_IO
), PlanRadius as (
SELECT DISTINCT query_hash
 , execution_count as exec_count
 , total_worker_time/1000000. AS total_worker_time
 , total_elapsed_time/1000000. AS total_elapsed_time
 , Total_IO/128. as Total_IO
 , total_logical_reads/128. as total_logical_reads
 , total_logical_writes/128. as total_logical_writes
 , total_physical_reads/128. as total_physical_reads
 , max_worker_time/1000000. AS max_worker_time
 , max_logical_reads/128. as max_logical_reads
 , max_logical_writes/128. as max_logical_writes
 , max_physical_reads/128. as max_physical_reads
 , max_elapsed_time/1000000. as max_elapsed_time
 , max_rows
FROM CPlans as cp
WHERE cp.query_hash in (SELECT TOP 10 WITH TIES query_hash FROM CPlans GROUP BY query_hash ORDER BY SUM(Weight) DESC)
)
SELECT ROW_NUMBER() over(order by total_worker_time DESC) as Plan_ID
 , exec_count as [# of Executions]
 , CAST(total_worker_time as DECIMAL(9,3)) as [Worker time, Sec]
 , CAST(Total_IO/1024. as DECIMAL(9,3)) as [Total IO, Gb]
  , CAST(max_worker_time as DECIMAL(9,3)) as [Max Worker time, Sec]
 , CAST(total_worker_time/exec_count as DECIMAL(9,3)) as [Avg Worker time, Sec]
 , CAST(total_elapsed_time as DECIMAL(9,3)) as [Elapsed time, Sec]
 , CAST(max_elapsed_time as DECIMAL(9,3)) as [Max Elapsed time, Sec]
 , CAST(total_elapsed_time/exec_count as DECIMAL(9,3)) as [Avg Elapsed time, Sec]
 , CAST(Total_IO/1024./exec_count as DECIMAL(9,3)) as [Average IO, Gb]
 , CAST(total_logical_reads/1024. as DECIMAL(9,3)) as [Logical Reads, Gb]
 , CAST(max_logical_reads/1024. as DECIMAL(9,3)) as [Max Logical Reads, Gb]
 , CAST(total_logical_reads/1024./exec_count as DECIMAL(9,3)) as [Avg Logical Reads, Gb]
 , CAST(total_logical_writes/1024. as DECIMAL(9,3)) as [Logical Writes, Gb]
 , CAST(max_logical_writes/1024. as DECIMAL(9,3)) as [Max Logical Writes, Gb]
 , CAST(total_logical_writes/1024./exec_count as DECIMAL(9,3)) as [Avg Logical Writes, Gb]
 , CAST(total_physical_reads/1024. as DECIMAL(9,3)) as [Physical Reads, Gb]
 , CAST(max_physical_reads/1024. as DECIMAL(9,3)) as [Max Physical Reads, Gb]
 , CAST(total_physical_reads/1024./exec_count as DECIMAL(9,3)) as [Avg Physical Reads, Gb]
 , query_hash
 , Diagram = CONVERT(GEOMETRY,'CURVEPOLYGON(CIRCULARSTRING('
 + CONVERT(VARCHAR,LOG(total_worker_time+1)*500 + LOG(exec_count+10)*5) + ' ' + CONVERT(VARCHAR,LOG(Total_IO+1)*100) + ','
 + CONVERT(VARCHAR,LOG(total_worker_time+1)*500) + ' ' + CONVERT(VARCHAR,LOG(Total_IO+1)*100 + LOG(exec_count+10)*5) + ','
 + CONVERT(VARCHAR,LOG(total_worker_time+1)*500 - LOG(exec_count+10)*5) + ' ' + CONVERT(VARCHAR,LOG(Total_IO+1)*100) + ','
 + CONVERT(VARCHAR,LOG(total_worker_time+1)*500) + ' ' + CONVERT(VARCHAR,LOG(Total_IO+1)*100 - LOG(exec_count+10)*5) + ','
 + CONVERT(VARCHAR,LOG(total_worker_time+1)*500 + LOG(exec_count+10)*5) + ' ' + CONVERT(VARCHAR,LOG(Total_IO+1)*100) + '))')
FROM PlanRadius as p
OPTION (RECOMPILE);

Besides the diagram, that query produces a list of the most crucial query characteristics:
As you can see, query #1 consumed more than 12.5 seconds of my CPU, query #2 used more than 2Gb of I/O, and query #3 has been executed more than 1K times.

As you can guess, and as you can see in the query, all parameters are presented on a logarithmic scale to keep sizes and coordinates in a manageable format.

What's next?

When we have identified the most annoying queries in our system, we can run another query to get the actual query texts and plans. You just have to use the value from the "query_hash" column to identify the query you want to research:

SELECT DB_NAME(st.dbid) DatabaseNm, st.objectid,
 qs.creation_time,
 cp.cacheobjtype,
 cp.objtype,
 qs.last_execution_time,
 OBJECT_NAME(st.objectid,st.dbid) AS ObjectName, st.TEXT AS SQLBatch,
 SUBSTRING(st.text,1+qs.statement_start_offset/2,
 (CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END
 - qs.statement_start_offset)/2) as Query_Text,
 CAST(qp.query_plan as XML) AS query_plan,
 qs.execution_count,
 cp.usecounts AS UseCounts,
 cp.refcounts AS ReferenceCounts,
 qs.max_worker_time,
 qs.last_worker_time,
 qs.total_worker_time,
 qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
 qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
 cp.size_in_bytes/1048576. AS SizeMb,
 qs.total_logical_reads, qs.last_logical_reads,
 qs.total_logical_writes, qs.last_logical_writes,
 qs.[plan_handle]
FROM sys.dm_exec_query_stats qs WITH (NOLOCK)
LEFT JOIN sys.dm_exec_cached_plans AS cp WITH (NOLOCK)
 on qs.[plan_handle] = cp.[plan_handle]
CROSS APPLY sys.dm_exec_sql_text(qs.[plan_handle]) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.[plan_handle],qs.statement_start_offset,qs.statement_end_offset) AS qp
WHERE qs.query_hash = 0x52b3b6cede6fc57a
OPTION (RECOMPILE);

Here is the result:

I can click on the query plan XML to get the query plan in graphical form:
Or I can copy/paste the query text into an SSMS window:

SELECT target_data           
FROM sys.dm_xe_session_targets xet WITH(nolock)           
JOIN sys.dm_xe_sessions xes WITH(nolock)           
ON xes.address = xet.event_session_address           
WHERE xes.name = 'telemetry_xevents'           
AND xet.target_name = 'ring_buffer'

I hope you find this useful for troubleshooting SQL Server performance.