
SQL SERVER Interview Questions & Answers – SET 2 (40 Questions & Answers)




Question11. While creating an index with "INCLUDE" columns, on what basis should we choose the key columns and the included columns?

Answer – A nonclustered index can be extended by including nonkey columns in addition to the index key columns. The nonkey columns are stored at the leaf level of the index B-tree.

The Syntax of a Non Clustered Index with Included column is given below


CREATE INDEX <Index_Name> ON <table> (KeyColumns) INCLUDE (NonKeyColumns)

  • KeyColumns – These columns are used for row restriction and processing, e.g. they appear in WHERE, JOIN, ORDER BY, and GROUP BY clauses.
  • NonKeyColumns – These columns are used in the SELECT list and in aggregations, e.g. AVG(col), after selection/restriction.

So always choose the KeyColumns and NonKeyColumns based solely on the requirements of the queries you need to support.
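For example, suppose we need to support the query below (the Employees table and its columns are hypothetical, used only for illustration):

```sql
-- Query to support:
--   SELECT LastName, Salary FROM Employees
--   WHERE DepartmentId = 5 ORDER BY LastName;

CREATE NONCLUSTERED INDEX IX_Employees_Dept
ON Employees (DepartmentId, LastName)  -- key columns: used in WHERE / ORDER BY
INCLUDE (Salary);                      -- nonkey column: only returned in SELECT
```

Salary goes into INCLUDE because the query never filters or sorts on it; keeping it out of the key keeps the index smaller while still making the index covering.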


Question12. What is the difference between pessimistic locking and optimistic locking?

Answer –

Source – http://dba.stackexchange.com/questions/35812/why-is-optimistic-locking-faster-than-pessimistic-locking

Let's start with a bank analogy.

Pessimistic locking is like having a guard at the bank door who checks your account number when you try to enter; if someone else is accessing your account, you cannot enter until that other person finishes his/her transaction and leaves.

Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It offers much better integrity than optimistic locking but requires careful application design to avoid deadlocks.

Optimistic locking, on the other hand, allows you to walk into the bank at any time and try to do your business, but at the risk that as you are walking out the door the bank guard will tell you that your transaction conflicted with someone else’s and you will have to go back and do the transaction again.

Optimistic Locking is a strategy where you read a record, take note of a version number, and check that the version hasn't changed before you write the record back. When you write the record back, you filter the update on the version to make sure it is atomic (i.e. the record hasn't been updated between when you checked the version and when you write it back to disk) and update the version in one hit.

If the record is dirty (i.e. a different version to yours), optimistic locking may cause the transaction to fail, but it does so without any "lock" ever having been taken. And if a transaction fails because of optimistic locking, the user is required to start all over again. The word "optimistic" derives from the expectation that such failures will occur only very exceptionally. "Optimistic" locking is the approach that says "I will not be taking actual locks because I hope they won't be needed anyway. If it turns out I was wrong about that, I will accept the inevitable failure."

This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next.
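A minimal T-SQL sketch of the optimistic pattern, assuming a hypothetical Accounts table with a ROWVERSION column:

```sql
-- Assumed hypothetical table:
--   CREATE TABLE Accounts (Id INT PRIMARY KEY, Balance MONEY, RowVer ROWVERSION);

DECLARE @Id INT = 1, @Balance MONEY, @RowVer BINARY(8);

-- 1. Read the record and remember its version.
SELECT @Balance = Balance, @RowVer = RowVer
FROM Accounts WHERE Id = @Id;

-- 2. Write it back, filtering on the remembered version.
UPDATE Accounts
SET Balance = @Balance - 100
WHERE Id = @Id AND RowVer = @RowVer;

-- 3. If nobody changed the row in between, exactly one row is updated;
--    otherwise the write conflicted and the caller must retry.
IF @@ROWCOUNT = 0
    RAISERROR('Optimistic concurrency conflict - please retry.', 16, 1);
```

No lock is held between the read and the write; the version filter on the UPDATE is what detects a conflicting change.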

Question13. How are VLFs created for tempdb?

Answer – VLFs are Virtual Log Files.

A transaction log stores every transaction made to a SQL Server database, although some operations, like BULK INSERT or SELECT INTO, are minimally logged.

Internally the transaction log is split into smaller parts called Virtual Log Files (VLFs). When one VLF becomes full, logging continues into the next available VLF in the transaction log. The transaction log can be thought of as a circular file: when logging reaches the end of the file, it starts again from the beginning, but only if all the requirements have been met and the inactive parts have been truncated.

The truncation process is necessary to mark all inactive parts so they can be reused and overwritten.


Every time space is allocated for the transaction log file (whether at initial creation or during log growth), new VLFs are created behind the scenes. The number of new VLFs is determined by the amount of space allocated.

  • If the space added is up to 64MB, then 4 new VLFs
  • If the space added is between 64MB and 1GB, then 8 new VLFs
  • If the space added is greater than 1GB, then 16 new VLFs

Use the query below to find the auto-growth and transaction log details.


SELECT
    name AS FileName,
    CAST(size*8/1024 AS VARCHAR(10))+'MB' AS Size,
    CASE is_percent_growth
      WHEN 1 THEN CAST(growth AS VARCHAR(10))+'%'
      ELSE CAST(growth*8/1024 AS VARCHAR(10))+'MB'
    END AS AutoGrowth
FROM sys.database_files WHERE type_desc = 'LOG'




Question14. Can you give some examples for One to One, One to Many and Many to Many relationships?

Answer – There are the following types of database relationships.

  • One to One Relationships
  • One to Many and Many to One Relationships
  • Many to Many Relationships

One to One Relationships

Let's say we have two tables, Customers and Addresses, with a relationship between them. If each address can belong to only one customer, this relationship is "One to One".


One to Many and Many to One Relationships

Let's say we have two tables, Customers and Orders, with a relationship between them. A customer can place multiple orders, but one order cannot belong to multiple customers. This kind of relationship is called a "One to Many" relationship.


Many to Many Relationships

Let's say we have two tables, Students and Teachers, with a relationship between them. One teacher can teach multiple students, and one student can be taught by multiple teachers. This kind of relationship is called a "Many to Many" relationship.

Note – Please note that this kind of relationship requires a third (junction) table to handle the relationship.
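The three relationship types can be sketched in DDL as below (table and column names are hypothetical, for illustration only):

```sql
-- One to One: each address row belongs to exactly one customer.
CREATE TABLE Customers (CustomerId INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Addresses (
    CustomerId INT PRIMARY KEY            -- PK doubles as FK: at most one address per customer
        REFERENCES Customers(CustomerId),
    Street NVARCHAR(200)
);

-- One to Many: a customer can place many orders.
CREATE TABLE Orders (
    OrderId    INT PRIMARY KEY,
    CustomerId INT REFERENCES Customers(CustomerId)  -- many orders point at one customer
);

-- Many to Many: handled via a third (junction) table.
CREATE TABLE Students (StudentId INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Teachers (TeacherId INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE StudentTeachers (
    StudentId INT REFERENCES Students(StudentId),
    TeacherId INT REFERENCES Teachers(TeacherId),
    PRIMARY KEY (StudentId, TeacherId)    -- each student/teacher pairing recorded once
);
```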



Question15. How to find all dependent objects of a table?

Answer –

There are multiple methods.

--Note "testP" is the name of the proc

*********************** Method 1 *************************************

sp_depends 'testP'

*********************** Method 2 *************************************

SELECT * FROM information_schema.routines
WHERE ROUTINE_DEFINITION LIKE '%testP%'

*********************** Method 3 *************************************

SELECT referencing_schema_name, referencing_entity_name,
referencing_id, referencing_class_desc, is_caller_dependent
FROM sys.dm_sql_referencing_entities ('testP', 'OBJECT');


sp_depends will be deprecated, and instead, sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities are recommended.
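For the other direction (objects that testP itself depends on, e.g. the tables it touches), the companion DMV can be used; a minimal sketch:

```sql
-- Objects that the proc 'testP' references
SELECT referenced_schema_name,
       referenced_entity_name,
       referenced_minor_name   -- column name, when applicable
FROM sys.dm_sql_referenced_entities ('dbo.testP', 'OBJECT');
```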

For details please refer below URL



Question16. How to filter nested stored procedure code from profiler?

Answer –

Nested stored procedures are stored procedures that call other stored procedure(s).

We can use SQL Server Profiler to peek into stored procedure execution details. Let's first create a nested stored proc.


************** Child Proc *******************

CREATE PROCEDURE ChildProc
AS
	SELECT 'ChildProc'

********** Parent Proc ************************

-- (parent proc name assumed as ParentProc)
CREATE PROCEDURE ParentProc
AS
	EXEC ChildProc


After creating the above stored procedures, execute the parent stored proc using the command below.

EXEC ParentProc

For that please follow the steps given below.

Step 1. Open SQL Server Profiler: click Start, type "SQL Server Profiler", and click on it.

Click on “Show all events”, then select stored procedures as shown below.


Select SP:StmtCompleted event class as shown below


Now execute the Main sp and check the trace below


We can see the complete trace, i.e. statements from both the parent SP and the child SP.


Question17.  What are the limitations on “SET ROWCOUNT”?

Answer –

Both TOP and SET ROWCOUNT are acceptable methods of limiting the result set of a query; however, they are very different commands. The TOP clause limits only the single statement it appears in, while SET ROWCOUNT limits all eligible queries within the connection until SET ROWCOUNT 0 is called.

This can be dangerous if you forget to reset ROWCOUNT.

Microsoft recommends that we should not use this style, as they plan to remove its effect on DML statements; Microsoft recommends the TOP clause instead.

If both a TOP clause and SET ROWCOUNT are used, SET ROWCOUNT overrides TOP when the ROWCOUNT value is smaller than the TOP value.

Example given below. The statements below will give us only 2 rows as output, because ROWCOUNT (2) is smaller than TOP (4).


SET ROWCOUNT 2
SELECT TOP 4 * FROM Approver


SET ROWCOUNT limits all the queries including triggers.

As a part of a SELECT statement, the query optimizer can consider the value of expression in the TOP or FETCH clauses during query optimization. Because SET ROWCOUNT is used outside a statement that executes a query, its value cannot be considered in a query plan.

This setting comes into play during execution and not at parse time.

If you want to reset it, then use the statement below.


SET ROWCOUNT 0

Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax.


Question18. What is the SQL Query Order of Operations? OR What are the Logical Query Processing Phases and their order in SQL statement execution?

Answer – SQL Server processes SQL statements in a logical order. We call these the Logical Query Processing Phases. The phases and their order are given below:

1. FROM
2. ON
3. JOIN
4. WHERE
5. GROUP BY
6. WITH CUBE / WITH ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP



You can also learn this in detail from below URL – (Itzik Ben-Gan – SQL Expert)

Click to access Logical%20Query%20Processing%20Poster.pdf
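The logical order can be illustrated by annotating a typical query (the Customers and Orders tables here are hypothetical):

```sql
SELECT TOP 5                       -- 8. SELECT, 11. TOP
       c.Name, COUNT(*) AS Cnt     --    (9. DISTINCT would apply here if present)
FROM Customers AS c                -- 1. FROM
JOIN Orders AS o                   -- 3. JOIN
  ON o.CustomerId = c.CustomerId   -- 2. ON
WHERE o.Amount > 100               -- 4. WHERE
GROUP BY c.Name                    -- 5. GROUP BY
HAVING COUNT(*) > 1                -- 7. HAVING
ORDER BY Cnt DESC;                 -- 10. ORDER BY
```

This is why, for example, a column alias defined in the SELECT list cannot be referenced in the WHERE clause: WHERE is processed before SELECT.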


Question19. What is .TUF file? What is the significance of the same? Any implications if the file is deleted?

Answer –

A TUF file is the Transaction Undo File. This file is created when log shipping is configured in SQL Server in standby mode. The file is located at the path where the transaction log files are saved.

This file contains the list of transactions that were uncommitted while the backup was taken on the primary server in log shipping. If the .tuf file gets deleted, there is no way to repair log shipping except by reconfiguring it from scratch.

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up, in the case where the log was restored to another database in a state that allows additional transaction log backups to be restored to it later. When another transaction log is restored, SQL Server uses the data from the undo file and the transaction log to continue restoring the incomplete transactions (assuming they were completed in the next transaction log file). Following the restore, the undo file is rewritten with whatever transactions are, at that point, incomplete.


Question 20. What is a deadlock and what is a livelock? How will you go about resolving deadlocks?

Answer –

A deadlock is a situation when two processes, each holding a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process (the deadlock victim).
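A classic two-session sketch that produces a deadlock (TableA and TableB are hypothetical; run each session in its own connection):

```sql
-- Session 1                                -- Session 2
BEGIN TRAN;                                 -- BEGIN TRAN;
UPDATE TableA SET Col = 1 WHERE Id = 1;     -- UPDATE TableB SET Col = 1 WHERE Id = 1;
-- Now each session tries to touch the row the other has locked:
UPDATE TableB SET Col = 2 WHERE Id = 1;     -- UPDATE TableA SET Col = 2 WHERE Id = 1;
-- SQL Server detects the cycle and terminates one session with error 1205;
-- the surviving session's COMMIT then succeeds.
COMMIT;                                     -- COMMIT;
```

Accessing objects in a consistent order across all code paths is the usual way to avoid this cycle in the first place.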

A livelock is one where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

A human example of a livelock would be two people who meet face-to-face in a corridor: each moves aside to let the other pass, but they end up moving from side to side without making any progress, because they always move the same way at the same time and never get past each other.