SQL Server Interview Questions

The top SQL Server interview questions and answers are right here! These expert-curated SQL Server interview questions and answers for experienced professionals will boost your knowledge and confidence in a short period of time. We have compiled a list of both intermediate and expert-level SQL Server interview questions, all answered by experts. Be prepared and get recruited by your dream employer. Convert your dream job interview into a new job offer.


Intermediate

A page is the fundamental unit of storage in SQL Server. Each page is 8 KB in size, and SQL Server uses the following types of pages:

  • Data
  • Index
  • Text/Image (LOB, ROW_OVERFLOW, XML)
  • GAM (Global Allocation Map)
  • SGAM (Shared Global Allocation Map)
  • PFS (Page Free Space)
  • IAM (Index Allocation Map)
  • BCM (Bulk Change Map)
  • DCM (Differential Change Map)

An identity column automatically generates an incrementing value with each new row inserted. Users cannot insert values into an identity column directly (unless IDENTITY_INSERT is enabled).

A sequence is a feature introduced with SQL Server 2012, similar to Oracle's sequence objects. A sequence object generates a series of unique numeric values according to its definition. The next value of a SEQUENCE object is retrieved using the NEXT VALUE FOR clause.

IDENTITY is a column-level property tied to a particular table. It cannot be shared among multiple tables.

SEQUENCE is a user-defined object with its own specification and can be shared by multiple tables. It is not tied to any particular table.

The IDENTITY property cannot be reset to its initial value without reseeding via DBCC CHECKIDENT, whereas a SEQUENCE object can be restarted at its initial value at any time.

A maximum value cannot be defined for IDENTITY, whereas it can be defined for a SEQUENCE object.
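
A minimal sketch contrasting the two; the table, column, and sequence names are illustrative:

-- IDENTITY: tied to one table, generated automatically on insert
CREATE TABLE dbo.Orders
(
    OrderID INT IDENTITY(1,1) PRIMARY KEY,   -- seed 1, increment 1
    OrderDate DATETIME NOT NULL
);

-- SEQUENCE: standalone object, shareable across tables, restartable
CREATE SEQUENCE dbo.OrderSeq
    START WITH 1
    INCREMENT BY 1
    MAXVALUE 999999;   -- a maximum can be defined, unlike IDENTITY

SELECT NEXT VALUE FOR dbo.OrderSeq;           -- fetch the next value
ALTER SEQUENCE dbo.OrderSeq RESTART WITH 1;   -- reset to the initial value any time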

DMVs (Dynamic Management Views) are a set of system views introduced with SQL Server 2005. DMVs give DBAs a window into the internal state of the system and its workings. Commonly used DMVs include (see the query sketch after this list):

  • sys.dm_exec_query_stats – Aggregate performance statistics for cached query plans
  • sys.dm_exec_sql_text – The SQL text of a batch, given its handle
  • sys.dm_os_buffer_descriptors – Pages currently held in the buffer pool
  • sys.dm_tran_locks – Locking and blocking
  • sys.dm_os_wait_stats – Wait statistics
  • sys.dm_exec_requests – Currently executing requests, including the percentage complete for a process
  • sys.dm_exec_sessions – Active sessions
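
For example, a common diagnostic query, shown here as a sketch, joins two of the DMVs above to list currently running requests with their SQL text and completion percentage:

SELECT r.session_id, r.status, r.command, r.percent_complete, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;   -- skip most system sessions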

SQL Server supports the following isolation levels:

  1. Read Uncommitted – The lowest isolation level; it allows dirty reads. Changes made by one transaction are visible to other transactions before they are committed.
  2. Read Committed – The default isolation level; it does not allow dirty reads. Any data read by the transaction is 100% committed. While a transaction is updating a record, it holds an exclusive lock on it.
  3. Repeatable Read – A more restrictive isolation level that holds shared locks on all data read until the transaction completes, so that data cannot be modified from any other session until the transaction finishes.
  4. Serializable – The highest isolation level, similar to Repeatable Read but with the additional prevention of phantom reads. It ensures transactions referencing the same records run serially.
  5. Snapshot – Snapshot isolation (and the related Read Committed Snapshot option, RCSI) does not hold locks on the table during the transaction, so the table can be modified in other sessions. Instead, old data is versioned in TempDB, which is called row versioning. See the sketch after this list.
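
The isolation level is set per session; a brief sketch, with illustrative table and database names:

-- Set the isolation level for the current session
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    SELECT * FROM dbo.Orders WHERE OrderID = 1;   -- shared locks held until commit
COMMIT TRANSACTION;

-- Snapshot isolation must first be enabled at the database level
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;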

Checkpoint

  • A checkpoint occurs at the recovery time interval, on BACKUP/DETACH commands, and on the execution of certain DDL commands.
  • A checkpoint writes only dirty pages to disk. It does not release any memory.
  • The checkpoint establishes the database recovery point.
  • A checkpoint always marks an entry in the transaction log before it executes, whether issued by the SQL engine or manually.
  • Checkpoint occurrence can be monitored using the performance counter "SQL Server: Buffer Manager – Checkpoint pages/sec".
  • You need SYSADMIN, DB_OWNER, or DB_BACKUPOPERATOR rights to execute a checkpoint manually (see the sketch below).
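
A manual checkpoint is issued with a single T-SQL command; the optional argument requests a target duration in seconds:

CHECKPOINT;      -- default behavior
CHECKPOINT 10;   -- try to complete within roughly 10 seconds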

Lazy Writer

  • The Lazy Writer releases buffer pool memory when memory pressure occurs, to ensure enough free memory.
  • The Lazy Writer looks for LRU (least recently used) pages in the buffer pool, i.e. "cold" pages not read or written recently, and releases the memory they occupy. It releases both dirty and clean pages: clean pages can be released without writing to disk, while dirty pages are released only after being written to disk.
  • The Lazy Writer is responsible only for managing free buffer space. It does not affect or manage database recovery.
  • The Lazy Writer does not mark any entry in the transaction log.
  • Lazy Writer occurrence can be monitored using the performance counter "SQL Server: Buffer Manager – Lazy writes/sec".
  • SQL Server runs it on its own; users cannot run it manually.

SQL Server setup installs multiple components and services, including:

  • SQL Server Database Engine
  • SQL Server Analysis Services (SSAS)
  • SQL Server Reporting Services (SSRS)
  • SQL Server Integration Services (SSIS)
  • SQL Server Management Studio
  • SQL Server Profiler
  • SQL Server Configuration Manager
  • SQL Server CEIP (Telemetry Services)
  • SSIS CEIP (Telemetry Services)
  • Database Tuning Advisor
  • R Services
  • Connectivity Components – communication between clients and servers; network libraries for DB-Library, ODBC, and OLE DB
  • Documentation and Samples – Books Online

SQL Server supports the following types of database backups:

  • Full Backup – Contains a complete backup of the database.
  • Differential Backup – Contains all changes made since the last full backup.
  • Log Backup – Contains a backup of the transaction log since the last log backup, including both committed and uncommitted transactions.
  • Copy-Only Backup – A special backup type that does not affect the sequence of regular backups and gives you a copy of the complete database.
  • Filegroup Backup – SQL Server can back up one or more filegroups in place of the complete database.
  • Partial Backup – Helps you back up specific data from different files or filegroups (see the sketches below).
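
Hedged examples of the common backup types; the database name and paths are illustrative:

BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Full.bak';                      -- full
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Diff.bak' WITH DIFFERENTIAL;    -- differential
BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_Log.trn';                            -- log
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Copy.bak' WITH COPY_ONLY;       -- copy-only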

SQL Server database backups are supported by all 3 recovery models:

  • Simple
  • Full
  • Bulk-logged

With compressed backups, SQL Server uses a pre-allocation algorithm to determine how much space the compressed backup will require, and allocates this space at the start of the backup process. When the backup completes, the backup process expands or shrinks the backup file based on the actual size required.

Pre-allocation saves SQL Server the overhead of constantly growing the backup file.

Sometimes, with a very large database where the difference between uncompressed and compressed backup sizes is very high, the pre-allocated and actual backup space differ significantly. Trace flag 3042 enables better space management, with a little overhead for SQL Server to grow the backup file regularly. This is useful when disk space is limited.

Enabling trace flag 3042 may cause a small spike in CPU and memory usage, but not enough to cause performance issues.
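
A sketch of enabling the trace flag globally and taking a compressed backup; the database name and path are illustrative:

DBCC TRACEON (3042, -1);   -- -1 applies the flag globally
BACKUP DATABASE MyDB
    TO DISK = 'D:\Backup\MyDB_Compressed.bak'
    WITH COMPRESSION;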

The pseudo-simple recovery model is situational: it is not something you configure explicitly, but something that arises from circumstances. It is a situation where the database is configured in the FULL recovery model but behaves like the SIMPLE recovery model.

The database will behave like the SIMPLE recovery model until a full database backup is taken after switching the recovery model to FULL.

For example, you have a database in the SIMPLE recovery model and you switch it to FULL. You need to perform a full backup after switching the recovery model; otherwise, the database keeps behaving like SIMPLE and keeps truncating the transaction log on checkpoint.
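
The fix is simply to take a full backup immediately after switching the recovery model; a sketch with an illustrative database name and path:

ALTER DATABASE MyDB SET RECOVERY FULL;
-- Until this full backup completes, the log keeps truncating as if in SIMPLE
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Full.bak';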

You need to restore last Sunday's 2 AM full backup, then Wednesday's 2 AM differential backup, followed by all transaction log backups taken after it. This assumes all backups are valid; if any backup file is bad, the recovery path will change.
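
A sketch of that recovery path; names and paths are illustrative. Every restore except the last uses NORECOVERY so that further backups can be applied:

RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\Sun_0200_Full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\Wed_0200_Diff.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'D:\Backup\Wed_0300_Log.trn' WITH NORECOVERY;
-- ...apply the remaining log backups in sequence...
RESTORE DATABASE MyDB WITH RECOVERY;   -- bring the database online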

SQL Server Agent provides multiple functionalities:

  • SQL Server Agent is a Windows service that comes with all SQL Server editions other than Express.
  • SQL Server Agent is like the OS task scheduler; it is used to schedule jobs, processes, and reports.
  • SQL Server Agent also provides built-in alerting features for blocking, deadlocks, etc.
  • SQL Server Agent supports multiple process types, such as T-SQL, CMD, PowerShell, SSAS, and SSIS, for scheduled execution.
  • SQL Server Agent provides the proxy option to secure and limit direct user access to critical subsystems.

SQL Server Agent has lots of features, and sometimes you need to grant rights on the Agent to manage and view jobs. Giving everyone sysadmin permission is not appropriate.

SQL Server has 3 fixed database roles (in msdb) for permissions on SQL Agent and jobs:

  • SQLAgentUserRole – Users with SQLAgentUserRole can manage only Agent jobs they own; they cannot view other jobs available on the server.
  • SQLAgentReaderRole – The second level of permission on the Agent. Users with SQLAgentReaderRole can view all jobs available on the server but cannot change or modify any job. This role adds the ability to review multi-server jobs, their configurations, and their history, on top of SQLAgentUserRole rights.
  • SQLAgentOperatorRole – The highest level of permission on the Agent. Users with this role can review operators, proxies, and alerts; execute, stop, or start all local jobs; delete the job history for any local job; and enable or disable all local jobs and schedules, on top of SQLAgentReaderRole rights.
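
These roles live in the msdb database; a sketch of granting one, with an illustrative login name:

USE msdb;
CREATE USER [DOMAIN\JobViewer] FOR LOGIN [DOMAIN\JobViewer];
ALTER ROLE SQLAgentReaderRole ADD MEMBER [DOMAIN\JobViewer];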

The guest user account is created by default with SQL Server installation. The guest user is not mapped to any login, but it can be used by any login that has not been granted explicit permissions to access an object. You can disable the guest user (by revoking its CONNECT permission) in every database except master and tempdb.

When any user logs in to SQL Server, access has two parts: server and database. First, authentication is verified at the server level and the user is able to log in to the SQL Server. Second, the login's mapping to a database user is verified. If a login is not mapped to any database user but is still able to log in to SQL Server, SQL automatically maps that login to guest and grants it database access.

One of Microsoft's security recommendations is to disable the guest user in every database except the master and tempdb databases. By leaving the guest user enabled, you are at risk of unauthorized access to data.
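
Since the guest user cannot actually be dropped, the standard hardening step is to revoke its CONNECT permission in each user database; a sketch with an illustrative database name:

USE MyUserDB;
REVOKE CONNECT FROM GUEST;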

SQL injection is a web hacking technique used by unauthorized people or processes, and it can destroy your database.

SQL injection is a web security vulnerability that allows attackers to interfere with the application code and the queries it uses to interact with the database. Attackers use these methods to retrieve data belonging to other users and data they are not authorized to see. The major challenge is that SQL injection can cause system crashes, data theft, data corruption, etc.

Preventing SQL injection is not the task of one person or team; the whole support and architecture organization must work together to prevent it.

Developers/DBAs are responsible for DB security and proper SQL code.

Application developers are responsible for application code and DB access methods.

The infra team is responsible for network, firewall, and OS security.

Proper SQL instance, OS, and firewall security, combined with a well-written application, can help reduce the risk of SQL injection.

Index Reorg

  • Reorganization is an "online" operation.
  • Reorganization only affects the leaf level of an index.
  • Reorganization shuffles data in place to defragment the existing allocated pages of the index.
  • Reorganization is always a fully-logged operation.
  • Reorganization can be stopped or killed at any time with no rollback needed.

Index Rebuild

  • Rebuild is an "offline" operation by default.
  • Rebuild creates a completely new index B-Tree structure.
  • Rebuild uses new pages/allocations.
  • Rebuild can be a minimally-logged operation.
  • Rebuild can be stopped or killed, but it requires a rollback to complete transactionally.

Each table can have only one clustered index and multiple nonclustered indexes. All nonclustered indexes use the clustered index key as their row locator, so they depend directly on the clustered index.

Because of this dependency, we often get the question of whether rebuilding the clustered index also rebuilds the nonclustered indexes on a table.

The answer is NO. A clustered index rebuild will not rebuild or reorganize the nonclustered indexes. You need to plan nonclustered index rebuilds or reorganizations separately.
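
Typical maintenance commands, with the common fragmentation-based rule of thumb noted in comments; the index and table names are illustrative:

-- Low fragmentation (commonly ~5-30%): reorganize, online, leaf level only
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;

-- High fragmentation (commonly >30%): rebuild the B-Tree from scratch
ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;   -- WITH (ONLINE = ON) on Enterprise edition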

Best practices for configuring TempDB:

  • The collation of TempDB should be the same as the SQL Server instance collation.
  • The owner of the TempDB database should be sa.
  • Do not drop the guest user or revoke its permissions in the TempDB database.
  • Keep the recovery model SIMPLE only.
  • Configure TempDB files to grow automatically as required.
  • Ensure the TempDB drives have RAID protection in order to prevent a single disk failure from shutting down SQL Server.
  • Keep the TempDB database on a separate set of disks.
  • Size the TempDB database according to server load.
  • Configure the number of TempDB data files as per the available CPU cores:
  • If the number of cores is < 8, the number of data files equals the number of logical processors.
  • If the number of cores is between 8 and 32 inclusive, use half as many data files as logical processors.
  • If the number of cores is > 32, use a quarter as many data files as logical processors.
  • Ensure each data file is the same size to allow optimal proportional-fill performance.
  • Place the TempDB database on a fast I/O subsystem.
  • Configure autogrowth of TempDB data and log files to a reasonable size, to avoid the files growing by too small a value.
  • Guidelines for setting the FILEGROWTH increment for TempDB files (see the sketch after this list):
  • If the TempDB file size is > 0 and < 100 MB, the recommended FILEGROWTH increment is 10 MB.
  • If the TempDB file size is >= 100 MB and < 200 MB, the recommended FILEGROWTH increment is 20 MB.
  • If the TempDB file size is >= 200 MB, the recommended FILEGROWTH increment is 10%, or any fixed value depending on requirements or I/O system capabilities.
  • Do not shrink TempDB unless necessary.
  • Do not enable auto-create statistics & auto-update statistics.
  • Ensure AUTO_CLOSE is OFF.
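
A sketch of pre-sizing TempDB files equally and adding a data file; sizes, file names, and paths are illustrative:

ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
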
SQL Server supports three types of replication:

  • Snapshot replication – Snapshot replication works on a snapshot of the published objects. The Snapshot Agent generates it and applies it to the subscriber. Snapshot replication overwrites the data at the subscriber each time a snapshot is applied. It is best suited to fairly static data, or to cases where the sync interval is not an issue. The subscriber does not always need to be connected.
  • Transactional replication – Transactional replication replicates each transaction for the articles being published. For the initial setup, a snapshot or backup is required to copy article data to the subscriber. After that, the Log Reader Agent reads changes from the transaction log, writes them to the distribution database, and from there they are applied to the subscriber. It is the most widely used type of replication.
  • Merge replication – Merge replication is the most complex type of replication; it allows changes to happen at both the publisher and the subscriber. Changes made at the publisher and subscriber are merged to keep data consistent and uniform. As with transactional replication, the initial setup requires a snapshot or backup to copy article data to the subscriber. After that, the Merge Agent moves and reconciles incremental changes, and it is capable of resolving conflicts that occur during data synchronization.

Push and pull are types of replication subscription configuration. They decide how data will be replicated to the subscriber.

Push – In a push subscription, the publisher is responsible for sending data: it pushes data from the publisher to the subscriber. The Distribution Agent runs on the distributor to push changes. The distributor can be configured on the publisher or on a separate server. Changes can be pushed to subscribers on demand, continuously, or on a scheduled basis.

Push is best suited when the subscriber is connected all the time and needs access to the latest data at all times.

Pull – In a pull subscription, the subscriber is responsible for fetching data: it requests changes from the publisher. The Distribution Agent runs on the subscriber to pull changes. The subscriber can pull data as and when needed.

Pull is best suited when subscribers are not online all the time. It suits subscribers that can tolerate data lag and can wait for a delayed data refresh.

Log shipping has been a Microsoft SQL Server DR solution from the very beginning. In log shipping, backups and restores of a database from one server to another standby server happen automatically.

Log shipping works with 3 mandatory jobs and 1 optional job.

  • Backup Job – The Backup Job is responsible for taking transaction log backups on the primary server. It runs on the primary server.
  • Copy Job – The Copy Job runs on the secondary server and is responsible for copying T-log backups from the primary server to the secondary server.
  • Restore Job – The Restore Job runs on the secondary server to restore T-log backups in sequential order.

These 3 jobs are created separately for each database configured in log shipping.

  • Alert Job – This is an optional job that monitors log shipping thresholds and generates notifications. This job is instance-specific.

Log shipping also comes with multiple internal tables for referring to its details and monitoring its current status:

  • log_shipping_monitor_alert – Stores the alert configuration used to monitor and trigger notifications on violations.
  • log_shipping_monitor_error_detail – Shows errors that occurred during log shipping.
  • log_shipping_monitor_history_detail – Stores the history of log shipping and its status; it can be referred to later for troubleshooting or security reporting.
  • log_shipping_monitor_primary – Stores one record per primary database, with its backup and monitoring thresholds.
  • log_shipping_monitor_secondary – Stores one record per secondary database, with the primary server, primary database, and restore and monitoring thresholds.
  • log_shipping_primary_databases – Stores the list of all databases serving as a primary database enabled for log shipping.
  • log_shipping_secondary_databases – Stores the list of all databases serving as a secondary database in log shipping.

Also, you can use the Transaction Log Shipping status report at the server level to view the current status.
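
These tables live in msdb; for example, recent log shipping errors can be checked with a simple query (a sketch):

SELECT TOP (20) log_time, message
FROM msdb.dbo.log_shipping_monitor_error_detail
ORDER BY log_time DESC;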

Principal Server – The main server, holding the primary copy of the database and serving client applications and requests.

Mirror Server – The secondary server, which holds a mirrored copy of the principal database and acts as a hot or warm standby server depending on the synchronous or asynchronous configuration.

Witness Server – The witness server is optional; it controls automatic failover to the mirror if the principal becomes unavailable.

The FQDN (fully qualified domain name) is the computer name of each server combined with the domain name. It can be found by running the following from the command prompt:

IPCONFIG /ALL

Concatenate the “Host Name” and “Primary DNS Suffix”

Host Name . . . . . . . . . . . . : XYZ

Primary DNS Suffix . . . . . . . : corp.mycompany.com

The FQDN of your computer will be XYZ.corp.mycompany.com.

Advanced

The physical database architecture describes the way the database and its files are organized in SQL Server.

Pages and extents – A page is 8 KB in size, and a set of 8 contiguous pages is called an extent. This is the fundamental unit in which data is stored.

Physical database files and filegroups – The database files visible at the file-system level that store data and logs.

Table and index architecture – The database objects inside the database used to store and access data.

Database – A database is a set of data and log files that reside on the file system and are managed by the operating system.

SQL instance – A SQL instance is a logical entity that controls databases. One SQL instance can have multiple databases. Security and accessibility are also part of the SQL instance.

  1. One-to-One (1:1) – For each instance in the first entity, there is one and only one instance in the second entity, and vice versa. Example: employee name and employee ID; one employee can have only one employee ID, and one employee ID can be allocated to only one person.
  2. One-to-Many (1:N) – For each instance in the first entity, there can be one or more instances in the second entity, but for each instance in the second entity, there is one and only one instance in the first entity. Example: manager and employee; one manager can have many employees, but one employee can have only one manager.
  3. Many-to-Many (N:N) – For each instance in the first entity, there can be one or more instances in the second entity, and vice versa. Example: employee and project; one employee can work on multiple projects, and one project can have multiple employees working on it (see the sketch below).
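
A many-to-many relationship is typically implemented with a junction table; a minimal sketch with illustrative names:

CREATE TABLE dbo.Employee (EmployeeID INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE dbo.Project  (ProjectID  INT PRIMARY KEY, Title NVARCHAR(100));

-- Junction table: each row links one employee to one project
CREATE TABLE dbo.EmployeeProject
(
    EmployeeID INT REFERENCES dbo.Employee(EmployeeID),
    ProjectID  INT REFERENCES dbo.Project(ProjectID),
    PRIMARY KEY (EmployeeID, ProjectID)
);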

The write-ahead transaction log controls and defines the recording of data modifications to disk. To maintain the ACID (Atomicity, Consistency, Isolation, and Durability) properties of a DBMS, SQL Server uses a write-ahead log (WAL), which guarantees that no data modifications are written to disk before the associated log record is written to disk.

The write-ahead log works as below:

  • SQL Server maintains a buffer cache to hold data pages as they are retrieved.
  • When a page is modified in the buffer cache, it is marked as dirty. The page is not immediately written back to disk.
  • A data page can be modified multiple times before being written to disk, but each modification is recorded in a separate transaction log record.
  • The log records must be written to disk before the associated dirty page is removed from the buffer cache and written to disk.

To understand row versioning, you should first understand RCSI (Read Committed Snapshot Isolation). RCSI does not hold shared locks on a table during read operations, so the table can be modified in other sessions; the use of shared locks on reads is eliminated.

RCSI maintains versions of old data in TempDB, which is called row versioning. Row versioning increases overall system performance by reducing the resources used to manage locks.

When any transaction updates a row in the table, a new row version is generated. With each subsequent update, if the database already holds a previous version of this row, the previous version is stored in the version store, and the new row version contains a pointer to the old version of the row in the version store.

SQL Server keeps running a cleanup task to remove old versions that are no longer in use. While a transaction is open, all versions of rows modified by that transaction must be kept in the version store, so long-running open transactions and many old row versions can cause a huge TempDB problem.
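
Row versioning is enabled per database; a sketch, with an illustrative database name:

-- Read Committed Snapshot: readers see the last committed version, without shared locks
ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON;

-- Snapshot isolation proper is a separate, additional setting
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;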

Sometimes we have a situation where we cannot perform such activities using SSIS, due to multiple database types, different domains, older versions, etc.

We have several commands available to import data using T-SQL (see the sketch after this list):

  • BCP – BCP is the bulk copy program, used to import or export a large number of rows between SQL Server and text/CSV files.
  • BULK INSERT – Bulk Insert is used to import data files (text/CSV) into a database table in a user-specified format.
  • OPENROWSET – OPENROWSET is used to access remote data from an OLE DB data source. It is an alternative to a linked server for one-time or ad-hoc connections.
  • OPENDATASOURCE – OPENDATASOURCE is used to access remote data on an ad-hoc basis using a four-part object name, without using a linked server name.
  • OPENQUERY – OPENQUERY is used to execute a specified query on a specified linked server. OPENQUERY can be referenced in the FROM clause, or used with an INSERT, UPDATE, or DELETE statement on the target table.
  • Linked Servers – Linked servers are configured to access data outside SQL Server, from another SQL instance or other database types (like Oracle, DB2, etc.). They can issue queries and transactions against heterogeneous data sources.
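
Hedged examples of two of these methods; the paths, format options, and server names are illustrative:

-- BULK INSERT a CSV file into an existing table
BULK INSERT dbo.SalesStaging
FROM 'D:\Import\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- OPENQUERY against an already-configured linked server
SELECT * FROM OPENQUERY(MyLinkedServer, 'SELECT TOP 10 * FROM dbo.RemoteTable');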

We need to run the SQL Server patch installation a minimum of 4 times to patch both SQL instances across all 3 cluster nodes.

SQL 2005 – 1 installation. SQL 2005 supports remote installation: a SQL 2005 patch installation will patch all cluster nodes in one go, but only for clustered objects like the database engine or Agent.

SQL 2008 R2 – 3 installations. SQL 2008 R2 does not support remote installation, so we need to patch all 3 nodes separately.

Additionally, we may need to run the SQL 2005 setup on the other 2 nodes to patch non-clustered objects like SSMS, but that is an additional step.

A full backup is a complete set of the database with all of its data. At the time of a crash or data recovery, it is the starting point: you select which full backup to use when planning the recovery path.

To make things clear and doubt-free: SQL Server includes all data and transactions in a full backup up to the END TIME of the backup.

If your backup took 2 hours, that backup will also contain the data changes and transactions that happened during those 2 hours.

A full backup includes all data and transactions as of the completion time of the backup.

The full backup covers complete transactions, including all changes made after the backup's starting checkpoint, so those changes can be applied during the database recovery process.

DCM (Differential Changed Map): DCM pages are used by SQL Server for differential backups. They keep track of all extents that have changed since the last full database backup.

During a SQL Server differential backup, the database engine reads the DCM pages and backs up only those extents that have changed since the last full backup. A bit value of 1 means the extent has been modified; 0 means it has not. After each full backup, all these bits are reset to 0.

BCM (Bulk Changed Map): BCM pages are used in the bulk-logged recovery model to track extents changed by bulk-logged or minimally-logged operations.

During a log backup, the database engine reads the BCM pages and includes all the extents that have been modified by bulk-logged operations. A bit value of 1 means the extent has been modified; 0 means it has not. After each log backup, all these bits are reset to 0.

To restore a differential backup of any database, the database first needs to be in RESTORING mode, which means it is not accessible.

The MASTER database is the startup database for any SQL Server instance; the SQL instance goes offline if the MASTER database is not accessible.

Combining both statements: a differential backup of the MASTER database would be unusable, because the MASTER database can never be left in the RESTORING state. That is why SQL Server will not allow you to take a differential backup of the MASTER database.

SQL Server provides the feature of managing jobs on target servers from a master/central server, called multi-server administration. Job and step information is stored on the master server. When jobs complete on the target servers, a notification is sent to the master server, so the master server always has up-to-date information. This is an enterprise-level solution for running a consistent set of jobs on numerous SQL Servers.

As the name implies, a SQL Server Agent proxy is an account that grants a user the privilege to execute a particular process or action when the user does not have the rights directly. SQL Server Agent proxies cover multiple subsystems:

  • ActiveX Script – Access to run ActiveX Scripts
  • Operating System (CmdExec) – Access to run Command line scripts
  • Replication Distributor – Replication Agent Rights
  • Replication Merge – Replication Agent Rights
  • Replication Queue Reader – Replication Agent Rights
  • Replication Snapshot – Replication Agent Rights
  • Replication Transaction-Log Reader – Replication Agent Rights
  • Analysis Services Command - SSAS execution rights
  • Analysis Services Query - SSAS execution rights
  • SSIS Package Execution – SSIS package execution rights
  • Unassigned Proxies – If the required option is not available, you can select this as a catch-all "Other" option.

All these subsystems are available under Proxies in SSMS, and you can create proxies as per your requirement, for example as sketched below.
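
A sketch of creating a proxy for the CmdExec subsystem; the credential, account, password, and proxy names are illustrative:

-- A credential stores the Windows account the proxy will run under
CREATE CREDENTIAL CmdExecCred
    WITH IDENTITY = 'DOMAIN\BatchAccount', SECRET = 'StrongPassword!';

EXEC msdb.dbo.sp_add_proxy
    @proxy_name = 'CmdExecProxy', @credential_name = 'CmdExecCred';

-- Subsystem 3 = Operating System (CmdExec)
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = 'CmdExecProxy', @subsystem_id = 3;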

To secure your SQL Server instance, it is advisable to hide it. A SQL Server instance can be marked as hidden from SQL Server Configuration Manager.

SQL Server Configuration Manager > select the instance of SQL Server > right-click and select Properties > set Hide Instance to "Yes" and click OK or Apply.

You then need to restart the instance of SQL Server.

As noted earlier, SQL injection is a web hacking technique that can crash systems, steal data, or corrupt databases, and proper SQL instance, OS, and firewall security combined with a well-written application can help reduce the risk. Responsibilities break down as follows:

Development\DBA

  • Validate or filter the SQL commands that are passed in from the front end
  • Validate data types and parameters
  • Use stored procedures with parameters in place of dynamic SQL (see the sketch after this list)
  • Remove old installers from application and database servers
  • Remove old backups, application files, and user profiles
  • Restrict commands from executing with semicolons, EXEC, CAST, SET, two dashes, apostrophes, special characters, etc.
  • Restrict the option of CMD execution or third-party execution
  • Grant limited, least-possible rights to DB users
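
For example, parameterizing a query with sp_executesql instead of concatenating user input; the table, column, and parameter names are illustrative:

-- Vulnerable pattern: user input concatenated straight into the statement
-- EXEC ('SELECT * FROM dbo.Users WHERE Name = ''' + @input + '''');

-- Safer pattern: input passed as a typed parameter, never parsed as SQL
DECLARE @input NVARCHAR(50) = N'O''Brien';   -- user-supplied value
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE Name = @name',
    N'@name NVARCHAR(50)',
    @name = @input;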

Infra\Server

  • Latest Patches
  • Restricted Access
  • Updated Antivirus

Network Administration

  • Allow traffic from required addresses or domains
  • Firewall settings to be reviewed on a regular basis to prevent SQL Injection attacks

PowerShell is Windows scripting, powerful enough to manage things from the core in a very efficient manner.

PowerShell helps with deep automation and quick action on DBA activities; it is the new face of DBA automation.

We can perform multiple actions from a DBA perspective using PowerShell, like:

  1. Check the status of SQL Server service or SQL Server Agent service
  2. Start/stop a SQL Server service
  3. Find the SQL Server version/edition including the service pack level
  4. Find the SQL Server operating system information such as the OS version, processor number, physical memory, etc.
  5. Perform Backups
  6. Script out a SQL Server Agent Job, based on a specific category
  7. Kill all sessions connected to a SQL Server database

UPDATE STATISTICS performs a recalculation of query-optimization statistics for a table or indexed view. Although query-optimization statistics are recomputed automatically when the Auto Update Statistics option is on, in some cases a query may benefit from updating those statistics more frequently. UPDATE STATISTICS uses tempdb for its processing.

Please note that updating statistics causes queries to be recompiled. This may lead to performance issues for initial executions.

You can update statistics by 2 methods:

UPDATE STATISTICS – This needs ALTER rights on the table and offers more granular control, updating statistics for only one table or even one specific statistics object. It cannot be used for the complete database in one go.

SP_UPDATESTATS – This needs sysadmin rights on the SQL instance. It updates all the statistics of all tables in the current database.
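
Sketches of both methods; the table name is illustrative:

-- Targeted: one table, with a full scan of its data
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Whole database: iterates over every table's statistics
EXEC sp_updatestats;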

If a column in a table has more than 50% NULL values, index selection for it becomes very tricky.

An index is based on a B-Tree structure, and if the column has more than 50% NULL values, all that data will reside on one side of the tree, resulting in zero benefit from the index during query execution.

SQL Server has a special index type called a filtered index that can be used here. You can create an index on the column covering only its non-NULL values; NULL data will not be included in the index.
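
A sketch of such a filtered index; the table, column, and index names are illustrative:

-- Only non-NULL values are indexed; NULL rows are excluded entirely
CREATE NONCLUSTERED INDEX IX_Orders_ShipDate
    ON dbo.Orders (ShipDate)
    WHERE ShipDate IS NOT NULL;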

By default, SQL Server works with simple parameterization, where SQL Server internally adds parameters to a SQL statement executed without parameters so that it can try to reuse a cached execution plan. But simple parameterization has many limitations and does not work in the following cases:

  • JOIN
  • IN
  • BULK INSERT
  • UNION
  • INTO
  • DISTINCT
  • TOP
  • GROUP BY
  • HAVING
  • COMPUTE
  • Sub Queries

To force SQL Server to parameterize every SQL execution, you need to enable forced parameterization at the database level, as sketched below.

Forced parameterization needs proper testing, as it may also have an adverse impact on performance.
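
A sketch of toggling the setting; the database name is illustrative:

ALTER DATABASE MyDB SET PARAMETERIZATION FORCED;
-- Revert to the default behavior
ALTER DATABASE MyDB SET PARAMETERIZATION SIMPLE;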

  • Snapshot Agent – The Snapshot Agent is used with all types of replication for the initial copy of articles from publisher to subscriber. It generates the schema and bulk copy files of published tables and other objects, which are later applied to the subscriber. The Snapshot Agent runs at the distributor.
  • Log Reader Agent – The Log Reader Agent is used with transactional replication. It is responsible for moving transactions marked for replication from the transaction log on the publisher to the distribution database.
  • Distribution Agent – The Distribution Agent is responsible for applying the snapshot and transactions at the subscriber after they are moved to the distributor by the Snapshot or Log Reader Agent. It runs on the distributor for push subscriptions or at the subscriber for pull subscriptions.
  • Merge Agent – The Merge Agent is used by merge replication to move the initial snapshot, followed by the movement and reconciliation of incremental data changes. Each merge subscription has its own Merge Agent, which works with both the publisher and the subscriber and updates both. It runs on the distributor for push subscriptions or at the subscriber for pull subscriptions.
  • Queue Reader Agent – The Queue Reader Agent works with transactional replication configured with the queued updating option. It runs at the distributor and moves changes made at the subscriber back to the publisher. Only one instance of the Queue Reader Agent exists to service all publishers and publications for a given distribution database.

In an existing replication setup, we sometimes need to add new articles. Performed the default way, this marks your snapshot invalid, which results in a snapshot re-initialization for all articles.

This sounds harmless, but consider a situation where the number of articles is very high or the size of the existing articles is huge; re-initialization can bring down your system.

SQL Server provides the option of generating a snapshot of only the newly added article. This can be achieved with T-SQL as follows (see the sketch after this list):

  • Set the allow_anonymous property of the publication to FALSE
  • Disable the immediate_sync property on the publication
  • Invalidate the snapshot by setting @force_invalidate_snapshot = 1 when calling sp_addarticle
  • Refresh subscriptions using sp_refreshsubscriptions
  • Start the Snapshot Agent using Replication Monitor
  • You will notice that bulk-insert statements are created only for the new article instead of all articles
  • Start the Log Reader Agent
  • Re-enable immediate_sync on the publication
  • Change the allow_anonymous property of the publication back to TRUE
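
A hedged sketch of the key T-SQL steps, run in the publication database; the publication and article names are illustrative, and the exact parameter interplay should be verified in a test environment:

-- Turn off anonymous subscriptions and immediate_sync
EXEC sp_changepublication @publication = 'MyPub',
    @property = 'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = 'MyPub',
    @property = 'immediate_sync', @value = 'false';

-- Add the new article, invalidating the snapshot for it only
EXEC sp_addarticle @publication = 'MyPub', @article = 'NewTable',
    @source_object = 'NewTable', @force_invalidate_snapshot = 1;

-- Create subscriptions for the new article on existing subscribers
EXEC sp_refreshsubscriptions @publication = 'MyPub';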

Log shipping may fail due to multiple issues, which can cause data loss. Here are some possible reasons:

  • Changes to the shared folder or share access permissions – The Copy Job is responsible for copying log backups from the primary to the secondary server. If the shared folder permissions on the primary server change, the Copy Job will not be able to access the primary server's share to copy log backups.
  • Human error, such as someone deleting a T-log backup file or truncating the T-log on the primary server – This is the most common issue. If someone deletes a log backup by mistake, the log chain breaks and restores may fail. Likewise, if someone truncates the T-log file on the primary, the log chain breaks and new usable log backups will not be generated.
  • Low free drive space on the secondary – The standard approach is to have a drive structure and capacity on the secondary similar to the primary. If the secondary has less drive space, it may fill up and impact the copy or restore process.
  • Low I/O, memory, or network resources – The Copy and Restore jobs need resources, and if you have a long list of databases, you need plenty of them. A secondary server with weak network, I/O, or memory can suffer slowness, crashes, or delays in restores.
  • TUF file is missing – The TUF is the transaction undo file, which contains active transaction details. If the TUF file is deleted and you do not have a backup of it, you have to reconfigure log shipping.
  • MSDB database is full – MSDB keeps track of log shipping and restore history. If MSDB gets full, the Copy and Restore jobs will start failing.

TUF means Transaction Undo File, used when log shipping is configured in standby mode on the secondary.

The TUF contains details of transactions not applied to the secondary by the last log restore; basically, transactions that were not yet committed on the primary database when the transaction log backup was taken. On the next T-log restore on the secondary, log shipping refers to the TUF file for the state of active transactions.

If the TUF is missing, you cannot recover your log shipping.

We have 3 operating modes in DB mirroring.

  • High Availability – The high-availability operating mode is a combination of principal + mirror + witness running synchronously. In synchronous mode, the principal server sends the log buffer to the mirror server and then waits for a response from the mirror. The witness server monitors the principal and mirror; if the principal becomes unavailable, the witness and mirror decide on automatic failover to bring the mirror online.
  • High Protection – The high-protection operating mode is a combination of principal + mirror running synchronously. The principal server sends the log buffer to the mirror server and then waits for a response from the mirror, but automatic failover is not possible.
  • High Performance – The high-performance operating mode runs asynchronously, with transaction safety set to OFF. The principal server sends the log buffer but does not wait for an acknowledgment from the mirror server. The mirror can lag behind the principal, which can cause data loss.

We need to perform a forced failover to bring the mirror online when the principal is down. We can perform this using the command below:

ALTER DATABASE <DB Name> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

If you try to fail over as in the normal situation when the principal is online [ALTER DATABASE <DB Name> SET PARTNER FAILOVER], you will receive the error below.

ALTER DATABASE <DB Name> SET PARTNER FAILOVER

Msg 1404, Level 16, State 10, Line 1

The command failed because the database mirror is busy. Reissue the command later.

AlwaysOn provides the feature of a readable replica. This is a long-awaited feature that lets you utilize your passive node.

This is unlike a cluster, where your passive node consumes resources but you cannot utilize them until the primary node goes down.

Benefits (see the sketch after this list):

  • Offloads your read-only workload from the primary to the secondary.
  • Utilizes the resources of your secondary node.
  • Temporary statistics on the readable secondary help with performance tuning on the secondary.
  • Offloading T-log backups to the secondary reduces the backup load.
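
A sketch of marking a secondary replica readable; the availability group and replica names are illustrative:

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));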

The Remote Procedure Call (RPC) is a system service for interprocess communication (IPC) between different processes. The processes can be on the same computer, on the LAN, or in a remote location accessed over a WAN or VPN connection.

RPC works on a dynamic range of ports and connects using any available port from that range.

There is a long list of services that depend on RPC, like Telnet, DHCP, DNS, COM+, DTC, WINS, etc.

You may face multiple errors when RPC is not working, like:

  1. Cluster name not found
  2. Node 1 not able to communicate with Node 2
  3. File shares not working
  4. MSDTC issues
  5. Etc.