SQL Server DBCC Commands


 Microsoft provides a set of SQL Server DBCC commands; DBCC originally stood for Database Consistency Checker. The DBCC commands are divided into four main categories: status, validation, maintenance, and miscellaneous commands. Let's take a look at a few of the more common ones. I'll provide links to the others.

 Status Commands:

 The status commands are the ones you normally run first. With these commands, you can gain insight into what your server is doing.

  • DBCC Showcontig
  • DBCC Show_statistics
  • DBCC Opentran
  • DBCC Inputbuffer
  • DBCC Outputbuffer
  • DBCC Proccache
  • DBCC Sqlperf
  • DBCC Tracestatus
  • DBCC Useroptions

DBCC showcontig:

 Use: This is the command you’ll probably use the most. DBCC SHOWCONTIG shows you how fragmented a table, view or index is. Fragmentation is the non-contiguous placement of data. Just like a hard drive, it’s often the cause of the slowness of a system.

 Syntax: DBCC SHOWCONTIG (table / View / Index Name)

 Example:

DBCC SHOWCONTIG (MEMBER)

 

DBCC SHOWCONTIG scanning 'Member' table...

Table: 'Member' (1474104292); index ID: 1, database ID: 17

TABLE level scan performed.

- Pages Scanned................................: 126
- Extents Scanned..............................: 29
- Extent Switches..............................: 66
- Avg. Pages per Extent........................: 4.3
- Scan Density [Best Count:Actual Count].......: 23.88% [16:67]
- Logical Scan Fragmentation ..................: 62.70%
- Extent Scan Fragmentation ...................: 89.66%
- Avg. Bytes Free per Page.....................: 1673.2
- Avg. Page Density (full).....................: 79.33%

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

 

Info: The most telling piece of information here is the Scan Density. The closer this number is to 100, the more contiguous the data.
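If the scan density is low, the usual fix is to defragment or rebuild the index, as covered under the maintenance commands later. As a quick sketch (the table and index ID are taken from the example output above):

```sql
-- Defragment the clustered index (index ID 1) of the MEMBER table,
-- then re-check fragmentation; Scan Density should move closer to 100%.
DBCC INDEXDEFRAG (0, MEMBER, 1)   -- 0 = current database
DBCC SHOWCONTIG (MEMBER)
```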

 DBCC show_statistics:

 Use: By providing the name of the table in question and the proper index, you receive a plethora of information about the “spread” of the index. The important piece of information here is once again the density number.

 Syntax: DBCC SHOW_STATISTICS (table_Name, index_Name)

 Example:

DBCC SHOW_STATISTICS (Member,PK_Member)

DBCC SHOW_STATISTICS (LatLong,ix_clustered_id)

Here the density value is 1 for the first statement and 1.035415E-05 for the second.

Info: This time, though, a lower number is better (if that’s the index you want favored) because a low number here means that index is favored to be used in a query.

 Dbcc opentran:

 Use: Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the specified database. Results are displayed only if there is an active transaction or if the database contains replication information. An informational message is displayed if there are no active transactions.

 Syntax: DBCC OPENTRAN (Database Name)

Example:

DBCC OPENTRAN(WAPMEDLIVE_KUWAIT) WITH NO_INFOMSGS

Info: The output shows when the oldest active transaction started and which process (SPID) owns it, so you can decide whether to kill that process or wait for the transaction to commit.
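For example, if the output reports that the oldest active transaction belongs to SPID 52 (a hypothetical value), the session can be terminated like this:

```sql
DBCC OPENTRAN(WAPMEDLIVE_KUWAIT) WITH NO_INFOMSGS
-- Suppose the output includes: "Oldest active transaction: SPID (server process ID): 52"
KILL 52   -- terminates that session and rolls back its open transaction
```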

 Dbcc inputbuffer:

Use: Displays the last statement sent from a client to an instance of Microsoft SQL Server.

Syntax: DBCC INPUTBUFFER (SessionID / RequestID)

Example:

DBCC INPUTBUFFER(@@spid) WITH NO_INFOMSGS

Info: The EventInfo column of the output shows the last batch the session submitted.
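A typical use is to see what a blocking session is actually executing. A sketch, where the blocking SPID (52 here) is hypothetical:

```sql
-- Find blocked sessions and the SPID that is blocking them
SELECT spid, blocked, waittype
FROM master..sysprocesses
WHERE blocked <> 0

-- Inspect the last statement the blocking session sent
DBCC INPUTBUFFER(52)
```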

 Dbcc outputbuffer:

 Use: Returns the current output buffer in hexadecimal and ASCII format for the specified session_id.

Syntax: DBCC OUTPUTBUFFER (SessionID / RequestID)

Example:

DBCC OUTPUTBUFFER(@@spid) WITH NO_INFOMSGS

Info:

 Dbcc proccache:

Use: Displays information in a table format about the procedure cache.

Syntax: DBCC PROCCACHE

Example:

DBCC PROCCACHE

Info:

 The procedure cache is used to cache the compiled and executable plans to speed up the execution of batches. The entries in a procedure cache are at a batch level. The procedure cache includes the following entries:

  • Compiled plans 
  • Execution plans 
  • Algebrizer tree 
  • Extended procedures 

The SQL Server Performance Monitor uses DBCC PROCCACHE to obtain information about the procedure cache.

Dbcc sqlperf:

Use: Provides transaction log space usage statistics for all databases. It can also be used to reset wait and latch statistics.

Syntax: DBCC SQLPERF

( [ LOGSPACE ] /

[ "sys.dm_os_latch_stats" , CLEAR ] /

[ "sys.dm_os_wait_stats" , CLEAR ]

)

[ WITH NO_INFOMSGS ]

Example:

The following example displays LOGSPACE information for all databases contained in the instance of SQL Server.

DBCC SQLPERF(LOGSPACE) WITH NO_INFOMSGS

The following example resets the wait statistics for the instance of SQL Server.

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

Info:

Once we know the transaction log space usage for all databases, we can see which database's log needs to be shrunk.
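To find the fullest log programmatically, the LOGSPACE output can be captured into a temporary table (the table name below is just a placeholder):

```sql
CREATE TABLE #logspace
(
    DatabaseName sysname,
    LogSizeMB float,
    LogSpaceUsedPct float,
    Status int
)
INSERT INTO #logspace EXEC ('DBCC SQLPERF(LOGSPACE)')

-- Databases with the fullest transaction logs first
SELECT * FROM #logspace ORDER BY LogSpaceUsedPct DESC
DROP TABLE #logspace
```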

 Dbcc Tracestatus:

 Use: Displays the status of trace flags.

Syntax:

DBCC TRACESTATUS ( [ [ trace# [ ,...n ] ] [ , ] [ -1 ] ] )

[ WITH NO_INFOMSGS ]

Example:

DBCC TRACESTATUS(2)

Info:

 Dbcc Useroptions:

 Use: Returns the SET options active (set) for the current connection.

Syntax:

DBCC UserOptions

Example:

DBCC USEROPTIONS

Info:

This returns all the SET options active for the current connection, such as language, date format, and isolation level.

 Validation Commands:

 Once you've seen the performance issues due to fragmentation or index problems, you normally run these commands next, since they will flush out the problems in the various database objects (including the database itself).

  •  Dbcc Checkdb
  •  Dbcc CheckTable
  •  Dbcc CheckCatalog
  •  Dbcc CheckConstraints
  •  Dbcc CheckFileGroup
  •  Dbcc CheckIdent

Dbcc Checkdb:

 

Use: By far the most widely used command to check the status of your database is DBCC CHECKDB. This command has two purposes: To check a database, and to correct it. Let’s take a look at a couple of the more common options.

 

Syntax:

 

Example:

 

The first option on the command looks like this for the ATI database:

 

DBCC CHECKDB ('ATI', NOINDEX) WITH NO_INFOMSGS

 

The command above checks the ATI database but not its indexes. This won’t take long at all. The output returned will tell you if there are problems with the database. If so, check to make sure your backup is handy and then you can run the next level of this command:

 

DBCC CHECKDB ('ATI', REPAIR_FAST) WITH NO_INFOMSGS

 

This command will attempt to fix many errors, but won’t allow any data to be lost. If that doesn’t work, the next level of the command is:

 

DBCC CHECKDB ('ATI', REPAIR_REBUILD) WITH NO_INFOMSGS

 

This command takes longer, but will also correct the indexes (if it can). It will also not allow data loss. Should this command not correct your errors, you’ll definitely want to have that backup handy, because you’re going to need it. The next level of this command will potentially lose data. It looks like this:

 

DBCC CHECKDB ('ATI', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS

 

As you can probably guess, this command could potentially lose data or make your applications unusable, depending on what data is lost (if any). I only use this command to repair the database on another server and then pull data selectively where I need it.

 

Info:

 

Use the REPAIR options only as a last resort. To repair errors, we recommend restoring from a backup. If you must use REPAIR, run DBCC CHECKDB without a repair option to find the repair level to use. If you use the REPAIR_ALLOW_DATA_LOSS level, we recommend that you back up the database before you run DBCC CHECKDB with this option.

 

**Note: To run DBCC CHECKDB with a repair option, the database must be in single-user mode.
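Putting the note above into practice, a repair run on the ATI example database would look something like this:

```sql
ALTER DATABASE ATI SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('ATI', REPAIR_REBUILD) WITH NO_INFOMSGS
ALTER DATABASE ATI SET MULTI_USER
```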

 

Dbcc CheckTable:

 

 Use: Checks the integrity of all the pages and structures that make up the table or indexed view.

 

Syntax:

 

DBCC CHECKTABLE

(

table_name / view_name ,

( NOINDEX / index_id ) /

( REPAIR_ALLOW_DATA_LOSS / REPAIR_FAST / REPAIR_REBUILD )

)

WITH

ALL_ERRORMSGS ,

EXTENDED_LOGICAL_CHECKS

[ , NO_INFOMSGS ]

[ , TABLOCK ]

[ , ESTIMATEONLY ]

[ , PHYSICAL_ONLY / DATA_PURITY ]

 

Example:

 DBCC CHECKTABLE(Member,NOINDEX) WITH NO_INFOMSGS

DBCC CHECKTABLE(Member,REPAIR_FAST) WITH NO_INFOMSGS

DBCC CHECKTABLE(Member,REPAIR_REBUILD) WITH NO_INFOMSGS

DBCC CHECKTABLE(Member,REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS

Info:

Review the error messages returned carefully; each error number indicates the kind of corruption found and the minimum repair level required to fix it.

**Note: To run DBCC CHECKTABLE with a repair option, the database must be in single-user mode.

 Dbcc CheckCatalog:

Use: Checks for catalog consistency within the specified database.

Syntax:

DBCC CHECKCATALOG

(database_name / database_id / 0) WITH NO_INFOMSGS

Example:

DBCC CHECKCATALOG(ATI)

Info:

Errors reported by DBCC CHECKCATALOG indicate inconsistencies in the database's system metadata; since there is no repair option for catalog errors, restoring from a backup is usually the fix.

**Note: The database must be online.

 Dbcc CheckConstraints:

Use: Checks the integrity of a specified constraint or all constraints on a specified table in the current database.

Syntax:

DBCC CHECKCONSTRAINTS

( table_name / table_id / constraint_name / constraint_id )

WITH

ALL_CONSTRAINTS / ALL_ERRORMSGS , NO_INFOMSGS

Example:

DBCC CHECKCONSTRAINTS(Member) WITH NO_INFOMSGS

Info:

DBCC CHECKCONSTRAINTS constructs and executes a query for all FOREIGN KEY constraints and CHECK constraints on a table.
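This is especially useful after constraints have been disabled for a bulk load. A sketch, where the constraint name FK_Member_Country is hypothetical:

```sql
-- Disable the constraint, load data, then re-enable it without validation
ALTER TABLE Member NOCHECK CONSTRAINT FK_Member_Country
-- ... bulk load runs here ...
ALTER TABLE Member CHECK CONSTRAINT FK_Member_Country

-- Any rows returned identify the table, the constraint, and the offending values
DBCC CHECKCONSTRAINTS (Member) WITH ALL_ERRORMSGS
```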

 Dbcc CheckFileGroup:

Use: Checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.

syntax:

DBCC CHECKFILEGROUP

( filegroup_name / filegroup_id / 0 ,

NOINDEX

)

WITH

ALL_ERRORMSGS / NO_INFOMSGS ,

TABLOCK ,

ESTIMATEONLY ,

PHYSICAL_ONLY

Example: The following checks the default filegroup.

DBCC CHECKFILEGROUP(0,NOINDEX) WITH NO_INFOMSGS

 

Info:

 DBCC CHECKFILEGROUP and DBCC CHECKDB are similar DBCC commands. The main difference is that DBCC CHECKFILEGROUP is limited to the single specified filegroup and required tables.

DBCC CHECKFILEGROUP performs the following commands:

  • DBCC CHECKALLOC of the specified filegroup
  • DBCC CHECKTABLE of every table and indexed view in the specified filegroup

 Running DBCC CHECKALLOC or DBCC CHECKTABLE separately from DBCC CHECKFILEGROUP is not required.

 Dbcc CheckIdent:

 Use: Checks the current identity value for the specified table and, if it is needed, changes the identity value. You can also use DBCC CHECKIDENT to manually set a new current identity value for the identity column.

Syntax:

DBCC CHECKIDENT

(

table_name ,

NORESEED / RESEED , new_reseed_value

)

WITH NO_INFOMSGS

Example:

DBCC CHECKIDENT(MemberDetails)

DBCC CHECKIDENT(MemberDetails,NORESEED)

DBCC CHECKIDENT(MemberDetails,RESEED,0)

 

Info:

 The specific corrections made to the current identity value depend on the parameter specifications. We also have two functions which help us find the seed and increment values for a table or a view:

SELECT IDENT_SEED('MemberDetails')

SELECT IDENT_INCR('MemberDetails')

 

Maintenance Commands:

 The maintenance commands are the final steps you normally run on a database when you’re optimizing the database or fixing a problem.

  •  Dbcc Dbreindex
  •  Dbcc Indexdefrag
  •  Dbcc Shrinkdatabase
  •  Dbcc Shrinkfile
  •  Dbcc Updateusage

Dbcc Dbreindex:

 Use: The DBCC DBREINDEX command rebuilds the indexes on a database. You can specify a particular index or all of them. This is the most popular and time consuming command you’ll normally run, and the one you’ll use most often for making your database access fast.

 Syntax:

 DBCC DBREINDEX

(

table_name , index_name , fillfactor

)

WITH NO_INFOMSGS

 

fillfactor :

Is the percentage of space on each index page for storing data when the index is created or rebuilt. fillfactor replaces the fill factor when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

 Example:

DBCC DBREINDEX(Member,PK_Member,100)

 

Info:

 This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use ALTER INDEX instead.
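The ALTER INDEX equivalent of the DBREINDEX example above would be something like:

```sql
-- Rebuild a single index with a 100% fill factor
ALTER INDEX PK_Member ON Member REBUILD WITH (FILLFACTOR = 100)

-- Or rebuild all indexes on the table in one statement
ALTER INDEX ALL ON Member REBUILD
```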

 Dbcc Indexdefrag:

 Use: The DBCC INDEXDEFRAG command defragments the index rather than rebuilding it. This command is normally used when time is an issue, such as in cases of very large databases. What’s normally done here is that this command is run during the week, and the DBCC DBREINDEX is run once a week.

Syntax:

DBCC INDEXDEFRAG

(

database_name / database_id / 0 ,

table_name / table_id / view_name / view_id ,

index_name / index_id ,

partition_number / 0

)

WITH NO_INFOMSGS

 partition_number :

 Is the partition number of the index to defragment. If not specified or if 0 is specified, the statement defragments all partitions in the specified index.

Example:

DBCC INDEXDEFRAG(WAPMEDLIVE_KUWAIT, Member, PK_Member, 0)

 

Info:

 This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use ALTER INDEX instead.
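The ALTER INDEX equivalent of the INDEXDEFRAG example above:

```sql
-- REORGANIZE is the online defragmentation counterpart of INDEXDEFRAG
ALTER INDEX PK_Member ON Member REORGANIZE
```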

 Dbcc Dbrepair:

 Info:

This command is not available in SQL Server 2005; use DROP DATABASE instead.

 Dbcc Shrinkdatabase:

Use: Shrinks the size of the data and log files in the specified database. To shrink all data and log files for a specific database, execute the DBCC SHRINKDATABASE command. To shrink one data or log file at a time for a specific database, execute the DBCC SHRINKFILE command. DBCC SHRINKDATABASE operations can be stopped at any point in the process, and any completed work is retained.

 

Running DBCC SHRINKDATABASE without specifying either the NOTRUNCATE option or the TRUNCATEONLY option is equivalent to running a DBCC SHRINKDATABASE operation with NOTRUNCATE followed by running a DBCC SHRINKDATABASE operation with TRUNCATEONLY.

 

Syntax:

 

DBCC SHRINKDATABASE

(

database_name / database_id / 0 ,

target_percent ,

NOTRUNCATE / TRUNCATEONLY

)

WITH NO_INFOMSGS

 

target_percent:

 

Is the percentage of free space that you want left in the database file after the database has been shrunk.

 NOTRUNCATE :

 

Compacts the data in data files by moving allocated pages from the end of a file to unallocated pages in the front of the file. target_percent is optional.

 

The free space at the end of the file is not returned to the operating system, and the physical size of the file does not change. Therefore, when NOTRUNCATE is specified, the database appears not to shrink.

 

NOTRUNCATE is applicable only to data files. The log files are not affected.

 

TRUNCATEONLY :

 

Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified with TRUNCATEONLY.

 TRUNCATEONLY is applicable only to data files. The log files are not affected.

Example:

DBCC SHRINKDATABASE (EAttendence,TRUNCATEONLY)

DBCC SHRINKDATABASE (EAttendence,80,NOTRUNCATE)

DBCC SHRINKDATABASE(tempdb)

 

Info:

 The database being shrunk does not have to be in single user mode; other users can be working in the database when it is shrunk. This includes system databases.

You cannot shrink a database while the database is being backed up. Conversely, you cannot back up a database while a shrink operation on the database is in progress.

Consider the following information when you plan to shrink a database:

 

  • A shrink operation is most effective after an operation that creates lots of unused space, such as a truncate table or a drop table operation. 
  • Most databases require some free space to be available for regular day-to-day operations. If you shrink a database repeatedly and notice that the database size grows again, this indicates that the space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the database is a wasted operation. 
  • A shrink operation does not preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This is another reason not to repeatedly shrink the database. 
  • Unless you have a specific requirement, do not set the AUTO_SHRINK database option to ON. 

 Note:

 This is very useful when you need to shrink tempdb on a production server.

 

Dbcc Shrinkfile:

 

Use: Shrinks the size of the specified data or log file for the current database or empties a file by moving the data from the specified file to other files in the same filegroup, allowing the file to be removed from the database. You can shrink a file to a size that is less than the size specified when it was created. This resets the minimum file size to the new value.

 DBCC SHRINKFILE is the only way to shrink log and data files individually.

Syntax:

DBCC SHRINKFILE

(

file_name / file_id ,

EMPTYFILE / [ target_size , NOTRUNCATE / TRUNCATEONLY ]

)

WITH NO_INFOMSGS

 

target_size:

 

Is the size for the file in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the size to the default file size. The default size is the size specified when the file was created.

 

You can reduce the default size of an empty file by using DBCC SHRINKFILE target_size. For example, if you create a 5-MB file and then shrink the file to 3 MB while the file is still empty, the default file size is set to 3 MB. This applies only to empty files that have never contained data.

 

EMPTYFILE:

 Migrates all data from the specified file to other files in the same filegroup. Because the Database Engine no longer allows data to be placed in the empty file, the file can be removed by using the ALTER DATABASE statement.

 

NOTRUNCATE:

 Moves allocated pages from the end of a data file to unallocated pages in the front of the file, with or without specifying target_size. The free space at the end of the file is not returned to the operating system, and the physical size of the file does not change. Therefore, when NOTRUNCATE is specified, the file appears not to shrink.

NOTRUNCATE is applicable only to data files. The log files are not affected.

TRUNCATEONLY:

Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent.

target_size is ignored if specified with TRUNCATEONLY. TRUNCATEONLY is applicable only to data files.

Example:

DBCC SHRINKFILE(kwtwapmed)

DBCC SHRINKFILE(kwtwapmed_log)

Info:

DBCC SHRINKFILE operations can be stopped at any point in the process, and any completed work is retained. When a DBCC SHRINKFILE operation fails, an error is raised.

The database being shrunk does not have to be in single-user mode; other users can be working in the database when the file is shrunk. You do not have to run the instance of SQL Server in single-user mode to shrink the system databases.

Consider the following information when you plan to shrink a file:

A shrink operation is most effective after an operation that creates lots of unused space, such as a truncate table or a drop table operation.

  • Most databases require some free space to be available for regular day-to-day operations. If you shrink a database repeatedly and notice that the database size grows again, this indicates that the space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the database is a wasted operation. 
  • A shrink operation does not preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This is another reason not to repeatedly shrink the database.

Note:

This is very useful when you need to shrink log files in a production database.
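For a database in full recovery model, the log usually has to be backed up before DBCC SHRINKFILE can release space. A sketch using the log file from the example above (the backup path is hypothetical):

```sql
BACKUP LOG WAPMEDLIVE_KUWAIT TO DISK = 'F:\Backups\kwtwapmed_log.trn'
DBCC SHRINKFILE (kwtwapmed_log, 100)   -- shrink the log file to roughly 100 MB
```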

Dbcc Updateusage:

 

Use: Reports and corrects page and row count inaccuracies in the catalog views. These inaccuracies may cause incorrect space usage reports to be returned by the sp_spaceused system stored procedure.

 Syntax:

DBCC UPDATEUSAGE

(

database_name / database_id / 0 ,

table_name / table_id / view_name / view_id ,

index_name / index_id

)

WITH

NO_INFOMSGS ,

COUNT_ROWS

 COUNT_ROWS

Specifies that the row count column is updated with the current count of the number of rows in the table or view.

Example:

DBCC UPDATEUSAGE(mulcny_latest) WITH COUNT_ROWS

DBCC UPDATEUSAGE(wapmedlive_kuwait,Member,PK_Member) WITH COUNT_ROWS

 

Info:

 DBCC UPDATEUSAGE corrects the rows, used pages, reserved pages, leaf pages and data page counts for each partition in a table or index. If there are no inaccuracies in the system tables, DBCC UPDATEUSAGE returns no data. If inaccuracies are found and corrected and WITH NO_INFOMSGS is not used, DBCC UPDATEUSAGE returns the rows and columns being updated in the system tables.

DBCC CHECKDB has been enhanced to detect when page or row counts become negative. When detected, the DBCC CHECKDB output contains a warning and a recommendation to run DBCC UPDATEUSAGE to address the issue.

Consider the following information when you plan to updateusage:

 Always run DBCC UPDATEUSAGE after upgrading a database from SQL Server 2000. The page and row counts are corrected and are maintained thereafter.

 Do not run DBCC UPDATEUSAGE routinely for databases created in SQL Server 2005 or higher, or on upgraded databases that have already been corrected once by using DBCC UPDATEUSAGE. Because DBCC UPDATEUSAGE can take some time to run on large tables or databases, it should not be run unless you suspect incorrect values are being returned by sp_spaceused.

 

 Consider running DBCC UPDATEUSAGE routinely (for example, weekly) only if the database undergoes frequent Data Definition Language (DDL) modifications, such as CREATE, ALTER, or DROP statements.

 

Note:

 

 It is very useful after upgrading databases.

  Miscellaneous Commands:

 

These commands perform such tasks as enabling row-level locking or removing a dynamic-link library (DLL) from memory.

 

Dbcc Dllname:

 

 Use: Unloads the specified extended stored procedure DLL from memory. I've actually had to use the DBCC dllname (FREE) command; it's primarily a programming convention, freeing up memory used by a DLL that has been loaded by an extended stored procedure.

 

Syntax:

 

DBCC dllname ( FREE ) WITH NO_INFOMSGS

 

Example:

 

 DBCC xp_sample (FREE);

 

 Info:

 

 When an extended stored procedure is executed, the DLL remains loaded by the instance of SQL Server until the server is shut down. This statement allows a DLL to be unloaded from memory without shutting down SQL Server. To display the DLL files currently loaded by SQL Server, execute sp_helpextendedproc.

 Dbcc Help:

Use: DBCC HELP is one of the best commands to remember; it simply shows you the syntax of the other commands:

 

Syntax:

 

DBCC HELP

(

'dbcc_statement' / @dbcc_statement_var / '?'

)

WITH NO_INFOMSGS

 

? :

 

Returns all DBCC commands for which Help is available.

 

Example:

 

DBCC HELP('?')

 

DBCC HELP('CHECKDB')

 

Dbcc Pintable:

 

 This command "pins" a table into memory. Once the table is accessed, it stays in the buffer cache, and performance (for that table, anyway) is improved. Unless you have a real driving need for this command, you probably shouldn't use it. Its sister is DBCC UNPINTABLE, which, of course, releases the table from memory.

 

Syntax:

 

dbcc pintable (database_id, table_id)

 

 Dbcc Traceon:

 

 This command and its sister command DBCC TRACEOFF turn trace flags on and off, which can control the way SQL Server implements some of its behavior. These flags are normally used for debugging purposes, and I haven't seen them in use on a production system.

 


ESSENTIAL STEPS AFTER INSTALLING SQL SERVER


 

Rolling out SQL Servers is a key task for DBAs. While for some DBAs this can be a frequent exercise, others will have to face it at one time or another throughout their career in an organisation.

Successfully planning an installation requires a fair bit of knowledge about SQL Server and a basic understanding of the operating system. Once the installation is complete, the system usually needs some changes to default configuration before production databases can be rolled out. In this article I list some common steps that I take after installing a SQL Server. This can be used as a “checklist” to ensure consistency for all the SQL Servers that you roll out. 

Step1: Install the Service Pack, Hotfixes and Cumulative Updates 

This is the first step you should take after a successful installation. You should always install the latest service pack available unless there is a clear business cause not to do so. At the time of this writing, SQL Server 2005 is in service pack level 3 and SQL Server 2008 has service pack 1. Service packs are freely downloadable from the Microsoft web site. If there are hotfixes or cumulative updates available after the last service pack has been released, you should apply them as well. This ensures that your server is ready with the latest version of the database software. Once all the patches are installed, make a backup of your system databases and reboot the server. 

Step 2: Configure SQL Services 

Although your planning phase should determine what SQL services you are going to install, there may be occasions when a service is rolled out but not needed immediately. Perhaps your business requires you to install a SQL Server and although reporting is not going to be part of it, management insists you install Reporting Services – “just in case”. If a service is not going to be needed immediately (or needed at all), disable it.  

One example of a service that you can disable is the Browser service. This service is installed by default. If you are not running named instances or planning to install multiple instances in one machine, you can disable this service.  You can disable the Browser service from the SQL Server Configuration Manager. 

 

Another service that you can disable is the SQL Server VSS Writer. Unless you are using applications that make use of the Windows Volume Shadow Copy infrastructure to backup SQL databases, you can disable this service safely. Note that SQL VSS Writer is not available from the Configuration Manager. You can access it from the Control Panel > Services applet (see below). 

 

While non-essential services should be disabled, other services like the Agent are supposed to be running continuously. This is because scheduled backups, maintenance plans, replication and jobs are dependent on the Agent service. Similarly, if your server is going to execute Integration Service packages, ensure the SSIS service starts automatically every time the server boots up. You can configure the start-up property of a service from the SQL Server Configuration Manager. 

One thing that is not available in the Configuration Manager is the recovery behaviour of SQL Services – i.e. what should the services do if they fail unexpectedly. To access this behaviour, you can bring up the service’s property from the Control Panel’s “Services” applet. The “Recovery” tab of the service property allows you to define what the computer should do in case the service fails once, twice or subsequent times. From here, you can specify that the service should start automatically if it fails. 

 

The same type of behaviour can be set in the SQL Server Agent’s property page. Here, you can specify that SQL or Agent service should restart automatically if they stop unexpectedly. 

 

Step 3: Configure Default Directories 

Everything has its place in this world and when it comes to files and folders, SQL Server is no exception. A number of directories are created during installation – you have one directory for storing data and log files, another directory for replication, another for full text indexes, and yet another for backup files. 

Where SQL places its files is determined mostly during the installation process. This happens because either you provide this information explicitly or SQL uses a default set of locations. The directory path information you provide during the setup should really be coming from your planning phase. Once the installation is complete, it is still possible to fine-tune some of these component locations. 

If you have followed the best practice, your data files should be residing in a separate physical drive and directory.  The transaction log files of user databases should also be in a separate drive and folder. You can configure both the locations from the server’s properties: 

 

Similar to the data and log directories, you can specify the default locations for: 

a)    Backup files 

b)    Replication files 

c)    Full text index files 

d)    Job output files 

SQL Server creates the folders for these components in the same location where data files are placed. To ensure data files are given the maximum possible dedicated space as far as disk activity is concerned, I try to move these folders to a separate drive. 

Unfortunately, you cannot use a GUI front-end to configure these alternate locations. The path information is saved as registry values, which means you either have to manually change registry entries or use an extended stored procedure like xp_regwrite. Also, the location of the registry keys will be different between named instances. They will also vary depending on your version of SQL Server (2005 or 2008). Generally, you will be looking for a registry key called "Working Directory".

The script below shows how I changed the default locations for a default instance of SQL Server 2005 installed in a Windows Server 2003 system: 

-- Default Backup Directory
EXEC xp_regwrite
          N'HKEY_LOCAL_MACHINE',
          N'Software\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer',
          N'BackupDirectory',
          N'REG_SZ',
          N'F:\SQL2005\BACKUP'
GO

-- Default Replication Directory
EXEC xp_regwrite
          N'HKEY_LOCAL_MACHINE',
          N'Software\Microsoft\Microsoft SQL Server\MSSQL.1\Replication',
          N'WorkingDirectory',
          N'REG_SZ',
          N'F:\SQL2005\REPLDATA'
GO

-- Default Full Text Directory
EXEC xp_regwrite
          N'HKEY_LOCAL_MACHINE',
          N'Software\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer',
          N'FullTextDefaultPath',
          N'REG_SZ',
          N'F:\SQL2005\FTData'
GO

-- Default Job Output (SQL Server Agent working) Directory
EXEC xp_regwrite
          N'HKEY_LOCAL_MACHINE',
          N'Software\Microsoft\Microsoft SQL Server\MSSQL.1\SQLServerAgent',
          N'WorkingDirectory',
          N'REG_SZ',
          N'F:\SQL2005\JOBS'
GO

And of course, the paths you specify as registry key values must exist – that means you need to create (or move) the folders. 

To ensure the system is aware of your changes, restart SQL services after making the changes. 

Step 4: Configure Default Database Properties 

We all know it – model database works as the “template” for new databases. If you want to set some common properties across all your user databases, you can set them in the model. A number of properties can be set here to take advantage of this approach: 

a)    Recovery Model: perhaps you are installing SQL Server for a data warehouse that will host a number of data marts that would not require full recovery model. You can set the recovery model of the model database to “simple” mode in such cases.  

b)    File Properties: by default, the data files are configured to grow by 1 MB and the log files by 10% of their size. This means that a large transaction log file will grow by a large proportion when it needs more space. For example, every time a 6 GB log file needs to grow, it will take up 600 MB of disk space in one go. For a write-intensive database, the data file's incremental growth of 1 MB can result in fragmentation. As a general rule of thumb, the log files should be a quarter to half the size of the data files. If you have an idea about how big your data and log files are going to be and how they should grow, you can set up these properties in the model database. I generally set the growth increments in megabytes rather than leaving them as a percentage.

 

Another area where you can make use of the model database is filegroups. If you want your user tables to be created on secondary data files and leave the primary filegroups with system objects only, you can create secondary data files in the model database and assign them to a user defined filegroup and make that filegroup default. 

c)    Database Options: depending on your requirements you can set up other database options in the model database. Examples of such options could be torn page detection, ANSI defaults etc. 

d)    Custom Objects: if you have custom objects that need to be present in each user database, you can put them in the model database. 
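As a sketch, the model database changes discussed above could look like this in T-SQL. The logical file names 'modeldev' and 'modellog' are the defaults for the model database; the growth values shown are illustrative assumptions, not recommendations:

```sql
-- Simple recovery for all future user databases
ALTER DATABASE model SET RECOVERY SIMPLE;
GO

-- Grow the data file in fixed MB increments instead of the 1 MB default
ALTER DATABASE model
MODIFY FILE (NAME = 'modeldev', FILEGROWTH = 256MB);
GO

-- Grow the log in fixed MB increments instead of 10%
ALTER DATABASE model
MODIFY FILE (NAME = 'modellog', FILEGROWTH = 128MB);
GO
```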

Step 5: Configure tempdb Database 

The temporary database is used heavily in SQL 2005 and later versions. Best practice suggests that for optimal performance, SQL Server’s tempdb database files should be placed on a separate dedicated disk drive. This should be followed wherever possible, but even if your server’s current disk configuration does not allow it, you can still make some changes to ensure tempdb is optimally tuned. Part of this tuning means you should allocate a healthy amount of space to this database.  

The number of data files you should allocate in the tempdb database will depend on the number of CPUs (logical or physical) present in the machine. When SQL Server accesses a database, it starts a scheduler thread for each data file present. So if you have eight CPUs in your physical server, the tempdb database should have eight data files to “load balance” the threads. 

The easiest way you can check the number of CPUs present in the machine is by looking at the Windows Task Manager’s Performance tab. 
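If you prefer a query to Task Manager, the logical CPU count is also exposed through a dynamic management view, available from SQL Server 2005 onwards:

```sql
-- Number of logical CPUs visible to SQL Server
SELECT cpu_count
FROM sys.dm_os_sys_info;
```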

 

Also, data files in a database are used in a “proportional fill” algorithm. This means that if you have two data files in a database – one 4 GB and another 8 GB, SQL will try to stripe the data across the two files in a 1:2 ratio. For each stripe of data written to the 4 GB file, SQL will write two stripes in the 8GB file – hence the term “proportional fill”. 

Now if you configure tempdb to have the same number of data files as the number of CPUs present and specify each data file to have the same size and growth, you are effectively ensuring the CPU load is evenly distributed across tempdb.  

The following script shows one such configuration done from the query window. 

ALTER DATABASE tempdb MODIFY FILE 
          ( 
                    NAME = 'tempdev', 
                    FILENAME = 'G:\SQL2005\tempdb\tempdb.mdf', 
                    SIZE = 8GB, 
                    MAXSIZE = UNLIMITED, 
                    FILEGROWTH = 256MB 
          ) 
GO 

ALTER DATABASE tempdb ADD FILE 
          ( 
                    NAME = 'tempdev_2', 
                    FILENAME = 'G:\SQL2005\tempdb\tempdb2.ndf', 
                    SIZE = 8GB, 
                    MAXSIZE = UNLIMITED, 
                    FILEGROWTH = 256MB 
          ) 
GO 

ALTER DATABASE tempdb ADD FILE 
          ( 
                    NAME = 'tempdev_3', 
                    FILENAME = 'G:\SQL2005\tempdb\tempdb3.ndf', 
                    SIZE = 8GB, 
                    MAXSIZE = UNLIMITED, 
                    FILEGROWTH = 256MB 
          ) 
GO 

ALTER DATABASE tempdb ADD FILE 
          ( 
                    NAME = 'tempdev_4', 
                    FILENAME = 'G:\SQL2005\tempdb\tempdb4.ndf', 
                    SIZE = 8GB, 
                    MAXSIZE = UNLIMITED, 
                    FILEGROWTH = 256MB 
          ) 
GO 

ALTER DATABASE tempdb MODIFY FILE 
          ( 
                    NAME = 'templog', 
                    FILENAME = 'G:\SQL2005\tempdb\templog.ldf', 
                    SIZE = 16GB, 
                    MAXSIZE = UNLIMITED, 
                    FILEGROWTH = 32MB 
          ) 
GO 

As you can see from the code, we are adding three new data files to tempdb (there are four CPUs present and one data file already exists) and ensuring all four files have the same size and growth characteristics. 

Once you have completed the tempdb configuration, you will need to restart the SQL instance for the changes to take effect. 
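After the restart, you can verify the layout with a quick query against tempdb's catalog. Sizes are reported in 8 KB pages, hence the division by 128; the growth column is also in pages here because we configured fixed-MB growth:

```sql
-- List tempdb files with sizes in MB
SELECT name,
       physical_name,
       size / 128 AS size_mb,
       growth / 128 AS growth_mb
FROM tempdb.sys.database_files;
```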

Step 6: Configure Server Properties and Features 

Of all the SQL Server configurations, this one is probably the most important and has a direct effect on how the server behaves. What properties you configure and what values you set here will depend on your organisation’s business needs, SOEs etc. There are a number of properties you can set for the database server engine and there is more than one way you can set some of them. I like to divide these properties into three categories: 

Configuring Features: 

If it is SQL 2005, use the Surface Area Configuration tool to enable SQL Server features. For SQL 2008, use the Surface Area Configuration facet from the server's Facets dialogue. Both versions list the same features, and by default all of these features are disabled.

 

 

What you enable here will depend on what you want your server to do. You may want to enable Database Mail (discussed later). If your stored procedures are going to use the xp_cmdshell utility, you will need to switch it on here. If you think there will be managed code in your database application, you need to enable CLR. 
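The same features can also be switched on from T-SQL via sp_configure; as a sketch (only enable what you actually need):

```sql
-- 'show advanced options' must be on before these options can be changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'xp_cmdshell', 1;   -- only if stored procedures really need it
EXEC sp_configure 'clr enabled', 1;   -- only if managed code will run
RECONFIGURE;
```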

Configuring Parameter Values: 

a)    Memory: Set the minimum and maximum server memory parameters. You can do so from the server properties dialogue box or using the sp_configure system stored procedure. If AWE is required, enable that too. 

b)    Fill factor: Set a default value for index fill factor. This can save page splitting in indexes in future. 

c)    Processor: configure max degree of parallelism and cost threshold for parallelism as necessary. 

d)    Network connections: Configure remote query timeouts if you want to override the default behaviour of 600 seconds. 

e)    Security: you can change the authentication mode (Trusted or Mixed) at this stage, although this is a choice you probably made during installation. You can also enable C2 audit mode as part of security configuration. 

f)     Trace Flags: if you are installing SQL Server 2005, you may wish to enable trace flags as start-up parameters for detecting and reporting deadlocks. Trace flag 1204 can be used for this purpose. 
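A sketch of the parameter values above set through sp_configure; the numbers are placeholders you would size for your own server, not recommendations:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 6144;
EXEC sp_configure 'fill factor (%)', 90;
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 25;
EXEC sp_configure 'remote query timeout (s)', 600;
RECONFIGURE;
```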

Configuring SQL Server Network: 

This is where you define how SQL Server will use the network to talk to other computers. The first thing you need to do here is enable or disable the network protocols SQL Server will use to listen for incoming traffic. You set this from the Configuration Manager. The picture below shows how I have disabled named pipes. 


 

Next, you need to see if your applications require server aliases. Some of the older apps may require you to define a server alias for a named instance.  

Finally, you need to decide if you will assign non-default TCP port number for your instance. The default instance uses a default port number of 1433 and the named instances are usually assigned dynamic port numbers at start up. If you want to assign a non-default port number to SQL Server for TCP connections, you can set it here. 

 

Most of the server property changes would require restarting the service. 

Step 7: Configure Security 

When it comes down to SQL Server security, companies usually take one of two approaches – some follow a rigorous data security policy, others just leave it all to defaults (and to potential attacks). If you are working for a company like the former, you are in luck – you will know what to do. If your company does not have any specific procedures for implementing SQL Server security (written or unwritten), it is up to you how you define it. Once you have completed this part though, you will have something to refer back to in future. 

First and foremost, ensure the sysadmin server role has as few members as possible. The local Administrators Windows group is a member of the sysadmin role by default. Unless you want your Windows administrators to have access to SQL Server, you should remove this group from the sysadmin fixed server role. The next step is to revoke the login for this group. 

If you have a pre-defined account that you use for database administration, add that to the sysadmin server role.  

You can also create Credentials and Proxy accounts at this stage if you want your jobs to run SSIS packages or operating system commands under those accounts. 
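A sketch of the first of these security changes in T-SQL (in SQL 2005, role membership is managed with sp_dropsrvrolemember and sp_addsrvrolemember; the DBA login name below is a placeholder):

```sql
-- Remove the local Administrators group from the sysadmin role
EXEC sp_dropsrvrolemember 'BUILTIN\Administrators', 'sysadmin';

-- Then revoke its login altogether
DROP LOGIN [BUILTIN\Administrators];

-- Add your dedicated DBA account instead (placeholder name)
EXEC sp_addsrvrolemember 'DOMAIN\DBA_Admins', 'sysadmin';
```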

Step 8: Configure Error Logs 

SQL Server error log is the first port of call for DBAs when they troubleshoot server related issues. By default, the server keeps six logs as archive and one log as current. Every time the SQL service starts, the system begins a new log file. The currently active log file is named ERRORLOG. On restart the previous ERRORLOG is archived as ERRORLOG.1, the old ERRORLOG.1 becomes ERRORLOG.2, and so on. 

One of the annoying things with default error log behaviour is that any one file can contain a number of days of entries. When you try to view a file, it can take some time to load. If you are interested in a particular day’s events, you will need to sift through the entries or use a filter. 

Also, log files are overwritten in a rolling fashion once six archives exist. That means you won’t have any history older than the oldest archived log file (ERRORLOG.6). 

What you can do here is to configure SQL Server to keep a predefined number of log files. You can also create a scheduled job to “recycle” the error logs regularly. Recycling means that SQL Server will start a new log file without restarting the service. 

To keep a pre-defined number of archived logs, you will need to configure the SQL error log properties. 

 

As you can see, I would recommend keeping at least the last thirty log files. 

Once the number of error log files is configured, you can create a scheduled job that runs every day to reinitialise the log. The job needs only one step, which calls the sp_cycle_errorlog system stored procedure. This procedure starts a new error log file without restarting the SQL service. I usually schedule it to run at midnight every day.

 

 

The combined effect of these two changes is that one new log file is created every day at midnight. If your server does not restart in between, you will have the last month’s log entries on disk, with each day’s log in a separate file. 

The same kind of job can be defined for SQL Agent error log files. The latest Agent error log is named SQLAGENT.OUT. Executing the msdb database’s sp_cycle_agent_errorlog procedure from a job will create a new Agent log file and archive the previous one. Unlike SQL Server error logs though, you cannot use a dialogue box to specify the number of archived Agent error logs to keep. 
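Both the archived-log count and the daily job steps can be scripted. xp_instance_regwrite writes the NumErrorLogs registry value behind the dialogue box shown above; the value 30 matches the recommendation of keeping at least thirty logs:

```sql
-- Keep 30 archived SQL Server error logs
EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     30;
GO

-- Job step: start a new SQL Server error log
EXEC sp_cycle_errorlog;
GO

-- Job step: start a new SQL Agent error log
EXEC msdb.dbo.sp_cycle_agent_errorlog;
GO
```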

Step 9: Configure Database Mail 

This step is optional. If your databases are going to send e-mails, you will first need to enable database mail from the Surface Area Configuration tool (SQL 2005) or the Server Properties’ Surface Area Configuration Facet (for SQL 2008). 

 

 

Once enabled, you can easily configure it using a wizard in the Management Studio. 
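For reference, the wizard's work can also be scripted with the msdb Database Mail procedures; every name and address below is a placeholder:

```sql
-- Create a mail account (placeholder SMTP details)
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name = N'DBA Mail Account',
     @email_address = N'sqlserver@yourcompany.com',
     @mailserver_name = N'smtp.yourcompany.com';

-- Create a profile and bind the account to it
EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = N'DBA Mail Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name = N'DBA Mail Profile',
     @account_name = N'DBA Mail Account',
     @sequence_number = 1;
```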

 

You can also enable the mail profile for SQL Agent service. 

If your apps are not going to use the mail feature – don’t enable or configure it. 

Step 10: Configure SQL Agent & Operator 

We are almost at the end of our configuration process. It is now worth having a look at the SQL Agent properties.  The three most important things that can be done here are: 

a)    Configuring SQL and Agent services to restart when they stop unexpectedly (discussed before) 

b)    Enabling the mail profile for SQL Agent. You can do it from the SQL Agent’s Alert System properties 

c)    Increasing the size of the job history log in the msdb database 

 

Changing the size of the job history log in the msdb database is usually a good idea if your server is going to host a large number of scheduled jobs.  
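The job history size can also be set from T-SQL with the msdb procedure sp_set_sqlagent_properties (an undocumented but long-standing procedure; the row counts below are illustrative):

```sql
-- Enlarge the Agent job history log in msdb
EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = 10000,
     @jobhistory_max_rows_per_job = 500;
```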

Define DBA Operators: 

Every database server should have one standard DBA operator defined. This operator will be notified via e-mail when jobs fail or alerts fire. Do not create yourself as this operator: if you leave the company, your e-mail account is likely to be deleted, and those who come after you will have to spend time reconfiguring everything as the notifications fail. That is why you should ask your system administrators to create a standard DBA account and assign a mailbox to it.
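Creating such an operator can be scripted; the operator name and address below are placeholders:

```sql
-- Standard DBA operator that failed jobs and alerts will notify
EXEC msdb.dbo.sp_add_operator
     @name = N'DBA Team',
     @enabled = 1,
     @email_address = N'dba-team@yourcompany.com';
```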

 

Step 11: Configure Operating System  

You can also make some OS level changes to give some extra power to SQL. This works if you have administrative privilege on the Local Windows server.  

If the database application is going to participate in distributed transactions through linked servers, ensure that Network DTC is enabled.  

 

To reach this screen in a Windows Server 2003 system, 

1.    Start the Component Service applet (Start > Administrative Tools > Component Services) 

2.    Expand Component Services node and then Computers node in the left side pane 

3.    Right Click on My Computer and choose Properties 

4.    Choose the MS DTC tab 

5.    Click the Security Configuration button 

The steps are similar for Windows Server 2008. 

You should also give your SQL Server service account the right to “Lock pages in memory”. This ensures that Windows will not swap SQL Server pages out of memory onto the paging file on disk. From version 2005 onwards, SQL Server can detect memory pressure from OS signals and dynamically allocate and de-allocate pages itself.  

To configure this property, assign the SQL service account to this privilege from the Local Security Policy. 

 

 

Step 12: Set up Maintenance Plans for System Databases 

Your server is now ready for production database rollouts. To wrap it all up, take a moment to create a maintenance plan for the system databases. This maintenance plan should reorganise the indexes, run DBCC CHECKDB and update the statistics. It should also back up the master, model and msdb databases. As a best practice, the plan should back up the system databases once every day. When you have created the maintenance plan, run each of its tasks manually to see that they succeed. 
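If you prefer scripted jobs to the maintenance plan wizard, the core tasks look roughly like this (the backup path is a placeholder):

```sql
-- Consistency checks on the system databases
DBCC CHECKDB ('master');
DBCC CHECKDB ('model');
DBCC CHECKDB ('msdb');
GO

-- Daily backups of the system databases (placeholder path)
BACKUP DATABASE master TO DISK = 'G:\Backups\master.bak' WITH INIT;
BACKUP DATABASE model  TO DISK = 'G:\Backups\model.bak'  WITH INIT;
BACKUP DATABASE msdb   TO DISK = 'G:\Backups\msdb.bak'   WITH INIT;
GO
```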

Conclusion 

Although it is not possible to describe every possible installation scenario, the steps discussed here should be common in every installation. You should customise this list to incorporate any extra configuration necessary. For example, you may need to configure Reporting Services after SQL configuration is complete. Perhaps there are custom jobs or DBA stored procedures that you roll out in every SQL Server in the company. In each case, these extra steps will form parts of your post-installation configuration.  

 

Posted in SQL Server DBA

SQL SERVER 2005 ARCHITECTURE


 

Please have a look here for the full and latest article on SQL Server 2005 architecture.
SQL Server Architecture 

SQL Server is a client/server relational database management system. Figure 1.3 shows the process that every query must follow, from its origin as a SQL query in the client application running in the client computer, to the final result set received by the client application. 

 

Fig: Client access to a SQL Server database 

These steps are defined as follows: 

1. The user selects an option in a client application. This option calls a function in the client application that generates a query that is sent to SQL Server. The application uses a database access library to send the query in a way SQL Server can understand. 

2. The database library transforms the original request into a sequence of one or more Transact-SQL statements to be sent to SQL Server. These statements are encapsulated in one or more Tabular Data Stream (TDS) packets and passed to the database network library to be transferred to the server computer. 

3. The database network library uses the network library available in the client computer to repackage the TDS packets as network protocol packets. 

4. The network protocol packets are sent to the server computer network library across the network, where they are unwrapped from their network protocol. 

5. The extracted TDS packets are sent to Open Data Services (ODS), where the original query is extracted. 

6. ODS sends the query to the relational engine, where the query is optimized and executed in collaboration with the storage engine. 

7. The relational engine creates a result set with the final data and sends it to ODS. 

8. ODS builds one or more TDS packets to be sent to the client application, and sends them to the server database network library. 

9. The server database network library repackages the TDS packets as network protocol packets and sends them across the network to the client computer. 

10. The client computer receives the network protocol packets and forwards them to the network libraries where the TDS packets are extracted. 

11. The network library sends the TDS packets to the database access library, where these packets are reassembled and exposed as a client result set to the client application. 

12. The client application displays information contained in the result sets to the user. 

Services Offered by MS SQL Server 2005: 

  • Database Engine 
  • Full-Text Search 
  • Replication 
  • Service Broker 
  • Integration Services 
  • Reporting Services 
  • Analysis Services 
  • Notification Services 

Database Engine: 

The Database Engine is the core service for storing, processing, and securing data. The Database Engine provides controlled access and rapid transaction processing to meet the requirements of the most demanding data consuming applications within your enterprise. 

Analysis Services: 

Analysis Services is the core service for supporting rapid analysis of business data, delivering online analytical processing (OLAP) and data mining functionality in business intelligence applications. 

Integration Services: 

SQL Server 2005 Integration Services (SSIS) is the extract, transform, and load (ETL) component of SQL Server 2005. It replaces the earlier SQL Server ETL component, Data Transformation Services (DTS). 

Replication: 

Replication is a set of technologies for copying and distributing data and database objects from one database to another, and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet. SQL Server provides three types of replication, each with different capabilities: transactional replication, merge replication, and snapshot replication. 

Reporting Services: 

SQL Server 2005 Reporting Services (SSRS) is a server-based reporting platform that provides comprehensive data reporting from relational and multidimensional data sources. Reporting Services includes processing components, a complete set of tools that you can use to create and manage reports, and an application programming interface (API) that allows developers to integrate or extend data and report processing in custom applications. 

Notification Services: 

SQL Server 2005 Notification Services is a platform for developing applications that generate and send notifications, and it is also an engine that runs those applications. You can use Notification Services to generate and send timely, personalized messages to thousands or even millions of subscribers, and deliver the messages to a wide variety of applications and devices. 

Full-Text Search: 

SQL Server contains the functionality you need to issue full-text queries against plain character-based data in SQL Server tables. Full-text queries could include words and phrases or multiple forms of a word or phrase. Full-Text Search allows fast and flexible indexing for keyword-based query of text data stored in a Microsoft SQL Server database. In SQL Server 2005, Full-Text Search delivers enterprise-level search functionality. 
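As a brief illustration of the kind of query Full-Text Search enables; the table, the column, and the existence of a full-text index on it are assumptions for the example:

```sql
-- Find rows whose Description column contains the word 'performance'
-- or the exact phrase 'query plan' (requires a full-text index)
SELECT ProductID, Description
FROM dbo.Products
WHERE CONTAINS(Description, '"performance" OR "query plan"');
```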

Service Broker: 

SQL Server 2005 Service Broker provides the SQL Server Database Engine native support for messaging and queuing applications. This makes it easier for developers to create sophisticated applications that use the Database Engine components to communicate between disparate databases. Developers can use Service Broker to easily build distributed and reliable applications. 

Components Installed with SQL Server 

When you install SQL Server you install two different sets of components: 

  • Server components are back-end services, responsible for data storage, data integrity, security, concurrency, and so on. 
  • Client components are front-end applications, used by administrators, developers, and even end users, to administer, develop, test, and use a SQL Server database system. 

Components Installed: 

  • Microsoft SQL Server service (MSSQLServer)— The main service, responsible for data storage, data integrity, consistency, concurrency, security, query processing, optimization, and execution. 
  • Microsoft SQL Server Agent (SQLServerAgent)— Responsible for scheduling jobs, managing alerts, and notifying operators. SQL Server Agent is an important service in SQL Server administration because many administrative operations depend on it to run automatically at fixed intervals: for example, backups, data consistency checks, index rebuilds, importing and exporting data, replication, and so on. 
  • Microsoft Search— Provides full-text search capabilities to SQL Server, as well as to Microsoft Exchange and Index Server. 
  • Microsoft SQL Server OLAP Service— Provides back-end support for Analysis Services. 
  • Microsoft Distributed Transaction Coordinator (MS-DTC)— Provides transaction support in multiserver and heterogeneous environments. 
  • Server Network libraries— SQL Server can listen on several network libraries at the same time, waiting for queries to answer, and use any of these libraries to send results to the client. The selected database network library must have a compatible server network library to work with. SQL Server 2000 supports the following network libraries: TCP/IP Sockets, Named Pipes, Multiprotocol, NWLink IPX/SPX, VIA ServerNET II SAN, VIA GigaNet SAN, Banyan VINES, and AppleTalk. 
Services Installed with SQL Server 2005 

  • Database Services 
  • Integration Services 
  • Analysis Services 
  • Reporting Services 
  • Notification and Broker Services 
  • Client Components 
    – Connectivity Components 
    – Management Tools 
    – Business Intelligence Development Studio 
    – Software Development Kit 
    – SQL XML Client Features 
    – Legacy Components 

  • Documentation, Samples and Sample Databases 
Tools and Utilities Available with SQL Server: 

SQL Server 2000: 

Enterprise Manager 

Query Analyzer 

Profiler 

Upgrade Wizard 

Service Manager 

Command-line utilities 

SQL Server 2005: 

SQL Server Management Studio 

Sqlcmd Utility 

SQL Configuration Manager 

Database Engine Tuning Advisor 

Query Editor 

Surface Area Configuration 

SQL Server Profiler 

Tablediff Utility 

SQL Server 2008: 

SQL Management Objects 

SQL Configuration Manager 

Sqlcmd Utility 

SQL Server Management Studio 

SQL Profiler 

SQL Server Agent 

Database Tuning Advisor 

Microsoft Operations Manager Pack 

Resource Governor 

Performance Studio 

Editions of SQL Server: 

There are three types of editions available: 

  • Core Editions 
    – Enterprise 
    – Standard 

  • Specialized Editions 
    – Workgroup 
    – Web 
    – Developer 

  • Free Editions 
    – Express 
    – Compact 

Enterprise: 

Enterprise workloads that need redundancy and built-in Business Intelligence 

Standard: 

Shared data scenarios in departments and small to large businesses 

Workgroup: 

Remote offices that need local instances of company data 

Web: 

For web application hosting 

Developer: 

Full featured edition for development and testing only 

Express: 

Entry level database, ideal for learning and ISV redistribution 

Compact: 

Embedded database for developing desktop and mobile applications 

Posted in Database Design, Interview Q&A, SQL Development, SQL Server DBA