Export to Excel from SQL Server using T-SQL script


This post explains how to export to Excel from SQL Server using a T-SQL script. Typically, to export data from SQL Server to Excel we can use various options:

  • Import / Export Wizard
  • BCP
  • OPENROWSET

The stored procedure below makes the task easier and provides more flexibility when exporting to Excel from SQL Server using a T-SQL script.

 

USE master
GO

/*** Make sure that the below are enabled on your server:
     -- Ad Hoc Distributed Queries
     -- xp_cmdshell
***/

/****** Object: StoredProcedure [dbo].[usp_ExportData_Excel]
        Script Date: 10/05/2009 08:39:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

/*** Execution example:
EXEC [usp_ExportData_Excel]
     @dbName       = 'Master',
     @sql          = 'select * from master.sys.sysdatabases',
     @fullFileName = 'D:\Table_Details.xls',
     @InstanceName = '',
     @User         = '',
     @Pwd          = ''
***/

CREATE PROC [dbo].[usp_ExportData_Excel]
(
    @dbName       VARCHAR(100)  = 'master',
    @sql          VARCHAR(8000) = 'select * from master.sys.sysdatabases',
    @fullFileName VARCHAR(100)  = 'D:\Table_Details.xls',
    @InstanceName VARCHAR(100)  = '',
    @User         VARCHAR(20)   = '',
    @Pwd          VARCHAR(20)   = ''
)
AS
BEGIN
    SET NOCOUNT ON

    IF @sql = '' OR @fullFileName = ''
    BEGIN
        SELECT 0 AS ReturnValue -- failure
        RETURN
    END

    -- If no database is passed in, the default (master) is used
    SELECT @dbName = 'use ' + @dbName + ';'

    IF OBJECT_ID('tempdb..##TempExportData') IS NOT NULL
        DROP TABLE ##TempExportData
    IF OBJECT_ID('tempdb..##TempExportData2') IS NOT NULL
        DROP TABLE ##TempExportData2

    -- Insert the data into a global temp table
    DECLARE @columnNames   VARCHAR(8000),
            @columnConvert VARCHAR(8000),
            @tempSQL       VARCHAR(8000)

    SELECT @tempSQL = LEFT(@sql, CHARINDEX('from', @sql) - 1) +
                      ' into ##TempExportData ' +
                      SUBSTRING(@sql, CHARINDEX('from', @sql) - 1, LEN(@sql))

    EXEC (@dbName + @tempSQL)

    IF @@ERROR > 0
    BEGIN
        SELECT 0 AS ReturnValue -- failure
        RETURN
    END

    -- Build two lists:
    -- 1. column names
    -- 2. columns converted to nvarchar
    SELECT @columnNames   = COALESCE(@columnNames + ',', '') + column_name,
           @columnConvert = COALESCE(@columnConvert + ',', '') + 'convert(nvarchar(4000),'
               + column_name
               + CASE WHEN data_type IN ('datetime', 'smalldatetime')            THEN ',121'
                      WHEN data_type IN ('numeric', 'decimal')                   THEN ',128'
                      WHEN data_type IN ('float', 'real', 'money', 'smallmoney') THEN ',2'
                      ELSE ''
                 END + ') as ' + column_name
    FROM tempdb.INFORMATION_SCHEMA.COLUMNS
    WHERE table_name = '##TempExportData'

    -- Execute a select query to insert the data and column names into a new temp table
    SELECT @sql = 'select ' + @columnNames + ' into ##TempExportData2 from
        (select ' + @columnConvert + ', ''2'' as [temp##SortID] from ##TempExportData
         union all
         select ''' + REPLACE(@columnNames, ',', ''', ''') + ''', ''1'') t
        order by [temp##SortID]'

    EXEC (@sql)

    -- Build the full bcp command
    DECLARE @bcpCommand VARCHAR(8000)
    SET @bcpCommand = 'bcp "SELECT * from ##TempExportData2" queryout'
    SET @bcpCommand = @bcpCommand + ' ' + @fullFileName + ' -S' + @InstanceName +
                      ' -c -T -U ' + @User + ' -P ' + @Pwd + ' -CRAW'

    EXEC master..xp_cmdshell @bcpCommand

    IF @@ERROR > 0
    BEGIN
        SELECT 0 AS ReturnValue -- failure
        RETURN
    END

    DROP TABLE ##TempExportData
    DROP TABLE ##TempExportData2

    SET @columnNames   = ' '
    SET @columnConvert = ' '
    SET @tempSQL       = ' '

    SELECT 1 AS ReturnValue -- success
END
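The procedure shells out to bcp through xp_cmdshell and relies on ad hoc queries, so both server options must be enabled before the first run. A minimal sketch of how to turn them on with sp_configure (run with sysadmin rights):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;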

 


New features added in SQL Server 2012


Time to Denali – SQL 2012

Code-named Denali is now ready with CTP3. Here is the list of new features added in SQL Server 2012.

Operating System Support & Prerequisites

  • Windows 7 & Windows Server 2008 R2
  • Windows Server 2008 R2 Server Core
  • Windows Vista & Windows Server 2008
  • Windows XP & Windows Server 2003
  • Windows ME, 98SE, 98 & 95
  • All Operating Systems

Management Enhancements

  • SQL Server Management Studio
  • Startup Options
  • Contained Databases
  • Windows PowerShell
  • -K Option Added to bcp.exe and sqlcmd.exe
  • sqlcmd.exe Behavior Change
  • Database Engine Tuning Advisor Enhancements
  • New and Modified Dynamic Management Views and Functions

Programmability Enhancements

  • Full-Text Search
  • Introduction to Sequence
  • Circular Arc Segment Support for Spatial Types
  • Extend Order by Clause functionality
  • Extend Execute Statement functionality
  • New Collation Support
  • Metadata Discovery
  • Native XML Web Services (SOAP/HTTP endpoints) is Removed
  • FileTables
  • Statistical Semantic Search
  • THROW statement – New Error Handling
  • 14 New Functions and 1 Changed Function
  • SQL Server Express LocalDB
  • New and Enhanced Query Optimizer Hints
  • XEvent Enhancements
  • OVER Clause Support Enhanced
  • Analytic Functions

Security Enhancements

  • Provisioning During Setup
  • New Permissions
  • New Role Management
  • Default Schema for Groups
  • SQL Server Audit Enhancements
  • Database Engine Access is Allowed Through Contained Databases
  • Hashing Algorithms
  • Further Deprecation of RC4
  • Certificate Key Length
  • Service Master Key and Database Master Key Encryption changes from 3DES to AES
  • Certificates Can be Created from Binary

Scalability and Performance Enhancements

  • Columnstore Indexes
  • Online Index Rebuild
  • Partition Support Increased

Availability Enhancements

  • AlwaysOn – Next Version to the Mirroring
  • AlwaysOn SQL Server Failover Cluster Instances
  • AlwaysOn Availability Groups
  • Indirect Checkpoints

Business Intelligence Enhancements

  • SSRS
  • SSIS
  • SSAS

Other Enhancements

  • Denali Enhancement – Duplicate Key Error Message
  • Online non-NULL with values column add in SQL Server Denali
  • A glance at SQL Server Denali CTP3 – DATEFROMPARTS

 

Operating System Support & Prerequisites

Here are the different sets of operating systems that are supported (or not supported), and what else you need to know to get Denali up and running on them.

Windows 7 & Windows Server 2008 R2

Denali is officially supported on Windows 7 Service Pack 1 and Windows Server 2008 R2 Service Pack 1. It will install and run fine if you don’t have Service Pack 1 installed, but you will probably get one of those nasty-gram “known compatibility issues” dialogs that suggest you update your operating system. Nip that in the bud now, and install SP1. If you don’t already have .NET Framework 3.5 SP1 installed, you will need to install it. You will need to enable both the .NET Framework 3.5 SP1 and .NET Framework 4.0 (which should already be installed) in order to install most of the common Denali components on Windows 7 and Windows Server 2008 R2.

Windows Server 2008 R2 Server Core

Denali is supported on Windows Server 2008 R2 SP1 Server Core. This is the first time SQL Server has been officially supported on a Server Core operating system. There were hacks around that would allow you to install SQL Server 2008, for example, on Windows Server 2008 Server Core, but this configuration is *not* supported. Note that the setup UI is not supported; you need to use Quiet (/Q) or Quiet Simple (/QS) from the command line, as well as an /Action parameter, in order to continue. If you are installing the Express version of Denali on server core, you will need to download and install the standalone installer of .NET Framework 4.0 for Server Core before proceeding with Denali setup. Unlike the “full” version of Windows Server 2008 R2, and unlike Denali setup when run against those other operating systems, the framework isn’t present by default on Server Core – and the Express version of Denali will not install it for you.
Windows Vista & Windows Server 2008

Denali is supported on Windows Vista Service Pack 2 and Windows Server 2008 Service Pack 2. I suspect installation may be blocked if you are at Service Pack 1 or RTM (though it may just be a warning like above), so please make sure you are at Service Pack 2 if possible. Unlike previous versions of SQL Server, you will need to install the .NET Framework 3.5 SP1 and then an update from KB #956250 before you run setup. Denali will, however, install the .NET Framework 4.0, if you don’t already have it. Note that CTP3 (because of PowerShell 2.0 requirements) cannot be installed on Vista & Windows Server 2008 for these operating system languages: Bulgarian, Estonian, Croatian, Lithuanian, Latvian, Romanian, Serbian, Slovak, Slovenian, Thai, or Ukrainian. In these cases setup won’t be blocked, but you will get an error message that simply says, “This package is not applicable.”

Windows XP & Windows Server 2003

Denali is *not* supported on Windows XP (or Windows Server 2003). If this is your preferred operating system, or if you have legacy applications that won’t work on newer operating systems, you will need to use virtualization software (Virtual PC 2007, VirtualBox, VMWare Workstation) to run a more modern operating system, and install Denali there. You also might consider upgrading to a newer version of Windows, running Denali there, and running your legacy apps under XP mode.

Windows ME, 98SE, 98 & 95

Not Supported

All Operating Systems

If you’re not already running PowerShell 2.0, you will need it for both the database engine and Management Studio. Unlike previous versions, setup will not install PowerShell for you, so make sure PowerShell 2.0 is installed and enabled on your machine before you begin. If your operating system is x64, make sure to choose the 64-bit Management Tools and 64-bit Windows PowerShell Extensions for SQL Server. The 32-bit extensions will not work.

 

Manageability Enhancements

 

SQL Server Management Studio


SQL Server Management Studio introduces the following features

  • SQL Server Management Studio supports two keyboard shortcut schemes. The new default keyboard shortcuts are based on the Microsoft Visual Studio 2010 keyboard shortcuts. You can also configure Management Studio to use the keyboard shortcuts from SQL Server 2008 R2. Denali no longer includes the SQL Server 2000 Enterprise Manager shortcuts.
  • The Database Engine Query Editor introduces enhanced functionality for Transact-SQL debugging and IntelliSense.
    • The Transact-SQL debugger introduces the following new features:
      • You can now debug Transact-SQL scripts running on instances of SQL Server 2005 Service Pack 2 (SP2) or later.
      • Transact-SQL breakpoints now support the following functionality:
        • A breakpoint condition is a Transact-SQL expression whose evaluation determines whether the breakpoint is invoked.
        • A breakpoint hit count specifies the number of times a breakpoint is encountered before it is invoked.
        • A breakpoint filter limits the breakpoint to operating only on specified computers, processes, or threads.
        • A breakpoint action specifies a custom task that is performed when the breakpoint is invoked.
        • You can edit a breakpoint location to move a breakpoint from one Transact-SQL statement to another.
      • The Watch window and Quick Watch now support watching Transact-SQL expressions.
      • When you move the cursor over a Transact-SQL identifier, a Quick Info pop up displays the name of the expression and its current value.
    • Transact-SQL IntelliSense introduces the following new features:
      • Breakpoint validation prevents setting a breakpoint in an invalid location.
      • Transact-SQL code snippets are templates you can use as starting points when building Transact-SQL statements in batches and scripts.
    • Many known database restore issues and enhancements requested by customers have been addressed. The major enhancements include:
      • The algorithm used to construct a restore plan has improved significantly, and many edge cases, including forking scenarios, are dealt with more efficiently than in previous versions of SQL Server.
      • Point-in-time restore has been improved by the addition of a visual timeline that allows you to identify a feasible point in time as a target time for a database restore operation.
    • The new Page Restore dialog enables you to check database pages for corruption and to restore selected corrupt pages from a database backup and subsequent log backups.

Startup Options


Database Engine startup options are now configured by using a new Startup Parameters tab of SQL Server Configuration Manager.

Contained Databases


Moving databases from one instance of the SQL Server Database Engine to another instance of the Database Engine is simplified by introducing contained databases. Users in a contained database are no longer associated with logins on the instance of SQL Server. Many other dependencies on the instance are also removed.

Benefits of Contained Databases:

One of the problems facing the existing database model is data being left behind when an application is moved from one instance to another. Some data connected with an application (for example, login information and agent job information) is currently stored within the instance instead of in the database. When you move a non-contained database application from one instance to another instance of SQL Server, this data is left behind. You then must identify the data left behind and move it with your application to the new instance of SQL Server. This process can be time consuming and difficult.

The contained database keeps all necessary information and objects in the database, for example, tables, functions, constraints, schemas, and types. It also stores all application-level objects in the database, including logins, application-level agent jobs, persisted error messages, linked server information, and system settings.
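As a rough sketch of how this looks in practice (the database name SalesDB and user name AppUser are only examples), you enable contained database authentication on the instance, mark the database as partially contained, and then create a user with a password instead of a login:

EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;

USE SalesDB;
GO
-- A contained user authenticates at the database; no server login is required
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';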

Windows PowerShell


Starting with “Denali” CTP 3, Windows PowerShell is no longer installed by SQL Server Setup. Windows PowerShell 2.0 is a pre-requisite for installing “Denali” CTP 3. “Denali” now uses the new Windows PowerShell 2.0 feature called modules for loading the SQL Server components into a PowerShell environment. Users import the sqlps module into PowerShell, and the module then loads the SQL Server snap-ins.

The sqlps utility is also added to the list of deprecated features starting in “Denali” CTP 3. The SQL Server PowerShell provider includes two new cmdlets: backup-sqldatabase and restore-sqldatabase.

-K Option Added to bcp.exe and sqlcmd.exe


The bcp and sqlcmd utilities now have the -K switch, which allows you to specify read-only access to a secondary replica in an AlwaysOn availability group.

sqlcmd.exe Behavior Change


If you use sqlcmd.exe with XML mode (:XML ON command) when executing a SELECT * from T FOR XML …, expect behavior changes.

Behavior prior to “Denali” CTP 3:

  • Text data containing a single quote had the quote replaced with the &apos; escape sequence.
  • Legacy datetime data values with 0 fractional seconds were shown with three decimal digits.
  • Money data values with no decimal value were represented as whole numbers.

Beginning in “Denali” CTP 3:

  • Text data containing a single quote no longer has the quote replaced with &apos;. It is still valid XML and an XML parser will give the same result.
  • Legacy datetime data values with no fractional seconds will not return 3 decimal digits. Other date/time types are not affected.
  • Money data values with no decimal value now show the 4 decimal digits.

 

Database Engine Tuning Advisor Enhancements


In “Denali” CTP3, you can use the query plan cache as a Database Engine Tuning Advisor (DTA) workload. By doing this, you can avoid having to manually create a workload from a script or trace file. When you specify the plan cache as the DTA workload, the Database Engine Tuning Advisor selects the top 1,000 events to use for analysis. The number of events can be changed using the -n option of the DTA utility.

New and Modified Dynamic Management Views and Functions


The following system views have been added or modified.

sys.dm_exec_query_stats :

Added four columns to help troubleshoot long running queries. You can use the total_rows, min_rows, max_rows and last_rows aggregate row count columns to separate queries that are returning a large number of rows from problematic queries that may be missing an index or have a bad query plan.
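For example, a query along the following lines (a sketch, not part of the original post) surfaces the heaviest statements together with the new row count columns:

SELECT TOP (10)
       qs.execution_count,
       qs.total_rows, qs.min_rows, qs.max_rows, qs.last_rows,
       qs.total_elapsed_time,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;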

sys.dm_os_volume_stats :

This dynamic management function returns information about the operating system volume (directory) on which the specified databases and files are stored. Use this dynamic management function to check the attributes of the physical disk drive or return available free space information about the directory.

sys.dm_os_windows_info :

This dynamic management view returns one row that displays Windows operating system version information such as the OS version or language ID.

sys.dm_server_memory_dumps , sys.dm_server_services And sys.dm_server_registry :

These dynamic management views return property information associated with the SQL Server, Full-text, and SQL Server Agent services that are installed on the host server. These views also contain configuration, installation, and memory dump file information.

 

Programmability Enhancements

 

Full-Text Search:


In Denali, you can configure a full-text index to support property-scoped searching on properties, such as Author and Title, that are emitted by filters. In addition, you can customize a proximity search by using the new custom NEAR option of the CONTAINS predicate or CONTAINSTABLE function.

 

Introduction to Sequence:


SQL Server finally has the concept of a “sequence” that Oracle has had for years. A sequence is a user-defined schema-bound object that generates a sequence of numeric values according to the specification with which the sequence was created. The sequence of numeric values is generated in an ascending or descending order at a defined interval and may cycle (repeat) as requested. Sequences, unlike identity columns, are not associated with tables. An application refers to a sequence object to receive its next value. The relationship between sequences and tables is controlled by the application. User applications can reference a sequence object and coordinate the values across multiple rows and tables.

Example:

CREATE SEQUENCE dbo.SequenceName
    AS int
    START WITH 1
    INCREMENT BY 1;
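Values are then pulled from the sequence with NEXT VALUE FOR, either standalone or inside an INSERT (the Orders table below is hypothetical):

SELECT NEXT VALUE FOR dbo.SequenceName;   -- returns 1 on the first call

INSERT INTO dbo.Orders (OrderID, CustomerName)
VALUES (NEXT VALUE FOR dbo.SequenceName, 'Contoso');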

 

Circular Arc Segment Support for Spatial Types:


There are new methods for geometry and geography data types that work with circular arc segments.

Three new sub-data types for the geometry and geography data types can be used to store circular arc segments: CircularString, CompoundCurve, and CurvePolygon. New methods have also been introduced to work with these data types.

 

Extend Order by Clause functionality:


You can specify a range of rows returned by a SELECT statement based on row offset and row count values that you provide. Order the result set of a query by the specified column list and, optionally, limit the rows returned to a specified range.
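A small sketch of the new OFFSET / FETCH syntax, paging through sys.objects ten rows at a time:

SELECT name, object_id
FROM sys.objects
ORDER BY name
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY;   -- rows 11 to 20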

 

Extend Execute Statement functionality:


SQL Server extends the EXECUTE statement so that it can be used to send pass-through commands to linked servers. Additionally, the context in which a string or command is executed can be explicitly set.

Metadata for the result set can be defined by using the WITH RESULT SETS options.
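For illustration, assuming a procedure dbo.usp_GetOrders exists, WITH RESULT SETS lets the caller declare the exact shape of the result:

EXEC dbo.usp_GetOrders @CustomerID = 42
WITH RESULT SETS
(
    (OrderID   int           NOT NULL,
     OrderDate datetime2     NOT NULL,
     TotalDue  decimal(18,2) NOT NULL)
);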

New Collation Support:


A new family of supplementary characters (SC) collations can be used with the data types nchar, nvarchar and sql_variant.

 

Metadata Discovery:


The SET FMTONLY option for determining the format of a response without actually running the query is replaced with:

  • sp_describe_first_result_set
  • sp_describe_undeclared_parameters
  • sys.dm_exec_describe_first_result_set
  • sys.dm_exec_describe_first_result_set_for_object
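For example, sp_describe_first_result_set can describe a batch without executing it (a sketch):

EXEC sp_describe_first_result_set
     @tsql = N'SELECT name, database_id, create_date FROM sys.databases';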

Native XML Web Services (SOAP/HTTP endpoints) is Removed:


Beginning in Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3), you can no longer use CREATE ENDPOINT or ALTER ENDPOINT to add or modify SOAP/HTTP endpoints.

FileTables


The FileTable feature builds on top of the SQL Server FILESTREAM technology to bring support for the Windows file namespace and compatibility with Windows applications to the file data stored in SQL Server. This lets an application integrate its storage and data management components, and provides integrated SQL Server services (including full-text search and semantic search) over unstructured data and metadata, along with easy policy management and administration.

In summary, you can now store files and documents in special tables in SQL Server, but access them from Windows applications as if they were stored in the file system, without making any changes to the Windows applications.

Statistical Semantic Search


Statistical Semantic Search provides deep insight into unstructured documents stored in SQL Server databases by extracting statistically relevant key phrases, and then – based on these phrases – identifying similar documents. These results are made available as structured data through three Transact-SQL rowset functions.

Semantic search builds upon the existing full-text search feature in SQL Server but enables new scenarios that extend beyond syntactical keyword searches. While full-text search lets you query the words in a document, semantic search lets you query the meaning of the document. New scenarios include automatic tag extraction, related content discovery, and hierarchical navigation across similar content. For example, you can query the document similarity index to identify resumes that match a job description. Or, you can query the index of key phrases to build the taxonomy for an organization, or for a corpus of documents.

Before you can use Semantic Search, you have to install, attach, and register an additional database.

THROW statement – New Error Handling


The THROW statement can be used to raise an exception and transfer execution to a CATCH block of a TRY…CATCH construct.
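A minimal sketch of both uses, re-throwing the original error from a CATCH block and raising a user-defined error (user error numbers must be 50000 or higher):

BEGIN TRY
    SELECT 1 / 0;   -- something that fails
END TRY
BEGIN CATCH
    THROW;          -- re-raises the original error with its severity and state
END CATCH;

-- THROW 50001, 'Something went wrong.', 1;   -- user-defined error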

14 New Functions and 1 Changed Function


“Denali” CTP3 introduces 14 new built-in functions. These functions ease the path of migration for information workers by emulating functionality that is found in the expression languages of many desktop applications. However, these functions will also be useful to experienced users of SQL Server.

The new functions are:

 

  • Conversion functions
    • PARSE (Transact-SQL)
    • TRY_CONVERT (Transact-SQL)
    • TRY_PARSE (Transact-SQL)
  • Date and time functions
    • DATEFROMPARTS (Transact-SQL)
    • DATETIME2FROMPARTS (Transact-SQL)
    • DATETIMEFROMPARTS (Transact-SQL)
    • DATETIMEOFFSETFROMPARTS (Transact-SQL)
    • EOMONTH (Transact-SQL)
    • SMALLDATETIMEFROMPARTS (Transact-SQL)
    • TIMEFROMPARTS (Transact-SQL)
  • Logical functions
    • CHOOSE (Transact-SQL)
    • IIF (Transact-SQL)
  • String functions
    • CONCAT (Transact-SQL)
    • FORMAT (Transact-SQL)

In addition to the 14 new functions, one existing function has been changed. The existing LOG (Transact-SQL) function now has an optional second base parameter.
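A quick, illustrative round-up of several of the new functions and the changed LOG function in a single SELECT:

SELECT TRY_CONVERT(int, 'abc')             AS TryConvertResult,   -- NULL instead of an error
       EOMONTH('2012-02-10')               AS EndOfMonth,         -- 2012-02-29
       IIF(1 > 0, 'yes', 'no')             AS IifResult,
       CONCAT('SQL', ' ', 'Server ', 2012) AS ConcatResult,
       FORMAT(1234567.89, 'N2', 'en-US')   AS FormatResult,
       CHOOSE(2, 'one', 'two', 'three')    AS ChooseResult,
       LOG(8, 2)                           AS LogBase2;            -- new optional base argument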

SQL Server Express LocalDB


SQL Server Express LocalDB is a new lightweight edition of Express that has all its programmability features, yet runs in user mode and has a fast, zero-configuration installation and short list of pre-requisites. The LocalDB edition of SQL Server is targeted to program developers. LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. To use the SQL Server Database Engine, developers connect directly to the database files and the SQL Server infrastructure is transparent to them. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server. An instance of SQL Server Express LocalDB is managed by using the LocalDBManager.exe utility. SQL Server Express LocalDB should be used in place of the SQL Server Express user instance feature which is deprecated.

New and Enhanced Query Optimizer Hints


The syntax for the FORCESEEK table hint has been modified. You can now specify an index and index columns to further control the access method on the index. The existing FORCESEEK syntax remains unmodified and works as before. No changes to applications are necessary if you do not plan to use the new functionality.
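A sketch of the extended syntax, with hypothetical table and index names:

SELECT SalesOrderID, OrderDate, TotalDue
FROM dbo.SalesOrderHeader WITH (FORCESEEK (IX_SalesOrderHeader_OrderDate (OrderDate)))
WHERE OrderDate >= '2012-01-01';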

XEvent Enhancements


The following new XEvents are available.

  • page_allocated
    • Fields: worker_address, number_pages, page_size, page_location, allocator_type, page_allocator_type, pool_id
  • page_freed
    • Fields: worker_address, number_pages, page_size, page_location, allocator_type, page_allocator_type, pool_id
  • allocation_failure
    • Fields: worker_address, failure_type, allocation_failure_type, resource_size, pool_id, factor

The following XEvents have been modified.

  • resource_monitor_ring_buffer_record
    • Fields removed: single_pages_kb, multiple_pages_kb
    • Fields added: target_kb, pages_kb
  • memory_node_oom_ring_buffer_recorded
    • Fields removed: single_pages_kb, multiple_pages_kb
    • Fields added: target_kb, pages_kb

OVER Clause Support Enhanced


The OVER clause has been extended to support window functions. Window functions perform a calculation across a set of rows that are in some relationship to the current row. For example, you can use the ROWS or RANGE clause over a set of rows to calculate a moving average or cumulative total. In addition, ordering rows within a partition is now supported in the aggregate functions that allow the OVER clause to be specified.
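For instance, a running total and a three-row moving average can now be written directly with window frames (table and column names below are illustrative):

SELECT CustomerID,
       OrderDate,
       TotalDue,
       SUM(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate
                           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal,
       AVG(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate
                           ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)         AS MovingAvg3
FROM dbo.SalesOrderHeader;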

Analytic Functions


The following analytic functions have been added.

  • CUME_DIST (Transact-SQL)
  • LAST_VALUE (Transact-SQL)
  • PERCENTILE_DISC (Transact-SQL)
  • FIRST_VALUE (Transact-SQL)
  • LEAD (Transact-SQL)
  • PERCENT_RANK (Transact-SQL)
  • LAG (Transact-SQL)
  • PERCENTILE_CONT (Transact-SQL)
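As a small illustration (again with hypothetical table and column names), LAG, LEAD and FIRST_VALUE compare each row with its neighbours inside a partition:

SELECT CustomerID,
       OrderDate,
       TotalDue,
       LAG(TotalDue)         OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS PreviousOrder,
       LEAD(TotalDue)        OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS NextOrder,
       FIRST_VALUE(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS FirstOrder
FROM dbo.SalesOrderHeader;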

Security Enhancements

 

Provisioning During Setup


To enhance role separation, BUILTIN\Administrators and Local System (NT AUTHORITY\SYSTEM) are not automatically provisioned in the sysadmin fixed server role. Local administrators can still access the Database Engine when in single-user mode.

SQL Server now supports Managed Service Accounts and Virtual Accounts when installed on Windows 7 or Windows Server 2008 R2. The protection of operating services under a per-service SID is now extended to all operating systems.

New Permissions


New GRANT, REVOKE, and DENY permissions to a SEARCH PROPERTY LIST are available.

New GRANT, REVOKE, and DENY permissions to CREATE SERVER ROLE and ALTER ANY SERVER ROLE.

New Role Management


User-defined server roles are now available. To manage user-defined server roles and to add and remove members from all server roles, use CREATE SERVER ROLE, ALTER SERVER ROLE and DROP SERVER ROLE.

sp_addsrvrolemember and sp_dropsrvrolemember are deprecated. Use ALTER SERVER ROLE instead.

ALTER ROLE is modified to add or remove members from roles.

sp_addrolemember and sp_droprolemember are deprecated. Use ALTER ROLE instead.

IS_ROLEMEMBER is added to check the membership of database roles.
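A short sketch of the new DDL (role and group names are examples only):

CREATE SERVER ROLE ServerMonitors;
GRANT VIEW SERVER STATE TO ServerMonitors;
ALTER SERVER ROLE ServerMonitors ADD MEMBER [DOMAIN\MonitoringTeam];
-- ALTER SERVER ROLE ServerMonitors DROP MEMBER [DOMAIN\MonitoringTeam];
-- DROP SERVER ROLE ServerMonitors;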

Default Schema for Groups


You can now define a default schema for a Windows group. When an object is created by a Windows user and when a default schema is not specified, SQL Server no longer automatically creates a schema.

SQL Server Audit Enhancements


Support for server auditing is expanded to include all editions of SQL Server. Database audits are limited to Enterprise, Datacenter, Developer, and Evaluation editions.

SQL Server Audit is now more resilient to failures to write to the audit log. For example, if the target directory is on a remote share and the network goes down, SQL Server Audit will now be able to recover once the network connection is re-established. In addition, a new option has been introduced to fail an operation that would otherwise generate an audit event to be written to a failed audit target. For more information, see the FAIL_OPERATION option for the ON_FAILURE event in CREATE SERVER AUDIT.

Previously, Audit logs could have an indeterminate number of log files or else be rolled-over after a predefined number. A new option has been introduced to cap the number of audit files without rolling over, in order to allow customers to control the amount of audit information collected without losing audit records. For more information, see the MAX_FILES option in CREATE SERVER AUDIT.

When possible, the audit log provides additional Transact-SQL stack frame information. In many cases, auditors can now determine whether a query was issued through a stored procedure or directly by an application.

SQL Server audit specifications now support a user-defined audit group. Audited events can be written to the audit log by using the new sp_audit_write procedure. User-defined audit events allow applications to write custom information to the audit log, such as the name of the application user who has connected in cases where a common login is used to connect to SQL Server.

New columns are added to sys.server_file_audits, sys.server_audits, and sys.fn_get_audit_file to track user-defined audit events.

SQL Server Audit now supports the ability to filter audit events before they are written to the audit log. For more information, see the WHERE clause in CREATE SERVER AUDIT and ALTER SERVER AUDIT.

New audit groups support the monitoring of contained database users. The new audit options have been added to the audit dialog boxes in Management Studio.

Database Engine Access is Allowed Through Contained Databases


Access to contained databases is permitted through contained database users which do not require logins. SQL Server system administrators should understand how contained databases change the SQL Server security model.

Hashing Algorithms


The HASHBYTES function now supports the SHA2_256, and SHA2_512 algorithms.
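For example:

SELECT HASHBYTES('SHA2_256', N'The quick brown fox') AS Sha256Hash,
       HASHBYTES('SHA2_512', N'The quick brown fox') AS Sha512Hash;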

Further Deprecation of RC4


The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128 when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES algorithms instead. In “Denali” CTP3 material encrypted using RC4 or RC4_128 can be decrypted in any compatibility level.

 

Certificate Key Length


When creating certificates, the maximum length of private keys imported from an external source is expanded from 3,456 to 4,096 bits.

Service Master Key and Database Master Key Encryption changes from 3DES to AES


“Denali” CTP3 uses the AES encryption algorithm to protect the service master key (SMK) and the database master key (DMK). AES is a newer encryption algorithm than the 3DES algorithm used in earlier versions. After upgrading an instance of the Database Engine to “Denali” CTP3, the SMK and DMK should be regenerated in order to upgrade the master keys to AES.

Certificates Can be Created from Binary


CREATE CERTIFICATE (Transact-SQL) has the FROM BINARY option to allow specifying the bits of an ASN encoded certificate.

 

Scalability and Performance Enhancements

Columnstore Indexes


The Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3) introduces a new data warehouse query acceleration feature based on a new type of index called the columnstore. This new index, combined with enhanced query processing features, improves data warehouse query performance by hundreds to thousands of times in some cases, and can routinely give a tenfold speedup for a broad range of decision support queries. This can allow end users to get more business value from their data through fast, interactive exploration. IT workers can reduce development costs and ETL times because columnstore indexes limit or eliminate the need to rely on pre-built aggregates, including user-defined summary tables, and indexed (materialized) views.
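A sketch of the syntax, against a hypothetical fact table; note that in Denali/SQL Server 2012 the columnstore index is read-only, so the table cannot be updated while the index exists:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, Quantity);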

Online Index Rebuild


Indexes containing varchar(max), nvarchar(max), and varbinary(max) columns can now be rebuilt as an online operation.

Partition Support Increased


Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3) supports up to 15,000 partitions by default. In earlier versions, the number of partitions was limited to 1,000 by default.

 

Availability Enhancements

 

For protecting application databases in an enterprise environment from both planned and unplanned downtime, Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3) introduces the AlwaysOn Availability Groups feature and a number of other high-availability enhancements.

AlwaysOn – Next Version to the Mirroring


SQL mirroring is great and I use it a lot in my day-to-day role, but it has two major shortcomings:

  • The mirror database cannot be accessed.
  • You can only have one mirror database.

You can of course address issue 1 to a degree by creating a database snapshot of the mirror to access the data, but the data becomes stale (especially on high write databases).

“AlwaysOn” addresses both of these issues by allowing you to access the mirror database (the secondary replica) in read-only mode, and allows up to four mirrored databases. The ability to add additional read-only mirrors brings with it a dynamically scalable database solution. For example:

  • We could quite easily spin up a new mirrored database during busier periods to share the load.
  • Spin up a mirror to allow the business users to run reports without affecting Production.
  • Create mirror databases in multiple data centers for improved redundancy.
  • Simplify migration to new servers, as the current mirroring setup can be maintained whilst the new mirror is configured.

Another cool feature is the ability to group multiple databases together (Availability Groups), meaning two or more separate but dependent databases can fail over together.

AlwaysOn SQL Server Failover Cluster Instances


Multi-subnet failover clusters: A SQL Server multi-subnet failover cluster is a configuration where each failover cluster node is connected to a different subnet or different set of subnets. These subnets can be in the same location or in geographically dispersed sites. Clustering across geographically dispersed sites is sometimes referred to as Stretch clusters. As there is no shared storage that all the nodes can access, data should be replicated between the data storage on the multiple subnets. With data replication, there is more than one copy of the data available. Therefore, a multi-subnet failover cluster provides a disaster recovery solution in addition to high availability.

Flexible failover policy for cluster health detection: In a SQL Server failover cluster instance, only one node can own the cluster resource group at a given time. The client requests are served through this node for that failover cluster instance. In the case of a failure, the group ownership is moved to another node in the failover cluster. This process is called failover. The improved failure detection introduced in Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3), and addition of failure condition level property allows you to configure a more flexible failover policy.

AlwaysOn Availability Groups


The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in Microsoft SQL Server Code-Named “Denali”, Community Technology Preview 3 (CTP 3), AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. Deploying AlwaysOn Availability Groups involves creating and configuring one or more availability groups. Each availability group is a container for a discrete set of user databases, known as availability databases, that fail over together. An availability group can have multiple possible failover targets (secondary replicas). Moreover, you can configure secondary replicas to support read-only access to secondary databases and for performing backups on secondary databases.

Indirect Checkpoints


The indirect checkpoints feature provides a database-specific alternative to automatic checkpoints, which are configured by a server property. Indirect checkpoints implement a new checkpointing algorithm for the Database Engine. This algorithm provides a more accurate guarantee of database recovery time in the event of a crash or a failover than is provided by automatic checkpoints. To ensure that database recovery does not exceed allowable downtime for a given database, you can specify the maximum allowable downtime for that database.
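Indirect checkpoints are enabled per database by setting a target recovery time, for example (the database name is illustrative):

ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;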

Business Intelligence

SSRS

  • Self-Service Reporting Capability – a new reporting tool for visually exploring data to answer ad-hoc questions, mainly designed for end users – codenamed ‘Project Crescent’.
  • Self-Service Alerting: Can use existing Report Builder or BIDS reports to setup alerts when the report data changes (requires SharePoint)
  • New Excel and Word 2007/2010 renderers (allows exporting data directly to Excel, for instance)
  • Reporting Services now works as a SharePoint Shared Service – supports claims authentication, built-in scale-out with load balancing, PowerShell cmdlets, and cross-farm report consumption, along with benefits to configuration, deployment and administration.
  • BI Development Studio integrated with Visual Studio 2010
  • RDLC Designer and ReportViewer control have been upgraded to support “Denali” codebase.

SSIS

In Microsoft SQL Server Code-Named “Denali” CTP3, Integration Services (SSIS) introduces several usability enhancements to help new and experienced users develop packages more easily.

 

  • SSIS Projects, which encapsulate multiple packages into a single unit that you build, deploy & execute.
  • Project and Package Parameters, which are the replacement for configurations.
  • Shared Connection Managers, which can be used by multiple packages.
  • Expression Indicator: Variables, Connection Managers and Tasks now have an fx adorner applied to them indicating that there is at least one expression on that object
  • SSIS server: SSIS server is no longer a separate service that you connect to, it now appears as a node in Object Explorer when connecting to a SQL Server database instance
  • Catalog, which is a database that stores deployed projects, environments, and package execution logs. Each SSIS server has one catalog.
  • Environments, which are a wrapper for all environment-specific information (e.g. Connection Strings) and are the replacement for configurations. They work hand-in-hand with Project and Package Parameters.
  • Undo: Finally, the ability to “undo” in the designer
  • Greatly improved column mapper
  • Package format changes to make it easier to use in source control (i.e. log differences)
  • New Data Quality Services (DQS) cleansing transform.
  • Variables, now called parameters, can be given descriptions
  • When you use the “Run Package” command in SSMS, the package runs on the server, as opposed to the way it is now which runs the package locally.
  • Version control: each version of a package is tracked and you can rollback to a previous version if needed
  • New reports that provide an overview of the package tasks and parameters, including execution results and performance statistics. Essentially it replaces the SQL Server agent log file summary into a readable format. You can also write your own reports
  • Data tap: At any path, capture all the data coming through and output it to a text file to review later. This is done without having to modify the package
  • No need to capture row counts as there is now a way to query that result on any path in the package (SSIS automatically stores row counts in a table, among other data)
  • Offline Connection Managers. Gives you the ability to set the connection managers to work offline.

 

SSAS

  • It embraces the relational (also called tabular) data model and brings it together with the multidimensional model (OLAP) under a single unified BI platform, called the Business Intelligence Semantic Model
  • The BI Semantic Model is one model for all end-user experiences – reporting, analytics, scorecards, dashboards, and custom applications. All client tools in the Microsoft BI stack – Excel, PowerPivot, SharePoint Insights and Reporting Services (including Crescent), operate on this model
  • The model can be shared among all the tools, helping migration from one tool to another. For example, a PowerPivot workbook built by a business user can be used as a starting point by a BI professional building an Analysis Services application
  • For existing Analysis Services cubes (also known as Unified Dimensional Model or UDM) – when you upgrade your Analysis Services project or server to SQL Server “Denali”, every cube automatically becomes a BI Semantic Model. The BI Semantic Model is an evolution of the UDM from a pure multidimensional model into a hybrid model that offers all of the capabilities of the UDM and more. As such, the term Unified Dimensional Model is no longer used with SQL Server “Denali” and is subsumed by the BI Semantic Model
  • BISM will allow SSAS to reach a much broader user base because the tabular data model is much easier to understand and build than the multidimensional model. Essentially, the tabular data model is equivalent to the data source view that is created when using the multidimensional model. But you don’t have to build a cube structure on top of it. So the ease of use with the tabular data model reduces the barrier to entry in using Analysis Services

Other Enhancements

Denali Enhancement–Duplicate Key Error Message


With Microsoft SQL Server Code-Named “Denali” CTP3 there is a nice improvement to the duplicate key error message. Previously, trying to insert a duplicate value into the primary key column triggered an error that just said there was a duplicate. In Denali, the message also tells you where the duplicate exists by including the offending key value (Row No/ID).

Before Denali (up to SQL Server 2008 R2), the error message was as below:

Msg 2627, Level 14, State 1, Line 1
Violation of PRIMARY KEY constraint ‘PKtestErrorMessage’. Cannot insert duplicate key in object ‘test.testErrorMessage’.

Now on Denali CTP3, we get a little bit more:

Msg 2627, Level 14, State 1, Line 1
Violation of PRIMARY KEY constraint ‘PKtestErrorMessage’. Cannot insert duplicate key in object ‘test.testErrorMessage’. The duplicate key value is (106).
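A minimal repro of the message above, using the schema, table and constraint names taken from the error text:

CREATE SCHEMA test;
GO
CREATE TABLE test.testErrorMessage
(
    ID int NOT NULL CONSTRAINT PKtestErrorMessage PRIMARY KEY
);
INSERT INTO test.testErrorMessage (ID) VALUES (106);
INSERT INTO test.testErrorMessage (ID) VALUES (106);   -- raises Msg 2627 with the key value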

Online non-NULL with values column add in SQL Server Denali


Prior to SQL Server Denali, when you add a new non-nullable column with a default value to an existing table, a size-of-data operation occurs: every row in the table is updated to add the default value of the new column. For small tables this is insignificant, but for large tables this can be so problematic as to completely prohibit the operation. Starting with SQL Server Denali the operation is, in most cases, instantaneous: only the table metadata is changed, and no rows are updated.

Let’s say we have a table with 30 million rows and we need to add a new not null column with the default value as “common”.

 

In 2008 R2 and Before:

A new column is added to the table, and all 30 million rows are updated with the value “common”. This really takes time, so it can only be done in a maintenance window or during business off-hours.

In Denali:

The default value for all existing rows is recorded logically: only the table metadata is updated and no rows are physically touched, which saves both time and space. The ‘magic’ is that the table metadata now contains a column with a ‘default’ value for existing rows. With this in place, the ALTER TABLE no longer has to update every row in the table, and the operation is fast and metadata-only, no matter how many rows the table has. This new behavior occurs automatically; no special syntax or setting is required, the engine simply does the right thing. There is no penalty from having a missing value in a row: the ‘missing’ value can be queried, updated, and indexed exactly as if the update during ALTER TABLE had really occurred, and there is no measurable performance penalty from having a default value.
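A sketch of the scenario, with a hypothetical table name; in Denali the statement below is, in most cases, a metadata-only change, while in SQL Server 2008 R2 and earlier it updates every existing row:

ALTER TABLE dbo.BigTable
ADD Category varchar(20) NOT NULL
    CONSTRAINT DF_BigTable_Category DEFAULT ('common');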

A glance at SQL Server Denali CTP3 – DATEFROMPARTS


There is a new function in SQL Server Denali named DATEFROMPARTS. What it does is calculate a date from a number of user-supplied parameters such as year, month and day.

Previously you had to use a formula like this

DATEADD(MONTH, 12 * @Year + @Month - 22801, @Day - 1)

to calculate the correct date value from the parameters. With the new DATEFROMPARTS, you simply write

DATEFROMPARTS(@Year, @Month, @Day)

and you get the same result, only slower by 22 percent. So why should you use the new function, if it’s slower?
There are two good arguments for this

1) It is easier to remember
2) It has a built-in validator so that you cannot “spill” over the current month.

For the old way of doing this, using @Year = 2009, @Month = 2 and @Day = 29, you would end up with a date of 2009-02-28, whereas DATEFROMPARTS will give you an error message.

 

Looking Towards – SQL Server 2012 RTM


Truncate Table with Foreign Key constraint


This post helps you automate the process of truncating tables that have foreign key constraints. We were recently working for a client on a data migration project that required truncating tables, but we ran into issues because all of those tables have foreign keys. The requirement was:

  • Everytime we’ll send you a list of tables to be truncated
  • First collect all constraint details from those tables
  • Drop all foreign keys
  • Truncate tables
  • Recreate all constraints
  • In case of any failure, the script should not fail or throw any error message; instead it should continue with the next object, and the failed objects/constraints should be printed

We built the below stored procedure to fulfill the client requirement and it worked perfectly as expected.

Stored Procedure Creation to Truncate Table with Foreign Key constraint:


IF EXISTS(SELECT 1 FROM SYS.SYSOBJECTS WHERE NAME='USP_Truncate_Tabs' AND TYPE='P')
BEGIN
	DROP PROC USP_Truncate_Tabs;
END

GO

CREATE PROC USP_Truncate_Tabs(@String VARCHAR(7500))  
AS 
BEGIN 
SET NOCOUNT ON 
 
/*** Verify table existence; if not available, create a new table in the master database ***/
IF NOT EXISTS(SELECT 1 FROM master.sys.sysobjects WHERE Name='Constraints' AND TYPE='U')
BEGIN
	CREATE TABLE [master].[dbo].[Constraints]
			(ID INT IDENTITY,
			DropStmt VARCHAR(1000),
			Createstmt VARCHAR(1000),
			DropStatus BIT DEFAULT(1),
			CreateStatus BIT DEFAULT(1),
			TruncateStatus BIT DEFAULT(1),
			LastUsed DATETIME DEFAULT(GETDATE()));
END

TRUNCATE TABLE [master].[dbo].[Constraints]

/*** Get the list of Tables from the input String ***/
DECLARE @Tab TABLE(ID INT IDENTITY,TabName VARCHAR(200)) 
DECLARE @SQL1 VARCHAR(8000) 
SET @SQL1='SELECT NAME FROM sysobjects  WHERE TYPE=''U'' AND name in('+(@String)+')' 
INSERT INTO @Tab EXEC(@SQL1) 
 

/*** Insert all Drop and Create statements into the table Constraints ***/
INSERT INTO [master].[dbo].[Constraints] (DropStmt,CreateStmt) 
SELECT 
  'ALTER TABLE [' + ForeignKeys.ForeignTableSchema +  
      '].[' + ForeignKeys.ForeignTableName +  
      '] DROP CONSTRAINT [' + ForeignKeys.ForeignKeyName + ']; ' 
,'ALTER TABLE ['+ ForeignKeys.ForeignTableSchema +  
      '].[' + ForeignKeys.ForeignTableName +  
      '] WITH CHECK ADD CONSTRAINT [' +  ForeignKeys.ForeignKeyName +  
      '] FOREIGN KEY([' + ForeignKeys.ForeignTableColumn +  
      ']) REFERENCES [' + schema_name(sys.objects.schema_id) + '].[' + 
  sys.objects.[name] + ']([' + 
  sys.columns.[name] + ']);  ' 
 from sys.objects 
  inner join sys.columns 
    on (sys.columns.[object_id] = sys.objects.[object_id]) 
  inner join ( 
    select sys.foreign_keys.[name] as ForeignKeyName 
     ,schema_name(sys.objects.schema_id) as ForeignTableSchema 
     ,sys.objects.[name] as ForeignTableName 
     ,sys.columns.[name]  as ForeignTableColumn 
     ,sys.foreign_keys.referenced_object_id as referenced_object_id 
     ,sys.foreign_key_columns.referenced_column_id as referenced_column_id 
     from sys.foreign_keys 
      inner join sys.foreign_key_columns 
        on (sys.foreign_key_columns.constraint_object_id 
          = sys.foreign_keys.[object_id]) 
      inner join sys.objects 
        on (sys.objects.[object_id] 
          = sys.foreign_keys.parent_object_id) 
        inner join sys.columns 
          on (sys.columns.[object_id] 
            = sys.objects.[object_id]) 
           and (sys.columns.column_id 
            = sys.foreign_key_columns.parent_column_id) 
    ) ForeignKeys 
    on (ForeignKeys.referenced_object_id = sys.objects.[object_id]) 
     and (ForeignKeys.referenced_column_id = sys.columns.column_id) 
 where (sys.objects.[type] = 'U') 
  and (sys.objects.[name] not in ('sysdiagrams')) 
 

/*** Drop all Constraints in the database and update the Constraints table for failed statements ***/ 
/*** From Constraints Table Status 1 - Success and Status 0 - Failed ***/
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@   Drop Constraints @@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'

DECLARE @Sql VARCHAR(1000) 
DECLARE @i INT,@Max INT 
SELECT @Max=COUNT(1)FROM [master].[dbo].[Constraints]  
SET @i=1 

WHILE(@i<=@Max) 
BEGIN 
	SELECT @Sql=DropStmt FROM [master].[dbo].[Constraints] WHERE ID=@i 
	BEGIN TRY
		EXEC(@Sql) 
	END TRY
	BEGIN CATCH
		UPDATE [master].[dbo].[Constraints] SET DropStatus=0 WHERE ID=@i
	END CATCH
	SET @Sql='' 
	SET @i=@i+1 
END 

/*** Check for the errors in dropping the constraints ***/

IF NOT EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE DropStatus=0)
BEGIN
	PRINT'All foreign keys are Dropped successfully' 
END
ELSE
BEGIN
	PRINT'A result set is returned with the Failed Statements'
	SELECT DropStmt AS'FailedtoDrop'FROM [master].[dbo].[Constraints] WHERE DropStatus=0
END


/*** Truncate the tables specified in the input string ******/
/*** It directly echos the failed statements on results window ***/
PRINT' ' 
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@ TRUNCATE Tables @@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT' ' 

DECLARE @TabName VARCHAR(200) 
SELECT @Max=COUNT(1)FROM @Tab 
SET @i=1 
WHILE(@i<=@Max) 
BEGIN 
	SELECT @TabName=TabName FROM @Tab WHERE ID=@i 
	SELECT @Sql='TRUNCATE TABLE '+@TabName 
	BEGIN TRY
		EXEC(@sql)
	END TRY
	BEGIN CATCH
		UPDATE [master].[dbo].[Constraints] SET TruncateStatus=0 WHERE ID=@i
	END CATCH
	SET @Sql='' 
	SET @i=@i+1 
END 

/*** Check for the errors in Truncating the Tables ***/

IF NOT EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE TruncateStatus=0)
BEGIN
	PRINT'All specified Tables are Truncated successfully' 
END
ELSE
BEGIN
	PRINT'A result set is returned with the Failed Statements'
	SELECT CreateStmt AS'FailedtoTruncate'FROM [master].[dbo].[Constraints] WHERE TruncateStatus=0
END

/*** Recreate all Constraints in the database and update the Constraints table for failed statements ***/ 
/*** From Constraints Table Status 1 - Success and Status 0 - Failed ***/

PRINT' ' 
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT '@@@@@@@@@@@@@@   Recreate Constraints   @@@@@@@@@@@@@@@@' 
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT' '

SELECT @Max=COUNT(1)FROM [master].[dbo].[Constraints]  
SET @i=1 
WHILE(@i<=@Max) 
	BEGIN 
		SELECT @Sql=CreateStmt FROM [master].[dbo].[Constraints] WHERE ID=@i 
	BEGIN TRY
		EXEC(@Sql) 
	END TRY
	BEGIN CATCH
		UPDATE [master].[dbo].[Constraints] SET CreateStatus=0 WHERE ID=@i
	END CATCH
	SET @Sql='' 
	SET @i=@i+1 
END 

/*** Check for the errors in recreating the constraints ***/
IF NOT EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE CreateStatus=0)
	PRINT'All foreign keys are recreated successfully'
ELSE
	PRINT'Foreign keys are recreated with errors look into Summary'

/***** Summary   ******/
PRINT' '
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@  SUMMARY  @@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
PRINT' '

IF NOT EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE CreateStatus=0)
BEGIN
	PRINT'All foreign keys are recreated successfully'
END
ELSE
BEGIN
	PRINT'A result set is returned with the Failed Statements'
	SELECT CreateStmt AS'FailedtoRecreate'FROM [master].[dbo].[Constraints] WHERE CreateStatus=0
END

/**** Second Attempt to Recreating the failed constraints if any ****/

IF EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE CreateStatus=0)
BEGIN
	PRINT' '
	PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
	PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
	PRINT'@@@@@@ Final Atempt to Recreate Constraints   @@@@@@@@@@'
	PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
	PRINT'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
	PRINT' '

	DECLARE @Recreate Table(ID INT IDENTITY,ConID INT,CreateStmt VARCHAR(1000))
	INSERT INTO @Recreate (ConID,CreateStmt)SELECT ID,CreateStmt FROM [master].[dbo].[Constraints] WHERE CreateStatus=0
	SELECT @Max=COUNT(1)FROM @Recreate
	SET @i=1 
	SET @sql=''
	WHILE(@i<=@Max)
		BEGIN 
		SELECT @Sql=CreateStmt FROM @Recreate WHERE ID=@i   
		BEGIN TRY
			EXEC(@Sql)
			UPDATE [master].[dbo].[Constraints] SET CreateStatus=1 WHERE ID IN(
			SELECT ConID FROM @Recreate WHERE ID=@i)
		END	TRY
		BEGIN	CATCH
			PRINT'Final Attempt also Failed to Execute '+@sql
		END CATCH
		SET @Sql='' 
		SET @i=@i+1 
		END
	PRINT' '
	IF NOT EXISTS(SELECT 1 FROM [master].[dbo].[Constraints] WHERE CreateStatus=0)
	PRINT'All foreign keys are recreated successfully in Final Attempt'
	ELSE
	PRINT'Final attempt also failed to recreate all foreign keys; there might be a problem with domain integrity'
	PRINT' '

END

END 

 

Example:

EXEC USP_Truncate_Tabs '''DEPT'',''EMP_ADD'',''EMP'',''Item'',''Sales'''

 

The output is a series of banner sections (Drop Constraints, Truncate Tables, Recreate Constraints, Summary), each with a status message, plus a result set listing any failed statements.
