SQL Server Performance Tuning Interview Questions
Performance Tuning – SQL Server Part – 1
Q. What are the bottlenecks that affect the performance of a Database / Application?
Ans:
The top performance bottlenecks for OLTP applications are outlined as:
- Database Design \ Database Code
- Application Design \ Application Code
- CPU bottleneck
- Memory bottleneck
- IO bottleneck
- Blocking bottleneck
- Network bottleneck
- Server Hardware
Database Design \ Database Code
- Too many indexes on frequently updated tables (inserts, updates, and deletes):
Tables with many indexes incur extra index maintenance overhead. Generally, OLTP database designs should keep the number of indexes to a functional minimum, due to the high volume of similar transactions combined with the cost of index maintenance
- Statistics may not be updated or missing statistics
- Excess use of cursors and temporary tables
- Excessive normalization
- Avoid conversion/system/user-defined functions in the WHERE clause; they can prevent the optimizer from using indexes
- Unused indexes incur the cost of index maintenance for inserts, updates, and deletes without benefiting any users:
Unused indexes should be eliminated. Any index that has been used (by select, update or delete operations) will appear in sys.dm_db_index_usage_stats. Thus, any defined index not included in this DMV has not been used since the last restart of SQL Server; a query to spot them is sketched below
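A minimal sketch for listing such indexes; it assumes the instance has been up long enough for the usage stats to be meaningful (a brand-new index would also show up here):

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 1        -- skip heaps (0) and clustered indexes (1)
  AND us.index_id IS NULL   -- no recorded usage since the last restart
ORDER BY table_name, index_name;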
- Choose the Appropriate Data Types
Always choose the smallest appropriate data type. Avoid NCHAR/NVARCHAR unless you need to store Unicode data.
- Use Triggers Cautiously
Keep the code in your triggers to the very minimum to reduce overhead. The more code that runs in the trigger, the slower each INSERT, UPDATE, and DELETE that fires it will be
- Don’t Access More Data Than You Need
Don’t return more columns or rows of data to the client than absolutely necessary. This just increases disk I/O on the server
- Avoid Using Cursors
- Wherever possible, try alternative solutions such as temp tables, derived tables, table variables, or recursive CTEs
- Always select the cursor with the least amount of overhead. The most efficient cursor you can choose is the fast forward-only cursor.
- When you are done using a cursor, don't just CLOSE it; DEALLOCATE it as well (see the sketch below)
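A minimal sketch of the recommended pattern; the loop body is a placeholder:

DECLARE @name sysname;
DECLARE c CURSOR FAST_FORWARD    -- forward-only, read-only: the least overhead
    FOR SELECT name FROM sys.tables;
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;                 -- placeholder for per-row work
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;        -- releases the current result set
DEALLOCATE c;   -- releases the cursor's internal data structures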
- Use Joins Appropriately
- If you have two or more tables that are frequently joined together, then the columns used for the joins should have an appropriate index. If the columns used for the joins are not naturally compact, then consider adding compact surrogate keys to the tables in order to reduce the size of the keys, thus decreasing read I/O during the join process and increasing overall performance. You will learn more about indexing in the next section of this article.
- For best performance, the columns used in joins should be of the same data types. And if possible, they should be numeric data types rather than character types.
- Avoid joining tables based on columns with few unique values. If columns used for joining aren’t mostly unique, then the SQL Server optimizer will perform a table scan for the join, even if an index exists on the columns. For best performance, joins should be done on columns that have unique indexes.
- If you have to regularly join four or more tables to get the recordset you need, consider denormalizing the tables so that the number of joined tables is reduced. Often, by adding one or two columns from one table to another, joins can be reduced.
- Generally, frequent operations requiring 5 or more table joins should be avoided by redesigning the database
- Encapsulate Your Code in Stored Procedures
- Try to put all your T-SQL code in stored procedures, which reduces network traffic (the application just calls the proc) and reduces compilation overhead by reusing the cached execution plan
- Always use the option “SET NOCOUNT ON”
- Design the procs to avoid deadlocks
- Collect all inputs before the transaction begins
- Keep transactions short, within a batch
- Use the correct isolation levels
- Consider the NOLOCK hint where dirty reads are acceptable (see the sketch below)
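A minimal stored-procedure skeleton pulling these points together; the procedure, table, and parameter names are hypothetical:

CREATE PROCEDURE dbo.usp_TransferAmount
    @FromAccount INT,
    @ToAccount   INT,
    @Amount      MONEY
AS
BEGIN
    SET NOCOUNT ON;   -- suppress "rows affected" messages to cut network traffic

    -- All inputs are collected before the transaction begins,
    -- and the transaction itself is kept as short as possible.
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - @Amount
        WHERE AccountID = @FromAccount;

        UPDATE dbo.Accounts SET Balance = Balance + @Amount
        WHERE AccountID = @ToAccount;
    COMMIT TRANSACTION;

    -- A reporting query that can tolerate dirty reads may use NOLOCK
    SELECT AccountID, Balance
    FROM dbo.Accounts WITH (NOLOCK)
    WHERE AccountID IN (@FromAccount, @ToAccount);
END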
Application Design / Application code:
Application Design issues:
- Perform as many data-centered tasks as possible on SQL Server in the form of stored procedures. Avoid manipulating data at the presentation and business services tiers.
- Don’t maintain state (don’t store data from the database) in the business services tier. Maintain state in the database as much as possible
- Don’t create complex or deep object hierarchies. The creation and use of complex classes or a large number of objects used to model complex business rules can be resource intensive and reduce the performance and scalability of your application. This is because the memory allocation when creating and freeing these objects is costly.
- Consider designing the application to take advantage of database connection pooling and object pooling using Microsoft Transaction Server (MTS). MTS allows both database connections and objects to be pooled, greatly increasing the overall performance and scalability of your application.
- If your application runs queries against SQL Server that are by nature long-running, design the application to run those queries asynchronously. This way, one query does not have to wait for the next before it can run. One way to build this functionality into your n-tier application is to use Microsoft Message Queue Server (MSMQ).
Application Code:
- Use OLE DB to Access SQL Server:
- You can access SQL Server data using either ODBC or OLE DB. For best performance, always select OLE DB. OLE DB is used natively by SQL Server, and is the most effective way to access any SQL Server data.
- Use DSN-less in Connection String:
- While creating an ADO connection to SQL Server, you can either use a DSN in the connection string, or you can use a DSN-less connection. For optimal performance, use DSN-less connections. Using them prevents the need for the OLE DB driver to look up connection string information in the registry of the client the application code is running on, saving some overhead.
- Encapsulate your DML (Data Manipulation Language) in Stored Procedures
- ADO allows you three different ways to SELECT, INSERT, UPDATE, or DELETE data in a SQL Server database: you can use ADO's methods, dynamic SQL, or stored procedures. For better performance, prefer stored procedures
- Encapsulate Your ADO Code in COM Components
- Put the ADO code that accesses SQL Server data into COM components. This gives you all the standard benefits of COM components, such as object pooling using MTS. And for ASP-based applications, it provides greater speed because the ADO code in COM objects is already compiled, unlike ADO code found in ASP pages. How you implement your data manipulation code in COM components should be considered when the application is first designed.
- For optimum performance, COM objects should be compiled as in-process DLLs (which is required if they are to run under MTS). You should always employ early binding when referencing COM objects, and create them explicitly, not implicitly.
CPU bottlenecks:
- Signal waits > 25% of total waits.
(See sys.dm_os_wait_stats for Signal waits and Total waits. Signal waits measure the time spent in the runnable queue waiting for CPU. High signal waits indicate a CPU bottleneck.)
- Plan re-use < 90%.
(A query plan is used to execute a query. Plan re-use is desirable for OLTP workloads because re-creating the same plan (for similar or identical transactions) is a waste of CPU resources. Compare SQL Server SQL Statistics: batch requests/sec to SQL compilations/sec. Compute plan re-use as follows: Plan re-use = (Batch requests – SQL compilations) / Batch requests. Special exception to the plan re-use rule: zero-cost plans are not cached (not re-used) in SQL 2005 SP2. Applications that use zero-cost plans will show lower plan re-use, but this is not a performance issue. Both checks are sketched below.)
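A minimal sketch of both checks against the DMVs; note that sys.dm_os_wait_stats and sys.dm_os_performance_counters are cumulative since the last instance restart, so these are lifetime averages rather than point-in-time rates:

-- Signal waits as a percentage of total waits (> 25% suggests CPU pressure)
SELECT 100.0 * SUM(signal_wait_time_ms) / NULLIF(SUM(wait_time_ms), 0)
       AS signal_wait_pct
FROM sys.dm_os_wait_stats;

-- Plan re-use = (Batch requests - SQL compilations) / Batch requests
SELECT 100.0 * (b.cntr_value - c.cntr_value) / NULLIF(b.cntr_value, 0)
       AS plan_reuse_pct
FROM sys.dm_os_performance_counters AS b
JOIN sys.dm_os_performance_counters AS c
  ON c.object_name = b.object_name
WHERE b.counter_name = 'Batch Requests/sec'
  AND c.counter_name = 'SQL Compilations/sec'
  AND b.object_name LIKE '%SQL Statistics%';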
Memory bottleneck:
- Consistently low average page life expectancy. (MSSQL$Instance: Buffer Manager\Page Life Expectancy:)
(See the Page Life Expectancy counter in the Perfmon object SQL Server Buffer Manager; this represents the average number of seconds a page stays in cache. For OLTP, an average page life expectancy of 300 means 5 minutes. Anything less could indicate memory pressure, missing indexes, or a cache flush)
- Consistently low SQL Cache hit ratio. (MSSQL$Instance: Plan Cache\Cache Hit Ratio:)
(OLTP applications (e.g. small transactions) should have a high cache hit ratio. Since OLTP transactions are small, there should not be (1) big drops in SQL cache hit rates or (2) consistently low cache hit rates < 90%. Drops or low cache hit rates may indicate memory pressure or missing indexes. Both counters are sketched below.)
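Both counters can also be read from T-SQL via sys.dm_os_performance_counters; a minimal sketch (the object_name prefix varies for named instances):

-- Page Life Expectancy, in seconds
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Plan cache hit ratio (ratio counters come as a value/base pair)
SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
  ON b.object_name = r.object_name
 AND b.instance_name = r.instance_name
WHERE r.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'
  AND r.object_name LIKE '%Plan Cache%'
  AND r.instance_name = '_Total';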
IO bottleneck:
- High average disk seconds per read.
(When the IO subsystem is queued, disk seconds per read increases. See Perfmon Logical or Physical disk (disk seconds/read counter). Normally it takes 4-8ms to complete a read when there is no IO pressure. When the IO subsystem is under pressure due to high IO requests, the average time to complete a read increases, showing the effect of disk queues. Periodic higher values for disk seconds/read may be acceptable for many applications. For high performance OLTP applications, sophisticated SAN subsystems provide greater IO scalability and resiliency in handling spikes of IO activity. Sustained high values for disk seconds/read (>15ms) indicate a disk bottleneck. A per-file latency check is sketched after this list.)
- High average disk seconds per write.
(See Perfmon Logical or Physical disk. The throughput for high volume OLTP applications is dependent on fast sequential transaction log writes. A transaction log write can be as fast as 1ms (or less) in high performance SAN environments. For many applications, a periodic spike in average disk seconds per write is acceptable considering the high cost of sophisticated SAN subsystems. However, sustained high values for average disk seconds/write are a reliable indicator of a disk bottleneck.)
- Big IOs such as table and range scans due to missing indexes.
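A minimal sketch of per-file read/write latency using sys.dm_io_virtual_file_stats (values are cumulative since the database came online):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;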
Blocking bottleneck:
- High average row lock or latch waits.
(The average row lock or latch waits are computed by dividing lock and latch wait milliseconds (ms) by the number of lock and latch waits. The average lock wait ms computed from sys.dm_db_index_operational_stats represents the average time for each blocking incident; a sketch of this calculation appears after this list.)
- Top wait statistics
- High number of deadlocks.
(See Profiler “Graphical Deadlock” under Locks event to identify the statements involved in the deadlock.)
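A minimal sketch of the average row-lock wait calculation for the current database:

SELECT OBJECT_NAME(os.object_id) AS table_name,
       os.index_id,
       os.row_lock_wait_count,
       os.row_lock_wait_in_ms / NULLIF(os.row_lock_wait_count, 0)
           AS avg_row_lock_wait_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os
WHERE os.row_lock_wait_count > 0
ORDER BY avg_row_lock_wait_ms DESC;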
Network bottleneck:
- High network latency coupled with an application that incurs many round trips to the database.
- Network bandwidth is used up.
(See the packets/sec and current bandwidth counters in the network interface object of Performance Monitor. For TCP/IP frames, actual bandwidth is computed as packets/sec * 1500 * 8 / 1,000,000 Mbps)
Server Hardware:
Most slow applications are slow because of poor up-front design, not because of slow hardware. Since the application's design can't be changed once it has been deployed to production, about the only thing you can do to boost performance is to throw hardware at it.
- CPU: Always purchase a server with the ability to expand its number of CPUs. This usually means going for larger servers with four or more CPUs. Always leave room for growth.
- Memory: Try to get enough RAM to hold the largest table you expect to have, and if you can afford it, get all the RAM your server can handle, which is often 2GB or more.
- I/O Subsystem: At the very minimum, purchase hardware-based RAID for your databases. As a rule of thumb, purchase more, smaller drives rather than fewer, larger drives for your array. The more disks in an array, the faster I/O will be.
- Network Connection: At the server, have at least one 100Mbps network card, and it should be connected to a switch. Ideally, you should have two network cards in the server, connected to a switch in full-duplex mode.
For best performance on a server, SQL Server should be the only application running on the server, other than management utilities. Don’t try to save a few bucks by putting your IIS server on the same server as SQL Server.
Q. What is the process of tuning the Performance?
Ans:
- Identification – Use native tools like Profiler, Query Tuning Advisor, Query Execution Plans, Performance Monitor, system stored procedures, dynamic management views, custom stored procedures or third party tools
- Analysis – Analyze the data to determine the core problems
- Providing Solution –
- Creating new indexes on appropriate columns
- Altering complex queries to make them use the existing indexes
- Updating statistics for tables and views
- Rebuilding and reorganizing indexes
- Resolving blocking problems
- Removing deadlocks
- Testing – Test the various options to ensure they perform better and do not cause worse performance in other portions of the application
- Knowledge sharing – Share your experience with the team to ensure they understand the problem and solution, so the issue does not occur again
Q. How to choose the correct (Clustered/ Non- Clustered) index on a column?
Ans:
Selecting Clustered Index:
- Clustered indexes are ideal for queries that select by a range of values or where you need sorted results. Examples of this include when you are using BETWEEN, <, >, GROUP BY, ORDER BY, and aggregates such as MAX, MIN, and COUNT in your queries.
- Clustered indexes are good for queries that look up a record with a unique value (such as an employee number) and when you need to retrieve most or all of the data in the record.
- Clustered indexes are good for queries that access columns with a limited number of distinct values, such as columns that hold country or state data. But if column data has little distinctiveness, such as columns with yes or no, or male or female values, then these columns should not be indexed at all.
- Avoid putting a clustered index on columns that increment, such as an identity, date, or similarly incrementing columns, if your table is subject to a high level of INSERTS.
Selecting Non – Clustered Index:
- Non-clustered indexes are best for queries that return few rows (including just one row) and where the index has good selectivity (above 95%).
- If a column in a table is not at least 95% unique, then most likely the SQL Server Query Optimizer will not use a non-clustered index based on that column. For example, a column with “yes” or “no” as the data won’t be at least 95% unique.
- Keep the “width” of your indexes as narrow as possible, especially when creating composite (multi-column) indexes. This reduces the size of the index and reduces the number of reads required to read the index, boosting performance.
- If possible, try to create indexes on columns that have integer values instead of characters. Integer values have less overhead than character values.
- If you know that your application will be performing the same query over and over on the same table, consider creating a covering index on the table. A covering index includes all of the columns referenced in the query.
- An index is only useful to a query if the WHERE clause of the query matches the column(s) that are leftmost in the index. So if you create a composite index such as (City, State), then a query such as WHERE City = 'Houston' will use the index, but the query WHERE State = 'TX' will not (a sketch follows).
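A minimal sketch of the leftmost-column rule, using a hypothetical dbo.Customers table:

CREATE NONCLUSTERED INDEX IX_Customers_City_State
    ON dbo.Customers (City, State);

-- Can seek on the index (City is the leftmost key column):
SELECT CustomerID FROM dbo.Customers WHERE City = 'Houston';

-- Cannot seek on the index (State is not the leftmost key column):
SELECT CustomerID FROM dbo.Customers WHERE State = 'TX';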
Q. How to read the graphical execution plan?
Ans:
The plan should be read from right to left
- Check the Graphical execution plan of a stored procedure / Query
- Table Scan – Index is missing
- Index Scan – Proper indexes are not being used
- BookMark Lookup – Limit the number of columns in the select list
- Filter – Remove any functions from the WHERE clause; may require additional indexes
- Sort – Does the data really need to be sorted? Can an index be used to avoid sorting? Can sorting be done at the client more efficiently?
- Data Flow Arrow – High density: sometimes the final result contains only a few rows, but a thick arrow indicates that the query/proc is processing a huge number of rows
- Cost – Easily find out which table/operation is taking the most time
- From the execution plan we can find the bottleneck and propose a possible solution to reduce the latency (T-SQL options for capturing plans are sketched below)
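Plans can also be captured from T-SQL instead of the SSMS toolbar; a minimal sketch (the sample SELECT is just a placeholder):

-- Estimated plan: the statement is compiled but NOT executed
SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.objects;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: the statement IS executed and runtime row counts are included
SET STATISTICS XML ON;
GO
SELECT name FROM sys.objects;
GO
SET STATISTICS XML OFF;
GO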
Q. Why might the Actual and Estimated Execution Plans differ?
Ans:
- When Statistics are Stale:
The main cause of a difference between the plans is differences between the statistics and the actual data. This generally occurs over time as data is added and deleted.
- When the Estimated plan is invalid:
When the batch contains temporary tables or T-SQL statements that refer to objects that do not yet exist in the database but will be created once the batch runs (e.g. a CREATE TABLE statement inside the batch).
Q. What are the permissions required to view execution plans?
Ans:
The user must be a member of the sysadmin, db_owner, or db_creator role, or be granted the SHOWPLAN permission:
GRANT SHOWPLAN TO [username]
Q. What are the tools available for performance tuning/monitoring?
Ans:
- Performance Studio: Acts as a central data repository; collects selected SQL Server performance data and displays performance reports
- Activity Monitor: Graphically displays Processes, Resource Waits, Data File I/O, and Recent Expensive Queries.
- Database Engine Tuning Advisor (DTA): Recommends indexes
- Profiler: Can run traces and find the expensive/long-running queries/transactions
- Execution Plans: There are three types Graphical, Text and XML.
- DMVs: Dynamic management views show the current state of the SQL Server instance
- PerfMon: Windows native tool to view/monitor the performance of both SQL Server and the Windows server
- Third Party: Redgate products
Q. How to identify the CPU bottlenecks and how to resolve it?
Ans:
Identifying CPU Bottlenecks:
First, we have to confirm that SQL Server CPU utilization is high. Run the query below:
SELECT timestamp, CONVERT(XML, record) AS XmlRecord
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE '%<SystemHealth>%'
ORDER BY timestamp DESC
One record is stored every minute, up to a maximum of 256 records. Clicking on any of the XML links opens the XML editor and shows an entry similar to the one below.
<Record id="434" type="RING_BUFFER_SCHEDULER_MONITOR" time="22398046">
  <SchedulerMonitorEvent>
    <SystemHealth>
      <ProcessUtilization>55</ProcessUtilization>
      <SystemIdle>35</SystemIdle>
      <UserModeTime>228180000</UserModeTime>
      <KernelModeTime>251812000</KernelModeTime>
      <PageFaults>64252</PageFaults>
      <WorkingSetDelta>21770240</WorkingSetDelta>
      <MemoryUtilization>100</MemoryUtilization>
    </SystemHealth>
  </SchedulerMonitorEvent>
</Record>
Information from above XML:
ProcessUtilization: Percentage of CPU utilized by SQL Server – 55%
SystemIdle: Percentage of Idle CPU – 35%
Other processes using CPU: 100- (55+35) = 10 %
Now find out the query/proc/process that is driving CPU utilization high:
SELECT TOP 20
    qst.sql_handle,
    qst.execution_count,
    qst.total_worker_time AS Total_CPU,
    total_CPU_inSeconds = qst.total_worker_time / 1000000,           -- converted from microseconds
    average_CPU_inSeconds = (qst.total_worker_time / 1000000) / qst.execution_count,
    qst.total_elapsed_time,
    total_elapsed_time_inSeconds = qst.total_elapsed_time / 1000000, -- converted from microseconds
    st.text AS 'Query',
    qp.query_plan
FROM sys.dm_exec_query_stats AS qst
CROSS APPLY sys.dm_exec_sql_text(qst.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qst.plan_handle) AS qp
ORDER BY qst.total_worker_time DESC
From the above script we can find the commands taking the most CPU time, along with their execution plans. By reviewing an execution plan you can see what additional indexes need to be added to the database, which will improve performance and decrease the CPU load.
By adding missing indexes or by using the proper indexes we can decrease the load on CPU.
Other options:
- Sp_monitor: Displays statistics, including CPU usage, I/O usage, and the amount of time idle since sp_monitor was last executed. We can get the information about the “CPU Time (Sec)”, “I/O Time (Sec)”, “Count of Input\Output Packets”, “No of logins attempted”, “Errors in reading/writing network packets” etc.
- @@CPU_BUSY / @@IO_BUSY: Returns the time that SQL Server has spent working since it was last started. The result is in CPU time increments, or "ticks," and is cumulative for all CPUs, so it may exceed the actual elapsed time. Multiply by @@TIMETICKS to convert to microseconds. It may not be an accurate value to rely on, though.
- PerfMon
- Profiler
Q. Can you tell me what the Wait Type "LAZY WRITER" is?
Ans:
The job of the lazy writer is to find dirty pages in the buffer pool and write them out to disk and drop those pages from cache.
Q. Can we find performance bottleneck from sysprocesses?
Ans:
Yes. We may not be able to confirm that it is the only bottleneck, but at least we can find a bottleneck. The lastwaittype column, together with waittime, plays a vital role in identifying the issue. It is a very interesting column because it can tell you what the offending query is waiting on to complete (a query sketch follows the list below).
NETWORK_IO: There is too much network traffic
CXPACKET: Your process is waiting on other parallel processes to complete
SOS_SCHEDULER_YIELD: CPU bound; the box may not have enough CPU
IO_COMPLETION: Disk issue; the server may not have enough disk space or may be running on a corrupted disk array
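A minimal sketch against sysprocesses; the spid > 50 filter is a rough heuristic for skipping system sessions:

SELECT spid, status, lastwaittype, waittime, cpu, physical_io
FROM master..sysprocesses
WHERE spid > 50
ORDER BY waittime DESC;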
Q. What Are SQL Server Waits?
Ans:
Instead of measuring activity of CPU, storage, or memory, why not ask what SQL Server has been waiting on when executing queries?
In general there are three categories of waits that could affect any given request:
- Resource waits are caused by a particular resource, perhaps a specific lock that is unavailable when the request is submitted.
- External waits occur when a SQL Server worker thread is waiting on an external process
- Queue waits normally apply to internal background tasks, such as ghost cleanup, which physically removes records that have been previously deleted. (A query to view the top waits is sketched below.)
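A minimal sketch of the top waits since the instance started; the excluded wait types are a small sample of the benign/idle waits you would normally filter out:

SELECT TOP 10 wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;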
Q. How could you know the statistics are outdated?
Ans:
If old statistics are your problem, you will likely experience this as a gradual slowdown of SQL Server over many days or weeks, or you may have just upgraded your platform (from 2000 to 2008) and forgotten to update the statistics. Out-of-date statistics cause inaccurate execution plans (a sketch for checking and refreshing statistics follows).
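A minimal sketch for checking and refreshing statistics; dbo.YourTable is a hypothetical name:

-- When were the statistics on a table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.YourTable');

-- Refresh statistics for one table:
UPDATE STATISTICS dbo.YourTable;

-- Or refresh for the whole database:
EXEC sp_updatestats;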
Q. What are the main parameters we need to check when you are dealing with memory performance?
Ans:
There are four significant properties of SQL Server:
Max server memory and Min server memory:
Use the two server memory options, min server memory and max server memory, to reconfigure the amount of memory (in megabytes) managed by the SQL Server Memory Manager for a SQL Server process used by an instance of SQL Server. By default, min server memory is 0 and max server memory is 2,147,483,647 MB (effectively unlimited). Never leave these two settings at their defaults. Depending on the memory available and the other applications running on the Windows server, change these two settings.
For example we have 24 GB available and the settings can be like this:
Min Memory: 1 GB
Max Memory: 16 GB
Remember, the total max memory of all instances should not exceed the actual physical memory available (a configuration sketch follows).
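A minimal configuration sketch; the values are examples only and must be sized for the actual server:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;   -- e.g. 1 GB
EXEC sp_configure 'max server memory (MB)', 16384;  -- e.g. 16 GB
RECONFIGURE;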
Priority boost: By default, the priority boost setting is 0, which causes SQL Server to run at a normal priority. If you set priority boost to 1, the SQL Server process runs at a high priority.
Lightweight pooling: Turn on this parameter when you want SQL Server to use fiber mode. Unless there is a real need and a suitable environment (large multi-processor servers), this option should not be used on production servers.
Note: When answering questions about threshold values and performance counter values, the answer really depends on many factors, e.g. the type of database (OLTP/OLAP), server and database configuration, SLAs with the business owners, etc. Many such parameters have to be considered when determining a threshold value.
One of our blog readers, Jagadesh, suggested how they calculate the threshold value for "Page Life Expectancy":
“It really depends on the amount of memory allocated to your instance of SQL Server. When working with my clients the value I start with is taking (max memory in GB) / 4 * 300. This would be the minimum value. On a server with 32GB allocated to SQL this value would be (32/4)*300 = 2400 seconds or 40 minutes.”