SQL DBA Interview Experience with a Top MNC in Bangalore

SQL DBA Interview Experience with a Top MNC in Bangalore is a recent set of SQL DBA interview questions shared by one of our blog followers. Thank you so much, Mr. Vijay Tripaati, for sharing your experience. This was for a SQL DBA role with 5 to 7 years of experience.

First Interview – Telephonic

Interviewer: Can you tell me about yourself?

Vijay: I have been working as a DBA for XXXXX for the last 4 years. In total, I have 6 years of experience administering SQL Server database systems. I am experienced in creating DR solutions, automation, process improvements, T-SQL scripting, PowerShell scripting, performance tuning, and all other SQL DBA activities. Previously I worked with XXXXXX Company for the client XXXXXX. I am a graduate of XXXXX College.

Interviewer: What is your daily routine in your current role?

Vijay: I am currently with the DBA support team for the XXXXX client, which comes under L2 support. I work in shifts. The first thing in each shift is to check for scheduled RFCs (Change Requests). Then I work on the Service Requests assigned to my queue. Along with these, we handle Incident Requests as and when required, and we also check the automated notifications and respond to emails.

Interviewer: Do you have any SLA with your clients?

Vijay: Yes! We have a clear Service Level Agreement with our client. As per policy I can't reveal the details, but the SLA covers things like response times for a service request, a change request, an emergency change (which must be linked to an incident), an incident request, and even the response time when an email comes from a Director or higher level. There is a list of such conditions included in the SLA.

Interviewer: I have never seen email response time included in an SLA; this is the first time I am hearing of it. Any specific reason?

Vijay: Not sure! Our client operates in different business areas, but the applications we support generate the highest revenue. Maybe that's why they consider these applications critical.

Interviewer: How big is your current infrastructure environment?

Vijay: I am currently working in a large infrastructure environment. I can't reveal the exact resource counts, but these are rough figures:

We have between 400 and 600 virtual machines and between 800 and 1200 SQL Server instances.

The largest database is about 5 TB.

Interviewer: Do you need to interact with the clients in your current role?

Vijay: Sometimes, yes! It all depends on the type of problem we are dealing with. Most of the time I get involved in client meetings when they are facing issues such as performance slowdowns, problems we are unable to reproduce in pre-production, process improvements, approvals, and long-standing issues.

 

Interviewer: What is your strong technical skill set?

Vijay: SQL Server! That is where I have been working for the last 6+ years. Apart from that, I am experienced in PowerShell, performance tuning, and configuring log shipping and replication.

Interviewer: Would you be interested if you needed to jump into another technology?

Vijay: I am always open to learning new technologies, but I am really interested in learning technologies within my scope. I would be very happy if you asked me to learn and work on MSBI, Oracle, MySQL, or any other database-related technology.

Interviewer: Which SQL Server versions have you worked with?

Vijay: SQL Server 2008, 2012 and 2014.

Interviewer: Do you have any experience in cloud?

Vijay: Yes! We have a few instances hosted on AWS EC2. Unfortunately I didn't get the chance to migrate them to the cloud myself, but I have been supporting those servers.

Interviewer: Any questions for me?

Vijay: I have some insight into the open position, as one of my friends works here. I just wanted to know the next steps in the process.

Interviewer: Well, I am quite happy with this interaction. I will schedule a technical interview sometime next week.

Second Technical Interview (In-Person)

Interviewer: Can you brief me about your DBA experience?

Vijay: (Explained my work experience as a DBA.)

Interviewer: What is the difference between Primary Key and Foreign Key?

Vijay:

Primary Key: Uniquely identifies each row in a given table. It is required to enforce uniqueness.

Ex: The Department table has “DeptID” as its primary key; each department is uniquely identified by its “DeptID”.

Foreign Key: A foreign key refers to the primary key of another table. It is required to implement referential integrity.

Ex: In the Employee table, the “DeptID” column is defined as a foreign key so that we can add employees to departments. It is not possible to insert a DeptID into the Employee table if it does not exist in the Department table.
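
A minimal T-SQL sketch of this example (the table and column names are illustrative, not from the original interview):

CREATE TABLE dbo.Department
(
    DeptID   INT          NOT NULL CONSTRAINT PK_Department PRIMARY KEY,  -- uniquely identifies each department
    DeptName VARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Employee
(
    EmpID   INT          NOT NULL CONSTRAINT PK_Employee PRIMARY KEY,
    EmpName VARCHAR(100) NOT NULL,
    DeptID  INT          NOT NULL
        CONSTRAINT FK_Employee_Department REFERENCES dbo.Department (DeptID)  -- enforces referential integrity
);

-- This INSERT fails because DeptID 99 does not exist in dbo.Department:
-- INSERT INTO dbo.Employee (EmpID, EmpName, DeptID) VALUES (1, 'Asha', 99);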

Interviewer: What is a Trigger in SQL?

Vijay: A trigger is a SQL batch, a kind of stored procedure, that executes automatically in response to certain actions on a table or server. Triggers fire automatically in response to a user action, based on the defined conditions. There are four types of triggers: DDL, DML, CLR, and LOGON. As the names imply, DDL triggers are used to track DDL changes and DML triggers are used to track DML changes. DML triggers are further divided into two types: AFTER triggers and INSTEAD OF triggers. I can explain in more detail if you want me to.
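
As an illustration, here is a small AFTER (DML) trigger sketch, reusing the illustrative dbo.Employee table from the earlier example and an assumed audit table:

CREATE TABLE dbo.EmployeeAudit
(
    AuditID   INT IDENTITY(1,1) PRIMARY KEY,
    EmpID     INT,
    ChangedOn DATETIME2 DEFAULT SYSDATETIME()
);
GO

CREATE TRIGGER dbo.trg_Employee_AfterInsert
ON dbo.Employee
AFTER INSERT              -- fires automatically after each INSERT statement on dbo.Employee
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.EmployeeAudit (EmpID)
    SELECT EmpID FROM inserted;   -- the "inserted" pseudo-table holds the newly added rows
END;
GO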

Interviewer: Can we use transactions inside a Trigger?

Vijay: Yes! We can use transactions inside a trigger. COMMIT/ROLLBACK behaves much like it does in nested procedures: when you commit a transaction inside a trigger, it commits the nested transaction opened inside the trigger, but if ROLLBACK is issued inside a trigger, it rolls back all actions of the parent batch from which the trigger was fired.
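
A hedged sketch of the ROLLBACK behavior described above (the business rule is made up purely for illustration): when the trigger issues ROLLBACK, the INSERT that fired it and the surrounding transaction are undone.

CREATE TRIGGER dbo.trg_Employee_BlockWeekendInsert
ON dbo.Employee
AFTER INSERT
AS
BEGIN
    IF DATENAME(WEEKDAY, GETDATE()) IN ('Saturday', 'Sunday')
    BEGIN
        RAISERROR('Inserts are not allowed on weekends.', 16, 1);
        ROLLBACK TRANSACTION;   -- rolls back the INSERT and the transaction that fired the trigger
    END
END;
GO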

Interviewer: Which is the best object to hold data: a temporary table, a table variable, a CTE, a table, or a view?

Vijay: To determine which of these objects should hold the data, we need to answer the questions below:

What is the scope? Ex: all sessions, the current session, only the currently executing statement, permanent, etc.

What is the size of the dataset? Ex: tiny, small, medium, large.

What is the context? Ex: will it be used in dynamic queries, should it follow ACID properties, etc.

Based on these inputs we can determine which object to use.

Table: Data is persistent and available for all DDL and DML operations.

View: A view does not itself hold data unless it is an indexed view. We use views on top of tables to reduce query complexity and to provide security.

Temporary Table: Available as local or global. Can handle huge datasets for a temporary purpose and follows transactional (ACID) properties. Can be tricky to handle with dynamic SQL.

Table Variable: Good for handling tiny datasets in the current session for a temporary purpose. Does not obey ACID properties (it is not affected by a transaction rollback).

CTE: Good for handling recursive results; the resulting dataset is available only to the next immediate query.
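
A quick syntax comparison of the three temporary options (object and column names are illustrative, and dbo.Orders is an assumed source table):

-- Local temporary table: visible to the current session, survives across batches
CREATE TABLE #Orders (OrderID INT, Amount DECIMAL(10, 2));

-- Table variable: scoped to the current batch/procedure, best for tiny datasets
DECLARE @Orders TABLE (OrderID INT, Amount DECIMAL(10, 2));

-- CTE: the result is available only to the statement that immediately follows it
WITH HighValueOrders AS
(
    SELECT OrderID, Amount
    FROM dbo.Orders
    WHERE Amount > 1000
)
SELECT * FROM HighValueOrders;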

Interviewer: What is transaction safety in database mirroring?

Vijay: Database mirroring introduces the concept of transaction safety. There are two ways this can be set: FULL (synchronous) and OFF (asynchronous). The value for transaction safety controls which mode will be used by the database mirroring session. The default value is FULL.
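
A sketch of how the safety level is changed on the principal (the database name is illustrative):

-- Synchronous (high-safety) mode
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

-- Asynchronous (high-performance) mode
ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;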

Interviewer: In database mirroring, what events happen when an automatic failover occurs?

Vijay:

When a failure occurs and automatic failover takes place, several events occur in the following sequence:

  • Failure occurs: The principal database becomes unavailable. This may be caused by hardware, storage, network, or power failure on the principal server.
  • Failure is detected: The failure is detected by the mirror and the witness.
  • Complete redo performed on mirror: The mirror database has been in the Restoring state until now and the redo is being continuously applied on the mirror database. The remaining log in the redo queue of the mirror at the time of failure of the principal is applied on the mirror database to completely recover it.
  • Decision to fail over: The mirror coordinates with the witness and decides that the database should now fail over to the mirror. This decision process usually takes around 1 second. If the principal comes back before the redo phase in step 3 is complete, failover is not required.
  • Mirror becomes principal: After all the remaining redo is applied, the mirror database is brought online and it assumes the role of the principal. The database is now available and clients can connect to the new principal and continue operation.
  • Undo: The uncommitted transactions in the transaction log are rolled back.

Interviewer: What are the new security features added in SQL Server 2014?

Vijay: I know one new security feature: a new server-level permission, CONNECT ANY DATABASE, was added. This is helpful when granting permissions for audit purposes. I know more features were added in 2014, but this is the only one I could recall from a security perspective.
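
For example, a login used for auditing could be granted the new permission like this (the login name is illustrative):

GRANT CONNECT ANY DATABASE TO [AuditLogin];

-- Often paired with another server-level permission introduced in 2014 so the
-- auditor can read, but not modify, data in every database:
-- GRANT SELECT ALL USER SECURABLES TO [AuditLogin];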

Interviewer: What is a proxy account in SQL Server?

Vijay: When a SQL Agent job is created, the job executes under the SQL Server Agent service account by default. There are certain scenarios where a job step requires higher privileges, and granting those privileges to the SQL Server Agent service account itself is not usually allowed in enterprise environments. In those cases we create a proxy account and grant all the required permissions to it. The proxy can then be mapped to a SQL Agent job or job step, so that the particular job or step runs under the privileged account through the proxy.
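
A rough sketch of setting up such a proxy, assuming an illustrative privileged Windows account DOMAIN\DeployUser and a CmdExec job step:

-- 1. Create a credential that stores the privileged Windows account
CREATE CREDENTIAL DeployCredential
WITH IDENTITY = 'DOMAIN\DeployUser', SECRET = 'StrongPassword!1';

-- 2. Create the SQL Agent proxy on top of the credential
EXEC msdb.dbo.sp_add_proxy
     @proxy_name      = N'DeployProxy',
     @credential_name = N'DeployCredential',
     @enabled         = 1;

-- 3. Allow the proxy to be used by the CmdExec subsystem; job steps of that
--    type can then be configured to "Run as" DeployProxy
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name     = N'DeployProxy',
     @subsystem_name = N'CmdExec';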

Interviewer: I am unable to start the SQL Server Database Engine service from Configuration Manager. Have you ever faced this issue? If yes, can you tell me one or two reasons that could cause the service to go down?

Vijay:

  • Service account password changed but not updated in SQL Server services
  • System databases not available or not accessible, e.g. files deleted or corrupted
  • Startup parameters pointing to incorrect paths, e.g. the master database file location wrongly specified in the SQL Server startup parameters

Interviewer: Have you ever participated in SQL Server installation?

Vijay: Yes! My current role involves SQL Server installation and patching.

Interviewer: Can you tell me one recent issue you faced while installing SQL Server?

Vijay:

Problem:

  • When SQL Server 2012 SP1 was released, our team applied it on 5 staging machines.
  • There was no problem with the installation itself; it went well, but we soon came to know that there were performance issues.

RCA:

  • When I did the RCA I found it was due to high CPU utilization, and we saw a lot of warnings in the event log about “SQL_Tools_Ans.”
  • Further investigation showed that around 90% of the CPU utilization was coming from msiexec.exe, which was running continuously; that is when the problems started.
  • We quickly went through the MSDN forums and found that many other people were facing similar issues.

Solution:

  • We immediately uninstalled SQL Server 2012 SP1 from all the staging machines.
  • Within minutes the CPU utilization came down and msiexec.exe no longer appeared in the list of running processes.
  • This issue was later fixed in SQL Server 2012 SP1 CU2.

Interviewer: In your environment do you have a separate team for Database and OS maintenance?

Vijay: Yes! Since it's a large infrastructure environment, we have dedicated support teams: a DBA team (SQL Server / Oracle / MySQL), Application (developers / testers), an OS team (Windows / Linux), Network, Storage, Cloud, Firewall, etc.

Interviewer: What happens when a checkpoint occurs?

Vijay:

  • It checks whether there are any dirty pages (pages modified since the last checkpoint) in the buffer pool.
  • It writes all log records from the log buffer to disk. (This must happen before the modified pages are written to disk so that recovery is possible; this is called write-ahead logging.)
  • It writes all dirty pages from the buffer pool to disk.
  • In the simple recovery model, it also checks whether any VLFs can be marked inactive.

Third Technical Interview

Interviewer: Have you ever heard about “Last Page Insert Latch Contention”?

Vijay: Yes!

Interviewer: Can you explain about it quickly?

Vijay:

  • When you create a clustered index on an ever-increasing column such as an identity column, it can create problems in handling a huge insert workload coming from multiple threads.
  • When huge inserts happen from multiple threads / sessions, each thread tries to access the last page (since the clustered index is on the identity column, all newly created rows are inserted on the last page).
  • When SQL Server needs to read a page, the worker thread has to acquire a shared latch; for a write it has to acquire an exclusive latch.
  • Shared and exclusive latches are incompatible with each other, which means only one thread at a time can hold an exclusive latch on a given page for writing.
  • Coming to our case, we are trying to INSERT new rows from multiple sessions / threads.
  • Each thread tries to access the last page of the clustered index B-tree.
  • Since the clustered index is created on an identity column, each new record is inserted at the end.
  • Here the problem comes in: parallel queries / threads contend for an exclusive latch on the last page of the clustered index B-tree.
  • SQL Server effectively executes these INSERT statements serially, one by one.
  • This causes the “Last Page Insert Latch Contention” (a small sketch of the pattern follows the list).
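
The classic pattern looks like the sketch below (the table is illustrative, loosely modeled on the logging scenario mentioned later): every concurrent INSERT lands on the same last page of the clustered index.

CREATE TABLE dbo.SignalLog
(
    LogID    INT IDENTITY(1,1) NOT NULL
             CONSTRAINT PK_SignalLog PRIMARY KEY CLUSTERED,   -- ever-increasing key: every insert targets the last page
    Payload  VARCHAR(4000) NOT NULL,
    LoggedAt DATETIME2     NOT NULL DEFAULT SYSDATETIME()
);

-- Executed simultaneously from many sessions, these statements contend for an
-- exclusive latch (PAGELATCH_EX) on the same last page of the B-tree:
INSERT INTO dbo.SignalLog (Payload) VALUES ('sensor feed row');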

Interviewer: Interesting. Then how can we resolve this issue?

Vijay:

When INSERT statements are contending for an exclusive latch on the same last page, we can prevent the problem by distributing the newly generated key values across the B-tree. Technically that should answer your question. There are a few ways to do it:

  • Use a random clustered key value, e.g. a UNIQUEIDENTIFIER (a sketch of this option follows the list). Limitation: larger logical pointers are required in the non-clustered indexes.
  • Implement hash partitioning. Limitation: it needs SQL Server Enterprise Edition.
  • Try a reverse index: use a function that generates the new sequence number by reversing the bits. Limitation: extra work in generating the new sequence number.
  • Eliminate latching altogether by using In-Memory OLTP if you are on 2014 / 2016. Limitation: there are many restrictions with In-Memory OLTP in 2014.
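
A minimal sketch of the first option: the same illustrative logging table re-created with a random UNIQUEIDENTIFIER clustered key, so inserts are spread across the B-tree instead of piling onto the last page.

CREATE TABLE dbo.SignalLog
(
    LogID    UNIQUEIDENTIFIER NOT NULL
             CONSTRAINT DF_SignalLog_LogID DEFAULT NEWID()     -- random values distribute inserts across pages
             CONSTRAINT PK_SignalLog PRIMARY KEY CLUSTERED,
    Payload  VARCHAR(4000) NOT NULL,
    LoggedAt DATETIME2     NOT NULL DEFAULT SYSDATETIME()
);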

Interviewer: In what kind of business scenario did you face this situation?

Vijay: Well, it was a table we created for logging the data coming from a signaling system, which sends a huge feed continuously.

Interviewer: Can we take a log backup when a database is in the simple recovery model?

Vijay: No! We can’t.

Interviewer: I have changed an existing database's recovery model from simple to full, and we are not performing log backups. Does the log file grow or get truncated?

Vijay: Did we perform a full backup after changing the recovery model to full?

Interviewer: No

Vijay: In that case we still can't perform log backups even though the database is in the full recovery model, and the transaction log is auto-truncated every time a checkpoint happens, just like in the simple recovery model.

Interviewer: So you mean to say that we must perform a full backup before we can take a log backup, is that right?

Vijay: Yes! Certainly
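
A sketch of the sequence being discussed (the database name and paths are illustrative): after switching to the full recovery model, a full backup is needed to start the log backup chain.

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Until this full backup is taken, the log still truncates on checkpoint as if
-- the database were in simple recovery:
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';

-- Log backups are possible only once the chain has been started:
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';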

Interviewer: As per my business SLA we need to perform a differential backup every night, and no transaction log backups are required. Which recovery model should the databases be in?

Vijay: Simple

Interviewer: Are you sure? Can we perform a differential backup when a database is in the simple recovery model?

Vijay: Yes! I am sure
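
For example, a nightly differential against a periodic full backup is perfectly valid in the simple recovery model (names and paths illustrative); only log backups are disallowed.

ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Periodic full backup that acts as the differential base
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';

-- Nightly differential backup
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL;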

Interviewer: We are using log shipping as a DR solution for one of our databases. Now we would like to back up the standby database. Can you do that?

Vijay: No! We can't perform a backup of the standby database. May I know the exact requirement, why you need a backup of the standby database?

Interviewer: Well, our prod is in a different region, while the DR site and the QA site are in the same region, so we wanted to quickly restore the prod backup onto the QA instance.

Vijay: Got it, but we still can't take a backup of a standby database. We need to look for alternative solutions.

Interviewer: OK, let's say this is the situation. Can you suggest any alternative solution?

Vijay:

Usually when we need to restore data from prod to pre-prod we only consider a few major transactional tables. If that is possible, then identify the tables to be copied from the standby instance to the QA instance and copy / export them using SSIS, the Import-Export Wizard, or BCP. If the entire database needs to be restored, then we can try the steps below (a rough T-SQL sketch follows the list):

  • Take the standby database offline.
  • Copy all of the database's MDF and LDF files to another folder.
  • Bring the standby database back online.
  • Copy the folder (with the MDF / LDF copies) to the required QA instance.
  • Attach those files. (We should be careful about recreating the users.)
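
A rough sketch of that flow, assuming illustrative database and file names; whether the attach succeeds as-is depends on the state of the standby files in your environment.

-- On the log-shipping secondary
ALTER DATABASE SalesDB_Standby SET OFFLINE;
-- (copy SalesDB_Standby.mdf / SalesDB_Standby_log.ldf at the OS level here)
ALTER DATABASE SalesDB_Standby SET ONLINE;

-- On the QA instance, after copying the files over
CREATE DATABASE SalesDB_QA
ON (FILENAME = N'E:\QAData\SalesDB_Standby.mdf'),
   (FILENAME = N'E:\QAData\SalesDB_Standby_log.ldf')
FOR ATTACH;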

Interviewer: Have you ever tried this method?

Vijay: Yes! We were unable to reproduce in QA / DEV a problem we were facing in the prod database. The app team found a solution, but it needed to be tested in pre-prod before applying it to prod. In that case we copied the MDF / LDF files from the standby and attached them on a pre-prod instance for testing.

Interviewer: Have you ever faced a situation where you had to move a publisher database's files, whether MDF or LDF, to another location while the database was a publisher in transactional replication?

Vijay: Yes! We had to move a publisher database's LDF file to another drive due to a low disk space issue.

Interviewer: Can you explain the procedure in detail?

Vijay: Sure!

  • First, modify the publisher database's LDF file location to the new drive using ALTER DATABASE ... MODIFY FILE. At this point we have only updated the file path in the catalog metadata; the physical file has not moved yet.
  • Next, stop the SQL Server Agent jobs for the replication Log Reader Agent and the Distribution Agent, so that these jobs don't try to access the publisher database while we are moving the file to the new drive.
  • Then take the publisher database offline using ALTER DATABASE <DBName> SET OFFLINE.
  • Move the LDF file to the new location (the one specified in the first step) where enough space is available.
  • Bring the database back online using ALTER DATABASE <DBName> SET ONLINE.
  • Start the SQL Server Agent jobs: the Log Reader and Distribution Agents.
  • Replication continues to work as usual.

Interviewer: Do you remember the exact statement for modifying the file path?

Vijay: Yes

ALTER DATABASE <Publisher_DBName>
MODIFY FILE
(
    NAME = <Publisher_DB_LDF_Log>,
    FILENAME = '<New Drive Path>\<Publisher_DB_LDF_Log>.ldf'
);

Interviewer: We have a requirement to remove duplicates from a huge dataset. Can you suggest an optimized way to do that?

Vijay:

Before looking into the solution we need answers to a few questions:

1. What is the table size (number of columns, number of rows, and size in GB)?

2. Does the table have a unique key?

3. Is this a one-time activity, or does it need to be done regularly?

Interviewer: Sure, here are the answers to your questions. Table size: 500 million rows, about 120 GB. Yes, the table has a unique key, and a row is considered a duplicate when all column values are equal excluding the unique key. This is a one-time activity.

Vijay:

Thanks for the inputs. Having a unique key makes our job easier. We can accomplish the task in multiple ways, but here is one possible optimized method (a T-SQL sketch follows the steps):

  • Identify the key values of the duplicate rows and store them in a temporary table. Ex: if there are 3 identical rows with the key values 788, 966, and 1210, keep 788 and capture 966 and 1210 in the temporary table.
  • Create a simple T-SQL batch script to delete the duplicate rows in batches. Ex: loop through the temporary table keys to identify the rows to be deleted and delete them from the main table, restricting each batch to about 1 million (10 lakh) rows or whatever your resources allow. That means the first run deletes the duplicates among the first million keys, the second run the next million, and so on.
  • Make sure transaction log backups are happening so that the log file gets cleared.
  • Also make sure the operation is scheduled during low / off business hours.
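
A hedged sketch of that batched approach, assuming an illustrative table dbo.BigTable with a unique key ID and payload columns Col1..Col3:

-- Capture the keys of the rows to delete (keep the lowest ID of each duplicate group)
SELECT ID
INTO #DuplicateKeys
FROM (
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY Col1, Col2, Col3 ORDER BY ID) AS rn
    FROM dbo.BigTable
) AS x
WHERE rn > 1;

-- Delete in batches of ~1 million rows to keep the transaction log manageable
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) t
    FROM dbo.BigTable AS t
    INNER JOIN #DuplicateKeys AS d ON d.ID = t.ID;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END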

Interviewer: Let's say my table has 85% duplicate records and we need to delete the duplicates. How do you handle that situation? Does the same procedure work well here?

Vijay:

Not exactly! As I told you, there are different methods to accomplish the task depending on the inputs. When 85% of the rows are duplicates I would suggest the approach below (a sketch follows the steps):

  • Capture all distinct records (the entire record, not just a key) into a temporary table.
  • Script out the DROP and CREATE statements for the foreign keys referencing the main table.
  • Drop those foreign keys.
  • TRUNCATE the main table.
  • Insert the data from the temporary table back into the main table.
  • Recreate the foreign keys.
  • If the process permits, add a constraint to prevent duplicate records from being inserted in the future.
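
A minimal sketch of this approach (table, column, and constraint names are illustrative, and the foreign-key scripting is environment specific, so it is omitted here):

-- 1. Keep one copy of each logical record
SELECT DISTINCT Col1, Col2, Col3
INTO #DistinctRows
FROM dbo.BigTable;

-- 2. After dropping the foreign keys that reference dbo.BigTable, empty it instantly
TRUNCATE TABLE dbo.BigTable;

-- 3. Reload the de-duplicated data, then recreate the foreign keys
INSERT INTO dbo.BigTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM #DistinctRows;

-- 4. Optionally block future duplicates
ALTER TABLE dbo.BigTable ADD CONSTRAINT UQ_BigTable_Cols UNIQUE (Col1, Col2, Col3);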

Interviewer: We have a production SQL Server. Let's imagine we do not have any SYSADMIN account other than SA. You usually log in with a database account that has DBO rights on all user databases. One day you need to change something at the server level, so you try to log in as SA. After a few failed attempts you realize you have forgotten the SA password, and most unfortunately the password is also missing from the documentation. How do you recover the SA password?

Vijay:

Before 2012 it was really difficult to handle this situation, but from 2012 onwards we can handle it by following the steps below (the key statements are sketched after the list):

  • Make sure your Windows account is a member of the local Administrators group on the Windows server where the SQL Server services are running.
  • Open a command prompt as administrator.
  • Start SQL Server in single-user mode, or with minimal configuration, from the command prompt: sqlservr.exe -m or sqlservr.exe -f.
  • Once SQL Server has started in single-user mode, keep that command prompt window open.
  • Open a new command prompt window and connect to SQL Server using SQLCMD.
  • Now we can create a new login using CREATE LOGIN.
  • We can also grant SYSADMIN membership to the newly created (or an existing) account using sp_addsrvrolemember.
  • Restart the SQL Server service normally.
  • Now we can connect to the instance using the new SYSADMIN account and reset the password for SA.
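
The T-SQL part of that recovery, as a sketch (the login name and passwords are purely illustrative):

-- Run in the SQLCMD session after starting SQL Server with -m
CREATE LOGIN [RecoveryAdmin] WITH PASSWORD = 'Str0ngPassword!1';
EXEC sp_addsrvrolemember @loginame = N'RecoveryAdmin', @rolename = N'sysadmin';
GO

-- After restarting the service normally, reset SA from the new sysadmin login
ALTER LOGIN [sa] WITH PASSWORD = 'N3wStrongPassword!2';
GO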

Interviewer: What is TEMPDB PAGELATCH Contention?

Vijay:

As we know, the first, second, and third pages of any database data file (after the file header page) are:

First Page: PFS

Second Page: GAM

Third Page: SGAM

When an application creates a temp table:

  • To determine where in TEMPDB to create your table, SQL Server reads the SGAM page (2:1:3) to find a mixed extent with free space to allocate to the table.
  • SQL Server takes out an exclusive latch on the SGAM page while it updates the page, and then moves on to read the PFS page to find a free page within the extent to allocate to the object.
  • An exclusive latch is also taken out on the PFS page to ensure that no one else can allocate the same data page; it is released when the update is complete.
  • The problem comes in when the number of users increases: many sessions creating temp objects in TEMPDB at the same time produce PAGELATCH waits on 2:1:1 (PFS) or 2:1:3 (SGAM).
  • This is known as TEMPDB page latch contention.
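
A hedged diagnostic sketch: when this is happening, you typically see many sessions waiting on PAGELATCH_UP / PAGELATCH_EX with a resource description pointing at TEMPDB pages such as 2:1:1 or 2:1:3.

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';   -- database_id 2 = TEMPDB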

Interviewer: How do you handle TEMPDB PAGELATCH contention?

Vijay:

Enable startup trace flag 1118:

Up to SQL Server 2014 we have trace flag 1118, which is used to allocate uniform extents instead of mixed extents to minimize contention during extent allocation. If this trace flag is enabled, the first 8 data pages for tables are also allocated from uniform extents rather than mixed extents. In SQL Server 2016, uniform extent allocation is the default behavior; we can change this behavior for user databases with an ALTER DATABASE command, but we cannot change it for the TEMPDB database.

Create multiple TEMPDB data files:

Creating multiple TEMPDB data files will help to remove the PFS page contention, by spreading the allocation workload over multiple files, thus reducing contention on the individual, per-file PFS pages.
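
A sketch of both mitigations (file paths and sizes are illustrative; in practice trace flag 1118 is added as a startup parameter, and it is shown here via DBCC only for illustration):

-- Add extra, equally sized TEMPDB data files to spread PFS/SGAM allocations
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

-- Enable uniform-extent allocation instance-wide (pre-2016 behavior change)
DBCC TRACEON (1118, -1);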

Interviewer: I am fine with my interview. Do you have any questions for me?

Vijay: What is the scope of this position for the next 2 years?

Interviewer: (Explained how the role will be involved in various activities, their plans for the client engagement, etc.)

Vijay: Thank you so much.

Interviewer: Please wait for some time; I'll call HR for the next-level discussion.

Vijay had an HR discussion, after which he submitted all the required documents. Within the next few days he received the offer, and he is going to join the team by the end of next month. We believe this post, SQL DBA Interview Experience with a Top MNC in Bangalore, can help DBA group members who are preparing for interviews.

Thank you so much, Vijay, for sharing your interview experience. From our team, we wish you all the very best in your future endeavors.

Note:

If you would like to share your interview experience, please send it to sqltheone@gmail.com. Just send us the questions and the interview details (where and when you attended, etc.). We will never disclose your personal or professional details anywhere without your consent.

Related Articles:

An Interview Experience with Microsoft

Top 1000 SQL Server Interview Questions

SQL Server SME Interview Experience
