A discussion between a CxO and a senior Data Architect Part 5

.

Links to other parts

A discussion between a CxO and a senior Data Architect Part 1

A discussion between a CxO and a senior Data Architect Part 2

A discussion between a CxO and a senior Data Architect Part 3

A discussion between a CxO and a senior Data Architect Part 4

.

Background: We have been following a discussion between senior leadership and a senior data architect. Here, the final part of the series continues.

.

Discussion Follows:

.

Alison: We currently hold stakeholder financial portfolios, customer personal identities, and other sensitive information that is classified as confidential and restricted. When I think about moving to the cloud, the first thing that comes to my mind is data security. We are going to store our corporate data in a public cloud data center like Azure or AWS. Since you are the data owner, you need to convince me about the cloud migration by explaining the public cloud's security capabilities. Considering I have zero knowledge of cloud security, can you list the possible security risks and how cloud providers handle them?

Vasumat: Security is the top concern for any business. In a public cloud platform, security is a shared responsibility between the cloud provider (Azure, AWS, Google Cloud, etc.) and the customer. The three fundamental objectives of data security are: A) Confidentiality – ensuring data privacy; B) Integrity – protecting data from accidental or intentional alteration or deletion without proper authorization; C) Availability / Data Resiliency – despite incidents, data continues to be available at the required level of performance.

 Things to be protected: We need to protect everything that belongs to our enterprise infrastructure. Broadly, this is categorized as: cloud endpoints, network, data, applications, resources, keys and identities, backups, logs, and the cloud datacenter (physical device protection).

.

Possible security risks, reasons, solutions / preventive measures:

 Account hijacking: Compromised login credentials can put our entire enterprise at risk.

Reason: Weak credentials, unchanged passwords, keylogging (monitoring keystrokes), sharing credentials, etc.

Prevention: Strong credentials, no credential sharing, defined expiry times for tokens, an enforced password policy, multifactor authentication, never writing passwords in clear text, storing keys and certificates in Azure Key Vault, allowing access only from specific IP addresses, not using public computers or Wi-Fi to connect to cloud portals, etc.
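To make the "store keys and certificates in Azure Key Vault" point concrete, here is a minimal Python sketch of an application reading a secret at runtime instead of keeping it in code or config. The vault name corp-vault and the secret name app-db-password are hypothetical placeholders, not part of the discussion.

```python
# Minimal sketch: read a secret from Azure Key Vault at runtime.
# Assumptions: azure-identity and azure-keyvault-secrets are installed, and the
# vault/secret names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # managed identity or developer credentials
client = SecretClient(
    vault_url="https://corp-vault.vault.azure.net",
    credential=credential,
)

# The secret value never appears in source code, config files, or clear text.
db_password = client.get_secret("app-db-password").value
```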

.

 Human error: It is an indirect threat to our cloud workloads. Ex: Unknowingly deleting a resource, downloading insecure applications, misconfigurations, etc.

Reason: Low clarity of goals, untrained staff, unclear policies, no proper data handover process in exit formalities, etc.

Prevention: Train the staff, strengthen IT policies (e.g., password expiry; restricting risky apps, games, pirated software downloads, and internet gateways), create tight monitoring controls, etc.

.

 Application Security failures: Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. Ex: SQL injection (a code injection technique where malicious SQL statements are inserted into application input fields to access information that was never intended to be displayed), cross-site scripting (an attacker sends malicious code or a payload to the server via a feedback form, comment section, etc.), and so on.

Reasons: Not sanitizing the inputs, not implementing timeout policy, displaying session IDs in URL, not using SSL/TLS, not encrypting passwords, failing to verify the incoming request source, exposing object references (table/view/function, database, file, storage, server, etc.), exposing error handling information to the end client, running unnecessary services, using outdated software, plugins, not having a standard audit policy, etc.

Prevention: Properly sanitize user inputs; configure session timeouts based on requirements; do not expose unnecessary information (error details, object references, session IDs, app metadata, etc.) to the end client; always make sure that underlying app components are updated with the latest patches; avoid redirects altogether, and if they are necessary, keep a static list of valid locations to redirect to; equip apps with SSL/TLS, multi-factor authentication, etc.; establish a strong security clearance layer, meaning every time new code is deployed we review and scan it to identify security loopholes; enable a Web Application Firewall, which acts as a layer between the application and the internet, filters the traffic, and protects the app from common attacks like cross-site request forgery, cross-site scripting (XSS), file inclusion, and SQL injection (a cloud-based WAF is recommended so it is automatically updated to handle the latest threats); and schedule periodic audits of the application code. We use vulnerability scanners like Grabber, which performs automated black-box testing and identifies security vulnerabilities.
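As a concrete illustration of the "sanitize user inputs" advice, here is a minimal Python sketch of a parameterized query. The Customers table, its column names, and the pyodbc connection string are assumptions for the example only.

```python
# Minimal sketch: parameterized query to prevent SQL injection (assumed schema).
import pyodbc

def get_customer(conn, customer_id):
    """Look up one customer by id using a bound parameter."""
    cursor = conn.cursor()
    # The driver sends customer_id as data, not as SQL text, so input such as
    # "1; DROP TABLE Customers" cannot alter the statement.
    cursor.execute("SELECT Id, Name FROM Customers WHERE Id = ?", (customer_id,))
    return cursor.fetchone()

# Usage (connection string is a placeholder):
# conn = pyodbc.connect("<azure-sql-connection-string>")
# print(get_customer(conn, "12345"))
```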

.

 Data Breach & Theft of Intellectual Property: Altering, deleting, uploading, or downloading our corporate data without authorization is called a data breach. If it happens to sensitive data (patents, trade secrets, PII – Personally Identifiable Information, financial info, etc.), we need to notify the victims, and it can critically damage our organization's image, sometimes leading to legal action and heavy penalties. Ex: cyber attacks, phishing attacks, malware injections, etc.

Reason: Data breaches and theft of IP are consequences of a failed security framework. Typically, any security weak point can cause them. Ex: leaked credentials, human error, application loopholes, weak or missing IT policies, storing encryption keys alongside the encrypted data, etc.

Prevention: We must be able to control the entire workflow and data flow in our cloud workload. When a request comes into or goes out of our cloud network, our policies, standards, and security posture must drive the flow: who can enter our network, how the request is sanitized based on its source and access pattern, which network route it can take, which resources it can reach, what data it can access, what actions it can perform, and what results it can carry back to the request initiator (service, app, browser, etc.).

 To implement this, we need a strong authentication and authorization mechanism: grant the least possible permissions, enable threat detection, restrict access to specific IP addresses, apply data protection features (classification, data masking, encryption, etc.), secure backup and log files, audit data and applications frequently and fix the findings, take patching (IaaS) seriously, define clear data boundary standards and implement policies accordingly, and store encryption keys and certificates separately (using a key vault).

.

 Data Loss: Data loss is any process or event that results in data being corrupted, deleted, or made unreadable by a user, software, or application.

Reason: Hardware failure, power failure, datacenter failures, natural disasters, accidental deletion, not understanding or agreeing on the data retention period, not having proper backups, a weak or missing disaster recovery plan, not verifying backup health, not having tight protection controls for backups, etc.

Prevention: Understand the SLA (Service Level Agreement) on the data retention policy (how long data is required and how it is disposed of), the Recovery Point Objective (RPO – maximum allowed data loss), and the Recovery Time Objective (RTO – maximum allowed downtime), and plan backup and disaster recovery accordingly. Depending on data volume and operations, perform DR drills to ensure that backups are healthy. Wherever possible, keep secondary copies across regions, utilize long-term backup features, etc.

.

 Compliance issues: Regulatory compliance exists to mitigate risk and protect our data (both enterprise and customer). Our cloud infrastructure must comply with the regulatory standards defined by our enterprise business team. If we fail to follow a standard and a data breach or loss happens, our organization will be in a difficult position both legally and financially (penalties can reach roughly $21 million). The most common regulations are GDPR (General Data Protection Regulation) and CCPA for data privacy; likewise, we have regulations for health insurance (HIPAA), payment cards (PCI DSS), financial reporting (SOX), etc. Sample compliance rules: a user must not have access to both prod and non-prod servers, block public internet access to VMs, limit the number of administrators, enable a password policy, store keys and certificates separately from data, treat missing security patches as a compliance issue, etc.

Reason: Companies do not take regulatory compliance seriously; many are still in the awareness stage; others hesitate at the investment and implementation effort, which requires collaboration, strategy, and skill sets. Mostly, programmers and IT developers treat regulatory compliance as the lowest priority.

Prevention: It is the decision makers' (IT leadership, cloud/data architects) responsibility to insist on compliance with regulatory standards. On-premises we may need third-party tools to audit our infrastructure and validate regulatory compliance, but in the cloud we have built-in support. We can use Azure Policy to implement the required standards. The "Regulatory compliance dashboard" in Azure Security Center is one of my favorite features: it monitors, validates, and reports non-compliant issues so that we can fix them and stay compliant. It covers almost all aspects, e.g., network, cloud endpoints, data protection, threat detection, vulnerability management, privileged access, backup & recovery, etc.

.

Alison: I am a little curious about security because I started my career as a security engineer. Though you gave great insights, you answered generically about common security concerns and their workarounds. Now I would appreciate it if you could describe a specific public cloud provider's security features. Just the top 5 or 6 features are fine.

Vasumat: Sure! Almost all top cloud providers offer air-tight security controls; it's all about how efficiently customers utilize those features. I'll talk about Azure's security features.

Application Security:

 Web Application Vulnerability Scanning: Azure provides a built-in solution called Tinfoil Security. We can use it to scan applications hosted on Azure App Service.
 Application Security Groups (ASG): Used within NSG (Network Security Group) rules; they help manage the security of virtual machines by grouping them according to the applications that run on them.
 Web Application Firewall: Azure Application Gateway comes with pre-configuration to protect our web applications from common attacks like SQL Injection, cross-site scripting, session hijacking, etc.
 Penetration\Pen testing: It simulates (like a cyberattack drill) a cyber-attack against our application to check for exploitable vulnerabilities. Using this we can simulate SQL Injection, LDAP Injection, Sensitive data exposure, cross-site scripting, endpoint port scanning, Denial of Service (DoS) attack, etc.
 Azure DDoS Protection: Protect our applications from Distributed Denial of Service (DDoS) attacks

.

Storage Security:

 Azure role-based access control (Azure RBAC – User level): Restrict access based on the “need to know” and always provide the least possible privilege to the users. We can control the access using Azure Role, Group & Scope in RBAC.
 Access Control Lists (ACLs – object level): ACLs give you the ability to apply a “finer grain” level of access to directories and files. RBAC lets you grant “coarse-grain” access to storage account data, such as read or write access to all of the data in a storage account, while ACLs let you grant “fine-grained” access, such as write access to a specific directory or file.
 Shared Access Signature (SAS): Instead of sharing account keys, we can use a SAS to limit access to only the required objects for a specified period (see the sketch after this list).
 Encryption at Rest: Mandatory for data privacy. There are three options: Storage Service Encryption (the Azure storage service automatically encrypts data while writing it to storage), Azure Disk Encryption (VM OS and data disk encryption), and client-side encryption (encryption and decryption happen at the application layer; we use the AES algorithm and store the encryption key in Azure Key Vault).
 Encryption in Transit: Mandatory for data privacy. Again, three options: transport-level encryption (HTTPS/TLS) for Azure Storage, wire encryption (SMB 3.0 encryption) for Azure File Shares, and client-side encryption.
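Here is a minimal Python sketch of the SAS item above, using the azure-storage-blob package. The storage account, container, blob, and account key are placeholders.

```python
# Minimal sketch: issue a read-only, time-limited SAS instead of sharing the account key.
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="corpdata",                    # placeholder storage account
    container_name="invoices",                  # placeholder container
    blob_name="2021/inv-001.pdf",               # placeholder blob
    account_key="<storage-account-key>",        # kept server-side, never shared
    permission=BlobSasPermissions(read=True),   # read-only
    expiry=datetime.utcnow() + timedelta(hours=1),
)
url = f"https://corpdata.blob.core.windows.net/invoices/2021/inv-001.pdf?{sas_token}"
print(url)  # share this URL; it stops working after one hour
```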

.

Network Security:

 Azure VNet: Azure Virtual Network is a logical isolation of our resources in the Azure network. We can fully control the IP address blocks, DNS settings, security policies, route tables, etc.
 Network Security Groups (NSG): A stateful packet-filtering firewall that filters traffic moving between subnets in an Azure VNet, and between an Azure VNet and the internet.
 Azure Firewall: A fully managed cloud-native service used to protect our Azure Virtual Networks
 Route Control and Forced Tunneling: User-Defined Routes allow us to customize inbound and outbound paths for traffic moving into and out of individual virtual machines or subnets. Forced tunneling is a mechanism we can use to ensure that our services are not allowed to initiate a connection to devices on the Internet. It forces outbound traffic to the Internet to go through on-premises security proxies and firewalls.
 Azure Bastion: A fully managed service to secure remote access (RDP, SSH) to our Azure Virtual Machines without any exposure through public IP addresses.
 Front Door: this is a secure, fast, and reliable cloud CDN (Content Delivery Network) service with intelligent threat protection.
 Network Watcher: Network performance monitoring and diagnostics solution
 Private Link/Private Endpoint: enables us to access Azure PaaS Services (Azure Storage, SQL Database, etc.) in our virtual network over a private endpoint.
 VPN Gateway: To send network traffic between our Azure Virtual Network and our on-premises site. It sends encrypted traffic across a public connection.
 ExpressRoute: Connects on-premises and Azure over a dedicated private connection (WAN link). Traffic does not traverse the public Internet, which makes it a more private option than VPN Gateway.
 Application Gateway: Provides round-robin distribution of incoming traffic, cookie-based session affinity, and URL path-based routing, and allows us to host multiple websites behind a single application gateway.
 Traffic Manager: This allows us to control the distribution of user traffic for service endpoints. It has the ability of automatic failover based on the health of the service endpoint.
 Load Balancer (LB):  Distributes incoming traffic among healthy instances of services. Public LB distributes internet traffic to VM, Private LB distributes traffic between VMs in Azure VNet and between on-premises and Azure VNet.

.

Database security:

 IP Firewall Rules: Database and Server level firewall rules allow us to grant access based on originating IP address of each request.
 VNet Firewall Rules: enable Azure SQL Database to only accept communications that are sent from selected subnets inside a VNet.
 Authentication and Authorization: We can use both SQL and Azure AD accounts. But it is recommended to use Azure AD with multi-factor authentication. Also, grant database object-level permissions on a need-to-know basis only.
 Azure Defender for SQL: An Advanced Threat Protection feature. We can configure emails to receive security alerts.
 Vulnerability assessment (VA): is a part of the Azure Defender for SQL offering and it can help us to discover and track potential database vulnerabilities.
 Auditing: It’s mandatory for all database workloads and helps maintain compliance with security standards. We can store audit logs using Azure storage, Azure Monitor (Log Analytics), and Event Hub.
 Dynamic data masking: Limits sensitive data exposure by masking it for non-privileged users; we can mask specific columns for specific users (see the sketch after this list).
 Row-level security: Controls user access to specific sets of rows within the same table based on business logic.
 Data discovery and classification: Identifies and labels sensitive information. Once we label a column as sensitive, auditing takes extra care when logging access to it and we can see its access statistics. Moreover, data classification is mandatory in most compliance standards.
 Always Encrypted: Encrypt specific sensitive data and decrypt only for the application at the time of access.
 Encryption for data at rest: Transparent Data Encryption (TDE) encrypts entire database files and backups.
 Encryption for data in transit: By default, Azure enforces TLS (Transport Layer Security) and encrypts all connections between client and server.
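Here is a minimal Python sketch of the dynamic data masking item above. It simply executes the standard Azure SQL Database masking T-SQL through pyodbc; the connection string, table, column, and user names are hypothetical.

```python
# Minimal sketch: mask an Email column for non-privileged users (assumed schema).
import pyodbc

conn = pyodbc.connect("<azure-sql-connection-string>")  # placeholder
cur = conn.cursor()

# Mask the Email column for every user who has not been granted UNMASK.
cur.execute(
    "ALTER TABLE dbo.Customers "
    "ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');"
)

# The application's service account keeps full values; everyone else sees
# masked output such as aXXX@XXXX.com.
cur.execute("GRANT UNMASK TO AppServiceUser;")
conn.commit()
```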

Other Security:

 Antimalware & Antivirus: With Azure IaaS, we can either use Microsoft or any third-party (Symantec, McAfee, Kaspersky, etc.) antimalware software.
 Protecting secrets: We must store our app and DB secrets (keys, certificates, etc.) separately in Azure Key Vault.
 Protect Identity: Multi-Factor Authentication (using Microsoft Authenticator, hardware/software token, SMS, voice calls, etc.), Password policy enforcement, and Azure Role-Based Access Control.
 Protect resources: Enabling soft delete, keeping backups for deleted resources for a certain period, etc.
 Protect Physical datacenter: Taken care of by Microsoft; before decommissioning any storage device, data is completely wiped using certified standard procedures.

.

Security monitoring and alerts:

 Azure Monitor: this is a comprehensive solution for collecting, analyzing, and acting on telemetry from the cloud and on-premises environments.
 Collect data using Azure Monitor Logs and Azure Monitor Metrics.
 Detect and diagnose security issues using insights (Application, VM, Container, Network, Storage)
 Analyze collected data using Metrics Explorer and Log Analytics. We can also export firewall and proxy logs from on-premises or third-party cloud providers (e.g., AWS) to Azure and analyze them there (see the sketch after this list).
 Visualize information using Azure monitor dashboards and workbooks
 Support operations at scale with smart alerts (uses ML algorithms) and automated actions
 Azure Security Center: Detects threats and provides integrated security monitoring and policy management across our Azure subscriptions. Security dashboards list all alerts and recommendations.
 Azure Advisor (Personalized Cloud Consultant): analyzes our resource configuration. Based on Azure Security Center analysis, it recommends solutions to help improve performance, security, and reliability.
 Regulatory compliance dashboard: It’s part of Azure Security Center, and shows the status, violations, and recommendations.
 Azure Resource Manager templates: Once we create a template with strong security settings, the same template can be reused for all environments, which makes secure configuration easier.
 Azure Service Health events & alerts: Provides a customizable dashboard and tracks four types of health events (service issues, planned maintenance, health advisories, and security advisories) that may impact our resources.
 Azure Storage Analytics: performs logging (successful and failed requests, timeout, network, authorization errors, etc.) and provides metrics data for a storage account. We can analyze usage trends and diagnose issues with our storage account.
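As a small illustration of the Log Analytics item above, here is a minimal Python sketch using the azure-monitor-query package. The workspace ID and the KQL query are placeholders, and the tables available depend on which diagnostic settings are enabled.

```python
# Minimal sketch: run a KQL query against a Log Analytics workspace (placeholders).
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query="AzureDiagnostics | where Category == 'AzureFirewallNetworkRule' | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```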

.

.

Alison: Consider this the last question: have you ever experienced or seen any failures or disasters in the cloud, maybe while migrating or post-migration?

Vasumat: Before answering this, I want to define the word "failure". There are two types of failure: A) not meeting expectations against the estimated plan, e.g., cost increased, the migration missed the timeline, or performance decreased in the cloud; B) the migration plan itself failed, forcing a rollback. Since we spend adequate, quality time on the analysis and planning phase, I have fortunately never fallen into option B. However, when we are dealing with a huge enterprise, whatever care we take, there are still unknowns that inevitably impact our estimates.

But consultants do reach out to me for help and suggestions (both from inside and outside the organization), and there I have seen the ground-level mistakes that get made. Let me quickly walk through a few case studies:

 Vasumat (Case-1): By default, AWS S3 bucket access is private. They made a bucket public while doing a POC in the development environment and accidentally carried the same configuration into production. The customer application had been storing scanned copies of all employee identity documents (passports, tax documents, offer letters, etc.). The only good thing is that the vendor identified it at an early stage of the production go-live. We can imagine the impact if a customer or a third party had discovered the security loophole.

Alison: Sorry for stopping you in between. Did it happen in your organization? I mean, as a vendor.

Vasumat: No, ma'am. I write blogs, so I have some contacts, and one of my followers called me for help.

Alison: I am excited to know your suggestion there.

 Vasumat: Well, the vendor was a small entity (600 employees) helping a customer in the UK with their cloud migration, and my contact was working for the vendor. Eight days after the production go-live, they identified that the S3 bucket was open to the public. I suggested three things they had to do immediately to control the damage: A) raise an incident, B) make the S3 bucket private, and C) inform the customer. While working on A, B, and C, someone should track the S3 bucket and individual object access statistics. These can be found using S3 server access logs and AWS CloudTrail, which record the complete set of events that happened on the bucket, including the date and time of access, objects accessed, actions performed, source IP address, access type (browser, REST API), etc. Fortunately, no unauthorized or unknown access appeared in the logs, so they approached the customer, admitted it was a human mistake, showed that no damage had occurred, promised to re-run the security audit for the entire infrastructure, explained the preventive measures they would take, and so on. After two months of struggle, they finally settled the issue without fines or action from the customer. But imagine if this had happened in the card payment industry; it would have taken an entirely different route with serious consequences.
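For reference, here is a minimal boto3 sketch of step B and the follow-up access check described above. The bucket name is a placeholder, and this covers only one slice of the remediation (the S3 server access logs mentioned above are reviewed separately).

```python
# Minimal sketch: block public access on a bucket, then list recent CloudTrail
# events that referenced it. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="hr-employee-documents",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

cloudtrail = boto3.client("cloudtrail")
# Note: object-level (data) events appear here only if CloudTrail data events are enabled.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeName": "ResourceName", "AttributeValue": "hr-employee-documents"}
    ],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```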

.

Alison: I reckon you suggested the right thing there. Damage control is the first step in any incident. Do you have any other use cases?

.

 Vasumat (Case-2): A customer was surprised to see the monthly cloud bill: it was more than double the previous month's. On investigation, they found that a lot of Azure VMs (with premium features) had been created without notifying the app owners. An automated scheduled job had been creating a new VM every hour, and it was doing its job perfectly. An employee who had issues with his boss had intentionally written a simple program and scheduled it while leaving the organization.

Reasons: A) Too many admins, so anyone could do anything; B) no policies limiting resource creation; C) no proper monitoring and notifications.

Solution: Dealing with that employee and any further action was handled by HR. They approached the Azure team, explained the situation, and got some relief: since the VMs had never been used, they received a good discount on the additional amount incurred by them.
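Here is a minimal Python sketch of the "proper monitoring and notifications" point: a scheduled check that counts VMs per resource group so unexpected growth gets noticed early. The subscription ID and the threshold are placeholders; a real setup would pair this with Azure Policy limits and budget alerts.

```python
# Minimal sketch: count VMs per resource group and flag unexpected growth.
from collections import Counter
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

counts = Counter()
for vm in client.virtual_machines.list_all():
    # vm.id looks like /subscriptions/<id>/resourceGroups/<rg>/providers/...
    resource_group = vm.id.split("/")[4]
    counts[resource_group] += 1

for rg, n in counts.items():
    if n > 20:  # hypothetical expected ceiling per resource group
        print(f"ALERT: {rg} has {n} VMs - investigate unexpected growth")
```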

.

 Vasumat (Case-3): A team migrated a tiny application and its database to the cloud. Though they faced some issues while migrating, they fixed them and migrated successfully within the timeline. End customers started using the application hosted in the cloud. Fifteen days after the migration, an application engineer realized they had made a mistake: the application and database had been migrated to the cloud, but the app config file in the cloud had not been updated properly, so it was still pointing to the on-premises database. He was afraid, and instead of informing his boss, he simply updated the application config file to point to the cloud database. Exactly a month later, the customer noticed that some data was missing and reported it to the vendor. They found the problem, and the app engineer had to confess his mistake.

Reasons: No proper testing done post-migration; no migration checklist; a clear human error, and if the customer had been notified when the app team found the problem, the damage could have been contained; leaving the on-premises database online and operational even after a successful migration.

Solution: It was challenging to find the DELTA data set for those 15 days (from go-live to the config update pointing at the cloud DB). They had to run a table-wise comparison between the go-live copy, the copy from 15 days after go-live (current on-prem), and the copy from 45 days after go-live (current cloud); identify the INSERT, DELETE, and UPDATE operations; and perform the data load accordingly.
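A minimal pandas sketch of the table-wise DELTA comparison described above. The order_id key and the snapshot files are hypothetical; a real migration would run this per table against primary keys, preferably inside the database.

```python
# Minimal sketch: compare two snapshots of one table to find rows to INSERT/UPDATE.
import pandas as pd

on_prem = pd.read_csv("orders_onprem_snapshot.csv")  # rows written to the old DB
cloud = pd.read_csv("orders_cloud_snapshot.csv")     # rows in the cloud DB today

merged = on_prem.merge(
    cloud, on="order_id", how="outer",
    suffixes=("_onprem", "_cloud"), indicator=True,
)

missing_in_cloud = merged[merged["_merge"] == "left_only"]   # need INSERT into cloud
only_in_cloud = merged[merged["_merge"] == "right_only"]     # written after cutover

print(f"{len(missing_in_cloud)} rows need to be inserted into the cloud DB")
print(f"{len(only_in_cloud)} rows exist only in the cloud DB")
# Rows present in both snapshots may still need UPDATEs; compare non-key columns here.
```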

.

 Vasumat (Case-4): The customer raised an incident saying that an application had stopped working. A quick check showed that the SQL Server database service was down on an Azure VM. They were unable to restart the service, and the log said that the SQL Server system database files were missing. Surprisingly, all database file extensions (MDF, NDF, and LDF) had been changed to some strange string, something like ".MDF" to ".xto". They later realized it was a ransomware attack and that the attackers were demanding a $60K to $75K ransom. Someone with admin permissions had used a personal email account on the VM and accidentally clicked a phishing email that downloaded a malware file onto the VM; that's all it took. It installed a program on the VM that forcefully stopped the SQL service and encrypted all data and log files, including the backup folder. Unfortunately, they didn't have a secondary replica for the database instance, but they did have the latest database backups in Azure Storage and the VM protected by Azure Backup. They were able to quickly create another instance with the same IP, so the business continued.

Reasons: Exposing the VM to the public internet; needlessly granting admin rights; not giving security awareness training to developers; not using proper malware protection; no proper monitoring.

Solution: They disabled all access to the affected VM except for a single admin, then scanned the VM and the entire network to make sure the malware hadn't affected other resources in the VNet. They also had to identify the laptops of all developers and DBAs using that VM, remove them from the corporate network, and have the security team scan each one individually.

.

Vasumat: Want to hear more?

Alison: I think that’s more than enough. What I observe is most of them are due to human error, isn’t it?

Vasumat: Certainly! 80% of failures are due to human mistakes. You might have heard about the recent data leak that exposed 38 million records, including employee databases, COVID-19 vaccination statuses, etc. They hosted 1,000-plus applications on Microsoft PowerApps, and due to a simple misconfiguration, the data was exposed to the public.

.

Alison: Yes, I do. But in my opinion, the problem is on both sides: Azure should make it private by default instead of public, and the developers forgot to make it private.

Vasumat: That’s why we always say that SECURITY is a shared responsibility. Azure automated monitoring system identified it and sent a security notification saying that their sensitive data was exposed to the public. The customer immediately reacted and made access private.

.

Alison: How can we prevent this?

Vasumat: Three things: A) Continuous testing and improving of test scenarios; we need customizable security testing tools and services that fit our business. B) Automation; first ensure that our environment is stable, then automate processes one by one. C) Monitoring and improving the automation procedures.

Alison: Makes sense to me.

.

Alison: Vasumat, have you ever worked in pre-sales?

Vasumat: Well, I can’t claim that I am experienced in the end-to-end pre-sales life cycle but it’s a part of any architect’s role. The sales team involves architects in customer discussions. I am experienced in helping pre-sales teams in preparing the technical documentation, creating mockups, giving presentations, and most importantly holding sales teams in customer discussions when they are giving unrealistic promises, buying time from the customer for performing POC, etc.

Alison: In that case, you already know a lot about pre-sales.

Vasumat: I have a query here, can I?

Alison: Sure!

.

Vasumat: As part of sales, before getting the PO approved, we need to understand the customer's requirements to prepare the best possible solution, impress them, and win the deal. But your requirement is that your own infrastructure needs to be migrated, which means there is no external customer here. So I would like to know why you are specific about pre-sales.

Alison: I understand your concern. Let me put it this way: we are not going to keep the entire IT team with us; instead, we will give the migration project to multiple vendors. But we need to make sure we hold on to the KEY people, people like architects who can understand the core business, design the architecture, and essentially drive the entire cloud journey. Though we are ready to migrate our workloads to the cloud, it doesn't mean we have the green signal. Business vertical by business vertical, we need to pass through approvals from app owners, audit teams, cyber security, stakeholders, etc. It is not pre-sales, but it needs almost the same preparation…

Vasumat: To convince them of the cloud migration.

Alison: Exactly. Hope it answered your question.

Vasumat: 100%. Thanks for that.

.

Alison: Well, that’s all from my side. It’s already been a long day for you, in fact, I took 80 min extra, my apology for that. I would appreciate your time, patience, and commitment. I am open to questions if you have any.

Vasumat: You don’t need to apologize. For me, it’s one of the best technical discussions I have ever had. I hope someday I’ll sit and have a coffee from the mountain view place. Any idea when I can hear about the next steps of the hiring process?

Alison: Hopefully very soon!

Vasumat: Thank you a lot!

Alison: Viswa will escort you to the security check; from there you can take the shuttle to Exit-1. Have a good evening.

Vasumat: Good evening, Alison. Bye!

.

This discussion went on for 7.5 hours, and the candidate then had to go through a discussion with the head of the Data Engineering vertical and a final discussion with HR, followed by the offer. It took 5 more weeks to get the final offer in hand.

.

***Note:

I will take some time to prepare the blog on the discussion between the Senior Data Architect and the Head of Data Engineering. That should be fascinating, as they talk about building modern data platforms using various architectures, including Lambda, Kappa, Data Mesh, Data Fabric, Data Lake, and Data Warehouse, with Databricks, Synapse, Kafka, Snowflake, IoT, Stream Analytics, Cosmos DB, Redshift, Azure SQL Database, RDS, S3, Blob, Spark Streaming, etc.

.
