Background: This series follows a discussion between senior leadership and a data architect. This post continues with the final part of the series.
• Account hijacking: Compromised login credentials can put our entire enterprise at risk.
Reason: Weak credentials, passwords that are never changed, keylogging (monitoring keystrokes), sharing credentials, etc.
Prevention: Use strong credentials; never share them; define expiry times for tokens; enable a password policy; enable multi-factor authentication; do not write passwords in clear text; store keys and certificates in Azure Key Vault; allow access only from specific IP addresses; do not use public computers or Wi-Fi to connect to cloud portals; etc.
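The "enable password policy" point above can be sketched in code. This is a minimal illustration; the specific rules (length, character classes) are assumptions, not a mandated corporate standard:

```python
import re

# Illustrative password-policy check. The thresholds below are
# assumptions for demonstration, not a specific corporate standard.
def meets_password_policy(password: str) -> bool:
    rules = [
        len(password) >= 12,                   # minimum length
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"\d", password),            # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one special character
    ]
    return all(bool(r) for r in rules)

print(meets_password_policy("Str0ng!Passw0rd"))  # True
print(meets_password_policy("weakpass"))         # False
```

In practice such rules would be enforced centrally (e.g. via directory/identity policy) rather than in application code; this sketch only illustrates what a policy check evaluates.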
• Human error: An indirect threat to our cloud workloads. Ex: Unknowingly deleting a resource, downloading insecure applications, misconfigurations, etc.
Reason: Low clarity of goals, untrained resources, unclear policies, not having a proper data handover process as part of resource exit formalities, etc.
Prevention: Train the resources; strengthen IT policies (Ex: password expiry, restricting risky apps, games, pirated software downloads, internet gateways); create tight monitoring controls; etc.
• Application security failures: Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. Ex: SQL injection (a code injection technique in which malicious SQL statements are inserted into application input fields to access information that was not intended to be displayed), cross-site scripting (the attacker sends malicious code/payloads to the server via a feedback form, comment section, etc.), etc.
Reason: Not sanitizing inputs, not implementing a timeout policy, displaying session IDs in URLs, not using SSL/TLS, not encrypting passwords, failing to verify the source of incoming requests, exposing object references (table/view/function, database, file, storage, server, etc.), exposing error-handling information to the end client, running unnecessary services, using outdated software and plugins, not having a standard audit policy, etc.
Prevention: Properly sanitize user inputs; configure session timeouts based on requirements; do not expose unnecessary information (error details, object references, session IDs, app metadata, etc.) to the end client; always make sure the underlying app components are updated with the latest patches; avoid redirects altogether, and where they are necessary, keep a static list of valid locations to redirect to; equip apps with SSL/TLS, multi-factor authentication, etc. Establish a strong security clearance layer: every time new code is deployed, review it, scan it, and identify security loopholes. Enable a Web Application Firewall (WAF), which acts as a layer between the application and the internet, filters the traffic, and protects the app from common attacks like cross-site request forgery (CSRF), cross-site scripting (XSS), file inclusion, and SQL injection. It is recommended to use a cloud-based WAF so that it is automatically updated to handle the latest threats. Schedule periodic audits of application code. We use vulnerability scanners like Grabber, which performs automated black-box testing and identifies security vulnerabilities.
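The "sanitize user inputs" advice against SQL injection boils down to one rule: never concatenate user input into SQL; bind it as a parameter instead. A minimal sketch, using an in-memory SQLite database purely for illustration:

```python
import sqlite3

# Demo table with two users (in-memory database, illustration only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern (DO NOT USE): string concatenation would let the
# payload rewrite the WHERE clause and return every row:
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe pattern: the driver binds the value, so the payload is treated
# as a literal string and matches no user name at all.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt returns nothing
```

The same parameter-binding idea applies to any database driver or ORM, not just SQLite.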
• Data Breach & Theft of Intellectual Property: Altering, deleting, uploading, or downloading our corporate data without authorization is called a data breach. If it happens to sensitive data (patents, trade secrets, PII (Personally Identifiable Information), financial info, etc.), we need to notify the victims, and it can critically damage our organization’s image, sometimes leading to legal action and heavy penalties. Ex: Cyber attacks, phishing attacks, malware injections, etc.
Reason: Data breach and theft of IP are the consequences of a failed security framework; typically, any security weak point can cause them. Ex: Leaked credentials, human error, application loopholes, weak or missing IT policies, storing encryption keys along with the encrypted data, etc.
Prevention: We must be able to control the entire workflow and data flow in our cloud workload. When a request enters or leaves our cloud network, we (our policies, standards, security posture) must drive the flow, which includes “who can enter our network?”, “sanitizing the request based on its source and access pattern”, “which network route it can take”, “which resource it can reach”, “what data it can access”, “what actions it can perform”, “what results it can carry back to the request initiator (service, app, browser, etc.)”, etc.
• To implement this, we need a strong authentication and authorization mechanism: grant the least possible permissions, enable threat detection, restrict access to specific IP addresses, apply data protection features (classification, data masking, encryption, etc.), secure backup and log files, audit data and applications frequently and fix the problems found, take patching (IaaS) seriously, define clear data boundary standards and implement policies accordingly, store encryption keys and certificates separately (using a key vault), etc.
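Among the data protection features above, data masking is easy to picture in code. This is only an illustrative helper (real deployments would use platform features such as dynamic data masking rather than hand-rolled functions); the "keep the last four characters" format is an assumption:

```python
# Illustrative PII-masking helper: hide all but the trailing characters,
# e.g. for showing a card or account number to a support agent.
# The masking format is an assumption for demonstration purposes.
def mask_pii(value: str, visible: int = 4) -> str:
    if len(value) <= visible:
        return "*" * len(value)   # too short: mask everything
    return "*" * (len(value) - visible) + value[-visible:]

print(mask_pii("4111111111111111"))  # ************1111
print(mask_pii("abc"))               # ***
```

Masking limits exposure when data must be displayed, while encryption (with keys held separately in a vault) protects the data at rest and in transit.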
• Data Loss: Data loss is any process or event that results in data being corrupted, deleted, and/or made unreadable by a user, a piece of software, or an application.
Reason: Hardware failures, power failures, datacenter failures, natural disasters, accidental deletion, not understanding or having proper agreements (data retention period), not having proper backups, no or weak disaster recovery plan, not verifying backup health, not having tight protection controls over backups, etc.
Prevention: Understand the SLA (Service Level Agreement) covering the data retention policy (how long data is required and how to dispose of it), the Recovery Point Objective (RPO, the maximum allowed data loss), and the Recovery Time Objective (RTO, the maximum allowed downtime), and plan your backup and disaster recovery accordingly. Depending on data volume and operations, perform DR drills to ensure that backups are healthy. Wherever possible, keep secondary copies across regions, utilize long-term backup features, etc.
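The RPO directly drives backup frequency: to lose at most N minutes of data, some backup or log-shipping operation must succeed at least every N minutes. A back-of-the-envelope sketch, with illustrative numbers:

```python
# RPO-driven backup planning sketch. The 15-minute RPO is an
# illustrative assumption, not a recommended value.
rpo_minutes = 15
minutes_per_day = 24 * 60

# To honor the RPO, a recovery point must be created at least this often:
backups_per_day = minutes_per_day // rpo_minutes
print(backups_per_day)  # 96 recovery points per day
```

The same arithmetic works in reverse: if a backup job can only run hourly, the achievable RPO is 60 minutes, and the SLA should say so.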
• Compliance issues: The purpose of regulatory compliance is to mitigate risk and protect our data (both enterprise and customer). Our cloud infrastructure must comply with the regulatory standards defined by our enterprise business team. If we fail to follow the standards and a data breach or loss happens, our organization will be in a difficult position both legally and financially (penalties can reach $21 million). The most common regulations are GDPR (General Data Protection Regulation) and data privacy laws such as the CCPA; likewise, we have various regulations for health insurance (HIPAA), payment cards (PCI DSS), financial information (SOX), etc. Sample compliance rules: a user must not have access to both prod and non-prod servers, block public internet access to VMs, restrict the number of administrators, enable a password policy, store keys and certificates separately from data, treat missing security patches as a compliance issue, etc.
Reason: Companies do not take regulatory compliance seriously; many are still in the awareness stage, weighing the investment in implementation effort, which requires collaboration, strategy, and skill sets. Often, programmers and IT developers treat regulatory compliance as the lowest priority.
Prevention: It is the big bosses’ (IT decision makers, cloud/data architects) responsibility to insist on compliance with regulatory standards. On-premises, we may need third-party tools to audit our infrastructure and validate regulatory compliance, but in the cloud we have built-in support. We can use Azure policies to implement the required standards. The “Regulatory compliance dashboard” in Azure Security Center is one of my favorite features: it monitors, validates, and reports non-compliant issues so that we can fix them and ensure we comply with the regulatory standard. It validates almost all aspects, Ex: network, cloud endpoints, data protection, threat detection, vulnerability management, privileged access, backup & recovery, etc.
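The idea behind the sample compliance rules, evaluating a resource's configuration against a rule set and reporting violations, can be sketched in a few lines. This is not Azure Policy itself; the rule names and configuration fields are illustrative assumptions:

```python
# Illustrative compliance evaluation, in the spirit of a regulatory
# compliance dashboard. Rule names and config fields are assumptions.
vm_config = {
    "public_internet_access": True,
    "admin_count": 7,
    "password_policy_enabled": True,
}

# Each rule returns True when the resource is compliant.
rules = {
    "block public internet access": lambda c: not c["public_internet_access"],
    "restrict number of administrators (<= 5)": lambda c: c["admin_count"] <= 5,
    "enable password policy": lambda c: c["password_policy_enabled"],
}

violations = [name for name, check in rules.items() if not check(vm_config)]
print(violations)
# ['block public internet access', 'restrict number of administrators (<= 5)']
```

In Azure, the equivalent logic lives in declarative policy definitions, which are evaluated continuously and surfaced in the dashboard rather than being hand-coded per resource.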