Security Architecture Models

  1. Introduction
  2. The Downfall of Traditional Security Architecture
  3. A new age needs new strategies
    1. Time-Based Security
      1. What is Time-Based Security?
      2. Benefits and Limitations of TBS
        1. Benefits
        2. Limitations
    2. Lockheed Martin Cyber Kill Chain
      1. Structure of the Cyber Kill Chain
      2. Benefits and limitations of the Cyber Kill Chain
        1. Benefits
        2. Limitations
    3. Combination Approach
      1. Combining approaches for success
      2. Benefits and limitations of the Combined Model
        1. Benefits
        2. Limitations
    4. AWS Well-Architected Framework
      1. Overview of the Five Pillars
      2. Benefits and limitations of the Security Pillar
        1. Benefits
        2. Limitations
    5. Shift Left Model
      1. What are security gates?
      2. Benefits and limitations of Shifting Left
        1. Benefits
        2. Limitations
    6. Zero Trust Model
      1. Definition of Zero Trust Security
      2. The Zero Trust Security Model
      3. Benefits and limitations of Zero Trust Security
        1. Benefits
        2. Limitations
    7. MICCMAC
      1. Understanding MICCMAC
      2. Benefits and Limitations of MICCMAC
        1. Benefits
        2. Limitations

Introduction

In this article, we will explore the different security architecture models available to us and what has driven the move towards zero trust. These models help us design and organize security controls from the physical layer to the network in a way that identifies gaps and helps us prioritise our security investments. We will look at the deficiencies of traditional models and at newer, more defensible security architectures.

Network security remains a key focus of many security technologies, with the goal of identifying and handling good and bad traffic appropriately, and controls like Next-Generation Firewalls (NGFW) and Intrusion Detection Systems (IDS) are still important. However, the network-only approach stems from the traditional security model, which concentrates on perimeter controls and often overlooks, or simply assumes, trust within the internal environment. A crunchy outside and a chewy inside is no longer sufficient, and the architectures we look at here have evolved to be more comprehensive and data-centric.

Many great thinkers shaped the early architecture methodologies, which have grown increasingly complex over the years to keep pace with ever-growing threats.

Bill Cheswick’s influential 1990 paper, “The Design of a Secure Internet Gateway,” introduced the concept of internet proxies (referred to as gateways). Cheswick highlighted the delicate balance between security and convenience in designing corporate gateways to the internet. Many organizations chose convenience, using a simple router to connect their internal networks to the rest of the world, which he deemed dangerous. Cheswick’s paper came in response to the Morris Worm, the first major internet worm released in 1988.

Other pioneers like Richard Bejtlich and Clifford Stoll have also significantly contributed to the understanding of blue teaming, the defensive side of cybersecurity. Clifford Stoll pioneered now-common controls such as perimeter security and honeypots in the 1980s, while more than a decade later, in the early 2000s, Bejtlich first outlined the key characteristics of defensible networks in his MICCMAC model: they can be monitored, limit intruders’ freedom to manoeuvre, offer minimal services, and can be kept current. Taking the controls Stoll identified and structuring them in a model helped us understand our defensive controls and coverage.

The Downfall of Traditional Security Architecture

Traditional security architecture has struggled to keep pace with the ever-evolving landscape of cyber threats. Excessive focus on perimeter defence, insufficient attention to the internal environment, a lack of consideration for Bring Your Own Device (BYOD) policies and Internet of Things (IoT) devices, the organic growth of organizational technology stacks without proper security planning, and compliance-driven security all reduce the effectiveness of security in the modern day.

Traditional security architecture has several key deficiencies:

  • Overemphasis on perimeter defence: Traditional security approaches have prioritized the protection of network perimeters, often neglecting the need for robust internal security measures. This has led to the development of flat networks, which can be easily managed but are also susceptible to intruders pivoting from one compromised system to another.
  • De-perimeterization: The widespread adoption of cloud services, mobile devices, and IoT has eroded the classic network perimeter. Traditional perimeter controls, such as firewalls, struggle to secure these devices and services, which often escape the scrutiny applied to conventional network devices like desktops and servers.
  • Overreliance on preventive controls: Many organizations rely too heavily on preventive controls at the expense of detective and responsive measures, leading to a lack of effective detection capabilities. When preventive controls inevitably fail, organizations often lack the necessary tools and resources to detect and respond to security incidents.
  • Compliance-driven security: Compliance is an essential aspect of security, but it should not be the primary goal. Compliance-driven security can be harmful when it is viewed as the primary end goal, as it may lead organizations to focus on meeting compliance standards rather than implementing a comprehensive security strategy.
  • Ignoring BYOD and IoT: Traditional security architecture has not adequately considered the challenges posed by BYOD policies and IoT devices. These devices can introduce new attack vectors and vulnerabilities that require a shift in security focus.
  • Organic growth of technology stacks without security planning: The rapid expansion of organizational technology stacks, driven by the adoption of new tools and technologies, has often occurred without proper security planning. This can lead to gaps in security coverage and a false sense of security.

The shortcomings of traditional security architecture have become increasingly apparent as organizations face new and emerging threats. We can no longer trust our perimeter to keep threats out. Breaches are now holistic: it only takes one employee using Hola VPN to give a malicious actor access to your environment, one unpatched smart light bulb, or one annoyed employee who decided quiet quitting was not a sufficient way to express their disappointment in their organisation.

A new age needs new strategies

Time-Based Security

What is Time-Based Security?

Time-Based Security (TBS) is a security architecture model that shifts our understanding of cybersecurity from a static, barrier-focused approach to a dynamic, time-focused perspective. This model, pioneered by Winn Schwartau, focuses on three critical components: protection (P), detection (D), and response (R).

Under TBS, the primary questions that define the security posture of a system are:

  • How long can my protection mechanisms withstand an attack (Protection)?
  • How long before we detect a compromise (Detection)?
  • How long before we respond (Response)?

The crucial principle of TBS is that the time your protection can withstand an attack (P) should exceed the sum of the time to detect (D) and the time to respond (R). If this holds (P > D + R), your systems are secure. If not, you are at risk.

Traditionally, security has been about building robust protection mechanisms. In contrast, TBS argues that no protection is impenetrable given enough time and resources. Therefore, it places equal importance on detecting breaches and responding to them.

To illustrate, consider a physical safe designed to protect valuable assets. The safe has a specific rating indicating how long it can withstand an attack. However, relying solely on the safe’s resilience is not enough. We complement the safe with detection mechanisms—sensors, alarms, cameras—and a response plan to counteract when a breach is detected.

TBS applies the same concept to cybersecurity. It recognizes that given enough time, an attacker can bypass any protection mechanism. Therefore, it emphasizes the need for robust detection systems to identify breaches quickly and response mechanisms to mitigate the impact.
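To make the arithmetic concrete, here is a minimal sketch of the P > D + R check in Python; the field names, units, and exposure calculation are illustrative rather than part of Schwartau’s formal notation.

```python
from dataclasses import dataclass


@dataclass
class TimeBasedSecurity:
    """Illustrative Time-Based Security check: P > D + R."""
    protection_hours: float  # how long protection is expected to withstand attack (P)
    detection_hours: float   # time to detect a compromise (D)
    response_hours: float    # time to contain once detected (R)

    @property
    def exposure_hours(self) -> float:
        """Window an attacker could act unopposed: D + R - P, floored at zero."""
        return max(0.0, self.detection_hours + self.response_hours - self.protection_hours)

    def is_secure(self) -> bool:
        return self.protection_hours > self.detection_hours + self.response_hours


# Example: protection rated for 24h, detection in 4h, response in 8h -> secure, no exposure
posture = TimeBasedSecurity(protection_hours=24, detection_hours=4, response_hours=8)
print(posture.is_secure(), posture.exposure_hours)  # True 0.0
```

The same calculation can be rerun as detection and response times are measured in exercises, giving a simple, repeatable way to see whether security investments are shrinking the exposure window.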

Benefits and Limitations of TBS

Benefits

  • Quantitative Assessment: TBS provides a reproducible method to understand how much security a product or technology provides. It uses measurable time-based metrics, allowing for more informed decision-making about security investments.
  • Holistic View: TBS considers all aspects of security—protection, detection, and response. This comprehensive approach enables a more robust security posture that goes beyond mere defensive walls.
  • Proactive Risk Management: By focusing on detection and response, TBS encourages proactive risk management. It recognizes that breaches are inevitable and plans accordingly.

Limitations

  • Difficulty in Measuring Time: Accurately measuring protection, detection, and response times can be challenging in complex environments with multiple layers of security.
  • Changing Threat Landscape: The dynamic nature of cybersecurity threats means that the effectiveness of protection mechanisms may change over time, requiring regular reassessment of the TBS model.
  • Resource Intensive: Implementing a TBS model can be resource-intensive, requiring significant investments in monitoring, detection, and response capabilities.

Lockheed Martin Cyber Kill Chain

The Cyber Kill Chain, also known as the Intrusion Kill Chain, is a model designed by Lockheed Martin to help organizations understand and counteract cyberattacks. This model draws its inspiration from military kill chains, which predate computers and describe a series of steps an attacker must take to achieve their objectives. By identifying these steps and deploying countermeasures at each stage, organizations can significantly reduce the chances of a successful cyberattack. In this section, we will explore the Cyber Kill Chain, its benefits, limitations, and where it can be effectively applied.

Structure of the Cyber Kill Chain

The Cyber Kill Chain outlines the stages of a cyberattack, providing a framework for organizations to detect, analyze, and defend against threats. Below are its seven stages, each with specific countermeasures that can be deployed to prevent or mitigate the impact of an attack.

Reconnaissance

In this stage, the attacker gathers information about the target organization, such as network infrastructure, systems, and potential vulnerabilities. They may use publicly available information or more advanced methods like scanning and social engineering.

Countermeasures: Implement network segmentation, deploy intrusion detection systems (IDS), monitor network traffic, and enforce strict access controls. Raise awareness about social engineering tactics and educate employees on how to recognize and report suspicious activities.

Weaponization

The attacker creates a weapon, such as malware or a malicious payload, and packages it with an exploit to take advantage of a specific vulnerability.

Countermeasures: Ensure that systems are up-to-date with the latest security patches, deploy antivirus and anti-malware solutions, and employ application whitelisting to prevent unauthorized software from running.

Delivery

The attacker delivers the weapon to the target, using methods such as email attachments, drive-by downloads, or malicious websites.

Countermeasures: Deploy email security solutions that scan for malicious attachments and links, implement web content filtering, and secure network perimeters using firewalls and intrusion prevention systems (IPS).

Exploitation

The attacker exploits the vulnerability, allowing them to execute the malicious payload on the target system.

Countermeasures: Employ vulnerability management processes to identify, prioritize, and remediate vulnerabilities. Implement network and host-based intrusion detection and prevention systems to detect and block exploit attempts.

Installation

The attacker installs the malware on the compromised system, enabling them to maintain persistence and control over the system.

Countermeasures: Use endpoint security solutions to detect and prevent malware installation, implement application control policies, and enforce the principle of least privilege to limit the potential impact of a compromised account.

Command and Control (C2)

The attacker establishes a connection to a command and control server, which allows them to remotely control the compromised system and potentially exfiltrate data.

Countermeasures: Monitor outbound network traffic for suspicious connections, block known malicious IPs and domains, and implement network segmentation to limit lateral movement within the network.

Actions on Objectives

The attacker carries out their intended objective, such as data exfiltration, encryption for ransom, or destruction of data and systems.

Countermeasures: Implement robust data backup and recovery plans, deploy data loss prevention (DLP) solutions, and establish incident response plans to quickly detect, contain, and remediate threats.
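As a rough illustration of how defenders can use the chain operationally, the sketch below tags alerts with the stage they correspond to and reports the furthest stage an intrusion reached; the alert sources and the mapping are hypothetical examples, not a standard taxonomy.

```python
# Hypothetical mapping of detection sources to Cyber Kill Chain stages,
# used to report how far an intrusion progressed.
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]

# Example alert-source -> stage mapping (illustrative only)
SOURCE_TO_STAGE = {
    "external_port_scan": "Reconnaissance",
    "email_gateway_block": "Delivery",
    "edr_exploit_block": "Exploitation",
    "edr_persistence_alert": "Installation",
    "dns_c2_beacon": "Command and Control",
    "dlp_exfil_alert": "Actions on Objectives",
}


def deepest_stage(alert_sources: list[str]) -> str:
    """Return the furthest kill-chain stage reached by a set of alerts."""
    indices = [KILL_CHAIN.index(SOURCE_TO_STAGE[s])
               for s in alert_sources if s in SOURCE_TO_STAGE]
    return KILL_CHAIN[max(indices)] if indices else "No stage observed"


print(deepest_stage(["email_gateway_block", "dns_c2_beacon"]))  # Command and Control
```

Reporting in these terms makes it easy to ask the key question the model encourages: could we have broken the chain at an earlier stage?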

Benefits and limitations of the Cyber Kill Chain

Benefits

  1. Provides a structured approach: The Cyber Kill Chain offers a systematic way for organizations to analyze and prioritize their security investments. By understanding the stages of an attack, organizations can develop targeted strategies to disrupt the chain and prevent attackers from achieving their objectives.
  2. Enhances detection and response: The model helps organizations identify the various stages of a cyberattack, enabling them to deploy appropriate countermeasures at each stage. This improves detection and response capabilities, allowing organizations to prevent or mitigate the impact of cyberattacks.
  3. Encourages proactive security measures: The Cyber Kill Chain encourages organizations to adopt a proactive approach to cybersecurity by focusing on disrupting the attack chain before it reaches its final stages.
  4. Adaptable to various contexts: While the model is often applied to malware and perimeter defenses, it can also be adapted to other contexts, making it a versatile tool for organizations to leverage in their cybersecurity efforts.

Limitations

  1. Limited scope: The Cyber Kill Chain primarily focuses on external threats and may not adequately address internal threats or other aspects of cybersecurity, such as insider threats, social engineering, or supply chain attacks.
  2. Overemphasis on prevention: The model heavily focuses on prevention, which can lead organizations to neglect other essential aspects of cybersecurity, such as detection, response, and recovery.
  3. Infrastructure-centric approach: Lockheed Martin’s countermeasures are primarily infrastructure-centric, which may not fully address the human element of cybersecurity or the need for robust security policies and procedures.

The Lockheed Martin Cyber Kill Chain provides a valuable framework for understanding and countering cyberattacks. While it has some limitations, organizations can benefit from its structured approach and adapt the model to suit their specific needs. By deploying countermeasures at each stage of the attack chain, organizations can significantly reduce the chances of a successful cyberattack and enhance their overall security posture.

Combination Approach

The rapidly evolving cybersecurity landscape requires innovative and robust defensive strategies. A synergistic approach that combines Time-Based Security (TBS), Lockheed Martin’s Cyber Kill Chain, and MITRE’s ATT&CK framework offers an effective solution. This holistic model allows Security Operations Centers (SOC) to block, detect, and react to threats as early as possible in the attack timeline.

Combining approaches for success

The combined model utilizes the strengths of each individual framework:

  1. Time-Based Security (TBS): TBS advocates for timely detection and response to threats. The key is to ensure that the sum of the time to detect (D) and react (R) to a threat is less than the time the protection can withstand attack (P).
  2. Lockheed Martin’s Cyber Kill Chain: This model identifies seven stages of a cyberattack (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives). The goal is to detect and disrupt an attack as early in the kill chain as possible.
  3. MITRE’s ATT&CK Framework: This globally-accessible knowledge base of adversary tactics and techniques is based on real-world observations. It’s used as a foundation for the development of threat models and methodologies in the private sector, government, and the cybersecurity product and service community.

The goal is to detect and respond to threats before the ‘lateral movement’ step in the kill chain, known as the “breakout point”. In a typical attack simulation, the SOC will have multiple opportunities to block, detect, and react before the breakout point.

TBS and the OODA (Observe, Orient, Decide, Act) loop provide the context for tactical defensive decisions, allowing for efficient management of intrusion risk and minimizing the impact of a compromise.

Additionally, architecting for visibility is crucial. This involves obtaining raw telemetry from various data sources, including network, endpoint, and cloud. The goal is to ensure comprehensive coverage, not just detection, to avoid blind spots in security.
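A minimal sketch of how these ideas can be combined in practice is shown below, assuming each detection is tagged with an ATT&CK technique and a kill chain stage, and that the SOC has an estimate of the adversary’s breakout time; the data structures and timings are illustrative only.

```python
from dataclasses import dataclass

# Illustrative combined model: each detection carries an ATT&CK technique and
# a kill-chain stage, and we check whether detection + response completed
# before the adversary's estimated breakout (start of lateral movement).

@dataclass
class Detection:
    technique_id: str       # e.g. "T1566" (Phishing), "T1071" (Application Layer Protocol)
    kill_chain_stage: str   # e.g. "Delivery", "Command and Control"
    detect_minutes: float   # time from initial access to detection (D)
    respond_minutes: float  # time from detection to containment (R)


def beat_breakout(detections: list[Detection], breakout_minutes: float) -> bool:
    """True if at least one detection was handled before the breakout point."""
    return any(d.detect_minutes + d.respond_minutes < breakout_minutes
               for d in detections)


dets = [
    Detection("T1566", "Delivery", detect_minutes=20, respond_minutes=45),
    Detection("T1071", "Command and Control", detect_minutes=240, respond_minutes=60),
]
print(beat_breakout(dets, breakout_minutes=90))  # True: phishing handled in 65 minutes
```

The same record keeping also shows which ATT&CK techniques your telemetry never surfaced, which is exactly the coverage question raised above.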

In conclusion, the combination of TBS, Lockheed Martin’s Cyber Kill Chain, and MITRE’s ATT&CK framework provides a robust, comprehensive security architecture model. It enables proactive threat hunting, timely detection and response, and a nuanced understanding of cyber threats, equipping SOCs to effectively defend organizations against advanced adversaries.

Benefits and limitations of the Combined Model

Benefits

  1. Early Detection and Response: The combined model emphasizes early detection and response, minimizing the risk and potential impact of a breach.
  2. Greater Visibility and Context: It provides greater visibility into threats, offering a comprehensive view of attack techniques.
  3. Proactive Threat Hunting: By leveraging ATT&CK’s knowledge base, security teams can proactively hunt for threats, addressing potential false negatives from detection systems.

Limitations

  1. Complexity: Implementing and managing this combined approach requires significant technical expertise and resources.
  2. False Positives: While tuning for low false positives is critical, this could lead to false negatives, hence the need for proactive threat hunting.

AWS Well-Architected Framework

Amazon Web Services (AWS) has become a leading provider of cloud computing services, enabling organizations to build, deploy, and manage applications at scale. To help its customers, AWS produced the AWS Well-Architected Framework, which offers a set of guiding principles and best practices for building and maintaining cloud-native applications, with a focus on five core pillars.

The AWS Well-Architected Framework is a comprehensive guide designed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. The framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time. It is based on five pillars – Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

Overview of the Five Pillars

  1. Operational Excellence: Focuses on running and monitoring systems to deliver business value and continuously improve processes.
  2. Security: Emphasizes protecting information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  3. Reliability: Ensures the ability of a system to recover from infrastructure or service failures and dynamically scale to meet demand.
  4. Performance Efficiency: Involves using computing resources efficiently to meet system requirements and maintain efficiency as demand changes and technologies evolve.
  5. Cost Optimization: Aims to avoid unnecessary costs and ensure the effective use of resources, balancing cost with other architectural priorities.

The Security Pillar is a critical component of the AWS Well-Architected Framework, as it addresses the protection of information, systems, and assets. The pillar consists of several key design principles, such as implementing a strong identity foundation, enabling traceability, and applying security at all layers. It also encourages the use of automation to reduce human error and ensure consistent security policies.

Identity and Access Management (IAM)
This component focuses on ensuring that only authorized and authenticated users and services can access your AWS resources. IAM involves creating and managing user accounts, roles, groups, and permissions. By implementing a strong identity foundation, you can control who has access to which resources and what actions they can perform, reducing the risk of unauthorized access or malicious activities.
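As a hedged example of auditing the identity foundation, the following sketch (assuming the boto3 SDK and credentials with read-only IAM permissions) lists IAM users that have no MFA device enrolled; it is a simplified illustration, not a complete IAM review.

```python
import boto3

# Illustrative check: flag IAM users with no MFA device enrolled.
# Assumes credentials allowing iam:ListUsers and iam:ListMFADevices.
iam = boto3.client("iam")

paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        name = user["UserName"]
        mfa_devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if not mfa_devices:
            print(f"User without MFA: {name}")
```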

Detective Controls
Detective controls involve continuously monitoring and logging activities within your AWS environment to identify potential security threats, anomalies, or misconfigurations. By enabling traceability, you can analyse logs, set up alerts, and detect unauthorized activities or policy violations promptly, allowing for quicker incident response and remediation.

Infrastructure Protection
This component emphasizes applying security at all layers of your infrastructure, from the edge network to your applications and data. Infrastructure protection involves implementing network segmentation, firewalls, intrusion detection and prevention systems (IDPS), web application firewalls (WAF), and secure configurations for operating systems, databases, and other components. It helps to minimize the attack surface and protect your resources from potential threats.

Data Protection
Data protection is about safeguarding your data at rest and in transit. This includes encrypting data, managing encryption keys, applying access controls, and using secure communication protocols (such as HTTPS and TLS). By following data protection best practices, you can prevent unauthorized access, disclosure, or alteration of sensitive information, ensuring data confidentiality, integrity, and availability.
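As a small illustration of verifying one data-at-rest control, the sketch below (again assuming boto3 and read-only S3 permissions) flags buckets without a default encryption configuration; newer buckets are encrypted by default, so this is purely illustrative of the auditing pattern.

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative check: list S3 buckets with no default server-side encryption.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"No default encryption: {name}")
        else:
            raise
```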

Incident Response
Incident response is the process of preparing for, detecting, containing, and recovering from security incidents, such as data breaches or cyberattacks. This component involves developing an incident response plan, establishing a response team, and incorporating automation to reduce the time it takes to respond to incidents. By having a well-defined incident response process in place, you can minimize the impact of security incidents and prevent them from causing significant damage to your organization.

Benefits and limitations of the Security Pillar

Benefits

The AWS Well-Architected Framework offers several benefits over other architecture models, making it an appealing choice for organizations looking to build and optimize their cloud-based applications and infrastructure. Its primary value is in organisations that are looking to go “All-In” with AWS.

Comprehensive approach
The AWS Well-Architected Framework covers five essential pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. This comprehensive approach ensures that all aspects of your cloud environment are addressed, providing a holistic view of your architecture and helping you identify potential areas of improvement.

Adherence to best practices
The framework is based on AWS’s own experience in designing, deploying, and optimizing cloud architectures for a wide range of use cases and industries. By following the framework’s design principles and best practices, you can ensure that your cloud environment is designed to be secure, efficient, and cost-effective.

Flexibility and adaptability
The AWS Well-Architected Framework can be applied to a wide range of cloud-based applications and infrastructure, from simple web applications to complex, multi-tier architectures. This flexibility makes it suitable for organizations of all sizes and across various industries.

Continuous improvement
The framework encourages a continuous improvement mindset, emphasizing regular reviews and optimizations of your cloud environment. By adopting this approach, you can ensure that your architecture remains up-to-date and aligned with your organization’s evolving needs and goals.

Vendor support
As the framework is developed and maintained by AWS, you can benefit from extensive documentation, tools, and support resources provided by the vendor. This includes AWS Well-Architected Tool, AWS Well-Architected Labs, and access to AWS Solution Architects and Partners, who can assist you in implementing the framework and optimizing your cloud environment.

Enhanced security
The Security Pillar within the framework focuses on critical aspects of cloud security, such as identity and access management, detective controls, infrastructure protection, data protection, and incident response. By implementing these components, you can build a robust security posture that protects your information, systems, and assets.

Cost optimization
The Cost Optimization Pillar helps you identify opportunities to reduce costs without sacrificing performance, security, or reliability. By following the framework’s recommendations, you can minimize waste and ensure that your cloud resources are used efficiently.

Limitations

While the AWS Well-Architected Framework offers numerous benefits, there are certain limitations compared to other architecture models that should be taken into consideration:

  1. AWS-centric approach: The framework is designed specifically for AWS services and architectures, which can limit its applicability to other cloud platforms or hybrid environments. Organizations using multiple cloud providers or on-premises infrastructure may need to adapt the framework or supplement it with additional guidance to ensure its relevance across their entire infrastructure.
  2. High-level guidance: The AWS Well-Architected Framework provides high-level design principles and best practices, which can be useful for establishing a solid foundation. However, it may not provide detailed implementation guidance for specific use cases or industries. Organizations might need to seek additional resources or consult with experts to address their unique requirements.
  3. Time and effort investment: Implementing the AWS Well-Architected Framework requires a considerable investment of time and effort, particularly for organizations with complex or large-scale cloud environments. This may involve conducting in-depth architecture reviews, implementing recommended changes, and continuously monitoring and optimizing the environment.
  4. Potential vendor lock-in: Adopting the AWS Well-Architected Framework could potentially lead to vendor lock-in, as it is designed to align with AWS services and best practices. Organizations seeking to maintain flexibility in their choice of cloud providers may need to consider alternative architecture models or develop a multi-cloud strategy to mitigate this risk.
  5. Limited community support: Unlike some open-source frameworks or industry standards, the AWS Well-Architected Framework is developed and maintained solely by AWS. While AWS offers extensive support and resources, the framework may not benefit from the same level of community-driven development, collaboration, and innovation that open-source alternatives might provide.
  6. Evolving framework: The AWS Well-Architected Framework is continuously evolving as AWS introduces new services and updates best practices. Organizations using the framework must commit to staying up-to-date with these changes and adapting their architectures accordingly, which can be challenging and time-consuming.

Shift Left Model

The Shift Left model emphasizes the importance of integrating testing and security practices early in the development process. This approach encourages developers, testers, and security teams to collaborate from the beginning of a project, ensuring that potential issues are identified and addressed before they become more complex and expensive to fix.

Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure components, such as networks, storage, and servers, through code instead of manual processes. IaC allows for faster and more consistent infrastructure deployment, which can lead to improved security as configurations can be standardized and audited more easily.

Software as Code (SaC) is the practice of treating software components and their configurations as code, similar to IaC. This approach ensures that software components are built, tested, and deployed in a consistent and repeatable manner, enhancing security by reducing configuration errors and inconsistencies.

What are security gates?

The Shift Left model emphasizes integrating testing and security practices early in the development process, allowing for the identification and resolution of potential vulnerabilities before they become more complex and expensive to fix. A gated release process enhances this approach by defining specific security criteria that must be met before the code progresses to the next stage. Below is a description of a typical gated release process with security gates; each gate can have different acceptance criteria as code is promoted through our environments from Dev to Prod:

  1. Planning and Design Gate: During the planning and design stage, the development team, testers, and security professionals collaborate to identify potential security risks and define security requirements. This gate ensures that security considerations are incorporated into the project’s architecture and design from the outset.
  2. SAST Gate (Static Application Security Testing Gate): The SAST gate is a checkpoint in the gated release process where static application security testing is performed. SAST involves analyzing the source code, byte code, or binary code of an application to identify potential security vulnerabilities without executing the application. By incorporating the SAST gate into the development process, organizations can detect and address security issues early on, before the code is deployed to a testing or production environment.
  3. OSS Gate (Open Source Software Gate): The OSS gate is a checkpoint in the gated release process that focuses on identifying and managing risks associated with using open-source software components in an application. This gate involves conducting an open-source software audit to inventory all OSS components, assess their licenses, and check for known vulnerabilities. By incorporating the OSS gate, organizations can ensure compliance with licensing requirements and reduce the risk of introducing vulnerable components into their applications.
  4. Container Scanning Gate: The Container Scanning gate is a checkpoint in the gated release process for applications that utilize containerized environments, such as Docker or Kubernetes. This gate involves scanning container images for known vulnerabilities, misconfigurations, and compliance with best practices. By incorporating the Container Scanning gate, organizations can ensure the security of their containerized applications and minimize the risk of deploying vulnerable containers.
  5. Infrastructure Scanning Gate: The Infrastructure Scanning gate is a checkpoint in the gated release process that involves scanning an organization’s infrastructure for potential security vulnerabilities and misconfigurations. This may include network devices, servers, storage systems, and cloud environments. By incorporating the Infrastructure Scanning gate, organizations can identify and address infrastructure-related security issues before they become exploitable vulnerabilities in a production environment.
  6. DAST Gate (Dynamic Application Security Testing Gate): The DAST gate involves conducting dynamic application security testing on running applications to identify security vulnerabilities that may not be detected during static analysis. DAST simulates real-world attacks and checks for issues such as injection attacks, cross-site scripting (XSS), and insecure authentication. By incorporating the DAST gate into the development process, organizations can identify and address security vulnerabilities that may only become apparent during runtime.

The big difference with this model is that everything runs through code, and technical controls block promotion through our environments until the code meets our acceptance criteria; for example, any medium or high vulnerability stops promotion of code into production. In today’s modern environments, which make heavy use of cloud-native services, almost anything can be done as code, from user account management to firewall and network management.
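A minimal sketch of such a gate is shown below; it assumes a scanner has exported its findings as a JSON list with a severity field per finding, and the file layout, severity thresholds, and environment names are hypothetical.

```python
import json
import sys

# Illustrative promotion gate: fail the pipeline if the scan report contains
# any finding at or above the blocking severity for the target environment.
BLOCKING = {
    "dev": {"critical"},
    "test": {"critical", "high"},
    "prod": {"critical", "high", "medium"},
}


def gate(report_path: str, environment: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)  # expected: [{"id": "...", "severity": "high"}, ...]
    blocked = [f for f in findings
               if f.get("severity", "").lower() in BLOCKING[environment]]
    for finding in blocked:
        print(f"Blocking finding {finding.get('id')}: {finding.get('severity')}")
    return 1 if blocked else 0  # a non-zero exit code fails the CI stage


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1], sys.argv[2]))  # e.g. python gate.py report.json prod
```

In a CI/CD pipeline this script would run as a step after the relevant scan, so promotion simply cannot proceed while blocking findings remain.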

Benefits and limitations of Shifting Left

Benefits

To effectively implement Shift Left in your organization, start by encouraging cross-functional collaboration between developers, testers, and security professionals from the beginning of the project. This helps ensure that potential security issues are identified and addressed from the earliest stages of development.

Utilize automation tools to streamline testing, security checks, and infrastructure deployment. Continuous Integration (CI) and Continuous Deployment (CD) pipelines can be particularly useful for automating these processes, ensuring that infrastructure and software components are consistently and securely deployed.

Leverage IaC and SaC to create standardized configurations for your infrastructure and software components. This will help to reduce configuration errors and inconsistencies, improving security across the organization.

Regularly monitor and audit your infrastructure and software components to identify potential security issues. Implement continuous monitoring tools and perform periodic audits to ensure that your configurations remain secure and compliant with industry best practices and regulatory requirements.

Equip your team with the necessary skills and knowledge to effectively implement Shift Left, IaC, and SaC approaches. Provide training on security best practices, testing methodologies, and relevant tools to ensure that all team members can contribute to the success of the initiative.

Limitations

  1. Increased Complexity: Integrating security into every stage of the SDLC can increase complexity. Development teams must be well-versed in security principles and practices, which can be a steep learning curve.
  2. Resource Intensive: Shift Left security requires a considerable investment of resources, including time and personnel. It requires skilled security professionals who can work closely with development teams, which may be a challenge for smaller organizations or those with limited budgets.
  3. Potential for Slower Development Cycle: While the aim of Shift Left is to streamline the development process by catching issues early, it can initially lead to a slower development cycle, as new practices and checks are introduced.
  4. Dependency on Automation Tools: Shift Left security relies heavily on automation tools to scan and detect vulnerabilities during the early stages of development. However, these tools are not infallible and may miss some types of vulnerabilities, leading to a false sense of security.
  5. Risk of Overemphasis on Security: While the idea of ‘security first’ is fundamentally sound, there’s a risk of overemphasizing security to the detriment of other important aspects of software development, such as functionality and user experience.
  6. Culture Shift Requirement: Shift Left requires a significant culture shift within the organization. Everyone involved in the SDLC, from developers to operations, must embrace security as a part of their role. This change can face resistance and require considerable effort and leadership commitment to implement effectively.
  7. Potential for Increased Conflicts: Integrating security into the SDLC could potentially lead to more conflicts between the security team and the development team, especially if the security measures are perceived as hindering the development process or the implementation of certain features.

Zero Trust Model

The concept of Zero Trust Security is a revolutionary approach to cybersecurity that challenges the traditional “trust but verify” model. Created by John Kindervag while at Forrester Research, it offers a more comprehensive and secure model for today’s complex digital landscape.

Definition of Zero Trust Security

Zero Trust Security is a cybersecurity model that eliminates the concept of trust based on network location. It operates on the principle that no user or device, whether inside or outside the organization’s network, should be trusted by default. Instead, every access request is thoroughly verified before granting access. This approach embeds security into the DNA of your IT architecture, fundamentally transforming the design of networks from the inside out.

The Zero Trust Security Model

The Zero Trust Security model revolves around three main concepts:

  1. Secure Access: Ensure all resources are accessed securely, regardless of their location.
  2. Least Privilege Access: Adopt a strategy that enforces strict access control, granting users only the permissions they need to perform their tasks.
  3. Inspect and Log All Traffic: Monitor and log all network traffic to detect and respond to suspicious activities promptly.

The model was formalized in NIST SP 800-207, published in 2020, and offers a way to transition from the traditional “candy bar” network design, with its vulnerable “soft, chewy center,” to a more secure architecture. Implementation can start with new projects or technologies, followed by converting existing technologies and gradually chipping away at the rest.
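To illustrate the three concepts above, here is a minimal, hypothetical sketch of a policy decision point that evaluates every request on identity, device posture, and an explicit least-privilege grant, and logs each decision; a real deployment would rely on an identity provider, device management, and a dedicated policy engine rather than hard-coded data.

```python
from dataclasses import dataclass

# Illustrative policy decision point: every request is evaluated on identity,
# device posture, and an explicit grant - never on network location.

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    device_compliant: bool   # e.g. patched, disk encrypted, EDR healthy
    resource: str
    action: str


# Hypothetical least-privilege policy: user -> {resource: allowed actions}
POLICY = {"alice": {"payroll-db": {"read"}}, "bob": {"build-server": {"read", "write"}}}


def decide(req: AccessRequest) -> bool:
    """Allow only authenticated, compliant requests that match an explicit grant."""
    if not (req.mfa_verified and req.device_compliant):
        allowed = False
    else:
        allowed = req.action in POLICY.get(req.user, {}).get(req.resource, set())
    # Inspect and log all traffic: every decision is recorded, allow or deny.
    print(f"{req.user} -> {req.action} {req.resource}: {'ALLOW' if allowed else 'DENY'}")
    return allowed


decide(AccessRequest("alice", True, True, "payroll-db", "read"))   # ALLOW
decide(AccessRequest("alice", True, False, "payroll-db", "read"))  # DENY: non-compliant device
```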

Benefits and limitations of Zero Trust Security

Benefits

  1. Enhanced Security: By assuming all traffic is untrusted, Zero Trust Security significantly reduces the risk of a breach.
  2. Greater Visibility: By inspecting and logging all traffic, organizations gain greater visibility into their network activities, enabling them to detect and respond to threats more quickly.
  3. Flexibility: The Zero Trust model is adaptable and can be applied to a wide range of environments, including cloud, mobile, and on-premise networks.

Limitations

  1. Implementation Complexity: Transitioning to a Zero Trust model can be complex, requiring substantial changes to existing network architectures and policies.
  2. Cost: Implementing Zero Trust Security can be expensive, requiring significant investments in new technologies and training.
  3. User Experience: The strict access controls of Zero Trust Security can potentially impact user experience and productivity if not implemented carefully.

MICCMAC

In the ever-evolving cyber threat landscape, traditional security architectures often fall short of effectively protecting organizations. Recognizing these shortcomings, Richard Bejtlich introduced Defensible Network Architecture in the early 2000s and refined the concept in 2008 with Defensible Network Architecture 2.0, which introduces the MICCMAC model: Monitored, Inventoried, Controlled, Claimed, Minimized, Assessed, and Current.

Understanding MICCMAC

The MICCMAC model is a holistic and strategic approach to cybersecurity, incorporating seven key principles:

  • Monitored: The model emphasizes the deployment of security monitoring, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and other tools to actively monitor network traffic for potential threats.
  • Inventoried: Organizations are urged to maintain a comprehensive inventory of all hosts and applications within their network. This helps identify unauthorized devices or applications quickly.
  • Controlled: By implementing ingress and egress filtering, organizations can manage and restrict network traffic, reducing potential attack vectors.
  • Claimed: Assigning ownership to all systems enhances accountability for their security, fostering a sense of responsibility within the team.
  • Minimized: Reducing the attack surface by limiting unnecessary services and applications decreases the number of potential entry points for attackers.
  • Assessed: Regular vulnerability assessments help identify potential weaknesses in the network, allowing for timely mitigation.
  • Current: Ensuring all systems are up-to-date with the latest patches and security updates is a crucial element of the MICCMAC model.
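As a simple way to operationalize the seven principles, the sketch below scores an asset record against each MICCMAC property and reports the gaps; the asset fields are hypothetical and would in practice come from inventory, monitoring, and vulnerability-management tooling.

```python
# Illustrative MICCMAC check: score an asset record against the seven
# properties and report which ones it does not yet satisfy.
MICCMAC = ["monitored", "inventoried", "controlled", "claimed",
           "minimized", "assessed", "current"]


def miccmac_gaps(asset: dict) -> list[str]:
    """Return the MICCMAC properties this asset does not yet satisfy."""
    return [prop for prop in MICCMAC if not asset.get(prop, False)]


server = {
    "name": "web-01",
    "monitored": True,    # logs shipped to the SIEM
    "inventoried": True,  # present in the asset inventory
    "controlled": True,   # behind ingress/egress filtering
    "claimed": True,      # has a named owner
    "minimized": False,   # unused services still enabled
    "assessed": True,     # in the vulnerability-scan scope
    "current": False,     # patches outstanding
}
print(f"{server['name']} gaps: {miccmac_gaps(server)}")  # ['minimized', 'current']
```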

Benefits and Limitations of MICCMAC

Benefits

Despite its simplicity, the MICCMAC framework boasts several advantages over traditional security models.

  • Enhanced Intrusion Resistance: By incorporating a comprehensive and proactive security strategy, MICCMAC helps organizations improve their resistance to intrusions, even when absolute prevention is impossible.
  • Improved Security Posture: The framework’s focus on continuous monitoring, assessment, and improvement leads to a more robust and resilient security posture.
  • Increased Accountability: The principle of system ownership encourages a culture of responsibility and security awareness across the organization.
  • Reduced Attack Surface: Limiting unnecessary services and applications results in fewer potential entry points for attackers, reducing the overall risk.
  • Strategic Planning: With its emphasis on long-term commitment to better security, MICCMAC enables organizations to develop a more comprehensive and strategic approach to their cybersecurity efforts.

Limitations

While MICCMAC offers a more holistic approach to cybersecurity, it’s not without its limitations.

  • Oversimplification: The framework’s simplicity, while beneficial in some respects, can also be a disadvantage. It may not cover all aspects of an organization’s unique security needs, necessitating supplemental strategies.
  • Reliance on Human Intervention: MICCMAC relies heavily on human intervention for monitoring, assessing, and maintaining systems. This could be prone to human error and might not be scalable for larger organizations.
  • Requires Skilled Personnel: The model’s effectiveness depends on the organization’s ability to hire and retain skilled security professionals who can implement and manage the security measures.
  • Limited Automation: Unlike some more modern security frameworks, MICCMAC does not explicitly incorporate automation, which can help scale security efforts and reduce human error.

In conclusion, the MICCMAC model represents a significant evolution from traditional security architectures. Its emphasis on holistic and proactive security practices can enhance an organization’s ability to defend against the increasingly complex cyber threat landscape. However, it is crucial to consider its limitations and supplement the framework as needed to align with your organization’s unique security requirements.

What to do during a staff termination or change of employment?

The final category of Human Resource Security is how we handle security during employee terminations and when an employee’s role changes in some way. HR security should therefore be a consideration at all stages of an employee’s time at the organization.

7.3.1. Termination or change of employment responsibilities.

An organization’s security should take staff turnover into account, ensuring that when employees leave, either through termination or resignation, there is limited negative impact on the organization’s security posture. This requires additional planning to address security concerns and challenges.

We need to ensure that our IT departments are in sync with our HR departments so that an employee’s access to information systems is revoked at the correct time, protecting our systems and data from the former employee. This coordination is also required to ensure company equipment in the possession of the employee is returned. The employee should also be made aware of any obligations they have to the organization after their employment ceases, such as NDAs and non-compete clauses.

These steps should be carried out whenever an employee leaves their role, even if they are simply taking up a new role in the same company, as it limits the risk of data breaches and permission/privilege creep. One of the more neglected issues affecting companies is that when an employee moves teams, is promoted, or has a similar role change, they are given the access required to carry out their new role, but the access they had for their previous role is retained. This can lead to long-serving employees having access far in excess of what they need to carry out their tasks, presenting a risk. It is better to prevent this by removing access as soon as it is no longer required.
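A minimal sketch of how privilege creep can be caught after a role change is shown below; the role definitions and entitlement names are hypothetical, and a real review would draw on your identity management system.

```python
# Illustrative check for permission creep after a role change: compare an
# employee's current entitlements with those defined for their current role.
ROLE_ENTITLEMENTS = {
    "support-engineer": {"ticketing", "kb-read"},
    "sales": {"crm", "quoting"},
}


def excess_entitlements(current: set[str], role: str) -> set[str]:
    """Entitlements the employee holds but the current role does not require."""
    return current - ROLE_ENTITLEMENTS.get(role, set())


# Employee moved from support to sales but kept their old access
print(excess_entitlements({"crm", "quoting", "ticketing", "kb-read"}, "sales"))
# {'ticketing', 'kb-read'} -> candidates for removal
```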

How to reduce your risk throughout your staff’s employment.

Our obligations to our staff and to security continue throughout the time an employee works for the organization: making sure management are aware of their security-focused responsibilities, making sure security-awareness programs happen in your organization, and ensuring a fair, well-understood disciplinary process is in place to address security breaches.

7.2.1. Management responsibilities.

This control requires that management enforces the information security requirements. Good security practices cannot be put in place and abided by without managerial support. For example, having great, comprehensive password protection policies is no good if a manager allows, or even encourages, staff to share login credentials before they go on annual leave so they can cover for each other. Managers must be specifically instructed to apply information security standards as per the organization’s policies and procedures, including providing training where required.

7.2.2. Information security awareness, education and training

It is often said that the biggest risk to security comes from staff and users of IT systems. The best defence we have against this weak link is training. Studies have shown that the more educated a user is, the less likely they are to be the cause of security incidents, due to both increased knowledge and increased awareness of the risks the staff member’s organization faces. From this it quickly becomes apparent that one of the best ways to protect your organization is to put in place an initiative for continuous security awareness training, and more general information security training, for all staff. This will ensure that staff understand their responsibilities, the types of attacks that may target them, and the different threat vectors. It can also instil certain best practices into staff, such as not holding the door open for unidentified people, not giving out information over the phone, being more scrupulous when deciding whether an email is a phishing attack, and not discussing sensitive matters with colleagues in public places. Part of this training should include familiarising staff with policies and other security documents.

7.2.3. Disciplinary process.

It is inevitable that, despite our best efforts, there may be times when it becomes necessary to discipline staff. Having a defined disciplinary process that is enforced ensures both uniformity and fairness, and is an essential practice for modern organizations. This flows into ensuring security policies and procedures are followed and disciplining staff for deviations. If there is a verifiable security breach and the cause is found to be a staff member not following best practices, the disciplinary process should begin, and it should allow for different degrees of outcome depending on the severity of the wrongdoing. On top of this, employees should be made aware of this process.

Different countries have different laws and requirements around staff discipline and this process should be handled by qualified HR staff.

What to do before you hire a new staff member.

The importance of the pre-employment tasks in the Human Resource Security clause can be overlooked if we are not careful. Much is said of the insider threat, and when we hire a new employee we accept that risk into our companies. We should do everything we can to ensure the employees we hire are trustworthy, and provide them with clear terms of employment so that expectations are known and understood.

7.1.1. Screening

Screening of candidates should be carried out before they are offered a role at your organization. This has a range of benefits if done in an appropriate manner, including confirming an employee’s qualifications and work history, and uncovering issues that could compromise the person’s integrity. All screening should be executed to a level appropriate for the role the candidate is applying for: a prospective CEO should have a stringent check carried out, while an entry-level junior may have more relaxed checks. To ensure the correct screening measures are taken, all steps in the process should be defined in a procedure document and executed in a manner that meets local laws and regulations. Some concerns the organization should address when deciding on screening procedures are: which specific employees/roles will carry out the screening, what exactly is screened for, and who verifies that the screening process was carried out correctly.

Some examples of what we can screen for are:

  • Valid references given when candidate applied for the position,
  • Garda vetting where required,
  • Gaps in CV,
  • Criminal conviction,
  • Background in drug, alcohol or gambling abuse,
  • Verification of the credentials the candidate claims.

Depending on the industry and the role the candidate is applying for, the depth of the screening can vary, but screening should be carried out for employees, contractors, and outsourcing companies.

7.1.2. Terms and conditions of employment.

It is important that information security responsibilities for both employees and contractors are included in their contracts. This eases confusion, as all employees and contractors understand what is expected of them and agree to these terms before joining your organization. These terms can include confidentiality and non-disclosure agreements for staff handling sensitive data; they can inform employees of any monitoring that is carried out (within the boundaries of local laws) and the use of employee PII; and they can even include acknowledgements that the results and outputs an employee produces in the course of their employment are owned by the company (which would deal with patents, copyrights and other IP types). Lastly, they should include general information security requirements and responsibilities.

Getting secure with mobile devices and remote workers!

Organizing your information security does not just cover devices that stay inside your office. We must take into account portable devices, BYOD devices, and staff who work from home. Most organizations have to plan for remote workers connecting to their systems, from travelling sales folk to people working from home, so we need policies in place to handle this securely. Likewise, with smart wearables, laptops, mobile phones and a variety of other mobile devices brought into your organization every day, we are confronted with a unique challenge in keeping ourselves secure. Fortunately, by applying these two controls in our organization we can better manage these risks.


6.2.1. Mobile devices policy.

In the modern organization, mobile devices are a given: staff with laptops that move around and leave your organization’s premises; uncontrolled iPads, smartphones and smart wearables with incredibly accurate cameras, all with Wi-Fi, Bluetooth and even GPS. The threat landscape is changing. All organizations, big and small, face risks due to these devices, and this risk needs to be properly managed. The control recommends a Mobile Device Policy to address these concerns by imposing minimum standards and usage restrictions on these devices. The policy should include details on the following (a sketch of checking a device against such a policy follows the list):

  • registration of mobile devices so the organization can track devices and identify owners in case of misuse,
  • physical protection of mobile devices,
  • restrictions on software installation,
  • requirements for mobile device software versions and for applying patches,
  • restriction of connection to information services,
  • access controls,
  • cryptographic techniques to encrypt the drive and to connect to the office from outside the organization,
  • malware protection such as requiring a specific Anti-Virus version with up to date signatures,
  • remote disabling, erasure, or lockout in case the device is lost so that any sensitive information stored on that device can be destroyed,
  • backups,
  • use of web services and apps.
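As a hypothetical illustration, the sketch below evaluates a device record against a few of the policy points above; the field names are invented, and in practice this data would come from a Mobile Device Management (MDM) platform.

```python
# Illustrative evaluation of a device record against a few of the policy
# points above. Field names are hypothetical examples.
def device_policy_violations(device: dict, min_os: tuple[int, int]) -> list[str]:
    violations = []
    if not device.get("registered"):
        violations.append("device not registered to an owner")
    if not device.get("disk_encrypted"):
        violations.append("drive not encrypted")
    if tuple(device.get("os_version", (0, 0))) < min_os:
        violations.append("OS version below minimum")
    if not device.get("av_signatures_current"):
        violations.append("anti-malware signatures out of date")
    if not device.get("remote_wipe_enabled"):
        violations.append("remote erasure not enabled")
    return violations


phone = {"registered": True, "disk_encrypted": True, "os_version": (16, 2),
         "av_signatures_current": False, "remote_wipe_enabled": True}
print(device_policy_violations(phone, min_os=(17, 0)))
# ['OS version below minimum', 'anti-malware signatures out of date']
```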

With mobile devices, there are times when the device is the employee’s private property, and placing restrictions on what the employee sees as their own can be a challenge, but it is necessary for the protection of the organization. Having a policy in place that an employee needs to read through and agree to can help staff understand where the boundaries of acceptable use are and what is required before a device can be used at all. An easy-to-understand document can help improve acceptance of the organization’s mobile device security.

Even simply making employees aware through security awareness training can reduce the risk of mobile devices, for example by making them conscious of their surroundings when they open sensitive emails and encouraging them to question which wireless networks they use for business purposes.

The strictness of these restrictions should be tailored to your organization’s risk appetite. There are many Mobile Device Management platforms companies can make use of to better manage these assets.


6.2.2. Teleworking.

When an organization allows its employees to work remotely, it introduces risks that must be acknowledged and mitigated. There are many things an organization should consider, such as whether to provide an employee with equipment to work from home or to allow them to use their own personal devices. In general, the organization will provide company laptops to staff working from outside the organization, who connect to the corporate network with a VPN. Organizations that allow employees to use their personal equipment should take additional steps to ensure threats are not introduced to the network, for example requiring software to be installed that monitors the applications installed on the device, granting the corporate IT team additional powers over the personal device, and ensuring the security level of that device (such as requiring a patched OS, up-to-date antivirus, etc.).

Other controls can include controlling the times at which employees can access the network, to prevent abuse.