In the beginning, networks and the Internet writ large were designed around the notion of intrinsic security based on a perimeter, wherein a person, application or third party was verified and subsequently granted an all-inclusive ‘trusted’ status. Suffice it to say, this approach has resulted in damages and incalculable losses on a global scale. Trust based on verification at only a few points of access has proven to be lacking. As the Cloud Security Alliance puts it in their Software-Defined Perimeter Architecture Guide, “Today’s network security architectures, tools and platforms all fall short of meeting the challenges presented by our current security threats.” With recent network technology advances, we are now capable of building the continuous verification needed to enable zero-trust.

The concepts of Zero-trust (ZT) and the software-defined perimeter (SDP) were originally introduced by John Kindervag (now Field CTO at Palo Alto Networks) in 2008 and by the Cloud Security Alliance in 2014, respectively. Since then, these invaluable ideas have become mainstream. In response to today’s dynamic threat landscape, multiple industry analysts, government agencies, and vendors have been converging on an array of practical and effective standards and solutions, based on a consistent set of frameworks, for addressing infrastructure, cloud, and web access security.

Conceptually, software defined perimeter and zero-trust are powerful constructs based on the following core principles:

  • Never trust
  • Always verify
  • Protect everything, everywhere, every time

However, until recently these constructs were deemed too brute-force and impractical, because an effective level of detection and response could not be achieved at speed and scale without impeding business operations. Instead, organizations relied on isolation and perimeter-based defenses (firewalls, ACLs, IDS/IPS and network segmentation), leaving core networks, critical infrastructure and assets woefully exposed for attackers to exploit and compromise. Compounding these issues, the speed, scale and complexity of cyber-attacks have resulted in unacceptably long mean-time-to-detect (MTTD) and mean-time-to-remediate (MTTR) timeframes. According to the FireEye/Mandiant M-Trends 2019 report, the 2018 average time to detect, or “dwell time”, was 78 days (with variations by geographic region), down from 101 days in 2017. Likewise, the 2018 Cost of a Data Breach Study found a 197-day mean time to identify and a 69-day mean time to contain.


Crunchy on the outside, soft on the inside, and more complex than ever

The original presumption that networks did not need inherent security may have been reasonable when networks were believed to exist in a trusted and secure environment. However, as we now know, networks left unprotected are insecure and vulnerable to attack. Anyone can access anything and discern communications at the network level simply by looking at the 5-tuple (source and destination IP addresses, source and destination ports, and protocol), and can exploit that access and transparency for their own ends. With the explosion of more complex network topologies, the mobile and IoT industries, and internet consumerization, along with the increased demands on our networks, we have developed technologies that attempt to deliver security and scalability in tandem: obfuscation, encryption, VLANs, and SSL/SSH. As a result, for better or worse, most network traffic is difficult to inspect, attribute and authenticate.
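To make the 5-tuple point concrete, the sketch below shows how little effort it takes to recover that information from an unencrypted IPv4/TCP packet. It is a minimal, illustrative example in Python; the sample header bytes are fabricated for demonstration and are not taken from any real capture.

```python
import socket
import struct

def extract_five_tuple(packet: bytes):
    """Return (src_ip, dst_ip, src_port, dst_port, protocol) from a raw IPv4 packet."""
    ihl = (packet[0] & 0x0F) * 4                      # IPv4 header length in bytes
    protocol = packet[9]                              # 6 = TCP, 17 = UDP
    src_ip = socket.inet_ntoa(packet[12:16])
    dst_ip = socket.inet_ntoa(packet[16:20])
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return src_ip, dst_ip, src_port, dst_port, protocol

# Fabricated 40-byte IPv4+TCP header: 192.168.0.100:51514 -> 93.184.216.34:443
sample = bytes.fromhex(
    "450000280001000040060000"   # version/IHL, ToS, length, ID, flags, TTL, proto=TCP, checksum
    "c0a80064" "5db8d822"        # source and destination IP addresses
    "c93a01bb"                   # source and destination ports
    "0000000000000000"           # sequence and acknowledgement numbers
    "50022000" "00000000"        # data offset/flags, window, checksum, urgent pointer
)
print(extract_five_tuple(sample))  # ('192.168.0.100', '93.184.216.34', 51514, 443, 6)
```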

This increase in complexity can be both beneficial and detrimental, for bad actors as well as for the individuals and organizations with valuable information to protect. In practice, it has widened the gaps between when a cybersecurity incident begins, when the actual breach occurs, and when (if at all) an organization detects and recovers from it.

The need to deploy zero-trust to prevent, detect and remediate vulnerabilities continuously and in real time is more important than ever. The average loss suffered from a single cybersecurity breach is now close to $4 million per incident, up from $3.6 million last year, according to the most recent estimates from the Ponemon Institute. The tide has turned. 


Looking ahead: Applying SDP and zero-trust to networking is now possible

It is now generally recognized that networks are the core, the lifeblood and the central nervous system of an organization’s information technology infrastructure. As the adage goes, “the network is the computer.” Additionally, both Forrester Research and Gartner identify the network layer as a vitally important attack surface, and view zero-trust and SDP at that layer as critical for organizational security.

Because an organization's information assets are accessible to, and must transit across, its networks, the networks are both an attractive and a significant attack surface. Simultaneously, they are an immutable source of “ground truth”, and therefore an essential component (some would say the foundation) of any security architecture. Specifically, any zero-trust or defense-in-depth strategy must focus on the networks, since any attempt to infiltrate an organization can be detected and/or stopped at the network level.

The good news is that this is now possible! With the availability of in-memory processing resources, on-demand hyperscale compute infrastructure, microservices, and event stream processing-based analytics, organizations can now efficiently and effectively build and deploy the continuous, real-time threat detection, authentication, control and remediation systems that are at the heart of a zero-trust model/framework.


Visibility anywhere is the enabler of zero-trust

The ability to deploy zero-trust is built on two fundamental necessities:

  1. The capacity to gain universal visibility into the physical or virtual network infrastructure
  2. The capacity to deploy embedded network monitoring and data-plane control (software) services anytime and anywhere they are needed - throughout the enterprise or in the cloud

These capabilities are now enabled by the availability of highly efficient in-memory software sensors placed throughout a network. These sensors continuously produce precise streaming network traffic metadata to feed on-premise and cloud-based event stream processing and analytics technologies that ingest information from a wide spectrum of data sources and leverage ML/AI for analysis and response. Next-generation analytic workflows can unify that analysis and subsequently actuate responses to network traffic. Anything from Security Operations and Analytics Platform Architecture (SOAPA) deployments to NOCs/SOCs leveraging SIEM can, at a fundamental level, ingest and make sense of a huge amount of intelligence - not just from network sensors but also from a broad array of logs, system tools, endpoint data and third-party intelligence. Once consumed, the intelligence can be processed and enriched continuously, in real time, to deliver accurate situational awareness. This in turn produces predictive assessments that can exploit data-plane and control-plane functionality to respond to anomalous or malicious activity at machine speed.
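As a minimal illustration of that ingest, enrich and assess loop, the sketch below consumes a stream of flow-metadata records and raises a simple verdict. The record fields, the threshold, and the alert callback are hypothetical; a production pipeline would run on an event-streaming platform with ML-driven analytics rather than a single hand-written rule.

```python
from collections import defaultdict
from typing import Callable, Iterable

def assess_flows(records: Iterable[dict], alert: Callable[[dict, str], None],
                 byte_threshold: int = 50_000_000) -> None:
    """Continuously consume flow-metadata records and raise simple verdicts."""
    seen_destinations = defaultdict(set)      # per-host set of previously seen peers
    for rec in records:                       # records arrive as a stream, not a batch
        host, peer = rec["src_ip"], rec["dst_ip"]
        if peer not in seen_destinations[host] and rec["bytes_out"] > byte_threshold:
            alert(rec, "large transfer to a never-before-seen destination")
        seen_destinations[host].add(peer)

# Example usage with a couple of fabricated records:
stream = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",     "bytes_out": 12_000},
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.77", "bytes_out": 80_000_000},
]
assess_flows(stream, alert=lambda rec, why: print(f"ALERT {rec['src_ip']} -> {rec['dst_ip']}: {why}"))
```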

What does this mean in the context of leveraging the entire network surface to achieve zero-trust? IT professionals now have the ability to get comprehensive, continuous, real-time visibility across the entire infrastructure, including attribution and authentication of all traffic, with real-time detection and response/remediation capabilities to filter, block, control or otherwise engineer unwanted or malicious network traffic.


Essential Detection and Response Capabilities for building a Zero-trust Network Architecture

New forms of continuous, real-time network inspection, attribution and traffic engineering functionality are foundational to implementing zero-trust. These functions constitute new forms of software-based network function virtualization (NFV) capabilities that fall into two general categories, sensing/detection and action/response (a minimal sketch of wiring the two together follows the lists below):

1. Sensing/sensors - Detection

  • TLS, TCP/IP and HTTP fingerprinting – for fraud, DGA and bot detection
  • DNS/DHCP monitoring – for access monitoring, rogue device detection and insider threat detection
  • Geolocation attribution – via GTP inspection
  • Payload inspection (RegEx) and analytics – for infiltration and exfiltration detection

2. Action/actuators - Response

  • Utilize data-plane programming technologies to take action on network traffic:
    • Parse
    • Filter
    • Deny
    • Terminate
    • Redirect
    • Isolate

The nature of these capabilities requires the ability to perform both stateless (packet filtering and inspection) and stateful (connection-based) processing, as well as the ability to inspect traffic and enforce decisions at the application layer; a brief sketch of the stateless/stateful distinction follows the list below. Additionally, these functions must be:

  • Deployed ubiquitously, anywhere and anytime they are needed, for complete visibility and the most effective protection and response
  • Software-driven and dynamic
  • Able to be orchestrated as needed and configured programmatically to accommodate a changing operational and threat landscape
  • Delivered as cloud-native (containerized) software that can be deployed when and where it is needed - in the cloud, on-premise, and in hybrid environments - to be truly efficient and effective
  • Lightweight and efficient, so they remain performant and scalable
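
As a rough illustration of the stateless-versus-stateful distinction noted above, the sketch below contrasts a per-packet rule with a decision that depends on connection history. The rule set, the packet fields and the connection table are hypothetical simplifications, not a description of any particular product.

```python
# Stateless: each packet is judged on its own header fields (the 5-tuple).
BLOCKED_PORTS = {23, 445}                       # e.g. telnet, SMB

def stateless_allow(pkt: dict) -> bool:
    return pkt["dst_port"] not in BLOCKED_PORTS

# Stateful: decisions also depend on the connection's history.
active_connections = set()                      # tracked (src, dst, sport, dport) tuples

def stateful_allow(pkt: dict) -> bool:
    conn = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])
    if pkt.get("syn"):                          # new connection attempt
        if stateless_allow(pkt):
            active_connections.add(conn)
            return True
        return False
    return conn in active_connections           # mid-stream packets need prior state

pkt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
       "src_port": 51514, "dst_port": 443, "syn": True}
print(stateful_allow(pkt))                      # True: allowed and now tracked
```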


Moving Forward with implementing zero-trust technology

We are dedicated to advancing these cutting-edge, real-time detection (sensor) and response (data-plane programming) technologies to protect organizations’ most important assets. We develop software systems that deliver high-resolution visibility into network traffic and the ability to effect data-plane controls to engineer network traffic at any speed, for any network or IT protocol, continuously and in real time.

Our software sensor and data-plane programming technologies are an integral part of this new zero-trust and SDP landscape. They provide unique and valuable capabilities for network detection and response.

When implementing our solutions, compared to similar products, you can expect:

  • Deeper, more efficient network traffic analysis, with continuous, real-time visibility into all network traffic
  • The ability to continuously verify and validate connecting technologies and applications
  • Improved speed, accuracy and effectiveness compared to traditional SIEM, firewall (IDS/IPS) and WAF systems
  • The ability to enrich CASB functions with richer and deeper attribution and insight into client-server communications
  • Improved network threat detection and response/enforcement capabilities
  • The ability to streamline traffic flows and to engineer and optimize network performance

When implementing zero-trust, new forms of continuous, real-time visibility and control are essential. Without visibility, control is useless, and vice versa. Furthermore, these capabilities must be able to operate at speed and scale. Fortunately, these goals are now practically achievable and organizations that take the time and effort to implement them can reap huge direct and indirect benefits.


Written by MantisNet