Answering Tough Questions About Network Metadata and Zeek

We often receive questions about our decision to anchor network visibility in network metadata, as well as how we choose and design the algorithmic models that further enrich it for data lakes and security information and event management (SIEM) systems.

The story of Goldilocks and the Three Bears offers a fitting analogy: she stumbles across a cabin in the woods in search of creature comforts that strike her as just right.

As security operations teams search for the best threat data to analyze in their data lakes, network metadata often lands in the category of being just right.

Here's what I mean: NetFlow offers incomplete data and was originally conceived to manage network performance. PCAPs are performance-intensive and expensive to store in a way that preserves fidelity for post-forensic investigations. The tradeoff between the two leaves security practitioners in an untenable state.

NetFlow: Too little

As the former Chief of Analysis for US-CERT observed: "Many organizations feed a steady stream of Layer 3 or Layer 4 data to their security teams. But what does this data, with its limited context, really tell us about modern attacks? Unfortunately, not much."

That's NetFlow.

Originally designed for network performance management and repurposed for security, NetFlow fails when used in forensic scenarios. What's missing are attributes like application and host context that are foundational to threat hunting and incident investigations.

What if you need to go deep into the connections themselves? How do you know if there are SMBv1 connection attempts, the main infection vector for WannaCry ransomware? You might know if a connection on Port 445 exists between hosts, but how do you see into the connection without protocol-level details?

You can't. And that's the problem with NetFlow.

PCAPs: Too much

Used in post-forensic investigations, PCAPs are handy for payload analysis and file reconstruction, which help determine the scale and scope of an attack and identify malicious activity.

However, an analysis of full packet capture in Security Intelligence explains how even the simplest networks would require hundreds of terabytes, if not petabytes, of storage for PCAPs.

Because of that, not to mention the exorbitant cost, organizations that rely on PCAPs rarely store more than a week's worth of data, which is of little use in a large data lake. A week's worth of data is also insufficient when you consider that security operations teams often don't learn for weeks or months that they've been breached.

Add to that the huge performance degradation – I mean frustratingly slow – when conducting post-forensic investigations across large data sets. Why would anyone pay to store PCAPs in return for lackluster performance?

Network metadata: Just right

The collection and storage of network metadata strikes a balance that is just right for data lakes and SIEMs.

Zeek-formatted metadata gives you the proper balance between network telemetry and price/performance. You get rich, organized and easily searchable data with traffic attributes relevant to security detection and investigation use cases (e.g., the connection UID that links related protocol logs for the same connection).

Metadata also enables security operations teams to craft queries that interrogate the data and lead to deeper investigations. From there, progressively targeted queries can be constructed as more and more attack context is extracted.
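
As an illustration of that workflow, and returning to the SMBv1 question raised in the NetFlow discussion, here is a minimal Python sketch assuming Zeek is writing JSON-formatted logs: it filters conn.log for SMB activity on port 445, then pivots by connection UID into smb_cmd.log to flag SMBv1 dialects. The file paths are placeholders, and the exact smb_cmd.log field names (notably "version") should be verified against your Zeek version.

```python
import json

def load_zeek_log(path):
    """Yield one record (dict) per line of a JSON-formatted Zeek log."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Step 1: find connections to TCP/445 that Zeek's protocol detection tagged as SMB.
suspect_uids = {
    rec["uid"]
    for rec in load_zeek_log("conn.log")  # placeholder path
    if rec.get("id.resp_p") == 445 and "smb" in (rec.get("service") or "")
}

# Step 2: pivot by connection UID into the SMB command log and flag SMBv1 dialects.
# (The "version" field name is an assumption based on standard smb_cmd.log output.)
for rec in load_zeek_log("smb_cmd.log"):  # placeholder path
    if rec.get("uid") in suspect_uids and str(rec.get("version", "")).startswith("SMB1"):
        print(f'SMBv1 activity: {rec["id.orig_h"]} -> {rec["id.resp_h"]} (uid {rec["uid"]})')
```

Each answer from a query like this narrows the next one, which is the progressive targeting described above.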

And it does so without the performance and big-data limitations common with PCAPs. Network metadata reduces storage requirements by over 99% compared to PCAPs. And you can still selectively store the right PCAPs, retaining them only after metadata-based forensics has pinpointed the payload data that is relevant.
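
To make that selective approach concrete: once the metadata has pinpointed the hosts and port of interest, a targeted capture can be carved out of a short-lived rolling packet buffer with a standard BPF filter, rather than retaining full captures indefinitely. This is an illustration only; the file names and addresses below are placeholders.

```
# Carve only the flagged conversation out of a rolling capture buffer.
tcpdump -r rolling-buffer.pcap -w flagged-smb-445.pcap \
  'tcp and host 10.1.2.3 and host 10.4.5.6 and port 445'
```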

The perils of managing your own Bro/Zeek deployment

Another question customers often ask us is whether they should manage their own Bro/Zeek deployments. The answer is best explained through the experience of one of our government customers, which chose to deploy and manage Zeek itself.

At the time, the rationale was reasonable: Use in-house resources for a one-time, small-scale deployment, and incrementally maintain it with the rest of the infrastructure while providing significant value to their security team.

But over time, it became increasingly untenable:

  • It was difficult to keep the deployment tuned. Each patch or newly released version required the administrator to recompile a binary and redeploy.
  • It became difficult to scale. Sensor scalability is partly an architectural decision, but sensors rarely scale by default, especially those that push much of the analytics and processing onto the sensor itself. We don't see many deployments that can even operate at 3 Gbps per sensor. Over time, the sensors began to drop packets, and the customer suddenly had to architect clusters to support the required processing (a minimal cluster-layout sketch follows this list).
  • It was painfully difficult to manage legions of distributed sensors across multiple geographic locations, especially when sensor configurations were heterogeneous. When administrators who were familiar with the system left, a critical part of the security infrastructure was left unmanaged.
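
To give a sense of what that re-architecture involves, below is a minimal, illustrative ZeekControl node.cfg sketch for a small cluster. The hostnames, interface, worker count, and load-balancing method are placeholders; what actually works depends on the sensor hardware and Zeek version.

```
# node.cfg (illustrative only): one manager, one proxy, and a multi-process
# worker that spreads packet processing across CPU cores on a dedicated sensor.
[manager]
type=manager
host=zeek-manager.example.internal

[proxy-1]
type=proxy
host=zeek-manager.example.internal

[worker-1]
type=worker
host=zeek-sensor-1.example.internal
interface=eth0
lb_method=af_packet    # or pf_ring, depending on what the sensor supports
lb_procs=8
```

Even with a layout like this in place, someone still has to size, patch, and monitor it, which is exactly the burden described above.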

This no-win tradeoff drives many customers to ask us how their security teams can better spend their time. Should they manually administer self-managed tools (a.k.a. barely keeping afloat) or focus on being security experts and threat hunters?

In addition to the deployment challenges for those who opt for the self-managed approach, day-to-day operational requirements like system monitoring, system logging and even front-end authentication pose a heavy burden.

Most choose to find a partner that can simplify the complexity of such a deployment: accelerate time to deployment, enable automatic updates that eliminate the need to patch and maintain regularly, and perform continuous system monitoring.

These are default capabilities that free you to focus on the original charter of your security team.

About the author: Kevin Sheu leads product marketing at Vectra. During the past 15 years, he has held executive leadership roles in product marketing and management consulting, where he has demonstrated a passion for product innovations and how they are adopted by customers. Kevin previously led growth initiatives at Okta, FireEye and Barracuda Networks.
