Hunting for Threats

Why We Are Focusing on Threat Hunting
01:24 MIN
Three Parts of Successful Threat Hunting
02:17 MIN
Taking a Look at Use Case Identification
01:44 MIN
Data Collection - Tiers of Visibility
01:50 MIN
Establishing and Increasing the Value of Our Audit Program
01:04 MIN
Enriching the Data
02:04 MIN
Example Use Case: Forced Authentication
04:34 MIN

Darren Spruell: My name is Darren Spruell. I'm a Senior Threat Analyst with the Sumo Logic spec ops team. We're a services group here at Sumo Logic, focused on security use cases and helping customers be successful with our Cloud SIEM and continuous intelligence products. The topic today is hunting for threats in multi-cloud and hybrid cloud environments. The way we'd like to approach this is to present a set of best practices and insights from the spec ops team to enable successful threat hunting in hybrid environments, be those cloud, on-premises, multi-cloud, et cetera. So to start off, we'd like to talk about why we're here and why the topic is threat hunting. In today's environment, we know that we deal with a pervasive threat landscape, and I think every CSO realizes that they need to stop assuming no breach. The new assumption is: assume breach. Generally, we expect that some threat actors will be successful in bypassing our perimeter defenses. Some of those threat actors may be present on our network and could be staging attacks, and daily we're able to read about new attacks that are emerging and threat actors that carry out operations to meet their goals. Today's security and risk management programs are often set up with other intended purposes and not necessarily given the threat hunting mission. So this is something a lot of teams grow into: they need to start building out a program that performs threat hunting. What we find with new charters is that teams often look at new technology, and at Sumo Logic we believe that our continuous intelligence platform and Cloud SIEM Enterprise are really effective and efficient. This is what we in spec ops use on a daily basis to assist our customers in this mission. So the right platform can definitely make this goal easier to deliver on. Let's talk about leveraging Sumo Logic for threat hunting. Successful threat hunting in the Sumo Logic platforms, as far as spec ops is concerned, rolls up into these particular bullets. The process that we start with is based on use case identification. When we talk about identifying threat actors and how they operate, we can typically break those down into use cases that help us drive data collection. From there, we get into data normalization and enrichment. This is one of the main values provided by Cloud SIEM Enterprise: we're able to ingest a lot of different vendor and product event data and normalize it into a consistent stream that can be used throughout the platform. Once we've normalized and enriched that data, we also integrate threat intelligence data. This is how the Cloud SIEM Enterprise platform is able to take what we know about active and ongoing attacks today, for example any known attacker infrastructure, and synthesize that into the event streams that produce alerting from CSE and from the continuous intelligence platform. After threat intel data is integrated, and we'll talk more about different ways to do this, we get into the core correlation and alerting use cases. This is the main purpose of Cloud SIEM Enterprise, and certainly of any SIEM product; Cloud SIEM Enterprise is a cloud-native way to achieve this. We use a set of correlation rules that provide various capabilities, and we distill those down into insights that provide the alerting use cases and SOAR integration for those alerts.
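As a rough illustration of that flow, here is a minimal Python sketch, under loose assumptions, of normalizing a raw vendor event into a common record, matching it against a threat intelligence indicator set, and emitting a signal when a rule fires. The field names, indicator list, and rule are invented for illustration; they are not the Cloud SIEM Enterprise schema, rule language, or API.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """Illustrative normalized event record (not the real CSE schema)."""
    src_ip: str
    dst_ip: str
    dst_port: int
    fields: dict = field(default_factory=dict)  # overflow for unmapped vendor fields

def normalize(raw: dict) -> Record:
    """Map a raw vendor event (two hypothetical field spellings) into the common shape."""
    return Record(
        src_ip=raw.get("src") or raw.get("srcaddr", ""),
        dst_ip=raw.get("dst") or raw.get("dstaddr", ""),
        dst_port=int(raw.get("dport") or raw.get("dstport") or 0),
        fields=raw,
    )

# Threat intel integration: a toy indicator set (203.0.113.99 is a documentation address).
KNOWN_BAD_IPS = {"203.0.113.99"}

def rule_matches(rec: Record) -> bool:
    """Toy correlation rule: traffic to known attacker infrastructure."""
    return rec.dst_ip in KNOWN_BAD_IPS

def evaluate(raw_events: list[dict]) -> list[Record]:
    """Normalize, then correlate; matching records become 'signals'."""
    return [rec for rec in map(normalize, raw_events) if rule_matches(rec)]

if __name__ == "__main__":
    events = [{"src": "10.1.2.3", "dst": "203.0.113.99", "dport": "443"}]
    print(evaluate(events))  # one signal: traffic to a known-bad destination
```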
Once we've achieved this, we have all of that data, including the Cloud SIEM Enterprise signals, insights, and the native record data, available for scalable searches and data review. This occurs within the continuous intelligence platform, which is a scalable search platform that lets you review gigabytes, terabytes of data within your environment. All of the telemetry that's being generated by security devices and passing through the event workflow is available there for searching and for data analytics use cases. So this process lets us achieve a scalable threat hunting program from the spec ops team, and each one of our customers can take part in that themselves and implement their own threat hunting program as well. Let's talk a little bit about use case identification. The goal that we look at for evolving a process like this is basically maturity progression. This means that we have a reasonable place to start right out of the box. By default, we have out-of-the-box global content. Sumo Logic's content team produces a tremendous amount of rules that are geared towards the MITRE ATT&CK framework. This is a standard framework and threat management process that provides a taxonomy as well as an enumeration of attacker tactics and techniques. Tactics are high-level groupings of the general steps that an attacker takes as they carry out their intrusions on a network, and techniques are the more specific methods that they use to carry out those tactics. All of our rules are backed with classifications that identify MITRE ATT&CK tactics and techniques. So this out-of-the-box content is useful for a turnkey use of Cloud SIEM Enterprise: with no effort, those rules can be enabled, and any ingested data that matches what the rules look for will automatically alert. That's a very turnkey application. In this use case identification process, what we go for is more of a hands-on approach. With this hands-on approach, we're trying to increase the specificity and also the value of this operation, and this of course requires more effort. Out of the box, the rules are present, but we can continue to feed those rules. A very effective way to do this is to integrate your threat hunting mission with your incident response mission. Taking the lessons learned from incident response engagements and turning those into new use cases is a really effective way to do this. This is something that any SOC analyst or any incident handler can participate in, and it's basically a tie-in with your incident response team. So incident lessons are a great way to get started with this. Further evolution can branch out of our own environment and our own incident experiences and also integrate open source intelligence. Anything that we can pick up in blogs, Twitter posts, any sort of [inaudible] sourcing can also be used to feed additional use cases and additional rule content in Cloud SIEM Enterprise. This includes premium intelligence: any sort of engagement with your ISACs, contracts with premium intel vendors, or anybody that can deliver specific intelligence on attacker operations also provides very valuable use cases. The last point of evolution that we see here is what we term purple teaming, and this can be any sort of red or purple teaming operation. This is where we really push the boundaries.
And instead of taking what we've already experienced in our own environment or what others have reported from their environments, we build on that by really trying to identify new, unique ways of penetrating defenses and making sure that we have use cases that meet those new concepts. These are often very hypothesis driven, and there's a lot of great tooling available out there, such as Atomic Red Team, as well as other commercial frameworks that help with this purple team effort. At the end of this, what we should have is a significant set of use cases, something that's always evolving and moving forward and keeping track of what offensive security research and attacker TTP developments are bringing us. So, moving into data collection. Note here that data collection comes after use case development in many cases. Really what we're doing is thinking in terms of visibility. The chart that I have here shows a set of columns for different tiers of visibility. These are things like network-based analytics, endpoint visibility, authentication logs, cloud-based events and telemetry, things that are focused on threats, and also file analysis and file-based telemetry. What we can do is distill from this whichever technologies we have in our environment, and we can collect data from them. The way to start out with this is to identify what our audit policies are. Each one of these systems and each one of these classifications has specific audit policies tied to it. Some of these audit policies may be disabled by default, and it helps to enable them. An example of this is in Windows: the process creation logs are part of the advanced audit framework, and those need to be enabled through a group policy. When we look at it from a data source perspective, we'll think that we want to collect from firewalls, IDS, and so on and so forth. Really what we encourage is to think in terms of which data sources and which types of event logging will allow us to meet our use cases. One additional thing related to audit policies is that even when some audit policies are configured, their settings may also need to be tweaked. So again, with process creation logging in Windows, by default it doesn't log the command line that was invoked; it logs only the invoked executable. What we want to do is tweak that policy to enable command line auditing as well as process creation. What this gives us is something that we believe in a lot at Sumo Logic, which is the term observability. In this case, the observability is a focus on our security telemetry, and this is how we actually instrument our environment for auditing. Auditing is the key here. That's what enables us to hunt for threats: generating all of this visibility at multiple layers within our environment. So, a quick checkpoint. Our collection is set up in three steps. We've determined how to find the threats and we've now instrumented our environment accordingly. We started by developing the use cases, we configured audit policies on our data sources, and now we should be collecting that relevant data. Let's move on to data normalization and enrichment. The goal here is to leverage Cloud SIEM Enterprise for its core value points. The four things that we identify here are these features: mappings, context enrichment, network blocks, and match lists. These are features of the CSE platform that enable us to take our data and really make it powerful.
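Circling back to the Windows audit-policy tweak mentioned above, the sketch below shows, assuming you are configuring a single host directly rather than through Group Policy, how the Process Creation audit subcategory and command line logging can be enabled. It uses the documented auditpol subcategory and the ProcessCreationIncludeCmdLine_Enabled registry value; run it as Administrator, and treat it as an illustration rather than a rollout script.

```python
import subprocess
import winreg

def enable_process_creation_auditing() -> None:
    # Turn on the "Process Creation" advanced audit subcategory (Event ID 4688).
    subprocess.run(
        ["auditpol", "/set", "/subcategory:Process Creation", "/success:enable"],
        check=True,
    )
    # Include the full command line in 4688 events (off by default).
    key_path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "ProcessCreationIncludeCmdLine_Enabled", 0,
                          winreg.REG_DWORD, 1)

if __name__ == "__main__":
    enable_process_creation_auditing()
```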
These features enable us to develop new use cases based on how we enrich and contextualize that data. If we can do these things, the result will be normalized and enriched records. Records are produced in Cloud SIEM Enterprise, and they're a consistent, normalized representation of a security event. Similar security events will produce the same types of records, and this sort of abstraction makes it really easy to write common rules across many different vendors and products. What this enables is consistent and repeatable searches. This is one of the keys we've found to a successful threat hunting operation. When we talk about establishing and increasing the value of our audit program, there are a few things we can take as assumptions. We know that we'll develop many new use cases on top of our existing environment. This isn't something where we produce a few and then we're done; this is something that's going to continue to evolve. We also know that we're going to run many searches across different but similar types of events. So what we do is write the content once and configure it once, and then we can continually audit and search against it many times, tweaking our searches and really dialing things in to identify new questions that we need to ask and get new answers to those questions. All of this requires our data to be consistent, predictable, and something that we can scale and repeat on, and Cloud SIEM Enterprise is very helpful for this. So, talking about mappings: the goal here is normalizing the event data across vendors and products. Mappings are key to any analytics platform. One thing we want to do, considering the use case for cloud, is look at how traffic logging is done in an environment. Traditionally, traffic logging is something that was done on premises, where an organization would want to capture logs from firewalls, IDS, or any system that has network traffic visibility. We will definitely have different vendors at play, especially when we consider a hybrid environment with on-premises and cloud. So let's take an example where, on premises, we have data centers and maybe employee office buildings where we have Palo Alto Networks firewalls. They'll produce a certain type of log. Additionally, we have cloud infrastructure, and in those clouds, if we're running a multi-cloud environment, we may have AWS VPC flow logs, we may have Azure network security group logs, and we may have GCP VPC flow logs. Each of these, of course, logs in its own specific format, not necessarily with the same fields, although they are generally the same type of data. So in Cloud SIEM Enterprise, we provide a common record format that we call a schema, and through our mappings and this record format, we normalize these into similar records. In this case, what we'll produce is called a network flow object, and a network flow object contains the very familiar 5-tuple that you would expect: source and destination IP address and port, as well as protocol. If it's available in the native event, then we'll also pull in a tremendous amount of other data. Any data that's present in the event can be mapped into fields, what we call top-level fields in the schema, and the rest of the data is available in another field, kind of as an overflow. So mappings are a key way to start with this, and this helps us prepare for the rest of the [inaudible] process. Moving into enrichment.
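To make the mapping idea concrete before moving on, here is a minimal sketch that normalizes three simplified flow-log shapes (Palo Alto syslog-style fields, AWS VPC Flow Logs, Azure NSG flow logs) into one network-flow 5-tuple. The input field names are abbreviated examples rather than complete vendor schemas, and the record class is illustrative, not the actual CSE network flow object.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkFlow:
    """Common 5-tuple produced regardless of the source log format."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str

def map_palo_alto(e: dict) -> NetworkFlow:
    return NetworkFlow(e["src"], int(e["sport"]), e["dst"], int(e["dport"]), e["proto"])

def map_aws_vpc_flow(e: dict) -> NetworkFlow:
    # AWS VPC Flow Logs use IANA protocol numbers; 6 == TCP, 17 == UDP.
    proto = {6: "tcp", 17: "udp"}.get(int(e["protocol"]), str(e["protocol"]))
    return NetworkFlow(e["srcaddr"], int(e["srcport"]), e["dstaddr"], int(e["dstport"]), proto)

def map_azure_nsg(e: dict) -> NetworkFlow:
    return NetworkFlow(e["sourceAddress"], int(e["sourcePort"]),
                       e["destinationAddress"], int(e["destinationPort"]),
                       e["protocol"].lower())

# Downstream rules and searches see the same shape no matter which mapper ran:
flows = [
    map_palo_alto({"src": "10.0.1.5", "sport": "50000", "dst": "8.8.8.8",
                   "dport": "53", "proto": "udp"}),
    map_aws_vpc_flow({"srcaddr": "10.0.2.7", "srcport": "44321", "dstaddr": "1.1.1.1",
                      "dstport": "443", "protocol": "6"}),
]
print(flows)
```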
One of the key aspects of any environment that has network-based visibility is applying information about your IP schema into the SIEM. Our purpose here is to teach Cloud SIEM Enterprise about your IP space as a customer. There are two purposes. We need to be able to separate mine from yours, in other words, which of these addresses are ours and which are the rest of the internet's. This gives us the ability to define internal and external. The other thing that's nice is that we can apply labels for IP addresses. At the net block level, we can apply a representation of what our organization is. If our network is set up and segregated by, let's say, geographic region, then further by site location, and within those sites we have additional VLANs, then we can identify data center space, server blocks, accounting, HR; all of that IP representation can be mapped in with network blocks in CSE. For additional detail on this, out of the box we already provide network blocks for RFC 1918 addresses. We know that these are private and always internal, but we encourage our customers, and we work with them, to input a very specific IP schema. The more detail we get into CSE, the more successful we are, first of all, in enriching the data to identify what's occurring when an event flashes in front of an analyst, but also in achieving higher accuracy with the rules that are focused on scenarios like internal-to-external traffic or inbound attacks on our network. So network blocks are a very powerful way to set up success when it comes to network data. Another feature that CSE provides is enrichment. This feature relates to match lists, and match lists are basically a collection of similar objects. There are three purposes to a match list. We want to build lists of object classes; think, for example, a bucket that can hold IP addresses of vulnerability scanners, or a list that can contain the identity of all of our domain controllers. These all play into the use cases that we identified early on. So in our rules, we can integrate these match lists so that we can match, for example, when a directory synchronization event occurs with a system that's not a domain controller. This, for example, could indicate a DCSync attack. We can use them for any sort of query filters, and we can also use them for data quality. So as we baseline our environment and find things that are behaving a certain way because they're a certain type of system, like a vuln scanner, a domain controller, or a DNS server, we add those to a list and integrate them into our filters to tighten up the logic and increase the data quality. The secondary aspect of this is that we get additional context. We can use the API to query match lists, for example, to get context information, and every record that's created in CSE will also contain the match list field on there. So as we're looking at the event data that's been normalized and presented to an analyst, we can very quickly tell whether that record has anything to do with a DNS server, a domain controller, a vulnerability scanner, an external partner, anything like that. So match lists are very multi-purpose, and we use them quite a lot in spec ops to achieve data quality and hunting accuracy. All right, we have some time left. Let's dive into an example use case. In this use case, we will take one of the attacks that we're aware of. This technique is called forced authentication.
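Before walking through the forced authentication example, here is a small sketch of the network block and match list ideas just described, built on Python's ipaddress module. The block labels and list names stand in for an organization's own IP schema and are not CSE configuration syntax.

```python
import ipaddress

# Network blocks: label our own ranges; anything unlabeled and non-private is external.
NETWORK_BLOCKS = {
    ipaddress.ip_network("10.10.0.0/16"): "us-east datacenter / server VLANs",
    ipaddress.ip_network("10.20.5.0/24"): "HQ - accounting",
    ipaddress.ip_network("192.168.50.0/24"): "HQ - HR",
}

# Match lists: buckets of known systems used for rule filters and analyst context.
MATCH_LISTS = {
    "vuln_scanners": {"10.10.3.15", "10.10.3.16"},
    "domain_controllers": {"10.10.1.10", "10.10.1.11"},
    "proxy_servers": {"10.10.2.20"},
}

def enrich_ip(ip: str) -> dict:
    """Return internal/external, network block label, and match list hits for one IP."""
    addr = ipaddress.ip_address(ip)
    label = next((name for net, name in NETWORK_BLOCKS.items() if addr in net), None)
    return {
        "ip": ip,
        "is_internal": addr.is_private,  # RFC 1918 and other private ranges
        "network_block": label,
        "list_matches": [name for name, members in MATCH_LISTS.items() if ip in members],
    }

print(enrich_ip("10.10.1.10"))
# {'ip': '10.10.1.10', 'is_internal': True,
#  'network_block': 'us-east datacenter / server VLANs',
#  'list_matches': ['domain_controllers']}
```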
In MITRE ATT&CK, forced authentication is T1187, and it's focused on credential access. With this attack, the adversary will attempt to lure a user, or somehow force a user's operating system or browser, to access an untrusted resource and solicit an authentication attempt. Native Windows integrated authentication will typically use an NTLM challenge, which results in the system sending an encrypted hash to a remote system as part of the challenge. That system can capture the encrypted hash, run an offline brute force or dictionary attack against it, and recover the victim's credentials; that will typically be a hashed password that can be cracked offline. There's a second aspect to this particular technique, and it relates to outbound SMB generally. SMB, the Server Message Block protocol, is typically used for workgroup and internal enterprise communications, file server access, and similar things. It's very unusual that an enterprise system would connect out to the internet to access something over SMB. If we see any form of connection out over SMB in a network session, that's highly suspicious, and it could indicate a forced authentication attack or something similar. The rule here integrates several of the things that we've talked about. We want to note the use of internal-to-external logic on this. That's actually the bottom part, the last AND clause, where we say the destination device IP "is internal" is false. So we're looking at an external destination, or we're looking at one of the proxy servers in our environment as a destination. If we do use a proxy server, this rule logic helps us match on that; if our environment does not use proxy servers, that's fine. Really what we're going for is any sort of network connection where the device has identified that the SMB protocol is being spoken and one of our internal systems is attempting to connect out. So this very small expression of rule logic integrates the internal-to-external functionality from our network blocks, and it also integrates the list matches field there, which is part of the match list feature. We've classified all of our proxy servers in a list, and we can check if that list contains a given destination IP address. So this is a very specific sort of rule in that it's looking at the field here, the metadata device event ID. If this matches SMB, then it tells us that a device in our network, perhaps a next-generation firewall, or one of the recommended network sensors that we make available to our customers or through partners, is identifying SMB traffic specifically. What we can do is try another approach with this rule also, and this is a modification to the use case. What if, in our environment, outbound SMB is blocked? For example, the standard port for SMB these days is port 445 TCP. A lot of organizations very intelligently will block this with their egress filters, knowing that most systems don't need to connect out to the internet; ports 139 and 445 might be blocked. In that case, the previous rule will never fire, because there's no session established through the firewall and the systems never actually talk SMB. But we are still interested any time a system attempts to connect out on these ports. So this is an opportunity to detect anomalous behavior: we've successfully implemented a control to block this behavior, but now we're interested in identifying further attempts.
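As a rough, hedged translation of the two detections discussed here, the established-SMB rule and the port-based variant described next, the sketch below expresses the logic as plain Python predicates over an enriched record. The field names mirror the enrichment sketches above, and the reading of the destination clause, external destination or a known proxy server, is an interpretation of the description rather than the exact CSE rule expression.

```python
SMB_PORTS = {139, 445}

def _outbound(record: dict) -> bool:
    """Destination is external, or it's one of our proxy servers (which would relay out)."""
    return (record.get("dst_is_internal") is False
            or "proxy_servers" in record.get("dst_list_matches", []))

def smb_to_external(record: dict) -> bool:
    """Rule 1 (higher severity): a sensor identified the SMB protocol in an
    established session from an internal host heading outbound."""
    return (record.get("device_event_id") == "smb"
            and record.get("src_is_internal") is True
            and _outbound(record))

def smb_attempt_to_external(record: dict) -> bool:
    """Rule 2 (port-based variant): same outbound logic, but keyed on destination
    port so blocked *attempts* still alert even though SMB is never spoken."""
    return (record.get("dst_port") in SMB_PORTS
            and record.get("src_is_internal") is True
            and _outbound(record))

# Example: an internal host trying to reach an internet IP on 445, blocked at egress.
attempt = {"dst_port": 445, "src_is_internal": True,
           "dst_is_internal": False, "dst_list_matches": []}
assert smb_attempt_to_external(attempt) and not smb_to_external(attempt)
```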
So this second, port-bound query basically takes the SMB protocol out of the picture and replaces it with a test for destination ports. We're looking for dest ports 139 and 445, and the rest of the logic remains the same. So with this use case, we have two rules. One we would probably consider higher severity, because it involves an actually established SMB session; the other relates to attempted activity. And this is enough to get an analyst to look at something that could be a precursor to an attack within our environment. All right, this concludes the technical portion of our talk. We do want to remind our attendees that we have another session where the spec ops team will be presenting. This is a workshop called spec ops threat hunting, presented by James [inaudible] and Brian Gardener. James leads the spec ops team, and Brian's another analyst on the team with me. In this session, we will be able to learn various tactics, techniques, and procedures, and get into a bit more detail on how Cloud SIEM Enterprise can be used to detect active attacks. So we'd encourage everybody to catch that. That's at 2:00 PM to 3:00 PM Pacific time, so that'll be a little bit later this afternoon. Well, I thank you for attending this session. Just a quick reminder also to join our next session; we'd love to see you at the workshop, and we wish everybody success. At this time, we would also like to mention that the Sumo Logic spec ops team is available for hire. Put us to work. We do managed threat hunting, we can help supplement your defensive program, and we provide support, consultation, and use case development. We can do a lot of things. Please see our website and you'll find more details there.

DESCRIPTION

Today's session on hunting for threats in multi-cloud and hybrid cloud environments is led by Darren Spruell. Darren is a Senior Threat Analyst for Special Operations at Sumo Logic. He dives into best practices for hunting threats and shows how effective the Sumo Logic platform and Cloud SIEM Enterprise are. Lastly, Darren walks us through a use case scenario using the forced authentication technique.

Today's Guests

Darren Spruell

Sr. Threat Analyst for Special Operations, Sumo Logic