FalconFriday — Recognizing Beaconing Traffic—0xFF0D
In today’s edition, we’ll share a method of detecting beaconing C&C traffic from large data sets of proxy traffic.

TL;DR for blue teams: By making certain assumptions, it is possible to find a beaconing needle in a very large haystack of web requests.

TL;DR for red teams: Do not just pick an arbitrary genuine browser's User Agent for your beaconing. To avoid detection, you may need to exactly match the User Agent of the browser the targeted user actually uses.

Beaconing introduction

After an initial compromise, attackers will often attempt to establish a connection to a C&C server. Many attackers use regular HTTP/HTTPS requests to reach the C&C server, in an attempt to blend into the legitimate traffic sent by the compromised machine. These so-called “beaconing” connections are often periodic, for example one request every 10 minutes to check whether new commands are available from the C&C server to be executed. Many attack tools also have built-in randomization (“jitter”) of the interval, to evade detections based on a fixed beaconing interval.

How to recognize beaconing

Recognizing beaconing traffic can be like looking for a needle in a very large haystack. In huge enterprise environments there can be up to a billion web requests made per day. Inspecting all of these requests for potential beaconing is a daunting task that can also lead to many false positives.

One approach to recognizing beaconing in such large data sets is to make certain assumptions about the beaconing traffic and keep only the requests that match these assumptions. The challenge is to choose assumptions that still detect a substantial percentage of beaconing traffic, while not causing too many false positives.

Assumption #1 — Attackers often masquerade beacon traffic using legitimate browser User Agents

In order to blend in better, attackers will often attempt to masquerade their beaconing traffic so that it appears to originate from a legitimate browser. This is done by mimicking the “User Agent” string sent by the browser to match that of a legitimate browser. For example, the popular attack framework Cobalt Strike allows configuring the User Agent using a malleable profile. Publicly available profiles recommend setting this to a value that will allow it to blend in with regular traffic.

Assumption #2 — Beaconing is periodic and occurs over a longer time period.

Typical beaconing traffic is periodic in nature, meaning there will be requests to the same destination at a regular interval, for example every hour, for as long as the implant is active.

There is a nice Sentinel rule in the Azure Sentinel GitHub repository that detects beaconing using this technique. While this provides a great start, we noticed that such rules can produce large numbers of false positives in large enterprise environments.

If we only use assumptions 1 and 2 for our detection logic, we are still left with hundreds of false positives per day in a large environment. Therefore, we need an additional assumption to distinguish attacker beaconing traffic from regular browser traffic. Of course, such an assumption can also introduce false negatives by filtering out too much.

Assumption #3 — Attackers might not always use the same browser User Agent as the actual user is using.

The third assumption we make here is one that in our testing allowed us to go from hundreds of false positives per day to only a few potential beaconing connections.

It assumes that while the attacker mimics a legitimate browser User Agent, they won’t always manage to exactly match the User Agent of the browser the targeted user actually uses.

For example, the attacker prepares a User Agent pretending to be Chrome, while the user actually uses Edge for their daily browsing. In this example, the proxy logs will show requests to many different domains with the Edge User Agent, but requests to only one or two beaconing-related domains with the Chrome User Agent.

Building a KQL detection rule based on these assumptions

The first thing required to build an actual detection rule based on these three assumptions is a proxy log source that contains a User Agent. Unfortunately, the network logs generated by Microsoft Defender for Endpoint are not sufficient, since they do not include the User Agent and are severely rate limited, logging only a few connections to the same domain per day. Suitable log sources for this traffic would be Zscaler or Palo Alto proxy logs.

A query based on Zscaler logs is available in our FalconFriday repository.

The assumptions explained above are implemented as follows in the query:

Assumption #1

The query constructs a list of common web browser User Agents by summarizing proxy traffic and finding User Agent strings that are used by a configurable minimum number of users to visit a configurable minimum number of domains. For example: used by at least 10 users to visit at least 50 different domains. This assumes that ‘real’ web browsers visit a large number of domains. This method automatically excludes certain fat-client applications that also send a User Agent, but only communicate with a very limited set of domains. Using this dynamic approach instead of a hard-coded list of popular web browsers ensures the list remains up to date, even when new versions of web browsers are released.
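As a minimal sketch, this dynamic allow-list of browser User Agents can be built roughly as follows in KQL. The table and column names used here (ZscalerLogs, UserName, UserAgent, DestinationHostName) are placeholders and will differ per environment; the actual query is in the FalconFriday repository.

```kql
// Hypothetical proxy log schema; adjust table and column names to your environment.
let minUsers = 10;    // a 'real' browser UA is used by at least this many users...
let minDomains = 50;  // ...to visit at least this many distinct domains
let BrowserUserAgents = ZscalerLogs
    | where TimeGenerated > ago(1d)
    | summarize UserCount = dcount(UserName),
                DomainCount = dcount(DestinationHostName)
        by UserAgent
    | where UserCount >= minUsers and DomainCount >= minDomains
    | project UserAgent;
```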

Assumption #2

The query looks for users using one of the User Agents defined in the previous step to visit the same domain over a longer period of time (you could start with 1 day). It does this by counting the number of distinct hours during which a domain was visited by a specific user with a particular User Agent. Ultimately, the query builds a list of suspicious user, User Agent and domain combinations, where the domain was visited during multiple distinct hours over a longer period of time.
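A sketch of this periodicity check, again using the placeholder schema from above (ZscalerLogs, UserName, UserAgent, DestinationHostName) and an assumed threshold of 12 distinct hours:

```kql
// Hypothetical schema; BrowserUserAgents is the allow-list built in the previous step.
let BrowserUserAgents = ZscalerLogs
    | where TimeGenerated > ago(1d)
    | summarize UserCount = dcount(UserName),
                DomainCount = dcount(DestinationHostName)
        by UserAgent
    | where UserCount >= 10 and DomainCount >= 50
    | project UserAgent;
let minHours = 12; // seen in at least 12 distinct hours within the day
ZscalerLogs
| where TimeGenerated > ago(1d)
| where UserAgent in (BrowserUserAgents)
// Count the distinct hours in which each user/UA/domain combination was seen.
| summarize DistinctHours = dcount(bin(TimeGenerated, 1h))
    by UserName, UserAgent, DestinationHostName
| where DistinctHours >= minHours
```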

Assumption #3

The list of suspicious visits is reduced by keeping only requests where the user uses that particular User Agent to visit a small set of domains and never visits any common domains such as www.google.com, indicating that this is not an actual web browser, but a beacon pretending to be one.
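This filtering step can be sketched as follows. SuspiciousVisits stands for the hypothetical materialized output of the previous step; the threshold of 3 domains is an assumed starting value.

```kql
// Keep only users that use this User Agent for a handful of domains in total;
// a real browser would use it to visit many domains.
let maxDomainsForUA = 3;
let UserAgentUsage = ZscalerLogs
    | where TimeGenerated > ago(1d)
    | summarize TotalDomains = dcount(DestinationHostName) by UserName, UserAgent;
SuspiciousVisits
| join kind=inner (UserAgentUsage) on UserName, UserAgent
| where TotalDomains <= maxDomainsForUA
```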

To further reduce false positives, a look-back mechanism is implemented that searches the proxy logs further back in time to check whether the user used the suspicious User Agent to visit more domains in the recent past. In our testing we noticed that this can remove false positives where a user keeps a single browser tab open for a longer period of time.
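The look-back could be sketched like this, reusing the hypothetical SuspiciousVisits results and schema from above, with an assumed 7-day look-back window:

```kql
// Look-back: if the user recently used this User Agent to visit many other
// domains, it is likely a real browser (e.g. a single long-lived tab), so drop it.
let HistoricUsage = ZscalerLogs
    | where TimeGenerated between (ago(8d) .. ago(1d))
    | summarize HistoricDomains = dcount(DestinationHostName) by UserName, UserAgent;
SuspiciousVisits
| join kind=leftouter (HistoricUsage) on UserName, UserAgent
| where isnull(HistoricDomains) or HistoricDomains <= 3
```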

Improvements and Caveats

No single rule can identify all types of beaconing behavior in large and complex environments, so additional detections will be required.

Some additional rules that might be implemented for this are:

  • Connections made using rare or known malicious User Agents.
  • Connections made to newly registered or low reputation domains.
  • Detecting domain fronting.
