FalconFriday — AzureAD Edition — 0xFF11
After a few missed editions of FalconFriday, we are back! Today, we will cover some detections specifically for attacks related to AzureAD. To make up for the missed editions, we will treat you to a bonus detection rule from our premium catalogue, normally reserved for our paying customers.

TL;DR for blue teams: With a bit of fine-tuning of these rules, you get good insight into suspicious deviations from normal behavior in AAD. This helps a great deal in focusing your hunting and response activities within the huge set of events generated by AAD.

TL;DR for red teams: These rules equip the blue team with the means to detect slight changes in user behavior. Make sure to emulate your target’s MO even more closely when attacking AzureAD.

Getting caught with an expired credential

With access to a victim’s machine, one of the ways to jump to the cloud is by extracting the user’s cookie. When done properly, this can even bypass MFA. Luckily for the good guys, there is a way to catch this in certain circumstances — and this is how.

When an attacker clones the cookie, they are effectively grabbing a bearer token of a user. Of course, this is extremely simplified, but enough to grasp the concept of this detection. As good security practice preaches, any kind of credential should have a maximum lifetime, and tokens are no different. So at some point the bearer token will inevitably expire. The full details of when and how these tokens are refreshed are complicated, but irrelevant in this case. Performing an action or authentication with a revoked or expired token triggers specific error codes in the AzureAD logs.

70008 ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time.
50132 SsoArtifactInvalidOrExpired - The session is not valid due to password expiration or recent password change.
50173 FreshTokenNeeded - The provided grant has expired due to it being revoked, and a fresh auth token is needed.
81010 DesktopSsoAuthTokenInvalid - Seamless SSO failed because the user's Kerberos ticket has expired or is invalid.

The detection looks for any IP that triggers one of these error codes and then checks whether the error is followed by a successful login from the same IP. The underlying assumption: if the token of a legitimate user has expired or been revoked, the user will simply re-authenticate, while an attacker can’t.

This detection assumes that the attacker isn’t pivoting through the machine of the target. You can refine this rule even further by also taking the user agent of the browser into consideration, to get visibility in case an attacker does pivot through the target’s machine.
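The core of this logic can be sketched in KQL. This is a minimal illustration assuming the Microsoft Sentinel SigninLogs table, not the actual FalconFriday query; column names and the look-back window may differ in your environment.

```kql
// Minimal sketch: flag IPs that triggered one of the expiry/revocation error
// codes without any subsequent successful sign-in from that same IP.
// Note: ResultType in SigninLogs is a string, not a number.
let suspiciousErrors = dynamic(["70008", "50132", "50173", "81010"]);
let lookback = 1d;
let failures = SigninLogs
    | where TimeGenerated > ago(lookback)
    | where ResultType in (suspiciousErrors);
let successes = SigninLogs
    | where TimeGenerated > ago(lookback)
    | where ResultType == "0"        // "0" = successful sign-in
    | project IPAddress;
failures
| join kind=leftanti successes on IPAddress   // keep only IPs with no successful login
| summarize FirstFailure = min(TimeGenerated), ErrorCodes = make_set(ResultType)
    by IPAddress, UserPrincipalName
```

A production version would additionally verify that a successful login actually occurs after the failure, and, as suggested above, could bring the UserAgent into the join keys.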

Finally, there are two more error codes which you can add to the list above, but these are fairly noisy. They are the error codes for “everyday” token refreshes, which happen regularly and often without the user noticing. Obviously, in a large organization you’ll end up with a lot of false positives, but for small organizations or certain accounts, they might be useful. The error codes are:

50089 FlowTokenExpired - Authentication failed. Have the user try signing in again with username and password.
70044 The session has expired or is invalid due to sign-in frequency checks by Conditional Access.

You can get a copy of the query on our FalconFriday repository.

Mismatching OS and UserAgent

Another way we found to reliably detect suspicious activity is by comparing the OS field with the UserAgent. Spoofing the UserAgent is one of the tricks attackers use to mimic the behavior of the legitimate user. Microsoft hasn’t disclosed how the OS is determined during sign-in, but we’re guessing it combines multiple data sources (window.navigator.platform, MDM data, fat client vs. browser, etc.).

An attacker who has obtained access to an account and performs a login from a Linux VM with a Windows user-agent is going to get flagged by this rule. Another use case is an attacker changing the user-agent to bypass Conditional Access, which creates a mismatch between OS and UserAgent. So if you change the UserAgent to that of the Word iOS app while actually running on Windows, this will show up here.

The query works by deriving the OS from the UserAgent. This derived OS is then compared with the OS as determined by Microsoft, and any mismatch is reported.

For the sake of simplicity and reducing noise, this rule ignores cases where the OS as determined by Microsoft is empty, or if the login was unsuccessful, or if there is no UserAgent logged, etc.
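The comparison, including the noise-reduction filters just mentioned, can be sketched roughly as follows. This is a simplified illustration against the Sentinel SigninLogs table; the UserAgent-to-OS mapping and the OS strings reported by AzureAD are assumptions you should verify against your own logs.

```kql
// Simplified sketch: compare the OS derived from the UserAgent with the OS
// reported by AzureAD. The mapping below is illustrative, not exhaustive.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == "0"                          // successful sign-ins only
| extend OSFromAAD = tostring(DeviceDetail.operatingSystem)
| where isnotempty(OSFromAAD) and isnotempty(UserAgent)  // skip empty fields
| extend OSFromUA = case(
    UserAgent has "Windows", "Windows",
    UserAgent has_any ("iPhone", "iPad"), "Ios",
    UserAgent has "Android", "Android",            // before Linux: Android UAs contain "Linux"
    UserAgent has "Mac OS X", "MacOs",
    UserAgent has "Linux", "Linux",
    "Unknown")
| where OSFromUA != "Unknown"
| where not(OSFromAAD startswith OSFromUA)         // startswith is case-insensitive in KQL
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName,
    OSFromAAD, OSFromUA, UserAgent
```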

You can find the query on our GitHub page.

Bonus: Rare UserAgent & app combination

This one is complicated, so bear with me while I try to explain the high-level concept. If you want an in-depth understanding, the only way is to go through the KQL yourself. A description of the details in a blog post would only make it more complicated. 😉

The basic idea is that when using an “app” for AzureAD sign-ins, you’d normally expect a UserAgent which is “close” to this particular app. So when you use your Word desktop application to log in to AzureAD, the UserAgent should be representative of that. You can replace “Word” in this example by most other apps that do SSO against AzureAD.

The query assumes that the recent past is “clean”/normal and uses the past 7 days as a baseline to determine how often each UserAgent is seen. Based on this baseline, it checks the past day for any abnormal UserAgent and app combination. UserAgents aren’t compared like for like, but are grouped into UserAgent categories such as “OfficeApp — OneDrive”, “Browser — iOS Chrome”, etc. This is done to determine whether an app has a UserAgent in a category that is “close”, as mentioned previously.

Since this query is from our premium collection, it’s fairly complete in covering edge cases. It does of course require some tuning. There are three parameters which you need to tune:

  1. The minimum threshold for the number of sign-ins per app. An app which is only used 10 times in your look-back period doesn’t work well for outlier detection. We recommend a value of 100, but this can vary depending on your organization and environment.
  2. Period to look back for building your baseline. In principle, the longer you look back, the more stable your baseline becomes. This does come at an increased computational cost.
  3. The period in which to look for outliers. The shorter, the better. We generally recommend 1 day.
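The baseline approach with these three tuning parameters can be sketched in KQL as follows. This is a simplified illustration, not the premium query: the table is assumed to be Sentinel’s SigninLogs, and the categorization function is a hypothetical stand-in for the much richer UserAgent mapping the real rule uses.

```kql
// Simplified sketch of the baseline/outlier logic.
let baselinePeriod = 7d;        // tuning parameter 2
let detectionPeriod = 1d;       // tuning parameter 3
let minAppSignins = 100;        // tuning parameter 1
// Hypothetical stand-in for the real UserAgent categorization.
let categorize = (ua: string) {
    case(ua has "OneDrive", "OfficeApp - OneDrive",
         ua has_any ("Word", "Excel", "PowerPoint"), "OfficeApp - Office",
         ua has "CriOS", "Browser - iOS Chrome",
         ua has "Chrome", "Browser - Chrome",
         "Other")
};
// Baseline: how often is each (app, UserAgent category) combination seen?
let baseline = SigninLogs
    | where TimeGenerated between (ago(baselinePeriod + detectionPeriod) .. ago(detectionPeriod))
    | summarize BaselineCount = count() by AppDisplayName, UACategory = categorize(UserAgent);
// Apps with too few sign-ins don't work well for outlier detection.
let popularApps = SigninLogs
    | where TimeGenerated > ago(baselinePeriod + detectionPeriod)
    | summarize TotalSignins = count() by AppDisplayName
    | where TotalSignins >= minAppSignins;
SigninLogs
| where TimeGenerated > ago(detectionPeriod)
| summarize RecentCount = count() by AppDisplayName, UACategory = categorize(UserAgent)
| join kind=inner popularApps on AppDisplayName
// Keep only combinations never seen during the baseline period.
| join kind=leftanti baseline on AppDisplayName, UACategory
```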

You can find the query on our GitHub page.
