Microsoft 365 – Security Monitoring

Disclaimer: This is a very high-level post on M365 security monitoring, leaving the technical details for later blog posts. It doesn’t cover all products and possible integrations in the Microsoft cloud ecosystem; it is more of a starting point for the journey of evaluating possible security solutions.

Security monitoring is a topic I have been working on with my colleagues (@santasalojh & @pitkarantaM) for the last two years. During that time we have helped many organizations get better visibility into cloud security monitoring. Now it’s time to share some thoughts around this topic, starting from the root and later digging deep into the tech side.

Setting Up The Scene

Logging and monitoring is a huge topic in the Microsoft cloud ecosystem, so in this post I will concentrate on M365 security monitoring and alerts (quite obvious for a cyber-security expert), not on metrics. Also, I would like to highlight again that this blog post stays at a very high level and leaves the technical details for later posts.

The questions I have heard quite often from customers are:

  • Which native Microsoft tools should I use for monitoring the security of the cloud environment?
  • How can/should I manage all the alerts in the ecosystem (easily)?
  • Should I use 3rd party tools for security monitoring?

Unfortunately, I have to say that it depends on many things: licensing, which tools you already have in your toolbox, how much the organization is utilizing Microsoft cloud workloads, the maturity of the organization or service provider, other cloud service providers’ tools in use, and so on.

Microsoft offers a brilliant set of cloud security solutions; to name a few:

  • Azure Security Center
  • Microsoft 365 Security Center
  • Azure AD Identity Protection
  • Microsoft Defender ATP
  • Azure ATP & O365 ATP
  • Cloud App Security
  • Azure Sentinel

Architecture

The Microsoft cyber-security reference architecture is the document to start from when an organization is planning its cyber-security architecture in the Microsoft environment. At first glance it looks a bit crowded, but once you get familiar with it, it is very useful. What’s covered here are the components inside the yellow circle, the Security Operations Center (SOC) part.

Internal Cloud Integrations

When planning security monitoring in the Microsoft cloud, the integrations (+ licenses) play an important role in getting the most out of the security solutions. Some of the integrations are already in place by default, but most of them need to be established by an admin.

Integration Architecture – Example

The picture below doesn’t cover all possible security solutions and integration scenarios; rather, it gives an overall understanding of which solutions can be used to investigate alerts and suspicious activity in the cloud or on-premises.

The best synergy advantages come from the integrations between security solutions. The top category contains the solutions which, in my opinion, are the best ones to start the investigation from.

Naturally, if Sentinel is in use it triggers the alert and the investigation starts from there. It could also be replaced by a 3rd party SIEM (Splunk, QRadar, etc.). Both Sentinel and Cloud App Security have a rich set of investigation capabilities and contain a wealth of data about user identity, device identity, and network traffic.

If you are wondering why the investigation doesn’t start from Azure Security Center or M365 Security Center, the reason is that alerts from these solutions can be found in, or sent to, the SIEM (in this example, Sentinel).

Investigating The Alerts

I highly encourage using the SIEM (Sentinel) or MCAS to start the investigation. Deep-dive analysis can then be done in the alert source itself, for example in MDATP if the initial alert was generated there.

Azure Sentinel

Sentinel is a fully cloud-based SIEM solution that also offers SOAR capabilities. Sentinel provides a single pane of glass for alert detection, threat visibility, proactive hunting, and threat response, including Azure Workbooks & Jupyter Notebooks, which can be used in advanced threat hunting and investigation scenarios.

Cloud App Security

Microsoft Cloud App Security (MCAS) is a Cloud Access Security Broker that supports various deployment modes, including log collection, API connectors, and reverse proxy. MCAS has UEBA capabilities and, as I have said many times, it is in my opinion the best tool in the Microsoft ecosystem for investigating suspicious, and possibly malicious, internal user activity.

Intelligent Security Graph (ISG)

According to Microsoft: to be successful with threat intelligence, you must have a large diverse set of data and you have to apply it to your processes and tools.

The data sources include specialized security sources, insights from dark markets (criminal forums), and learning from incident response engagements. Key takeaways from the slide:

  • Products send data to graph
  • Products use Interflow APIs to access results
  • Products generate data which feeds back into the graph

In later blog posts, I will dig more deeply into the Security Graph functionalities. At the time of writing, the following solutions are providers to the ISG (GET & PATCH):

  • Azure Security Center (ASC)
  • Azure AD Identity Protection (IPC)
  • Microsoft Cloud App Security
  • Microsoft Defender ATP (MDATP)
  • Azure ATP (AATP)
  • Office 365
  • Azure Information Protection
  • Azure Sentinel

Integration with ISG makes sense if you are using an on-premises SIEM and you don’t want to pull all of the logging and monitoring data from the cloud to on-premises. Also, ISG contains processed alerts from the providers.
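To make this more concrete, below is a minimal sketch of pulling processed alerts from the ISG with the Microsoft Graph Security API instead of shipping raw logs on-premises. It assumes an app registration that has been granted the SecurityEvents.Read.All permission and an already acquired OAuth access token stored in $token (token acquisition not shown).

# Sketch: read processed ISG alerts via the Microsoft Graph Security API.
# $token is assumed to hold a valid access token for an app registration
# with SecurityEvents.Read.All.
$headers = @{ Authorization = "Bearer $token" }
$uri = "https://graph.microsoft.com/v1.0/security/alerts?`$top=50"
$alerts = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get).value
$alerts | Select-Object title, severity, status, @{n='provider';e={$_.vendorInformation.provider}}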

Note: During my tests, I was not able to update alerts across security products even though Microsoft’s documentation says it’s supported. This topic is still under investigation, and I will address it in a later post.

Conclusion

The best synergy advantages from the security solutions come from the integrations between the products. Even if your organization uses a 3rd party SIEM, the internal cloud integrations between the solutions are very beneficial.

Integrations between cloud and SIEM systems are one of the topics covered later on in technical posts.

Until next time!

Deep Diver – Azure AD Identity Protection (IPC) Alerts

This blog post is all about Azure AD Identity Protection alerts (referred to by its provider name ”IPC” later in the post) in the Microsoft cloud ecosystem.

If you want to read about how IPC works, I encourage you to read my earlier blog post or navigate straight to docs.microsoft.com.

IPC is an Azure AD P2 feature that has been generally available for approximately three years. Earlier this year Microsoft did a ”refresh” of IPC, adding new detection capabilities and an enhanced UI. Being an Azure AD P2 feature means it’s available only in the most expensive license packages.

You will get a taste of its features even with a free Azure AD license, but all the cool features are included in AAD P2. In practice, IPC calculates user risk (online/offline) based on machine learning & AI and, based on policies, decides whether a user sign-in is approved, whether MFA or a password change is required, or whether the sign-in is blocked.
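As an illustration only (not part of the IPC UI itself), risky users can also be read programmatically from Microsoft Graph. The sketch below assumes an access token with the IdentityRiskyUser.Read.All permission in $token and an AAD P2 license; the riskyUsers endpoint was in beta when the feature was new and has since moved to v1.0.

# Sketch: list users that Identity Protection currently flags as "at risk".
$headers = @{ Authorization = "Bearer $token" }
$uri = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers?`$filter=riskState eq 'atRisk'"
$riskyUsers = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get).value
$riskyUsers | Select-Object userPrincipalName, riskLevel, riskState, riskLastUpdatedDateTime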

Gimme The Alerts – Where are Those?

Azure AD Identity Protection (IPC) is a provider for multiple security solutions, which means that alerts triggered in IPC can be found in multiple places (list below). Let’s have a closer look.

  • Azure AD Identity Protection blade
  • Intelligent Security Graph (ISG)
  • Azure Sentinel
  • Cloud App Security (MCAS)
  • Azure Security Center
  • Microsoft Threat Protection suite (MTP)
  • PowerShell for management

Azure AD Identity Protection Blade

The Identity Protection UI resides in Azure AD, where investigation and mitigation can be done. Btw, check out my demo environment’s Identity Secure Score: 237 out of 265. Quite impressive, even if I say so myself 🙂

If a risk is detected, admins (or a dedicated email distribution list) will receive an alert as seen below. When the link is clicked, the person who received the email will land in the Identity Protection portal in Azure AD.

Intelligent Security Graph (ISG) aka Microsoft Security Graph

Identity Protection is one of the ISG providers; the full list of available providers at the time of writing can be found here. The pictures below show the ISG high-level architecture and an alert exported from ISG in my tenant. In the latter, you can see the provider highlighted in yellow.

Microsoft Intelligent Security Graph (ISG)

Azure Sentinel

If you are using Azure Sentinel (a cloud-native SIEM, which is a hot topic right now) and you have configured the data connectors and activated the rule properly, you will get IPC alerts into Azure Sentinel as incidents.

The Azure Sentinel Identity Protection template rule basically raises an incident whenever an alert is generated in IPC.
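If you prefer to check the same data from the workspace with PowerShell, here is a rough sketch that queries the SecurityAlert table for IPC alerts. The workspace ID is a placeholder, and the Az.OperationalInsights module plus read access to the workspace are assumed.

# Sketch: query IPC alerts from the Sentinel (Log Analytics) workspace.
$query = 'SecurityAlert | where ProviderName == "IPC" | project TimeGenerated, AlertName, AlertSeverity, Description'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query $query
$result.Results | Format-Table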

Microsoft Cloud App Security

The IPC alert is also found in MCAS. As you can see from the picture below, MCAS adds information to the alert and makes it more useful for the investigation. Btw, MCAS is the best solution in the Microsoft ecosystem for investigating suspicious internal user activity and behavior. If you are using it, I highly encourage you to get familiar with it, especially its UEBA capabilities.

I’m totally sold on it 🙂 But I’m looking at it only from the technical perspective, not from a financial perspective.

Azure Security Center (ASC)

IPC alerts can also be found in Azure Security Center. ASC also provides a geolocation map, which can be very useful for getting a bigger picture of the attack.

Microsoft Threat Protection Suite (MTP)

MTP was just launched in public preview. Unfortunately, I don’t have an environment available where it’s enabled. In a nutshell, it is a pre- and post-breach enterprise defense suite that natively integrates protection across:

  • Endpoints with Microsoft Defender ATP
  • Email and collaboration with Office 365 ATP
  • Identities with Azure ATP and Azure AD Identity Protection
  • Applications with Microsoft Cloud App Security

More information can be found here and here.

PowerShell

This one had totally slipped under my radar. Microsoft released the PowerShell module for the Microsoft Security Graph in April 2019; you can read more about it here. In a nutshell, you need the following:

  • PowerShell v5 or above
  • Register an app in Azure AD
  • Use this redirect URI: urn:ietf:wg:oauth:2.0:oob (it’s needed for the desktop app redirect to work)
  • Configure API permissions for the app (SecurityEvents.ReadWrite.All)
  • Grant admin consent to the app
  • Install the module
  • Run and enjoy 🙂

In the TechNet blog comments there are multiple questions about the app registration guidance. My 2 cents:

  • When registering the app, grant API permissions for delegated mode (interactive login is used)
  • Grant the SecurityEvents.Read.All & SecurityEvents.ReadWrite.All permissions to the app
  • Grant admin consent
  • When running the PowerShell module, use your userPrincipalName as the username and the AppID as the password

Using the module

Log in with userPrincipalName + App ID. After the initial login, the modern authentication prompt appears (in Finnish in my environment) and the interactive login is processed together with Conditional Access policies.

Managing the Alerts

The MicrosoftGraphSecurity module has the following commands available:

With Get-GraphSecurityAlert you can get all alerts from the ISG, Identity Protection alerts included.
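For example, a quick way to narrow the output down to Identity Protection alerts is to filter on the provider name client-side, as in the sketch below (the module’s own filtering parameters may vary, so the filtering is done in PowerShell):

# Sketch: fetch ISG alerts and keep only the ones coming from Identity Protection ("IPC").
$alerts = Get-GraphSecurityAlert
$alerts | Where-Object { $_.vendorInformation.provider -eq "IPC" } |
    Select-Object title, severity, status, eventDateTime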

Secure Score information is also available with the Get-GraphSecuritySecureScore command.

Set ISG Alert

ISG alerts can be managed via PowerShell with the Set-GraphSecurityAlert cmdlet, which is extremely useful if the alerts are sent to a SIEM. The downside is that there isn’t any integration back to the backends (providers).

Note: This means that even if you update the alert status in ISG, it will remain open in the ISG providers (Identity Protection, Cloud App Security, etc.).
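For reference, below is an illustrative sketch of the underlying Graph Security API call that the Set cmdlet wraps. The alert ID is a placeholder, $token is an access token with SecurityEvents.ReadWrite.All, and the API expects the original provider’s vendorInformation to be echoed back in the request body.

# Sketch: mark an ISG alert as resolved via the Graph Security API.
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }
$body = @{
    status            = "resolved"
    vendorInformation = @{ provider = "IPC"; vendor = "Microsoft" }
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/security/alerts/<alert-id>" -Headers $headers -Method Patch -Body $body
# As noted above, the alert will still show as open in the provider itself.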

Summary

The Microsoft cloud ecosystem is huge and organizations have multiple security solutions available by default. Keep in mind that most of the advanced tools (including IPC and MCAS) require an E5/A5, G5, or AAD P2 license. As seen above, the synergy advantages are obvious.

Which solutions to use is more a matter of your cloud logging and monitoring strategy. From which sources do you want an audit trail and events sent, and to where? Where are you processing all the alerts: in the SIEM or directly in the security solutions? And lastly, how are operations built around the solutions?

Until next time!

Azure Log Analytics – Permission Models

This blog post is all about the Log Analytics workspace (later referred to as LA) permission models, which changed in May 2019. The options (at the time of writing) for granting permissions are:

  • Grant access using Azure role-based access control (RBAC).
  • Grant access to the workspace using workspace permissions.
  • Grant access to a specific table in the workspace using Azure RBAC.

These new options give more granular control for granting permissions to Log Analytics workspaces compared to the old model. That said, I still see a place for granting workspace permissions. It comes back to the Log Analytics workspace design and the organization’s needs: how operating teams, development teams, and the security organization need to access performance and security-related events.

There are basically two approaches for Log Analytics design:

  • Centralized
  • Siloed

If you have chosen the centralized model, more granular access controls might be needed, and you can take more advantage of the new access control modes (resource-centric RBAC & table-level RBAC).

Access control mode

The new access model is automatically configured for all workspaces created after March 2019.

How to check which access model is in use?

Navigate to the Log Analytics workspace overview page, where you can find the ”Access control mode” setting for the workspace.

  • If you are using the legacy model: ”Access control mode = Require workspace permissions”
  • If the new model is in place: ”Access control mode = Use resource or workspace permissions”

With PowerShell

  • If the output is true – the workspace is using the new access model
  • If the output is false or null – the workspace is using the legacy model

Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {$_.Name + ": " + $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions}

How to change access model?

The access model can be changed directly from the resource properties and verified from the Activity log.

If PowerShell is your favorite tool for making changes, here you go (code from docs.microsoft.com):

$WSName = "my-workspace"
$Workspace = Get-AzResource -Name $WSName -ExpandProperties
# Older workspaces may not have the feature flag at all, so add it if it's missing
if ($Workspace.Properties.features.enableLogAccessUsingOnlyResourcePermissions -eq $null)
    { $Workspace.Properties.features | Add-Member enableLogAccessUsingOnlyResourcePermissions $true -Force }
else
    { $Workspace.Properties.features.enableLogAccessUsingOnlyResourcePermissions = $true }
Set-AzResource -ResourceId $Workspace.ResourceId -Properties $Workspace.Properties -Force

New Model In Action

The scenario

A siloed Log Analytics workspace design is in use, and operational team members need to maintain virtual machines. In practice, a person needs to read virtual machine metrics and performance data from LA.

If the new access model is in use, members of the team can access the needed logs directly from the resource blade. They cannot see the LA workspace at all because they don’t have read access to that resource. If a user browses to Azure Monitor, they only see the logs they have access to.

When log search is opened from a resource menu, the search is automatically scoped to that resource and resource-centric queries are used. This means that if users have access to a resource, they’ll be able to access its logs.
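As a minimal sketch of the scenario above, granting the built-in Reader role at the scope of a single VM is enough for resource-centric log access under the new model. The subscription ID, resource group, VM name, and user below are placeholders.

# Sketch: give an operations user read access to one VM only, so they can open
# Logs from the VM blade without any workspace-level permission.
$scope = "/subscriptions/<sub-id>/resourceGroups/rg-prod-vms/providers/Microsoft.Compute/virtualMachines/vm-app01"
New-AzRoleAssignment -SignInName "ops.user@contoso.com" -RoleDefinitionName "Reader" -Scope $scope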

Built-in Roles (from docs.microsoft.com)

Azure has two built-in user roles for Log Analytics workspaces:

  • Log Analytics Reader
  • Log Analytics Contributor

Members of the Log Analytics Reader role can:

  • View and search all monitoring data
  • View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.

Nothing prevents you from using the built-in roles, but keep in mind that these grant top-level permissions across the whole workspace (see the considerations below regarding the */read action).

Table Level Access

More granular control can be achieved with table-level access if it’s needed. If the centralized Log Analytics model is chosen, it might be necessary to grant different teams permission to read data from specific LA tables. This can be achieved with table-level RBAC permissions.

With table-level RBAC you can grant permissions only to the LA tables that are actually needed.
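Below is a sketch of what such a custom role could look like: it allows querying only the Heartbeat and Perf tables of a workspace. The role name, subscription ID, and table choices are just examples; the role is created with New-AzRoleDefinition and then assigned at the workspace scope with New-AzRoleAssignment.

# Sketch: custom role that permits reading only the Heartbeat and Perf tables.
$roleJson = @"
{
  "Name": "LA Perf Table Reader (example)",
  "Description": "Read only the Heartbeat and Perf tables",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
    "Microsoft.OperationalInsights/workspaces/query/Perf/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<sub-id>" ]
}
"@
$roleJson | Out-File .\table-reader-role.json
New-AzRoleDefinition -InputFile .\table-reader-role.json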

Considerations

If a user has global read permission (the standard Reader or Contributor roles, which include the */read action), it will override the per-table access control and give them access to all log data.

If a user has access to a specific LA table but no other permissions, they will be able to access the log data through the API but not from the Azure portal.

Summary

In most cases I have worked with, a multi-homing Log Analytics design has been used and the workspace-centric model is enough to satisfy the needs. If a centralized model (as few LA workspaces as possible) is in use, the new access model gives more granularity.

Risky IPs and Traffic Analytics

The Risky IP feature of AAD Connect Health came to public preview in early May. With AAD Connect Health you can monitor sign-ins and send the data to the cloud, where it will be analyzed. Together with the AD FS extranet lockout, it helps to monitor and detect password brute-force and password-spray attacks.

AAD Connect Health can be used to monitor your on-premises identity infrastructure (AD FS, Azure AD Connect sync, and AD DS).

Installation guidance can be found here.

What are Risky IPs in AD FS? (docs.microsoft.com)

Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the “Risky IP report” that detects this condition and notifies administrators when this occurs. The following are the key benefits for this report:

  • Detection of IP addresses that exceed a threshold of failed password-based logins
  • Supports failed logins due to bad password or due to extranet lockout state
  • Email notification to alert administrators as soon as this occurs with customizable email settings
  • Customizable threshold settings that match with the security policy of an organization
  • Downloadable reports for offline analysis and integration with other systems via automation

Risky IPs in Action

As a prerequisite, I have configured the on-premises Active Directory domain account lockout policy and password policy with the following values (a PowerShell sketch for these is shown after the AD FS command below):

  • Account Lockout duration & Reset Account lockout counter after: 15min
  • Account Lockout threshold: 10

And the extranet account lockout policy with the following values:

  • ExtranetLockoutThreshold: 5
  • ExtranetObservationWindow: 30min

Command:

# Enable AD FS extranet lockout: 5 attempts, 30-minute observation window
$Timespan = New-TimeSpan -Minutes 30
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 5 -ExtranetObservationWindow $Timespan
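And, as mentioned above, a sketch of setting the matching on-premises AD lockout values with the ActiveDirectory module (the domain name is a placeholder; run on a domain controller or a machine with RSAT and sufficient rights):

# Sketch: on-premises AD account lockout policy matching the values listed earlier.
Set-ADDefaultDomainPasswordPolicy -Identity "corp.contoso.com" `
    -LockoutThreshold 10 `
    -LockoutDuration (New-TimeSpan -Minutes 15) `
    -LockoutObservationWindow (New-TimeSpan -Minutes 15)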


After configuring the prerequisites, it’s time to perform an attack simulation against my AD FS instance by trying to log on multiple times with my test account.


Because of the extranet account lockout policy, my test account stays active and is not locked out in on-premises AD. This policy means that after 5 failed attempts, authentication requests are no longer forwarded to the domain controller.
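A quick way to verify this during the test is to check the account’s lockout state and bad password count from on-premises AD. The account name below is a placeholder, and note that badPwdCount is tracked per domain controller.

# Sketch: confirm the test account is not locked out in on-prem AD.
Get-ADUser -Identity "testuser01" -Properties LockedOut, badPwdCount, LastBadPasswordAttempt |
    Select-Object SamAccountName, LockedOut, badPwdCount, LastBadPasswordAttempt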


After a short period of time I navigated to the Azure AD portal and the AAD Connect Health blade, where my risky IPs were visible.


I tried to find the same IP flagged as malicious traffic in Azure Traffic Analytics but was unable to do so. Either my queries were built wrongly or this kind of traffic is simply not visible in Traffic Analytics. If the latter is the case, it would be nice to see risky IP traffic flagged as malicious in Traffic Analytics. Of course, the assumption then is that the VMs are located in Azure datacenters, as they are in my environment.

What I did find was all the traffic I generated to random ports during the test sessions.


After frustrating myself with Traffic Analytics, I tried to find my risky IP in the NSG flow logs stored in the storage account, and that was successful. The data was then converted into a human-readable format in a JSON editor.
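For anyone wanting to script the same check, below is a rough sketch that downloads the NSG flow log blobs and greps them for the risky IP. The storage account, resource group, and IP address are placeholders; NSG flow logs are written to the insights-logs-networksecuritygroupflowevent container.

# Sketch: search downloaded NSG flow log blobs for a specific (risky) IP address.
$riskyIp = "198.51.100.23"
New-Item -ItemType Directory -Path .\flowlogs -Force | Out-Null
$container = "insights-logs-networksecuritygroupflowevent"
$ctx = (Get-AzStorageAccount -ResourceGroupName "rg-network" -Name "stnsgflowlogs").Context
Get-AzStorageBlob -Container $container -Context $ctx | ForEach-Object {
    Get-AzStorageBlobContent -Blob $_.Name -Container $container -Context $ctx -Destination .\flowlogs -Force | Out-Null
}
Get-ChildItem .\flowlogs -Recurse -Filter *.json |
    Select-String -Pattern $riskyIp |
    Select-Object Path, LineNumber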


Now that the risky IP has been identified, what’s next?

  • It can be blocked at the firewall
  • If we take another angle and look at this from the identity point of view, trusted locations can be defined in Azure AD, which helps when defining Conditional Access policies to protect identities