Microsoft 365 – Security Monitoring

Disclaimer: This is a very high-level post on M365 security monitoring; the technical details are left to later blog posts. It doesn’t cover all products and possible integrations in the Microsoft cloud ecosystem, and is more of a starting point for a journey of evaluating possible security solutions.

Security monitoring is a topic I have been working on with my colleagues (@santasalojh & @pitkarantaM) for the last two years. During that time we have helped many organizations get better visibility into cloud security monitoring. Now it’s time to share some thoughts around this topic, starting from the root and digging deep down into the tech side.

Setting Up The Scene

Logging and monitoring is a huge topic in the Microsoft cloud ecosystem, and for that reason I will concentrate in this post on M365 security monitoring and alerts (which is quite obvious for a cyber-security expert), not on metrics. Also, I would like to highlight again that this blog is very high level, leaving the technical details to later blog posts.

The questions I have heard quite often from customers are:

  • Which native Microsoft tools should I use for monitoring the security of the cloud environment?
  • How can/should I manage all the alerts in the ecosystem (easily)?
  • Should I use 3rd party tools for security monitoring?

Unfortunately, I have to say that it depends on many things: licensing, which tools you already have in your toolbox, how heavily the organization utilizes Microsoft cloud workloads, the maturity of the organization or service provider, which other cloud providers’ tools are in use, and so on.

Microsoft offers a brilliant set of cloud security solutions; here are a few of them:

  • Azure Security Center
  • Microsoft 365 Security Center
  • Azure AD Identity Protection
  • Microsoft Defender ATP
  • Azure ATP & O365 ATP
  • Cloud App Security
  • Azure Sentinel

Architecture

The Microsoft cyber-security architecture document is the place to start when an organization is planning its cyber-security architecture in the Microsoft environment. At first glance it looks a bit crowded, but once you get familiar with it, it is very beneficial. What’s covered here are the components inside the yellow circle, the Security Operations Center (SOC) part.

Internal Cloud Integrations

When planning security monitoring in the Microsoft cloud, the integrations (+ licenses) play an important role in getting the most out of the security solutions. Some of the integrations are already in place by default, but most of them need to be established by an admin.

Integration Architecture – Example

The picture below doesn’t cover all possible security solutions and integration scenarios; rather, it gives an overall understanding of which solutions can be used to investigate alerts and suspicious activity in the cloud or on-premises.

The best synergy advantages come from the integrations between the security solutions. In the top category are the solutions which, in my opinion, are the best places to start the investigation.

Naturally, if Sentinel is in use it triggers the alert and the investigation starts from there. It could also be replaced by a 3rd-party SIEM (Splunk, QRadar, etc.). Both Sentinel and Cloud App Security have a rich set of capabilities for investigation and contain a wealth of data on user identity, device identity, and network traffic.

If you are wondering why the investigation doesn’t start from Azure Security Center or M365 Security Center, the reason is that alerts from these solutions can be found in, or sent to, the SIEM (in this example, Sentinel).

Investigating The Alerts

I highly encourage using the SIEM (Sentinel) or MCAS to start the investigation. Deep-dive analysis can be done in the alert source itself, for example in MDATP if the initial alert was generated there.

Azure Sentinel

Sentinel is a fully cloud-based SIEM solution that also offers SOAR capabilities. It provides a single pane of glass for alert detection, threat visibility, proactive hunting, and threat response, including Azure Workbooks & Jupyter Notebooks, which can be used in advanced threat-hunting and investigation scenarios.
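
To make “starting the investigation from Sentinel” a bit more concrete, here is a minimal Python sketch that pulls recent alerts from the Log Analytics workspace behind Sentinel, using the azure-identity and azure-monitor-query packages. The workspace ID and the KQL query are placeholders/assumptions to adapt to your own environment; treat it as a sketch, not official guidance.

```python
# Minimal sketch: query the Sentinel (Log Analytics) workspace for recent alerts.
# Assumptions: azure-identity + azure-monitor-query are installed and the
# identity running this has read access to the workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<sentinel-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# SecurityAlert is the table where Sentinel stores alerts from connected products.
query = """
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize AlertCount = count() by ProviderName, AlertSeverity
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```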

Cloud App Security

Microsoft Cloud App Security (MCAS) is a Cloud Access Security Broker that supports various deployment modes, including log collection, API connectors, and reverse proxy. MCAS has UEBA capabilities and, as I have said many times, it is in my opinion the best tool in the Microsoft ecosystem for investigating suspicious, and possibly malicious, internal user activity.
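
As a small illustration of working with MCAS outside the portal, below is a hedged sketch that lists alerts over the MCAS REST API with an API token generated in the portal. The tenant-specific portal URL, the token, and the response fields printed here are assumptions to verify against your own tenant and the MCAS API documentation.

```python
# Hedged sketch: list MCAS alerts via the REST API using a portal-generated token.
# The tenant-specific API URL (shown in the MCAS portal settings) and the
# response field names are assumptions to double-check.
import requests

MCAS_URL = "https://<tenant>.portal.cloudappsecurity.com"  # placeholder
API_TOKEN = "<mcas-api-token>"                             # placeholder

resp = requests.get(
    f"{MCAS_URL}/api/v1/alerts/",
    headers={"Authorization": f"Token {API_TOKEN}"},
    params={"limit": 20},  # keep the result set small for a quick look
    timeout=30,
)
resp.raise_for_status()

for alert in resp.json().get("data", []):
    # Field names here are assumptions based on the MCAS alert schema.
    print(alert.get("timestamp"), alert.get("title"))
```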

Intelligent Security Graph (ISG)

According to Microsoft: to be successful with threat intelligence, you must have a large diverse set of data and you have to apply it to your processes and tools.

The data sources include specialized security sources, insights from dark markets (criminal forums), and learning from incident response engagements. Key takeaways from the slide:

  • Products send data to graph
  • Products use Interflow APIs to access results
  • Products generate data which feeds back into the graph

In later blog posts, I will dig more deeply into the Security Graph functionality. At the time of writing, the following solutions are providers to the ISG (GET & PATCH):

  • Azure Security Center (ASC)
  • Azure AD Identity Protection (IPC)
  • Microsoft Cloud App Security
  • Microsoft Defender ATP (MDATP)
  • Azure ATP (AATP)
  • Office 365
  • Azure Information Protection
  • Azure Sentinel

Integration with the ISG makes sense if you are using an on-premises SIEM and you don’t want to pull all of the logging and monitoring data from the cloud to on-premises. Also, the ISG contains processed alerts from the providers.
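
To illustrate the GET & PATCH flow against the ISG, here is a hedged Python sketch using the Microsoft Graph Security API, which aggregates alerts from the providers listed above. The app permissions (SecurityEvents.Read.All / SecurityEvents.ReadWrite.All), the filter, and the status value are assumptions to adjust; it is a sketch, not a production integration.

```python
# Hedged sketch: read alerts from the Microsoft Graph Security API and update one.
# Assumption: an app registration with SecurityEvents.Read.All /
# SecurityEvents.ReadWrite.All permissions, picked up by DefaultAzureCredential.
import requests
from azure.identity import DefaultAzureCredential

GRAPH = "https://graph.microsoft.com/v1.0"

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# GET: list high-severity alerts from all connected providers.
alerts = requests.get(
    f"{GRAPH}/security/alerts",
    headers=headers,
    params={"$filter": "severity eq 'high'", "$top": "10"},
    timeout=30,
)
alerts.raise_for_status()
data = alerts.json().get("value", [])

for alert in data:
    print(alert["id"], alert["vendorInformation"]["provider"], alert["title"])

# PATCH: update one alert's status back through the graph. The vendorInformation
# block must be echoed in the body, and (as the note below mentions) not every
# provider honoured the update in my tests.
if data:
    first = data[0]
    patch = requests.patch(
        f"{GRAPH}/security/alerts/{first['id']}",
        headers={**headers, "Content-Type": "application/json"},
        json={"status": "inProgress", "vendorInformation": first["vendorInformation"]},
        timeout=30,
    )
    patch.raise_for_status()
```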

Note: During tests, I was not able to update alerts across security products even though Microsoft’s documentation says it’s supported. I will address this topic, which is still under investigation, in a later post.

Conclusion

The best synergy advantages from the security solutions come from the integrations between the products. Even if your organization uses a 3rd-party SIEM, the internal cloud integrations between the solutions are very beneficial.

Integrations between cloud and SIEM systems are one of the topics covered later on in technical posts.

Until next time!

Hardening Salesforce Integration in Azure Logic Apps + Azure Secure DevOps Kit Alignment of Logic Apps

What are Logic Apps? Azure Logic Apps could be described as a convenient way to ”commoditize” input and output operations between multiple resources and APIs within Azure’s 1st-party and 3rd-party ecosystem.

Where are Logic Apps used? As far as I’ve seen, Logic Apps are quite popular in architecture modernization projects where the architecture is leaning towards a ”pluggable/decoupled” microservice architecture. Logic Apps can consume and push data to 1st- and 3rd-party services, which sometimes completely obsoletes a previously built-in API consumer client in a monolithic app – Salesforce integration is a good example of this.

Logic Apps support parallel flows and different conditions to proceed; the app I created for testing just works in a more linear fashion…

But isn’t the app managed by Microsoft?

The fact that a Logic App abstracts a lot of the plumbing doesn’t mean it lacks a rich set of optional security features. We are exploring the common ones here.

Besides what I present here, there is also something called Logic Apps ISE (Integration Service Environment), but that is a separate concept, tailored to more specific network and data requirements.

While there aren’t particular best practices for this, this guide attempts to combine some of the AZSK controls with experiences from integrating Salesforce into Logic Apps.

But why harden the Salesforce side? Isn’t this Azure-related security configuration?

When you have integrations going across environments and clouds, you have to look at the big picture. Especially when the master data you process sometimes lives in Salesforce, you need to ensure that the source of the data is also protected, not just the destination.

There are similar recommendations in the AZSK kit, but this combines the IP restrictions with reducing the number of accounts that have access to the integration on the SF side.

Logic App connectors must have minimum required permissions on data source – this ensures that connectors can be used only towards intended actions in the Logic App (control severity: Medium).

Checklist

Before proceeding

  • While I am very comfortable building access policies in Azure, I can’t claim the same competence in SF -> if you notice something that might cause a disaster, please add a comment to this blog or DM me on Twitter.
    • Hence the disclaimer: the information in this weblog is provided “AS IS” with no warranties and confers no rights.
  • Do any tests first in a ”NEW” developer account. Don’t ruin your UAT/test accounts :)… (Get an account here)
  • Make sure you put the IP restrictions under a new, cloned profile, not a profile that is currently in use.
  • For any IP restrictions you set, remember to include one IP which you can terminate to (a VPN IP, etc.). This ensures that you won’t lock yourself out of the SF environment.

Expected result

Once you’ve completed the setup (setup guide below) and tested both failure and success behavior, the end result should show both failed and successful events.

After adding the Logic App IPs, the result should be success.

I can’t emphasize enough the importance of testing both failure and success behavior when testing any access policy.

Azure Side Configuration

  • Enable Azure Monitoring of the API connection for Azure control-plane actions (a scripted example is sketched after this list).
  • If you want to hide possibly confidential inputs/outputs of the Logic App from reader-level roles, enable the Secure Inputs/Outputs setting for the Logic App.
  • If you’re using a storage account as part of your Logic App flow, ensure storage account logging is enabled. An example Storage Analytics log entry for a blob write from the Logic App:
1.0;2020-02-20T07:33:25.0496414Z;PutBlob;Success;201;13;13;authenticated;logicapp;logicapp;blob;"https://logicapp.blob.core.windows.net:443/accounts/Apparel";"/logicapp/accounts/Apparel";
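
As referenced in the first bullet above, here is a hedged sketch of enabling diagnostic logging for a Logic App programmatically with the azure-mgmt-monitor and azure-identity packages. The resource IDs are placeholders, and the log/metric category names (WorkflowRuntime, AllMetrics) are assumptions to verify against the current Logic Apps diagnostics documentation.

```python
# Hedged sketch: enable diagnostic settings for a Logic App so its runtime logs
# land in a Log Analytics workspace. All resource IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
LOGIC_APP_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Logic/workflows/<logic-app-name>"
)  # placeholder
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)  # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Category names are assumptions; check the categories your Logic App exposes.
client.diagnostic_settings.create_or_update(
    resource_uri=LOGIC_APP_ID,
    name="logicapp-diagnostics",
    parameters={
        "workspace_id": WORKSPACE_ID,
        "logs": [{"category": "WorkflowRuntime", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```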

Salesforce Side

  • Profiles in Users
    • Clone or create a new profile for the Logic Apps integration.
      • I cloned the System Administrator profile, but a more granular setup is possible by configuring least-privilege permissions for the application.
    • Under ’Login IP Ranges’, add the Azure Logic App IP ranges that can be found in the Logic App’s properties (also include the IP of the user who will register the API connection in the Logic App).

  • The SF integration ’API connection’ in Azure runs in the context of the user who registers the API for the first time in Logic Apps.
    • There is no point in allowing unrelated users to create OAuth2 tokens bearing the app’s context.
  • OAuth Policies in App Manager
    • For ’Permitted Users’, change the setting to ’Admin approved users are pre-authorized’.
    • For ’IP Relaxation’, change the setting to ’Enforce IP restrictions’.
  • Profiles in App Manager
    • Add the profile created in the previous step to the app’s profiles.
      • This step restricts the users that can use the integration to the custom profile, or to the System Administrator profile.
  • In Security, under ’Session Management’, ensure IP restrictions aren’t enforced only at login by enabling the session setting that enforces login IP ranges on every request.
    • This applies based on the profile IP restriction settings, not globally, unless all your profiles have login IP restrictions in place.
    • See the Salesforce documentation on the IP restrictions setting.
  • After this, revoke all existing user tokens for the app under the user’s settings and ’OAuth Connected Apps’.
  • Reauthorize the connection.
  • Now test that the integration works (one possible probe is sketched below).
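
One way to test the ’Enforce IP restrictions’ behavior, as mentioned in the last bullet, is to request an OAuth token for the connected app from both an allow-listed and a blocked network and compare the responses. The sketch below uses the username-password flow purely as a probe against a developer org; every identifier is a placeholder.

```python
# Hedged sketch: probe the Salesforce OAuth token endpoint to verify that IP
# restrictions are enforced. Run it once from an allowed IP and once from a
# blocked one; only the former should return an access token.
import requests

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

payload = {
    "grant_type": "password",
    "client_id": "<connected-app-consumer-key>",          # placeholder
    "client_secret": "<connected-app-consumer-secret>",   # placeholder
    "username": "<integration-user>",                     # placeholder
    "password": "<password+security-token>",              # placeholder
}

resp = requests.post(TOKEN_URL, data=payload, timeout=30)
print(resp.status_code)
print(resp.json())  # expect a token from an allowed IP, an error otherwise
```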

Further AZSK recommendations

  • This blog adds the non-Azure-side recommendations to the hardening guidelines.
  • The following guidelines are where you should start when hardening your Logic Apps.
  • If you’re logging data in Logic Apps (which is recommended for detecting misuse and for debugging), understand which data (PII, etc.) will be stored/persisted in the logs, possibly outside your standard access and retention policies.
    • If you want to separate part of the data, create a separate Logic App with secure inputs/outputs, or implement secure inputs/outputs for the current application.

Source https://github.com/azsk/DevOpsKit-docs/blob/master/02-Secure-Development/ControlCoverage/Feature/LogicApps.md

LogicApps

Each control below is listed as: control (Control Severity / Automated / Fix Script), followed by its rationale.

  • Multiple Logic Apps should not be deployed in the same resource group unless they trust each other (High / Yes / No)
    • API Connections contain critical information like credentials/secrets, etc., provided as part of configuration. A Logic App can use all API Connections present in the same Resource Group. Thus, the Resource Group should be considered a security boundary when threat modeling.
  • Logic App connectors must have minimum required permissions on data source (Medium / No / No)
    • This ensures that connectors can be used only towards intended actions in the Logic App.
  • All users/identities must be granted minimum required permissions using Role Based Access Control (RBAC) (Medium / Yes / No)
    • Granting minimum access by leveraging the RBAC feature ensures that users are granted just enough permissions to perform their tasks. This minimizes exposure of the resources in case of user/service account compromise.
  • If a Logic App fires on an HTTP request (e.g. Request or Webhook), provide IP ranges for triggers to prevent unauthorized access (High / Yes / No)
    • Specifying the IP range ensures that the triggers can be invoked only from a restricted set of endpoints.
  • Must provide IP ranges for contents to prevent unauthorized access to inputs/outputs data of the Logic App run history (High / Yes / No)
    • Using the firewall feature ensures that access to the data or the service is restricted to a specific set/group of clients. While this may not be feasible in all scenarios, when it can be used, it provides an extra layer of access control protection for critical assets.
  • Application secrets and credentials must not be in plain text in the source code (code view) of a Logic App (High / Yes / No)
    • Keeping secrets such as DB connection strings, passwords, keys, etc. in clear text can lead to easy compromise at various avenues during an application’s lifecycle. Storing them in a key vault ensures that they are protected at rest.
  • Logic App access keys must be rotated periodically (Medium / No / No)
    • Periodic key/password rotation is a good security hygiene practice as, over time, it minimizes the likelihood of data loss/compromise which can arise from key theft/brute forcing/recovery attacks.
  • Diagnostics logs must be enabled with a retention period of at least 365 days (Medium / Yes / No)
    • Logs should be retained for a long enough period so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. A period of 1 year is typical for several compliance requirements as well.
  • Logic App Code View code should be backed up periodically (Medium / No / No)
    • Logic App code view contains the application’s workflow logic and API connection details, which could be lost if there is no backup. No backup/disaster recovery feature is available out of the box in Logic Apps.

Source https://github.com/azsk/DevOpsKit-docs/blob/master/02-Secure-Development/ControlCoverage/Feature/APIConnection.md

APIConnection

Each control below is listed as: control (Control Severity / Automated / Fix Script), followed by its rationale.

  • Logic App connectors must use AAD-based authentication wherever possible (High / Yes / No)
    • Using the native enterprise directory for authentication ensures that there is a built-in high level of assurance in the user identity established for subsequent access control. All Enterprise subscriptions are automatically associated with their enterprise directory (xxx.onmicrosoft.com) and users in the native directory are trusted for authentication to enterprise subscriptions.
  • Data transit across connectors must use an encrypted channel (High / Yes / No)
    • Use of HTTPS ensures server/service authentication and protects data in transit from network-layer man-in-the-middle, eavesdropping, and session-hijacking attacks.

Br Joosua!