Microsoft 365 – Security Monitoring

Disclaimer: This is a very high-level post on M365 security monitoring, leaving the technical details to later blog posts. It doesn’t cover all products and possible integrations in the Microsoft cloud ecosystem; it is more of a starting point for a journey of evaluating possible security solutions.

Security monitoring is a topic I have been working on with my colleagues (@santasalojh & @pitkarantaM) for the last two years. During that time we have helped many organizations gain better visibility into cloud security monitoring. Now it’s time to share some thoughts on this topic, starting from the root and then digging deep into the technical side.

Setting Up The Scene

Logging and monitoring is a huge topic in the Microsoft cloud ecosystem, so in this post I will concentrate on M365 security monitoring and alerts (a natural focus for a cyber-security expert), not on metrics. Also, I would like to highlight again that this post stays at a very high level and leaves the technical details to later blog posts.

The questions I have heard most often from customers are:

  • Which native Microsoft tools should I use for monitoring the security of the cloud environment?
  • How can/should I manage all the alerts in the ecosystem (easily)?
  • Should I use 3rd party tools for security monitoring?

Unfortunately, I have to say that it depends on many things: licensing, which tools you already have in your toolbox, how heavily the organization utilizes Microsoft cloud workloads, the maturity of the organization or service provider, other cloud service providers’ tools in use, and so on.

Microsoft offers a brilliant set of cloud security solutions; here are a few of them:

  • Azure Security Center
  • Microsoft 365 Security Center
  • Azure AD Identity Protection
  • Microsoft Defender ATP
  • Azure ATP & O365 ATP
  • Cloud App Security
  • Azure Sentinel

Architecture

The Microsoft cyber-security reference architecture is the document to start from when an organization is planning its cyber-security architecture in the Microsoft environment. At first glance it looks a bit crowded, but once you get familiar with it, it is very useful. What’s covered here are the components inside the yellow circle, the Security Operations Center (SOC) part.

Internal Cloud Integrations

When planning security monitoring in the Microsoft cloud, the integrations (and licenses) play an important role in getting the most out of the security solutions. Some of the integrations are already in place by default, but most of them need to be established by an admin.

Integration Architecture – Example

The picture below doesn’t cover all possible security solutions and integration scenarios; rather, it gives an overall understanding of which solutions can be used to investigate alerts and suspicious activity in the cloud or on-premises.

The best synergy comes from the integrations between security solutions. In the top category are the solutions which, in my opinion, are the best ones to start an investigation from.

Naturally, if Sentinel is in use it raises the alert and the investigation starts from there. It could also be replaced by a 3rd-party SIEM (Splunk, QRadar, etc.). Both Sentinel and Cloud App Security have a rich set of investigation capabilities and contain a wealth of data about user identity, device identity, and network traffic.

If you are wondering why the investigation doesn’t start from Azure Security Center or the M365 Security Center, the reason is that alerts from these solutions can be found in, or sent to, the SIEM (in this example, Sentinel).

Investigating The Alerts

I highly encourage using the SIEM (Sentinel) or MCAS as the starting point for an investigation. Deep-dive analysis can then be done in the alert source itself, for example in MDATP if the initial alert was generated there.

Azure Sentinel

Sentinel is a fully cloud-based SIEM solution that also offers SOAR capabilities. It provides a single pane of glass for alert detection, threat visibility, proactive hunting, and threat response, including Azure Workbooks & Jupyter Notebooks, which can be used in advanced threat hunting and investigation scenarios.

Cloud App Security

Microsoft Cloud App Security (MCAS) is a Cloud Access Security Broker that supports various deployment modes, including log collection, API connectors, and reverse proxy. MCAS has UEBA capabilities and, as I have said many times, it is in my opinion the best tool in the Microsoft ecosystem for investigating suspicious, and possibly malicious, internal user activity.

Intelligent Security Graph (ISG)

According to Microsoft, to be successful with threat intelligence you must have a large, diverse set of data and you have to apply it to your processes and tools.

The data sources include specialized security sources, insights from dark markets (criminal forums), and learning from incident response engagements. Key takeaways from the slide:

  • Products send data to graph
  • Products use Interflow APIs to access results
  • Products generate data which feeds back into the graph

In later blog posts, I will dig more deeply into the Security Graph functionalities. At the time of writing, the following solutions are providers to the ISG (GET & PATCH):

  • Azure Security Center (ASC)
  • Azure AD Identity Protection (IPC)
  • Microsoft Cloud App Security
  • Microsoft Defender ATP (MDATP)
  • Azure ATP (AATP)
  • Office 365
  • Azure Information Protection
  • Azure Sentinel

Integration with the ISG makes sense if you are using an on-prem SIEM and you don’t want to pull all of the logging and monitoring data from the cloud to on-premises. Also, the ISG contains processed alerts from the providers.

Note: During testing, I was not able to update alerts across security products, even though Microsoft’s documentation says it is supported. This is still under investigation and I will address it in a later post.
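
To give a feel for what working against the graph looks like, here is a minimal Node.js sketch (using the same request module as the later posts) that reads alerts from the Microsoft Graph Security API, i.e. the GET side of the ISG integration. This is an illustrative assumption-based example, not part of any product setup: it expects that you have already acquired an access token for Microsoft Graph with the SecurityEvents.Read.All permission (for example via an Azure AD app registration), which is left out here.

// Minimal sketch: read alerts from the Microsoft Graph Security API (the GET side of the ISG integration)
// Assumes an access token with the SecurityEvents.Read.All permission has already been acquired
const rq = require('request')

function getSecurityAlerts (accessToken, callback) {
    const options = {
        json: true,
        headers: {
            "authorization": "Bearer " + accessToken
        }
    }
    // All ISG providers feed into this single alerts endpoint
    const uri = "https://graph.microsoft.com/v1.0/security/alerts?$top=5"

    rq.get(uri, options, (err, response, body) => {
        if (err) {
            return callback(err)
        }
        // Each alert carries its originating product in vendorInformation.provider
        callback(null, body.value)
    })
}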

Conclusion

The best synergy from the security solutions comes with the integrations between the products. Even if your organization uses a 3rd-party SIEM, the internal cloud integrations between the solutions are still very beneficial.

Integrations between cloud and SIEM systems are one of the topics covered later on in technical posts.

Until next time!

Post: Create Logic App for Azure Sentinel/Log Analytics

While I’ve browsed the excellent TechCommunity article about custom connectors, until now I had used my own HTTP client implementation for connectors against the Log Analytics HTTP Data Collector.

All I can say is that I am getting seriously spoiled by Logic Apps and the Data Collector connector…

  • Generate the payload from the app
  • Watch Logic App ingest the payload
  • Check the content from Log Analytics
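
For reference, the app-side step is nothing more than an HTTP POST of a JSON payload to the Logic App’s Request trigger; the Data Collector connector inside the Logic App then handles the signing and ingestion into Log Analytics. Below is a minimal Node.js sketch of that first bullet; the trigger URL and payload fields are hypothetical placeholders (copy the real URL from the Logic App designer).

// Minimal sketch: post a JSON payload to a Logic App Request trigger
// The URL is a placeholder; copy the real one from the Logic App designer
const rq = require('request')

const logicAppUrl = 'https://prod-00.westeurope.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<sas-params>'

const payload = {
    DateValue: new Date().toISOString(),
    identity: 'demo-user',
    datasource: '10.0.0.1'
}

// json:true makes request serialize the body and set the content-type header
rq.post(logicAppUrl, { json: true, body: payload }, (err, response) => {
    if (err) {
        return console.log('Payload not sent: ' + err)
    }
    console.log('Logic App accepted the payload with status code ' + response.statusCode)
})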

Br, Joosua

Hardening SalesForce Integration in Azure Logic Apps + Azure Secure DevOps Kit Alignment of Logic Apps

What are Logic Apps? Azure Logic Apps could be described as a convenient way to ”commoditize” input and output operations between multiple resources and APIs within Azure’s 1st-party and 3rd-party ecosystem.

Where are Logic Apps used? As far as I’ve seen, Logic Apps are quite popular in architecture modernization projects where the architecture is leaning towards a ”pluggable/decoupled” microservice style. Logic Apps can consume data from and push data to 1st- and 3rd-party services, which sometimes completely replaces a previously built-in API consumer client in a monolithic app – Salesforce integration is a good example of this.

Logic Apps support parallel flows and different conditions for proceeding; the app I created for testing works in a more linear fashion…

But isn’t the app managed by Microsoft?

The fact that Logic Apps abstract a lot of the plumbing doesn’t mean they lack a rich set of optional security features. We will explore the common ones here.

Besides what I present here, there is also the Logic Apps ISE (Integration Service Environment), but that is a separate concept tailored to more specific network and data requirements.

While there aren’t official best practices as such, this guide attempts to combine some of the AzSK controls with experiences from integrating SF into Logic Apps.

But why harden the Salesforce side – isn’t this Azure-related security configuration?

When you have integrations going across environments and clouds, you have to look at the big picture. Especially when the master data you process sometimes lives in SalesForce, you need to ensure that the source of the data is also protected, not just where the data is going.

There are similar recommendations in the AzSK kit, but this approach combines the IP restrictions and reduces the number of accounts that have access to the integration on the SF side.

AzSK control: ”Logic App connectors must have minimum required permissions on data source” – This ensures that connectors can be used only towards intended actions in the Logic App. (Severity: Medium)

Checklist

Before proceeding

  • While I am very comfortable with building access policies in Azure, I can’t claim the same competence in SF – if you notice something that might cause a disaster, please add a comment to this blog or DM me on Twitter.
    • Hence the disclaimer: The information in this weblog is provided “AS IS” with no warranties and confers no rights.
  • Do any tests first in a ”NEW” developer account. Don’t ruin your UAT/test accounts 🙂 (Get account here)
  • Make sure you put the IP restrictions under a new cloned profile, not a profile that is currently in use.
  • For any IP restrictions you set, remember to include one IP which you can terminate to (VPN IP, etc.). This ensures that you won’t lock yourself out of the SF environment.

Expected result

Once you’ve configured the setup (setup guide below) and tested both failure and success behavior, the end result should show both failed and successful events.

After adding the Logic App IPs, the result should be success.

I can’t emphasize enough the importance of testing both failing and succeeding behavior when testing any access policy.

Azure Side Configuration

  • Enable Azure Monitoring of the API connection for Azure Control Plane Actions
  • If you want to hide possibly confidential inputs/outputs from reader-level roles in the Logic App run history, enable the Secure Inputs/Outputs setting for the Logic App
  • If you’re using a storage account as part of your Logic App flow, ensure storage account logging is enabled. Example storage analytics log entry:
1.0;2020-02-20T07:33:25.0496414Z;PutBlob;Success;201;13;13;authenticated;logicapp;logicapp;blob;"https://logicapp.blob.core.windows.net:443/accounts/Apparel";"/logicapp/accounts/Apparel";

SalesForce Side

  • Profiles in Users
    • Clone or create a new profile for the Logic Apps integration.
      • I cloned the System Administrator profile, but a more granular setup may be possible by configuring least-privilege permissions for the application
    • Under ’Login IP ranges’ add the Azure Logic App IP ranges, which can be found in the Logic App’s properties (also include the user IP you will use when registering the API connection in the Logic App)

  • The SF integration connection (’API connection’) in Azure runs in the context of the user who registers the API for the first time in Logic Apps
    • There is no point in allowing unrelated users to create OAuth2 tokens bearing the app’s context
  • OAuth policies in App Manager
    • For ’Permitted users’, change setting to ’Admin approved users are pre-authorized’
    • For ’IP Relaxation’ change setting to ’Enforce IP restrictions’
  • Profiles in App Manager
    • Add the profile created in the previous step to the app’s profiles
      • This step restricts the users that can use the integration to the custom profile (or the System Administrator profile)
  • In Security ’Session Management’, ensure IP restrictions aren’t enforced only at login by selecting this setting
    • This setting applies based on the profile IP restriction settings; it is not global unless all your profiles have login IP restrictions in place
IP Restrictions setting SF documentation
  • After this, revoke all existing user tokens for the app under user settings and OAuth Connected Apps
  • Reauthorize the connection
  • Now test the integration to work

Further AZSK recommendations

  • This blog adds the non-Azure-side recommendations to the hardening guidelines.
  • The following guidelines are where you should start if you begin hardening your Logic Apps.
  • If you’re logging data in Logic Apps (which is recommended for detecting misuse and for debugging): understand which data, and what kind of data (PII, etc.), will be stored/persisted in the logs, possibly outside standard access and retention policies.
    • If you want to separate part of the data, create a separate Logic App with secure inputs/outputs, or implement secure inputs/outputs for the current application.

Source https://github.com/azsk/DevOpsKit-docs/blob/master/02-Secure-Development/ControlCoverage/Feature/LogicApps.md

LogicApps

Controls (Control | Severity | Automated | Fix Script):

  • Multiple Logic Apps should not be deployed in the same resource group unless they trust each other
    Rationale: API Connections contain critical information like credentials/secrets, etc., provided as part of configuration. A Logic App can use all API Connections present in the same Resource Group. Thus, the Resource Group should be considered a security boundary when threat modeling.
    Severity: High | Automated: Yes | Fix Script: No
  • Logic App connectors must have minimum required permissions on data source
    Rationale: This ensures that connectors can be used only towards intended actions in the Logic App.
    Severity: Medium | Automated: No | Fix Script: No
  • All users/identities must be granted minimum required permissions using Role Based Access Control (RBAC)
    Rationale: Granting minimum access by leveraging RBAC ensures that users are granted just enough permissions to perform their tasks. This minimizes exposure of the resources in case of user/service account compromise.
    Severity: Medium | Automated: Yes | Fix Script: No
  • If a Logic App fires on an HTTP request (e.g. Request or Webhook trigger), provide IP ranges for triggers to prevent unauthorized access
    Rationale: Specifying the IP range ensures that the triggers can be invoked only from a restricted set of endpoints.
    Severity: High | Automated: Yes | Fix Script: No
  • Must provide IP ranges for contents to prevent unauthorized access to inputs/outputs data of Logic App run history
    Rationale: Using the firewall feature ensures that access to the data or the service is restricted to a specific set/group of clients. While this may not be feasible in all scenarios, when it can be used, it provides an extra layer of access control protection for critical assets.
    Severity: High | Automated: Yes | Fix Script: No
  • Application secrets and credentials must not be in plain text in the source code (code view) of a Logic App
    Rationale: Keeping secrets such as DB connection strings, passwords, keys, etc. in clear text can lead to easy compromise at various avenues during an application’s lifecycle. Storing them in a key vault ensures that they are protected at rest.
    Severity: High | Automated: Yes | Fix Script: No
  • Logic App access keys must be rotated periodically
    Rationale: Periodic key/password rotation is a good security hygiene practice as, over time, it minimizes the likelihood of data loss/compromise which can arise from key theft/brute forcing/recovery attacks.
    Severity: Medium | Automated: No | Fix Script: No
  • Diagnostics logs must be enabled with a retention period of at least 365 days
    Rationale: Logs should be retained for a long enough period so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. A period of 1 year is typical for several compliance requirements as well.
    Severity: Medium | Automated: Yes | Fix Script: No
  • Logic App Code View code should be backed up periodically
    Rationale: Logic App code view contains the application’s workflow logic and API connection details which could be lost if there is no backup. No backup/disaster recovery feature is available out of the box for Logic Apps.
    Severity: Medium | Automated: No | Fix Script: No

Source https://github.com/azsk/DevOpsKit-docs/blob/master/02-Secure-Development/ControlCoverage/Feature/APIConnection.md

APIConnection

Controls (Control | Severity | Automated | Fix Script):

  • Logic App connectors must use AAD-based authentication wherever possible
    Rationale: Using the native enterprise directory for authentication ensures that there is a built-in high level of assurance in the user identity established for subsequent access control. All Enterprise subscriptions are automatically associated with their enterprise directory (xxx.onmicrosoft.com) and users in the native directory are trusted for authentication to enterprise subscriptions.
    Severity: High | Automated: Yes | Fix Script: No
  • Data transit across connectors must use an encrypted channel
    Rationale: Use of HTTPS ensures server/service authentication and protects data in transit from network-layer man-in-the-middle, eavesdropping, and session-hijacking attacks.
    Severity: High | Automated: Yes | Fix Script: No

Br Joosua!

Experimental testing: Azure AD Application Proxy With Azure Application Gateway WAF

Disclaimer: This configuration example is only for experimental testing. I’d advise against using it in any kind of serious scenario, as the configuration has no official support… and is based on on-a-whim testing 🙂

I was recently browsing Feedback for Azure AD Application Proxy, and noticed that I am not the only one who would like to see WAF functionality enabled for AAD App Proxy.

The comment on the ”Under Review” status raised my curiosity: ”We are reviewing options for creating smoother integration and providing documentation on how to layer the two.”
https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/31964980-allow-azure-ad-app-proxy-apps-to-use-the-azure-web

While it’s fairly easy to retrofit a WAF + API scenario with Azure AD App Proxy and API Management, it’s another thing to also make it render web pages in a browser without a custom front end. https://securecloud.blog/2019/06/01/concept-publish-on-prem-api-using-aad-app-proxy-and-api-management-with-azure-ad-jwt-bearer-grant/

Test configuration

Application Proxy Configuration

Application Gateway Configuration

  1. Create a listener binding the certificate for the App Proxy app’s FQDN

  2. Add the IP of the Azure AD App Proxy endpoint as the back-end target

  • The logic: point the DNS to the Application Gateway instead of the App Proxy application, point the Application Gateway to that CNAME, and override the name binding in the listener of the Application Gateway
Use the name the App Proxy DNS should be pointed at

  3. Override the host name to the same name that is in the DNS (this would create a loop unless we had a different name in the back-end pool)

Now watch the back end for traffic originating through WAF + AppProxy

Back-end application receiving WAF forwarded traffic, with both App Proxy and Application Gateway headers
  • An obvious problem is that an attacker could bypass the WAF by ”gatewaying” itself with custom DNS directly to the App Proxy endpoint.
    • Obviously there is no public reference anywhere to the IP of the Azure AD App Proxy app, or to the app’s name, as the communication goes through the App GW and DNS points to the App GW. Depending on the back-end app, an attacker might figure out a simple way to get the app to ”echo” back the route (for example via headers…)
  • Sub-optimal mitigations would be (if the back-end app is configurable and you want to check in the back end whether the request came through the WAF; see the sketch below):
      • authorize only calls whose last X-Forwarded-For IP is the Application Gateway, or
      • set a ”secret” header in the Application Gateway URL rewrite rules and check for the presence of that header in the back-end app for authorization.
  • If I had to do this in production today, I would place the WAF in the internal network in front of the back-end app.
AppGW WAF headers
The app is the consent extractor, which I just used as a placeholder app (it has no contextual meaning in this scenario)
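
To illustrate those sub-optimal mitigations, below is a minimal Express sketch of the back-end check. It is an assumption-based example, not part of the original setup: the header name, its value, and the Application Gateway IP are hypothetical placeholders, and the header value would be injected by an Application Gateway rewrite rule.

// Minimal sketch of the sub-optimal mitigations: accept the request only if it
// carries the "secret" header injected by an App GW rewrite rule, or if the last
// X-Forwarded-For entry is the Application Gateway. All names/values are placeholders.
const express = require('express')
const app = express()

const WAF_HEADER = 'x-waf-secret'               // header set by the App GW rewrite rule
const WAF_SECRET = 'replace-with-long-random-value'
const APPGW_IP = '10.1.2.4'                     // Application Gateway IP seen by the back end

app.use((req, res, next) => {
    // X-Forwarded-For may contain a comma-separated list; check the last hop
    const xff = (req.headers['x-forwarded-for'] || '').split(',').map(ip => ip.trim())
    const viaAppGw = xff[xff.length - 1] === APPGW_IP
    const hasSecret = req.headers[WAF_HEADER] === WAF_SECRET

    if (!viaAppGw && !hasSecret) {
        // the request did not come through the WAF path
        return res.status(403).send('Forbidden')
    }
    next()
})

app.get('/', (req, res) => res.send('Hello through WAF + App Proxy'))
app.listen(3000)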

I will stay tuned to see if this feature gets actually implemented!

Br Joosua!

NodeJS Logging integration with Azure Log Analytics/Sentinel

If you want to send data from a NodeJS application to Log Analytics/Sentinel, you can do it using the HTTP Data Collector API.

Sending data to a Sentinel-connected Log Analytics workspace as part of an incoming request callback

Note: If your app runs in an Azure PaaS solution, you should check out App Insights first before going down this route 🙂

Writing a module for the Data Collector API

There were some existing examples for doing this, but I couldn’t get them to work quickly. Because of this, I did my own implementation with some key differences:

The signature generation is done in two phases to improve readability

  • Basically, I separated the creation of the buffer from the base64 shared key into a separate variable (var)

The function is a bit different, with callbacks and try/catch logic added

The request module handles the body payload non-stringified

I found that if I sent the body payload stringified, it wouldn’t match the signature. To get the signature to match the body payload, I added the request option json:true and sent the non-stringified JSON payload.

The module to be imported

//https://nodejs.org/api/crypto.html
//https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api
//https://stackoverflow.com/questions/44532530/encoding-encrypting-the-azure-log-analytics-authorization-header-in-node-js
const rq = require('request')
const crypto = require('crypto')
const util = require('util')

function PushToAzureLogs (content, {id, key, rfc1123date, LogType}, callback) {
    console.log(id)
    try {
        //Check that the content can be parsed as JSON
        if (JSON.parse(JSON.stringify(content))) {
            //Build the HMAC-SHA256 signature for the Data Collector API
            var length = Buffer.byteLength(JSON.stringify(content), 'utf8')
            var binaryKey = Buffer.from(key, 'base64')
            var stringToSign = 'POST\n' + length + '\napplication/json\nx-ms-date:' + rfc1123date + '\n/api/logs';
            //console.log(stringToSign)

            var hash = crypto.createHmac('sha256', binaryKey)
                .update(stringToSign, 'utf8')
                .digest('base64')
            var authorization = "SharedKey " + id + ":" + hash
            var options = {
                json: true,
                headers: {
                    "content-type": "application/json",
                    "authorization": authorization,
                    "Log-Type": LogType,
                    "x-ms-date": rfc1123date,
                    "time-generated-field": "DateValue"
                },
                //request stringifies the body itself because json:true is set
                body: content
            }
            var uri = "https://" + id + ".ods.opinsights.azure.com/api/logs?api-version=2016-04-01"

            rq.post(uri, options, (err, Response) => {
                //return if error inside try catch block
                if (err) {
                    return callback(("No data sent to LA: " + err))
                }
                callback(("Data sent to LA " + util.inspect(content) + " with status code " + Response.statusCode))
            })
        }
        //Catch error if data can't be parsed as JSON
    } catch (err) {
        callback(("No data sent to LA: " + err))
    }
}
module.exports = {PushToAzureLogs}

Example from ExpressJS

//Add your other dependencies (e.g. the 'mods' helper used below) before this
const logs = require('./SRC/laws')
//define workspace details
const laws = {
    id: 'yourID',
    key: 'yourKey',
    //evaluated once at startup; for a long-running app, generate this per request as the API expects a fresh x-ms-date
    rfc1123date: (new Date).toUTCString(),
    LogType: 'yourLogType'
}
app.get('/graph', (request, response) => {
    //not LA-specific: this is just the data I am sending to LA
    var token = mods.readToken('rt').access_token
    mods.apiCall(token, 'https://graph.microsoft.com/v1.0/me?$select=displayName,givenName,onPremisesSamAccountName', (data) => {
        console.log('reading graph', data)
        //LA object
        const jsonObject = {
            WAFCaller: request.hostname,
            identity: data.displayName,
            datasource: request.ip
        }
        console.log(jsonObject)
        //send data to LA
        logs.PushToAzureLogs(jsonObject, laws, (data) => {
            console.log(data)
        })
        //return original response
        response.send(data)
    })
})

Once the data is sent, it takes about 5-10 minutes for the first entries to show up.

If/when you attach the Log Analytics workspace to Sentinel, you can then use it to create your own hunting queries and combine your data with TI feeds, etc.

Happy hunting!