Microsoft 365 – Security Monitoring

Disclaimer: This is a very high-level post on M365 security monitoring; the technical details are left for later blog posts. It doesn't cover all products and possible integrations in the Microsoft cloud ecosystem, and is more of a starting point for a journey of evaluating possible security solutions.

Security monitoring is a topic I have been working on with my colleagues (@santasalojh & @pitkarantaM) for the last two years. During that time we have helped many organizations gain better visibility into their cloud security. Now it's time to share some thoughts around this topic, starting from the root and digging deep into the tech side.

Setting Up The Scene

Logging and monitoring is a huge topic in the Microsoft cloud ecosystem, so in this post I will concentrate on M365 security monitoring and alerts (quite obvious for a cyber-security expert), not on metrics. Also, I would like to highlight again that this blog post stays at a very high level and leaves the technical details for later posts.

The questions I have heard quite often from customers are:

  • Which native Microsoft tools should I use for monitoring the security of the cloud environment?
  • How can/should I manage all the alerts in the ecosystem (easily)?
  • Should I use 3rd party tools for security monitoring?

Unfortunately, I have to say that it depends on many things: the licenses, which tools you have in your toolbox, how much the organization utilizes Microsoft cloud workloads, the maturity of the organization or service provider, other cloud service providers' tools in use, and so on.

Microsoft offers a brilliant set of cloud security solutions; here are a few of them:

  • Azure Security Center
  • Microsoft 365 Security Center
  • Azure AD Identity Protection
  • Microsoft Defender ATP
  • Azure ATP & O365 ATP
  • Cloud App Security
  • Azure Sentinel

Architecture

The Microsoft cyber-security reference architecture is the document to start with when an organization is planning its cyber-security architecture in the Microsoft environment. At first glance it looks a bit crowded, but once you get familiar with it, it is very useful. What's covered here are the components inside the yellow circle, the Security Operations Center (SOC) part.

Internal Cloud Integrations

When planning security monitoring in the Microsoft cloud, the integrations (+ licenses) play an important role in getting the most out of the security solutions. Some of the integrations are already in place by default, but most of them need to be established by an admin.

Integration Architecture – Example

The picture below doesn’t cover all possible security solutions and integration scenarios, it rather gives overall understanding which solutions can be used to investigate alerts and suspicious activity in the cloud or on-premises.

The best synergy advantages come with the integrations between security solutions. In the top category are the solutions which, in my opinion, are the best ones to start the investigation from.

Naturally, if Sentinel is in use, it triggers the alert and the investigation starts from there. It could also be replaced by a 3rd party SIEM (Splunk, QRadar, etc.). Both Sentinel and Cloud App Security have a rich set of investigation capabilities and contain a wealth of data about user identity, device identity, and network traffic.

If you are wondering why the investigation doesn't start from Azure Security Center or M365 Security Center, the reason is that alerts from these solutions can be found in, or sent to, the SIEM (in this example, Sentinel).

Investigating The Alerts

I highly encourage using the SIEM (Sentinel) or MCAS as the starting point of an investigation. Deep-dive analysis can be done in the alert source itself, for example in MDATP if the initial alert was generated there.

Azure Sentinel

Sentinel is a fully cloud-based SIEM solution that also offers SOAR capabilities. It provides a single pane of glass for alert detection, threat visibility, proactive hunting, and threat response, including Azure Workbooks & Jupyter Notebooks, which can be used in advanced threat hunting and investigation scenarios.

Cloud App Security

Microsoft Cloud App Security (MCAS) is a Cloud Access Security Broker that supports various deployment modes, including log collection, API connectors, and reverse proxy. MCAS has UEBA capabilities and, as I have said many times, it is, in my opinion, the best tool in the Microsoft ecosystem for investigating suspicious, and possibly malicious, internal user activity.

Intelligent Security Graph (ISG)

According to Microsoft, to be successful with threat intelligence you must have a large, diverse set of data, and you have to apply it to your processes and tools.

The data sources include specialized security sources, insights from dark markets (criminal forums), and learnings from incident response engagements. Key takeaways from the slide:

  • Products send data to graph
  • Products use Interflow APIs to access results
  • Products generate data which feeds back into the graph

In later blog posts, I will dig more deeply into the Security Graph functionalities. At the time of writing, the following solutions are providers to the ISG (GET & PATCH):

  • Azure Security Center (ASC)
  • Azure AD Identity Protection (IPC)
  • Microsoft Cloud App Security
  • Microsoft Defender ATP (MDATP)
  • Azure ATP (AATP)
  • Office 365
  • Azure Information Protection
  • Azure Sentinel

Integration with ISG makes sense if you are using an on-prem SIEM and you don't want to pull all of the logging and monitoring data from the cloud to on-premises. Also, ISG contains the processed alerts from the providers.
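
To give an idea of what consuming the graph looks like, below is a minimal sketch of pulling processed alerts through the Microsoft Graph Security API, using the same request library as the NodeJS post later on. Token acquisition is left out, the $filter is just an example, and getGraphSecurityAlerts is my own placeholder name.

//A minimal sketch: GET processed ISG alerts via the Microsoft Graph Security API
//Assumes an OAuth2 access token with the SecurityEvents.Read.All permission
const rq = require('request')

function getGraphSecurityAlerts (accessToken, callback) {
    var options = {
        json: true,
        headers: { authorization: 'Bearer ' + accessToken },
        //Example filter: only high-severity alerts from the provider products
        url: "https://graph.microsoft.com/v1.0/security/alerts?$filter=severity eq 'high'"
    }
    rq.get(options, (err, response, body) => {
        if (err) return callback(err)
        //body.value holds the alerts aggregated from the ISG providers
        callback(null, body.value)
    })
}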

Note: During my tests, I was not able to update alerts across security products, even though Microsoft's documentation says it's supported. This is still under investigation, and I will address the topic in a later post.
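
For reference, here is a minimal sketch of what such an update is supposed to look like according to the documentation; this is the kind of PATCH call that did not behave as documented in my tests. The alert ID, token handling, and provider/vendor values are placeholders.

//A minimal sketch: PATCH an alert's status via the Microsoft Graph Security API
const rq = require('request')

function updateGraphSecurityAlert (accessToken, alertId, callback) {
    var options = {
        json: true,
        headers: { authorization: 'Bearer ' + accessToken },
        url: 'https://graph.microsoft.com/v1.0/security/alerts/' + alertId,
        body: {
            status: 'resolved',
            //The docs require the original provider/vendor info in every update
            vendorInformation: { provider: 'IPC', vendor: 'Microsoft' }
        }
    }
    rq.patch(options, (err, response) => {
        if (err) return callback(err)
        //A 204 No Content response indicates a successful update
        callback(null, response.statusCode)
    })
}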

Conclusion

The best synergy advantages from the security solutions come with the integrations between the products. Even if your organization uses a 3rd party SIEM, the internal cloud integrations between the solutions are very beneficial.

Integrations between cloud and SIEM systems are one of the topics covered later on in technical posts.

Until next time!

Post: Create Logic App for Azure Sentinel/Log Analytics

While I've browsed the excellent TechCommunity article about custom connectors, until now I've used my own HTTP client implementation for connectors against the Log Analytics HTTP Data Collector.

All I can say is that I am getting seriously spoiled by Logic Apps and the Data Collector Connector…

  • Generate the payload from the app (see the sketch after this list)
  • Watch Logic App ingest the payload
  • Check the content from Log Analytics
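
As a minimal sketch of the first step, posting a payload from the app to the Logic App's HTTP trigger could look roughly like this. The trigger URL is a placeholder; copy the real one from the "When a HTTP request is received" trigger in the Logic App designer.

//A minimal sketch: generate a payload and post it to the Logic App HTTP trigger
const rq = require('request')

var payload = {
    app: 'demo-app',
    event: 'user-login',
    DateValue: (new Date()).toISOString()
}

rq.post({
    //Placeholder URL: copy the real trigger URL from the Logic App designer
    url: 'https://prod-00.westeurope.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke',
    json: true,
    body: payload
}, (err, response) => {
    if (err) return console.error('Post to Logic App failed: ' + err)
    //202 Accepted means the run was queued; the Data Collector action inside
    //the Logic App then handles the Log Analytics ingestion
    console.log('Logic App responded with status ' + response.statusCode)
})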

Br, Joosua

NodeJS Logging integration with Azure Log Analytics/Sentinel

If you want to send data from a NodeJS application to Log Analytics/Sentinel, you can do it by using the HTTP Data Collector API.

Sending data to a Sentinel-connected Log Analytics workspace as part of an incoming request callback

Note: If your app is an Azure PaaS solution, you should check out AppInsights first before going down this route 🙂

Writing a module for the Log Collector API

There were some existing examples for doing this, but I couldn't get them to work quickly. Because of this, I did my own implementation with some key differences:

  • The signature generation is done in two phases to improve readability: basically, I separated decoding the base64 shared key into a buffer into its own variable (var).
  • The function is a bit different, with callbacks and try/catch logic added.
  • The Request module handles the body payload non-stringified: I found that if I sent the body payload stringified, it wouldn't match the signature. To get the signature to match the body payload, I added the request option json:true and sent the non-stringified JSON payload.

The module to be imported

//https://nodejs.org/api/crypto.html
//https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api
//https://stackoverflow.com/questions/44532530/encoding-encrypting-the-azure-log-analytics-authorization-header-in-node-js
const rq = require('request')
const crypto = require('crypto')
const util = require('util')

function PushToAzureLogs (content, {id, key, rfc1123date, LogType}, callback) {
    try {
        //Check that the data can be serialized as JSON
        if (JSON.parse(JSON.stringify(content))) {
            //Phase 1: decode the base64 workspace key into a buffer
            var length = Buffer.byteLength(JSON.stringify(content), 'utf8')
            var binaryKey = Buffer.from(key, 'base64')
            var stringToSign = 'POST\n' + length + '\napplication/json\nx-ms-date:' + rfc1123date + '\n/api/logs'

            //Phase 2: sign the string with HMAC-SHA256 and build the authorization header
            var hash = crypto.createHmac('sha256', binaryKey)
                .update(stringToSign, 'utf8')
                .digest('base64')
            var authorization = 'SharedKey ' + id + ':' + hash

            var options = {
                //json:true makes request send the body non-stringified, so it matches the signature
                json: true,
                headers: {
                    'content-type': 'application/json',
                    'authorization': authorization,
                    'Log-Type': LogType,
                    'x-ms-date': rfc1123date,
                    'time-generated-field': 'DateValue'
                },
                body: content
            }
            var uri = 'https://' + id + '.ods.opinsights.azure.com/api/logs?api-version=2016-04-01'

            rq.post(uri, options, (err, Response) => {
                //Return via the callback if the request itself fails
                if (err) {
                    return callback(('No data sent to LA: ' + err))
                }
                callback(('Data sent to LA ' + util.inspect(content) + ' with status code ' + Response.statusCode))
            })
        }
    //Catch the error if the data can't be parsed as JSON
    } catch (err) {
        callback(('No data sent to LA: ' + err))
    }
}
module.exports = {PushToAzureLogs}

Example from ExpressJS

//Add your other dependencies before this
const logs = require('./SRC/laws')
//Define the workspace details
//Note: rfc1123date is set once at startup; in a long-running app, refresh it per request
const laws = {
    id: 'yourID',
    key: 'yourKey',
    rfc1123date: (new Date()).toUTCString(),
    LogType: 'yourLogType'
}

app.get('/graph', (request, response) => {
    //Not related to LA: this is the data I am sending to LA
    //(mods is my own helper module for tokens and Graph calls)
    var token = mods.readToken('rt').access_token
    mods.apiCall(token, 'https://graph.microsoft.com/v1.0/me?$select=displayName,givenName,onPremisesSamAccountName', (data) => {
        console.log('reading graph', data)
        //Build the LA object
        var jsonObject = {
            WAFCaller: request.hostname,
            identity: data.displayName,
            datasource: request.ip
        }
        console.log(jsonObject)
        //Send the data to LA
        logs.PushToAzureLogs(jsonObject, laws, (result) => {
            console.log(result)
        })
        //Return the original response
        response.send(data)
    })
})

Once the data is sent, it will take about 5-10 minutes for the first entries to pop up. The custom log appears in the workspace as a table named after your LogType with a _CL suffix (e.g. yourLogType_CL).

If/when you attach the Log Analytics workspace to Sentinel, you can then use the data to create your own hunting queries and combine it with TI feeds, etc.

Happy hunting!