Experimental testing: Azure AD Application Proxy With Azure Application Gateway WAF

Disclaimer: This configuration example is only for experimental testing. I’d advise against using it in any kind of serious scenario, as the configuration has no official support …and is based on on-whim testing 🙂

I was recently browsing the feedback forum for Azure AD Application Proxy, and noticed that I am not the only one who would like to see WAF functionality enabled for AAD App Proxy.

The comment for ”Under Review” raised my curiosity: ”We are reviewing options for creating smoother integration and providing documentation on how to layer the two.”
https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/31964980-allow-azure-ad-app-proxy-apps-to-use-the-azure-web

While it’s fairly easy to retrofit a WAF into an API scenario with Azure AD App Proxy and API Management, it’s another thing to also make it render web pages in a browser without a custom front end. https://securecloud.blog/2019/06/01/concept-publish-on-prem-api-using-aad-app-proxy-and-api-management-with-azure-ad-jwt-bearer-grant/

Test configuration

Application Proxy Configuration

Application Gateway Configuration

  1. Create a listener binding the certificate for the App Proxy application’s FQDN

  2. Add the address of the Azure AD App Proxy application as the back-end target

       • The logic: point DNS to the Application Gateway instead of the App Proxy application, point the Application Gateway’s back-end pool to the App Proxy CNAME, and bind the public name (the one App Proxy DNS should be pointed at) in the Application Gateway listener

  3. Override the host name to the same name that is in DNS (this would create a loop, if we didn’t have a different name in the back-end pool; see the sketch after this list)
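
For reference, here is the same back-end configuration sketched with the Az PowerShell module. This is a minimal sketch only: the gateway, resource group, and FQDN names are placeholders, and it assumes the Application Gateway and the App Proxy application already exist.

$appgw = Get-AzApplicationGateway -Name "waf-appgw" -ResourceGroupName "rg-waf"

# 2. Back-end target: the App Proxy application's msappproxy.net address
Add-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appgw `
    -Name "appproxy-pool" -BackendFqdns "yourapp-yourtenant.msappproxy.net"

# 3. Override the host name to the public FQDN that DNS now points at the gateway,
#    so App Proxy still recognizes the application (no loop, because the pool
#    itself targets the msappproxy.net name)
Add-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw `
    -Name "appproxy-https" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -HostName "app.yourdomain.com"

Set-AzApplicationGateway -ApplicationGateway $appgw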

Now watch the back end for traffic coming in through the WAF + App Proxy chain

Back-end application receiving WAF-forwarded traffic, with both App Proxy and Application Gateway headers
  • The obvious problem is that an attacker can bypass the WAF by ”gatewaying” themselves directly to the App Proxy application with custom DNS.
    • Obviously there is no public reference anywhere to the IP or name of the Azure AD App Proxy app, as the communication goes through the App GW and DNS points to the App GW. Still, depending on the back-end app, an attacker might figure out a simple way to get the app to ”echo” back the route (for example via headers…)
  • Sub-optimal mitigations would be (if the back-end app is configurable and you want to check in the back end that the request came from the WAF):
      • Authorize only the calls whose last X-Forwarded-For IP is the Application Gateway.
      • Or set a ”secret” header with Application Gateway rewrite rules, and check for the presence of that header in the back-end app for authorization (see the sketch after this list).
  • If I had to do this in production today, I would place a WAF in the internal network in front of the back-end app.
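
A minimal sketch of the ”secret” header option with the Az PowerShell module (App GW v2 only; the names and the header value are placeholders, and the rule set still has to be attached to a routing rule, which is not shown):

$appgw = Get-AzApplicationGateway -Name "waf-appgw" -ResourceGroupName "rg-waf"

# Inject a request header only the Application Gateway knows about
$header = New-AzApplicationGatewayRewriteRuleHeaderConfiguration `
    -HeaderName "X-WAF-Shared-Secret" -HeaderValue "some-long-random-value"

$actionSet = New-AzApplicationGatewayRewriteRuleActionSet -RequestHeaderConfiguration $header

$rule = New-AzApplicationGatewayRewriteRule -Name "inject-waf-secret" -ActionSet $actionSet

Add-AzApplicationGatewayRewriteRuleSet -ApplicationGateway $appgw `
    -Name "waf-origin-proof" -RewriteRule $rule

Set-AzApplicationGateway -ApplicationGateway $appgw

# The back-end app then authorizes only requests carrying X-WAF-Shared-Secret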
AppGW WAF headers
The app shown is the consent extractor, which I just used as a placeholder app (it has no contextual meaning in this scenario)

I will stay tuned to see if this feature actually gets implemented!

Br Joosua!

NodeJS Logging integration with Azure Log Analytics/Sentinel

If you want to send data from a NodeJS application to Log Analytics/Sentinel, you can do it by using the HTTP Data Collector API.

Sending data to a Sentinel-connected Log Analytics workspace as part of an incoming request callback

Note: If your app is an Azure PaaS solution, you should check out AppInsights first before going down this route 🙂

Writing a module for the Data Collector API

There were some existing examples for doing this, but I couldn’t get them to work in a quick breeze. Due to this I did my own implementation, with some key differences:

Signature generation is done in two phases to improve readability

  • Basically, I separated decoding the base64 shared key into a buffer into its own variable (var)

The function is a bit different, with callbacks and try/catch logic added

The request module will handle the body payload as non-stringified

I found that if I sent the body payload stringified, it wouldn’t match the signature. To get the signature to match the body payload, I added the request option json:true and sent the non-stringified JSON payload.

The module to be imported

//https://nodejs.org/api/crypto.html
//https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api
//https://stackoverflow.com/questions/44532530/encoding-encrypting-the-azure-log-analytics-authorization-header-in-node-js
const rq = require('request')
const crypto = require('crypto')
const util = require('util')
function PushToAzureLogs (content,{id,key,rfc1123date,LogType}, callback) {
    console.log(id)
    try {
        //Checking if the data can be parsed as JSON
        if ( JSON.parse(JSON.stringify(content)) ) {
            var length = Buffer.byteLength(JSON.stringify(content),'utf8')
            var binaryKey = Buffer.from(key,'base64')
            var stringToSign = 'POST\n' + length + '\napplication/json\nx-ms-date:' + rfc1123date + '\n/api/logs';
            //console.log(stringToSign)
    
            var hash = crypto.createHmac('sha256',binaryKey)
            .update(stringToSign,'utf8')
            .digest('base64')
            var authorization = "SharedKey "+id +":"+hash
            var options = {
                json: true,
                headers: {
                    "content-type": "application/json",
                    "authorization": authorization,
                    "Log-Type": LogType,
                    "x-ms-date": rfc1123date,
                    "time-generated-field": "DateValue"
                },
                body: content
            }
            var uri = "https://"+ id + ".ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
    
            rq.post(uri, options, (err, Response) => {
                //return if error inside try catch block
                if (err) {
                    return callback(("No data sent to LA: " + err))
                }
                callback(("Data sent to LA " + util.inspect(content) + " with status code " + Response.statusCode))

            })
    
        }
        //Catch error if data cant be parsed as JSON
    } catch (err) {
        callback(("Not data sent to LA: " + err))
    }
           
}
module.exports={PushToAzureLogs}

Example from ExpressJS

//Add your other dependencies before this
const logs = require('./SRC/laws')
//define workspace details
const laws = {
    id: 'yourID',
    key: 'yourKey',
    rfc1123date: (new Date()).toUTCString(),
    LogType: 'yourLogType'
}
app.get('/graph', (request,response) => {
    //refresh the signed date for this request, otherwise the x-ms-date set at startup goes stale
    laws.rfc1123date = (new Date()).toUTCString()
    //not related to LA, this is the data I am sending to LA
    var token = mods.readToken('rt').access_token
    mods.apiCall(token,'https://graph.microsoft.com/v1.0/me?$select=displayName,givenName,onPremisesSamAccountName', (data) => {
        console.log('reading graph', data)
        //LA object
        const jsonObject = {
            WAFCaller: request.hostname,
            identity: data.displayName,
            datasource: request.ip
        }
        console.log(jsonObject)
        //send data to LA
        logs.PushToAzureLogs(jsonObject, laws, (data) => {
            console.log(data)
        })
        //return original response
        response.send(data)
    })
})

Once the data is sent, it will take about 5-10 minutes for the first entries to start popping up
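
To check from PowerShell that the entries have landed, here is a minimal sketch with the Az.OperationalInsights module (the workspace names are placeholders; a custom log type shows up as a table named LogType_CL):

$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName "rg-logs" -Name "my-workspace"

# Query the custom table created by the Data Collector API
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $ws.CustomerId `
    -Query "yourLogType_CL | take 10"
$result.Results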

If/when you attach the Log Analytics workspace to Sentinel, you can then use it to create your own hunting queries, and combine the data you have with TI feeds etc.

Happy hunting!

Azure Log Analytics – Permission Models

This blog post is all about Log Analytics workspace (later referred to as LA) permission models, which changed in May 2019. The options (at the time of writing) for granting permissions are:

  • Grant access using Azure role-based access control (RBAC).
  • Grant access to the workspace using workspace permissions.
  • Grant access to a specific table in the workspace using Azure RBAC.

These new options give more granular control for granting permissions to Log Analytics workspaces than the old model. That said, I still see a place for granting workspace permissions. It comes back to the Log Analytics workspace design and the organization’s needs: how operating teams, development teams, and the security organization need to access performance and security-related events.

There are basically two approaches for Log Analytics design:

  • Centralized
  • Siloed

If you have chosen the centralized model, more granular access controls might be needed, and you can leverage the new access control modes (resource-centric RBAC & table-level RBAC).

Access control mode

The new access model is automatically configured for all workspaces created after March 2019.

How to check which access model is in use?

Navigate to the Azure Log Analytics overview page, where you can find the ”Access control mode” setting for the workspace.

  • If you are using the legacy model: ”Access control mode = Require workspace permissions”
  • If the new model is in place: ”Access control mode = Use resource or workspace permissions”

With PowerShell

  • If the output is true – Log Analytics is using the new access model
  • If the output is false or null – Log Analytics is using the legacy model
Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {$_.Name + ": " + $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions}

How to change access model?

The access model can be changed directly from the resource properties and verified from the Activity log.

If PowerShell is your favorite tool for changes, here you go (code from docs.microsoft.com):

$WSName = "my-workspace"
$Workspace = Get-AzResource -Name $WSName -ExpandProperties
if ($Workspace.Properties.features.enableLogAccessUsingOnlyResourcePermissions -eq $null)
    { $Workspace.Properties.features | Add-Member enableLogAccessUsingOnlyResourcePermissions $true -Force }
else
    { $Workspace.Properties.features.enableLogAccessUsingOnlyResourcePermissions = $true }
Set-AzResource -ResourceId $Workspace.ResourceId -Properties $Workspace.Properties -Force

New Model In Action

The scenario

A siloed Log Analytics workspace design is in use, and operational team members need to maintain virtual machines. In practice, a person needs to read virtual machine metrics and performance data from LA.

If the new access model is in use, members of the team can access the needed logs directly from the resource blade. They cannot see the LA workspace at all, because they don’t have read access to the workspace resource itself. If a user browses to Azure Monitor, the user can see the logs he/she has access to.

When log search is opened from a resource menu, the search is automatically scoped to that resource and resource-centric queries are used. This means that if users have access to a resource, they’ll be able to access their logs.

Built-in Roles (from docs.microsoft.com)

Azure has two built-in user roles for Log Analytics workspaces:

  • Log Analytics Reader
  • Log Analytics Contributor

Members of the Log Analytics Reader role can:

  • View and search all monitoring data
  • View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.

Nothing prevents you from using the built-in roles, but keep in mind that these are top-level permissions. If a user has global read permission through the standard Reader or Contributor roles (which include the */read action), it will override per-table access control and give them access to all log data.
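
If you do want to use them, assigning a built-in role is a one-liner with Az PowerShell (a minimal sketch; the user and the scope below are placeholders):

New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Log Analytics Reader" `
    -Scope "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP/providers/Microsoft.OperationalInsights/workspaces/WORKSPACE_NAME"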

Table Level Access

More granular control can be achieved with table-level access in case it’s needed. If the centralized Log Analytics model is the chosen scenario, it might be necessary to grant different teams permissions to read data from specific LA tables. This can be achieved with table-level RBAC permissions.

With table-level RBAC you can grant permissions only to the needed LA tables.
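
As a sketch of what such a role can look like with Az PowerShell (the Heartbeat table and the scope are placeholders; the per-table action format is documented at docs.microsoft.com):

# Clone an existing role as a template and narrow it to a single table
$role = Get-AzRoleDefinition "Log Analytics Reader"
$role.Id = $null
$role.Name = "LA Reader - Heartbeat table only"
$role.Description = "Read a single Log Analytics table"
$role.Actions.Clear()
# Table-level read: Microsoft.OperationalInsights/workspaces/query/TABLENAME/read
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/read")
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/query/read")
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/query/Heartbeat/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/SUBSCRIPTION_ID")
New-AzRoleDefinition -Role $role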

Considerations

If a user has global read permission (Reader or Contributor roles, which include the */read action), it will override the per-table access control and give them access to all log data.

If a user has access to a specific LA table but no other permissions, they can access log data through the API but not from the Azure portal.

Summary

In most cases I have worked with, a multi-homed Log Analytics design has been used, and the workspace-centric model has been enough to satisfy the needs. If a centralized model (as few LAs as possible) were in use, the new access model would give more granularity.

Enable Microsoft Security Graph Alerts in Log Analytics

While it’s not yet configurable in the GUI, you can already today configure (with the proper prerequisites) the preview integration between the Security Graph API and Log Analytics.

Side note: a similar integration can be done for Event Hub, and while enabling the LA integration it’s recommended to enable the EH integration too (if an EH exists in the tenant)
https://docs.microsoft.com/en-us/graph/security-qradar-siemintegration


Alerts today include the following (check the link for the original reference):

https://docs.microsoft.com/en-us/graph/api/resources/security-api-overview?view=graph-rest-1.0

I will avoid using the term Azure Monitor here, as this integration is not really, in my opinion, tapping into any resource API in Azure Monitor (unless you consider Azure Monitor as an umbrella term).

AAD Identity Protection Alert
Same alert in Log Analytics
The target in event hub (I will do a separate blog about ELK integration)

The integration also includes Event Hub, which has been available for a good while already via a similar method.

Before proceeding: as always, read the disclaimer at the bottom of the page. I put the disclaimer up here also to emphasize its importance, not just as an escape clause.

How To: integration with Log Analytics

  • Disclaimer: the behavior may vary per tenant
  • It took a few hours until events started popping up in Log Analytics and the event hub
  • To enable the integration, you can use existing AAD tokens in the cache
    • I’ve usually preferred the ADAL libraries to do this a bit more simply, but in this case we want to use the AzureRM module to populate parts of the script easily (done similarly here)


Login-AzureRmAccount
$ctx =Get-AzureRmContext
Get-AzureRmSubscription | Out-GridView -PassThru |Set-AzureRmContext

$rmEndpoint ="https://management.azure.com/providers/Microsoft.SecurityGraph/diagnosticSettings/securityApiAlerts?api-version=2017-04-01-preview"

#Details from IAM API
$Token = Invoke-RestMethod "https://login.windows.net/common/oauth2/token" -Method POST -Body @{
   grant_type="refresh_token"
   refresh_token = ($ctx.TokenCache.ReadItems() | Out-GridView -PassThru ).refreshToken
   content_type = "application/json"
   resource="https://management.core.windows.net/"
}

$json = '
{
  "location": "",
  "properties": {
      "workspaceId": null,
    "name": "securityApiAlerts",
    "serviceBusRuleId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP/providers/Microsoft.EventHub/namespaces/EVENT_HUB_NAMESPACE/authorizationrules/RootManageSharedAccessKey",
    "logs": [
      {
        "category": "Alert",
        "enabled": true,
        "retentionPolicy": {
          "enabled": true,
          "days": 7
        }
      }
    ]
  }
}
' | ConvertFrom-Json 

$json.properties.serviceBusRuleId = ( (Get-AzureRmResource | where {$_.ResourceType -match "Eventhub"}| Out-GridView -PassThru).resourceID  + "/authorizationrules/RootManageSharedAccessKey")

$json.properties.workspaceId = ((Get-AzureRmOperationalInsightsWorkspace | Out-GridView -PassThru ).resourceID)

$data = Invoke-RestMethod -Uri $rmEndpoint -Method Put -Body ($json | ConvertTo-Json -Depth 4) -UseBasicParsing -Headers @{
Authorization = ($Token.token_type) +" "+ ($Token.access_token)
} -ContentType "application/json; charset=utf-8"
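
As an optional sanity check, you can try reading the setting back from the same preview endpoint. A sketch reusing the variables above, assuming the endpoint supports GET the way regular diagnostic settings do:

# Read back the diagnostic setting we just created
Invoke-RestMethod -Uri $rmEndpoint -Method Get -Headers @{
    Authorization = ($Token.token_type) + " " + ($Token.access_token)
}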



Comment from @samilamppu

”It’s quite likely that you need to have Security Center Standard with proper event coverage enabled to get the node for security alerts under LA.”

Br,

Joosua

Best tip ever for Azure Security – Security Center built-in policies

Ever since the inception of ASC (Azure Security Center) security policies, I’ve been recommending attaching the security policies to a subscription or a management group.

The best part of this is that the deployment is handled for you by ASC, if you’ve allowed/configured the ASC policies in the first place.

  • On a side note: once you get comfy with policies, you’ll want to add region restrictions + a bunch of best-practice policies, but that shall be part of another blog post.

ASC’s default policy initiative

ref: https://docs.microsoft.com/en-us/azure/security-center/security-center-azure-policy 

With ASC’s default policy initiative you get to audit and monitor the following controls proactively:

  • Compute And Apps (14 out of 14 policies enabled)
  • Data (12 out of 12 policies enabled)
  • Identity (10 out of 10 policies enabled)


How to assign ASC’s default policy initiative?

If for some reason this isn’t set up for you, you might want to check the following setting in Security Center:

  • Once you’ve acknowledged and understood how your inheritance and ASC plan are configured, you can enable the policies with one simple control, ”Assign Security Policy” (a scripted sketch follows below)
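
If you’d rather script the assignment, here is a minimal sketch with the Az PowerShell module (the display-name filter is an assumption, so verify the initiative’s name in your tenant, and replace the scope placeholder):

# Find the ASC default initiative and assign it at subscription scope
$initiative = Get-AzPolicySetDefinition | Where-Object { $_.Properties.displayName -like "*ASC Default*" }

New-AzPolicyAssignment -Name "asc-default" -DisplayName "ASC Default Initiative" `
    -PolicySetDefinition $initiative -Scope "/subscriptions/SUBSCRIPTION_ID"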

Once the policies start, you’ll begin to see the results of the evaluation

  • 289 resources evaluated 🙂 – How great is this!


Highly recommended!

Br, Joosua