Deep Diver – Azure AD B2C – Azure Monitor integration, configuration and delegation explained

I wrote this blog because configuring sign-in and audit log export from Azure AD B2C to Azure Monitor based on the existing guide may appear complex – especially without previous knowledge of two distinct concepts: 1. the delegation model of Azure Lighthouse and 2. the tenant model introduced by B2C.

If you are well versed in both (1, 2), it doesn’t matter which guide you use for the configuration. To be honest, I think the docs.microsoft.com one will stay a bit more up to date, whereas a blog tends to be more of a snapshot of the assumptions at the time of writing. Also, since this concept was initially a bit confusing for at least me, there might be some false assumptions; if so, don’t hesitate to DM me on Twitter…

Expected knowledge after reading this blog

  1. How Azure Lighthouse and delegated access work in conjunction to expose log export settings in B2C
  2. How to export sign-in and audit logs to Azure Storage, Log Analytics, or an event hub

Post structure

  • How B2C Monitoring works
    • Tenant structure
    • MSP Delegation model in B2C monitoring
      • Creating MSP offer to myself?
      • Two or three tenants? :)…
  • Configuration
    • Prerequisites
    • Configuration 1. Create the delegation offer
    • Configuration 2. Deploy the offer
    • Configuration 3. Configure log export in B2C
  • Wrapping it up
  • Troubleshooting tips

How B2C Monitoring works

Tenant structure

If you look at the hierarchy in Azure, the B2C tenant is a resource under an Azure subscription of another Azure tenant

  • This can get confusing because typically in Azure the top-level object is the tenant – there are no sub-tenants, only management groups, subscriptions, resource groups, and finally resources.

MSP Delegation model in B2C monitoring

The monitoring of Azure AD B2C is configured using the MSP model introduced by Azure Lighthouse. Even though it’s not clearly stated, I think this is a clever way to expose a feature to a resource (B2C) that, at the ”tenant level”, does not have subscriptions as a sub-concept where the resources could exist

Creating MSP offer to myself?

If you are the sole Azure party managing both the B2C tenant and the Azure tenant in which the B2C resource exists, you end up creating an MSP offer to yourself. There is nothing wrong with this (it’s actually a pretty cool use of the MSP model), but it might be hard to grasp at first sight…

  • The root reason for this is that you can’t create MSP delegations inside the same tenant.
    • This is enforced by the fact that with B2C you always end up having two tenants anyway (creating a B2C tenant requires a subscription, which must exist in another tenant before the B2C tenant exists)
https://azure.microsoft.com/en-us/trial/get-started-active-directory-b2c/

Two or three tenants? :)…

  • Circular condition: there is no explicit reason, other than convenience, that you end up creating a delegation from a sub-resource in your tenant to the tenant hosting the sub-resource
    • You could create the delegation to a third subscription, which would break the seemingly circular condition, but this would actually make things more complex. Let me show why
    • Current model ”Two tenants”
      1. Azure Tenant hosting the B2C resource configured to expose Log Analytics feature via delegation (3) to B2C tenant (2)
      2. B2C Tenant
      3. Delegation from B2C tenant (2) to (1) Azure Tenant to access Azure Monitor (Log Analytics)
    • Other model ”Three tenants” (Haven’t tested this one…)
      1. Azure Tenant hosting the B2C resource
      2. B2C Tenant
      3. Azure Tenant configured to expose Log Analytics feature via delegation (4) to B2C tenant (2)
      4. Delegation from B2C tenant (2) to (3) Azure Tenant to access Azure Monitor (Log Analytics)

Configuration

Prerequisites

Have the following done before you start with Configuration step 1.

  • B2C tenant linked to a subscription
    • A group (principal) in the B2C tenant, which you grant access to the delegated resources exposed in the other tenant
      • Place the B2C admin user in the group
      • The most convenient way to create a group in the Azure AD B2C directory is to access it from aad.portal.azure.com (this accesses it from the Azure AD side… I know that sounds confusing, but perhaps a subject for another blog post)
  • Azure Tenant
    • Resource group which you expose to the delegation
    • Log Analytics workspace
  • Tools
    • Az PowerShell module
    • VS Code, or any other preferred editor

Screenshots for prerequisites

Configuration 1. Create the delegation offer

  • mspOfferName: any suitable string, e.g. ”SecureCloudBlog B2C management by delegation”
  • rgName is the name of the resource group you created earlier
  • managedByTenantId is the tenant ID of the B2C tenant
    • not the tenant ID of the tenant hosting the subscription
  • principalId is the object ID of the group you created in the B2C directory earlier
  • principalIdDisplayName is the display name for the corresponding use case, in my example ’Azure Monitor Access’
  • roleDefinitionId is the built-in role definition ID for Contributor access to the resource group you enable the delegation for

B2CMSPparams.JSON

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "mspOfferName": {
            "value": "SecureCloudBlog B2C management by delegation"
        },
        "rgName": {
            "value": "B2C"
        },
        "mspOfferDescription": {
            "value": "Provide Azure Monitor for B2C resource"
        },
        "managedByTenantId": {
            "value": "972103f7-60e5-4153-a8fa-1840f0f03678"
        },
        "authorizations": {
            "value": [
                {
                    "principalId": "ba2d04e4-e3cb-4b69-b898-ee3facc0bf13",
                    "principalIdDisplayName": "Azure Monitor Access",
                    "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
                }
            ]
        }
    }
}

B2CMSPtemplate.JSON

{
    "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "mspOfferName": {
            "type": "string",
            "metadata": {
                "description": "Specify the name of the offer from the Managed Service Provider"
            }
        },
        "mspOfferDescription": {
            "type": "string",
            "metadata": {
                "description": "Name of the Managed Service Provider offering"
            }
        },
        "managedByTenantId": {
            "type": "string",
            "metadata": {
                "description": "Specify the tenant id of the Managed Service Provider"
            }
        },
        "authorizations": {
            "type": "array",
            "metadata": {
                "description": "Specify an array of objects, containing tuples of Azure Active Directory principalId, a Azure roleDefinitionId, and an optional principalIdDisplayName. The roleDefinition specified is granted to the principalId in the provider's Active Directory and the principalIdDisplayName is visible to customers."
            }
        },
        "rgName": {
            "type": "string"
        }              
    },
    "variables": {
        "mspRegistrationName": "[guid(parameters('mspOfferName'))]",
        "mspAssignmentName": "[guid(parameters('rgName'))]"
    },
    "resources": [
        {
            "type": "Microsoft.ManagedServices/registrationDefinitions",
            "apiVersion": "2019-06-01",
            "name": "[variables('mspRegistrationName')]",
            "properties": {
                "registrationDefinitionName": "[parameters('mspOfferName')]",
                "description": "[parameters('mspOfferDescription')]",
                "managedByTenantId": "[parameters('managedByTenantId')]",
                "authorizations": "[parameters('authorizations')]"
            }
        },
        {
            "type": "Microsoft.Resources/deployments",
            "apiVersion": "2018-05-01",
            "name": "rgAssignment",
            "resourceGroup": "[parameters('rgName')]",
            "dependsOn": [
                "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]"
            ],
            "properties":{
                "mode":"Incremental",
                "template":{
                    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                    "contentVersion": "1.0.0.0",
                    "parameters": {},
                    "resources": [
                        {
                            "type": "Microsoft.ManagedServices/registrationAssignments",
                            "apiVersion": "2019-06-01",
                            "name": "[variables('mspAssignmentName')]",
                            "properties": {
                                "registrationDefinitionId": "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]"
                            }
                        }
                    ]
                }
            }
        }
    ],
    "outputs": {
        "mspOfferName": {
            "type": "string",
            "value": "[concat('Managed by', ' ', parameters('mspOfferName'))]"
        },
        "authorizations": {
            "type": "array",
            "value": "[parameters('authorizations')]"
        }
    }
}

Configuration 2. Deploy the offer

  1. Save the files created in Configuration step 1 to a location you can reference in the following script

Ensure you have the correct subscription selected (Select-AzSubscription works here)

$template = ".\B2CMSPtemplate.JSON"
$templateParams = ".\B2CMSPparams.json"
Connect-AzAccount -SubscriptionId "YourSubID"
New-AzDeployment -Name "B2Cmonitoring" `
                 -Location "westeurope" `
                 -TemplateFile $template `
                 -TemplateParameterFile $templateParams `
                 -Verbose 

The expected end result is as follows.

  • There might be a small delay before the delegation is available in the services

After the delegation is in place you should be able to log in with the B2C admin user and use the Azure Lighthouse navigation experience of delegated directories, including the subscriptions

https://docs.microsoft.com/fi-fi/azure/lighthouse/overview#benefits
”Azure delegated resource management: Manage your customers’ Azure resources securely from within your own tenant, without having to switch context and control planes. Subscriptions and resource groups can be delegated to specified users and roles in the managing tenant, with the ability to remove access as needed. For more info, see Azure delegated resource management.”

Configuration 3. Configure log export in B2C

Before continuing you should have a tidily packaged resource group for delegated access, containing the B2C resource and the Log Analytics workspace
  • Access the Azure AD B2C directory as a user who is in the delegation group
  • In the ’Directory + subscription’ filter you should have both directories selected, as well as the subscription where the Log Analytics workspace is, in order to successfully deploy the monitoring to the B2C tenant

Final touches

  • From sign-ins select ’Export Data Settings’
  • Select ’Add diagnostic settings’ to configure the log export settings
    • If you see an error here, review the prerequisites, or ensure the ’Directory + subscription’ filter has the correct selections

  • Select the destinations you wish to export the logs to

Wrapping it up

After successful configuration you will see Azure AD B2C events in the Log Analytics workspace.

  • Please note that in Azure AD B2C, federated login goes to AuditLogs, and local directory sign-in goes to SignInLogs
  • Events are also split between the audit and sign-in logs for some local account sign-in operations

Lighthouse and service provider settings

  • You can view the Lighthouse settings from the B2C tenant and see the Azure tenant where the Log Analytics workspace is placed as a customer
  • You can view the service provider settings in the tenant whose resources are exposed to the delegation
Mine happens to be used for multiple B2C delegations

Troubleshooting tips

The following conditions can introduce a set of seemingly terminal or intermittent errors:

  • Wrong subscription selected in any of the subscription filters
  • Trying to create the delegation within a single tenant
  • B2C resource not linked to a subscription
  • Defined principals do not exist in the target subscription
  • When browsing logs in B2C with a user who has the wrong subscription context, you get ”ERROR RETRIEVING DATA”
    • You can work around this by using a user who is in both directories, or just in the AAD directory
  • Based on my quick testing it looks like even though the MSP offering works correctly, the portal tries to fetch the registered resource providers from the B2C tenant, which is the party using the delegation. You won’t see this error if the admin account is present in both directories, besides having access to the delegation (for example the user you used to create the B2C directory)
User in both directories, using the context of the subscription
User in the B2C directory, using the MSP delegation context
The error is not about the resource provider, but about the access model
 
ERROR RETRIEVING DATA
Register resource provider 'Microsoft.Insights' for this subscription to enable this query If issue persists, please open a support ticket. Request id: 
New-AzDeployment : 8:46:49 AM - Resource Microsoft.Resources/deployments 'rgAssignment' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "BadRequest",
"message": "{\r\n \"error\": {\r\n \"code\": \"RegistrationAssignmentInvalidUpdate\",\r\n \"message\": \"Registration assignment '7df986f4-8c48-5c92-ab0c-66c7f11517c6' not allowed to update registration definition
reference.\"\r\n }\r\n}"
}
]
}
]
}
}'



Till next time! Br, Joosua

Experimental – Using Azure Function Proxy as Authenticating Reverse Proxy for NodeJS Docker App

Disclaimer: Azure Function Proxies are meant to act as proxies for the functions themselves, and as aggregators of microservice-style resources/APIs near the function proximity. If you need an actual reverse proxy, or a full-blown API gateway, then solutions such as Azure API Management, Azure AD App Proxy, Azure App GW, Kemp VLM, or just placing NGINX on your container might be the right pick.

Now that the disclaimer is out of the way, I can continue experimenting with this excellent feature without having any kind of business justification

My inspiration came from Microsoft’s similar article, which covers using a Function Proxy route to publish a certain part of a WordPress site. My angle was to see if the same approach can be used with App Service Authentication.

Obvious caveats

  • This is not necessarily the way this feature is intended to be used 🙂
  • Cold start of any Functions-type solution (maybe do the same with an App Service web app)
  • If you are running a Docker image, then why not run it in App Service in the first place?
    • If the app is something other than a Docker image and likes to live on a VM, then this approach might still be of interest

Obvious benefits

  • Deploy your reverse proxy, or API gateway, and the rules of the solution as code
    • Functions is certainly not the only solution to support this approach, but Functions integrates with VS Code and CI/CD solutions, so you end up having your solution entirely defined as re-deployable code
    • Setting reverse proxy rules is one example
  • Alternative approach for a Single Page App / static website, where the function acts as a middle-end aggregator for certain tasks that are better handled outside of the browser due to possible security concerns
    • Don’t get me wrong here… I believe you can make perfectly secure SPAs, and looking at JAMstack and the new Azure Static Web Apps offering, it seems that we are also heading that way 🙂

Background

Test environment

  • Azure VM
    • running NodeJS Express app docker image baked in VSCode’s insanely good docker extension environment
    • In the same VNET as the App Service Plan
  • Function
    • In the same VNET as the Azure VM running the docker image

Test results

  • Sign-in to the application works on fresh authentication
    • After fresh authentication the session is maintained by App Service cookies
  • When there was an existing session on Azure AD, the authorization flow for this app resulted in HTTP error 431.
    • If there were an actual use scenario I would debug this further and possibly create another redirecting function to ingest the token, which would drop the proper cookie for the subsequent sign-in
  • I haven’t tested whether there are possible issues with advanced content types; I would expect that the proxy function forwards the back-end response’s content-type (maybe a test for another blog)
  • From the TCPDump trace running on the DockerVM you can see the internal IP of the App Service
    • 07:22:53.754245 IP 172.30.10.29.54044 > 172.30.10.36.8080: Flags [.], ack 218, win 221, options [nop,nop,TS val 104639808 ecr 1486010770], length 0

Ideas for next blog?

Some delicious continuation tests for this approach could be:

  • Based on the internal headers created by the EasyAuth module:
    • Create poc for Native and Single Page Apps using Authorization Header
    • Create test scenario for using internal B2C authentication (I have app ready for this)
    • Add internal proxy routes to perform further authorization rules
    • Forward authentication tokens, or username headers, to the Docker back-end application by defining the proxy application as an external redirect target, or by using the internal function methods (see the sketch after the link below)
https://docs.microsoft.com/en-us/azure/app-service/overview-authentication-authorization
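As a rough sketch of that last idea, the back-end Express app could read the headers injected by App Service Authentication (EasyAuth). Note the assumption here: the proxy route must actually forward these headers to the back end, which I haven’t verified in this post, and the port is illustrative.

var express = require('express')
var app = express()

app.get('/', (req, res) => {
    // Injected by App Service Authentication (EasyAuth) for a signed-in user,
    // assuming the proxy route forwards them to the back end
    var userName = req.headers['x-ms-client-principal-name']
    // x-ms-client-principal is a base64-encoded JSON blob containing the user's claims
    var principal = req.headers['x-ms-client-principal']
        ? JSON.parse(Buffer.from(req.headers['x-ms-client-principal'], 'base64').toString('utf8'))
        : null
    res.json({ userName: userName || 'anonymous', claims: principal })
})

app.listen(process.env.PORT || 8080)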

Till next time

Joosua

App Service – Key Vault Vnet Service Endpoint access options explored + NodeJS runtime examples

I was recently drafting recommendations for using Azure Key Vault with App Service. While the available documentation is excellent and comprehensive, it seemed that I needed to document an overview in order to save time in the future. Otherwise I am back at deciphering some of the key configuration options, such as Azure Key Vault firewall settings, again 🙂

Important info about App service Regional VNET integration

Capabilities are very good after all.

While this blog highlights some limitations of regional VNET integration in App Service, I’d recommend that the reader compare these limitations to subscribing to a full-fledged App Service Environment. Features like limiting outbound traffic and reaching private resources inside a VNET can be achieved with other plans than the App Service Environment plan.

For further info check the excellent article at https://docs.microsoft.com/en-us/azure/azure-functions/functions-networking-options

App Service and Key Vault firewall using the ”Trusted Services” option

  • Using Key Vault references for App Service is not supported at the moment when you are calling Key Vault via a VNET service endpoint

Currently, Key Vault references won’t work if your key vault is secured with service endpoints. To connect to a key vault by using virtual network integration, you need to call Key Vault in your application code.

https://docs.microsoft.com/en-us/azure/azure-functions/functions-networking-options#use-key-vault-references

1-to-1 Relation between app service and the Subnet

  • The integration subnet can be used by only one App Service plan. What this means is that while you can have multiple web apps/functions enabled for VNET integration on the same App Service plan, they must all share the same integration subnet
  • This means that an app or function running on the App Service plan can’t be assigned to any other subnet than the one the App Service plan is already assigned to
  • Try anything else, and you get ”Adding this VNET would exceed the App Service Plan VNET limit of 1”
    • This is explained in detail in a docs issue on GitHub
”The integration subnet can be used by only one App Service plan.” https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#regional-vnet-integration

Consumption plans

Consumption plans do not support the virtual network integration required for the VNET service endpoints used in this article


Getting to the point? Regional VNET integration

This blog focuses on regional VNET integration for App Service, which is subject to the following main assumptions

  • The VNET which you select for the app service has to share the same subscription and region as the App Service plan (link)
    • The article in the link also mentions ’Resources in VNets peered to the VNet your app is integrated with’. I haven’t tested whether the same-region requirement applies here, as VNET peering works across regions.
  • Your target resources in VNETs must be in the same region as your app service
    • Is this applicable to VNET service endpoints? Based on my testing, calling a network-restricted Key Vault behind a service endpoint worked for the app service regardless of whether the Key Vault was in the same region or not, as long as the caller VNET was authorized. I believe this is an exception, or that the requirement only covers VNET-based resources, not resources behind VNET service endpoints

  • Regional VNET integration also enables you to place NSG rules on outbound traffic from your App Service function or web app
  • Virtual network integration is only meant for outbound calls from your app into your VNET, or to another resource which is behind a VNET service endpoint
  • There is another feature called ’Gateway-required VNet Integration’, which relies on P2S connections to other regions from gateway-enabled VNETs and is subject to another set of assumptions.

Example scenarios

All testing was done on Azure Key Vault Standard and a Linux-based App Service plan.

  • App service plan S1 and P1V2
  • All code, apps and secrets are created for testing purposes (run none of this stuff against anything in production)
    • for both web apps and functions
      • Node 12 LTS runtime
      • System assigned managed identity
      • Key Vault is called on specific functions defined in the application code
  • All resources on West Europe
  • App Service and VNET in same subscription and region
  • Key Vault
    • Only allows traffic from authorized VNETs, using the VNET service endpoints feature enabled on the source VNET (the App Service integration VNET)

Azure side configuration screencaps

Node JS example code for Linux App Service Plan

Calling the Node.js web app only demonstrates the connectivity to the Key Vault by fetching a list of secrets and outputting it to the screen (nobody in their sane mind would list secrets on a public website, so don’t use this code in this form against anything in production)

Expected Output from web app example

Web App

App.js

  • If you test the code, remember to update package.json to run app.js as main, not the default index.js
  • For both the function and the web app, include the request dependency in package.json
  • For the kvOpt variable in the code, remember to update the FQDN of your Key Vault (this could also use an environment variable, which you update in the app settings – see the sketch after the App.js listing)
    • Or you could add it as a query param to the code if you want to test the samples with multiple key vaults
Query Param for the global KV name (The suffix is the same)
Calling with query Param
hardcoded URL as provided in the example code
var express = require('express')
var app = express()
var {secretsList,getMsitoken,getClientCredentialsToken} = require(`${__dirname}/src/msi`)
var port = process.env.PORT || 8080
console.log(port)
app.get('/home', (req,res) => {
    //console.log(process.env)
    var apiVer = "?api-version=2016-10-01"
    var kvOpt = {
        json:true,
        uri:"https://appservicekvs1.vault.azure.net/secrets/" + apiVer,
        headers:{
           
        }
    }
      
    if (process.env['MSI_ENDPOINT']) {
        console.log('using MSI version')
        getMsitoken()
        .catch((error) => {
            return (error)
        
        }).then((data) => {
            kvOpt.headers.authorization = "Bearer " + data['access_token']
            console.log(kvOpt)
            secretsList(kvOpt).catch((error) => {
                return res.send(error)
            } ).then((data) => {
                console.log(data)
                return res.send(data)
            })
        })
    } else {
        console.log('using local version')
        getClientCredentialsToken()
        .catch((error) => {
            return (error)
        
        }).then((data) => {
            kvOpt.headers.authorization = "Bearer " + data['access_token']
            console.log(kvOpt)
            secretsList(kvOpt).catch((error) => {
                return res.send(error)
            } ).then((data) => {
                console.log(data)
                return res.send(data)
            })
        })
    }
 
})
app.listen(port, () => {
    console.log('listening on', port)
})
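
As suggested in the list above, the Key Vault host name could also come from an app setting instead of being hardcoded. A minimal, untested variant of the kvOpt construction, where KEY_VAULT_NAME is a hypothetical app setting:

var apiVer = "?api-version=2016-10-01"
// hypothetical app setting, falling back to the sample vault name used in this post
var kvName = process.env.KEY_VAULT_NAME || "appservicekvs1"
var kvOpt = {
    json: true,
    uri: `https://${kvName}.vault.azure.net/secrets/` + apiVer,
    headers: {}
}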

MSI.JS

  • Place msi.js in a folder called src
  • Populate the options of the first function only if you want to test it locally (you have to create your own app registration and add it to the access policy of the Key Vault)
var rq = require('request')
var path = require('path')
function getClientCredentialsToken () {
    return new Promise ((resolve,reject) => {
        var options = {
            json:true,
            headers:[{
            "content-type":"application/x-www-form-urlencoded" 
            }
            ],
            form: {
                grant_type:"client_credentials",
                client_id:"",
                client_secret:"",
                resource:"https://vault.azure.net"
                }
            }
        
            rq.post("https://login.microsoftonline.com/dewired.onmicrosoft.com/oauth2/token",options, (error,response) => {
            
                if (error) {
                    return reject (error)
                }
                Object.keys(response).map((key) => {
                    if (key == "body")  {
                        if (response.body.error) {return reject(response.body.error)} 
                        else if (response.body.access_token) {return resolve(response.body)} 
                        else {return resolve (response.body)}
                    }
                    
                })
               
             }
            )
    })
}
function getMsitoken () {
    return new Promise ((resolve,reject) => {
        var options = {
            json:true,
            uri: `${process.env['MSI_ENDPOINT']}?resource=https://vault.azure.net&api-version=2019-08-01`,
            headers:{
            "X-IDENTITY-HEADER":process.env['IDENTITY_HEADER']
            }
        }
        console.log(options)
        rq.get(options, (error,response) => {
            
            if (error) {
                return reject (error)
            }
            Object.keys(response).map((key) => {
                if (key == "body")  {
                    if (response.body.error) {return reject(response.body.error)} 
                    else if (response.body.access_token) {return resolve(response.body)} 
                    else {return resolve (response.body)}
                }
                
            })
            
        })
    })
}
function secretsList (kvOpt) {
    return new Promise ((resolve,reject) => {
        rq.get(kvOpt,(error,response) => {
              if (error) {
                  //console.log(error)
                    return reject(error)
                }
                Object.keys(response).map((key) => {
                    if (key == "body")  {
                        if (response.body.error) {return reject(response.body.error)} 
                        else if (response.body.access_token) {return resolve(response.body)}
                        else {return resolve (response.body)}
                    }
                    
                })
        })
     }
    
    )
   
}
module.exports={getMsitoken,getClientCredentialsToken,secretsList}

Azure Function

  • MSI.js in the src folder is the same as in the web app
  • Update the variables (kvOpt) just like in the Web App example
var {secretsList,getMsitoken,getClientCredentialsToken} = require(`${__dirname}/src/msi`)
module.exports = async function (context, req) {
    if (process.env['MSI_ENDPOINT']) {
        console.log('running MSIVersion')
        console.log('using MSI version')
        result = await getMsitoken()
        .catch((error) => {
            return context.res = {
                body:error
            };
        
        })
    
    } else {
        console.log('using local version')
        result = await getClientCredentialsToken()
        .catch((error) => {
            return context.res = {
                body:error
            };
        
        })
    }
    if (result['access_token']) {
        var apiVer = "?api-version=2016-10-01"
        var kvOpt = {
            json:true,
            uri:"https://appservicekvs1.vault.azure.net/secrets/" + apiVer,
            headers:{
                "Authorization": "Bearer " + result['access_token']
            }
        }
        console.log(kvOpt)
        var finalresult = await secretsList(kvOpt)
        .catch((error) => {
            return context.res = {
                body:error
            };
        
        })
        return context.res = {
            body:finalresult
        };
    
        }
};

Related error messages

If you have missed any of the regional VNET integration settings, or have misconfigured access policies, you might easily see any of the following errors

  1. ”Client address is not authorized and caller was ignored because bypass is set to None”.
    • Caller is not authorized in the firewall list
  2. The user, group or application ’appid=/’ does not have secrets list permission on key vault ’AppServicekvs1;location=westeurope’.
    • Caller is not authorized in the access policies

Till next time!

Br, Joosua

Deep diver – NodeJS with Azure Web apps and Azure Blob Storage SAS Authorization options

If you are working with Azure, chances are that you’ve at least indirectly consumed Azure Blob Storage at some point. Azure Storage in general is one of the elementary building blocks of almost any Azure service, and in many cases you end up dealing with storage authorization at some point. This is where SAS tokens enter the picture, and that is what this article is about.

General description of SAS tokens from @docs MSFT

A shared access signature (SAS) provides secure delegated access to resources in your storage account without compromising the security of your data. With a SAS, you have granular control over how a client can access your data. You can control what resources the client may access, what permissions they have on those resources, and how long the SAS is valid, among other parameters.

https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview

The approaches provided here include NodeJS samples, but as is maybe obvious, the approaches are fairly framework-agnostic; NodeJS is just used to provide the samples. They work regardless of the runtime/platform.

  • When you use Azure as the platform you gain the benefit of using VNET service endpoints and managed service identities for App Service based and containerized approaches
  • Other options exist (Private Link etc.)

While multiple technical approaches for storage access exist based on SAS tokens, two approaches tend to stand out.

  1. Proxy based
    • The proxy processes the authorization and business logic rules and then pipes (proxies) the blob to the requester via a SAS link stored in Table Storage (the SAS link could also be created ad hoc) / the use of Table Storage is by no means mandatory here, but it provides a convenient way to store references to SAS links
      • Even behind a proxy it makes sense to use SAS links, as this narrows access down for the particular NodeJS function to match the requester’s permissions
      • This method also allows comprehensive error handling, including retry logic and different try/catch blocks for transient Azure Storage errors.
        • Azure Storage errors, which to be honest are rare, but nonetheless can happen.
        • With the redirect-based method all error handling happens between the user’s client and the storage HTTP service itself
      • The proxy-based approach allows locking down the storage account at the network level to the web application only.
      • In this approach only the proxy should be allowed to access the storage account from a network perspective. The following options are available
        • Azure Storage firewall
          • Authorized Azure VNETs (VNET service endpoints)
          • IP address lists
        • Private Links (perhaps a subject for a separate blog)
  2. Redirect based
    • The proxy processes the authorization and business logic rules, and then redirects the requester to the blob object via a SAS link
      • After the SAS link is obtained (by the user’s browser) there is nothing to prevent the user from sending the link to another device and using it there, unless Azure AD SAS delegation or per-download IP restrictions are applied to the link.
      • Redirect based might be better if you are concerned about the complexity and overhead introduced by the proxy-based method (in the redirect-based method the Azure Storage account’s HTTP service processes the downloads, and can likely handle a large amount of concurrency)

Both of these options are explored also in @docs msft

https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview#when-to-use-a-shared-access-signature
  • It’s worth mentioning that for both of these methods/approaches a great deal of networking and authorization variations exist besides the ones presented here.

Examples

Prerequisites: SDK and dependencies

  • The storage SDK is the ’azure-storage’ SDK.
  • For the Node.js web server, the legendary ExpressJS
  • The Node.js native HTTPS API is used for creating a proxy client to pipe the client connection in the proxy-based method
  • Important dependencies for both approaches are
 "dependencies": {
    "azure-storage": "^2.10.3",
    "express": "^4.17.1",
    "jsonwebtoken": "^8.5.1",
    "jwk-to-pem": "^2.0.3",
    "jwks-rsa": "^1.6.0"
  }
https://www.npmjs.com/package/azure-storage

Samples for both approaches

  • The samples highlight the use of ExpressJS and native Node APIs to achieve either method. The Azure Storage code is abstracted into separate functions, and both methods use the same Azure Storage access functions.
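
Both samples below rely on a decode() helper for the signed-in user’s token, which isn’t included in this post. A minimal sketch based on the jsonwebtoken dependency listed above could look like the following; note that jwt.decode alone does not validate the signature, and the real samples verify the token with JWT.verify() (for example against keys fetched with jwks-rsa) before this point. The cookie name and the email claim are assumptions taken from the sample code.

var jwt = require('jsonwebtoken')

// Hypothetical helper: reads the payload of an already-verified token
// (signature validation with jwt.verify() is assumed to happen earlier)
function decode(token) {
    return jwt.decode(token) // e.g. { email: 'user@example.com', ... }
}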

Proxy based

Below is an example of an ExpressJS-based app, which has a function invoked directly for the GET verb on the route (’/receive’)

  • App Service and storage configuration
  • S1 plan for App Service
    • App Service custom DNS binding with an App Service managed certificate
  • VNET integration with a stand-alone VNET
  • Storage account v2 with the firewall set to authorize selected VNETs
  • Phase 1: authorize – the token verified by JWT.verify() must match the user entry in req.query.to
    • Return an authorization error if the signed-in user doesn’t match req.query.to
  • Phase 2: query Table Storage with req.query.to
  • Phase 3: proxy the SAS link connection
    • Pipe if the response was OK!
app.get('/receive', (req, res) => {
  var proxyClient = require('https')
  var usr = (decode(req.cookies.token).email)
  console.log(chalk.green((`${req.query.to} with ${usr}`)))
  // Phase 1 authorization the token verified by JWT.verify() must match user entry on req.query.to
  if (!usr.includes(req.query.to)) {
    // Return authorization error if signed-in user doesn't match req.query.to
    return res.send(`Authorization failed. Not logged in as recipient ${req.query.to} - Logged in as ${usr} `)
  }
  // Phase 2 Query table storage with req.query.to
  QueryTables(req.query.from, req.query.to, req.query.uid, (error, result, response) => {
    var sd = url.parse(response.body.value[0].filename).path
    // Phase 3 proxy SAS link connection
    proxyClient.get(response.body.value[0].sasLink, (proxyres) => {
      console.log(proxyres.statusCode)
      //Pipe if response was ok!
      if (proxyres.statusCode == 200) {
        var content = `attachment; filename=${sd}`
        res.setHeader('content-disposition', content)
        proxyres.on('data', (chunk) => {}).pipe(res)
      } else res.render('failed', {
        message: "Link expired, due to this SAS link cannot be verified, Server errorMsg " + response.statusMessage
      })
      proxyres.on('end', () => console.log('end'))
    }).end()
  })
})
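
The QueryTables helper is not shown in this post. As a rough sketch of what it might look like with the azure-storage TableService – the table name ’saslinks’, the connection string app setting, and the mapping of from/to/uid onto entity properties are all assumptions, not the actual implementation:

var azure = require('azure-storage')
// Hypothetical app setting holding the storage account connection string
var tableSvc = azure.createTableService(process.env.STORAGE_CONNECTION_STRING)

function QueryTables(from, to, uid, callback) {
  // Assumed entity layout: recipient as PartitionKey, unique link id as RowKey,
  // sender stored in a 'sender' property
  var query = new azure.TableQuery()
    .where('PartitionKey eq ?', to)
    .and('RowKey eq ?', uid)
    .and('sender eq ?', from)
  // Callback signature (error, result, response) matches the usage above;
  // the raw response body carries the entities, including sasLink and filename
  tableSvc.queryEntities('saslinks', query, null, callback)
}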


Redirect based

The redirect-based method is fairly simple, and essentially just uses the res.redirect() method of ExpressJS after authorizing the user

  • Phase 1: authorize – the token verified by JWT.verify() must match the user entry in req.query.to
    • Return an authorization error if the signed-in user doesn’t match req.query.to
  • Phase 2: query Table Storage with req.query.to
  • Phase 3: redirect the user to the SAS link
app.get('/redirect', (req, res) => {
  console.log('redirecting')
  var proxyClient = require('https')
  var usr = (decode(req.cookies.token).email)
  console.log(chalk.green((`${req.query.to} with ${usr}`)))
  // Phase 1 authorization the token verified by JWT.verify() must match user entry on req.query.to
  if (!usr.includes(req.query.to)) {
    // Return authorization error if signed-in user doesn't match req.query.to
    return res.send(`Authorization failed. Not logged in as recipient ${req.query.to} - Logged in as ${usr} `)
  }
  // Phase 2 Query table storage with req.query.to and redirect user to SASlink
  QueryTables(req.query.from, req.query.to, req.query.uid, (error, result, response) => {
    res.redirect(response.body.value[0].sasLink)
  })
})

Considerations for both approaches

  • For the redirect method it’s of utmost importance to keep the SAS link short-lived.
  • For the proxy method, if you store the SAS link in Table Storage (instead of creating it based on specifications stored in Table Storage) you are more locked into providing longer lifetimes for the SAS tokens.
    • Essentially you could create the SAS link with one-time (short-lived) link characteristics when Table Storage is queried for the link details

Other things:

  • Using Azure AD SAS delegation is not directly available in the SDK I am using for NodeJS.
  • In most scenarios you can replace public blob access with SAS tokens too, in cases where you have a front end (proxy) able to facilitate access via the creation of SAS links
  • Check out the excellent docs.microsoft.com best practices article on using SAS tokens
  • Creating SAS links from the SDK has so far required using the account name and key connection methods (see the sketch below).
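
As mentioned in the last bullet, here is a short sketch of creating a short-lived, read-only SAS URL with the azure-storage SDK and an account name/key; the environment variable names are placeholders:

var azure = require('azure-storage')
// Account name/key based BlobService, as mentioned above (values come from hypothetical app settings)
var blobService = azure.createBlobService(process.env.STORAGE_ACCOUNT, process.env.STORAGE_KEY)

function createShortLivedSasLink(container, blob) {
  var start = new Date()
  var expiry = new Date(start.getTime() + 5 * 60 * 1000) // roughly 5 minutes, keeping the link short-lived
  var policy = {
    AccessPolicy: {
      Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
      Start: start,
      Expiry: expiry
    }
  }
  var sasToken = blobService.generateSharedAccessSignature(container, blob, policy)
  return blobService.getUrl(container, blob, sasToken) // full https URL with the SAS appended
}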

Till next time!

Br, Joosua

Azure Functions with VSCode – Build, Test and Deploy your own GeoIP API to Azure

If you need an easy way to add GeoIP information (the geographic location of an IP) to an existing set of data, then these code and deployment samples might be just the thing for you; or maybe you just want to experiment with Azure Functions 🙂

Obviously many services allow you to check GeoIP information, ranging from simple reverse lookups to searching an IP with various website-based services. When you start to look at documented, supported and maintained APIs the list becomes smaller; this is where this blog helps.

  • Good maintained APIs exist, but for testing this is one of the best approaches

Maxmind database files

In this blog we are exploring the free option (.MMDB files), which we use to build an API without direct throttling limitations – obviously throttling and quotas are something you want in a commercial API

One of the best-known providers of GeoIP information is MaxMind. MaxMind offers two options: a paid API with a comprehensive information set, or a free option, a basic information set based on .MMDB files which provide the GeoIP dB to your apps via supported modules.

Before I delve into building the API with Azure Functions, I want to highlight that the .MMDB databases can also be used to enrich data directly as part of your application code. There is no point calling an API if you can invoke the library directly from your application without any overhead.

Where the external API approach becomes useful is when you want a more modular architecture between different services that don’t, for example, share the same code base, runtime or platform – or when you benefit from decoupling different services for a microservice-style architecture. For example, I could provide a GeoIP service for an external process which supports inline lookups of data via HTTP requests in its processing pipeline.

https://www.npmjs.com/package/maxmind (Note: the libraries themselves don’t include the .MMDB files; it may be obvious, but worth highlighting that you download and update them separately)
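
For reference, a direct in-process lookup with the maxmind package linked above can be as simple as the sketch below; the database path is an assumption, and you still download the .mmdb file separately as noted.

const maxmind = require('maxmind')

async function whereIs(ip) {
    // open() parses the database once; reuse the lookup object for further calls
    const lookup = await maxmind.open(`${__dirname}/GeoLite2-Country.mmdb`)
    return lookup.get(ip) // null if the address is not found in the database
}

whereIs('1.1.1.1').then(console.log).catch(console.error)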

If you plan to build something commercial based on the GeoLite2 databases, visit their site for terms. While my motivation is not commercial (at least directly), it’s still easy to follow their straightforward licensing term of including this snippet on this site.


This product includes GeoLite2 data created by MaxMind, available from
https://www.maxmind.com.


Prerequisites

VS Code has a great set of Azure extensions, Functions being one of them

1. Get the MMDB files

  • Download the database files from MaxMind
Select download files
  • Extract the downloaded archive to a folder from which you can later copy the .MMDB file
    • I used 7-Zip to extract it. Note that depending on your extraction tool/distro you might have to dig through two archives to get to the .MMDB file (picture example)
  • This is the archive you should see in the end

2. Create the Azure Function

  • VSCode: under functions select new project
  • VSCode: under functions new function
  • VSCode: select JavaScript
  • VScode: For template select ’HTTP Trigger’
  • Name the trigger
  • Select authorization level ’Function’, and select open in new window at step 6/6
  • Your workspace should look now like this

Sprinkle the code

If this were a more serious project, I would put all of this in a GitHub repo, but since these are just a few snippets, let’s go with this 🙂

  • In the workspace, run the following command from the integrated terminal

(No NPM init is needed as the extension takes care of it)

npm install @maxmind/geoip2-node --save

index.js

Expected content
  • Overwrite contents of index.js with the contents of the snippet below
const {getIPv4Info} = require(`${__dirname}/src/lookups`)
module.exports = async function (context, req) {
 
    var azureanswer
    if (req.headers['x-forwarded-for']) {
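        // x-forwarded-for from the Azure front end typically arrives as ip:port, so strip the port before the lookup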
        var x = req.headers['x-forwarded-for']
        azureanswer = await getIPv4Info(x.split(':')[0]).catch((error) => {return error})
    } else {azureanswer = 'Incorrect params'}
    var data = await getIPv4Info(req.query.ip).catch((error) => {return error})
    if (req.query.ip) {
        context.res = {
            headers:{
            'content-type':'text/plain; charset=utf-8'
            },
           body:data
        };
    }
    else {
        context.res = {
            status: 200,
            body: azureanswer
        };
    }
};

lookups.js

Create a new folder called ’src’ in the workspace (remember, no capital letters!)

const Reader = require('@maxmind/geoip2-node').Reader;
const path = require('path')
const fs = require('fs')
var db = path.join(__dirname + "/GeoLite2-Country.mmdb")
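// readFileSync here only verifies at startup that the .mmdb file exists (it throws early if missing); the result itself is not used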
fs.readFileSync(db)
function getIPv4Info (ip) {
    console.log('opening IPinfo')
    return new Promise ((resolve,reject) => {
        Reader.open(db, null).then(reader => {
            try {
              return resolve(reader.country(ip)) } catch { reject(`cant parse ${ip}`)
              }
          });
    })
}
/* debug here if standalone
getIPv4Info('1.1.1.1').then((data) => console.log(data)).catch((error) => console.log(error))
 */
module.exports={getIPv4Info}

Copy the .mmdb file to the src folder

Test

  • If everything is correctly in place, your workspace should look like this
  • With F5 (Windows) you can run the local version of the function
Test the function from PowerShell, or any other suitable client

3. Deploy

  • Select ’Create Function App in Azure’
  • Enter suitable name for the function
  • Select Node.JS 12 for the runtime version
  • Select Windows as the platform; this is due to the remote debugging feature with VS Code, which is very useful and exclusive to this platform choice
  • Select the consumption plan
  • Create a new resource group, or select an existing one
  • Create a new or select an existing storage account
  • If you want some good debug info, select App Insights; for this demo I chose to skip it
  • You should have as output something like this
  • Then select deploy to function app
  • This is the point where the overwrite happens, regardless of whether this was an existing or a new function app
  • Start streaming logs
  • Now fire a request by copying the function URL
  • Test the function for output.
    • With no params it uses your public IP
    • With params it uses the IP in the params
Invoke-RestMethod 'https://sentinelhelpers.azurewebsites.net/api/AzureSentinelHelper?code=codehere&ip=1.1.1.1'

From here?

  • You could add any enriching function to the app, such as VirusTotal info using their free API. The sky is the limit here 🙂

If you need to update the .MMDB files, something like this can be used in a helper function, since you get permalinks for the files after registering:

var https = require('https')
var fs = require('fs')
var key = process.env.MAXMIND_LICENSE_KEY // your MaxMind license key (the literal value was left out of the original snippet)
var uri = `https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-Country&license_key=${key}&suffix=tar.gz`
function updateDb () {
    var dbfile = fs.createWriteStream(`${__dirname}/geoLite2.tar.gz`)
    https.get(uri, (res) => {
        res.pipe(dbfile)
        dbfile.on('close', () => console.log('file write finished') )
        }).on('finish',() => console.log('download finish')).end()   
}
updateDb()
module.exports={updateDb}

Till next time!

Br, Joosua