Deep Diver – Azure AD Groups/Roles claims for developers and IT pros, with code examples


Many enterprise applications rely on group/role information being passed in assertions for authorization and further role decisions. Over the last three to five years these applications have been moving to the cloud, or have at least seen parts of their authorization middleware upgraded to support SAML, OAuth2, or both. Judging by how rich the group claim options are in Azure AD, I'd say Microsoft is investing heavily in making the configuration options cover all imaginable scenarios.

The short version of this blog is:

Prefer SAML when:

  • The application relies on getting user and group information, including transformed claims, from the IdP in the initial token response.
  • The application doesn't need further information from other Azure AD APIs acquired by background flows using token delegation.
  • You want to transform claims for both user and group information in the GUI.
  • You want to do as much configuration and maintenance as possible in the GUI.

Prefer OAuth2 when:

  • The application calls multiple Azure AD APIs using token delegation, or redirect-based flows after the initial authentication.
  • The application needs to combine information from Azure AD APIs beyond what is held in user attributes and groups.
  • While claims transformations aren't supported in the token itself, you can do basically whatever combining you need once you have the initial token.
  • The application needs multiple and complex group claim rules, and you can implement these in the back-end (Azure AD allows only a single 'add group claims' rule in a SAML app). Note that this limit applies in OAuth2 as well, but in OAuth2 you do the extra work after receiving the initial token.

Mix it together

  • In complex scenarios you might need to combine a mix of these approaches, or decide to lean heavily towards OAuth2. As I am mostly working from a dev perspective I tend to prefer OAuth2 with or without OIDC, but in this blog I highlight the benefits of both approaches.
    • ”For applications that do interactive browser-based sign-in to get a SAML assertion and then want to add access to an OAuth protected API (such as Microsoft Graph), you can make an OAuth request to get an access token for the API. When the browser is redirected to Azure AD to authenticate the user, the browser will pick up the session from the SAML sign-in and the user doesn’t need to enter their credentials.”

Useful information before proceeding

I've written a few articles about Application Proxy, in case your applications remain on-premises but you need to modernize their access approach towards Zero Trust.

Eating the elephant

Since there are so many scenarios involving groups and roles, I've tried to distill them into a table to help with evaluation, and provided some examples and considerations to back these scenarios.

Note that in Azure AD a Group is not explicitly the same as a Role, but the two can be mixed together in various rules.
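To illustrate the distinction in code: a decoded Azure AD token can carry both claim types side by side, and a back-end can treat them separately. This is a minimal sketch with a hypothetical decoded payload (the group ID and role value are made up):

```javascript
// Hypothetical decoded token payload – group object IDs and app role values are separate claims
var payload = {
  groups: ['1f9a9c3e-0000-0000-0000-000000000001'],  // group object IDs
  roles: ['Reader']                                   // app role values from the app manifest
}

// Extract both claim types, defaulting to empty arrays when a claim is absent
function getAuthorization (tokenPayload) {
  return {
    groupIds: tokenPayload.groups || [],
    roleNames: tokenPayload.roles || []
  }
}

console.log(getAuthorization(payload))
```

Note that the `groups` claim carries object IDs, while `roles` carries the human-readable role values defined in the app registration manifest.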

Decision criteria table

Send as claims                                | SAML                                                  | OAuth2
send roles                                    |                                                       |
send only groups assigned to the application  | ✔ (sending either the role or the group is possible)  | ✔ (sending either the role or the group is possible)
send groups                                   | ✔ (via Group Claims in the SSO settings of the enterprise application, or Token configuration) | ✔ (via Token configuration of the app registration)
transform group/role attributes dynamically   | ✔ (some limitations apply, but flexible nonetheless)  | see ”Query Graph API when send as claims is not possible”
group size exceeded                           | see ”Query Graph API when send as claims is not possible”, or limit groups to those assigned to the application (preferred when full group information is not needed) | see ”Query Graph API when send as claims is not possible”

An excerpt of the configuration options presented in the table

send only groups that are assigned to the application (Via token configuration in App Registrations)

  • App Registration Token Configuration includes three types of token configuration for the group claims option
  • Note the limitation: only a single group claims rule is available

send only groups that are assigned to the application (Via SSO settings in enterprise applications)

  • For SAML apps this approach is preferable when you also need to change the name of the claim
  • Note the limitation: only a single group claims rule is available

Query Graph API when send as claims is not possible

Querying the Graph API is a useful approach when you want or need information which is not available in the token response.

Flows available for further queries

Flow                                                  | SAML                                                              | OAuth2
Delegate the received token using a bearer flow       | Only works for SAML 1.1 tokens – rather use the OAuth2 redirect-based flow | JWT Bearer flow available
Redirect the user for an OAuth2 Authorization Request | ✔ (the SAML app needs to support OAuth2 in this approach as well) | ✔
  • ”For applications that do interactive browser-based sign-in to get a SAML assertion and then want to add access to an OAuth protected API (such as Microsoft Graph), you can make an OAuth request to get an access token for the API. When the browser is redirected to Azure AD to authenticate the user, the browser will pick up the session from the SAML sign-in and the user doesn’t need to enter their credentials.”
  • For SAML and JWT tokens which exceed the group size limit, you get a Graph API link instead of the groups
  • Azure Active Directory limits the number of groups it will emit in a token to 150 for SAML assertions, and 200 for JWT. If a user is a member of a larger number of groups, the groups are omitted and a link to the Graph endpoint to obtain group information is included instead.
  • In this approach you need to add additional permissions to the application if you want the group names in addition to the group IDs.
    • If you emit groups in the claims of the token, the built-in Graph scope is enough to get the needed information
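The overage case above can be detected in code: when groups are omitted, the token carries `_claim_names`/`_claim_sources` pointing at a Graph endpoint instead of a `groups` array. A minimal sketch, where the sample payload and endpoint URL are illustrative:

```javascript
// Illustrative decoded JWT payload for a user exceeding the group limit –
// the groups claim is replaced by a pointer to a Graph endpoint
var overagePayload = {
  _claim_names: { groups: 'src1' },
  _claim_sources: {
    src1: { endpoint: 'https://graph.windows.net/yourTenant/users/userId/getMemberObjects' }
  }
}

// Returns the Graph URL to query when groups were omitted, otherwise null
function groupOverageEndpoint (payload) {
  if (payload.groups) { return null }  // groups present – no overage
  var src = payload._claim_names && payload._claim_names.groups
  return src ? payload._claim_sources[src].endpoint : null
}

console.log(groupOverageEndpoint(overagePayload))
```

A back-end can branch on the returned value: null means the token already contains the groups, a URL means a follow-up Graph call is needed.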

SAML / OAuth2 group information examples

In an attempt to keep this blog even remotely readable I've included examples of the two main approaches, as all other examples are more or less derivatives or mixes of those two:


  • deliver most, if not all, information in the token response
  • fetch information after receiving the initial token via a separate HTTP request to the MS Graph API

Emit Groups in the token (works for both SAML and OAuth2)

Configuration for the examples below

  • Note: there is a lot to play with here for different group configurations

Response example JWT token (Access Token):

Though avoidable, sometimes you end up getting group IDs only. If you prefer it this way and don't need a human-readable name exposed in the token, this may work too. If you want the name of the group, you need to use the Graph API.

Response example SAML token

Response example for exceeded group amount

Get groups of the user after the token response – OAuth2 (and SAML apps that support OAuth2)

  • This scenario could be needed if filtering is required that is not available in claims customization, or the group size exceeds token limits

Using OAuth2 JWT Bearer Flow

  • I've written about the JWT bearer grant earlier here: Concept: Publish on-prem API using AAD App Proxy and API Management with Azure AD JWT Bearer Grant
  • Note that this can also be achieved with:
    • getting a new access token with an interactive user redirect flow
    • using the refresh token flow (if the initial scope allows storing refresh tokens)
    • the SAML bearer flow (this works with SAML 1.1 tokens only, so I am not recommending it unless the token was obtained via the WS-Federation protocol)
  • The JWT token is stored either in a back-end token store or in user cookies (in the example it is in req.cookies.token)

Response example

Note! You can clean up the contents of the Graph response if needed. This is just an example 🙂
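As an example of such a clean-up: a Graph memberOf-style response can be trimmed down to just the id and display name before the app uses or stores it. The response shape below is illustrative:

```javascript
// Illustrative Graph /memberOf style response – only id and displayName are kept
var graphResponse = {
  value: [
    { '@odata.type': '#microsoft.graph.group', id: 'g1', displayName: 'HR', mailEnabled: false },
    { '@odata.type': '#microsoft.graph.group', id: 'g2', displayName: null }
  ]
}

// Drop entries without a readable name, keep only the fields the app needs
function cleanGroups (response) {
  return response.value
    .filter((group) => group.displayName)
    .map((group) => ({ id: group.id, name: group.displayName }))
}

console.log(cleanGroups(graphResponse))
```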

Sample code for JWT-Bearer Grant

//Bearer Grant (dependencies: request etc.)
var rq = require('request')

//Exchange the incoming JWT for a Graph token (on-behalf-of).
//The token endpoint tenant is a placeholder reconstructed for this sample – adjust to your environment.
var getJwtBearerAssertion = ({client_id,redirect_uri,resource,assertion,client_secret},callback) => {
    var options = {
      json: true,
      form: {
        grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
        requested_token_use: 'on_behalf_of',
        client_id, redirect_uri, resource, assertion, client_secret
      }
    }
    rq.post("https://login.microsoftonline.com/yourTenant/oauth2/token", options, (error,response) => {
        if (error) { return callback(undefined, error) }
        if (!response.body.access_token) { return callback(undefined, response.body) }
        callback(response.body.access_token, undefined)
    })
}

// New code from here / different JS file
// Option values were truncated in the original – reconstructed here as placeholders
var options = {
    client_id: process.env.CLIENT_ID,
    client_secret: process.env.CLIENT_SECRET,
    resource: 'https://graph.microsoft.com',
    assertion: req.cookies.token               //the JWT stored in the user's cookie
}
getJwtBearerAssertion(options, (result,error) => {
    if (error) { return res.send(error) }
    apiCall(result, "https://graph.microsoft.com/v1.0/me/memberOf", (result) => {
        var data = result.value.filter((group) => {
            console.log('group iterated')
            //Iterate groups into cookies
            if (group.displayName) { res.cookie(`groupID:${group.id}`, group.displayName, CookieOpts) }
            return group.displayName != undefined
        })
        res.send(data)
    })
})

Other stuff: SAML parser in NodeJS

The following parsing code is used as ExpressJS middleware to implement demonstrational SAML parsing functionality without verifying the SAML token itself. This is useful when you want to inspect assertions in the back-end to test functions you might be working on, such as updating information stored in a back-end database based on such an assertion.

SAML parser Middle-ware function for expressJS

var xmlparser = require('fast-xml-parser');
const util = require('util')

function samlParser () {

    return function (req,res,next) {
        if (req.body.SAMLResponse) {
            console.log('SAML response')
            let bbuffer = Buffer.from(req.body.SAMLResponse, 'base64')
            var xmlstring = bbuffer.toString('utf-8')

            if (xmlparser.validate(xmlstring)) {
                //Options reconstructed for this sample – parse attributes as well as elements
                var options = { ignoreAttributes: false, attributeNamePrefix: '@_' }
                var xmlpayload = xmlparser.parse(xmlstring,options)
                var detailed = util.inspect(xmlpayload,true,7,true)
                console.log(detailed)
                req.samlPayload = xmlpayload
            }
        }
        return next()
    }
}

console.log('using SAML parser')
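The core step of that middleware – decoding the POSTed SAMLResponse from base64 before parsing – can be exercised on its own. This is a demo only, with no signature verification, and the assertion fragment below is made up:

```javascript
// A made-up, minimal assertion fragment standing in for a real SAMLResponse
var xml = '<samlp:Response><Assertion ID="_abc123">demo</Assertion></samlp:Response>'
var samlResponse = Buffer.from(xml).toString('base64')  // what the IdP POSTs to the ACS URL

// Same decode step the middleware performs on req.body.SAMLResponse
function decodeSamlResponse (b64) {
  return Buffer.from(b64, 'base64').toString('utf-8')
}

console.log(decodeSamlResponse(samlResponse))
```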


Security aspects

Using app permissions in place of delegated permissions?

This is generally not a recommended practice (at least in my opinion): using app permissions to substitute flows originally suited for user delegation decouples the user's authorization, and further requests to other APIs, from the user context.

With delegated permissions 

  • With delegation, the existing access rights of the user are used to get further information. The process delegating the token cannot exceed the permissions of the user whose token it is.
  • A more coherent log trail is produced, as the user context on whose behalf the app accessed the APIs is shown.

When app permissions are used instead

  • The log trail is problematic: an action destined for user APIs doesn't show the user context, only the application acting towards the API
  • Elevation-of-privilege risks enter the picture, as the app permissions might by a far margin exceed the permissions of the user
  • There is incomplete or no user consent, even though the app performs actions on behalf of the user without delegation

An attacker figuring out client-side input parameters intended for back-end queries

  • In the ”SAML approach” all delivered attributes are verified by signature verification. This applies to OAuth2 tokens too, but in the OAuth2 approach you often need to query further information after receiving the token.
    • Since the JWT token might not have the needed information, hypothetically you could have bad application code running `curl --insecure` against what it expects to be the Graph API, but being tricked into reading an attacker-controlled version of the endpoint. An attacker could use a limited remote code injection approach, or figure out that some client-side information is used in the back-end with no integrity check, or with serious issues in its input validation.
      • Example: a function running in the back-end expects that the array from the client-side function always produces a single-item array. The validation function assumes the content is thus always stored at index [0], but the function processes all items in the array once the first item is validated.
  • The obvious counter-argument here is that if the application is running insecure curl requests and suchlike bad coding practices 🙂, how can you be sure it is not being tricked into using the wrong token signing key, or doesn't have more serious security issues? Especially in older apps, the public verification key is stored in the application itself. With Azure AD this is not recommended: the JWKS URI always needs to be queried from the metadata, because the public keys can roll over (read ”Signing key rollover in Azure Active Directory”).
    • There is no plausible production scenario where you wouldn't be verifying the signatures of either token type (SAML / JWT)

Where from here?

This blog highlighted technical decision criteria, code samples, and response outputs. For solving a particular group-based scenario, don't hesitate to ask me on Twitter or LinkedIn for further additions to this blog.

Microsoft references

Don’t try this at home (or how to enable Core Server Remote Management for AD FS GUI)

I’ve been running AD FS on Core servers for some time now, mostly because I like the smaller footprint and centralized management experience.

The smaller footprint also guarantees:

  • That there are fewer consumed resources
  • That there is less potential attack surface

But I want my GUI…


The lovely GUI Icon

Sometimes I’ve felt the temptation to just peek into AD FS GUI from remote administration host… only to remember that it’s not possible due to the fact that there is no RSAT for managing AD FS.

Yes, this is crazy… I am doing it just for the kick of it



This is the part that you definitely shouldn't do in production, or even in staging if you value your deployment. Nonetheless, I had the temptation to see if I could crack this nut:

  • Install and configure the remote management host temporarily as an AD FS slave node
  • Disable and stop the AD FS service on the remote management node. You won't really need the service itself, but you still need the installation to manage the primary node.


AD FS -> Nobody here, go away!

  • Do a crazy portproxy with NETSH to send port 1500 to primary AD FS node


TCP 1500 has now new destination

  • Enjoy remote management (and maybe some crazy side effects…)


Welcome to AD FS management on Core Server!

#Disable and stop the AD FS service on management computer
Get-Service adfssrv | Set-Service -StartupType Disabled; Stop-Service adfssrv
#Do a crazy binding to port 1500
netsh interface portproxy add v4tov4 listenaddress= listenport=1500 connectaddress=yourremotehost connectport=1500

Speculation on whether this was really a plausible approach:

Q: Would it just be smarter to have the primary node on GUI enabled server?

A: Pretty much yes

(In this crazy demonstrated approach the only difference is that you don't have to have the AD FS service running on the remote management host)

Br, Joosua!

Experimental – Using Azure Function Proxy as Authenticating Reverse Proxy for NodeJS Docker App

Disclaimer: Azure Function Proxies are meant to act as proxies for the functions themselves, and as aggregators of microservice-style resources/APIs near the function's proximity. If you need an actual reverse proxy, or a full-blown API gateway, then solutions such as Azure API Management, Azure AD App Proxy, Azure App GW, Kemp VLM, or just placing NGINX on your container might be the right pick.

Now that the disclaimer is out of the way, I can continue experimenting with this excellent feature without having any kind of business justification 🙂

My inspiration came from MS's similar article, which covers using a function proxy route to publish a certain part of WordPress. My angle was to see if the same approach can be used with App Service Authentication.

Obvious caveats

  • This is not necessarily the way this feature is intended to be used 🙂
  • Cold start of any function-type solution (maybe do the same with an App Service web app)
  • If you are running a docker image, then why not run it in App Service in the first place?
    • If the app is something other than a docker image and likes to live on a VM, then this approach might still be of interest

Obvious benefits

  • Deploy your reverse proxy, or API gateway and the rules of the solution, as code
    • Functions is certainly not the only solution to support this approach, but Functions integrates with VSCode and CI/CD solutions. You end up having your solution entirely defined as re-deployable code.
    • Setting reverse proxy rules as an example
  • An alternative approach for a Single Page App / static website, where the function acts as a middle-end aggregator for certain tasks that are better handled outside of the browser due to possible security concerns
    • Don't get me wrong here… I believe you can make perfectly secure SPAs, and looking at JAMStack and the new Azure Static Web Apps offering, it seems that we are also heading that way 🙂


Test environment

  • Azure VM
    • running NodeJS Express app docker image baked in VSCode’s insanely good docker extension environment
    • In the same VNET as the App Service Plan
  • Function
    • In the same VNET as the Azure VM running the docker image

Test results

  • Sign-in to the application works on fresh authentication
    • After fresh authentication the session is maintained by App Service cookies
  • When there was an existing session on Azure AD, the authorization flow for this app resulted in HTTP error 431
    • If there were an actual use scenario I would debug this further, and possibly create another redirecting function to ingest the token and drop the proper cookie for the subsequent sign-in
  • I haven't tested whether there are issues with advanced content types; I would expect the proxy function to forward the back-end response's content type (maybe a test for another blog)
  • From the TCPDump trace running on the DockerVM you can see the internal IP of the App Service
    • 07:22:53.754245 IP > Flags [.], ack 218, win 221, options [nop,nop,TS val 104639808 ecr 1486010770], length 0

Ideas for next blog?

Some delicious continuation tests for this approach could be:

  • Based on the internal headers created by the EasyAuth module:
    • Create poc for Native and Single Page Apps using Authorization Header
    • Create test scenario for using internal B2C authentication (I have app ready for this)
    • Add internal proxy routes to perform further authorization rules
    • Forward Authentication tokens, or username headers to the docker back-end application by defining the proxy application as external redirect target, or by using the internal function methods
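For the EasyAuth-based ideas above, the identity headers App Service Authentication injects (e.g. `x-ms-client-principal-name`) can be read directly by a back-end handler. A minimal sketch; the header names are the documented App Service Authentication ones, the handler shape is illustrative:

```javascript
// App Service Authentication (EasyAuth) injects identity headers into forwarded requests
function extractPrincipal (headers) {
  return {
    name: headers['x-ms-client-principal-name'] || null,  // signed-in user
    idp: headers['x-ms-client-principal-idp'] || null     // identity provider, e.g. 'aad'
  }
}

console.log(extractPrincipal({
  'x-ms-client-principal-name': 'user@contoso.com',
  'x-ms-client-principal-idp': 'aad'
}))
```

A docker back-end behind the proxy could use this to make authorization decisions without parsing tokens itself – provided the network path guarantees the headers really came from the proxy.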

Till next time


App Service – Key Vault Vnet Service Endpoint access options explored + NodeJS runtime examples

I was recently drafting recommendations for using Azure Key Vault with App Service. While the available documentation is excellent and comprehensive, it seemed that I needed to document an overview to save time in the future; otherwise I'll be back at deciphering some of the key configuration options, such as the Azure Key Vault firewall settings, again 🙂

Important info about App service Regional VNET integration

Capabilities are very good after all.

While this blog highlights some limitations of regional VNET integration in App Service, I'd recommend the reader compare these limitations against subscribing to a full-fledged App Service Environment. Features like limiting outbound traffic and reaching private resources inside a VNET can be achieved with other plans than the App Service Environment plan only.

For further info check the excellent article at

App Service and Key Vault firewall using the ”Trusted Services” option

  • Using Key Vault References for App Service is at the moment not supported when you are calling Key Vault through a VNET service endpoint

Currently, Key Vault references won’t work if your key vault is secured with service endpoints. To connect to a key vault by using virtual network integration, you need to call Key Vault in your application code.

1-to-1 Relation between app service and the Subnet

  • The integration subnet can be used by only one App Service plan. This means that while you can have multiple web apps/functions enabled for VNET integration on the same App Service plan, they must all share the same integration subnet.
  • An app or function running on the App Service plan can't be assigned to any other subnet than the one the App Service plan is already assigned to
  • Try anything else, and you get ”Adding this VNET would exceed the App Service Plan VNET limit of 1”
    • This is explained in detail in a docs issue at @github

Consumption plans

Consumption plans do not support the Virtual Network integration required for the VNET service endpoints used in this article.

Getting to the point? Regional VNET integration

This blog focuses on Regional VNET integration for App Service, which is subject to the following main assumptions:

  • The VNET you select for the app service has to share the same subscription and region as the App Service plan (link)
    • The linked article also mentions 'Resources in VNets peered to the VNet your app is integrated with'. I haven't tested whether the same-region requirement applies here, as VNET peering works across regions.
  • Your target resources in VNETs must be in the same region as your app service
    • Is this applicable to VNET service endpoints? Based on my testing, calling a network-restricted Key Vault behind a service endpoint worked for the app service regardless of whether the key vault was in the same region or not, as long as the caller VNET was authorized. I believe this is an exception, or that the requirement only covers VNET-based resources, not resources behind VNET service endpoints.

  • Regional VNET integration also enables you to place NSG rules on outbound traffic from your App Service function or web app
  • Virtual Network integration is only meant for outbound calls from your app into your VNET, or to another resource behind a VNET service endpoint
  • There is another feature called 'Gateway-required VNet Integration', which relies on P2S connections to other regions from gateway-enabled VNETs and is subject to another set of assumptions

Example scenarios

All testing was done on Azure Key Vault Standard, and Linux based app service plan.

  • App service plan S1 and P1V2
  • All code, apps and secrets are created for testing purposes (run none of this stuff against anything in production)
    • for both web apps and functions
      • Node 12 LTS runtime
      • System assigned managed identity
      • Key Vault is called on specific functions defined in the application code
  • All resources on West Europe
  • App Service and VNET in same subscription and region
  • Key Vault
    • Only allows traffic from authorized VNETs, using the VNET service endpoints feature enabled on the source VNET (App Service integration VNET)

Azure side configuration screencaps

Node JS example code for Linux App Service Plan

Calling the Node.JS web app only demonstrates connectivity to the key vault by fetching a list of secrets and outputting it to the screen. (Nobody in their sane mind would list secrets on a public website, so don't use this code in this form against anything in production.)

Expected Output from web app example

Web App


  • If you test the code, remember to update Package.JSON to run app.js in main, not the default index.js
  • For both the function and the web app, include the request dependency in Package.JSON
  • For the kvOpt variable in the code, remember to update the FQDN of your key vault (this could also use an environment variable, updated in the app settings)
    • Or you could add it as a query param to the code if you want to test the samples with multiple key vaults
Query Param for the global KV name (The suffix is the same)
Calling with query Param
hardcoded URL as provided in the example code
var express = require('express')
var app = express()
var {secretsList,getMsitoken,getClientCredentialsToken} = require(`${__dirname}/src/msi`)
var port = process.env.PORT || 8080

app.get('/home', (req,res) => {
    var apiVer = "?api-version=2016-10-01"
    //The Key Vault FQDN is a placeholder – update it to your own vault
    var kvOpt = {
        uri: "https://yourKeyVault.vault.azure.net/secrets" + apiVer,
        headers: {}
    }
    if (process.env['MSI_ENDPOINT']) {
        console.log('using MSI version')
        getMsitoken()
        .catch((error) => {
            return (error)
        }).then((data) => {
            kvOpt.headers.authorization = "Bearer " + data['access_token']
            secretsList(kvOpt).catch((error) => {
                return res.send(error)
            }).then((data) => {
                return res.send(data)
            })
        })
    } else {
        console.log('using local version')
        getClientCredentialsToken()
        .catch((error) => {
            return (error)
        }).then((data) => {
            kvOpt.headers.authorization = "Bearer " + data['access_token']
            secretsList(kvOpt).catch((error) => {
                return res.send(error)
            }).then((data) => {
                return res.send(data)
            })
        })
    }
})

app.listen(port, () => {
    console.log('listening on', port)
})


  • Place msi.js in a folder called src
  • Populate the options of the first function only if you want to test locally (you have to create your own app registration and add it to the access policy of the Key Vault)
var rq = require('request')
var path = require('path')

//Client credentials flow for local testing – the tenant and app values are placeholders
function getClientCredentialsToken () {
    return new Promise ((resolve,reject) => {
        var options = {
            json: true,
            form: {
                grant_type: 'client_credentials',
                client_id: process.env.CLIENT_ID,
                client_secret: process.env.CLIENT_SECRET,
                resource: 'https://vault.azure.net'
            }
        }
        rq.post("https://login.microsoftonline.com/yourTenant/oauth2/token", options, (error,response) => {
            if (error) { return reject(error) }
            if (response.body.error) { return reject(response.body.error) }
            return resolve(response.body)
        })
    })
}

//Managed identity token from the App Service MSI endpoint
function getMsitoken () {
    return new Promise ((resolve,reject) => {
        var options = {
            json: true,
            headers: { secret: process.env['MSI_SECRET'] },
            uri: `${process.env['MSI_ENDPOINT']}?resource=https://vault.azure.net&api-version=2017-09-01`
        }
        rq.get(options, (error,response) => {
            if (error) { return reject(error) }
            if (response.body.error) { return reject(response.body.error) }
            return resolve(response.body)
        })
    })
}

function secretsList (kvOpt) {
    return new Promise ((resolve,reject) => {
        kvOpt.json = true
        rq.get(kvOpt, (error,response) => {
            if (error) { return reject(error) }
            if (response.body.error) { return reject(response.body.error) }
            return resolve(response.body)
        })
    })
}

module.exports = { secretsList, getMsitoken, getClientCredentialsToken }

Azure Function

  • msi.js in the src folder is the same as in the web app
  • Update the variables (kvOpt) just like in the Web App example
var {secretsList,getMsitoken,getClientCredentialsToken} = require(`${__dirname}/src/msi`)

module.exports = async function (context, req) {
    var result
    if (process.env['MSI_ENDPOINT']) {
        console.log('using MSI version')
        result = await getMsitoken()
        .catch((error) => {
            return context.res = { body: error }
        })
    } else {
        console.log('using local version')
        result = await getClientCredentialsToken()
        .catch((error) => {
            return context.res = { body: error }
        })
    }
    if (result && result['access_token']) {
        var apiVer = "?api-version=2016-10-01"
        //The Key Vault FQDN is a placeholder – update it to your own vault
        var kvOpt = {
            uri: "https://yourKeyVault.vault.azure.net/secrets" + apiVer,
            headers: {
                "Authorization": "Bearer " + result['access_token']
            }
        }
        var finalresult = await secretsList(kvOpt)
        .catch((error) => {
            return context.res = { body: error }
        })
        return context.res = { body: finalresult }
    }
}

Related error messages

Having missed any of the regional VNET integration settings, or having misconfigured access policies, one might easily see any of the following errors:

  1. ”Client address is not authorized and caller was ignored because bypass is set to None”
    • The caller is not authorized in the firewall list
  2. ”The user, group or application 'appid=/' does not have secrets list permission on key vault 'AppServicekvs1;location=westeurope'.”
    • The caller is not authorized in the access policies
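Since the two errors map to two different fixes, a tiny helper can make the triage explicit. This is a sketch keyed on the message fragments above; the classification strings are my own:

```javascript
// Map the two common Key Vault errors to their likely misconfiguration
function classifyKeyVaultError (message) {
  if (/not authorized and caller was ignored because bypass/i.test(message)) {
    return 'firewall: the caller VNET/IP is not in the Key Vault firewall allow list'
  }
  if (/does not have secrets list permission/i.test(message)) {
    return 'access policy: the identity lacks the secrets list permission'
  }
  return 'unknown – check both the firewall and the access policies'
}

console.log(classifyKeyVaultError(
  'Client address is not authorized and caller was ignored because bypass is set to None'
))
```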

Till next time!

Br, Joosua

Deep diver – NodeJS with Azure Web apps and Azure Blob Storage SAS Authorization options

If you are working with Azure, chances are you've at least indirectly consumed Azure Blob Storage at some point. Azure Storage in general is one of the elementary building blocks of almost any Azure service, and in many cases you end up dealing with storage authorization at some point. This is where SAS tokens enter the picture, and what this article is about.

General description of SAS tokens from @docs MSFT

A shared access signature (SAS) provides secure delegated access to resources in your storage account without compromising the security of your data. With a SAS, you have granular control over how a client can access your data. You can control what resources the client may access, what permissions they have on those resources, and how long the SAS is valid, among other parameters.
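The knobs that description mentions – permissions, validity window – are visible directly in the SAS query string: `sp` carries the permissions, `se` the expiry, `sv` the service version. A sketch with a made-up token (the signature is fake):

```javascript
// Made-up SAS query string – sv: service version, se: expiry, sp: permissions, sig: signature
var sasToken = 'sv=2019-02-02&se=2020-12-31T23%3A59%3A59Z&sp=r&sig=FAKESIGNATURE'

// Pull the human-relevant parameters out of a SAS token for logging/inspection
function parseSas (token) {
  var params = new URLSearchParams(token)
  return {
    permissions: params.get('sp'),          // 'r' = read-only
    expires: new Date(params.get('se')),    // validity window end
    version: params.get('sv')
  }
}

console.log(parseSas(sasToken))
```

Note that inspecting parameters this way is for understanding and diagnostics only – the actual enforcement happens server-side when Azure Storage validates the `sig` value.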

The approaches provided here include NodeJS samples, but as may be obvious, the approaches themselves are fairly framework-agnostic; NodeJS is just used to provide the samples, and the approach works regardless of runtime/platform.

  • When you use Azure as the platform you gain the benefit of VNET service endpoints and managed service identities for App Service based and containerized approaches
  • Other options exist (Private Link etc.)

While multiple technical approaches for storage access based on SAS tokens exist, two tend to stand out.

  1. Proxy based
    • The proxy processes the authorization and business logic rules, and then pipes (proxies) the blob to the requester via a SAS link stored in table storage (the SAS link could also be created ad hoc). Use of table storage is by no means mandatory here, but it provides a convenient way to store references to SAS links.
      • Even behind a proxy it makes sense to use SAS links, as it narrows access down for the particular NodeJS function to match the requester's permissions
      • This method also allows comprehensive error handling, including retry logic and different try/catch blocks for transient Azure Storage errors
        • Azure Storage errors are, to be honest, rare, but they can nonetheless happen
        • With the redirect-based method, all error handling happens between the user's client and the storage HTTP service itself
      • The proxy-based approach allows locking down the storage account at the network level to the web application only
      • In this approach only the proxy should be allowed to access the storage account from a network perspective. The following options are available:
        • Azure Storage firewall
          • Authorized Azure VNETs (VNET service endpoints)
          • IP address lists
        • Private Link (perhaps a subject for a separate blog)
  2. Redirect based
    • The proxy processes the authorization and business logic rules, and then redirects the requester to the blob object via a SAS link
      • After the SAS link is obtained (by the user's browser) there is nothing to prevent the user from sending the link to another device and using it there, unless Azure AD SAS delegation or per-download IP restrictions are set on the link
      • Redirect based might be better if you are concerned about the complexity and overhead introduced by the proxy-based method (in the redirect-based method the Azure Storage account's HTTP service processes the downloads, and can likely handle a large amount of concurrency)

Both of these options are also explored in Microsoft’s documentation
  • It’s worth mentioning that for both of these methods/approaches a great deal of networking and authorization variations exist besides the ones presented here.


Prerequisites: SDK and dependencies

  • The Storage SDK used is the ’azure-storage’ SDK
  • For the Node.js web server, the legendary ExpressJS
  • Node.js’s native HTTPS API is used for creating a proxy client to pipe the client connection in the proxy-based method
  • Important dependencies for both approaches are
 "dependencies": {
    "azure-storage": "^2.10.3",
    "express": "^4.17.1",
    "jsonwebtoken": "^8.5.1",
    "jwk-to-pem": "^2.0.3",
    "jwks-rsa": "^1.6.0"
  }
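The jsonwebtoken, jwk-to-pem and jwks-rsa entries are there because both sample routes read the signed-in user’s email out of a JWT cookie. Below is a stdlib-only sketch of what the decode step reads out of a token; the token and email are fabricated, and in the real app jwt.verify() must first check the signature against the Azure AD JWKS keys before the payload is trusted for authorization:

```javascript
// A JWT is three base64url segments: header.payload.signature.
// Decoding the payload is just base64 decoding – it proves nothing about
// authenticity, which is why signature verification must come first.
function decodePayload (token) {
  const payload = token.split('.')[1]
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'))
}

// Fabricated, unsigned token for illustration only
const header = Buffer.from(JSON.stringify({ alg: 'RS256', typ: 'JWT' })).toString('base64')
const body = Buffer.from(JSON.stringify({ email: '' })).toString('base64')
const fakeToken = `${header}.${body}.FAKESIG`
console.log(decodePayload(fakeToken).email) //
```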

Samples for both approaches

  • The samples highlight the use of ExpressJS and native Node APIs to achieve either method. Azure Storage code is abstracted into separate functions, and both methods use the same Azure Storage access methods.

Proxy based

Below is an example of an ExpressJS-based app, which has a direct function invoked for the GET verb in the route (’/receive’)

  • App Service and storage configuration
    • S1 plan for the App Service
      • App Service custom DNS binding with an App Service managed certificate
    • VNET integration with a stand-alone VNET
    • Storage account v2 with the firewall set to authorize the selected VNETs
  • Phase 1 authorize: the email in the token verified by JWT.verify() must match the recipient
    • Return an authorization error if the signed-in user doesn’t match
  • Phase 2: query Table Storage for the SAS link details
  • Phase 3: proxy the SAS link connection
    • Pipe if the response was OK
const url = require('url')
const { decode } = require('jsonwebtoken')
// QueryTables is the Table Storage helper function (abstracted out of this snippet)

app.get('/receive', (req, res) => {
  var proxyClient = require('https')
  var usr = decode(req.cookies.token).email
  // is assumed to carry the recipient address (the exact parameter was truncated in the original)
  console.log(`${} with ${usr}`)
  // Phase 1 authorization: the email in the token verified by JWT.verify() must match the recipient
  if (!usr.includes( {
    // Return authorization error if signed-in user doesn't match
    return res.send(`Authorization failed. Not logged in as recipient ${} - Logged in as ${usr}`)
  }
  // Phase 2 query Table Storage for the SAS link details
  QueryTables(req.query.from,, req.query.uid, (error, result, response) => {
    var sd = url.parse(response.body.value[0].filename).path
    // Phase 3 proxy the SAS link connection
    proxyClient.get(response.body.value[0].sasLink, (proxyres) => {
      // Pipe if response was ok!
      if (proxyres.statusCode == 200) {
        res.setHeader('content-disposition', `attachment; filename=${sd}`)
        proxyres.pipe(res)
      } else {
        res.render('failed', {
          message: 'Link expired, due to this the SAS link cannot be verified. Server errorMsg ' + proxyres.statusMessage
        })
      }
      proxyres.on('end', () => console.log('end'))
    })
  })
})

Redirect based

The redirect-based method is fairly simple, and essentially just uses the res.redirect() method of ExpressJS after authorizing the user

  • Phase 1 authorize: the email in the token verified by JWT.verify() must match the recipient
    • Return an authorization error if the signed-in user doesn’t match
  • Phase 2: query Table Storage for the SAS link details
  • Phase 3: redirect the user to the SAS link
app.get('/redirect', (req, res) => {
  var usr = decode(req.cookies.token).email
  // is assumed to carry the recipient address (the exact parameter was truncated in the original)
  console.log(`${} with ${usr}`)
  // Phase 1 authorization: the email in the token verified by JWT.verify() must match the recipient
  if (!usr.includes( {
    // Return authorization error if signed-in user doesn't match
    return res.send(`Authorization failed. Not logged in as recipient ${} - Logged in as ${usr}`)
  }
  // Phase 2 query Table Storage for the SAS link details, then redirect the user to it
  QueryTables(req.query.from,, req.query.uid, (error, result, response) => {
    // Phase 3 redirect the user to the SAS link
    res.redirect(response.body.value[0].sasLink)
  })
})

Considerations for both approaches

  • For the redirect method it’s of utmost importance to keep the SAS link short-lived.
  • For the proxy method, if you store the SAS link itself in Table Storage (instead of creating it based on specifications stored in Table Storage) you will be more locked into providing longer lifetimes for the SAS tokens.
    • Essentially you could create the SAS link with one-time (short-lived) characteristics at the moment Table Storage is queried for the link details

Other things:

  • Azure AD SAS delegation is not directly available in the SDK I am using for Node.js.
  • In most scenarios you can replace public blob access with SAS tokens too, in cases where you have a front-end (proxy) able to facilitate access via the creation of SAS links
  • Check out the excellent best practices article on using SAS tokens
  • So far, creating SAS links from the SDK has required using account name and key connection methods.

Till next time!

Br, Joosua

Azure Functions with VSCode – Build, Test and Deploy your own GeoIP API to Azure

If you need an easy way to provide GeoIP information (the geographical location of an IP) for an existing set of data, then these code and deployment samples might be just the thing for you; or maybe you just want to experiment with Azure Functions 🙂

Obviously many services allow you to check GeoIP information, ranging from simple reverse lookups to searching an IP with various website-based services. When you start to look at documented, supported and maintained APIs the list becomes smaller; this is where this blog helps.

  • Good maintained APIs exist, but for testing this is one of the best approaches

Maxmind database files

In this blog we are exploring the latter option (.MMDB files), which we use to build an API without direct throttling limitations – obviously throttling and quotas are something you want in a commercial API

One of the best-known providers of GeoIP information is MaxMind. MaxMind offers two options: a paid API with a comprehensive information set, or a free option with a basic information set based on .MMDB files, which provide the GeoIP database to your apps via supported modules.

Before I delve into building the API with Azure Functions, I’ll highlight that the .MMDB databases can also be used to enrich data directly as part of your application code. There is no point calling an API if you can invoke the library directly from your application without any overhead.

Where the external API approach becomes useful is when you want a more modular architecture between different services that don’t, for example, share the same code base, runtime or platform – or when you benefit from decoupling services in a microservice-style architecture. For example, I could provide a GeoIP service for an external process which supports inline lookups of data via HTTP requests in its process pipeline. (Note that the libraries themselves don’t include the .MMDB files; it may be obvious, but worth highlighting that you download and update them separately)

If you plan to build something commercial based on the GeoLite2 databases, visit their site for terms. While my motivation is not commercial (at least directly), it’s still easy to follow their straightforward licensing term of including this snippet on this site.

This product includes GeoLite2 data created by MaxMind, available from


VSCode has a great set of Azure extensions, Functions being one of them

1. Get the MMDB files

  • Download the database files from MaxMind
Select download files
  • Extract the downloaded archive to a folder you can later copy the .MMDB file from
    • I used 7-Zip to extract it. Note that depending on your extraction tool/distro you might have to dig through two archives to get to the .MMDB file
  • This is the archive you should see in the end

2. Create the Azure Function

  • VSCode: under Functions, select new project
  • VSCode: under Functions, new function
  • VSCode: select JavaScript
  • VSCode: for the template, select ’HTTP Trigger’
  • Name the trigger
  • Select authorization level ’Function’, and select open in new window at step 6/6
  • Your workspace should look now like this

Sprinkle the code

If this was a more serious project I would put all of this in a GitHub repo, but since this is just a few snippets, let’s go with this 🙂

  • In the workspace, run the following command from the integrated console

(No npm init is needed, as the extension takes care of it)

npm install @maxmind/geoip2-node --save


Expected content
  • Overwrite contents of index.js with the contents of the snippet below
const {getIPv4Info} = require(`${__dirname}/src/lookups`)
module.exports = async function (context, req) {
    var azureanswer
    if (req.headers['x-forwarded-for']) {
        var x = req.headers['x-forwarded-for']
        azureanswer = await getIPv4Info(x.split(':')[0]).catch((error) => {return error})
    } else {azureanswer = 'Incorrect params'}
    var data = await getIPv4Info(req.query.ip).catch((error) => {return error})
    if (req.query.ip) {
        context.res = {
            status: 200,
            headers: {'content-type': 'text/plain; charset=utf-8'},
            body: data
        }
    } else {
        context.res = {
            status: 200,
            body: azureanswer
        }
    }
}

Create a new folder called ’src’ in the workspace (remember, no capital letters!) and save the snippet below there as lookups.js (index.js above requires it as src/lookups)

const Reader = require('@maxmind/geoip2-node').Reader;
const path = require('path')
var db = path.join(__dirname, 'GeoLite2-Country.mmdb')
function getIPv4Info (ip) {
    console.log('opening IPinfo')
    return new Promise((resolve, reject) => {
        Reader.open(db, null).then(reader => {
            try {
                // The property picked from the response was truncated in the
                // original; resolving the whole country response here
                return resolve(
            } catch {
                reject(`cant parse ${ip}`)
            }
        })
    })
}
module.exports = { getIPv4Info }
/* debug here if standalone
getIPv4Info('').then((data) => console.log(data)).catch((error) => console.log(error))
*/

Copy the .mmdb file to the src folder


  • If everything is correctly in place, your workspace should look like this
  • With F5 (Windows) you can run a local version of the function
Test the function from PowerShell, or any other suitable client

3. Deploy

  • Select ’Create Function App in Azure’
  • Enter a suitable name for the function
  • Select Node.js 12 for the runtime version
  • Select Windows as the platform; this is due to the remote debugging feature of VSCode, which is very useful and exclusive to this platform choice
  • Select the consumption plan
  • Create a new resource group, or select an existing one
  • Create a new or select an existing storage account
  • If you want some good debug info, select App Insights; for this demo I chose to skip it
  • You should have output something like this
  • Then select deploy to function app
  • This is the point where the overwrite happens, regardless of whether this was an existing or a new function app
  • Start streaming the logs
  • Now fire a request by copying the function URL
  • Test the function for output.
    • With no params it uses your public IP
    • With params it uses the IP given in the params
Invoke-RestMethod ''

Where to from here?

  • You could add any enriching function to the app, such as VirusTotal info using their free API. The sky is the limit here 🙂

If you need to update the .MMDB files, something like the snippet below can be used in a helper function, since you get permalinks for the files after registering

var https = require('https')
var fs = require('fs')
var key = '' // your MaxMind license key here
// Documented GeoLite2 permalink format; edition_id varies per database
var uri = `${key}&suffix=tar.gz`
function updateDb () {
    var dbfile = fs.createWriteStream(`${__dirname}/geoLite2.tar.gz`)
    https.get(uri, (res) => {
        res.pipe(dbfile)
        dbfile.on('close', () => console.log('file write finished'))
    }).on('error', (error) => console.log(error))
}

Till next time!

Br, Joosua