LAB: Microsoft Defender ATP and Conditional Access

The integration between Intune and Microsoft Defender Advanced Threat Protection (MDATP) has been around for a while now. It’s an interesting feature, as it allows the risk score assigned by MDATP to be utilized in CA policies. Most organizations I’ve worked with only use Intune for MDM and MAM and still use SCCM (or the like) for managing workstations. While Intune’s capabilities in workstation management are still limited, it’s constantly evolving – and with SCCM co-management supported, it’s becoming a very viable option to enroll your workstations into Intune as well. In any case, I wanted to do a quick experiment on how the MDATP -> Intune -> CA integration works, and how quickly we can go from detecting an alert in MDATP to actually enforcing access restrictions in Azure AD based on the incident.

In all its simplicity, the environment is depicted in the picture below. The laptop in this case is just a VM running in Azure, but it will do the trick. The Windows 10 device is joined directly to Azure AD (this would work equally well in hybrid scenarios, but I got lazy setting up the local infrastructure). The device will be enrolled into Intune and into MDATP (using Intune).

Setup: Intune + MDATP + CA

To begin with, you have to integrate Intune with MDATP. You’ll find the setting within the Intune management blade:

As well as in the advanced settings of the MDATP portal:

Once the tenant level integration is done, you need to create a device configuration profile for MDATP:

The next thing you need is a device compliance policy. This will determine at which MDATP risk level the device will be marked as non-compliant. In my case, I’ve set it to ”Low”, which means that any risk level higher than that will push the device out of compliance. We’ll see how the risk levels are shown in MDATP later.
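
If you prefer scripting the policy instead of clicking through the portal, something like the following should work. This is a minimal sketch against the Microsoft Graph beta endpoint; the property names assume the beta windows10CompliancePolicy schema, and $token is assumed to hold a valid Graph access token.

# A minimal sketch, assuming the Graph beta windows10CompliancePolicy schema.
# Setting the required security level to "low" marks anything above Low as non-compliant.
$body = @{
    "@odata.type"                               = "#microsoft.graph.windows10CompliancePolicy"
    displayName                                 = "MDATP risk level"
    deviceThreatProtectionEnabled               = $true
    deviceThreatProtectionRequiredSecurityLevel = "low"
    scheduledActionsForRule                     = @(
        @{
            # Graph expects a scheduled action block on creation; ruleName per common samples
            ruleName                      = "PasswordRequired"
            scheduledActionConfigurations = @(@{ actionType = "block"; gracePeriodHours = 0 })
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceCompliancePolicies" `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $body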

In the compliance policy settings, we want to make sure that devices with no compliance status will be marked as non-compliant:

And finally, you should set up a security baseline for MDATP. Although not strictly required for this experiment, it gives you a good idea of which MDATP features you can centrally manage via Intune. If you’re running your Windows 10 remotely, be careful with the firewall configurations. The default settings in the baseline will make you lose RDP connectivity (been there…).

Finally, we’ll create a couple of conditional access policies. For testing purposes, I’ll create two policies:

  1. Access to SharePoint Online will require either a compliant device or MFA
  2. Access to Exchange Online will require both a compliant device and MFA (picture below; a scripted sketch follows after the list)
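
As a side note, policy 2 could be sketched through the Microsoft Graph conditional access API (beta at the time of writing) roughly as follows. This is a minimal sketch, assuming $token holds a Graph access token; 00000002-0000-0ff1-ce00-000000000000 is the well-known Office 365 Exchange Online app id.

$policy = @{
    displayName   = "EXO - require compliant device AND MFA"
    state         = "enabled"
    conditions    = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("00000002-0000-0ff1-ce00-000000000000") }
    }
    grantControls = @{
        # policy 1 (SharePoint Online) would use "OR" with the same two controls
        operator        = "AND"
        builtInControls = @("mfa", "compliantDevice")
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/beta/identity/conditionalAccess/policies" `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $policy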

Enroll!

Now we’re good to go! I start by joining the Windows 10 client to Azure AD, which also automatically enrolls it into Intune (as I have enabled the enrollment policy). After that, I’ll log in with an Azure AD account and run the following command. This should generate an informational alert in MDATP, just to show that we’re getting the data.
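
The command itself was only a screenshot in the original; I’m assuming a variant of Microsoft’s documented MDATP detection test one-liner, which generates exactly this kind of harmless informational alert when run from a command prompt:

powershell.exe -NoExit -ExecutionPolicy Bypass -WindowStyle Hidden $ErrorActionPreference= 'silentlycontinue';(New-Object System.Net.WebClient).DownloadFile('http://127.0.0.1/1.exe', 'C:\\test-WDATP-test\\invoice.exe');Start-Process 'C:\\test-WDATP-test\\invoice.exe'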

The information in the MDATP portal lags a few minutes behind, so you have to wait a bit for the information to appear. But eventually, you should see this:

As you can see, the risk level is currently ”No known risks”, as this activity is not considered suspicious. Note that this view also shows the machine’s exposure level. MDATP assesses any vulnerabilities on the host, whether due to missing security patches or known vulnerable configurations. The exposure level does not impact the device’s compliance status, but it provides very useful information about the security posture of the device.

At this point, we can see our device also in Intune, and it is compliant with all the defined policies:

When I access SharePoint Online, I’m able to get in with just a username and password. And if I log in to Exchange Online, I need to authenticate using MFA, but I’m able to get in. So everything is working as expected. Note: if you’re using Chrome, you need to have the Windows 10 Accounts extension enabled. Otherwise the device information will not be conveyed to Azure AD, and it will assume that you’re using a non-compliant device.

Infect me!

Now, let’s break things. To do that, I need to simulate an attack on my Windows 10 device so that MDATP will increase the risk level. For this I’m using a Word document crafted for exactly that purpose.

The document contains a macro that drops two files on the desktop (diy_rs3_jscript_executes_ps.js and WinATP-Intro-Backdoorexe.jpg). It then establishes persistence by modifying the HKCU\Software\Microsoft\Windows\CurrentVersion\Run registry key. And finally, it starts a trusted process (RuntimeBroker.exe) and injects malicious code into it. This kind of pattern might go unnoticed by traditional anti-malware solutions, which is of course the point here.

Once I’ve opened the document, I’ll wait a couple of minutes and voilà! There’s not just one alert but a bunch of them, and the device risk level has jumped to High. We’ll briefly look into the forensics later; for now we just want to see what happens with conditional access.

The information has also propagated to Intune via the integration and the device is no longer compliant (this took literally just a couple of minutes from the simulated infection):

At this point, I’m still able to use my open browser session, as the access token is still valid (one-hour validity). But I’m impatient, so I just close my browser and reopen it. When I log in to Exchange Online now, I get the following notice. In this case it should really say: ”Oops – you really shouldn’t have opened that document”.

I’m still able to access SharePoint Online but it now requires MFA. So the CA policies work just as expected.

Off the grid

One useful feature within MDATP is that you can completely isolate the client from the network. It might be that the device is currently in the corporate network, and therefore just restricting access through Azure AD might not be enough (unless you’ve implemented a full-blown zero-trust model).

Once I do that, it takes a minute or two and I lose my RDP connection. Luckily, the device will still be able to communicate back to MDATP (well, it would be a pretty crappy feature otherwise). This way, once the remediation activities have been completed, you can remotely allow the device back into the network. Also, I can still continue my forensic analysis while the device is safely off the grid.

CSI

So what would happen next? Obviously, at this point someone from the SOC team, or whoever is monitoring the alerts, would need to investigate what happened and help get the device back into compliance. There are a couple of MDATP features worth exploring at this point (we won’t go through everything). MDATP shows you the process tree during the incident, which gives a nice overview of what happened:

You can also check on individual files and where they have been seen in the organization. In this case I’m looking at the WinATP-Intro-Backdoorexe.jpg:

You can also take the file hash and use it to drill down into the data using the advanced hunting functionality. This allows you to search the data directly from the MDATP logs using the Kusto query language (the same one used with Log Analytics). There’s also a community that has produced useful examples, both for generic use and for identifying specific exploits (https://github.com/microsoft/WindowsDefenderATP-Hunting-Queries). In this case, I’m just looking into all the events involving this particular file.
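
A minimal sketch of such a query (table and column names follow the 2019-era MDATP advanced hunting schema; newer tenants use the Device* tables, and the SHA1 placeholder is hypothetical):

// Find all events that reference a particular file hash across the main event tables
search in (FileCreationEvents, ProcessCreationEvents, MiscEvents) SHA1 == "<file SHA1 here>"
| project EventTime, ComputerName, ActionType, FileName, FolderPath
| order by EventTime desc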

Another useful feature is the ”Collect investigation package”:

This will pull all sorts of useful information from the machine and allow you to download it as a zip file. This can be very handy when doing the forensic analysis.

Reincarnation

So let’s say we’ve now concluded our investigations and cleaned up the machine. What next? We have to decrease the risk level of the machine for it to become compliant again. To do that, we have to resolve the alerts in the portal. You can also link alerts to incidents – in this case, all the alerts originating from this machine should be related to a single incident.

Once all the alerts have been resolved, the risk level goes back to ”No known risks”. At this point, if we have isolated the machine, we could release it from the isolation (it took a couple of minutes for me to be able to reconnect my RDP session after the release).

And as expected, device status in Intune is back to compliant:

And I’m able to read my email again!

That concludes today’s experiments.

Deploy: Native Exchange ActiveSync with Conditional Access and Intune while blocking legacy auth?

I’ve seen many companies struggle with EAS (Exchange ActiveSync) configuration, in relation to how to adapt a strong authentication and trusted devices approach for native mail clients. Thus I’d like to present a few possible scenarios for EAS handling, mostly with Conditional Access and Intune.

Update: Microsoft will initially be deprecating basic auth for EAS, on which some of the options presented below rely.

The options

  1. Allow EAS to go unchecked (not recommended)
  2. Allow compliant (enrolled) native clients
  3. Use only the Outlook app / don’t allow native clients (using the Approved Client App option in Conditional Access)
  4. A mix of 2 and 3 with different policies
  • Option 1: Not recommended (even without Intune)
  • Option 2: Recommended for most who are unsure whether they can transition from native mail clients to the Outlook app (option 3), which gives the best control if you’re considering having Outlook as a managed application
  • It’s also worth mentioning that the iOS native mail client has supported modern authentication for a good while. Meaning that if it’s your only native mail client, you don’t necessarily need separate policies for EAS
  • For orgs without Conditional Access and Intune, there is a less flexible but similar option to scenario 2, which I’ve addressed in another blog post (so there’s no real reason to go with option 1)

Before the configuration guide, let’s address these two strange sentences:

  • ”Apply only to supported platforms”
  • ”Exchange ActiveSync currently does not support all the other conditions”

”Apply only to supported platforms”

https://docs.microsoft.com/en-us/intune/conditional-access-intune-reassign

The heading ”Apply only to supported platforms” is super confusing. From a security point of view, you’d think that unsupported platforms would slip past Conditional Access (since they aren’t ”other clients”, which we handle in a separate block policy).

It boils down to this:

”if you have chosen to block clients that aren’t supported by Intune, use the Apply policy only to supported platforms option”


See, when the device isn’t supported by Intune (and is thus unable to ever get the Compliant status for ActiveSync access), it won’t get past Conditional Access.


Exchange ActiveSync currently does not support all the other conditions

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/conditions#exchange-activesync-clients

The supported conditions for native mail apps are:

  • In Users / Groups: selection of specific users/groups, or all users (include and exclude both seem to apply)
  • In Cloud Apps: only Exchange Online is selected
  • In Conditions: ”Apply policy only to supported platforms”
  • In Conditions – Client Apps: only ActiveSync is selected (and within the scope of this blog, and per other recommendations, it’s advisable to narrow it down to ”Apply policy only to supported platforms”)
  • In Access Controls: ”Require device to be marked as compliant” is the only supported control

Your mileage may vary with other conditions, but I’d recommend sticking with the recommendations in Microsoft’s articles.


Policy examples

  • Exclude Exchange Online from other policies before configuring specific policies for it
  • In my example I haven’t specified mobile-platform-specific policies for ActiveSync, since the configuration options in ActiveSync are very limited. I thus don’t see much (or any) value in separating, for example, iOS and Android into their respective policies, unless you want to define a block policy for a mobile platform (in that case, bear in mind that the platform selection is often based on User-Agents, meaning that if you really need to block a mobile platform, ensure that the allowed mobile platforms are also narrowed down in protocol, apps, and Intune policies)
  • There are many ways to deal with service accounts with the following policies. I will be providing another post on how to deal with service accounts in Conditional Access, to keep this post tidy 🙂
    • One thing worth mentioning: when you want to bypass a certain account or action, remember to narrow the bypass conditions so that, for example, an IP-based bypass for a single service account is possible only for one application (meaning that the service account, even in a ”trusted location”, won’t have any more access than the single app – unless specified otherwise)

EXO (Modern auth and browser clients)

EXO – Activesync (Compliant only)

General – Block ’Other Clients’

Note! You could possibly also use one of the built-in baseline rules, as it doesn’t block EAS. I am opting for a non-baseline policy because I may want to exclude some directory roles (service accounts) into another policy. The policy also lists that it blocks the native Android mail client, which I suppose is still, to some extent, a supported platform, and I want to maintain it in a separate policy.
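
For reference, the ActiveSync compliant-only policy can be sketched as a Graph conditionalAccessPolicy (beta) object. A hedged sketch only – the portal remains the documented way to configure the ActiveSync client-app condition, and $token is assumed to hold a Graph access token:

$policy = @{
    displayName   = "EXO - ActiveSync (Compliant only)"
    state         = "enabled"
    conditions    = @{
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("00000002-0000-0ff1-ce00-000000000000") }
        # Narrow the policy down to the ActiveSync protocol only
        clientAppTypes = @("exchangeActiveSync")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("compliantDevice") }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/beta/identity/conditionalAccess/policies" `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $policy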

Expected behaviour

  1. Newly added devices will be quarantined in EXO
  2. The client will receive a mail prompting them to enroll via the Company Portal app
  3. After enrollment, the phone will stay in quarantine while the compliance information propagates
  4. On success:
    1. CA device info will display ”join type = Azure AD Registered”
    2. The CA policy result for ’require compliant device’ = Success
    3. Exchange displays DeviceAccessState = Allowed, DeviceAccessStateReason = ExternallyManaged

Below is the process in short.

There is also a baseline policy, which is recommended unless you need more exclude conditions (such as for native Android mail clients).

Intune Configuration

  • Users’ devices show as compliant in both Azure AD and Intune
  • ’Compliant status’ in Azure AD
  • Ensure that all used platforms have a compliance policy
  • Ensure devices with no compliance policy assigned are handled as ’Not Compliant’

Keywords for troubleshooting

EXO PowerShell module

”DeviceAccessState : Quarantined”
”DeviceAccessStateReason : ExternalEnrollment”

During enrollment the devices will stay in quarantine until the enrollment is complete (the device gets a registered ID in AAD, and Azure AD displays Success for the Conditional Access policy).

Once successful the states and reasons should be as following

”DeviceAccessState : Allowed”
”DeviceAccessStateReason : ExternallyManaged”
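
A quick way to check these values is the EXO PowerShell module mentioned above; for example (the mailbox identity is a placeholder):

# Inspect a user's mobile device access state in Exchange Online
Get-MobileDevice -Mailbox "user@contoso.com" |
    Format-List FriendlyName, DeviceAccessState, DeviceAccessStateReason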


Company Portal App Android

”Company Access Setup is Incomplete”

”Email account Activation”

These messages may persist on Android devices even after successful enrollment, with Conditional Access working as intended (and with the DeviceAccessState allowed in EXO). I suppose this is some bug in the enrollment flow not displaying the status correctly (tested on Android 7.x and 8.x).


References

MS article regarding blocking legacy auth:

https://docs.microsoft.com/en-us/intune/conditional-access-intune-reassign

Azure AD Directories and B2B user decision matrix – One-slider


Ever pondered how to decide about B2B account types? One thing is for sure: if you’re an enterprise org, you’re better off having multiple partner account types, because it’s not a one-size-fits-all type of scenario.

  • The matrix makes a clear separation between collaboration-only partners and partners performing administrative tasks
    • This is based on multiple recommendations, but mostly on the following Azure subscription recommendation from the Azure Secure DevOps Kit. (Obviously, if the account is also homed in Azure AD and you’ve set up a B2B conditional access policy for guests, you might consider yourself covered to some extent…)
link
  • Where the authentication happens is an important part of the picture, and is the main deciding factor for the user’s home directory

Other considerations

  • Licensing is a separate discussion… Anyway, the key takeaway is that the 1:5 ratio licensing works in the background: each paid Azure AD Premium license in the tenant entitles you to five B2B guest users (so, for example, 100 P1 licenses cover 500 guests). If you assign a license directly to a guest user, that’s one license gone, and it doesn’t benefit from the B2B licensing
  • The picture doesn’t consider SSO between the host tenant and its IdP, as it basically wouldn’t add anything to the picture (unless you’d ”multiplex” multiple claims providers against a single AD FS, and then via claims pipeline transformations emit claims for the guest-type users in your tenant)
  • A more detailed explanation of all the scenarios (minus the flowchart) can be found here: Properties of an Azure Active Directory B2B collaboration user

Please don’t hesitate to comment or send feedback, if you notice any errors or wrong assumptions in the flowchart

”MOAR STUFF”

If you want a B2B deep dive on user types and authn/authz, then check: https://securecloud.blog/2019/05/06/deep-diver-azure-ad-b2b/


Add sAMAccountName to Azure AD Access Token (JWT) with Claims Mapping Policy (and avoiding AADSTS50146)

With all the possibilities available (and quite a few blogs) regarding the subject, I can’t blame anyone for wondering what’s the right way to do this. At least I can present one way that worked for me.

Here are the available ways to do it (option 1 obviously isn’t for JWT tokens):

  1. With SAML federations you have full claims selection in the GUI
  2. Populate optional claims for the API in the app registration manifest, given you’ve updated the schema for the particular app
  3. Create a custom claims mapping policy to choose the emitted claims (the option we’re exploring here)
  4. Query directory extension claims from the Microsoft Graph API, appended to the directory schema extension app* that Graph API can call

Please note: for sAMAccountName we’re not using the approach where we add directory extensions to a Graph-API-queryable application = NO DIRECTORY EXTENSION SYNC IN AAD CONNECT NEEDED


Checklist for using Claims Mapping Policy

Pre: Have the client application and web API ready before proceeding.

#Example App to Add the Claims

AzureADPreview\Connect-AzureAD

# Define the claims mapping policy: emit the on-prem sAMAccountName in JWT tokens
$Definition = [ordered]@{
    "ClaimsMappingPolicy" = [ordered]@{
        "Version" = 1
        "IncludeBasicClaimSet" = $true
        "ClaimsSchema" = @(
            [ordered]@{
                "Source" = "user"
                "ID" = "onpremisessamaccountname"
                "JwtClaimType" = "onpremisessamaccountname"
            }
        )
    }
}

# Create the policy (note: $Definition is used here; the original snippet referenced an undefined $template)
$pol = New-AzureADPolicy -Definition ($Definition | ConvertTo-Json -Depth 3) -DisplayName ("Policy_" + ([System.Guid]::NewGuid().guid) + "_" + $Definition.Values.ClaimsSchema.JwtClaimType) -Type "ClaimsMappingPolicy"

# Create the demo app and its service principal, then attach the policy to the SPN
$entApp = New-AzureADApplication -DisplayName ("DemoApp_" + $Definition.Values.ClaimsSchema.JwtClaimType)
$spnob = New-AzureADServicePrincipal -DisplayName $entApp.DisplayName -AppId $entApp.AppId

Add-AzureADServicePrincipalPolicy -Id $spnob.ObjectId -RefObjectId $pol.Id

#From the GUI change the Identifier and acceptMappedClaims value (From the legacy experience)
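
If you’d rather avoid the GUI for the identifier part, the same change can be sketched in PowerShell (the URI is the one from my manifest example below; the domain must be a verified custom domain in the tenant):

# Set the identifier URI to match a verified custom domain (instead of the GUI step above)
Set-AzureADApplication -ObjectId $entApp.ObjectId -IdentifierUris @("https://samajwt.dewi.red")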


  • Generally: the app that will emit the claims is not the one you use as the clientID (the client subscribing to the audience)
    • Essentially you should create an un-trusted client with the clientID, and then add the audience/resource you’re using under API permissions
  • Ensure that the SPN has an IdentifierURI that matches a registered custom domain in the tenant
    • The reasoning is vaguely explained here & here
      • Whatever research work the feedback senders did, it sure looked in-depth 🙂
  • Update the app manifest to accept mapped claims
    • Do this in the legacy experience; the new experience, at least in my tenant, didn’t support updating this particular value and failed with ”Insufficient privileges to complete the operation”

If mapped claims are not accepted in the manifest, and the pre-requisites are not satisfied, you might get this error:

”AADSTS50146: This application is required to be configured with an application-specific signing key. It is either not configured with one, or the key has expired or is not yet valid. Please contact the application’s administrator.”

  • Below is an example of the manifest changes (acceptMappedClaims, and the identifier URI matching a verified domain)
     "id": "901e4433-88a9-4f76-84ca-ddb4ceac8703",
    "acceptMappedClaims": true,
    "accessTokenAcceptedVersion": null,
    "addIns": [],
    "allowPublicClient": null,
    "appId": "9bcda514-7e6a-4702-9a0a-735dfdf248fd",
    "appRoles": [],
    "oauth2AllowUrlPathMatching": false,
    "createdDateTime": "2019-06-05T17:37:58Z",
    "groupMembershipClaims": null,
    "identifierUris": [
        "https://samajwt.dewi.red"
    ],

Testing
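
To verify the claim actually shows up, request an access token for the API and decode its payload; a minimal sketch in PowerShell (paste your own token into $jwt):

# Base64url-decode the JWT payload and check for the mapped claim
$jwt = "<paste access token here>"
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) |
    ConvertFrom-Json | Select-Object onpremisessamaccountname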

References

https://github.com/MicrosoftDocs/azure-docs/issues/5394

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/develop/active-directory-claims-mapping.md

https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential#create-a-self-signed-certificate

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/develop/v2-protocols-oidc.md#fetch-the-openid-connect-metadata-document

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/develop/active-directory-claims-mapping.md#example-create-and-assign-a-policy-to-include-the-employeeid-and-tenantcountry-as-claims-in-tokens-issued-to-a-service-principal

Concept: Publish on-prem API using AAD App Proxy and API Management with Azure AD JWT Bearer Grant

Disclaimer: Azure AD App Proxy is perfectly capable of covering most internal API publishing scenarios, if you can handle API request and response handling with just the client and the on-premises server. Alternatively, you might have another component on-prem which can act as a middle-tier component to do further validation and shaping of requests. In a nutshell: if your API scenario doesn’t benefit from a middle-tier service, then I suggest you continue with ”keeping it simple”. And as always: the information in this weblog is provided “AS IS” with no warranties and confers no rights.

Better together?

API Management and AAD App Proxy can complement each other when you need request shaping / a central API gateway for processing before calling back-end APIs. In this blog I explore a PoC example, and some reasoning for such scenarios.

When to use? / benefits

  • Your internal API isn’t visible to Azure API Management via on-premises network connectivity, and you’re not planning to use site-to-site networking in the future, or for a particular API
  • You want to enrich the payloads and headers of requests for particular back-end services – for example, services which can’t consume claims in JWT tokens. You also want to ensure that selected parts of these payloads cannot be forged by the client
  • You want a single endpoint to distribute and shape/manipulate traffic to various APIs (a general argument)
Extra Claims from APIM

How it works (short)

  • APIM calls the App Proxy SPN instead of the mobile clients calling it directly
  • The service principals for the APIM and App Proxy apps are blocked from obtaining access tokens via the authorization code grant (by removal of the redirect URI), leaving the JWT bearer grant as the only option for APIM. This ensures that only APIM can fetch the ”final” access token for the App Proxy app
  • Flow: native client (auth code flow) -> APIM (JWT bearer grant) -> Azure AD App Proxy SPN authorization (the permissions to make this work are explained later in this blog)

Ensuring integrity with retrofit of AAD App Proxy & APIM

  • In order to retrofit the Azure AD Application Proxy with APIM, it’s essential that the App Proxy application and the APIM SPN can act only as web APIs (not public clients); this keeps the flow intact
    • Stripping token issuance rights from the SPN for the authorization code flow ensures that only the app registration for APIM can delegate user access to the App Proxy SPN (audience) using that particular flow
    • This assumption only holds as long as you don’t retroactively enable implicit flow on the SPN itself (you can have implicit grant on other clients, but not on this particular SPN, which is the owner of the App Proxy audience / identifierUri)

How To?

  • Remove the redirect URIs from both the middle-stream API and the App Proxy application
  • Then perform the fencing (below) by delegating rights in the correct order to support the flow
  • Now the public client can only get tokens for APIM, and can never call App Proxy directly, as the client doesn’t have direct permissions on the App Proxy SPN (only APIM has)

Azure AD Fencing: Utilizing extended grant types to access downstream API’s

OAuth2.0 On-Behalf-Of flow
https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow#protocol-diagram

The Azure AD specific grant type (urn:ietf:params:oauth:grant-type:jwt-bearer) is the magic component here.

  • The JWT bearer flow allows us to create ”DMZ-like” fencing between direct calls and downstream calls destined for the App Proxy SPN via the middle-tier API

Using the On-Behalf-Of flow (JWT bearer), we can ensure that APIM is the only allowed caller for the App Proxy Audience
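
For reference, the token request APIM makes on behalf of the user looks roughly like this when expressed as a plain POST in PowerShell (v1 endpoint parameter names, matching the named values used in the policy below; all placeholder values are hypothetical):

# Sketch of the on-behalf-of (JWT bearer) token request against the v1 endpoint
$body = @{
    grant_type          = "urn:ietf:params:oauth:grant-type:jwt-bearer"
    client_id           = "<APIM app registration clientId>"
    client_secret       = "<APIM client secret>"
    assertion           = "<access token received from the public client>"
    resource            = "<App Proxy application identifierUri>"
    requested_token_use = "on_behalf_of"
}
Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<tenant>/oauth2/token" -Body $body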


API Management configuration

  • The following policy is the ”tip of the iceberg” in terms of how you can shape and handle requests bound in multiple directions
  • More graceful handling is possible with the multiple policy clauses APIM provides
  • I can hardly claim any credit (apart from the architecture and flow design) for the APIM policies below, as the web is full of great APIM examples for all of the policies I have used
 
<policies>
    <inbound>
        <!-- validate the initial call destined later towards middle-tier API-->
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
            <openid-config url="https://login.microsoftonline.com/dewired.onmicrosoft.com/.well-known/openid-configuration" />
            <audiences>
                <audience>https://webapi-a.dewi.red</audience>
            </audiences>
            <required-claims>
                <claim name="appid" match="any">
                    <value>299568d2-3036-41d9-a961-89266e67ea82</value>
                </claim>
            </required-claims>
        </validate-jwt>
        <!-- Forward THE UPN header to back-end -->
        <set-variable name="UPN" value="@(context.Request.Headers["Authorization"].First().Split(' ')[1].AsJwt()?.Claims["upn"].FirstOrDefault())" />
        <set-variable name="Bearer" value="@(context.Request.Headers["Authorization"].First().Split(' ')[1])" />
        <set-header name="back-endUPN" exists-action="override">
            <value>@(context.Variables.GetValueOrDefault<string>("UPN"))</value>
        </set-header>
        <!-- Send new request with the Token -->
        <send-request mode="new" response-variable-name="OBOtoken" timeout="20" ignore-error="false">
            <set-url>{{tokenURL2}}</set-url>
            <set-method>POST</set-method>
            <set-header name="Content-Type" exists-action="override">
                <value>application/x-www-form-urlencoded</value>
            </set-header>
            <set-header name="User-Agent" exists-action="override">
                <value>Mozilla/5.0 (Windows NT; Windows NT 10.0; fi-FI) WindowsPowerShell/5.1.17763.503</value>
            </set-header>
            <set-body>@{
            var tokens = context.Variables.GetValueOrDefault<string>("Bearer");
           
              return "assertion=" + tokens + @"&client_id={{clientid2}}&resource={{resource}}&client_secret={{ClientSecret}}&grant_type={{grantType}}&requested_token_use={{requested_token_use}}";
             
               }</set-body>
        </send-request>
        <!-- Forward the OBOtoken to AppProxy  -->
        <choose>
            <when condition="@(((IResponse)context.Variables["OBOtoken"]).StatusCode == 200)">
                <set-variable name="OBOBearer" value="@(((IResponse)context.Variables["OBOtoken"]).Body.As<JObject>(preserveContent: true).GetValue("access_token").ToString())" />
                <set-variable name="Debug" value="@(((IResponse)context.Variables["OBOtoken"]).Body.As<JObject>(preserveContent: true).ToString())" />
                <set-header name="Authorization" exists-action="override">
                    <value>@{
                    var ForwardToken = context.Variables.GetValueOrDefault<string>("OBOBearer");
                    return "Bearer "+ ForwardToken;
                    }</value>
                </set-header>
                <set-header name="Content-Type" exists-action="override">
                    <value>application/json</value>
                </set-header>
            </when>
        </choose>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies> 

Testing the solution in action

Before testing, ensure that the audience matches explicitly in all the places you’ve defined it (app registrations, Named Values, APIM policies, and the clients requesting access).

There are multiple ways to test the solution, but testing through APIM’s test console and peeking at the unsecured back-end resource via HTTP trace yields the most verbose results.

  • Get an access token for API-A (APIM) with the ”bulk client”
  • Paste the access token into the APIM test console and perform a test call to view the traces
  • Check the back-end for results (a direct-call sketch follows below)
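
Outside the test console, the same test call can be made directly against the gateway; a hedged sketch (the gateway URL, API path and subscription key are placeholders):

# Call the APIM-fronted API with the acquired access token
$token = "<access token acquired for API-A>"
Invoke-RestMethod -Uri "https://<your-apim>.azure-api.net/<api-path>" -Headers @{
    Authorization               = "Bearer $token"
    "Ocp-Apim-Subscription-Key" = "<APIM subscription key>"
}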

And that’s it!

Further stuff

  • Use Key Vault instead of Named Values (secret) for storing secrets
  • Place a WAF in front of APIM
  • Fine-tune the policies in APIM (this was just a PoC)
    • For example, the back-end could use a cached token for the downstream call, as the user and user identity are validated in the first step (they’re validated in the second step as well)
https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-cache
  • Get some proficiency in C# syntax… As a PS and JavaScript fellow, I found myself seriously struggling to properly escape, cast and enumerate variables/content

P.S. My similar articles