The policy reference for using AppArmor profiles on AKS nodes is a bit of a head scratcher. Multiple times I’ve come across the failed state in Azure Policy, even though Microsoft applies the default AppArmor profile to each AKS node out of the box.
- The actual policy is named as follows: ”Kubernetes cluster containers should only use allowed AppArmor profiles”
Background
Microsoft provides a good explanation of the AppArmor control for AKS nodes in the docs. The important thing to note is that while the AppArmor annotation is defined in the pod’s YAML manifest, the actual profile is applied on the AKS nodes (kubelets)
- The default profile is applied even if your pod spec is empty and carries no AppArmor annotations

- To confirm that the default profile is actually working, you can exec into the pod and check the following:
kubectl exec yourPod -n yourNameSpace -- cat /sys/module/apparmor/parameters/enabled
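If AppArmor is enabled on the node, the command prints Y; anything else (or a missing file) means the module is not loaded.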

Mitigating the control
Control mitigation is outlined in the policy definition itself, but it’s not the clearest how it works. That’s why I had to dig into the OPA Rego syntax with my colleague and display the full data provided by the input variable.
Essentially, the policy takes the allowedProfiles array (the defaultValue below) as input from Azure Policy and compares its values against the annotation in the podSpec. If you plan to use the already enabled default AppArmor profile, all you need to do is add ”runtime/default” as a value in the new Azure Policy definition and update the podSpec (example below) to match it.
Azure Policy snippet
"allowedProfiles": {
"type": "Array",
"metadata": {
"displayName": "Allowed AppArmor profiles",
"description": "The list of AppArmor profiles that containers are allowed to use. E.g. 'runtime/default;docker/default'. Provide empty list as input to block everything."
},
"defaultValue": [
"runtime/default"
]
},
OPA definition
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sazureenforceapparmorfork
spec:
  crd:
    spec:
      names:
        kind: k8sazureenforceapparmorfork
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            allowedProfiles:
              type: array
              items:
                type: string
            excludedContainers:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sazureenforceapparmorfork

        violation[{"msg": msg, "details": {}}] {
          # Windows nodes do not support AppArmor, so skip them
          not input.review.object.spec.nodeSelector["kubernetes.io/os"] == "windows"
          metadata := input.review.object.metadata
          container := input_containers[_]
          not input_container_excluded(container.name)
          not input_apparmor_allowed(container, metadata)
          # Dumping the whole input here is what let us inspect the data the policy receives
          msg := sprintf("AppArmor profile is not allowed, pod: %v, container: %v. Allowed profiles: %v", [input.review.object.metadata.name, container.name, input])
        }

        # A container passes if its per-container annotation matches one of the allowed profiles
        input_apparmor_allowed(container, metadata) {
          metadata.annotations[key] == input.parameters.allowedProfiles[_]
          key == sprintf("container.apparmor.security.beta.kubernetes.io/%v", [container.name])
        }

        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }

        input_container_excluded(field) {
          field == input.parameters.excludedContainers[_]
        }
PodSpec
apiVersion: v1
kind: Pod
metadata:
  namespace: pod-identity
  name: demorce2
  labels:
    aadpodidbinding: app-ident
    app: demorce2
  annotations:
    container.apparmor.security.beta.kubernetes.io/v5: runtime/default
spec:
  containers:
    - name: v5
      image: acraksa.azurecr.io/v5rce:latest
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 8443
  nodeSelector:
    kubernetes.io/os: linux
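After applying the manifest you can verify that the annotation actually landed on the live pod (the bracket/escape syntax is needed because the annotation key contains dots):

kubectl get pod demorce2 -n pod-identity \
  -o jsonpath="{.metadata.annotations['container\.apparmor\.security\.beta\.kubernetes\.io/v5']}"

This should print runtime/default.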


Policy message after mitigation
{
  "level": "info",
  "ts": "2021-11-17T13:16:16.316733004Z",
  "msg": "webhook admission request for azurepolicy-deny-apparmor-profile-5c40085ac6452d9f2af3 allowed: true",
  "log-id": "afdb70a2-a-32",
  "method": "github.com/Azure/azure-policy-kubernetes/pkg/webhook.ValidateGatekeeperResources"
}
"container.apparmor.security.beta.kubernetes.io/v5": "runtime/default",
"kubectl.kubernetes.io/last-applied-configuration": {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"container.apparmor.security.beta.kubernetes.io/v5": "runtime/default"
},
"labels": {
"aadpodidbinding": "app-ident",
"app": "demorce2"
},
"name": "demorce2",
"namespace": "pod-identity"
},
"spec": {
"containers": [{
"image": "acraksa.azurecr.io/v5rce:latest",
"name": "v5",
"ports": [{
"containerPort": 8443
}],
"resources": {
"limits": {
"cpu": "500m",
"memory": "128Mi"
}
}
}],
"nodeSelector": {
"kubernetes.io/os": "linux"
}
}
}
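For reference, the admission log above comes from the Azure Policy add-on. On my clusters the add-on pods run in kube-system while the Gatekeeper components it manages sit in gatekeeper-system; pod names and labels differ between add-on versions, so treat the label selector below as an assumption to adjust:

# Assumption: add-on webhook pods live in kube-system with this label - adjust if yours differ
kubectl logs -n kube-system -l app=azure-policy-webhook --tail=50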
Problems
I had a case with one cluster where the OPA Rego would not receive the input from the podSpec, no matter what I defined. That’s why I recommend testing this in a new cluster.
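If you run into the same thing, it’s worth checking that the template and the constraints Azure Policy generates actually exist in the cluster before blaming the Rego; Gatekeeper exposes both as resources:

kubectl get constrainttemplates
kubectl get constraints
kubectl describe constrainttemplate k8sazureenforceapparmorfork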
References
After reading through the documentation for a while and testing the mitigation outlined in the policy, I also stumbled upon the answer provided in the docs at ”how-to-fix-the-security-recommendation-34overridin.html”
