Hi All,
I have recently been working on a project that needed to make use of AKS (Azure Kubernetes Service). In this post I would like to discuss how I went about deploying a private AKS cluster as a proof of concept using an ARM template.
Azure Resources
Before I could use the below ARM template I needed to ensure that the following resources were created beforehand. For the purposes of my POC these resources were in the same RG that I was going to deploy the AKS cluster into; in my case the RG was called 'rg-aks-withnetowrk-joe'. A CLI sketch for creating these prerequisites follows the list.
A Virtual Network (with 'nodesubnet' and 'podsubnet' subnets) – this would be used for pod and node IP address assignment
A User Assigned Managed Identity – this needed the following role assignments: Managed Identity Operator and Network Contributor
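For completeness, below is a rough Azure CLI sketch of how these prerequisites could be created. The address prefixes, placeholders and exact role assignment scopes are assumptions for illustration; the resource names match the defaults used in the template further down.

# Resource group (uksouth matches the location hard-coded in the template)
az group create --name rg-aks-withnetowrk-joe --location uksouth

# VNet with the two subnets the template expects ('nodesubnet' and 'podsubnet').
# The address prefixes here are illustrative only.
az network vnet create --resource-group rg-aks-withnetowrk-joe --name aks-vnet-joe --address-prefixes 10.240.0.0/16 --subnet-name nodesubnet --subnet-prefixes 10.240.0.0/22
az network vnet subnet create --resource-group rg-aks-withnetowrk-joe --vnet-name aks-vnet-joe --name podsubnet --address-prefixes 10.240.4.0/22

# User assigned managed identity plus the two role assignments mentioned above.
# Replace the <placeholders> with the identity's principal ID and the scopes you want,
# e.g. the VNet for Network Contributor and the identity's RG for Managed Identity Operator.
az identity create --resource-group rg-aks-withnetowrk-joe --name mid-aks-joe-kubelet
az role assignment create --assignee <identity-principal-id> --role "Network Contributor" --scope <aks-vnet-joe-resource-id>
az role assignment create --assignee <identity-principal-id> --role "Managed Identity Operator" --scope <scope-for-the-identity>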
The Template
Below is the ARM template code that I used to deploy this POC AKS cluster.
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "managedClusterName": { "defaultValue": "joedevtestaksclusterarm", "type": "String" }, "clusterRGName": { "defaultValue": "rg-aks-withnetowrk-joe", "type": "String" }, "agentVMSize": { "type": "string", "defaultValue": "Standard_DS2_v2", "allowedValues": [ //https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#cluster-configuration-presets-in-the-azure-portal //Production Standard "Standard_D8ds_v5", //Dev/Test "Standard_DS2_v2", //Production Economy "Standard_D8ds_v5", //Production Enterprise "Standard_D16ds_v5" ], "metadata": { "description": "The size of the Virtual Machine." } }, "osDiskSizeGB": { "type": "int", "defaultValue": 0, "maxValue": 1023, "minValue": 0, "metadata": { "description": "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize." } }, "agentCount": { "type": "int", "defaultValue": 2, "maxValue": 3, "minValue": 1, "metadata": { "description": "The number of nodes for the cluster." } }, "networkPlugin": { "defaultValue": "azure", "type": "string", "allowedValues": [ "azure", "kubenet" ], "metadata": { "description": "Network plugin used for building Kubernetes network." } }, "clusterTags": { "defaultValue": {}, "type": "object", "metadata": { "description": "Specifies the tags of the AKS cluster." } }, "clusterSku": { "defaultValue": { "name": "Base", "tier": "Standard" //can also have tier set to be 'Free' }, "type": "object", "metadata": { "descirption": "The managed cluster SKU tier." } }, "enableVnetSubnetID": { "defaultValue": false, "type": "bool", "metadata": { "description": "Flag to turn on or off of vnetSubnetID." } }, "loadBalancerSku": { "defaultValue": "Standard", "type": "string", "allowedValues": [ "Basic", "Standard" ], "metadata": { "description": "Specifies the sku of the load balancer used by the virtual machine scale sets used by node pools." } }, "isPrivateClusterSupported": { "type": "bool", "defaultValue": true }, "enableAuthorizedIpRange": { "type": "bool", "defaultValue": false }, "authorizedIPRanges": { "defaultValue": [], "type": "array", "metadata": { "description": "Boolean flag to turn on and off http application routing." } }, "enablePrivateCluster": { "type": "bool", "defaultValue": true, "metadata": { "description": "Enable private network access to the Kubernetes cluster." } }, "networkPolicy": { "defaultValue": "", "type": "string", "metadata": { "description": "Network policy used for building Kubernetes network." } }, "serviceCidr": { "defaultValue": "", "type": "string", "metadata": { "description": "A CIDR notation IP range from which to assign service cluster IPs." } }, "dnsServiceIP": { "defaultValue": "", "type": "string", "metadata": { "description": "Containers DNS server IP address." } }, "vnetResourceGroup": { "defaultValue": "rg-aks-withnetowrk-joe", "type": "string", "metadata": { "description": "Resource GB of VNET." 
} }, "vnetSubscriptionId": { "defaultValue": "324b9d03-278a-4627-b8d4-b770ff0ec80a", "type": "string", "metadata": { "description": "Subscription ID of Vnet RG" } }, "vnetName": { "defaultValue": "aks-vnet-joe", "type": "string", "metadata": { "description": "Name Of Vnet for AKS" } }, "outboundType": { "defaultValue": "loadBalancer", "type": "string", "metadata": { "description": "Outbound Type" } }, "userAssignedManagedIdentityName": { "defaultValue": "mid-aks-joe-kubelet", "type": "string", "metadata": { "description": "Name Of MID" } } }, "variables": { "vnetID": "[resourceId(parameters('vnetSubscriptionId'), parameters('vnetResourceGroup'), 'Microsoft.Network/VirtualNetworks', parameters('vnetName'))]", "nodeSubnetID": "[concat(variables('vnetID'), '/subnets/nodesubnet')]", "podSubnetID": "[concat(variables('vnetID'), '/subnets/podsubnet')]", "clusterInfrastructureRG": "[concat(parameters('clusterRGName'), '_', parameters('managedClusterName'), '_uksouth')]", "defaultApiServerAccessProfile": { "authorizedIPRanges": "[if(parameters('enableAuthorizedIpRange'), parameters('authorizedIPRanges'), null())]", "enablePrivateCluster": "[parameters('enablePrivateCluster')]" } }, "resources": [ { "type": "Microsoft.ContainerService/managedClusters", "apiVersion": "2023-08-02-preview", "name": "[parameters('managedClusterName')]", "location": "uksouth", "identity": { "type": "UserAssigned", "userAssignedIdentities": { "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedManagedIdentityName'))]": {} } }, "tags": { "tag1": "testing" }, "sku": "[parameters('clusterSku')]", "properties": { "kubernetesVersion": "1.27.7", "dnsPrefix": "[concat(parameters('managedClusterName'), '-dns')]", "identityProfile": { "kubeletidentity": { "resourceId": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedManagedIdentityName'))]", "clientId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',parameters('userAssignedManagedIdentityName')),'2018-11-30','Full').properties.clientId]", "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',parameters('userAssignedManagedIdentityName')),'2018-11-30','Full').properties.principalId]" } }, "agentPoolProfiles": [ { "name": "agentpool", "max-pods": 30, "count": "[parameters('agentCount')]", "vmSize": "[parameters('agentVMSize')]", "osDiskSizeGB": "[parameters('osDiskSizeGB')]", "orchestratorVersion": "1.27.7", "enableNodePublicIP": false, "tags": "[parameters('clusterTags')]", "mode": "System", "osType": "Linux", "securityProfile": { "sshAccess": "LocalUser" }, "vnetSubnetID": "[variables('nodeSubnetID')]", "podSubnetID": "[variables('podSubnetID')]" } ], "servicePrincipalProfile": { "clientId": "msi" }, "addonProfiles": { "azureKeyvaultSecretsProvider": { "enabled": false }, "azurepolicy": { "enabled": true } }, "nodeResourceGroup": "[variables('clusterInfrastructureRG')]", "enableRBAC": true, "supportPlan": "KubernetesOfficial", "networkProfile": { "loadBalancerSku": "[parameters('loadBalancerSku')]", "networkPlugin": "[parameters('networkPlugin')]", "networkPolicy": "[parameters('networkPolicy')]", "serviceCidr": "[parameters('serviceCidr')]", "dnsServiceIP": "[parameters('dnsServiceIP')]", "outboundType": "[parameters('outboundType')]" }, "privateLinkResources": [ { "id": "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('managedClusterName')), '/privateLinkResources/management')]", "name": "management", "type": 
"Microsoft.ContainerService/managedClusters/privateLinkResources", "groupId": "management", "requiredMembers": [ "management" ] } ], "apiServerAccessProfile": "[if(parameters('isPrivateClusterSupported'), variables('defaultApiServerAccessProfile'), null())]", "autoUpgradeProfile": { "upgradeChannel": "patch", "nodeOSUpgradeChannel": "NodeImage" }, "disableLocalAccounts": false, "securityProfile": {}, "storageProfile": { "diskCSIDriver": { "enabled": true, "version": "v1" }, "fileCSIDriver": { "enabled": true }, "snapshotController": { "enabled": true } }, "oidcIssuerProfile": { "enabled": false }, "workloadAutoScalerProfile": {}, "metricsProfile": { "costAnalysis": { "enabled": false } } } } ] } |
The Created Resources
After running the above ARM template, the following resources were created.
The AKS instance itself is marked in red; I have also shown the Managed Identity and VNet that I created earlier.
Next, the image below shows the automatically created resources that are used by AKS. These get created in a node resource group; mine was called 'rg-aks-withnetowrk-joe_joedevtestaksclusterarm_uksouth', a name that is specified inside the ARM template.
Since this is a private cluster you will notice that the deployment has created a private endpoint called 'kube-apiserver'. Although this is a private cluster, pods can still be exposed publicly to the internet using the 'kubernetes' load balancer that got created. Don't worry, I will go into this further shortly.
Now, in order to manage the private AKS cluster, we either need an ExpressRoute or VPN connection into Azure, or a virtual machine deployed into a RG and a VNet that is peered with the VNet of our AKS cluster.
Accessing the Private AKS Cluster
Since this was a POC I opted to create a separate resource group called 'rg-mgmt' containing a VM and a VNet, as shown in the image below; the purpose of this RG was to simulate an on-premises environment.
The VM had a public IP address so that I was able to connect via RDP, and it also had a private IP address. The 'vnet-mgmt' VNet was peered with the AKS instance VNet as shown below:
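If you prefer to script it, the peering can also be created with the Azure CLI. This is a sketch assuming the resource group and VNet names used in this POC; because the VNets live in different resource groups, --remote-vnet needs the full resource ID.

# Peer the management VNet to the AKS VNet...
az network vnet peering create --resource-group rg-mgmt --name peer-mgmt-to-aks --vnet-name vnet-mgmt --remote-vnet <resource-id-of-aks-vnet-joe> --allow-vnet-access
# ...and create the reverse peering from the AKS VNet back to the management VNet
az network vnet peering create --resource-group rg-aks-withnetowrk-joe --name peer-aks-to-mgmt --vnet-name aks-vnet-joe --remote-vnet <resource-id-of-vnet-mgmt> --allow-vnet-access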
Now we have one more step to perform before we can connect to our AKS cluster. In the AKS node RG, which again in my case was called 'rg-aks-withnetowrk-joe_joedevtestaksclusterarm_uksouth', I went to the private DNS zone and needed to add my vnet-mgmt VNet to the virtual network links, as shown in the image below:
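I added the link through the portal, but the same virtual network link can also be created with the Azure CLI. A sketch, assuming the zone is the auto-generated privatelink private DNS zone that AKS put in the node RG and using hypothetical link/placeholder names:

az network private-dns link vnet create --resource-group rg-aks-withnetowrk-joe_joedevtestaksclusterarm_uksouth --zone-name <privatelink-zone-name> --name link-vnet-mgmt --virtual-network <resource-id-of-vnet-mgmt> --registration-enabled false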
Connecting to Cluster
With all that done, I was now in a position to RDP onto the VM and install the following software (see the sketch after this list for a command-line install option):
Kubectl – https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Az Cli – https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=powershell
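If you would rather install both from the command line, something like the following winget commands should work; the package IDs are what I believe them to be at the time of writing, so verify them with 'winget search' first.

winget install --id Kubernetes.kubectl -e
winget install --id Microsoft.AzureCLI -e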
Once you have installed the above software, updated your PATH to point to the location where kubectl was downloaded, and signed in with 'az login', you can then run the following command from a command prompt.
az aks get-credentials --resource-group YOUR_RG --name YOUR_CLUSTER_NAME
Just change the values accordingly. If all goes well with the above command, you can then issue the following command:
kubectl get nodes -o wide
The output of the above command will be:
Creating A Deployment
Public Access
Now we can create a deployment. The first deployment will allow external users on the internet to access our pod, which will just be a simple Nginx web server.
Create the following YAML file called 'nginx-deployment.yaml' and save it to a location of your choice on the VM; the contents of this YAML file are shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
The above is essentially creating a Deployment and a Service.
Run the following command to create this deployment (ensure you use the correct path to where you saved the deployment YAML file):
kubectl apply -f nginx-deployment.yaml
Check that the pods have been created:
kubectl get pods
As shown above, you can see the three nginx replicas that we specified in our deployment file.
Now issue the following command to find the external IP address of the LoadBalancer service:
kubectl get services
I have underlined the public IP address in red.
If you don't see a public IP address and it just says 'Pending', issue the command below to troubleshoot:
kubectl describe service NAME_OF_SERVICE_HERE
Then look at the events for any error messages.
Private Access
Now, what if you don't want some of the pods that you create to be accessible from the internet, for example you just want internal office access? To do this we need to create another deployment, but with an internal load balancer. Create the following file, internal-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-internal
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-internal
  template:
    metadata:
      labels:
        app: nginx-internal
    spec:
      containers:
        - name: nginx-internal
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
spec:
  selector:
    app: nginx-internal
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Then apply this YAML file:
kubectl apply -f internal-deployment.yaml
This will create the pods and the Service, but it will also create the following resources in Azure within the node RG:
Now we need to create a private endpoint; I did this using the portal:
I had already created a private endpoint subnet, and this private endpoint maps to the private link service. After the private endpoint has been created you will need to manually approve the connection; this is done from within the private link service.
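For reference, the private endpoint can also be created with the Azure CLI instead of the portal. This is a sketch with hypothetical names; the private connection resource ID is the ID of the private link service that the 'azure-pls-create' annotation created in the node RG, and the manual approval still happens on the private link service afterwards.

# Find the private link service created by the internal service annotation
az network private-link-service list --resource-group rg-aks-withnetowrk-joe_joedevtestaksclusterarm_uksouth --query "[].id" -o tsv

# Create the private endpoint in the management VNet's private endpoint subnet
az network private-endpoint create --resource-group rg-mgmt --name pe-nginx-internal --vnet-name vnet-mgmt --subnet <private-endpoint-subnet-name> --private-connection-resource-id <private-link-service-id> --connection-name nginx-internal-connection --manual-request true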
Once this is done, you can use the private endpoint IP to access the pods.
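To verify, you can look up the private endpoint's IP (for example from its network interface) and then browse or curl to it; the placeholder names below are hypothetical.

# The private endpoint's NIC holds the private IP that fronts the internal nginx service
az network nic show --ids <private-endpoint-nic-resource-id> --query "ipConfigurations[0].privateIPAddress" -o tsv
curl http://<private-endpoint-ip>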
Conclusion
That's all for now. If you have any problems feel free to reach out to me and I will try to help if I can. I have tried to keep things as simple as possible for this article, but you can configure the AKS ARM template even further.