Use KeyVault and Implement VNet Peering in Azure with Terraform
In my previous blog, we deployed a very simple scenario using Terraform.
In this blog, let’s take things a step further: we’ll implement VNet Peering between two virtual networks, and we’ll also utilize Azure Key Vault so that we access our Virtual Machines through secrets rather than keeping the credentials in our code and absorbing them through variable definitions, which is NOT an ideal way of restricting access to a resource.
What does our architecture look like, and what do we need to build?
Below is a pictorial representation of our goal; let’s keep an eye on it as we progress with our implementation.
A picture in mind always helps ;)
In order to achieve the above, we first need to break things down into small pieces. Let’s do that and break the work into steps as small as possible, like below. [Remember that each of these steps can be granularized even further, but for simplicity I’ve limited it to 10.]
Let’s begin the fun and get our hands a bit dirtier than before :)
1. Build the basic Terraform Folder Structure
- Create a folder named `Vnet_Peering_Lab`, and make sure to follow and implement the ‘Pre-requisites’, 2nd and 3rd sections of my previous blog before proceeding to the next steps
- Configure `backend.tf` accordingly in that folder; it is used to store the Terraform state remotely in the respective Azure Storage Account/Container
- Create `provider.tf` in the same folder with the code below, which configures infrastructure on the Microsoft Azure platform. Please ignore the `key_vault` section for now; I’ll explain its use when we reach the ‘Key Vault’ implementation section further below in this blog
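As a sketch, a minimal `provider.tf` could look like the following. The version constraint is illustrative and may differ from the repo; the `key_vault` settings inside `features` are explained later in the Key Vault section.

```hcl
# provider.tf — sketch; pin the provider version to whatever you actually use
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # illustrative version constraint
    }
  }
}

provider "azurerm" {
  features {
    # Explained in the 'Key Vault' section further below
    key_vault {
      purge_soft_delete_on_destroy    = true
      recover_soft_deleted_key_vaults = false
    }
  }
}
```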
- Create `variables.auto.tfvars` holding the default declared values like below. We define address spaces for the VNets and Subnets of both the Primary and Secondary Resource Groups. `firewall_allocation_method` and `firewall_sku` will be consumed when building `virtual_networks` in the later sections
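The exact values in the repo may differ; as a sketch, the file could carry defaults along these lines (every name, region and address space below is illustrative):

```hcl
# variables.auto.tfvars — illustrative defaults, adjust to your environment
primary_rg_name   = "primary-rg"
secondary_rg_name = "secondary-rg"
location          = "eastus"

primary_vnet_address_space    = ["10.0.0.0/16"]
primary_subnet_address_prefix = ["10.0.1.0/24"]

secondary_vnet_address_space    = ["10.1.0.0/16"]
secondary_subnet_address_prefix = ["10.1.1.0/24"]

# Consumed when building the virtual_networks module
firewall_allocation_method = "Static"
firewall_sku               = "Standard"
```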
- Create `variables.tf` like below
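Each value in `variables.auto.tfvars` needs a matching declaration. A partial sketch (assuming the illustrative variable names above) might look like:

```hcl
# variables.tf — partial sketch of the declarations backing the tfvars file
variable "primary_rg_name" {
  type        = string
  description = "Name of the Primary Resource Group"
}

variable "location" {
  type        = string
  description = "Azure region for all resources"
}

variable "primary_vnet_address_space" {
  type        = list(string)
  description = "Address space for the Primary VNet"
}

variable "firewall_allocation_method" {
  type        = string
  description = "Allocation method consumed by the virtual_networks module"
}
```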
- Create two Resource Groups
- Let’s create the Primary and Secondary Resource Groups by defining them under the `resource_groups` module. Create a folder `modules\resource_groups` under `Vnet_Peering_Lab` and create the files `rg_main.tf`, `rg_variables.tf` and `rg_outputs.tf`
- Your folder structure should look similar to the below. You can ignore the other folders for now, or feel free to create those folders and dummy files under them as shown. Eventually, our entire `Vnet_Peering_Lab` will contain all these files, and you’ll be writing them as we go along :)
- Create the Primary and Secondary Resource Groups. Let’s call the `resource_groups` module from `main.tf` in the root folder and pass the required variables into that module like below
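A sketch of the module call and the resources behind it (variable names are illustrative and may differ from the repo):

```hcl
# Root main.tf — calling the resource_groups module
module "resource_groups" {
  source            = "./modules/resource_groups"
  primary_rg_name   = var.primary_rg_name
  secondary_rg_name = var.secondary_rg_name
  location          = var.location
}

# modules/resource_groups/rg_main.tf — the two RGs themselves
resource "azurerm_resource_group" "primary" {
  name     = var.primary_rg_name
  location = var.location
}

resource "azurerm_resource_group" "secondary" {
  name     = var.secondary_rg_name
  location = var.location
}
```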
- Both resource group names will be used in other modules, hence `rg_outputs.tf` holds these output definitions
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands as frequently as possible
- You should now be able to see the two new resource groups in your Azure Portal
2. Create a VNet and a Subnet in each Resource Group
- If not created already, create a folder `modules\virtual_networks` under `Vnet_Peering_Lab` and create the files `vnet_main.tf`, `vnet_variables.tf` and `vnet_outputs.tf`
- Let’s call the `virtual_networks` module from `main.tf` in the root folder and pass the required variables into that module like below
- Take a look at `depends_on` and how it is used in this section. Before we deploy the VNets and Subnets into the RGs, the RGs must already exist in the Portal, and that condition fulfils the need below
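A sketch of how the module call can express that ordering (the variable list is abbreviated and illustrative):

```hcl
# Root main.tf — the virtual_networks module must wait for the RGs
module "virtual_networks" {
  source = "./modules/virtual_networks"

  primary_rg_name            = module.resource_groups.primary_rg_name
  secondary_rg_name          = module.resource_groups.secondary_rg_name
  location                   = var.location
  primary_vnet_address_space = var.primary_vnet_address_space
  # ...remaining variables elided for brevity

  # VNets/Subnets can only be deployed into RGs that already exist
  depends_on = [module.resource_groups]
}
```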
In the above, we are also creating a Public IP, which will be associated with the LinuxVM [we will deploy that in a later section] for public access.
- Create a Network Interface [NIC] and associate the Public IP with it, so that it can be attached to the LinuxVM once we deploy it
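A sketch of the Public IP and NIC association (resource and subnet names are illustrative):

```hcl
# modules/virtual_networks/vnet_main.tf — sketch

# Public IP for external access to the LinuxVM
resource "azurerm_public_ip" "linux_vm_pip" {
  name                = "linuxvm-pip"
  resource_group_name = var.primary_rg_name
  location            = var.location
  allocation_method   = "Static"
}

# NIC with the Public IP associated; attached to the LinuxVM later
resource "azurerm_network_interface" "linux_vm_nic" {
  name                = "linuxvm-nic"
  resource_group_name = var.primary_rg_name
  location            = var.location

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.primary.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.linux_vm_pip.id
  }
}
```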
- Create the VNet and Subnet for the Secondary Resource Group, along with a NIC, which will be attached to the WindowsVM [we will deploy that in a later section] for internal access
- Now it’s time to output the values as references from the `virtual_networks` module through `vnet_outputs.tf` like below. These values will be used in the other applicable modules in our next sections
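The outputs might look something like this sketch (output names are illustrative and assume the resource names used above):

```hcl
# modules/virtual_networks/vnet_outputs.tf — sketch
output "primary_vnet_id" {
  value = azurerm_virtual_network.primary.id
}

output "secondary_vnet_id" {
  value = azurerm_virtual_network.secondary.id
}

output "linux_vm_nic_id" {
  value = azurerm_network_interface.linux_vm_nic.id
}

output "windows_vm_nic_id" {
  value = azurerm_network_interface.windows_vm_nic.id
}
```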
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands as frequently as possible
- You should be able to see the VNets and Subnets in the respective RGs in the portal
3. Implement VNet Peering between the two Virtual Networks
- If not created already, create a folder `modules\virtual_network_peering` under `Vnet_Peering_Lab` and create the files `peering_main.tf`, `peering_variables.tf` and `peering_outputs.tf`
- Let’s call the `virtual_network_peering` module from `main.tf` in the root folder and pass the required variables into that module like below
- Again, observe `depends_on` and how we are making use of it here
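Peering is directional, so the module needs one resource per direction. A sketch (variable names are illustrative):

```hcl
# modules/virtual_network_peering/peering_main.tf — sketch
resource "azurerm_virtual_network_peering" "primary_to_secondary" {
  name                      = "primary-to-secondary"
  resource_group_name       = var.primary_rg_name
  virtual_network_name      = var.primary_vnet_name
  remote_virtual_network_id = var.secondary_vnet_id
}

resource "azurerm_virtual_network_peering" "secondary_to_primary" {
  name                      = "secondary-to-primary"
  resource_group_name       = var.secondary_rg_name
  virtual_network_name      = var.secondary_vnet_name
  remote_virtual_network_id = var.primary_vnet_id
}
```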
- We don’t really have anything from the `virtual_network_peering` module to output for consumption in other modules. Hence, `peering_outputs.tf` is blank and just sits there for the sake of the Terraform module structure
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands as frequently as possible
- You should be able to see the successful peering connection between the two VNets in the Azure Portal like below
4. Use Azure Key Vault and create Secrets for the LinuxVM and the WindowsVM
- If not created already, create a folder `modules\az_key_vault` under `Vnet_Peering_Lab` and create the files `kv_main.tf`, `kv_variables.tf` and `kv_outputs.tf`
- Let’s call the `az_key_vault` module from `main.tf` in the root folder and pass the required variables into that module like below.
- Under `kv_main.tf`, we create an Azure Key Vault with `key`, `secret` and `storage` permissions. As this is a development/test environment, we’ve granted maximum permissions; in an ideal situation these would be trimmed down to the least privileges possible, to limit lateral movement and unnecessary exposure
- Continuing further, let’s create the secrets which can then be fed as the `admin_password` to access both the VMs [the LinuxVM and the WindowsVM]
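A sketch of the vault, access policy and one secret follows. The use of `random_password` is an assumption for illustration (the repo may source the secret values differently), and the permission lists are illustrative Dev/Test-only breadth:

```hcl
# modules/az_key_vault/kv_main.tf — sketch
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "kv" {
  name                = var.key_vault_name
  resource_group_name = var.primary_rg_name
  location            = var.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    # Broad permissions for Dev/Test only; trim to least privilege in production
    key_permissions     = ["Get", "List", "Create", "Delete", "Purge"]
    secret_permissions  = ["Get", "List", "Set", "Delete", "Purge"]
    storage_permissions = ["Get", "List", "Set", "Delete"]
  }
}

# Assumed approach: generate a password and store it as a secret,
# later fed to the LinuxVM as admin_password
resource "random_password" "linux_vm" {
  length  = 16
  special = true
}

resource "azurerm_key_vault_secret" "linux_vm_pswd" {
  name         = "linuxVM-pswd"
  value        = random_password.linux_vm.result
  key_vault_id = azurerm_key_vault.kv.id
}
```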
- Let’s output the created secrets and also the vault URI, so that the secrets can be consumed by the `vm` module in the upcoming sections
Let’s understand a bit about the `key_vault` section under `provider.tf`, as we pointed out in an earlier section of this blog:
- `purge_soft_delete_on_destroy = true` → permanently purges the key vault when it is destroyed, enabling it to be fully deleted
- `recover_soft_deleted_key_vaults = false` → disables the recovery of soft-deleted key vaults
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands as frequently as possible
- You should be able to see the Azure Key Vault and the Secrets under it in the Azure Portal like below
- As a sample, see the detailed view of one of the secrets in the portal. Ideally, we should set an expiration date, as we don’t want secrets lying around for an infinite time. But since this is a Dev/Test environment and we will `terraform destroy` the resources immediately after our successful deployment, this is ignored for now. Keep a note of this behaviour
5. Deploy LinuxVM [in Primary RG] and WindowsVM [in Secondary RG]
- If not created already, create a folder `modules\vm` under `Vnet_Peering_Lab` and create the files `vm_main.tf`, `vm_variables.tf` and `vm_outputs.tf`
- Let’s call the `vm` module from `main.tf` in the root folder and pass the required variables into that module like below
- It’s time to create our LinuxVM in the Primary RG. Observe how the `network_interface_ids` consumed here were outputs from the `virtual_networks` module. Similarly, observe the same for the WindowsVM too
- The code for `admin_ssh_key` is commented out, as we are now using the secret from the Azure Key Vault as the `admin_password`
- Also make a note of the comment under the `disable_password_authentication` section and why we turned it to `false` now
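A sketch of the LinuxVM resource showing those three points. The size, image reference and variable names are illustrative assumptions:

```hcl
# modules/vm/vm_main.tf — sketch
resource "azurerm_linux_virtual_machine" "linux_vm" {
  name                = "LinuxVM"
  resource_group_name = var.primary_rg_name
  location            = var.location
  size                = "Standard_B1s" # illustrative size
  admin_username      = "adminuser"

  # NIC id is the output we exported from the virtual_networks module
  network_interface_ids = [var.linux_vm_nic_id]

  # admin_ssh_key commented out — we use the Key Vault secret instead
  # admin_ssh_key {
  #   username   = "adminuser"
  #   public_key = file("~/.ssh/id_rsa.pub")
  # }

  # Must be false so password login with the Key Vault secret works
  disable_password_authentication = false
  admin_password                  = var.linux_vm_secret

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```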
- Let’s create our WindowsVM in the Secondary RG
- In order to allow ICMP connections to our WindowsVM, we need to disable the Windows Firewall on the VM. We use a VM Extension to do so. Have a look at the code snippet below under `vm_main.tf`
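One way to express that extension is sketched below, using the Custom Script Extension to run a PowerShell command. The extension name and resource names are illustrative:

```hcl
# modules/vm/vm_main.tf — sketch: disable the Windows Firewall so the
# WindowsVM answers ICMP pings from the peered network
resource "azurerm_virtual_machine_extension" "disable_firewall" {
  name                 = "disable-windows-firewall"
  virtual_machine_id   = azurerm_windows_virtual_machine.windows_vm.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = jsonencode({
    commandToExecute = "powershell -Command \"Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False\""
  })
}
```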
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands as frequently as possible
- You should now be able to see the LinuxVM and the WindowsVM under the respective RGs in the Azure Portal
I think we’ve come a good way in building the infrastructure we need through Terraform. Take a couple of minutes’ break, and then delve deep into the Azure Portal to see how your resources show up, to understand the configuration of each and how they are tied together through the applicable parameters.
It’s a good time now for a break to relax before we implement the traffic rules.
Let’s get back to business.
6. Define Traffic Rules for the LinuxVM and WindowsVM
- The objective is to:
— Limit and allow Internet access to the LinuxVM through SSH only
— Limit and allow ICMP access to the WindowsVM from the LinuxVM, and RDP access from the peered network only
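Those objectives map naturally onto two Network Security Groups. A sketch (rule names, priorities and variable names are illustrative):

```hcl
# modules/traffic_rules/rules_main.tf — sketch

# NSG for the LinuxVM subnet: allow SSH from the Internet
resource "azurerm_network_security_group" "linux_nsg" {
  name                = "linux-nsg"
  resource_group_name = var.primary_rg_name
  location            = var.location

  security_rule {
    name                       = "Allow-SSH-Internet"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# NSG for the WindowsVM subnet: ICMP and RDP from the peered VNet only
resource "azurerm_network_security_group" "windows_nsg" {
  name                = "windows-nsg"
  resource_group_name = var.secondary_rg_name
  location            = var.location

  security_rule {
    name                       = "Allow-ICMP-Peered"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Icmp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = var.primary_vnet_cidr
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "Allow-RDP-Peered"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = var.primary_vnet_cidr
    destination_address_prefix = "*"
  }
}
```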
- If not created already, create a folder `modules\traffic_rules` under `Vnet_Peering_Lab` and create the files `rules_main.tf`, `rules_variables.tf` and `rules_outputs.tf`
- Let’s call the `traffic_rules` module from `main.tf` in the root folder and pass the required variables into that module like below
- We don’t really have anything from the `traffic_rules` module to output for consumption in other modules. Hence, `rules_outputs.tf` is blank and just sits there for the sake of the Terraform module structure
- Now perform `terraform init`, `terraform plan` and `terraform apply --auto-approve`, each after the previous command has executed successfully. I encourage you to also use the `terraform validate` and `terraform fmt` commands, and fix any errors by following the blog carefully
With this, the deployment of resources is complete and can be observed in your Azure Portal.
Let’s now test the access to our LinuxVM and WindowsVM and confirm the behaviour
— From a terminal on your local machine, perform `ssh adminuser@LinuxVMPubIP` [use the LinuxVM Public IP appearing in your Azure Portal in place of `LinuxVMPubIP`] and use the `linuxVM-pswd` secret from the portal as the `admin_password` to log in to the LinuxVM. You should be able to log in successfully
— Ping the WindowsVM from the LinuxVM. Note down the WindowsVM private IP address from your Azure Portal and use `ping WindowsVMPrivateIP` to check whether it is working properly, as shown below
— Try to RDP into this WindowsVM from your local machine using any of the RDP tools. You’ll be unsuccessful, as the WindowsVM is expected to accept only requests from resources in the peered network
Finally, perform `terraform destroy --auto-approve` to destroy all the resources in the Azure Portal and stop consuming the $.
That’s it, and you are now done with this scenario :)
Full code can be accessed from here — https://github.com/ramakb/VNet_Peering_Lab
Hope you find this information helpful.
Thanks for taking time to read!