Use KeyVault and Implement VNet Peering in Azure with Terraform

BRK0018
11 min read · Jun 13, 2022

In my previous blog, we deployed a very simple scenario using Terraform.

In this blog, let’s take things a step further: we’ll implement VNet Peering between two virtual networks, and we’ll use Azure Key Vault to access our Virtual Machines through secrets rather than keeping the credentials in our code and passing them in through variable definitions, which is NOT an ideal way of restricting access to resources

What does our Architecture look like, and what do we need to build?

Below is a pictorial representation of our goal; keep it in mind as we progress through the implementation

A picture in mind always helps ;)

VNet Peering and KeyVault

In order to achieve the above, we first need to break things down into small pieces. Let’s split the work into steps as small as possible, like below. [Each of these steps could be broken down even further, but for simplicity I’ve limited it to 10.]

Steps to implement the Architecture shown above

Let’s begin the fun and make our hands a bit more dirty than before :)

Build the basic Terraform Folder Structure

  • Create a folder named Vnet_Peering_Lab and make sure to follow and implement the ‘Pre-requisites’, 2nd and 3rd sections of my previous blog before proceeding to the next steps
  • Configure backend.tf in that folder accordingly; it is used to store the Terraform State remotely in the respective Azure Storage Account/Container
backend.tf
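The backend configuration appeared as an image in the original post; a minimal sketch might look like the following (the resource group, storage account, container and key names here are placeholders, so substitute the ones you created in the prerequisites):

```hcl
# backend.tf: store Terraform state remotely in an Azure Storage container
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"          # assumed name
    storage_account_name = "tfstatestorageacct"  # assumed name
    container_name       = "tfstate"             # assumed name
    key                  = "vnet_peering_lab.tfstate"
  }
}
```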
  • Create provider.tf in the same folder with the below code, which configures infrastructure on the Microsoft Azure platform. Ignore the key_vault section for now; I’ll explain its use when we reach the ‘Key Vault’ implementation section further below in this blog
provider.tf
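Since the provider file was shown as an image, here is a sketch of what it likely contains; the key_vault feature flags are standard azurerm provider options and are explained later in the Key Vault section:

```hcl
# provider.tf: configure the azurerm provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # assumed version constraint
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      # Explained in the Key Vault section below
      purge_soft_delete_on_destroy    = true
      recover_soft_deleted_key_vaults = false
    }
  }
}
```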
  • Create variables.auto.tfvars with the default declared values like below. We define address spaces for the VNets and Subnets of both the Primary and Secondary Resource Groups. firewall_allocation_method and firewall_sku will be consumed when building the virtual_networks in later sections
variables.auto.tfvars
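The exact variable names are not visible without the original screenshot; a plausible sketch, with names and CIDR ranges assumed, might be:

```hcl
# variables.auto.tfvars: default values (names and ranges are assumptions)
primary_rg_name       = "primary-rg"
primary_location      = "eastus"
primary_vnet_cidr     = ["10.0.0.0/16"]
primary_subnet_cidr   = ["10.0.1.0/24"]

secondary_rg_name     = "secondary-rg"
secondary_location    = "westus"
secondary_vnet_cidr   = ["10.1.0.0/16"]
secondary_subnet_cidr = ["10.1.1.0/24"]

# Consumed when building the virtual_networks resources later
firewall_allocation_method = "Static"
firewall_sku               = "Standard"
```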
  • Create variables.tf like below
variables.tf
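Matching declarations for the values above might look like this sketch (again, the variable names are assumptions):

```hcl
# variables.tf: declarations matching variables.auto.tfvars
variable "primary_rg_name" { type = string }
variable "primary_location" { type = string }
variable "primary_vnet_cidr" { type = list(string) }
variable "primary_subnet_cidr" { type = list(string) }

variable "secondary_rg_name" { type = string }
variable "secondary_location" { type = string }
variable "secondary_vnet_cidr" { type = list(string) }
variable "secondary_subnet_cidr" { type = list(string) }

variable "firewall_allocation_method" { type = string }
variable "firewall_sku" { type = string }
```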
1. Create two Resource Groups

  • Let’s create the Primary and Secondary Resource Groups by defining them in the resource_groups module. Create a folder modules\resource_groups under Vnet_Peering_Lab and create the files rg_main.tf, rg_variables.tf and rg_outputs.tf
  • Your folder structure should look similar to the below. You can ignore the other folders for now, or feel free to create them along with dummy files as shown. Eventually our entire Vnet_Peering_Lab will contain all of these files, and you’ll be writing them as we go along :)
VNet_Peering_Lab folder structure
  • Let’s create the Primary and Secondary Resource Groups by calling the resource_groups module from main.tf in the root folder and passing the required variables into the module like below
main.tf Snippet_1
rg_variables.tf
rg_main.tf
rg_outputs.tf
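The module call and module files were shown as images; a minimal sketch, with resource and variable names assumed (rg_variables.tf would simply declare the four variables used here), might look like this:

```hcl
# main.tf (root): calling the resource_groups module
module "resource_groups" {
  source             = "./modules/resource_groups"
  primary_rg_name    = var.primary_rg_name
  primary_location   = var.primary_location
  secondary_rg_name  = var.secondary_rg_name
  secondary_location = var.secondary_location
}

# modules/resource_groups/rg_main.tf
resource "azurerm_resource_group" "primary" {
  name     = var.primary_rg_name
  location = var.primary_location
}

resource "azurerm_resource_group" "secondary" {
  name     = var.secondary_rg_name
  location = var.secondary_location
}

# modules/resource_groups/rg_outputs.tf: names consumed by other modules
output "primary_rg_name" {
  value = azurerm_resource_group.primary.name
}

output "secondary_rg_name" {
  value = azurerm_resource_group.secondary.name
}
```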
  • Both the resource group names will be used in other modules, hence rg_outputs.tf has these definitions
  • Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I also encourage you to use terraform validate and terraform fmt as frequently as possible
  • You should now be able to see the two new resource groups in your Azure Portal

2. Create a VNet and a Subnet in each Resource Group

  • If not created already, create a folder modules\virtual_networks under Vnet_Peering_Lab and create the files vnet_main.tf, vnet_variables.tf and vnet_outputs.tf
  • Let’s call the virtual_networks module from main.tf in the root folder and pass the required variables into the module like below
  • Take a look at depends_on and how it is used in this section. Before we deploy the VNets and Subnets into the RGs, the RGs must already exist in the Portal, and depends_on enforces exactly that condition
main.tf Snippet_2
vnet_variables.tf
vnet_main Snippet_1.tf

In the above, we also create a Public IP which will be associated with the LinuxVM [we will deploy that in a later section] for public access

  • Create a Network Interface [NIC] and associate the Public IP with it so that it can be attached to the LinuxVM once we deploy it
vnet_main Snippet_2.tf
  • Create the VNet and Subnet for the Secondary Resource Group, along with a NIC which will be attached to the WindowsVM [we will deploy that in a later section] for internal access
vnet_main Snippet_3.tf
  • Now it’s time to output values from the virtual_networks module through vnet_outputs.tf like below. These values will be used by the applicable modules in the next sections
vnet_outputs.tf
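An abridged sketch of the Primary side of this module, with resource names assumed (the Secondary VNet, Subnet and NIC follow the same pattern without the Public IP association):

```hcl
# modules/virtual_networks/vnet_main.tf (abridged sketch, Primary side only)
resource "azurerm_virtual_network" "primary" {
  name                = "primary-vnet" # assumed name
  location            = var.primary_location
  resource_group_name = var.primary_rg_name
  address_space       = var.primary_vnet_cidr
}

resource "azurerm_subnet" "primary" {
  name                 = "primary-subnet" # assumed name
  resource_group_name  = var.primary_rg_name
  virtual_network_name = azurerm_virtual_network.primary.name
  address_prefixes     = var.primary_subnet_cidr
}

# Public IP for the LinuxVM, built from the firewall_* variables
resource "azurerm_public_ip" "linuxvm" {
  name                = "linuxvm-pip" # assumed name
  location            = var.primary_location
  resource_group_name = var.primary_rg_name
  allocation_method   = var.firewall_allocation_method
  sku                 = var.firewall_sku
}

# NIC with the Public IP associated, to be attached to the LinuxVM
resource "azurerm_network_interface" "linuxvm" {
  name                = "linuxvm-nic" # assumed name
  location            = var.primary_location
  resource_group_name = var.primary_rg_name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.primary.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.linuxvm.id
  }
}

# modules/virtual_networks/vnet_outputs.tf: consumed by later modules
output "primary_vnet_id" { value = azurerm_virtual_network.primary.id }
output "primary_vnet_name" { value = azurerm_virtual_network.primary.name }
output "linuxvm_nic_id" { value = azurerm_network_interface.linuxvm.id }
```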
  • Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I also encourage you to use terraform validate and terraform fmt as frequently as possible
  • You should be able to see the VNets and Subnets in respective RGs in the portal

3. Implement VNet Peering between the two Virtual Networks

  • If not created already, create a folder modules\virtual_network_peering under Vnet_Peering_Lab and create the files peering_main.tf, peering_variables.tf and peering_outputs.tf
  • Let’s call the virtual_network_peering module from main.tf in the root folder and pass the required variables into the module like below
  • Again, observe depends_on and how we make use of it here
main.tf Snippet_3
peering_variables.tf
peering_main.tf
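Since the peering code was an image, here is a sketch of what it likely contains; peering must be created in both directions, and the variable names here are assumptions fed from the virtual_networks module outputs:

```hcl
# modules/virtual_network_peering/peering_main.tf (sketch)
resource "azurerm_virtual_network_peering" "primary_to_secondary" {
  name                         = "peer-vnet1-to-vnet2" # assumed name
  resource_group_name          = var.primary_rg_name
  virtual_network_name         = var.primary_vnet_name
  remote_virtual_network_id    = var.secondary_vnet_id
  allow_virtual_network_access = true
}

resource "azurerm_virtual_network_peering" "secondary_to_primary" {
  name                         = "peer-vnet2-to-vnet1" # assumed name
  resource_group_name          = var.secondary_rg_name
  virtual_network_name         = var.secondary_vnet_name
  remote_virtual_network_id    = var.primary_vnet_id
  allow_virtual_network_access = true
}
```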
  • We don’t have anything from the virtual_network_peering module that other modules need to consume, so peering_outputs.tf is left blank; it exists purely to keep the module structure consistent
  • Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I also encourage you to use terraform validate and terraform fmt as frequently as possible
  • You should be able to see the successful peering connection between the two VNets in the Azure Portal like below
Peering Vnet1 to Vnet2
Peering Vnet2 to Vnet1

4. Use Azure Key Vault and create Secrets for the LinuxVM and the WindowsVM

  • If not created already, create a folder modules\az_key_vault under Vnet_Peering_Lab and create the files kv_main.tf, kv_variables.tf and kv_outputs.tf
  • Let’s call the az_key_vault module from main.tf in the root folder and pass the required variables into the module like below
main.tf Snippet_4
kv_variables.tf
kv_main Snippet_1
  • Under kv_main.tf, we create an Azure Key Vault with key, secret and storage permissions. Since this is a development/test environment we’ve granted broad permissions, but in production these should be reduced to the least permissions possible, to limit lateral movement and unnecessary exposure
  • Continuing further, let’s create secrets which will then be fed in as the admin_password to access both VMs [the LinuxVM and the WindowsVM]
kv_main Snippet_2
  • Let’s output the secrets we created, along with the Key Vault URI, so they can be consumed by the vm module in the upcoming sections
kv_outputs.tf
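A sketch of the Key Vault module, with the vault name, secret name and permission lists assumed (random_password additionally requires the hashicorp/random provider):

```hcl
# modules/az_key_vault/kv_main.tf (sketch)
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "this" {
  name                = "kv-vnet-peering-lab" # assumed; must be globally unique
  location            = var.primary_location
  resource_group_name = var.primary_rg_name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  # Broad dev/test permissions; restrict these in production
  access_policy {
    tenant_id           = data.azurerm_client_config.current.tenant_id
    object_id           = data.azurerm_client_config.current.object_id
    key_permissions     = ["Get", "List", "Create", "Delete", "Purge"]
    secret_permissions  = ["Get", "List", "Set", "Delete", "Purge", "Recover"]
    storage_permissions = ["Get", "List", "Set", "Delete"]
  }
}

# Generate and store a secret to be used as the LinuxVM admin_password
# (a second secret for the WindowsVM would follow the same pattern)
resource "random_password" "linuxvm" {
  length  = 16
  special = true
}

resource "azurerm_key_vault_secret" "linuxvm" {
  name         = "linuxVM-pswd"
  value        = random_password.linuxvm.result
  key_vault_id = azurerm_key_vault.this.id
}

# modules/az_key_vault/kv_outputs.tf
output "key_vault_uri" { value = azurerm_key_vault.this.vault_uri }
output "linuxvm_password" {
  value     = azurerm_key_vault_secret.linuxvm.value
  sensitive = true
}
```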

Let’s now understand a bit about the key_vault section under provider.tf, which we pointed out earlier in this blog

provider.tf
  • purge_soft_delete_on_destroy = true → permanently purges the key vault when it is destroyed, instead of leaving it in a soft-deleted state
  • recover_soft_deleted_key_vaults = false → disables recovery of soft-deleted key vaults when creating a vault with the same name
  • Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I also encourage you to use terraform validate and terraform fmt as frequently as possible
  • You should be able to see the Azure Key Vault and the Secrets under it in the Azure Portal like below
Azure Key Vault and Secrets in the Azure Portal
  • As a sample, see the detailed view of one of the secrets in the portal. Ideally we should set an Expiration date, as we don’t want secrets lying around indefinitely. But since this is a Dev/Test environment and we will terraform destroy the resources immediately after a successful deployment, this is skipped for now. Keep a note of this behaviour

5. Deploy LinuxVM [in Primary RG] and WindowsVM [in Secondary RG]

  • If not created already, create a folder modules\vm under Vnet_Peering_Lab and create the files vm_main.tf, vm_variables.tf and vm_outputs.tf
  • Let’s call the vm module from main.tf in the root folder and pass the required variables into the module like below
main.tf Snippet_5
vm_variables.tf
  • It’s time to create our LinuxVM in the Primary RG. Observe how the network_interface_ids consumed here were outputs from the virtual_networks module. Similarly, observe the same for the WindowsVM
  • We commented out the code for admin_ssh_key, as we now use the Secret from Azure Key Vault as the admin_password
  • Also note the comment under the disable_password_authentication section and why we set it to false
vm_main.tf Snippet_1
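A sketch of the LinuxVM resource under the stated assumptions: the VM size, image and variable names are my own, and the admin_password is wired from the az_key_vault module outputs:

```hcl
# modules/vm/vm_main.tf (sketch): LinuxVM in the Primary RG
resource "azurerm_linux_virtual_machine" "linuxvm" {
  name                = "LinuxVM"
  resource_group_name = var.primary_rg_name
  location            = var.primary_location
  size                = "Standard_B1s" # assumed size
  admin_username      = "adminuser"

  # admin_ssh_key block removed: the Key Vault secret is the admin_password
  admin_password = var.linuxvm_admin_password # output from az_key_vault module

  # Must be false so the VM accepts the password above instead of an SSH key
  disable_password_authentication = false

  # Output from the virtual_networks module
  network_interface_ids = [var.linuxvm_nic_id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference { # assumed image
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```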
  • Let’s create our WindowsVM in the Secondary RG
vm_main.tf Snippet_2
  • In order to allow ICMP connections to our WindowsVM, we need to disable the Windows Firewall on the VM. We use a VM Extension to do so; have a look at the code snippet below under vm_main.tf
vm_main.tf Snippet_3
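One way to disable the Windows Firewall via a VM extension is the Custom Script Extension running a netsh command; this is a sketch of that approach, not necessarily the exact extension the original post used:

```hcl
# VM extension sketch: disable the Windows Firewall on the WindowsVM
resource "azurerm_virtual_machine_extension" "disable_firewall" {
  name                 = "disable-windows-firewall"
  virtual_machine_id   = azurerm_windows_virtual_machine.windowsvm.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = jsonencode({
    # Turns the firewall off for all profiles so ICMP pings succeed
    commandToExecute = "netsh advfirewall set allprofiles state off"
  })
}
```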
vm_outputs.tf
  • Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I also encourage you to use terraform validate and terraform fmt as frequently as possible
  • You should be able to see the LinuxVM and the WindowsVM under the respective RGs in the Azure Portal now

I think we’ve come a good way in building the infrastructure we need through Terraform. Take a couple of minutes’ break, then delve into the Azure Portal to see how your resources show up, to understand their configurations and how they are tied together through the applicable parameters

Now is a good time for a break before we implement the traffic rules

Chai Time ;)

Let’s get back to business

6. Define Traffic Rules for the LinuxVM and WindowsVM

  • The objectives are to

— Allow Internet access to the LinuxVM through SSH only

— Allow ICMP access to the WindowsVM from the LinuxVM, and RDP from the peered network only

  • If not created already, create a folder modules\traffic_rules under Vnet_Peering_Lab and create the files rules_main.tf, rules_variables.tf and rules_outputs.tf
  • Let’s call the traffic_rules module from main.tf in the root folder and pass the required variables into the module like below
main.tf Snippet_6
rules_variables.tf
rules_main.tf Snippet_1
rules_main.tf Snippet_2
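A sketch of NSG rules meeting the objectives above; the rule names, priorities and the use of the VirtualNetwork service tag (which covers peered VNets) are my assumptions:

```hcl
# modules/traffic_rules/rules_main.tf (sketch): allow SSH to the LinuxVM
resource "azurerm_network_security_group" "linuxvm" {
  name                = "linuxvm-nsg" # assumed name
  location            = var.primary_location
  resource_group_name = var.primary_rg_name

  security_rule {
    name                       = "Allow-SSH-Inbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# ICMP and RDP to the WindowsVM, restricted to the (peered) virtual network
resource "azurerm_network_security_group" "windowsvm" {
  name                = "windowsvm-nsg" # assumed name
  location            = var.secondary_location
  resource_group_name = var.secondary_rg_name

  security_rule {
    name                       = "Allow-ICMP-From-VNet"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Icmp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "VirtualNetwork" # includes peered VNets
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "Allow-RDP-From-VNet"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "VirtualNetwork"
    destination_address_prefix = "*"
  }
}

# Attach each NSG to its subnet
resource "azurerm_subnet_network_security_group_association" "primary" {
  subnet_id                 = var.primary_subnet_id
  network_security_group_id = azurerm_network_security_group.linuxvm.id
}

resource "azurerm_subnet_network_security_group_association" "secondary" {
  subnet_id                 = var.secondary_subnet_id
  network_security_group_id = azurerm_network_security_group.windowsvm.id
}
```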
  • We don’t have anything from the traffic_rules module that other modules need to consume, so rules_outputs.tf is left blank; it exists purely to keep the module structure consistent

Now run terraform init, terraform plan and terraform apply --auto-approve, running each command only after the previous one completes successfully. I encourage you to also use terraform validate and terraform fmt, and to fix any errors by following the blog carefully

With this, the deployment of resources is complete and can be observed in your Azure Portal

Primary Resource Group
Secondary Resource Group

Let’s now test the access to our LinuxVM and WindowsVM and confirm the behaviour

— From a terminal on your local machine, run ssh adminuser@LinuxVMPubIP [substitute the LinuxVM Public IP shown in your Azure Portal for LinuxVMPubIP] and use the linuxVM-pswd value from the portal as the admin_password to log in to the LinuxVM. You should be able to log in successfully

SSH in to LinuxVM from the local machine

— Ping the WindowsVM from the LinuxVM: note down the WindowsVM Private IP address from your Azure Portal and run ping WindowsVMPrivateIP to check that it works, as shown below

— Try to RDP into this WindowsVM from your local machine using any RDP tool. You will be unsuccessful, as the WindowsVM is expected to accept requests only from resources on the Peered Network

Finally, perform terraform destroy --auto-approve to destroy all the resources in the Azure Portal and avoid incurring further costs

That’s it! You are now done with this scenario :)

Full code can be accessed from here — https://github.com/ramakb/VNet_Peering_Lab

Hope you find this information helpful.

Thanks for taking time to read!
