Deploy Linux VM with Terraform on Azure with NSG Rule and Boot Diagnostics

BRK0018
8 min read · Jun 5, 2022

As I started playing with Azure using Terraform, I thought of sharing my hands-on learning through a blog about how to create and deploy a simple Linux VM using Terraform. Along with it, we’ll also control the traffic to that VM through a simple NSG rule and associate it with the subnet in which the VM lives - that way we protect the VM rather than opening all the ports to public access. We’ll also use a storage account/container to store the boot diagnostics of the VM

To be frank, there’s nothing fancy in this blog, as the intention is to begin with some basic fundamentals

I’ll try to build more complex modules in the upcoming blogs to get into more advanced levels. Fingers crossed!

Let’s begin our journey of “Basics with Terraform & Azure”

  • Pre-requisites
  1. An Azure subscription is required, and the reader is expected to have an understanding of the Azure platform, Virtual Networks and Network Security Group fundamentals. A general idea can be gathered from here
  2. Terraform is installed on your local machine. Follow this for guidelines
  3. In the Azure Portal, manually:
  • Create a Service Principal and ensure it is given the Contributor role on your subscription through Access Control (IAM)
  • Create a Client Secret and ensure you copy and store the secret value
  • Use the setx command to store the SubscriptionID, TenantID and ClientID values as system environment variables rather than passing or referencing them in your variables.tf file

Example: setx SubscriptionID "yoursubscriptionID". Please refer here for more information

2. In the Azure Portal, manually:

  • Create the Resource Group, Storage Account and a Container, which will primarily be used for the backend TFState file

3. On your local machine:

  • Use any of your favourite code editors, VS Code for example
  • Create a folder with a name of your choice. In my example I’ve used Exercise-3 as the folder name
  • Configure backend.tf accordingly in the same folder; it is used to store the Terraform state remotely in the respective Azure Storage Account/Container. Refer to this to understand more about the Terraform backend. Sample code is given below
backend.tf
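As a minimal sketch, backend.tf could look like the below. The resource group, storage account, container and key names here are placeholders; use the ones you created manually in the portal.

terraform {
  backend "azurerm" {
    # Placeholder names - replace with the resource group, storage account
    # and container you created manually for the remote state
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "exercise-3.terraform.tfstate"
  }
}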
  • Create provider.tf in the same folder with the below code, which helps configure infrastructure on the Microsoft Azure platform using the Azure Resource Manager APIs. Please refer here for more information
provider.tf
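A minimal provider.tf sketch, assuming the credentials are supplied through environment variables (the azurerm provider reads ARM_SUBSCRIPTION_ID, ARM_TENANT_ID, ARM_CLIENT_ID and ARM_CLIENT_SECRET) rather than being hard-coded in the file:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  # Credentials are picked up from the ARM_* environment variables,
  # so nothing sensitive needs to live in this file
  features {}
}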
  • Create variables.auto.tfvars with the declared default values like below. The same can be defined in variables.tf as well, but there are a few advantages to having them in a *.tfvars file. Please refer here for a simple clarification of the difference between variables.tf and *.tfvars
  • In the below, `vnet-address-space` and `subnet-1-address-space` are used for the Virtual Network and the Subnet in which our Linux VM is going to be deployed
variables.auto.tfvars
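A sketch of what the file could contain; the region and the address ranges below are just example values:

location               = "East US"
vnet-address-space     = ["10.0.0.0/16"]
subnet-1-address-space = ["10.0.1.0/24"]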
  • Create variables.tf like below. I’ve used a random_ID here; the intention is to attach this ID to the Resource Group name and the Linux VM name which we are going to create in the next steps. Use of this random_ID is optional though
variables.tf
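A sketch of variables.tf along these lines, with the random suffix coming from the random_id resource of the hashicorp/random provider (the exact variable names and byte length are my assumptions):

variable "location" {
  type        = string
  description = "Azure region where all resources are deployed"
  default     = "East US"
}

variable "vnet-address-space" {
  type        = list(string)
  description = "Address space for the Virtual Network"
}

variable "subnet-1-address-space" {
  type        = list(string)
  description = "Address prefix for the subnet hosting the Linux VM"
}

# Random suffix attached to the Resource Group and Linux VM names
resource "random_id" "random_ID" {
  byte_length = 4
}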
  • Let’s first create the Resource Group with the below code in main.tf. Take a look at how I’m using the random_ID to attach it to the Resource Group name to bring uniqueness to the naming convention
main.tf snippet-1
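A sketch of that first snippet; the naming pattern is my assumption, the key point is the random_id.random_ID.hex suffix on the name:

resource "azurerm_resource_group" "rg" {
  # Random hex suffix keeps the Resource Group name unique
  name     = "rg-exercise3-${random_id.random_ID.hex}"
  location = var.location
}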

After saving all these files, open the terminal in VS Code and perform the below actions to initialize and execute your Terraform project code

terraform init and observe the changes in the folder ‘Exercise-3’

terraform plan and observe the console output to understand what will be created

terraform apply --auto-approve

After this is successfully executed, open the Azure Portal and see the changes. You should be able to see the new Resource Group listed

Please refer here for more insights into the purpose of each of the above commands. In addition to them, terraform fmt and terraform validate are pretty useful too. Explore them and understand how they help

4. Now, it’s time to do a little more, like creating the Virtual Network and Subnet and actually deploying the Virtual Machine into that Resource Group. I’m using the concept of Terraform modules here. If you’ve never explored this concept, I highly encourage you to read this blog

  • Create a sub-folder structure like below under Exercise-3, where we use separate modules for each of the sections: virtual_networks, vm, traffic_rules and storage_accounts
  • Under the virtual_networks module, create the files as shown below
Exercise-3 folder view with modules in it
  • In your main.tf, let’s reference this virtual_networks module and pass the required variables used for creating the virtual network. I’ve used time_sleep and also depends_on, which are self-explanatory, but feel free to explore how they help in this scenario
main.tf snippet-2
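A sketch of how that module call could look, assuming the module folder sits directly under Exercise-3 and the input names match the variables declared inside the module; time_sleep comes from the hashicorp/time provider:

# Small pause after the Resource Group is created before the VNet work starts
resource "time_sleep" "wait_after_rg" {
  depends_on      = [azurerm_resource_group.rg]
  create_duration = "30s"
}

module "virtual_networks" {
  source = "./virtual_networks"

  resource_group_name    = azurerm_resource_group.rg.name
  location               = azurerm_resource_group.rg.location
  vnet-address-space     = var.vnet-address-space
  subnet-1-address-space = var.subnet-1-address-space

  depends_on = [time_sleep.wait_after_rg]
}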

See below the different .tf files under the virtual_networks module. The outputs in vnet_outputs.tf will be consumed by other modules such as traffic_rules, as needed in the following sections

vnet_main.tf
vnet_variables.tf
vnet_outputs.tf
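A minimal sketch of those three files; the resource, variable and output names are my assumptions, not the exact ones from the screenshots:

# vnet_main.tf
resource "azurerm_virtual_network" "vnet" {
  name                = "tf-vnet"
  resource_group_name = var.resource_group_name
  location            = var.location
  address_space       = var.vnet-address-space
}

resource "azurerm_subnet" "subnet_1" {
  name                 = "subnet-1"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = var.subnet-1-address-space
}

# vnet_variables.tf
variable "resource_group_name" { type = string }
variable "location" { type = string }
variable "vnet-address-space" { type = list(string) }
variable "subnet-1-address-space" { type = list(string) }

# vnet_outputs.tf - consumed later by the vm and traffic_rules modules
output "subnet_1_id" {
  value = azurerm_subnet.subnet_1.id
}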
  • You might notice some warnings/errors at this stage as some references in vnet_main.tf are incomplete; we’ll sort them out in the following sections
  • Now that we’ve defined the VNet, let’s define the Linux VM vm module in main.tf and associate it with the VNet and the Subnet
main.tf snippet-3
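A sketch of the vm module call in main.tf, with input names that are my assumptions; note how the subnet ID comes from the virtual_networks module output:

module "vm" {
  source = "./vm"

  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  vm_name             = "linuxvm-${random_id.random_ID.hex}"
  subnet_id           = module.virtual_networks.subnet_1_id

  depends_on = [module.virtual_networks]
}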

Let’s create vm_main.tf, vm_variables.tf and vm_outputs.tf files under the vm module folder created earlier. The code snippets in those files look like below

vm_variables.tf
vm_main.tf
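A minimal sketch of the two files, assuming an Ubuntu image, a small VM size and an adminuser account; adjust the SSH public key path for your machine:

# vm_variables.tf
variable "resource_group_name" { type = string }
variable "location" { type = string }
variable "vm_name" { type = string }
variable "subnet_id" { type = string }

# vm_main.tf
resource "azurerm_public_ip" "vm_public_ip" {
  name                = "${var.vm_name}-pip"
  resource_group_name = var.resource_group_name
  location            = var.location
  allocation_method   = "Static"
}

resource "azurerm_network_interface" "vm_nic" {
  name                = "${var.vm_name}-nic"
  resource_group_name = var.resource_group_name
  location            = var.location

  ip_configuration {
    name                          = "internal"
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.vm_public_ip.id
  }
}

resource "azurerm_linux_virtual_machine" "linux_vm" {
  name                  = var.vm_name
  resource_group_name   = var.resource_group_name
  location              = var.location
  size                  = "Standard_B1s"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.vm_nic.id]

  admin_ssh_key {
    username   = "adminuser"
    # Adjust this to wherever your public key lives
    public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }
}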
  • admin_ssh_key in the above code snippet helps you to SSH into the VM using your SSH keys. Please follow these guidelines to generate the SSH keys and change the path in your code snippet as per the location of your keys
  • vm_outputs.tf is currently blank as we don’t need to pass any of the VM information to other modules in our scenario. Feel free to delete vm_outputs.tf. Generally though, I keep it as it is, as it’s good practice to follow the structure, and this file can be used for any future modifications/enhancements to the scenario
  • You might still notice some warnings/errors at this stage as some references in vnet_main.tf and other .tf files are incomplete; we’ll sort them out in the upcoming sections

5. Let’s create some traffic_rules to only allow specific Inbound traffic to our VM

  • Define the traffic_rules module in the main.tf like below
main.tf snippet-4
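A sketch of the traffic_rules module call; as before, the input names are assumptions, and the subnet ID is taken from the virtual_networks module output:

module "traffic_rules" {
  source = "./traffic_rules"

  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  subnet_id           = module.virtual_networks.subnet_1_id

  depends_on = [module.virtual_networks]
}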
  • rules_main.tf and rules_variables.tf under the traffic_rules folder look like below
rules_main.tf snippet-1
rules_variables.tf
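A sketch of what this first version of those files could contain: the NSG with a single Inbound rule that allows SSH on port 22, but not yet associated with the subnet (the names are mine):

# rules_main.tf (snippet-1) - NSG with an SSH-only Inbound rule
resource "azurerm_network_security_group" "vm_nsg" {
  name                = "tf-nsg"
  resource_group_name = var.resource_group_name
  location            = var.location

  security_rule {
    name                       = "Allow-SSH-Inbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# rules_variables.tf
variable "resource_group_name" { type = string }
variable "location" { type = string }
variable "subnet_id" { type = string }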
  • Similar to vm_outputs.tf, rules_outputs.tf is currently blank for the same reason mentioned before
  • Till now, the folder structure looks like below
Exercise-3 folder & file structure

6. Let’s capture the boot_diagnostics of the VM we want to deploy through this exercise. We’ll use the storage_accounts module to fulfil this need, as shown below

  • Define storage_accounts module in the main.tf like below
main.tf snippet-5
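A sketch of the storage_accounts module call in main.tf:

module "storage_accounts" {
  source = "./storage_accounts"

  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
}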
  • sa_main.tf, sa_outputs.tf and sa_variables.tf under storage_accounts folder look like below
sa_variables.tf
sa_main.tf
sa_outputs.tf
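A minimal sketch of the three files; the storage account name below is a placeholder and has to be globally unique, lowercase and 3-24 characters:

# sa_variables.tf
variable "resource_group_name" { type = string }
variable "location" { type = string }

# sa_main.tf
resource "azurerm_storage_account" "boot_diag_sa" {
  name                     = "bootdiagexercise3sa" # placeholder, must be globally unique
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# sa_outputs.tf - consumed by the vm module for boot_diagnostics
output "boot_diag_storage_uri" {
  value = azurerm_storage_account.boot_diag_sa.primary_blob_endpoint
}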
  • Observe how output from sa_outputs.tf is referenced and consumed in vm_main.tf under boot_diagnostics
vm_main.tf referencing the boot_diagnostics for uri
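As a sketch, the VM resource picks the URI up through a new module input (the variable name is my assumption), wired from the storage_accounts output in main.tf:

# vm_variables.tf - new input for the boot diagnostics endpoint
variable "boot_diag_storage_uri" { type = string }

# vm_main.tf - add inside the azurerm_linux_virtual_machine resource
  boot_diagnostics {
    storage_account_uri = var.boot_diag_storage_uri
  }

# main.tf - add to the vm module call, passing the storage_accounts output
  boot_diag_storage_uri = module.storage_accounts.boot_diag_storage_uri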

Now, let’s run

terraform init, terraform plan and terraform apply --auto-approve. After a successful deployment, you should be able to see all the resources in the Azure Portal, like below

Azure Portal Resources deployed through Terraform
Linux VM with Public IP

The VNet map looks like below

tf-network topology before NSG rule
  • Now observe what we defined in vnet_main.tf and how it is shown in the above VNet map, under your Azure Portal / your VNet / Overview / Topology section
  • Let’s SSH into the VM using

ssh adminuser@pubIPofyourVM. Once successfully logged in, use this to install the nginx server on that VM. Once you have the server up and running, open any browser and try to access it using http://pubIPofyourVM; you should see the nginx welcome page like below

Notice that you are able to access your VM through port 80, as there are NO restrictions applied yet to the Inbound traffic of the Linux VM. Hence you are able to access it through the browser without any traffic being blocked

Also try to ping google.com from your VM SSH session. The ping should succeed as well, since Outbound traffic from your Linux VM is also allowed

Let’s now associate the NSG rule we’ve created in the rules_main.tf code with the subnet, such that we only allow Inbound access through port 22. Your rules_main.tf should be updated like below

rules_main.tf snippet-2
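A sketch of the addition, assuming the same names as in snippet-1: the existing NSG and its SSH rule stay as they are, and an association resource ties the NSG to the subnet passed in from the virtual_networks module:

# rules_main.tf (snippet-2) - add this to associate the NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "subnet_nsg" {
  subnet_id                 = var.subnet_id
  network_security_group_id = azurerm_network_security_group.vm_nsg.id
}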

After saving, perform terraform init, terraform plan and terraform apply --auto-approve to see the intended changes applied to your resources

Now, if you access http://pubIPofyourVM, it shouldn’t be allowed, as only port 22 is accepted. You should still be able to SSH into the machine from your terminal and to ping google.com, as the Outbound traffic rules of the VM are unchanged. Now observe your VNet topology in the Azure Portal

tf-network topology after NSG rule applied

You can browse the Storage Account/Container in the Azure Portal for the boot_diagnostics logs and confirm that they exist as expected

With this, the deployment of the Linux VM with an NSG rule and Boot Diagnostics is complete

Finally, perform terraform destroy --auto-approve to destroy all the resources in Azure and stop consuming the $

That’s it for now!

Hope you find this information useful.

Thanks for taking time to read!
