Building and destroying projects with Terraform

Tutorials we publish on Popularowl are practical and hands-on.

In most cases, as a prerequisite, we need to set up one or more virtual servers and pre-install them with enterprise technology tools and applications.

In addition, we want to be able to destroy all this virtual infrastructure after we finish working with it. And quickly recreate it next time we need the setup.

The infrastructure as code approach and tools like Terraform will help us solve this challenge.

What will we build?

In this tutorial, we will build a basic Terraform project for automating the provisioning and setup of a virtual machine.

This Terraform project is widely used across multiple other tutorials on Popularowl as a foundation for provisioning required VMs and software.

By the end of this tutorial, you will have a set of files and scripts which allow you to rapidly create and destroy Linux-based virtual servers on Digital Ocean.


1. Using Terraform

All popular public cloud providers allow you to create and destroy virtual servers and other cloud products.

You can achieve this via the provided user interfaces or programmatically via the cloud platform APIs.

Tools like Terraform aim to simplify and automate such programmatic interactions with the cloud platform APIs.

Terraform allows us to keep the set of instructions as configuration files and automates the rest.

It supports multiple public cloud providers like AWS, Google Cloud, Azure, Digital Ocean etc.

Exactly what we need to solve our challenge.

Install Terraform on your local machine.
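Terraform is distributed as a single binary. Once it is downloaded and placed on your PATH, a quick check confirms the install (the fallback message below is only illustrative):

```shell
# Check whether the terraform binary is available on PATH.
# If it is, capture its version string; otherwise note it is missing.
if command -v terraform >/dev/null 2>&1; then
  tf_version=$(terraform version)
else
  tf_version="terraform not found - download it from the HashiCorp site"
fi
echo "$tf_version"
```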

2. Create variables file

First, create a new project directory called foundation. In this directory, create the Terraform variables file foundation/variables.tf.

The variables file will hold the potentially dynamic values for our setup, like server size, server OS image name, etc.

# Terraform variables referenced by other Terraform
# files in this project

# token should be exported in the local
# shell environment to avoid hardcoded values
# (see section 6 below for examples)
variable "token" {
  description = "Digital Ocean Api Token"
}

variable "region" {
  description = "Digital Ocean Region"
  default = "lon1"
}

variable "droplet_image" {
  description = "Digital Ocean Droplet Image Name"
  default = "debian-9-x64"
}

variable "droplet_size" {
  description = "Droplet size for the new server"
  default = "1gb"
}

# location of the private ssh key
# used for ssh connection by terraform
# change the location if different on
# your local machine
variable "pvt_sshkey" {
  description = "Location of the local private ssh key"
  default = "~/.ssh/id_rsa"
}

# ssh_fingerprint should be exported in the local
# shell environment to avoid hardcoded values
# (see section 6 below for examples)
variable "ssh_fingerprint" {
  description = "Fingerprint of the public ssh key stored on Digital Ocean"
}

3. Main Terraform Steps

Next, create the file named foundation/main.tf.

This file will hold the main steps we want Terraform to run for us in this simple project.

# choose Digital Ocean provider
provider "digitalocean" {
  token = "${var.token}"
}

# create VM instance on Digital Ocean
resource "digitalocean_droplet" "popularowl-server" {
    image = "${var.droplet_image}"
    name = "popularowl-server"
    region = "${var.region}"
    size = "${var.droplet_size}"
    ssh_keys = [
      "${var.ssh_fingerprint}"
    ]

    # allow Terraform to connect via ssh
    connection {
        user = "root"
        type = "ssh"
        private_key = "${file(var.pvt_sshkey)}"
        timeout = "2m"
    }

    # run all necessary commands via remote shell
    provisioner "remote-exec" {
        inline = [
            # steps to run in ssh shell
            "apt update"
        ]
    }
}

# print out ip address of the created server VM
output "service-ip" {
  value = "${digitalocean_droplet.popularowl-server.ipv4_address}"
}

In the above file, we choose the Terraform provider for Digital Ocean.

This means that Terraform will automatically connect to the cloud platform using its APIs and perform all the setup needed.
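A side note: this tutorial uses the pre-0.12 interpolation syntax. On Terraform 0.13 and newer, the Digital Ocean provider must also be declared explicitly in a required_providers block; a minimal sketch (the version constraint is an assumption):

```hcl
# declare the Digital Ocean provider source for Terraform 0.13+
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"  # assumed constraint; pin to the version you test with
    }
  }
}
```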

We then instruct Terraform to create a resource, which we call popularowl-server.

The properties for this resource are supplied as variables (remember the variables file we created earlier?).

Notice that we haven’t hardcoded the var.ssh_fingerprint variable in the file?

It is a good practice not to hardcode the sensitive information in the code (which gets checked into version control history).

We will supply this value as an environment variable later, and Terraform will pick it up.

We also instruct Terraform to connect to the newly created virtual server via ssh and execute specific shell commands. In this step it is only apt update.

4. Run the shell steps from a file

Even though we could run all the shell instructions inline via ssh, it makes sense to keep the shell steps in a separate file.

This gives us more control as the number of shell steps grows.

Create a directory called files and, inside it, the new file files/setup.sh.

Next, define all the shell commands you want to run. For the purposes of this tutorial they are the following:


#!/bin/sh
# set up the new virtual machine
apt update &&
echo "All done. Welcome to your new virtual server."

Next, we update the resource block in foundation/main.tf, replacing the earlier remote-exec provisioner with the following two provisioners:


provisioner "file" {
    source      = "files/setup.sh"
    destination = "/tmp/setup.sh"
}

provisioner "remote-exec" {
    inline = [
        # run setup script
        "chmod 755 /tmp/setup.sh",
        "/tmp/setup.sh"
    ]
}

The above blocks instruct Terraform to copy the setup script over to the newly created virtual machine and then execute it over ssh.

5. Print out VM details

We are almost done. The last item is the information Terraform prints out after all the setup is finished.

Update the output block at the very bottom of foundation/main.tf:


output "service-ip" {
  value = "your new instance is running with IP address: ${digitalocean_droplet.popularowl-server.ipv4_address}"
}


6. Export sensitive variables

There are two sensitive variables in this project which Terraform will need in order to run the setup steps for us.

The first is the Digital Ocean access token, which you can generate in the Digital Ocean control panel.

The second is the fingerprint of the public key we want installed on the newly created VM so we can connect via ssh. The public key has to be added to the Digital Ocean platform as well.

Once you have both of the above, export them as environment variables in your local development machine's shell. Terraform will read any variable prefixed with TF_VAR_:

export TF_VAR_token=xxxxxxxxx
export TF_VAR_ssh_fingerprint=xxxxxxxxx
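If you are unsure of your key's fingerprint, ssh-keygen can compute the MD5 form that Digital Ocean displays for uploaded keys. A sketch using a throwaway key for demonstration (point it at your real ~/.ssh/id_rsa.pub instead):

```shell
# Generate a throwaway RSA key pair in a temp dir (demo only),
# then compute the MD5 fingerprint of the public key -
# the format Digital Ocean shows for uploaded ssh keys.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$keydir/id_rsa" -q

# strip the "MD5:" prefix from ssh-keygen's fingerprint output
fingerprint=$(ssh-keygen -E md5 -lf "$keydir/id_rsa.pub" | awk '{print $2}' | sed 's/^MD5://')
echo "$fingerprint"
rm -rf "$keydir"
```

The resulting colon-separated value is what you export as the fingerprint environment variable.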

7. Create and destroy VMs

We are now ready to automatically create VMs.

terraform init
terraform plan
terraform apply
terraform destroy

Run terraform init to set up the project (Terraform will download all dependencies). Run terraform plan to see which resources Terraform will create. Run terraform apply to create and set up the resources. Run terraform destroy to tear down all virtual machines in this project.


We have created a simple, yet very powerful foundation project for automating the setup and configuration of virtual machines.

You can destroy and recreate the setup within minutes. This allows you to manage the cost of cloud resources used. You can always recreate the state of the infrastructure setup.

Full source code of files we created during this tutorial is hosted on GitHub.


2 Responses

  • Ashleigh

    Not sure if it was just me, but I had to fix a few resultant errors after following this tut.

    Variable names in variables.tf and main.tf don’t match, splitting a string declaration over two lines seems to be problematic (in the output block) and the “Update the file with the following” instruction is too vague – maybe specify you mean within the “resource” block.

    Also, apparently the “connection” block needs a host specified?

    • hi there, thanks for feedback.

      The code examples of this tutorial are hosted on GitHub. It’s a functional and working code.

      code snippet formatting in this blog post was a bit buggy – should be fixed now.

      ‘connection’ block doesn’t need host specified in this case. As Terraform is provisioning the new VM – it knows the IP address of the newly provisioned instance before attempting to connect.
