How to install and use Terraform as an infrastructure automation tool

Snigdha Sambit Aryakumar
10 min read · Mar 16, 2022

In this tutorial we will use Terraform to provision, update, and destroy infrastructure in Google Cloud using the sample configuration provided. We will also cover the basic Terraform commands and how to install Terraform on your local machine.

Prerequisites

Below is the list of things we need before we start the demo. Please follow the steps and install or update your local machine accordingly.

Installing Terraform

For the installation, please refer to the official Terraform documentation.

Once Terraform is installed, you can verify the installation with the following command:

terraform -help
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  force-unlock  Release a stuck lock on the current workspace
  get           Install or upgrade remote Terraform modules
  graph         Generate a Graphviz graph of the steps in an operation
  import        Associate existing infrastructure with a Terraform resource
  login         Obtain and save credentials for a remote host
  logout        Remove locally-stored credentials for a remote host
  output        Show output values from your root module
  providers     Show the providers required for this configuration
  refresh       Update the state to match remote systems
  show          Show the current state or a saved plan
  state         Advanced state management
  taint         Mark a resource instance as not fully functional
  test          Experimental support for module integration testing
  untaint       Remove the 'tainted' state from a resource instance
  version       Show the current Terraform version
  workspace     Workspace management

Global options (use these before the subcommand, if any):
  -chdir=DIR    Switch to a different working directory before executing the
                given subcommand.
  -help         Show this help output, or the help for a specified subcommand.
  -version      An alias for the "version" subcommand.

Add any subcommand to terraform -help to learn more about what it does and its available options, for example:

terraform -help plan

Configuring Google Cloud SDK

Please follow the steps mentioned on this page to set up the gcloud CLI and other necessary tools:

https://cloud.google.com/sdk/docs/install-sdk

Configuring our Service Account on Google Cloud Platform

I am assuming that you already have a GCP project; if not, you can set up a free one. We can then create a service account with the correct permissions to manage the project’s resources.

· Create a service account and attach the roles it requires.

· Download the generated JSON file and save it to your project’s directory.
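As a rough sketch, those two steps can be done from the gcloud CLI. The project ID, service account name, and role below are placeholders; substitute your own values and grant only the roles your resources actually need:

```shell
# Create a service account named "terraform" (name and project are placeholders)
gcloud iam service-accounts create terraform \
  --project=my-gcp-project \
  --display-name="Terraform"

# Grant it permission to manage storage resources (adjust the role to your needs)
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:terraform@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Download a JSON key into the project's directory
gcloud iam service-accounts keys create service-account.json \
  --iam-account=terraform@my-gcp-project.iam.gserviceaccount.com
```

Treat the downloaded JSON key as a secret: keep it out of version control (add it to .gitignore).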

Terraform Demo

For the purpose of the demo we will create a Google Cloud Storage bucket in our project and store the Terraform state file in that bucket. So, ultimately, we will have a bucket created and a state file pushed into it.
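Pushing the state file into a bucket is done with a gcs backend block. A minimal sketch, assuming the bucket already exists before you configure the backend (the names below are placeholders):

```hcl
terraform {
  backend "gcs" {
    bucket = "snigdha-production-bucket" # an existing bucket to hold the state
    prefix = "terraform/state"           # folder inside the bucket for state files
  }
}
```

After adding or changing a backend block, rerun terraform init so Terraform can migrate the existing local state into the bucket.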

All the configs and instructions are stored in the GitHub repo below:

https://github.com/snigdhasambitak/terraform-gcloud-demo

Setup terraform provider

A Terraform provider allows us to connect to the Google Cloud API. Don’t worry about credentials; we will handle them in a later part of the article.

terraform {
  required_providers {
    google = {
      # credentials = "${file("service-account.json")}"
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
    google-beta = {
      # credentials = "${file("service-account.json")}"
      source  = "hashicorp/google-beta"
      version = "~> 4.0"
    }
  }
  required_version = ">= 0.14"
}

We will store the above configuration in a file named providers.tf.
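Note that credentials belong in a provider block, not inside required_providers (which is why those lines are commented out above). A sketch, assuming the JSON key downloaded earlier sits next to the configs and the project ID is your own:

```hcl
provider "google" {
  credentials = file("service-account.json") # key file downloaded earlier
  project     = "playground-snigdha-lwqar"   # your project ID
}
```

Alternatively, you can leave credentials out of the configs entirely and export GOOGLE_APPLICATION_CREDENTIALS pointing at the key file, which the provider picks up automatically.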

Setup google storage bucket

In Terraform the main.tf file is the main entrypoint; Terraform spins up the resources defined in it. The contents of our main.tf are as follows:

resource "google_storage_bucket" "default" {
  name          = module.this.id
  location      = var.location
  project       = var.project
  storage_class = var.storage_class
  force_destroy = var.force_destroy
  labels        = module.this.tags

  dynamic "retention_policy" {
    for_each = var.retention_policy == null ? [] : [var.retention_policy]
    content {
      is_locked        = retention_policy.value.is_locked
      retention_period = retention_policy.value.retention_period
    }
  }

  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      action {
        type          = lifecycle_rule.value.action.type
        storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
      }
      condition {
        age                   = lookup(lifecycle_rule.value.condition, "age", null)
        created_before        = lookup(lifecycle_rule.value.condition, "created_before", null)
        with_state            = lookup(lifecycle_rule.value.condition, "with_state", lookup(lifecycle_rule.value.condition, "is_live", false) ? "LIVE" : null)
        matches_storage_class = lookup(lifecycle_rule.value.condition, "matches_storage_class", null)
        num_newer_versions    = lookup(lifecycle_rule.value.condition, "num_newer_versions", null)
      }
    }
  }

  versioning {
    enabled = var.versioning_enabled
  }

  dynamic "encryption" {
    for_each = var.default_kms_key_name != null ? [1] : []
    content {
      default_kms_key_name = var.default_kms_key_name
    }
  }
}

resource "google_storage_bucket_iam_member" "members" {
  for_each = {
    for m in var.iam_members : "${m.role} ${m.member}" => m
  }
  bucket = google_storage_bucket.default.name
  role   = each.value.role
  member = each.value.member
}

Setup variables

As in any programming language, we can define variables for our Terraform configs, which are resolved at run time:

variable "location" {
  type        = string
  default     = "europe-west1"
  description = "The GCS region."
}

variable "project" {
  type        = string
  default     = null
  description = "The ID of the project in which the resource belongs. If it is not provided, the provider project is used."
}

variable "force_destroy" {
  type        = bool
  default     = false
  description = "When deleting a bucket, this boolean option will delete all contained objects."
}

variable "storage_class" {
  type        = string
  default     = "REGIONAL"
  description = "The Storage Class of the new bucket. Allowed values: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`."

  validation {
    condition     = contains(["STANDARD", "MULTI_REGIONAL", "REGIONAL", "NEARLINE", "COLDLINE", "ARCHIVE"], var.storage_class)
    error_message = "Allowed values: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`."
  }
}

variable "default_kms_key_name" {
  type        = string
  default     = null
  description = "The `id` of a Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified."
}

variable "versioning_enabled" {
  type        = bool
  default     = true
  description = "While set to `true`, versioning is fully enabled for this bucket."
}

variable "retention_policy" {
  type = object({
    is_locked        = bool
    retention_period = number
  })
  default     = null
  description = <<-DOC
    Configuration of the bucket's data retention policy for how long objects in the bucket should be retained.
    is_locked:
      If set to `true`, the bucket will be locked and permanently restrict edits to the bucket's retention policy.
    retention_period:
      The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived.
  DOC
}

variable "lifecycle_rules" {
  type = set(object({
    action    = any
    condition = any
  }))
  default     = []
  description = <<-DOC
    The list of bucket Lifecycle Rules.
    action:
      type:
        The type of the action of this Lifecycle Rule. Allowed values: `Delete` and `SetStorageClass`.
      storage_class:
        The target Storage Class of objects affected by this Lifecycle Rule.
        Required if action type is `SetStorageClass`.
        Allowed values: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`.
    condition:
      age:
        Minimum age of an object in days to satisfy this condition.
      created_before:
        Creation date of an object in RFC 3339 (e.g. `2017-06-13`) to satisfy this condition.
      with_state:
        Match to live and/or archived objects. Unversioned buckets have only live objects.
        Allowed values: `LIVE`, `ARCHIVED`, `ANY`.
      matches_storage_class:
        Storage Class of objects to satisfy this condition.
        Allowed values: `STANDARD`, `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`, `ARCHIVE`.
      num_newer_versions:
        Relevant only for versioned objects.
        The number of newer versions of an object to satisfy this condition.
      custom_time_before:
        Date in RFC 3339 (e.g. `2017-06-13`) to satisfy this condition, compared against an object's Custom-Time metadata.
      days_since_custom_time:
        Number of days elapsed since the date set in an object's Custom-Time metadata.
      days_since_noncurrent_time:
        Relevant only for versioned objects.
        Number of days elapsed since the noncurrent timestamp of an object.
      noncurrent_time_before:
        Relevant only for versioned objects.
        The date in RFC 3339 (e.g. `2017-06-13`) when the object became noncurrent.
  DOC
}

variable "iam_members" {
  type = list(object({
    role   = string
    member = string
  }))
  default     = []
  description = "The list of IAM members to grant permissions on the bucket."
}
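Any of these defaults can be overridden at run time, either on the command line with -var, or via a terraform.tfvars file that Terraform loads automatically. A small sketch with illustrative values:

```hcl
# terraform.tfvars -- values here override the variable defaults above
location      = "europe-west4"
storage_class = "STANDARD"

iam_members = [{
  role   = "roles/storage.objectViewer"
  member = "user:someone@example.com"
}]
```

The equivalent for a single variable on the command line would be, for example, terraform plan -var='location=europe-west4'.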

Setup outputs

The terraform output command extracts the value of an output variable from the state file. We can define outputs in our configs as in the file below:

output "self_link" {
  value       = join("", google_storage_bucket.default.*.self_link)
  description = "The URI of the created resource."
}

output "url" {
  value       = join("", google_storage_bucket.default.*.url)
  description = "The base URL of the bucket, in the format gs://<bucket-name>."
}

output "name" {
  value       = join("", google_storage_bucket.default.*.name)
  description = "The name of the bucket."
}
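After an apply, individual outputs can be read back in scripts. For example, using the output names defined above:

```shell
# Print a single output value without quotes, handy in shell scripts
terraform output -raw url

# Or dump all outputs as JSON for consumption by other tools
terraform output -json
```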

Terraform modules

A module is a container for multiple resources that are used together. Modules can be used to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture, rather than directly in terms of physical objects.

The .tf files in your working directory when you run terraform plan or terraform apply together form the root module. That module may call other modules and connect them together by passing output values from one to input values of another.

The configs we defined above form a module, and to create a bucket we can call it from our example with the following configuration:

module "my_bucket" {
  source        = "git::https://github.com/snigdhasambitak/terraform-gcloud-demo.git?ref=main"
  name          = "bucket"
  stage         = "production"
  namespace     = "snigdha"
  project       = "playground-snigdha-lwqar"
  force_destroy = true

  lifecycle_rules = [{
    action = {
      type = "Delete"
    }
    condition = {
      age        = 365
      with_state = "ANY"
    }
  }]

  iam_members = [{
    role   = "roles/storage.objectViewer"
    member = "user:saryakumar@example.com"
  }]
}
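While iterating on the module itself, the same call can point at a local checkout instead of the git URL; and when sourcing from git, the ?ref= query can pin a release tag rather than a branch for reproducible builds. A sketch (the relative path and tag name here are hypothetical):

```hcl
module "my_bucket" {
  # Local checkout while developing the module; pin a tag such as
  # ?ref=v1.0.0 instead of ?ref=main when sourcing from git
  source = "../terraform-gcloud-demo"

  # ...same inputs as in the example above...
}
```

Remember to rerun terraform init after changing a module source, so Terraform can fetch or relink the module.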

Provisioning

To provision this example, run the following from within this directory:

terraform init # to get the plugins

terraform init

Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Using previously-installed hashicorp/google v4.14.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

terraform plan # to see the infrastructure plan

terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.my_bucket.google_storage_bucket.default will be created
  + resource "google_storage_bucket" "default" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = {
          + "name"      = "snigdha-production-bucket"
          + "namespace" = "snigdha"
          + "stage"     = "production"
        }
      + location                    = "EUROPE-WEST1"
      + name                        = "snigdha-production-bucket"
      + project                     = "playground-snigdha-lwqar"
      + self_link                   = (known after apply)
      + storage_class               = "REGIONAL"
      + uniform_bucket_level_access = (known after apply)
      + url                         = (known after apply)

      + lifecycle_rule {
          + action {
              + type = "Delete"
            }
          + condition {
              + age                   = 365
              + matches_storage_class = []
              + with_state            = "ANY"
            }
        }

      + versioning {
          + enabled = true
        }
    }

  # module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"] will be created
  + resource "google_storage_bucket_iam_member" "members" {
      + bucket = "snigdha-production-bucket"
      + etag   = (known after apply)
      + id     = (known after apply)
      + member = "user:saryakumar@example.com"
      + role   = "roles/storage.objectViewer"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + name      = "snigdha-production-bucket"
  + self_link = (known after apply)
  + url       = (known after apply)

terraform apply # to apply the infrastructure build

terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.my_bucket.google_storage_bucket.default will be created
  + resource "google_storage_bucket" "default" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = {
          + "name"      = "snigdha-production-bucket"
          + "namespace" = "snigdha"
          + "stage"     = "production"
        }
      + location                    = "EUROPE-WEST1"
      + name                        = "snigdha-production-bucket"
      + project                     = "playground-snigdha-lwqar"
      + self_link                   = (known after apply)
      + storage_class               = "REGIONAL"
      + uniform_bucket_level_access = (known after apply)
      + url                         = (known after apply)

      + lifecycle_rule {
          + action {
              + type = "Delete"
            }
          + condition {
              + age                   = 365
              + matches_storage_class = []
              + with_state            = "ANY"
            }
        }

      + versioning {
          + enabled = true
        }
    }

  # module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"] will be created
  + resource "google_storage_bucket_iam_member" "members" {
      + bucket = "snigdha-production-bucket"
      + etag   = (known after apply)
      + id     = (known after apply)
      + member = "user:saryakumar@example.com"
      + role   = "roles/storage.objectViewer"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + name      = "snigdha-production-bucket"
  + self_link = (known after apply)
  + url       = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.my_bucket.google_storage_bucket.default: Creating...
module.my_bucket.google_storage_bucket.default: Creation complete after 2s [id=snigdha-production-bucket]
module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"]: Creating...
module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"]: Creation complete after 4s [id=b/snigdha-production-bucket/roles/storage.objectViewer/user:saryakumar@example.com]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

name = "snigdha-production-bucket"
self_link = "https://www.googleapis.com/storage/v1/b/snigdha-production-bucket"
url = "gs://snigdha-production-bucket"

terraform output # outputs the created resources

terraform output
name = "snigdha-production-bucket"
self_link = "https://www.googleapis.com/storage/v1/b/snigdha-production-bucket"
url = "gs://snigdha-production-bucket"

terraform destroy # to destroy the built infrastructure

terraform destroy

module.my_bucket.google_storage_bucket.default: Refreshing state... [id=snigdha-production-bucket]
module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"]: Refreshing state... [id=b/snigdha-production-bucket/roles/storage.objectViewer/user:saryakumar@example.com]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # module.my_bucket.google_storage_bucket.default will be destroyed
  - resource "google_storage_bucket" "default" {
      - default_event_based_hold    = false -> null
      - force_destroy               = true -> null
      - id                          = "snigdha-production-bucket" -> null
      - labels                      = {
          - "name"      = "snigdha-production-bucket"
          - "namespace" = "snigdha"
          - "stage"     = "production"
        } -> null
      - location                    = "EUROPE-WEST1" -> null
      - name                        = "snigdha-production-bucket" -> null
      - project                     = "playground-snigdha-lwqar" -> null
      - requester_pays              = false -> null
      - self_link                   = "https://www.googleapis.com/storage/v1/b/snigdha-production-bucket" -> null
      - storage_class               = "REGIONAL" -> null
      - uniform_bucket_level_access = false -> null
      - url                         = "gs://snigdha-production-bucket" -> null

      - lifecycle_rule {
          - action {
              - type = "Delete" -> null
            }
          - condition {
              - age                        = 365 -> null
              - days_since_custom_time     = 0 -> null
              - days_since_noncurrent_time = 0 -> null
              - matches_storage_class      = [] -> null
              - num_newer_versions         = 0 -> null
              - with_state                 = "ANY" -> null
            }
        }

      - versioning {
          - enabled = true -> null
        }
    }

  # module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"] will be destroyed
  - resource "google_storage_bucket_iam_member" "members" {
      - bucket = "b/snigdha-production-bucket" -> null
      - etag   = "CAI=" -> null
      - id     = "b/snigdha-production-bucket/roles/storage.objectViewer/user:saryakumar@example.com" -> null
      - member = "user:saryakumar@example.com" -> null
      - role   = "roles/storage.objectViewer" -> null
    }

Plan: 0 to add, 0 to change, 2 to destroy.

Changes to Outputs:
  - name      = "snigdha-production-bucket" -> null
  - self_link = "https://www.googleapis.com/storage/v1/b/snigdha-production-bucket" -> null
  - url       = "gs://snigdha-production-bucket" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"]: Destroying... [id=b/snigdha-production-bucket/roles/storage.objectViewer/user:saryakumar@example.com]
module.my_bucket.google_storage_bucket_iam_member.members["roles/storage.objectViewer user:saryakumar@example.com"]: Destruction complete after 4s
module.my_bucket.google_storage_bucket.default: Destroying... [id=snigdha-production-bucket]
module.my_bucket.google_storage_bucket.default: Destruction complete after 1s

Destroy complete! Resources: 2 destroyed.
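As a final sanity check, assuming the Cloud SDK configured earlier, you can confirm the bucket is gone by listing the project’s buckets (the project ID is the one used in the demo):

```shell
# Lists all buckets in the project; the demo bucket should no longer appear
gsutil ls -p playground-snigdha-lwqar
```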


Snigdha Sambit Aryakumar

Technical Lead @ Travix International | Helping build and deliver software faster