Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp. It is declarative: instead of writing step-by-step imperative instructions as you would in JavaScript or Python, you declare what you want using HashiCorp Configuration Language (HCL). Terraform is also stateful, meaning it keeps track of your current state and compares it with your desired state. A reconciliation process between the two states generates a plan that Terraform can execute to create new resources or update and delete existing ones. This plan is implemented as a directed acyclic graph, which is what allows Terraform to understand and handle dependencies between resources. Using Terraform is a great way to manage account-level Snowflake resources like Warehouses, Databases, Schemas, Tables, and Roles/Grants, among many other use cases.
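
As a minimal illustration of this declarative style (the resource names here are hypothetical and not part of this lab), referencing one resource from another is all it takes for Terraform to add a dependency edge to its graph:

resource "snowflake_database" "analytics" {
  name = "ANALYTICS"
}

# Referencing the database's name tells Terraform to create the database first.
resource "snowflake_schema" "raw" {
  database = snowflake_database.analytics.name
  name     = "RAW"
}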

A Terraform provider is available for Snowflake that allows Terraform to create and manage Snowflake resources.

Example Terraform use cases include provisioning account-level objects such as databases, schemas, warehouses, roles, and grants, and managing service users for applications and CI/CD pipelines.

Many Snowflake customers use Terraform to comply with security controls, maintain consistency, and support similar engineering workflows for infrastructure at scale.

This tutorial demonstrates how you can use Terraform to manage your Snowflake configurations in an automated and source-controlled way. We show you how to install and use Terraform to create and manage your Snowflake environment, including how to create a database, schema, warehouse, multiple roles, and a service user.

This introductory tutorial does not cover specific cloud providers or how to integrate Terraform into specific CI/CD tools. Those topics may be covered in future labs.

Prerequisites

What You'll Learn

What You'll Need

What You'll Build

Create a new repository to hold your Terraform project. We use the name sfguide-terraform-sample in this lab.

Run the following commands if you use the git command-line client, replacing YOUR_GITHUB_USERNAME with your GitHub username:

$ mkdir sfguide-terraform-sample && cd sfguide-terraform-sample
$ echo "# sfguide-terraform-sample" >> README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git branch -M main
$ git remote add origin git@github.com:YOUR_GITHUB_USERNAME/sfguide-terraform-sample.git
$ git push -u origin main

You now have an empty repo that we will use in subsequent steps to Terraform your Snowflake account.

We will now create a user account separate from your own that uses key-pair authentication. This is required in this lab because of the provider's limitations around caching credentials and its lack of support for 2FA. Service accounts with key-pair authentication are also how most CI/CD pipelines run Terraform.

Create an RSA key for Authentication

This creates the private and public keys we use to authenticate the service account we will use for Terraform.

$ cd ~/.ssh
$ openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake_tf_snow_key.p8 -nocrypt
$ openssl rsa -in snowflake_tf_snow_key.p8 -pubout -out snowflake_tf_snow_key.pub

Create the User in Snowflake

Log in to the Snowflake console and create the user account by running the following command as the ACCOUNTADMIN role.

But first:

  1. Copy the text contents of the ~/.ssh/snowflake_tf_snow_key.pub file, starting after the PUBLIC KEY header and stopping just before the PUBLIC KEY footer (the shell snippet after this list shows one way to extract it).
  2. Paste over the RSA_PUBLIC_KEY_HERE label (shown below).
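
If you prefer to do this from the command line, the following sketch prints just the base64 key body on a single line, ready to paste into the SQL statement (it assumes the key was written to ~/.ssh as shown above):

$ grep -v "PUBLIC KEY" ~/.ssh/snowflake_tf_snow_key.pub | tr -d '\n'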

Execute the following SQL statements to create the user and grant it access to the SYSADMIN and SECURITYADMIN roles needed for account management.

CREATE USER "tf-snow" RSA_PUBLIC_KEY='RSA_PUBLIC_KEY_HERE' DEFAULT_ROLE=PUBLIC MUST_CHANGE_PASSWORD=FALSE;

GRANT ROLE SYSADMIN TO USER "tf-snow";
GRANT ROLE SECURITYADMIN TO USER "tf-snow";
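
Optionally, you can confirm that the public key was registered by describing the user and checking that the RSA_PUBLIC_KEY_FP property is populated:

DESC USER "tf-snow";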

We need to pass provider information via environment variables and input variables so that Terraform can authenticate as the user.

Run the following to find YOUR_SNOWFLAKE_ACCOUNT. Refer to the Account Identifiers documentation for more information.

SELECT LOWER(current_organization_name() || '-' || current_account_name()) as YOUR_SNOWFLAKE_ACCOUNT;

Add Account Information to Environment

Run these commands in your shell. Be sure to replace the YOUR_SNOWFLAKE_ACCOUNT placeholder with the correct value.

NOTE: Setting SNOWFLAKE_REGION is required if you are using a Legacy Account Locator.

$ export SNOWFLAKE_USER="tf-snow"
$ export SNOWFLAKE_AUTHENTICATOR=JWT
$ export SNOWFLAKE_PRIVATE_KEY=`cat ~/.ssh/snowflake_tf_snow_key.p8`
$ export SNOWFLAKE_ACCOUNT="YOUR_SNOWFLAKE_ACCOUNT"

If you plan to work on this or other projects in multiple shells, it may be convenient to put these exports in a snow.env file that you can source, or to add them to your .bashrc or .zshrc file. For this lab, we expect you to run future Terraform commands in a shell with these variables set.
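
For example, a snow.env file (the filename is just the suggestion from the note above) could contain the same exports, and you would load it with source snow.env in each new shell:

# snow.env -- run "source snow.env" in each new shell before using Terraform
export SNOWFLAKE_USER="tf-snow"
export SNOWFLAKE_AUTHENTICATOR=JWT
export SNOWFLAKE_PRIVATE_KEY=`cat ~/.ssh/snowflake_tf_snow_key.p8`
export SNOWFLAKE_ACCOUNT="YOUR_SNOWFLAKE_ACCOUNT"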

Add a file to your project in the base directory named main.tf. In main.tf we set up the provider and define the configuration for the database and the warehouse that we want Terraform to create.

Copy the contents of the following block into your main.tf file:

terraform {
  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.87"
    }
  }
}

provider "snowflake" {
  role = "SYSADMIN"
}

resource "snowflake_database" "db" {
  name = "TF_DEMO"
}

resource "snowflake_warehouse" "warehouse" {
  name           = "TF_DEMO"
  warehouse_size = "xsmall"
  auto_suspend   = 60
}

This is all the code needed to create these resources.

To set up the project to run Terraform, you first need to initialize the project.

Run the following from a shell in your project folder:

$ terraform init

The dependencies needed to run Terraform are downloaded to your computer.

In this demo, we use a local backend for Terraform, which means that the state file is stored locally on the machine running Terraform. Because Terraform state files are required to calculate all changes, it's critical that you do not lose these files (*.tfstate, and *.tfstate.* for older state files), or Terraform will lose track of which resources it is managing. For any kind of multi-tenancy or automation workflow, we highly recommend using Remote Backends, which are outside the scope of this lab.
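
For reference only (remote backends are not used in this lab), a remote backend is configured with a backend block inside the terraform block; the bucket, key, and region values below are placeholders you would replace with your own:

terraform {
  backend "s3" {
    # Placeholder values -- replace with a bucket and region you control.
    bucket = "my-terraform-state-bucket"
    key    = "snowflake/terraform.tfstate"
    region = "us-east-1"
  }
}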

The .terraform folder is where all provider and module dependencies are downloaded. There is nothing important in this folder and it is safe to delete. You should add this folder and the Terraform state files to .gitignore, since you do not want to check sensitive information into version control.

Create a file named .gitignore in your project root, then add the following text to the file and save it:

*.terraform*
*.tfstate
*.tfstate.*

Run the following to check in your files for future change tracking:

$ git checkout -b dbwh
$ git add main.tf
$ git add .gitignore
$ git commit -m "Add Database and Warehouse"
$ git push origin HEAD

Next, you can log in to GitHub and create a Pull Request.

In many production systems, a GitHub commit triggers a CI/CD job, which creates a plan that engineers can review. If the changes are desired, the Pull Request is merged. After the merge, a CI/CD job kicks off and applies the changes made in the main branch.

To manage different environments (dev/test/prod), you can configure the Terraform code with different input variables and deploy to either the same Snowflake account or different Snowflake accounts.
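
As a sketch of that pattern (the variable name and naming convention here are illustrative, not part of this lab), an environment input variable can be folded into resource names and supplied per deployment:

variable "environment" {
  description = "Deployment environment, e.g. DEV, TEST, or PROD"
  type        = string
  default     = "DEV"
}

# Illustrative only: suffix the database name with the environment.
resource "snowflake_database" "db" {
  name = "TF_DEMO_${upper(var.environment)}"
}

You would then pass a different value per deployment, for example terraform apply -var environment=PROD.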

Your specific workflow will depend on your requirements, including your compliance needs, your other workflows, and your environment and account topology.

For this lab, you can simulate the proposed CI/CD job and run a plan to see what Terraform wants to change. During terraform plan, Terraform performs a diff calculation that compares the desired state with the previous/current state from the state file to determine the actions that need to be taken.

From a shell in your project folder (with your Account Information in environment), run:

$ terraform plan

Now that you have reviewed the plan, we simulate the next step of the CI/CD job by applying those changes to your account.

  1. From a shell in your project folder (with your Account Information in environment) run:
    $ terraform apply
    
  2. Terraform regenerates the execution plan (unless you supply a saved plan file; see the sketch after this list) and applies the needed changes after confirmation. In this case, Terraform will create two new resources and make no other changes.
  3. Log in to your Snowflake account and verify that Terraform created the database and the warehouse.
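
If you want to mirror a CI/CD flow where the reviewed plan is exactly what gets applied, you can save the plan to a file and apply that file; the plan.out filename is just a convention:

$ terraform plan -out=plan.out
$ terraform apply plan.out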

All databases need a schema to store tables, so we'll add one, along with a service user so that our application/client can connect to the database and schema. The syntax is very similar to the database and warehouse you already created; by now you have learned everything you need to create and update resources in Snowflake. We'll also add privileges so the service role/user can use the database and schema.

You'll see that most of this is what you would expect. The only new part is creating the private key. Because the Terraform TLS private key generator doesn't export the public key in a format that the Terraform provider can consume, some string manipulation is needed.

  1. Change the warehouse size in the main.tf file from xsmall to small.
  2. Add the following resources to your main.tf file:
provider "snowflake" {
  alias = "security_admin"
  role  = "SECURITYADMIN"
}

resource "snowflake_role" "role" {
  provider = snowflake.security_admin
  name     = "TF_DEMO_SVC_ROLE"
}

resource "snowflake_grant_privileges_to_account_role" "database_grant" {
  provider          = snowflake.security_admin
  privileges        = ["USAGE"]
  account_role_name = snowflake_role.role.name
  on_account_object {
    object_type = "DATABASE"
    object_name = snowflake_database.db.name
  }
}

resource "snowflake_schema" "schema" {
  database   = snowflake_database.db.name
  name       = "TF_DEMO"
  is_managed = false
}

resource "snowflake_grant_privileges_to_account_role" "schema_grant" {
  provider          = snowflake.security_admin
  privileges        = ["USAGE"]
  account_role_name = snowflake_role.role.name
  on_schema {
    schema_name = "\"${snowflake_database.db.name}\".\"${snowflake_schema.schema.name}\""
  }
}

resource "snowflake_grant_privileges_to_account_role" "warehouse_grant" {
  provider          = snowflake.security_admin
  privileges        = ["USAGE"]
  account_role_name = snowflake_role.role.name
  on_account_object {
    object_type = "WAREHOUSE"
    object_name = snowflake_warehouse.warehouse.name
  }
}

resource "tls_private_key" "svc_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "snowflake_user" "user" {
  provider          = snowflake.security_admin
  name              = "tf_demo_user"
  default_warehouse = snowflake_warehouse.warehouse.name
  default_role      = snowflake_role.role.name
  default_namespace = "${snowflake_database.db.name}.${snowflake_schema.schema.name}"
  # substr strips the PEM header and footer so only the base64 key body is passed to Snowflake.
  rsa_public_key    = substr(tls_private_key.svc_key.public_key_pem, 27, 398)
}

resource "snowflake_grant_privileges_to_account_role" "user_grant" {
  provider          = snowflake.security_admin
  privileges        = ["MONITOR"]
  account_role_name = snowflake_role.role.name  
  on_account_object {
    object_type = "USER"
    object_name = snowflake_user.user.name
  }
}

resource "snowflake_grant_account_role" "grants" {
  provider  = snowflake.security_admin
  role_name = snowflake_role.role.name
  user_name = snowflake_user.user.name
}


  3. To get the public and private key information for the application, use Terraform output values.

Add the following output values to a new file named outputs.tf:

output "snowflake_svc_public_key" {
  value = tls_private_key.svc_key.public_key_pem
}

output "snowflake_svc_private_key" {
  value     = tls_private_key.svc_key.private_key_pem
  sensitive = true
}
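
After the next apply, you can retrieve these values with terraform output; for example, the following writes the private key to a local file for your client to use (the -raw flag requires Terraform 0.14 or later, and the filename is arbitrary):

$ terraform output snowflake_svc_public_key
$ terraform output -raw snowflake_svc_private_key > tf_demo_user_key.p8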

Let's check in our files for future change tracking:

$ git checkout -b svcuser
$ git add main.tf
$ git add outputs.tf
$ git commit -m "Add Service User, Schema, Grants"
$ git push origin HEAD

You can log in to GitHub to create the Pull Request and merge the changes. Pull Requests allow for process controls for reviews, and the centralized git repo can automatically trigger CI/CD workflows as desired.

To simulate the CI/CD pipeline, we can apply the changes to bring your account in line with the desired state defined in your configuration.

  1. From a shell in your project folder (with your Account Information in environment) run:
    $ terraform apply
    
  2. Accept the changes if they look appropriate.
  3. Log in to the console to see all the changes complete.

Because all changes are stored in source control and applied by CI/CD, you can get a history of all your environment changes. You can put compliance controls into place and limit authorization to directly manage the environments to fewer users. This also makes it easy to bring up new environments that are identical to others in a timely manner without managing SQL scripts.

You're almost done with the demo. We have one thing left to do: clean up your account.

Destroy all the Terraform Managed Resources

  1. From a shell in your project folder (with your Account Information in environment) run:
     $ terraform destroy
    
  2. Accept the changes if they look appropriate.
  3. Log in to the console to verify that all the objects are destroyed. The database, schema, warehouse, role, and the user objects created by Terraform will be automatically deleted.

Drop the User we added
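
The tf-snow user was created by hand rather than by Terraform, so terraform destroy does not remove it. If you no longer need it, drop it from a Snowflake worksheet using the ACCOUNTADMIN role (this assumes you created the user exactly as shown earlier in this lab):

DROP USER "tf-snow";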

If you are new to Terraform, there's still a lot to learn. We suggest researching remote state, input variables, and building modules. This will empower you to build and manage your Snowflake environment(s) through a simple declarative language.
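
As a taste of the module pattern (the module path and input variable below are hypothetical), resources like the ones in this lab can be grouped into a reusable module and instantiated per environment:

# Hypothetical module call -- assumes ./modules/snowflake-environment declares an "environment" variable.
module "snowflake_env" {
  source      = "./modules/snowflake-environment"
  environment = "DEV"
}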

The Terraform provider for Snowflake is an open-source project. If you need Terraform to manage a resource that the provider does not yet support, we welcome contributions! We also welcome issues and feedback on the GitHub project to help improve the Terraform provider and the overall experience.

Next steps

Other tools are available to manage things like data migrations and table schema changes. We often see Terraform paired with such tools to meet all customer requirements.

If you had any issues with this project, you can pull a working version directly from GitHub.

What we've covered