Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp. A provider is available for Snowflake (written by the Chan Zuckerberg Initiative), as well as for the cloud providers that Snowflake runs on: Azure, AWS, and GCP.

Terraform is declarative, so you can define the resources and configurations you want, and Terraform calculates dependencies, looks at previous state, and makes all the necessary changes to converge to the new desired state. Using Terraform is a great way to deploy new projects and make infrastructure changes in production. It makes managing dependencies between cloud providers especially easy.

Many Snowflake customers use IaC to abide by compliance controls, maintain consistency, and keep infrastructure changes flowing through the same engineering workflows used for application code.

This tutorial demonstrates how you can use Terraform to manage your Snowflake configurations in an automated and source-controlled way. We show you how to install and use Terraform to create and manage your Snowflake environment, including how to create a database, schema, warehouse, multiple roles, and a service user.

This introductory tutorial does not cover specific cloud providers or how to integrate Terraform into specific CI/CD tools. Those topics may be covered in future labs.



Create a new repository to hold your Terraform project. We use the name sfguide-terraform-sample in this lab.

Run the following commands if you use the git command-line client, replacing YOUR_GITHUB_USERNAME with your GitHub username:

$ mkdir sfguide-terraform-sample && cd sfguide-terraform-sample
$ echo "# sfguide-terraform-sample" >> README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git branch -M main
$ git remote add origin git@github.com:YOUR_GITHUB_USERNAME/sfguide-terraform-sample.git
$ git push -u origin main

You now have an empty repo that we will use in subsequent steps to Terraform your Snowflake account.

We will now create a user account, separate from your own, that uses key-pair authentication. This is required in this lab because of the provider's limitations around caching credentials and its lack of support for 2FA. A service account with key-pair authentication is also how most CI/CD pipelines run Terraform.

Create an RSA key for Authentication

This creates the private and public keys we use to authenticate the service account we will use for Terraform.

$ cd ~/.ssh
$ openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake_tf_snow_key.p8 -nocrypt
$ openssl rsa -in snowflake_tf_snow_key.p8 -pubout -out snowflake_tf_snow_key.pub

Create the User in Snowflake

Log in to the Snowflake console and create the user account by running the following command as the ACCOUNTADMIN role.

But first:

  1. Copy the text contents of the ~/.ssh/snowflake_tf_snow_key.pub file, starting after the PUBLIC KEY header, and stopping just before the PUBLIC KEY footer.
  2. Paste over the RSA_PUBLIC_KEY_HERE label (shown below).
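If you prefer to do this from the command line, here is one way (assuming the key file path above) to print just the key body, with the PEM header, footer, and line breaks stripped:

```shell
# Print the public key body only (no PEM header/footer, no line breaks)
grep -v "PUBLIC KEY" ~/.ssh/snowflake_tf_snow_key.pub | tr -d '\n'
```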

Execute both of the following SQL statements to create the User and grant it access to the SYSADMIN and SECURITYADMIN roles needed for account management.
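The SQL was not reproduced above; the following sketch is consistent with the service user and roles used in this lab (the user name tf-snow and the RSA_PUBLIC_KEY_HERE placeholder come from this lab; the exact user options are an assumption):

```sql
-- Create the service user with key-pair auth (paste your public key over the placeholder)
CREATE USER "tf-snow" RSA_PUBLIC_KEY='RSA_PUBLIC_KEY_HERE' DEFAULT_ROLE=PUBLIC MUST_CHANGE_PASSWORD=FALSE;

-- Grant the roles Terraform will assume for account management
GRANT ROLE SYSADMIN TO USER "tf-snow";
GRANT ROLE SECURITYADMIN TO USER "tf-snow";
```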



We need to pass provider information via environment variables and input variables so that Terraform can authenticate as the user.

Run the following query to find the YOUR_ACCOUNT_LOCATOR and Snowflake Region ID values you need.

SELECT current_account() as YOUR_ACCOUNT_LOCATOR, current_region() as YOUR_SNOWFLAKE_REGION_ID;

You can find your Region ID (YOUR_REGION_HERE) from YOUR_SNOWFLAKE_REGION_ID in this reference table. Example: aws_us_west_2 would have a us-west-2 value for YOUR_REGION_HERE.

Add Account Information to Environment

Run these commands in your shell. Be sure to replace the YOUR_ACCOUNT_LOCATOR and YOUR_REGION_HERE placeholders with the correct values.

$ export SNOWFLAKE_USER="tf-snow"
$ export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key.p8"
$ export SNOWFLAKE_ACCOUNT="YOUR_ACCOUNT_LOCATOR"
$ export SNOWFLAKE_REGION="YOUR_REGION_HERE"

If you plan on working on this or other projects in multiple shells, it may be convenient to put this in a snow.env file that you can source or put it in your .bashrc or .zshrc file. For this lab, we expect you to run future Terraform commands in a shell with those set.
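For example, a snow.env file (the file name is just the convention suggested above) could look like the following; source it in each new shell before running Terraform:

```shell
# snow.env -- run `source snow.env` in each shell before running Terraform
export SNOWFLAKE_USER="tf-snow"
export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key.p8"
export SNOWFLAKE_ACCOUNT="YOUR_ACCOUNT_LOCATOR"
export SNOWFLAKE_REGION="YOUR_REGION_HERE"
```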

Add a file to your project in the base directory named main.tf. In main.tf we set up the provider and define the configuration for the database and the warehouse that we want Terraform to create.

Copy the contents of the following block to your main.tf

terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.22.0"
    }
  }
}

provider "snowflake" {
  alias = "sys_admin"
  role  = "SYSADMIN"
}

resource "snowflake_database" "db" {
  provider = snowflake.sys_admin
  name     = "TF_DEMO"
}

resource "snowflake_warehouse" "warehouse" {
  provider       = snowflake.sys_admin
  name           = "TF_DEMO"
  warehouse_size = "xsmall"

  auto_suspend = 60
}
This is all the code needed to create these resources.

To set up the project to run Terraform, you first need to initialize the project.

Run the following from a shell in your project folder:

$ terraform init

The dependencies needed to run Terraform are downloaded to your computer.

In this demo, we use local state for Terraform. The state files are required to calculate all changes; because they are needed to correctly merge future changes, they are extremely important. If multiple users run Terraform, if it runs on different computers, or if it runs through CI/CD, the state needs to be kept in remote storage. While using local state, you'll see the current state stored in *.tfstate files and old versions in files named *.tfstate.*.
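Configuring remote state is out of scope for this lab, but as an illustration, a Terraform backend block (here assuming an S3 bucket you own; the bucket name is a placeholder) looks like this:

```hcl
terraform {
  backend "s3" {
    bucket = "YOUR_STATE_BUCKET"
    key    = "sfguide-terraform-sample/terraform.tfstate"
    region = "us-west-2"
  }
}
```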

The .terraform folder is where all dependencies are downloaded. It's safe to add that and the state files to .gitignore to minimize changes.

Create a file named .gitignore in your project root, then add the following text to the file and save it:
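The entries were omitted above; based on the state files and the .terraform folder just described, a typical set is:

```
.terraform/
*.tfstate
*.tfstate.*
```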


Run the following to check in your files for future change tracking:

$ git checkout -b dbwh
$ git add main.tf
$ git add .gitignore
$ git commit -m "Add Database and Warehouse"
$ git push origin HEAD

Next, you can log in to GitHub and create a Pull Request.

In many production systems, a GitHub commit triggers a CI/CD job, which creates a plan that engineers can review. If the changes are desired, the Pull Request is merged. After the merge, a CI/CD job kicks off and applies the changes made in the main branch.

To manage different environments (dev/test/prod), you can use names to isolate resources from one another, or—to reduce complexity and limit the blast radius—you can use separate Snowflake accounts. If you pass the environment as an input variable to the project, you can run the same Terraform project unchanged for each environment.
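For example (a sketch; the variable name and naming scheme are illustrative, not part of this lab), the environment could be passed in as an input variable and used to namespace resources:

```hcl
variable "environment" {
  type    = string
  default = "dev"
}

resource "snowflake_database" "db" {
  provider = snowflake.sys_admin
  name     = "TF_DEMO_${upper(var.environment)}"
}
```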

Your specific workflow will depend on your requirements, including your compliance needs, your other workflows, and your environment and account topology.

For this lab, you can simulate the proposed CI/CD job by running a plan to see what Terraform wants to change. During plan, Terraform compares the stored state with the desired resource configuration and displays all the changes needed to bring the resources into that state.

From a shell in your project folder (with your Account Information in environment), run:

$ terraform plan

Now that you have reviewed the plan, we simulate the next step of the CI/CD job by applying those changes to your account.

  1. From a shell in your project folder (with your Account Information in environment) run:
    $ terraform apply
  2. Terraform recreates the plan and, after you confirm it, applies the needed changes. In this case, Terraform will create two new resources and make no other changes.
  3. Log in to your Snowflake account and verify that Terraform created the database and the warehouse.

All databases need a schema to store tables, so we'll add that and a service user so that our application/client can connect to the database and schema. The syntax is very similar to the database and warehouse you already created. By now you have learned everything you need to know to create and update resources in Snowflake. We'll also add privileges so the service role/user can use the database and schema.

You'll see that most of this is what you would expect. The only complicated part is handling the key pair: because the Terraform TLS private key generator doesn't export the public key in a format the Snowflake provider can consume, some string manipulation is needed.

  1. Change the warehouse size in the main.tf file from xsmall to small.
  2. Add the following resources to your main.tf file:
        provider "snowflake" {
            alias = "security_admin"
            role  = "SECURITYADMIN"
        }

        resource "snowflake_role" "role" {
            provider = snowflake.security_admin
            name     = "TF_DEMO_SVC_ROLE"
        }

        resource "snowflake_database_grant" "grant" {
            provider          = snowflake.security_admin
            database_name     = snowflake_database.db.name
            privilege         = "USAGE"
            roles             = [snowflake_role.role.name]
            with_grant_option = false
        }

        resource "snowflake_schema" "schema" {
            provider   = snowflake.sys_admin
            database   = snowflake_database.db.name
            name       = "TF_DEMO"
            is_managed = false
        }

        resource "snowflake_schema_grant" "grant" {
            provider          = snowflake.security_admin
            database_name     = snowflake_database.db.name
            schema_name       = snowflake_schema.schema.name
            privilege         = "USAGE"
            roles             = [snowflake_role.role.name]
            with_grant_option = false
        }

        resource "snowflake_warehouse_grant" "grant" {
            provider          = snowflake.security_admin
            warehouse_name    = snowflake_warehouse.warehouse.name
            privilege         = "USAGE"
            roles             = [snowflake_role.role.name]
            with_grant_option = false
        }

        resource "tls_private_key" "svc_key" {
            algorithm = "RSA"
            rsa_bits  = 2048
        }

        resource "snowflake_user" "user" {
            provider          = snowflake.security_admin
            name              = "tf_demo_user"
            default_warehouse = snowflake_warehouse.warehouse.name
            default_role      = snowflake_role.role.name
            default_namespace = "${snowflake_database.db.name}.${snowflake_schema.schema.name}"
            rsa_public_key    = substr(tls_private_key.svc_key.public_key_pem, 27, 398)
        }

        resource "snowflake_role_grants" "grants" {
            provider  = snowflake.security_admin
            role_name = snowflake_role.role.name
            users     = [snowflake_user.user.name]
        }
  3. To get the public and private key information for the application, use Terraform output variables. Add the following resources to a new file named outputs.tf:
      output "snowflake_svc_public_key" {
        value = tls_private_key.svc_key.public_key_pem
      }

      output "snowflake_svc_private_key" {
        # Mark as sensitive so Terraform doesn't print the private key in CLI output
        value     = tls_private_key.svc_key.private_key_pem
        sensitive = true
      }
Commit Changes to Source Control

$ git checkout -b svcuser
$ git add main.tf
$ git add outputs.tf
$ git commit -m "Add Service User, Schema, Grants"
$ git push origin HEAD

You can log in to GitHub to create the Pull Request and merge the changes. Pull Requests allow for process controls for reviews, and the centralized git repo can automatically trigger CI/CD workflows as desired.

To simulate the CI/CD pipeline, we can apply the changes to bring the account in line with the desired state.

  1. From a shell in your project folder (with your Account Information in environment) run:
    $ terraform apply
  2. Accept the changes if they look appropriate.
  3. Log in to the console to see all the changes complete.

Because all changes are stored in source control and applied by CI/CD, you can get a history of all your environment changes. You can put compliance controls into place and limit authorization to directly manage the environments to fewer users. This also makes it easy to bring up new environments that are identical to others in a timely manner without managing SQL scripts.

You're almost done with the demo. We have one thing left to do: clean up your account.

Destroy all the Terraform Managed Resources

  1. From a shell in your project folder (with your Account Information in environment) run:
    $ terraform destroy
  2. Accept the changes if they look appropriate.
  3. Log in to the console to verify that all the objects are destroyed. The database, schema, warehouse, role, and the user objects created by Terraform will be automatically deleted.

Drop the User we added

   DROP USER "tf-snow";

If you are new to Terraform, there's still a lot to learn. We suggest researching remote state, input variables, and building modules. This will empower you to build and manage your Snowflake environment(s) through a simple declarative language.

The Terraform provider for Snowflake is an open-source project. If you need Terraform to manage a resource that the provider does not yet support, we welcome contributions! We also welcome issues and feedback on the GitHub project to help improve the provider and the overall experience.

Next steps

Other tools are available to manage things like data migrations and table schema changes. We often see Terraform paired with one of the following to meet all customer requirements.

If you had any issues with this project, you can pull a working version directly from GitHub.
