Snowflake Trail is Snowflake's suite of observability capabilities that enables users to better monitor, troubleshoot, debug, and take action on pipelines, apps, user code, and compute utilization.

Snowflake Trail Infographic

As you can see above, Snowflake Trail utilizes core observability data — logs, metrics, traces, events, alerts, and notifications — to provide comprehensive workload monitoring across AI, applications, pipelines, and infrastructure.

This quickstart is intended to help tie together all the components of Snowflake Trail, and in turn, help you get started with Observability in Snowflake. While this quickstart will walk you through the basics of enabling and viewing telemetry, you will need to dive deeper into each area in order to fully understand Snowflake Trail. So wherever possible, links will be provided to additional quickstarts, documentation, and resources.

What You'll Learn:

What You'll Need:

What You'll Build:

Observability in Snowflake comes in two main categories: System Views and Telemetry.

System Views provide historical data about your Snowflake account through views and table functions in the following schemas:

  1. The Snowflake Information Schema (INFORMATION_SCHEMA), available in every Snowflake database
  2. The Account Usage schemas (ACCOUNT_USAGE and READER_ACCOUNT_USAGE) in the shared SNOWFLAKE database
  3. The Organization Usage schema (ORGANIZATION_USAGE) in the shared SNOWFLAKE database
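For example, the Account Usage views can be queried like ordinary views. This is a minimal sketch, assuming your role has been granted access to the shared SNOWFLAKE database (note that Account Usage views can have up to 45 minutes of latency):

```sql
-- Ten most recent queries from the last day, via the ACCOUNT_USAGE schema
SELECT query_id, query_text, total_elapsed_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY start_time DESC
LIMIT 10;
```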

Telemetry data, on the other hand, is delivered exclusively through event tables. An event table is a special kind of database table with a predefined set of columns that follows the data model of OpenTelemetry, a leading industry standard for collecting and structuring observability data across systems.

This distinction is important because of the default behavior of each:

By default, Snowflake includes a predefined event table (SNOWFLAKE.TELEMETRY.EVENTS) that is used if you don't specify an active event table. You can also create your own event tables for specific uses.
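You can check which event table is active, and optionally switch to your own. A sketch, where the database, schema, and table names are illustrative placeholders:

```sql
-- Check which event table is currently active for the account
SHOW PARAMETERS LIKE 'EVENT_TABLE' IN ACCOUNT;

-- (Optional) Create a custom event table and make it the active one
CREATE EVENT TABLE my_db.my_schema.my_events;
ALTER ACCOUNT SET EVENT_TABLE = my_db.my_schema.my_events;
```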

Most telemetry levels can be set at the account, object, or session level. While many of these levels can be set via Snowsight, some require SQL commands for full flexibility.

For this quickstart, we will focus on enabling telemetry at the account level. We will show you both the SQL and Snowsight methods, and use the default table.

Complete one of the following:

(Option 1): Setting Telemetry Levels via Snowsight

You can use Snowsight to set telemetry levels at the account level.

  1. Sign in to Snowsight.
  2. In the navigation menu, select Monitoring » Traces and Logs.
  3. On the Traces & Logs page, select Set Event Level.
  4. For Set logging & tracing for, ensure Account is selected.
  5. Set your desired levels:
    1. For All Events, select On
    2. For Logs, select INFO
    3. Ensure all other fields show as On.
  6. Click Save.

You can see the Set Event Level dialog box below.

(Option 2): Setting Telemetry Levels via SQL

  1. Open a new SQL worksheet or a workspace.
  2. Run the following:
-- Switch to ACCOUNTADMIN
USE ROLE ACCOUNTADMIN;

-- Set account level values
ALTER ACCOUNT SET LOG_LEVEL = 'INFO'; 
ALTER ACCOUNT SET METRIC_LEVEL = 'ALL'; 
ALTER ACCOUNT SET TRACE_LEVEL = 'ALWAYS'; 

Note that valid and default values are as follows:

| Level | Valid Values | Default Value |
| --- | --- | --- |
| LOG_LEVEL | TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF | OFF |
| METRIC_LEVEL | ALL, NONE | NONE |
| TRACE_LEVEL | ALWAYS, ON_EVENT, OFF | OFF |
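These parameters can also be verified after the fact, and scoped more narrowly than the account. A sketch, where the procedure name is an illustrative placeholder:

```sql
-- Confirm the account-level settings took effect
SHOW PARAMETERS LIKE 'LOG_LEVEL' IN ACCOUNT;
SHOW PARAMETERS LIKE 'TRACE_LEVEL' IN ACCOUNT;

-- Object level: enable DEBUG logs for a single procedure (hypothetical name)
ALTER PROCEDURE my_db.my_schema.my_proc() SET LOG_LEVEL = 'DEBUG';

-- Session level: record traces only when a span event is captured
ALTER SESSION SET TRACE_LEVEL = 'ON_EVENT';
```

More specific settings override broader ones, so a DEBUG level on one procedure can coexist with an INFO level for the rest of the account.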

Additional Resources

A trace represents the complete execution path of a request through your Snowflake workloads. It provides a detailed view of how operations flow through different components, helping you understand performance bottlenecks, dependencies, and execution patterns. Each trace is made up of one or more spans, where each span represents a single operation within the trace (like a SQL query, UDF execution, or procedure call).

Why Traces Are Useful

Traces help you:

Accessing Traces in Snowsight

The easiest way to get started with traces is through the Trace Explorer UI in Snowsight:

  1. Navigate to Monitoring » Traces and Logs
  2. (Optional) Use filters to narrow down the returned results.

You'll now have a list of all the traces for your event table. The Trace Explorer interface shows a list of traces with key information such as Date, Duration, Trace Name, Status, and number of Spans in the trace.

A screenshot of the Trace Explorer console showing a list of traces

You can now click on any trace to view its spans in detail.
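Because traces land in the event table, you can also query spans directly with SQL. A sketch against the default event table:

```sql
-- Spans recorded in the default event table over the last hour
SELECT timestamp,
       record:name::STRING AS span_name,
       trace:trace_id::STRING AS trace_id
FROM SNOWFLAKE.TELEMETRY.EVENTS
WHERE record_type = 'SPAN'
  AND timestamp > DATEADD('hour', -1, CURRENT_TIMESTAMP())
ORDER BY timestamp DESC;
```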

Viewing Trace Details

When you click on a trace, you'll see a detailed view showing the timeline of all spans within that trace. This view allows you to:

A screenshot of the Trace Explorer console showing the component spans.

Viewing Span Details

Clicking on any individual span in the trace opens a sidebar with four tabs: Details, Span Events, Related Metrics, and Logs.

Details

The Details tab shows info and attributes about the selected span, including Trace ID, Span ID, Duration, Type, Warehouse, and more.

A screenshot of the Trace Explorer console showing details about the span step

Span Events

The Span Events tab shows details of events recorded within the span.

A screenshot of the Trace Explorer console showing details about span events

Related Metrics

The Related Metrics tab shows CPU and memory metrics related to the span.

A screenshot of the Trace Explorer console showing details about related metrics

Logs

The Logs tab shows logs directly related to the trace.

A screenshot of the Trace Explorer console showing details about related logs

This detailed information helps you understand exactly what happened during each operation and identify optimization opportunities.

Additional Resources

Logs are structured records of events that occur during the execution of your Snowflake workloads. They provide detailed information about what happened during code execution, including informational messages, warnings, errors, and debug information. Logs are essential for troubleshooting issues, understanding application behavior, and monitoring the health of your systems.

Why Logs Are Useful

Logs help you:

Accessing Logs in Snowsight

To view logs in Snowsight:

  1. Navigate to Monitoring » Traces & Logs
  2. Click on the Logs tab to switch from the default traces view.
  3. (Optional) Use the filters to find specific logs. For example:
    • Time Range can be set either by using the drop-down or by clicking on the graph.
    • Severity can be used to select specific log levels (DEBUG, WARN, etc).
    • Languages allows filtering by handler code language (Python, Java, etc).
    • Database allows filtering by specific procedures, functions, or applications.
    • Record allows selecting Logs, Events, or All.

Log Details

By default, you will be shown a list of logs sorted by timestamp.

A screenshot showing a list of logs in the Snowsight console

You can also click on any log entry to bring up a sidebar with more details, including the full log text.

A screenshot showing details of one specific log in the sidebar
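Like traces, logs can be queried straight from the event table. A sketch that pulls recent warnings and errors from the default event table:

```sql
-- Recent WARN/ERROR/FATAL logs from the default event table
SELECT timestamp,
       record:severity_text::STRING AS severity,
       value::STRING AS message
FROM SNOWFLAKE.TELEMETRY.EVENTS
WHERE record_type = 'LOG'
  AND record:severity_text::STRING IN ('WARN', 'ERROR', 'FATAL')
ORDER BY timestamp DESC
LIMIT 50;
```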

Additional Resources

Query History provides a comprehensive view of all SQL queries executed in your Snowflake account. It's one of the most important tools for monitoring, troubleshooting, and optimizing database performance. Query History shows detailed information about query execution, performance metrics, and resource usage patterns.

Why Query History Is Useful

Query History helps you:

Query history can be viewed as Individual Queries or Grouped Queries.

Accessing Query History in Snowsight (Individual Queries)

To view Query History in Snowsight:

  1. Navigate to Monitoring » Query History
  2. (Optional) Use the filters to find specific queries:
    • Status: Filter by execution status (Success, Failed, etc.)
    • User: Filter by specific users
    • Time Range: Filter by execution time
    • Filters: Various other filters to help you find a specific query.

When you click on any query in the history, you'll see three main tabs with detailed information:

A screenshot of the Query History UI in Snowsight

Query Details

The Query Details tab shows details about the query run (status, duration, ID, etc), the SQL text of the query run, and the query results.

A screenshot of the detailed view of one query in Query History

Query Profile

The Query Profile tab provides a visual representation of query execution, which provides critical details for debugging and optimizing complex queries.

A screenshot of the Query Profile showing the execution tree visualization

For a list of all possible fields, see the Snowflake documentation on Query Profile.

Query Telemetry

The Query Telemetry tab shows the same telemetry data as the Trace Explorer.

A screenshot of the Query Telemetry tab showing traces and spans for a query.
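Query history is also available via SQL, which is handy for programmatic monitoring. A sketch using the INFORMATION_SCHEMA table function (results are limited to queries your role is entitled to see):

```sql
-- Queries that finished in the last hour, slowest first
SELECT query_id, query_text, execution_status, total_elapsed_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
    END_TIME_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP()),
    RESULT_LIMIT => 100))
ORDER BY total_elapsed_time DESC;
```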

Accessing Grouped Query History in Snowsight

Snowsight also provides Grouped Query History, which aggregates similar queries together:

  1. Navigate to Monitoring » Query History
  2. Click on the Grouped Queries tab

This feature helps you:

A screenshot of the Grouped Queries tab in Snowsight

By clicking into a single grouped query, you can see detailed information about execution count, duration, and more.

A screenshot of the detailed view for one grouped query

Additional Resources

Copy History provides comprehensive monitoring for all data loading activities in your Snowflake account. It tracks operations from COPY INTO commands, Snowpipe, and Snowpipe Streaming, giving you visibility into data ingestion performance, errors, and throughput patterns.

Why Copy History Is Useful

Copy History helps you:

Accessing Copy History in Snowsight

To view Copy History in Snowsight:

  1. Navigate to Ingestion » Copy History
  2. (Optional) Use the filters to narrow down activity by status, database, pipe, and more.

Copy Operations

Each copy operation entry shows details such as status, target table, pipe, data size, and number of rows loaded.

A screenshot of the Copy History page in Snowsight showing filter options

Click on any operation to see detailed information about that operation/target table, such as individual file details and status.

A screenshot of the Copy History view for a single table in Snowsight showing filter options
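The same information is available via SQL through the COPY_HISTORY table function. A sketch, where the table name is an illustrative placeholder:

```sql
-- Load activity for one target table over the last day
SELECT file_name, status, row_count, row_parsed, first_error_message
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
    TABLE_NAME => 'MY_DB.MY_SCHEMA.MY_TABLE',
    START_TIME => DATEADD('day', -1, CURRENT_TIMESTAMP())));
```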

Additional Resources

Task History provides monitoring and observability for Snowflake Tasks, which are scheduled SQL statements or procedures that run automatically. Tasks are essential for building data pipelines, ETL processes, and automated maintenance operations. Task History helps you monitor task execution, troubleshoot failures, and optimize task performance.

Why Task History Is Useful

Task History helps you:

Accessing Task History in Snowsight

To view Task History in Snowsight:

  1. Navigate to Transformation » Tasks
  2. (Optional) Use the filters to narrow down activity by status, database, and more.

Task History can be viewed as either Task Graphs or Task Runs.

Task Graphs

The Task Graphs view groups related tasks together in a directed acyclic graph (DAG) that shows the relationship between the root task and any dependent tasks. Each row shows the root task name, schedule, recent run history, and more.

A screenshot showing the Tasks Graphs UI in Snowsight

Click on any task execution to see detailed information including child task names, status, duration, and more. The key value of this view is the ability to see the dependencies between tasks.

A screenshot showing the task graph details

Task Runs

The Task Runs view shows individual task executions without grouping them under a parent task.

A screenshot showing the list of task runs

Clicking into any task run will bring you to the Run History for that task.

A screenshot showing the task runs for a single task
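Task history can also be queried with SQL, which is useful for alerting on failed runs. A sketch using the INFORMATION_SCHEMA table function:

```sql
-- Task runs from the last day, including any error messages
SELECT name, state, scheduled_time, completed_time, error_message
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(
    SCHEDULED_TIME_RANGE_START => DATEADD('day', -1, CURRENT_TIMESTAMP()),
    RESULT_LIMIT => 100));
```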

Additional Resources

Dynamic Tables are a table type that automatically materializes the results of a query and keeps them updated as the underlying data changes. They combine the simplicity of views with the performance of materialized data, automatically managing refresh operations. Dynamic Tables monitoring helps you track refresh performance, data freshness, and resource usage.

Why Dynamic Tables Monitoring Is Useful

Dynamic Tables monitoring helps you:

Accessing Dynamic Tables in Snowsight

To view Dynamic Tables monitoring in Snowsight:

  1. Navigate to Transformation » Dynamic Tables
  2. (Optional) Use the filters to narrow down by refresh status and database.

Dynamic Table Refreshes

For each Dynamic Table, you can see information such as status, target lag, database, and more.

A screenshot showing the Dynamic Tables UI

Clicking on any table will bring you to the graph view for that table.

A screenshot showing the graph view of a dynamic table
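Refresh history for Dynamic Tables can also be retrieved with SQL. A sketch using the INFORMATION_SCHEMA table function:

```sql
-- Recent dynamic table refreshes, most recent first
SELECT name, state, refresh_start_time, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY())
ORDER BY refresh_start_time DESC;
```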

Additional Resources

AI Observability in Snowflake provides monitoring and insights for AI/ML workloads, including Cortex AI functions and model inference operations. As AI becomes increasingly integrated into data workflows, observability helps ensure AI operations are performing reliably and cost-effectively.

AI Observability has the following features:

Additional Resources

Congratulations! You have successfully explored the comprehensive observability capabilities available in Snowflake Trail. By following this quickstart, you've gained hands-on experience with the key components that make up Snowflake's observability platform.

What You Learned

Through this quickstart, you have learned how to:

Possible Next Steps

Now that you have explored the basics of Snowflake Trail, consider these possible next steps:

Related Resources

Core Documentation

Quickstart Guides