The Cost Optimization Pillar focuses on integrating financial accountability and cost awareness throughout the cloud platform lifecycle. It involves establishing principles, gaining visibility into spending, implementing controls, and continuously optimizing resources to align cloud costs with business value. This pillar is essential for financial stakeholders, cloud architects, and engineering teams seeking to maximize return on investment in cloud infrastructure.
Connect cloud spending directly to business outcomes, ensuring that every dollar spent in the cloud contributes meaningfully to strategic objectives and demonstrable value. Embed cost considerations directly into platform planning and architecture decisions.
You can neither control nor optimize what you can't see. Gain deep and granular insight into all aspects of your cloud spending, fostering transparency and attributing costs effectively.
Establish policies and mechanisms to govern resource provisioning and consumption, preventing unnecessary costs and enforcing financial boundaries.
Continuously improve the efficiency of your resources and workloads to maximize the value derived from your platform investment.
The following key recommendations are covered within each principle of Cost Optimization:
To maximize organizational outcomes, Snowflake consumption must be explicitly tied to business value. While cost optimization ensures efficiency, it does not guarantee that spend is aligned with outcomes that matter to stakeholders. Aligning business value to cost ensures that workloads, pipelines, dashboards, and advanced analytics are continuously evaluated not only for cost but also for the value they deliver. This approach positions Snowflake as a strategic business platform rather than a technical expense.
Business value to cost alignment represents a maturity step in FinOps on Snowflake. By embedding benchmarking, impact analysis, SLA definition, usage metrics, ROI measures, and business impact evaluation into daily operations, organizations can ensure that Snowflake consumption is continuously justified, optimized, and communicated in business terms. This elevates the conversation with leadership from cost oversight to value realization and ensures that Snowflake is understood as a platform for growth, innovation, and competitive advantage.
Cost-Aware Architecting is the practice of embedding financial accountability directly into the design and development of Snowflake workloads. By shifting left—introducing cost considerations early in the architecture lifecycle—organizations ensure that ingestion, transformation, analytics, and distribution workloads are not only performant but also aligned with budget expectations. Many cost overruns in Snowflake originate from architectural decisions made without cost implications in mind.
For example, designing ingestion with sub-second latency when daily freshness is sufficient, or selecting inefficient table designs that increase query scanning, can lead to disproportionate spend. Shifting cost awareness into architecture helps prevent inefficiencies before they occur and reinforces Snowflake's role as a cost-effective enabler of business value.
At the ingestion layer, best practices include balancing latency versus cost by selecting appropriate services (e.g., Snowpipe, Snowpipe Streaming, or third-party tools) and choosing the right storage format (e.g., native tables, Iceberg). For transformations, design with frequency versus SLA in mind to ensure data freshness matches the business need. For analytics, apply schema design best practices such as thoughtful clustering key choices and pruning strategies to reduce consumed credits. In distribution, optimize data transfer by monitoring egress patterns and applying cost-saving practices like the Snowflake Data Transfer Optimizer.
To maximize organizational outcomes, Snowflake consumption must be explicitly tied to measurable business value and clearly communicated in terms that resonate with stakeholders. Establishing baselines using Snowflake's Account Usage views creates a reference point, while tracking the current state highlights trends in performance and consumption. Defining explicit goal states—such as reduced cost per decision, improved time-to-market, or broader data access—ties workloads directly to outcomes that matter to stakeholders. Outliers that diverge from these goals should be flagged for review and optimization to prevent wasted resources. Best practices include applying unit economic measures related to your field (e.g. cost per terabyte analyzed or cost per fraud case prevented) and publishing ROI dashboards that continuously link Snowflake consumption to business outcomes. By incorporating measurement into daily operations, organizations can move the conversation with leadership from cost oversight to demonstrable value realization, positioning Snowflake as a clear enabler of enterprise growth and innovation.
Defining SLAs or explicit business needs ensures that Snowflake workloads are aligned with their intended purpose and that consumption levels are justified by business outcomes. Some Snowflake workloads can become over-engineered or maintained without clear justification. Tying each workload to an SLA or business requirement prevents waste and ensures that investment aligns with value. Before implementation, it is crucial to document and align on the value of meeting an SLA, identifying all stakeholders who rely on the workload. This includes differentiating between tangible outcomes, such as increased revenue, and intangible outcomes, such as compliance or data trust. Efficient customers use both Snowflake Resource Monitors and Budgets features to enforce guardrails that ensure workloads remain within acceptable cost-performance boundaries. All design decisions have trade-offs, and explicitly calling out the expected outcomes leads to streamlined decision-making in the future when outcomes are reviewed.

Benchmarking establishes performance and cost baselines for Snowflake workloads and compares them against internal standards as well as performance and cost results from previous tech solutions. These benchmarks can measure workload efficiency, the adoption of specific Snowflake features, and the alignment of workload costs to business outcomes. Without benchmarks, organizations lack the context to determine if their Snowflake consumption is delivering economies of scale or value back to the business. Benchmarking allows teams to identify best practices, track improvements over time, and highlight outliers that may be driving unnecessary spend or delivering unexpected value.
Best practices include measuring technical unit economic metrics (e.g. credits per 1K queries, credits per 1 TB scanned), warehouse efficiency and utilization by workload type, and business unit economics (e.g. credits per customer acquired, credits per partner onboarded, or credits per data product-specific KPIs). This provides a more comprehensive picture of consumption in relation to cost and value. Outliers should be highlighted in executive communications as either success stories or cautionary examples. Benchmarking should be embedded in a continuous improvement loop, where insights drive action, action improves efficiency, and those improvements are effectively measured.
The Snowflake Visibility principle is designed to transform opaque cloud spending into actionable insights, fostering financial accountability and maximizing business value within your Snowflake environment. It is foundational to the FinOps framework, as you cannot control, optimize, or attribute business value to what you cannot see. To effectively manage and optimize cloud costs in Snowflake, it's crucial to align organizationally to an accountability structure of spend, gain deep and granular insight into all aspects of your cloud spending, and transparently display it to the appropriate stakeholders to take action.
Implementing a robust FinOps visibility framework in Snowflake, supported by cross-functional collaboration, enables each business function to access timely and relevant usage and cost data. This empowers them to understand the business impact of their consumption and take prompt action when anomalies arise. To meet this vision, consider the following recommendations based on industry best practices and Snowflake's capabilities:
It is essential to review Snowflake's billing models to align technical and non-technical resources on financial drivers and consumption terminology. Snowflake's elastic, credit-based consumption model charges separately for compute (Virtual Warehouses, Compute Pools, etc.), storage, data transfer, and various serverless features (e.g., Snowpipe, Automatic Clustering, Search Optimization, Replication/Failover, AI Services). Understanding the interplay of these billing types ensures you can attribute costs associated with each category's unique usage parameters. The high-level categories are compute, storage, data transfer, and serverless services.
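These billing categories can be surfaced directly from the metering views; below is a minimal sketch using ACCOUNT_USAGE.METERING_DAILY_HISTORY (the 30-day window is an arbitrary assumption):

```sql
-- Daily credit consumption by billing category (warehouse compute, serverless
-- features, AI services, etc.) over the last 30 days.
SELECT
    usage_date,
    service_type,
    SUM(credits_used) AS credits_used
FROM snowflake.account_usage.metering_daily_history
WHERE usage_date >= DATEADD('day', -30, CURRENT_DATE())
GROUP BY usage_date, service_type
ORDER BY usage_date, credits_used DESC;
```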
Implementing robust and organizationally consistent tagging and labeling strategies across all resources (e.g. storage objects, warehouses, accounts, queries) is crucial for accurately allocating costs to specific teams, products, or initiatives and for linking actions to outcomes.
Tagging in Snowflake
Tagging can be done at several levels:
Tagging models
In the initial setup of a business unit or use case, it is important to consider the model for tagging costs within the platform via shared or dedicated resources. These fall into three large buckets:

Each model has its pros and cons, including how to handle concepts such as idle time or whether to show/charge back attributed or billed credits. Review each model before deploying resources. If an organization is caught between models, a common approach is to start in a shared resource environment and graduate to dedicated resources as the workload increases.
Tag enforcement
Clear and consistent naming conventions for accounts, warehouses, databases, schemas, and tables facilitate immediate cost understanding. Enforcing robust tagging policies (e.g., requiring specific tags for new resource creation and using automated scripts to identify untagged resources) is crucial for accurate data interpretation and effective cost management. Without tag enforcement, it is difficult to accurately allocate all costs and can require manual effort, like extensive tag-mapping tables. Tag values are enforced within an account, but if a multi-account strategy is needed for your organization, a tag database can be replicated and leveraged across all accounts to ensure consistent values are used. For best-in-class visibility, it is recommended to have a tagging strategy and tag all resources in an organization to allocate costs to relevant owners.
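A minimal sketch of a tag definition and an enforcement check, assuming a hypothetical governance.tags schema and a cost_center tag:

```sql
-- Define a cost-center tag with an allowed value list to keep values consistent.
CREATE TAG IF NOT EXISTS governance.tags.cost_center
  ALLOWED_VALUES 'FINANCE', 'MARKETING', 'DATA_ENGINEERING';

-- Attach the tag to a warehouse so its credits can be allocated to an owner.
ALTER WAREHOUSE analytics_wh SET TAG governance.tags.cost_center = 'MARKETING';

-- Enforcement check: warehouses that consumed credits recently but carry no cost_center tag.
SELECT m.warehouse_name
FROM snowflake.account_usage.warehouse_metering_history AS m
LEFT JOIN snowflake.account_usage.tag_references AS t
       ON t.domain = 'WAREHOUSE'
      AND t.object_name = m.warehouse_name
      AND t.tag_name = 'COST_CENTER'
WHERE m.start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
  AND t.tag_name IS NULL
GROUP BY m.warehouse_name;
```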
To effectively manage Snowflake spend and align business structure to technical resources, you should implement a system of showback or chargeback. This approach is crucial for promoting accountability and optimizing resource usage, as it establishes a single accountable owner for each object within the platform.
Showback
If cost accountability models have not been implemented previously, consider a showback model. This involves transparently reporting Snowflake costs to different departments or projects to raise awareness of their costs. Showing each team its monthly consumption (broken down by warehouse usage, query costs, storage, and so on) encourages a cost-conscious culture. This initial step helps teams understand the financial impact of their actions without the immediate pressure of budget cuts. Tools like Snowflake's built-in Cost Management UI and budget views, third-party cost management platforms, or custom dashboards can be used to provide these reports.
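A minimal showback sketch that rolls recent warehouse credits up to a cost_center tag (the tag name and the 'UNTAGGED' bucket are assumptions):

```sql
-- Monthly warehouse credits per cost center for showback reporting.
SELECT
    DATE_TRUNC('month', m.start_time)  AS usage_month,
    COALESCE(t.tag_value, 'UNTAGGED')  AS cost_center,
    SUM(m.credits_used)                AS credits_used
FROM snowflake.account_usage.warehouse_metering_history AS m
LEFT JOIN snowflake.account_usage.tag_references AS t
       ON t.domain = 'WAREHOUSE'
      AND t.object_name = m.warehouse_name
      AND t.tag_name = 'COST_CENTER'
GROUP BY usage_month, cost_center
ORDER BY usage_month, credits_used DESC;
```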
Chargeback
For more financially mature organizations, a chargeback model can be very effective for managing costs. This system directly bills departments for their Snowflake usage. This creates a powerful financial incentive for teams to optimize their workloads. To make this transition smooth and fair, you need to define clear rules for cost allocation. By implementing chargeback, you turn each department into a financial stakeholder, encouraging them to right-size their warehouses, suspend them during idle periods, and write more efficient queries. This shift in accountability leads to a more disciplined and cost-effective use of your Snowflake environment.
In either case, having a centralized dashboard or visual that all business units can review intra-period is critical for financial accountability and next-step actions.
The most mature FinOps customers are those who programmatically and strategically drive consumption insights across the business. This involves three core elements:
Track usage data for all platform resources
To deliver clear and actionable consumption insights, it is essential to leverage the rich usage data that Snowflake natively provides. The foundation for all cost visibility is the SNOWFLAKE database, which contains two key schemas for this purpose: ACCOUNT_USAGE (for granular, account-level data) and ORGANIZATION_USAGE (for a consolidated view across all accounts).
| Metric Category | Description | Key Metrics | Primary Data Sources |
| --- | --- | --- | --- |
| Compute & query metrics | Understand the cost of query execution, warehouse consumption, and overall compute health. These are often the most dynamic and largest portion of your spend. | Credits used: total credits consumed by individual warehouses. Query performance: execution time, bytes scanned, and compilation time for specific queries or parameterized query hashes. Warehouse health: % idle time, queueing, spilling, and concurrency to identify under- or over-provisioned warehouses. | ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY: provides hourly credit usage for individual virtual warehouses, including attributed and total compute credits, which can be used to calculate idle time %. ACCOUNT_USAGE.QUERY_HISTORY: offers detailed data on every query executed, including performance metrics and associated warehouses. |
| Storage metrics | Insight into costs associated with compressed data, including active data, historical data (Time Travel), and disaster recovery data (Fail-safe). | Storage volume: the average monthly compressed data volume stored. Inactive storage: bytes consumed by Time Travel and Fail-safe data, which are common areas to review for optimization. Storage growth rates: the rate of increase, used to forecast future storage costs. Table access: frequency of access to tables and views, used to identify unused data. | ACCOUNT_USAGE.TABLE_STORAGE_METRICS: details table-level storage utilization, including active, Time Travel, and Fail-safe bytes. ACCOUNT_USAGE.ACCESS_HISTORY: records object access patterns to identify unused tables or views that can be archived or deleted. |
| Serverless & AI metrics | Track the consumption of credits by automated, Snowflake-managed services and AI features. | Credits used by service: consumption broken down by specific services like Snowpipe, Automatic Clustering, Search Optimization, or Cortex AI features. Cost per credit-consuming event: identify specific events that trigger high credit usage and develop a cost per event within these services (e.g., cost per DML statement for Automatic Clustering). | ACCOUNT_USAGE.<Serverless Feature>_HISTORY: a specific view bespoke to each serverless feature's usage history. ORGANIZATION_USAGE.METERING_DAILY_HISTORY: provides daily credit usage categorized by service type (e.g., Compute, Storage, Snowpipe, AI Services). |
| Data transfer | Track the cost of moving data into (ingress) and out of (egress) Snowflake. Costs are typically incurred when data crosses cloud provider regions or different cloud providers. | Bytes transferred: the total volume of data moved between regions or clouds, which is the basis for billing. | ACCOUNT_USAGE.DATA_TRANSFER_HISTORY: reviews data transfer charges broken down by individual transfer cause. ORGANIZATION_USAGE.DATA_TRANSFER_DAILY_HISTORY: organization-wide daily view of data transfer charges across all accounts, regardless of transfer type. |
| Financial metrics | Translate credit consumption into currency and provide a high-level, organization-wide view of spending. | Overall dollar spend: daily credit usage converted into your billing currency. | ORGANIZATION_USAGE.USAGE_IN_CURRENCY_DAILY: provides daily credit usage converted into currency, which is paramount for financial reconciliation; also includes non-resource-based billing (e.g., rebates and Priority Support). ORGANIZATION_USAGE.RATE_SHEET_DAILY: details adjusted billing prices based on negotiated capacity discounts across service types. |
Normalize consumption with unit economic metrics
For organizations to achieve comprehensive financial visibility, it is recommended best practice to move beyond tracking aggregate spend and implement Unit Economics Metrics. Unit economics provides a powerful methodology for normalizing cloud consumption by tying platform costs to specific business or operational drivers. This per-unit approach helps you understand cost efficiency, measure the ROI of your initiatives, and make data-driven decisions about resource allocation and optimization. By translating abstract credit consumption into tangible metrics, you can empower technical and business teams with a shared language for discussing value and cost. These metrics are commonly tracked across time to show changes in efficiency or business impact.
Efficiency metrics (technical KPIs)
Efficiency Metrics are technical Key Performance Indicators (KPIs) that connect cloud costs directly to platform operations and workloads. They are crucial for engineering teams and platform owners to identify inefficiencies, optimize resource usage, and understand the cost drivers of the data platform itself. These metrics provide the granular, operational view needed to manage the platform's performance day-to-day. Some common examples include:

Customers can track warehouse credits per thousand queries within a use case to see how efficiency has evolved over time and determine whether they are achieving economies of scale, as in the sketch below.
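A minimal sketch of this metric from WAREHOUSE_METERING_HISTORY and QUERY_HISTORY (the 30-day window is an assumption):

```sql
-- Credits per 1,000 queries per warehouse over the last 30 days.
WITH credits AS (
    SELECT warehouse_name, SUM(credits_used) AS credits_used
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
), queries AS (
    SELECT warehouse_name, COUNT(*) AS query_count
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
      AND warehouse_name IS NOT NULL
    GROUP BY warehouse_name
)
SELECT c.warehouse_name,
       c.credits_used,
       q.query_count,
       ROUND(c.credits_used / NULLIF(q.query_count, 0) * 1000, 2) AS credits_per_1k_queries
FROM credits c
JOIN queries q USING (warehouse_name)
ORDER BY credits_per_1k_queries DESC;
```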
Business metrics (business KPIs)
Business Metrics link cloud spending to meaningful business outcomes and value drivers. These KPIs are essential for executives, finance teams, and product managers to understand the return on investment (ROI) of cloud expenditure and to allocate costs accurately across different parts of the organization. They answer the critical question: "What business value are we getting for our cloud spend?" Examples include:

If Snowflake is in the value chain for orders, the cost per order can be a good metric to tie Snowflake consumption to Business Demand Drivers.
Visualize metrics with Snowsight tools and external BI tools
A critical component of cost governance is the effective visualization of spending and usage data. Raw data, while comprehensive, is often difficult to interpret and act upon. By translating cost and usage metrics into interactive dashboards and reports, you can empower stakeholders—from engineers to executives—to understand spending patterns, troubleshoot, and make informed decisions. A multi-layered approach can be used to track meaningful cost metrics.
Cost Anomaly Detection is a critical component of visibility that leverages machine learning to continuously monitor credit consumption against historical spending patterns, automatically flagging significant deviations from the established baseline. This proactive monitoring is essential for preventing budget overruns and identifying inefficiencies, shifting the organization from a reactive to a proactive cost management posture to mitigate financial risk. As a best practice, you should initially review anomaly detection on the entire account to gain a broad view, then dive deeper into a more granular review for individual high-spend warehouses. This approach allows for more targeted analysis and assigns clear ownership for investigating any flagged anomalies. There are several methods for anomaly detection supported by Snowflake:
Cost Anomalies in Snowsight
Snowsight, Snowflake's primary web interface, offers a dedicated Cost Management UI that allows users to visually identify and analyze the details of any detected cost anomaly. The importance of this intuitive visual interface lies in its ability to make complex cost data accessible to a wide range of stakeholders, enabling rapid root cause analysis by correlating a cost spike with specific query history or user activity. One of the tabs in this UI is the Cost Anomaly Detection tab, which enables you to view cost anomalies at the organization or account level and explore the top warehouses or accounts driving this change. To foster a culture of cost awareness and accountability, it is a best practice to ensure there is an owner for an anomaly detected in the account and set up a notification (via email) in the UI itself to ensure that cost anomalies are quickly and accurately investigated.
Programmatic Cost Anomaly Detection
For deeper integration and automation, organizations can review anomalies programmatically using the SQL functions and views available within the SNOWFLAKE.LOCAL schema. This approach is important for enabling automation and scalability, allowing cost governance to be embedded directly into operational workflows, such as feeding anomaly data into third-party observability tools or triggering automated incident response playbooks. A key best practice is to utilize this programmatic access to build custom reports and dashboards that align with specific financial reporting needs and to create advanced, automated alerting mechanisms that pipe anomaly data into established operational channels, such as Slack or PagerDuty.
Custom Anomaly Detection & Notification
Although anomalies are detected at the account and organization level, if you desire to detect anomalies at lower levels (e.g. warehouse or table), it is recommended to leverage Snowflake's Anomaly Detection ML class and pair it with a Snowflake alert to notify owners of more granular anomalies that occur within the ecosystem. This ensures all levels of Snowflake cost can be monitored in a proactive and effective way. As a best practice, notifications should be configured for a targeted distribution list that includes the budget owner, the FinOps team, and the technical lead responsible for the associated Snowflake resources, ensuring all stakeholders are immediately aware of a potential cost overrun and can coordinate a swift response.
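A minimal sketch of warehouse-level detection with the SNOWFLAKE.ML.ANOMALY_DETECTION class; the finops.daily_wh_credits_* views are hypothetical daily aggregations of WAREHOUSE_METERING_HISTORY for a single warehouse, and the alert wiring is omitted:

```sql
-- Train an unsupervised model on historical daily credits for one warehouse.
-- finops.daily_wh_credits_train is assumed to expose usage_ts (TIMESTAMP_NTZ
-- day bucket) and credits_used.
CREATE OR REPLACE SNOWFLAKE.ML.ANOMALY_DETECTION wh_credit_anomalies(
    INPUT_DATA        => TABLE(finops.daily_wh_credits_train),
    TIMESTAMP_COLNAME => 'usage_ts',
    TARGET_COLNAME    => 'credits_used',
    LABEL_COLNAME     => ''   -- empty for unsupervised training
);

-- Score recent days; anomalous rows can feed a Snowflake alert that notifies
-- the warehouse owner, the FinOps team, and the responsible technical lead.
CALL wh_credit_anomalies!DETECT_ANOMALIES(
    INPUT_DATA        => TABLE(finops.daily_wh_credits_recent),
    TIMESTAMP_COLNAME => 'usage_ts',
    TARGET_COLNAME    => 'credits_used'
);
```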
The Control principle of the Cost Optimization framework is designed to move organizations beyond cost reporting by establishing the necessary automated guardrails and governance policies to manage and secure Snowflake consumption proactively. This framework enforces financial governance by transforming cost visibility into tangible action, utilizing features like budgets and resource monitors to prevent uncontrolled growth and ensure consumption aligns strictly with organizational financial policies. Control is foundational for maximizing the value of the platform by ensuring disciplined and cost-effective resource utilization.
Implementing a comprehensive control framework, supported by features such as Resource Monitors, Budgets, and Tagging Policies, empowers organizations to enforce financial accountability and maintain budget predictability. By adopting these controls, teams can actively manage spend, quickly and automatically mitigate cost inefficiencies, and ensure the disciplined, cost-effective utilization of the entire Snowflake environment. The culmination of all of these controls leads to greater platform ROI and minimized financial risk. To meet this goal, consider the following recommendations based on industry best practices and Snowflake's capabilities:
To effectively manage and control Snowflake spend, it is essential to establish and enforce cost guardrails. Implementing a budgeting system is a key FinOps practice that promotes cost accountability and optimizes resource usage by providing teams with visibility into their consumption and the ability to set alerts and automated actions. Budgeting helps to prevent unexpected cost overruns and encourages a cost-conscious culture.
Set budgets permissions
To establish effective budgets, it's crucial to define roles and privileges by configuring the role, team, or user responsible for the resources. This ensures that budget tracking aligns with specific business units or projects, enabling accurate cost attribution and accountability. By linking consumption to the relevant stakeholders, you can create a clear showback or chargeback model, which is vital for fostering a sense of ownership over spending. This configuration should be part of a broader, consistent tagging strategy to ensure all costs are properly allocated to departments, environments, or projects.
Create budget categories
Categorizing costs is fundamental for granular budget management. You can establish budgets based on the account or create custom categories using Object Tags. Custom tags, such as those for a data product or cost center, are critical for accurately apportioning costs across different departments, lines of business, or specific projects. This granular approach provides a detailed breakdown of where spending occurs, enabling more precise control and informed decision-making regarding resource allocation. Implementing robust tagging policies and naming conventions ensures consistency and facilitates the interpretation of cost data. Because budgets are soft-limit objects, an object can be part of more than one budget if cost needs to be tracked from different perspectives (e.g., cost center and workload-level budgeting).
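A minimal sketch of a custom budget scoped to one data product, assuming the SNOWFLAKE.CORE.BUDGET class methods behave as shown and using hypothetical object names:

```sql
-- Create a custom budget for one data product or cost center.
CREATE SNOWFLAKE.CORE.BUDGET marketing_budget();

-- Monthly spending limit, expressed in credits.
CALL marketing_budget!SET_SPENDING_LIMIT(1000);

-- Attach the objects whose compute and serverless spend this budget should track.
CALL marketing_budget!ADD_RESOURCE(
    SYSTEM$REFERENCE('WAREHOUSE', 'MARKETING_WH', 'SESSION', 'applybudget'));
CALL marketing_budget!ADD_RESOURCE(
    SYSTEM$REFERENCE('TABLE', 'MARKETING.RAW.EVENTS', 'SESSION', 'applybudget'));
```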
Implement a notification strategy
Effective budget management relies on timely communication. Setting up alerting through emails or webhooks to collaboration tools like Slack and Microsoft Teams provides proactive notification to key stakeholders when spending approaches or exceeds a defined threshold. These alerts give teams an opportunity to review and adjust their usage before it leads to significant cost overruns. This capability positions organizations for financial success by catching potential overruns early through comprehensive monitoring and alerting.
Notifications are not limited to just budgets; Snowflake alerts can also be configured to systematically notify administrators of unusual or costly patterns, such as those listed in the Control and Optimize sections of the Cost Pillar. This ensures that key drivers of Snowflake consumption can be tracked and remediated proactively, even as the platform's usage grows.
Forecasting Snowflake consumption should be a strategic business function, not a mere technical prediction. The goal is to establish a transparent basis for budgeting and optimizing ROI by linking consumption directly to measurable business outcomes. In a dynamic, usage-based environment where compute costs are the most volatile element of the bill, a robust framework must integrate quantitative analysis of historical usage with qualitative insights into future business drivers. The following framework outlines how to build and maintain a comprehensive consumption forecast.
Establish the Baseline
This phase focuses on understanding the source of spend and establishing granular cost accountability.
Build the predictive model
This phase integrates historical trends with strategic business inputs to create forward-looking projections.
Operationalize and optimize
This phase links the forecast to continuous monitoring, governance, and proactive cost controls.
To effectively manage Snowflake expenditure and prevent unforeseen costs, it is crucial to implement a robust framework of resource controls. These controls act as automated guardrails, ensuring that resource consumption for compute, storage, and other services aligns with your financial governance policies. By proactively setting policies and remediating inefficiencies, you can maintain budget predictability and maximize the value of your investment in the platform.
Compute controls
Controlling compute consumption is often the most critical aspect of Snowflake cost management, as it typically represents the largest portion of spend. Snowflake offers several features to manage warehouse usage and prevent excessive costs.
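For example, a resource monitor with notify-then-suspend triggers might look like the following sketch (the quota and warehouse name are assumptions):

```sql
-- Monthly credit quota for the analytics warehouse with escalating actions.
CREATE OR REPLACE RESOURCE MONITOR analytics_rm
  WITH CREDIT_QUOTA = 500
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80  PERCENT DO NOTIFY             -- warn owners early
    ON 100 PERCENT DO SUSPEND            -- let running queries finish, block new ones
    ON 110 PERCENT DO SUSPEND_IMMEDIATE; -- hard stop for runaway spend

ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = analytics_rm;
```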
Storage Controls
While storage costs are generally lower than compute costs, they can grow significantly over time. Understanding the different components of storage cost and implementing policies to manage the types of storage is key to keeping these costs in check.
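A short sketch of two common storage controls, reduced Time Travel retention and transient staging tables (the object names and one-day retention are assumptions):

```sql
-- Reduce Time Travel retention on a high-churn staging table.
ALTER TABLE staging.raw_events SET DATA_RETENTION_TIME_IN_DAYS = 1;

-- Transient tables skip Fail-safe, avoiding seven extra days of storage cost
-- for data that can easily be reloaded from source.
CREATE TRANSIENT TABLE IF NOT EXISTS staging.tmp_orders (
    order_id  NUMBER,
    payload   VARIANT,
    loaded_at TIMESTAMP_NTZ
);
```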
Serverless Features
For serverless features, which do not use warehouse compute and therefore cannot leverage the Resource Monitor feature, we recommend setting up a budget. Budgets allow you to define a monthly spending limit on the compute costs for a Snowflake account or a custom group of Snowflake objects. When the spending limit is projected to be hit, a notification is sent. While Budgets do not explicitly allow you to suspend serverless features upon reaching a limit (the way that Resource Monitors do), Budgets can be configured to not only send emails, but also send notifications to a cloud message queue or other webhooks (such as Microsoft Teams, Slack, or PagerDuty). This then gives you the ability to trigger other actions for remediation.
To prevent uncontrolled spend as organizations scale, it's essential to have a clear management strategy for Snowflake resources, most notably, virtual warehouses. This strategy should encompass a defined provisioning process, ongoing object management, and automated platform enforcement to foster agility while maintaining financial discipline.
Centralized vs. decentralized management
Organizations tend to adopt one of two primary approaches to managing Snowflake resources:
Striking the balance: the federated model
The most effective strategy often lies in a hybrid, or federated, model. This approach combines centralized governance (policies defined by a CoE) with decentralized execution (teams having the freedom to create resources within those guardrails). This balance enables agility while mitigating financial risk.
Core Principles for Governance
Regardless of the chosen model, these principles are essential for effective governance:
Snowflake offers numerous optimization controls within its platform. These features are designed to enhance efficiency and reduce administrative overhead for your various workloads. Coupled with operational best practices that utilize features in a healthy manner, you can balance performance goals with cost governance requirements to meet business objectives.
By implementing these recommendations, you will be able to:
To foster healthy growth and achieve economies of scale within your organization, we recommend the following, drawing upon industry best practices and Snowflake's capabilities.
Compute is the most significant part of any organization's Snowflake spend, typically accounting for 80% or more of the total. A good warehouse design should incorporate the principles below:
Using the principles above ensures that your compute costs are well managed and balanced with optimal benefits.
Separate warehouses by workload
Different workloads (e.g., data engineering, analytics, AI and applications) have varying characteristics. Separating these to be serviced by different virtual warehouses can help ensure relevant features in Snowflake can be utilized.
Some examples of this include:
Warehouse sizing
Mapping the workload to the right warehouse size and configuration is an important consideration of warehouse design. This should consider several factors like query completion time, complexity, data size, query volume, SLAs, queuing, and balancing overall cost objectives. Warehouse sizing involves a cost-benefit analysis that balances performance, cost, and human expectations. Users generally expect their queries to run without queuing or long completion times, so dedicated warehouses for interactive teams are recommended.
Recommendations for choosing the right-sized warehouse include:
Optimal warehouse settings
While Snowflake strives for minimal knobs and self-managed tuning, there are situations where selecting the right settings for warehouses can help achieve optimal cost and/or performance. Key warehouse settings include auto-suspend and auto-resume, multi-cluster scaling limits, scaling policy, and statement timeouts; a configuration sketch follows below.
To maintain an optimal balance between cost and performance, regularly monitor your resource usage (e.g., weekly or monthly) and set up resource monitors to alert you to high credit consumption. When workload demands change, adjust your settings as needed.
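A configuration sketch of these warehouse settings (the specific values are illustrative assumptions, not recommendations):

```sql
ALTER WAREHOUSE analytics_wh SET
  AUTO_SUSPEND = 60                     -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE                    -- resume automatically on the next query
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3                 -- scale out only when concurrency demands it (multi-cluster requires Enterprise edition)
  SCALING_POLICY = 'ECONOMY'            -- favor brief queuing over spinning up extra clusters
  STATEMENT_TIMEOUT_IN_SECONDS = 3600;  -- stop runaway queries after one hour
```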
Warehouse consolidation
If you find yourself with an excess of provisioned warehouses or a shift in workloads necessitating consolidation, apply the aforementioned principles. Begin with the least utilized warehouses and migrate their workloads to an existing warehouse that handles similar tasks.
The WAREHOUSE_LOAD_HISTORY view can help you assess the average number of queries running on a warehouse over a specific period. A useful benchmark is to aim for a warehouse running queries 80% of the time it's active. Continuously monitor your key metrics to ensure they still meet SLA goals and adjust warehouse settings as needed.
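A minimal utilization sketch against the ACCOUNT_USAGE copy of WAREHOUSE_LOAD_HISTORY (the 14-day window is an assumption):

```sql
-- Average running and queued queries per warehouse, to spot under-utilized
-- or overloaded warehouses before consolidating.
SELECT
    warehouse_name,
    ROUND(AVG(avg_running), 2)     AS avg_running_queries,
    ROUND(AVG(avg_queued_load), 2) AS avg_queued_queries
FROM snowflake.account_usage.warehouse_load_history
WHERE start_time >= DATEADD('day', -14, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY avg_running_queries ASC;
```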
To achieve significant operational efficiency and predictable costs, prioritize the use of serverless and managed services. These services eliminate the need to manage underlying compute infrastructure, allowing your organization to pay for results rather than resource provisioning and scaling. Evaluate the following serverless features to reduce costs and enhance performance in your environment.
Storage optimization
Snowflake offers several serverless features that automatically manage and optimize your tables, reducing the need for manual intervention while improving query performance. The following features ensure your data is efficiently organized, allowing for faster and more cost-effective querying without the burden of user management.
Automatic Clustering is a background process in Snowflake that organizes data within a table by sorting it according to predefined columns. This process is critical for optimizing query performance and reducing costs. Benefits include:
Considerations and Best Practices:
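A minimal sketch of enabling Automatic Clustering and checking its effect and cost (the table and clustering key are assumptions):

```sql
-- Define a clustering key aligned to the most common filter predicate.
ALTER TABLE analytics.orders CLUSTER BY (order_date);

-- Inspect clustering depth and overlap to confirm the key is effective.
SELECT SYSTEM$CLUSTERING_INFORMATION('analytics.orders', '(order_date)');

-- Track the serverless credits the clustering service consumes.
SELECT table_name, SUM(credits_used) AS clustering_credits
FROM snowflake.account_usage.automatic_clustering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY table_name;
```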
Search Optimization Service (SOS) enhances the performance of point lookup searches by creating a persistent search access path. Its primary value lies in achieving better pruning for these specific query types, which is critical for applications requiring quick response times. It can be used in combination with Automatic Clustering and the Snowflake Optima service.
Considerations for SOS:
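A minimal sketch of estimating and then enabling search optimization for point-lookup columns (object and column names are assumptions):

```sql
-- Estimate the build and maintenance cost before enabling the service.
SELECT SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS('analytics.orders');

-- Enable search optimization only for the columns used in point lookups.
ALTER TABLE analytics.orders ADD SEARCH OPTIMIZATION ON EQUALITY(customer_id, order_id);
```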
Materialized Views (MVs) are pre-computed query results stored as a separate table and automatically maintained by Snowflake.
Benefits of MVs:
Considerations for MVs:
Query Acceleration Service (QAS): QAS is a serverless feature that provides a burst of additional compute resources to accelerate specific parts of a query, rather than replacing an appropriately sized warehouse. It's particularly beneficial for large I/O operation queries, eliminating the need to manually scale warehouses up or down. QAS also helps speed up query execution when table clustering cannot be altered due to other workload dependencies. A cost-benefit analysis should always be performed to ensure that the credit consumption from QAS is justified by the performance improvement and the avoided cost of a larger warehouse.
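A minimal sketch of identifying QAS candidates and enabling the service with a bounded scale factor (the warehouse name and scale factor are assumptions):

```sql
-- Find workloads that would benefit most from acceleration.
SELECT query_id, warehouse_name, eligible_query_acceleration_time
FROM snowflake.account_usage.query_acceleration_eligible
ORDER BY eligible_query_acceleration_time DESC
LIMIT 20;

-- Optionally estimate the benefit for a specific query (replace the placeholder query ID).
SELECT PARSE_JSON(SYSTEM$ESTIMATE_QUERY_ACCELERATION('<query_id>'));

-- Enable QAS on the warehouse with a bounded scale factor to cap extra spend.
ALTER WAREHOUSE analytics_wh SET
  ENABLE_QUERY_ACCELERATION = TRUE
  QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;
```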
Serverless tasks: Serverless tasks enable the execution of SQL statements or stored procedures on a user-defined schedule, eliminating the need for a user-managed virtual warehouse. This is a cost-effective solution for infrequent workloads where a warm cache offers minimal value, or for unpredictable workloads that would not make full use of a warehouse's 60-second minimum billing increment.
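A minimal serverless task sketch (the schedule, table, and retention window are assumptions):

```sql
-- Serverless task: Snowflake manages the compute, so there is no idle warehouse to pay for.
CREATE OR REPLACE TASK maintenance.purge_old_landing_rows
  SCHEDULE = 'USING CRON 0 2 * * * UTC'                -- run nightly at 02:00 UTC
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'  -- starting size; Snowflake adjusts over time
AS
  DELETE FROM staging.events_landing
  WHERE loaded_at < DATEADD('day', -7, CURRENT_TIMESTAMP());

ALTER TASK maintenance.purge_old_landing_rows RESUME;
```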
Next to compute, storage often represents the second-highest cost component in Snowflake. Effective storage governance is a critical concern for many industries due to federal and global regulations. Snowflake's default settings prioritize maximum data protection, which may not always align with the requirements of every workload or environment. This section focuses on how to manage and configure storage-related settings appropriately, ensuring that storage costs remain reasonable and deliver business value.
Optimizing table volume and auditing usage
Optimizing Managed Data Structures and Access
Data egress, the transfer of data from one cloud provider or region to another, can incur substantial costs, particularly when handling large data volumes. Implementing appropriate tools and best practices is essential to minimize these data transfer expenses and maximize business value when data egress is necessary.
Tooling: Enable proactive cost management
Leverage Snowflake's native features to gain visibility and control over data transfer costs before they become a significant expense.
Architectural best practices: Design for minimal data movement
Minimizing data transfer costs for your workloads heavily depends on the architecture of your data pipelines and applications. Adhere to the following best practices to achieve this:
Workload optimization focuses on identifying the efficiency of your data processing activities within Snowflake. This involves a holistic approach encompassing the review of query syntax, data pipelines, table structures, and warehouse configurations to minimize resource consumption and improve performance. By addressing inefficiencies across these areas, organizations can significantly reduce costs and accelerate data delivery.
Query syntax optimization
Inefficient queries often lead to excessive and hidden credit consumption. Organizations can identify performance bottlenecks and understand the cost impact of specific SQL patterns by using Snowflake features and adhering to SQL code best practices. This enables development teams to create more efficient and cost-effective code by highlighting poorly performing queries. Refer to the Performance Optimization Pillar of the Snowflake Well-Architected Framework for details on how to do this.
Utilize query history & insights for high-level monitoring
For broader visibility across all workloads, the Snowsight UI and the ACCOUNT_USAGE schema are indispensable.
Leverage the query profile for deep-dive analysis
After identifying problematic queries, the Query Profile is an essential tool for understanding the execution plan of a query. It provides a detailed, step-by-step breakdown of every operator involved, from data scanning to final result delivery. To gain visibility into inefficiencies, analysts and developers should regularly use the Query Profile to identify common anti-patterns like:
Programmatically deconstruct queries for automated analysis
For advanced use cases and automated monitoring, you can programmatically access query performance data. The GET_QUERY_OPERATOR_STATS function can be used to retrieve the granular, operator-level statistics for a given query ID, showing many of the steps and attributes available in the query profile view. This allows you to build automated checks that, for instance, flag any query where a full table scan accounts for more than 90% of the execution time or where data spillage exceeds a certain threshold. This approach helps scale performance visibility beyond manual checks.
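A minimal sketch of such a check for the most recent query in the session; the variant field paths and the 90% threshold are assumptions based on typical TableScan operator statistics:

```sql
-- Operator-level stats for the last query executed in this session.
SELECT
    operator_id,
    operator_type,
    operator_statistics:pruning:partitions_scanned::NUMBER AS partitions_scanned,
    operator_statistics:pruning:partitions_total::NUMBER   AS partitions_total,
    execution_time_breakdown:overall_percentage::FLOAT     AS pct_of_query_time
FROM TABLE(GET_QUERY_OPERATOR_STATS(LAST_QUERY_ID()))
WHERE operator_type = 'TableScan'
  -- Flag scans dominating runtime (assumes overall_percentage is a 0-1 fraction).
  AND execution_time_breakdown:overall_percentage::FLOAT > 0.9
ORDER BY pct_of_query_time DESC;
```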
Pipeline optimization
Snowflake pipeline optimization is about designing and managing data ingestion and transformation processes that are efficient, cost-effective, scalable, and low-maintenance, while balancing business value and SLAs (service levels for freshness and responsiveness). Key levers include architecture patterns (truncate & load versus incremental loads), use of serverless managed services (e.g., Snowpipe, Dynamic Tables), and auditing loading practices to maximize cost and performance benefits.
Batch loading
The COPY INTO (table or location) command is a foundational and flexible method for bulk data loading from an external stage into a Snowflake table. Its importance lies in its role as a powerful, built-in tool for migrating large volumes of historical data or loading scheduled batch files. The best practice is to use COPY INTO for one-time or large batch data loading jobs, which can then be supplemented with more continuous ingestion methods like Snowpipe for incremental data. Additional information regarding COPY INTO and general data loading best practices can be found in the Snowflake documentation. Some additional best practices are outlined below.
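A minimal batch-load sketch (the stage, table, and file format are assumptions):

```sql
-- Bulk load staged Parquet files into a raw table in one batch job.
COPY INTO raw.orders
FROM @raw.ext_stage/orders/
FILE_FORMAT = (TYPE = 'PARQUET')
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE  -- map file columns to table columns by name
ON_ERROR = 'ABORT_STATEMENT';            -- fail fast so bad files are fixed, not silently skipped
```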
Serverless ingestion
While named similarly, Snowpipe and Snowpipe Streaming are different serverless methods for ingesting data. Which one to use depends on your SLA requirements for data delivery and on how data lands for consumption.
Data Transformation Optimization
In general, there are two major transformation strategies followed in Snowflake. One is "truncate & load," which involves full data replacement and reloading; the other is incremental loading, which adds only new or changed data to an object, possibly requiring an upsert operation. Below is some general guidance on when to use each.
A great example of truncate & load versus incremental can be seen in the refresh strategies for Dynamic Tables (DTs). They are also a cost-effective and low-maintenance way to maintain data pipelines. Dynamic tables provide a powerful, automated way to build continuous data transformation pipelines with SQL, eliminating the need for the manual task orchestration historically architected with streams & tasks in Snowflake. Streams & tasks still have their uses, but general guidance and ease of use lead more Snowflake users toward DTs for automated data pipelines, since the pipeline definition lives in a single object or a chain of objects.
The key concepts of dynamic tables are defined in our documentation. However, best practices and determining when to use DTs versus other methods of pipeline tooling in Snowflake still warrant discussion, and are compared in Snowflake's documentation.
In addition to Snowflake's published best practices, consider the following:
More information on dynamic tables versus streams & tasks versus materialized views can be found in the Snowflake documentation.
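A minimal dynamic table sketch using an incremental refresh (the names, target lag, and warehouse are assumptions):

```sql
-- Incremental refresh keeps the table within the target lag while reprocessing
-- only changed data, rather than truncating and reloading on every run.
CREATE OR REPLACE DYNAMIC TABLE analytics.daily_revenue
  TARGET_LAG = '4 hours'
  WAREHOUSE = transform_wh
  REFRESH_MODE = INCREMENTAL
AS
SELECT order_date, SUM(amount) AS revenue
FROM raw.orders
GROUP BY order_date;
```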
Table pruning optimization
Table scanning operations are one of the most resource-intensive aspects of query execution. Minimizing the scan of data partitions in a table (called partition pruning) can provide significant improvements to both performance and cost for data operations in Snowflake. The account usage views TABLE_QUERY_PRUNING_HISTORY and COLUMN_QUERY_PRUNING_HISTORY provide aggregated data on query execution, showing metrics such as partitions scanned and rows matched, which helps identify tables with poor pruning efficiency. By analyzing this data, you can determine the most frequently accessed columns that are leading to a high number of unnecessarily scanned micro-partitions. Common ways to optimize these access patterns are by using Automatic Clustering and Search Optimization.
To determine which tables can most benefit from reordering how data is stored, you can review Snowflake's best practices for analyzing the TABLE_QUERY_PRUNING_HISTORY and COLUMN_QUERY_PRUNING_HISTORY account usage views. Fundamentally, bringing the percentage of partitions scanned in each table as close as possible to the percentage of rows returned by a query will lead to the most optimized cost and performance for any given workload.

A table's ideal pruning state is one where the percentage of partitions read matches the percentage of rows matched, minimizing reads of unneeded data.
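In addition to the pruning-history views, a rough version of the same signal can be built from QUERY_HISTORY; in this sketch the 80% threshold and seven-day window are assumptions:

```sql
-- Recurring query patterns that scan most of their tables' partitions.
SELECT
    query_parameterized_hash,
    ANY_VALUE(query_text)   AS sample_query,
    COUNT(*)                AS executions,
    SUM(partitions_scanned) AS partitions_scanned,
    SUM(partitions_total)   AS partitions_total,
    ROUND(SUM(partitions_scanned) / NULLIF(SUM(partitions_total), 0) * 100, 1) AS pct_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND partitions_total > 0
GROUP BY query_parameterized_hash
HAVING SUM(partitions_scanned) / NULLIF(SUM(partitions_total), 0) > 0.8
ORDER BY partitions_scanned DESC;
```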
Warehouse optimization
Warehouse concurrency, type, and sizing can impact the execution performance and cost of queries within Snowflake. Review the compute optimization section for more information into the tuning of the warehouse and its effect on cost and performance.
Optimization is a continuous process that ensures all workloads not only drive maximum business value but also do so in an optimal manner. By regularly reviewing, analyzing, and refining your Snowflake environment, you can identify inefficiencies, implement improvements, and adapt your platform to the ever-evolving business needs. The following set of steps will help you continue to improve your environment as you grow:
Step 1: Identify & investigate workloads to improve
Begin by regularly reviewing (usually on a weekly, bi-weekly, or monthly cadence) workloads that could benefit from optimization, using Snowflake's Cost Insights, deviations in unit economics or health metrics (from the Visibility principle), or objects hitting control limits (e.g., queries hitting warehouse timeouts from the Control principle). Once identified, investigate these findings through the Cost Management UI, Cost Anomaly detection, Query History, or custom dashboards with Account Usage Views to pinpoint the root cause. Then, using the recommendations in the Optimize Pillar, make improvements to the workload or object.
Step 2: Estimate & test
Before implementing changes, estimate the potential impact on cost and performance. Estimation encompasses both the expected amount of time required to make a change (for instance, consolidating warehouses will necessitate more coordination effort for teams using the resource than altering a configuration setting) as well as the hard cost of implementation. Snowflake provides helpful cost estimation functions for serverless features, such as auto clustering and search optimization service, to help make this a more data-driven process. If an estimation tool is not available, making changes in a development or test environment on a subset of the workload can provide an estimate and expected impact.
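For example, the Automatic Clustering estimation function can be called directly before enabling the feature (the table and clustering key are assumptions):

```sql
-- Estimate Automatic Clustering cost for a proposed clustering key
-- before committing to the serverless spend.
SELECT SYSTEM$ESTIMATE_AUTOMATIC_CLUSTERING_COSTS('analytics.orders', '(order_date)');
```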
Step 3: Decide & implement
Based on your estimations and test results, decide whether to move forward with the change, ensuring the cost-benefit aligns with performance or business needs. If approved, proceed to productionize the change, integrating it into your live environment.
Step 4: Monitor & analyze
Finally, monitor and analyze the implemented changes to track and validate their success over a period of time. This involves using the same investigation methods, such as the Cost Management UI and Account Usage views, and comparing cost and performance metrics before and after the change to articulate the business impact. Translate the technical improvements into tangible business benefits. For example, "Optimizing this query reduced monthly warehouse costs by $X and improved report generation time by Y minutes, allowing business users to make faster decisions." This helps demonstrate both the value of your optimization efforts to stakeholders and the business value delivered to the company. Then, course-correct as needed based on the monitoring results.
This continual improvement framework is the culmination of all subtopics within the Cost Optimization Pillar and provides a consistent way for you to grow healthily on Snowflake.