How to Monitor Snowflake Cost?


Snowflake’s usage-based pricing model offers great flexibility, but without proper monitoring, costs can escalate quickly—especially in data-heavy environments. Whether you’re a data engineer, a finance lead, or someone managing cloud infrastructure, keeping tabs on Snowflake expenses is crucial to avoid budget surprises and ensure efficient usage.

 

The good news is that Snowflake provides several built-in features, alongside third-party tools, to help you track, analyze, and control spend effectively. Keep reading to learn about the key metrics and tools you can use to monitor Snowflake cost.

Understanding Snowflake Cost

Before diving into the monitoring metrics and tools, it’s important to know how Snowflake costs work. Snowflake operates on a usage-based pricing model, meaning you only pay for what you use. But “what you use” can be any of multiple resources—and understanding each one is key to managing and forecasting your bill.

Compute Resources

Compute is usually the biggest contributor to your Snowflake bill. It is most often tied to virtual warehouses, but it comes in a few other flavors as well. Let’s break it down:

Virtual Warehouses

You spin up these user-managed compute clusters to run queries, load data, or perform transformations. Each warehouse is billed per second (with a 60-second minimum each time it starts), based on its size (ranging from X-Small to 6X-Large). This means the larger your warehouse and the longer it runs, the more it costs.

 

So, if a large warehouse runs longer than needed (or is not suspended when idle), you will rack up unnecessary charges. Snowflake provides useful features, like auto-suspend and auto-resume, to avoid this by turning off compute when it’s not in use.
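For example, you can enable both features on a warehouse with a single statement. This is a minimal sketch; the warehouse name is illustrative:

ALTER WAREHOUSE my_wh SET
  AUTO_SUSPEND = 60   -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE; -- wake up automatically when a query arrives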

Serverless Compute

Contrary to popular opinion, virtual warehouses aren’t the only source of compute cost. Snowflake also offers serverless compute for certain operations, like automatic clustering, materialized view maintenance, the search optimization service, and the query acceleration service.

 

These features don’t rely on user-managed warehouses and are instead powered by Snowflake-managed infrastructure. They are metered separately and can quietly add to your bill. Therefore, it’s important to review their usage regularly and determine if the performance gain justifies the cost.
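As one example, you can check what automatic clustering is costing you per table. A minimal sketch against the ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY view:

SELECT
    database_name,
    schema_name,
    table_name,
    SUM(credits_used) AS clustering_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY database_name, schema_name, table_name
ORDER BY clustering_credits DESC;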

Cloud Services

Some behind-the-scenes operations, like query parsing, authentication, metadata management, and result caching, also contribute to Snowflake cost. These services normally account for a very small fraction of the total cost. However, they can become significant if your workload has many short-lived queries or frequent metadata operations.

 

NOTE: Cloud services are billed only for the portion of daily consumption that exceeds 10% of your daily warehouse usage.

 

Cloud services are handled by Snowflake, so you can’t control these costs directly. That said, being aware of them helps you analyze your overall usage breakdown.
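You can, however, see how much of your cloud services consumption was actually billed after the 10% adjustment. A sketch using the ACCOUNT_USAGE.METERING_DAILY_HISTORY view:

SELECT
    usage_date,
    service_type,
    credits_used_cloud_services,
    credits_adjustment_cloud_services, -- the free allowance Snowflake subtracts
    credits_billed
FROM SNOWFLAKE.ACCOUNT_USAGE.METERING_DAILY_HISTORY
WHERE usage_date >= DATEADD('day', -30, CURRENT_DATE())
ORDER BY usage_date DESC;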

Storage Resources

The cost of storing data in Snowflake is based on the average compressed data volume per month, typically billed per terabyte (TB). The major components of storage cost include active table data, time travel history, and fail-safe backups. Although rates are generally reasonable, storage costs can escalate if you keep large datasets, historical snapshots, or temporary tables around for too long.
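A quick way to watch the trend is the ACCOUNT_USAGE.STORAGE_USAGE view, which reports daily account-wide totals for active data, staged files, and fail-safe:

SELECT
    usage_date,
    storage_bytes / 1024 / 1024 / 1024 / 1024 AS data_tb,
    stage_bytes / 1024 / 1024 / 1024 / 1024 AS stage_tb,
    failsafe_bytes / 1024 / 1024 / 1024 / 1024 AS failsafe_tb
FROM SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE
ORDER BY usage_date DESC;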

Data Transfer Resources

The movement of data out of Snowflake (data egress) is an often-overlooked source of cost. It becomes particularly important if your architecture involves cross-region replication, frequent data exports, or integrations with external tools.

 

The cost of data egress from Snowflake is based on the volume of data transferred out of the platform. It is metered on the bytes transferred (priced per terabyte), and the exact rate depends on your account’s region and the destination cloud provider. For instance, data transfer within the same region of the same cloud provider is free. But the moment you cross regional or cloud boundaries, pricing tiers kick in.

 

Monitoring your data sharing patterns and understanding where data is being transferred—and how often—can help you optimize and reduce these costs.
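The ACCOUNT_USAGE.DATA_TRANSFER_HISTORY view shows where your bytes are going. A sketch that totals egress by destination over the past 30 days:

SELECT
    target_cloud,
    target_region,
    SUM(bytes_transferred) / 1024 / 1024 / 1024 AS transferred_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.DATA_TRANSFER_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY target_cloud, target_region
ORDER BY transferred_gb DESC;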

 

NOTE: Data ingress is free from Snowflake’s side, but your cloud provider may charge you (so do check with them).

Tools to Monitor Snowflake Cost

Monitoring cost in Snowflake is much more than just checking your credit balance. Snowflake provides a suite of built-in tools that help you track resource consumption, set thresholds, and manage usage. These tools offer the visibility and control needed to prevent budget surprises, and they are equally useful for account admins and data engineers.

Account Usage Views

Snowflake offers a comprehensive set of system-defined views under the ACCOUNT_USAGE schema that let you analyze historical usage across your entire Snowflake account. These views provide detailed insight into compute usage, query behavior, storage trends, and serverless feature consumption, making them ideal for building cost dashboards, running audits, or triggering alerts based on historical patterns.

 

You can access these views under the SNOWFLAKE database → ACCOUNT_USAGE schema. Some of the most commonly used account usage views include the following (a sample query follows the list):

 

  • WAREHOUSE_METERING_HISTORY
  • WAREHOUSE_LOAD_HISTORY
  • QUERY_HISTORY
  • STORAGE_USAGE
  • TABLE_STORAGE_METRICS
  • SERVERLESS_TASK_HISTORY
  • MATERIALIZED_VIEW_REFRESH_HISTORY
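For instance, a simple daily credit rollup from WAREHOUSE_METERING_HISTORY can anchor a cost dashboard:

SELECT
    DATE_TRUNC('day', start_time) AS usage_day,
    SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
GROUP BY usage_day
ORDER BY usage_day DESC;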

Information Schema Views

These views provide real-time or near-real-time metadata and activity monitoring scoped to a specific database. INFORMATION_SCHEMA is particularly useful for developers and engineers who want to check temporary objects, recent query behavior, or table sizes without waiting for ACCOUNT_USAGE views to update (they can lag by 45 minutes to a few hours).

 

You can access them at {Select any database} → INFORMATION_SCHEMA schema. Some of the most commonly used information schema views include the following (a sample query follows the list):

 

  • TABLES
  • STAGES
  • QUERY_HISTORY
  • LOAD_HISTORY

Snowsight

In addition to these system views, Snowflake provides a modern, interactive web UI (Snowsight) to monitor cost. It is packed with pre-built dashboards for monitoring usage and lets you explore your data visually, with no SQL needed. These dashboards are perfect for high-level overviews, stakeholder reporting, or spotting trends.

 

You can access it at Snowsight → Admin → Usage, Activity, or Billing. Some of the most commonly used dashboards include:

 

  • Usage → Warehouses
  • Usage → Storage
  • Activity → Queries
  • Billing → Overview
  • Billing → Budgets

Resource Monitors

Resource Monitors act as an active line of defense and allow you to enforce compute usage limits through automated alerts or suspensions. They are based on credit thresholds and can be set up at the account or warehouse level. These monitors are great for keeping runaway processes in check or for enforcing monthly spend limits on shared environments.

 

You can access them at Snowsight → Admin → Resource Monitors or create them via SQL. A sample SQL statement to create a resource monitor is shown below.

CREATE RESOURCE MONITOR compute_guard
  WITH CREDIT_QUOTA = 1000
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
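A monitor does nothing until it is attached, so assign it to a warehouse or set it at the account level. The warehouse name below is illustrative:

-- Attach the monitor to a specific warehouse
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = compute_guard;

-- Or enforce it account-wide (requires ACCOUNTADMIN)
ALTER ACCOUNT SET RESOURCE_MONITOR = compute_guard;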

Budgets and Alerts

Budgets allow you to define soft credit limits for your account, specific warehouses, or workloads. They trigger alerts whenever specified thresholds are hit. In contrast to resource monitors, budgets don’t stop execution; they simply help you track financial targets and notify stakeholders when needed. You can access them at Snowsight → Admin → Billing → Budgets.

Billing and Usage Dashboards

Snowflake offers built-in dashboards that provide time-based breakdowns of your cost and usage trends. These dashboards show daily, weekly, and monthly credit consumption, segmented by user, role, service type, and warehouse. They are particularly helpful for centralized billing, finance reviews, and team-level reporting, with easy export options. You can access them at Snowsight → Admin → Billing & Usage.

 

| Tool | Primary Use | Access Location | Ideal Users | Advantages |
| --- | --- | --- | --- | --- |
| Account Usage Views | Historical usage tracking (compute, storage, queries) | SNOWFLAKE.ACCOUNT_USAGE schema | Data engineers, analysts | Deep historical data, customizable, supports dashboards |
| Information Schema Views | Real-time monitoring of objects and queries | {DB}.INFORMATION_SCHEMA | Developers, operations teams | Near real-time, scoped by database, good for debugging |
| Snowsight | Visual overview of usage and performance | Snowsight UI → Admin → Usage / Activity | Stakeholders, managers | Easy to use, no SQL needed, interactive filters |
| Resource Monitors | Enforce compute credit limits | Snowsight → Admin → Resource Monitors (or SQL) | FinOps teams, admins | Automatically notify or suspend warehouses on threshold |
| Budgets & Alerts | Set soft spend limits and trigger alerts | Snowsight UI → Admin → Billing → Budgets | Finance teams, project leads | Custom budgets, email/webhook alerts, track team/project spend |
| Billing & Usage Dashboards | Daily/monthly spend breakdown | Snowsight UI → Admin → Billing & Usage | Admins, finance departments | Clear billing summary, exportable reports, breakdown by user/warehouse |

Key Metrics to Monitor for Snowflake Cost

Now that we have covered what drives Snowflake costs and which tools are available, the next step is knowing how to monitor them effectively. Snowflake provides a rich set of usage data and views, but not all metrics are created equal. Some are more relevant to compute, others to storage, and still others to how your queries run.

 

Based on that, we can divide the key metrics for monitoring Snowflake cost into three main categories: compute, storage, and queries. Let’s start with the largest piece of the cost puzzle: compute.

Metrics to Optimize Compute Cost

Compute costs can fluctuate quickly depending on how you use virtual warehouses and serverless features. Tracking the right metrics helps you ensure your warehouses are sized properly and idle time is minimized. It also keeps serverless operations under control so that they don’t inflate your bill. Below are some of the most important compute-related metrics to keep an eye on.

Warehouse Credit Usage

This metric gives you a clear picture of how many Snowflake credits each virtual warehouse consumes over time. You can filter the view by warehouse name and time window to identify the highest-cost workloads and optimize them. This metric is found in the WAREHOUSE_METERING_HISTORY view of ACCOUNT_USAGE.

 

SELECT warehouse_name, start_time, end_time, credits_used
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;

Average & Peak Warehouse Utilization

You can estimate average and peak utilization by analyzing how many queries were running compared to the warehouse’s total capacity. A low average relative to the peak could mean your warehouses are oversized for most of the day. These insights help you determine whether to scale down or consolidate workloads. You can monitor this metric through the WAREHOUSE_LOAD_HISTORY view of ACCOUNT_USAGE.

 

SELECT
    warehouse_name,
    AVG(avg_running) AS avg_concurrent_queries,
    MAX(avg_running) AS peak_concurrent_queries
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY peak_concurrent_queries DESC;

Auto-Suspend & Resume Activity

If your warehouses are suspended and resumed too frequently, your auto-suspend setting is probably not optimal (every resume also restarts the 60-second billing minimum). Monitoring this activity helps you adjust the auto-suspend threshold, improve scheduling, or batch jobs to minimize inefficiencies.

 

You can monitor this through the WAREHOUSE_EVENTS_HISTORY view of ACCOUNT_USAGE, which records suspend and resume events (WAREHOUSE_LOAD_HISTORY does not expose warehouse state).

SELECT warehouse_name, COUNT(*) AS resume_count
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_EVENTS_HISTORY
WHERE event_name = 'RESUME_WAREHOUSE' -- suspend events are logged as SUSPEND_WAREHOUSE
  AND timestamp >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY resume_count DESC;

Query Concurrency per Warehouse

Query concurrency means how many queries are running on each warehouse at the same time. This metric is important for identifying underused and maxed-out warehouses. With this information, you can scale down underused resources and decide whether to scale up overused warehouses or split the load across additional compute clusters.

 

You can monitor this metric through the WAREHOUSE_LOAD_HISTORY view of ACCOUNT_USAGE.

 

SELECT
    warehouse_name,
    start_time,
    avg_running AS concurrent_queries
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;

Serverless Feature Credit Usage

It is easy to overlook the cost of serverless features, which run outside user-managed warehouses and are billed separately. These include automatic clustering, materialized view refreshes, and search optimization. Fortunately, Snowflake provides independent views for each of them to track how many credits are being consumed.

 

You can find these metrics in the SERVERLESS_TASK_HISTORY, MATERIALIZED_VIEW_REFRESH_HISTORY, and SEARCH_OPTIMIZATION_HISTORY views of ACCOUNT_USAGE.

 

SELECT
    task_name,
    SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.SERVERLESS_TASK_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY task_name
ORDER BY total_credits DESC;

Metrics to Monitor Storage Cost

Next, we will look at the metrics you should monitor to optimize storage cost in Snowflake. Although storage is more affordable than compute, a lack of monitoring can lead to unwanted surprises. For example, retaining large tables, unused temp files, or long time travel windows can all lead to bloated bills. Let’s discuss how to keep them at bay.

Storage Usage by Table

This metric helps you identify which tables are consuming the most storage. You can use it to locate large or unnecessary tables that might need pruning or archiving. You can monitor this metric through the TABLE_STORAGE_METRICS view of ACCOUNT_USAGE.

 

SELECT
    table_catalog AS database_name,
    table_schema,
    table_name,
    active_bytes / 1024 / 1024 / 1024 AS active_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
ORDER BY active_gb DESC;

Time Travel Storage Usage

Snowflake’s time travel feature is very useful, as it stores historical versions of your data. However, it’s important to use sensible retention settings to avoid a surge in your Snowflake bill.

You can monitor this storage per table through the TABLE_STORAGE_METRICS view of ACCOUNT_USAGE.

 

SELECT
    table_catalog AS database_name,
    table_schema,
    table_name,
    time_travel_bytes / 1024 / 1024 / 1024 AS time_travel_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
ORDER BY time_travel_gb DESC;
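If a table does not need a long history, lowering its retention period trims this storage. A minimal sketch (the table name is illustrative):

-- Keep only one day of time travel history for this table
ALTER TABLE my_schema.my_table SET DATA_RETENTION_TIME_IN_DAYS = 1;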

Fail-safe Storage Usage

Fail-safe can be considered an extension of time travel, as it provides a 7-day, non-configurable backup beyond the time travel window. The cost of this storage can become a huge problem if you have large historical datasets. You can monitor this metric per database through the DATABASE_STORAGE_USAGE_HISTORY view of ACCOUNT_USAGE.

 

SELECT
    usage_date,
    database_name,
    average_failsafe_bytes / 1024 / 1024 / 1024 AS failsafe_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.DATABASE_STORAGE_USAGE_HISTORY
WHERE usage_date >= DATEADD('day', -7, CURRENT_DATE())
ORDER BY failsafe_gb DESC;

Stage and Temporary Table Storage

Internal stages and temporary tables are often left behind after ETL runs or testing. They can significantly boost your Snowflake spend if you don’t clean them up regularly.

 

You can monitor stage storage through the STAGE_STORAGE_USAGE_HISTORY view of ACCOUNT_USAGE (which reports account-wide stage totals, not per-stage figures) and list temporary tables via INFORMATION_SCHEMA.TABLES.

 

-- Internal stage storage usage (account-wide daily totals)
SELECT
    usage_date,
    average_stage_bytes / 1024 / 1024 / 1024 AS storage_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.STAGE_STORAGE_USAGE_HISTORY
WHERE usage_date >= DATEADD('day', -7, CURRENT_DATE())
ORDER BY usage_date DESC;

-- List temporary tables in the current database
SELECT
    table_catalog,
    table_schema,
    table_name,
    created
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type LIKE '%TEMPORARY%'; -- matches 'TEMPORARY TABLE' / 'LOCAL TEMPORARY' variants
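Cleanup itself is straightforward; the stage and table names below are illustrative:

-- Delete staged files that are no longer needed
REMOVE @my_int_stage;

-- Drop a leftover table
DROP TABLE IF EXISTS my_schema.my_temp_table;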

Unused or Low-Access Tables

It is very common for Snowflake users to create tables for one-off work and then leave them in the system. Even when they are no longer queried, these tables still contribute to your bill. Therefore, it’s critical to monitor, identify, and clean up such tables to optimize Snowflake cost.

 

You can monitor this through the TABLES and ACCESS_HISTORY views of ACCOUNT_USAGE (ACCESS_HISTORY records the objects each query touches and requires Enterprise Edition or higher).

-- List tables with no recorded access in the past 30 days
SELECT
    t.table_catalog,
    t.table_schema,
    t.table_name
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLES t
LEFT JOIN (
    SELECT DISTINCT f.value:"objectName"::STRING AS object_name
    FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY ah,
         LATERAL FLATTEN(input => ah.base_objects_accessed) f
    WHERE ah.query_start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
) q
    ON t.table_catalog || '.' || t.table_schema || '.' || t.table_name = q.object_name
WHERE q.object_name IS NULL
  AND t.deleted IS NULL;

Metrics to Monitor Query Cost

Every query that you run in Snowflake has the potential to consume warehouse credits. For this reason, it’s imperative to find inefficiencies and tune performance to reduce costs without sacrificing productivity. Below are some of the key query-related metrics to track.

Query Execution Time and Credit Usage

Long-running queries consume significant warehouse resources, so monitoring their execution times is a necessity. It helps you pinpoint expensive operations and prioritize optimization to control your Snowflake spend. You can monitor this metric through the QUERY_HISTORY view of ACCOUNT_USAGE; note that warehouse credits are not attributed to individual queries there, so the query below sorts by execution time and surfaces cloud services credits.

SELECT
    query_text,
    warehouse_name,
    execution_time / 1000 AS execution_seconds,
    credits_used_cloud_services
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND execution_status = 'SUCCESS'
ORDER BY execution_seconds DESC
LIMIT 50;

Bytes Scanned per Query

The amount of data a query scans is closely tied to its compute cost, so this metric helps you identify inefficient queries, such as unfiltered SELECT * statements. You can monitor it through the QUERY_HISTORY view of ACCOUNT_USAGE.

SELECT
    query_text,
    warehouse_name,
    bytes_scanned / 1024 / 1024 / 1024 AS scanned_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY scanned_gb DESC
LIMIT 50;

Query Frequency

Some queries are light individually but run thousands of times a day, often unnecessarily, and the cost adds up. Reducing the frequency of these queries, or letting them hit the result cache, can save a lot. You can monitor this metric through the QUERY_HISTORY view of ACCOUNT_USAGE.

SELECT
    query_text,
    COUNT(*) AS frequency
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY query_text
ORDER BY frequency DESC
LIMIT 20;

Failed or Retried Query Counts

Queries that fail repeatedly waste resources, and monitoring them helps you fix issues at the source, whether they stem from application errors, incorrect logic, or unstable pipelines.

This metric can be monitored through the QUERY_HISTORY view of ACCOUNT_USAGE.

SELECT
    query_text,
    execution_status,
    COUNT(*) AS attempts
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND execution_status != 'SUCCESS'
GROUP BY query_text, execution_status
ORDER BY attempts DESC
LIMIT 20;

 

Snowflake’s flexible pricing model is great for scaling workloads, but without active monitoring, costs can add up quickly. Keep in mind that monitoring cost isn’t a one-time task; it’s an ongoing process. Leverage the available tools (like Account Usage views, Resource Monitors, and Snowsight) to track the right metrics and keep your Snowflake spend under control.
