Demo development is crucial for businesses to showcase their AI capabilities and win new customers. Through rapid prototyping and professional presentation tools, businesses can transform weeks of development into minutes of setup, dramatically accelerating sales cycles and proof-of-concept delivery.
In this Quickstart, we will build a comprehensive demo development platform called "Cortex AI Demo Framework". This demonstrates how to use Snowflake Cortex AI functions to create synthetic data, build interactive analytics, deploy search capabilities, and generate complete demonstration environments.
This Quickstart showcases the complete Cortex AI Demo Framework.
In this step, you'll create the Snowflake database objects and prepare for framework deployment.
If you have Workspaces:
1. Navigate to Projects, then Workspaces in the left navigation
2. Click + Add new to create a new Workspace
3. Select SQL File to create a new SQL file

If you have Worksheets:

1. Navigate to Projects, then Worksheets in the left navigation
2. Click + in the top-right corner to open a new Worksheet

The setup script creates:
- Database: CORTEX_FRAMEWORK_DB with BRONZE_LAYER, SILVER_LAYER, APPS, and CONFIGS schemas
- Role: CORTEX_FRAMEWORK_DATA_SCIENTIST with all necessary permissions
- Warehouse: CORTEX_FRAMEWORK_WH for compute resources
- Stages: FRAMEWORK_DATA_STAGE, SEMANTIC_MODELS, and DEMO_CONFIGS for file uploads
- File formats: CSV_FORMAT, YAML_FORMAT, and JSON_FORMAT for data processing
- Grant of the SNOWFLAKE.CORTEX_USER role for Cortex functions
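The list above corresponds roughly to the DDL below. This is only a hedged sketch for orientation; the provided setup script is authoritative, and details such as the warehouse size and which schema holds each stage and file format are assumptions here.

-- Sketch only: run the provided setup script for the real definitions
CREATE DATABASE IF NOT EXISTS CORTEX_FRAMEWORK_DB;
CREATE SCHEMA IF NOT EXISTS CORTEX_FRAMEWORK_DB.BRONZE_LAYER;
CREATE SCHEMA IF NOT EXISTS CORTEX_FRAMEWORK_DB.SILVER_LAYER;
CREATE SCHEMA IF NOT EXISTS CORTEX_FRAMEWORK_DB.APPS;
CREATE SCHEMA IF NOT EXISTS CORTEX_FRAMEWORK_DB.CONFIGS;

CREATE ROLE IF NOT EXISTS CORTEX_FRAMEWORK_DATA_SCIENTIST;

CREATE WAREHOUSE IF NOT EXISTS CORTEX_FRAMEWORK_WH
  WAREHOUSE_SIZE = 'XSMALL'   -- assumed size; adjust as needed
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;

-- Stages and file formats (schema placement assumed)
CREATE STAGE IF NOT EXISTS CORTEX_FRAMEWORK_DB.APPS.FRAMEWORK_DATA_STAGE;
CREATE FILE FORMAT IF NOT EXISTS CORTEX_FRAMEWORK_DB.CONFIGS.CSV_FORMAT TYPE = 'CSV';

-- Allow the framework role to call Cortex functions
GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE CORTEX_FRAMEWORK_DATA_SCIENTIST;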
Download these framework files from the GitHub repository:

| File | Purpose | Download Link |
|------|---------|---------------|
| Notebook | Setup notebook for framework deployment | |
| Environment File | Conda environment configuration for the latest Streamlit | |
| Synthetic Data Generator | AI-powered synthetic data creation | |
| Structured Tables | Data structuring and transformation | |
| SQL to YAML Converter | SQL to YAML configuration converter (generates semantic models) | |
| Snow Demo | Demo configuration and runner | |
| YAML Wizard | Interactive dashboard configuration creator | |
| Snow Viz | Advanced visualization dashboard renderer | |
1. Switch to the cortex_ai_demo_data_scientist role
2. Navigate to Catalog → Database Explorer → AI_FRAMEWORK_DB → APPS → Stages
3. Upload all framework files to the single AI_FRAMEWORK_APPS stage: open the AI_FRAMEWORK_APPS stage, then click Enable Directory Table and upload all 7 files:
   - 01_ai_framework_synthetic_data_generator.py
   - 02_ai_framework_structured_tables.py
   - 03_ai_framework_sql_to_yaml_converter.py
   - 04_ai_framework_snow_demo.py
   - 05_ai_framework_snow_viz_yaml_wizard.py
   - 06_ai_framework_snow_viz.py
   - environment.yml

Next, import the setup notebook:

1. Navigate to Projects → Notebooks in Snowsight
2. Click + Notebook and select Import .ipynb file
3. Choose cortex_ai_demo_framework_setup.ipynb from your downloads
4. Role: cortex_ai_demo_data_scientist
5. Database: AI_FRAMEWORK_DB
6. Schema: BRONZE_LAYER
7. Query warehouse: cortex_ai_demo_wh
8. Notebook warehouse: cortex_ai_demo_wh
9. Click Create to import the notebook

The notebook creates all 6 Streamlit applications using the single-stage approach, with automatic environment.yml detection for the latest Streamlit version.
1. Navigate to Projects → Notebooks in Snowsight
2. Click the CORTEX_FRAMEWORK_DEMO notebook to open it
3. Click Run all to execute all cells in the notebook at once

What the notebook does:
The notebook processes sample data and deploys the complete framework application suite.
Navigate to Projects → Streamlit in Snowsight to find the six deployed apps:

1. Synthetic Data Generator: Creates realistic AI-powered datasets using Cortex LLMs. Saves raw JSON to BRONZE_LAYER tables.
2. Structured Tables: Transforms raw JSON into clean, structured database tables. Outputs analytics-ready data to SILVER_LAYER.
3. SQL to YAML Converter: Converts SQL queries into interactive demo configurations for Snow Demo (App 4).
4. Snow Demo: Runs interactive SQL-driven presentations with live visualizations and AI experimentation.
5. YAML Wizard: Guided dashboard configuration creator. Generates YAML files for Snow Viz (App 6).
6. Snow Viz: Renders advanced interactive dashboards with multi-tab analytics and AI integration.
1. SYNTHETIC DATA GENERATOR (START HERE)
└─ Creates realistic datasets
│
├─ 2. STRUCTURED TABLES
│ └─ Transforms JSON → SQL tables
│ │
│ └─ 5. YAML WIZARD
│ └─ Generates dashboard configs
│ │
│ └─ 6. SNOW VIZ
│ └─ Renders dashboards
│
└─ 3. SQL TO YAML CONVERTER
└─ Converts queries → demo configs
│
└─ 4. SNOW DEMO
└─ Runs interactive SQL demos
Next: Page 5 shows which apps to use based on your role and goals.
The framework supports 4 different user personas. Find your role below to see which apps you need and in what order.
Who You Are:
What You'll Build: A complete analytics pipeline from data generation to interactive dashboards
Apps You'll Use: Synthetic Data Generator → Structured Tables → YAML Wizard → Snow Viz
Time Required: ~25 minutes
Your Workflow:
What You'll Get:
Who You Are:
What You'll Build: Interactive SQL-driven presentations with live query execution and AI experimentation
Apps You'll Use: Synthetic Data Generator → Structured Tables → SQL to YAML Converter → Snow Demo
Time Required: ~30 minutes
Your Workflow:
What You'll Get:
Who You Are:
What You'll Build: Clean, structured datasets for export to external tools (notebooks, ML pipelines, BI tools)
Apps You'll Use: Synthetic Data Generator → Structured Tables
Time Required: ~15 minutes
Your Workflow:
What You'll Get:
Who You Are:
What You'll Do: View and interact with dashboards created by your data team (no setup required)
Apps You'll Use: Snow Viz only (after colleague completes setup)
Time Required: ~5 minutes
Prerequisites: A colleague must first complete Synthetic Data Generator → Structured Tables → YAML Wizard to create the dashboard. Once that's done, you can view and explore it.
Your Workflow:
What You Can Do:
Ready to get started? Jump to the pages for your persona:
| Persona | Apps to Follow | What You'll Build |
|---------|----------------|-------------------|
| Full-Stack Developer | Synthetic Data Generator → Structured Tables → YAML Wizard → Snow Viz | Complete analytics pipeline with dashboards |
| SQL Demo Creator | Synthetic Data Generator → Structured Tables → SQL to YAML Converter → Snow Demo | Interactive SQL presentations with AI |
| Data Preparation | Synthetic Data Generator → Structured Tables | Clean datasets for ML/BI/external tools |
| Dashboard Consumer | Snow Viz only | Explore pre-built dashboards (no setup) |
Or read all app instructions (Pages 6-11) to understand the full framework capabilities.
Purpose: Create realistic AI-powered datasets for any business scenario using Cortex LLMs
Dependencies: None (START HERE)
Output: Raw JSON data saved to BRONZE_LAYER tables

All Personas start here! This is the foundation of the framework.
Navigate to Projects → Streamlit → SYNTHETIC_DATA_GENERATOR
Left Sidebar - Top Section:
For first-time use, leave "Load Configuration" set to Create New. If you have saved configurations, select one from the dropdown and click 📁 Load Configuration.
Left Sidebar:
Enter your company name and topic/domain:
Acme Corp
Customer Orders
Other Examples:
Left Sidebar - Fields Section:
Enter your fields (one per line):
customer_id
customer_name
email
order_date
product_name
quantity
price
total_amount
Tips:
Left Sidebar:
Set your batch configuration using the sliders:
- Records per batch: 10 (Slider: 10-200, step 10)
- Number of batches: 10 (Slider: 1-1000)
- Total records: 100

Why smaller batches?
Left Sidebar:
Configure the Cortex LLM settings:
- LARGE (Options: SMALL, MEDIUM, LARGE)
- Model: mistral-large2 (Recommended for consistent results)
- Temperature: 0.7 (Slider: 0.0-1.0, step 0.1)
- Max tokens: 4000 (Slider: 100-8000, step 100)

Left Sidebar:
Keep "High-Performance Mode" checked for best results!
Left Sidebar:
Check the following options:
- Database: AI_FRAMEWORK_DB
- Schema: BRONZE_LAYER
- Table: GENERATED_DATA

Expected Output:
Generated 100 records successfully!
Data saved to: AI_FRAMEWORK_DB.BRONZE_LAYER.GENERATED_DATA
Sample data preview:
| CUSTOMER_NAME | PRODUCT_NAME | QUANTITY | PRICE | TOTAL_AMOUNT |
|---------------|--------------|----------|-------|--------------|
| Sarah Johnson | Laptop Pro | 1 | 1299 | 1299 |
| Mike Chen | Wireless Mouse| 2 | 29 | 58 |
Verification Steps:
1. Confirm the target table (GENERATED_DATA) was created in AI_FRAMEWORK_DB.BRONZE_LAYER

Data Quality Check:
-- Run this query to check your data
SELECT
COUNT(*) as total_batches,
SUM(_META_RECORDS_IN_BATCH) as total_records,
AVG(_META_RECORDS_IN_BATCH) as avg_records_per_batch,
_META_COMPANY_NAME,
_META_TOPIC
FROM AI_FRAMEWORK_DB.BRONZE_LAYER.GENERATED_DATA
GROUP BY _META_COMPANY_NAME, _META_TOPIC;
Expected: 10 batches, 100 total records
Bottom of Main Panel:
Enter a configuration name and click 💾 Save Configuration:
Acme_Corp_Customer_Orders_Config
Save your configuration to reuse later with different batch sizes or models!
Company: ShopSmart
Topic: Product Sales
Fields: product_id, product_name, category, sale_date, sale_amount,
customer_segment, region, payment_method
Company: MedCenter
Topic: Patient Vitals
Fields: patient_id, age, gender, blood_pressure_systolic,
blood_pressure_diastolic, heart_rate, temperature,
oxygen_saturation, recorded_date
Company: FinanceFirst
Topic: Loan Applications
Fields: application_id, applicant_name, loan_amount, credit_score,
income, employment_status, application_date, approval_status
Start small: Test with 10 records × 10 batches first
Use mistral-large2: Best accuracy across all scenarios
Name tables descriptively: Include company/topic in table name
Save configurations: Reuse settings for consistent results
Check data quality: Verify first batch before generating more
Use appropriate temperature: Low for factual, high for creative
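To get a feel for how the model, temperature, and max-token settings behave before generating large batches, you can call Cortex directly in a worksheet. This is an illustrative sketch only; the generator app builds its own prompts internally, and the prompt text below is made up:

-- Hedged example: not the app's exact prompt, just a way to test settings
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large2',
    [
      {'role': 'user',
       'content': 'Generate 3 realistic customer order records as a JSON array with fields: customer_id, customer_name, email, order_date, product_name, quantity, price, total_amount. Return only JSON.'}
    ],
    {'temperature': 0.7, 'max_tokens': 4000}
) AS sample_batch;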
For All Personas: → Continue to Page 7 (App 2 - Structured Tables) to transform your data from BRONZE_LAYER to SILVER_LAYER
Your data is now in raw JSON format. App 2 will clean and structure it into proper database columns!
Purpose: Transform raw JSON data into clean, structured database tables
Dependencies: Requires data from App 1
Output: Analytics-ready data in SILVER_LAYER tables

Navigate to Projects → Streamlit → STRUCTURED_TABLES
Main Panel - Left Column:
Select source table with synthetic data from the dropdown (e.g., GENERATED_DATA).
The dropdown shows all tables from BRONZE_LAYER that contain a MESSAGES column (generated by Synthetic Data Generator).
Main Panel - Right Column:
Enter name for structured table (e.g., GENERATED_DATA_STRUCTURED).
The app auto-fills this by adding _STRUCTURED to your source table name. You can customize it if needed.
Main Panel - Filter Section:
Select the company and topic you used when generating data in Step 1 from the dropdowns (e.g., Acme Corp and Customer Orders).
These dropdowns populate automatically from your source table's metadata (_meta_company_name and _meta_topic columns).
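Behind the scenes, these dropdowns are presumably populated by querying those metadata columns. Something along these lines reproduces the same list in a worksheet (a sketch, not the app's actual code):

-- Sketch: list the company/topic pairs available in your source table
SELECT DISTINCT
    _META_COMPANY_NAME,
    _META_TOPIC
FROM AI_FRAMEWORK_DB.BRONZE_LAYER.GENERATED_DATA
ORDER BY _META_COMPANY_NAME, _META_TOPIC;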
Auto-generated after selection:
📊 Data Quality Analysis
Left Column:
Total Records: 10
Valid JSON: 10
Middle Column:
Invalid JSON: 0
Very Short: 0
Right Column:
Avg Length: 2,500 chars
What to look for:
Sample of Cleaned Data section:
| MESSAGES | _META_COMPANY_NAME | _META_TOPIC | _META_RECORDS_IN_BATCH |
|----------|-------------------|-------------|------------------------|
| [{"customer_id": 1, ...}] | Acme Corp | Customer Orders | 10 |
This shows your raw BRONZE_LAYER data with JSON arrays in the MESSAGES column.
Auto-detected fields:
🔍 Fields Analysis
Found 8 fields: customer_id, customer_name, email, order_date,
product_name, quantity, price, total_amount
📝 View SQL Column Names (expandable):
SQL column names: CUSTOMER_ID, CUSTOMER_NAME, EMAIL, ORDER_DATE,
PRODUCT_NAME, QUANTITY, PRICE, TOTAL_AMOUNT
The app automatically detects field names from your JSON structure and shows how they'll appear as SQL column names (uppercase).
Verify all your expected fields are detected!
Bottom Section:
Configuration name:
Acme_Corp_Customer_Orders_GENERATED_DATA
Progress indicator:
Transforming data...
This process:
- Writes the structured output to a new table in SILVER_LAYER

Expected time: 30 seconds to 2 minutes, depending on data volume
Expected Output:
Successfully transformed data to table: GENERATED_DATA_STRUCTURED
📋 Sample of Transformed Data
| CUSTOMER_ID | CUSTOMER_NAME | EMAIL | ORDER_DATE | PRODUCT_NAME | QUANTITY | PRICE | TOTAL_AMOUNT |
|-------------|---------------|-------|------------|--------------|----------|-------|--------------|
| 1 | Sarah Johnson | sarah.j@email.com | 2024-03-15 | Laptop Pro | 1 | 1299 | 1299 |
| 2 | Mike Chen | mike.c@email.com | 2024-03-12 | Wireless Mouse | 2 | 29 | 58 |
📊 Transformation Summary:
Records processed: 100
Target table: AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
What happened:
Verification Steps:
1. Confirm the structured table (GENERATED_DATA_STRUCTURED) was created in AI_FRAMEWORK_DB.SILVER_LAYER

Data Quality Check:
-- Run this query to verify your structured data
SELECT
COUNT(*) as total_records,
COUNT(DISTINCT customer_name) as unique_customers,
MIN(order_date) as earliest_order,
MAX(order_date) as latest_order,
SUM(total_amount) as total_revenue
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED;
If you clicked "💾 Save Configuration" before transforming, your settings are saved for reuse:
Load it next time from the configuration dropdown!
1. Cleans LLM Artifacts:
2. Flattens JSON Arrays:
Before (BRONZE_LAYER):
[{"customer_id": 1, ...}, {"customer_id": 2, ...}] ← 1 row, many records
After (SILVER_LAYER):
Row 1: customer_id=1, customer_name=..., email=...
Row 2: customer_id=2, customer_name=..., email=... ← Many rows, structured columns
3. Creates Proper SQL Table:
BRONZE_LAYER (Raw Synthetic Data)
├─ Table: GENERATED_DATA
├─ Structure: Batched JSON arrays
├─ Columns: MESSAGES, _META_* fields
└─ Rows: 10 (one per batch)
↓ Transform ↓
SILVER_LAYER (Structured Data)
├─ Table: GENERATED_DATA_STRUCTURED
├─ Structure: Individual records in columns
├─ Columns: CUSTOMER_ID, CUSTOMER_NAME, EMAIL, ORDER_DATE, etc.
└─ Rows: 100 (individual records)
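Conceptually, the transformation parses the MESSAGES JSON and flattens each array element into its own row. A hedged sketch of that step in plain SQL follows; the app generates its column list dynamically from the detected fields, whereas this query hard-codes the example fields:

-- Sketch only: the Structured Tables app builds this dynamically
CREATE OR REPLACE TABLE AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED AS
SELECT
    rec.value:customer_id::NUMBER        AS CUSTOMER_ID,
    rec.value:customer_name::STRING      AS CUSTOMER_NAME,
    rec.value:email::STRING              AS EMAIL,
    rec.value:order_date::DATE           AS ORDER_DATE,
    rec.value:product_name::STRING       AS PRODUCT_NAME,
    rec.value:quantity::NUMBER           AS QUANTITY,
    rec.value:price::NUMBER(10,2)        AS PRICE,
    rec.value:total_amount::NUMBER(10,2) AS TOTAL_AMOUNT
FROM AI_FRAMEWORK_DB.BRONZE_LAYER.GENERATED_DATA src,
     LATERAL FLATTEN(input => PARSE_JSON(src.MESSAGES)) rec
WHERE src._META_COMPANY_NAME = 'Acme Corp'
  AND src._META_TOPIC = 'Customer Orders';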
After transformation, your data is ready for:
Structured tables work with:
Export structured data via:
-- Export to CSV
SELECT *
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED;
-- Use in Python/Snowpark
session.table("AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED").to_pandas()
-- Connect BI tools directly to SILVER_LAYER tables
For Persona 1 (Full-Stack Developer): → Continue to Page 10 (YAML Wizard) to create dashboard configurations
For Persona 2 (SQL Demo Creator): → Continue to Page 8 (SQL to YAML Converter) to create demo flows
For Persona 3 (Data Preparation): → Export your data from SILVER_LAYER or continue to other apps
For All Personas: Your data is now in clean, structured format in SILVER_LAYER - ready for analytics, dashboards, demos, or export!
Purpose: Convert SQL queries into interactive demo configurations for Snow Demo
Dependencies: Requires tables from App 1 or 2
Output: YAML files for FRAMEWORK_YAML_STAGE

Navigate to Projects → Streamlit → SQL_TO_YAML_CONVERTER
Main Panel - Input SQL Worksheet Section:
Choose Input Method:
◉ Paste SQL
○ Upload File
Select Paste SQL to enter your queries directly, or Upload File to upload a .sql or .txt file.
SQL Input Text Area:
Replace the placeholder SQL with your actual queries from the structured tables you created:
-- Step 1: Customer Overview
SELECT
CUSTOMER_NAME,
EMAIL,
ORDER_DATE,
PRODUCT_NAME,
TOTAL_AMOUNT
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
LIMIT 10;
-- Step 2: Revenue by Product
SELECT
PRODUCT_NAME,
COUNT(*) as order_count,
SUM(TOTAL_AMOUNT) as total_revenue,
AVG(TOTAL_AMOUNT) as avg_order_value
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
GROUP BY PRODUCT_NAME
ORDER BY total_revenue DESC;
-- Step 3: Top Customers Analysis
SELECT
CUSTOMER_NAME,
COUNT(*) as total_orders,
SUM(TOTAL_AMOUNT) as total_spent,
AVG(TOTAL_AMOUNT) as avg_order_value
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
GROUP BY CUSTOMER_NAME
ORDER BY total_spent DESC
LIMIT 10;
Tips:
- Prefix each query with a comment in the -- Step X: format so steps are detected (e.g., -- Step 1: Customer Overview)

Demo Metadata Section (Two Columns):
Left Column:
Topic:
Customer Analytics
Sub-topic:
Order Analysis
Tertiary Topic:
Revenue Insights
Title:
Acme Corp Customer Orders Analytics Dashboard
Right Column:
Logo URL: (optional - leave blank)
Owner:
Data Analytics Team
Database: (leave blank to auto-detect)
Schema: (leave blank to auto-detect)
Overview Description:
Comprehensive analysis of Acme Corp customer order data showcasing:
- Customer order patterns and revenue trends
- Top-performing products and customer segments
- AI-powered customer insights and recommendations
Tips:
- Start lines with - for better formatting in the Overview description

Expandable Advanced Options Section:
SQL Block Separator: GO
Role: (leave blank)
Warehouse: (leave blank)
Default settings work for most cases. Only change if you have specific requirements.
Bottom of Input Section:
Click the blue [Parse SQL Worksheet] button
What happens:
- Each query becomes a demo step with a suggested visualization:
  - GROUP BY → Bar Chart
  - SELECT * → Table

Processing time: ~5-10 seconds
Results Section - Tab 1 (Summary):
Key Metrics:
- 3 Total Steps
- 1 Table Referenced
- 2 Visualization Types
Cortex AI Analysis:
- 0 Cortex Complete calls detected
- 0 Interactive Cortex steps created
Interactive Steps:
- None (add CORTEX.COMPLETE() for interactive AI steps)
This shows what the app detected in your SQL and how it will be presented in Snow Demo.
Results Section - Tab 2 (Parsed Blocks):
Step 1: Customer Overview
- Type: Query
- Visualization: Table
- SQL: SELECT CUSTOMER_NAME, EMAIL...
Step 2: Revenue by Product
- Type: Query
- Visualization: Bar Chart
- SQL: SELECT PRODUCT_NAME, COUNT(*) as order_count...
Step 3: Top Customers Analysis
- Type: Query
- Visualization: Table
- SQL: SELECT CUSTOMER_NAME, COUNT(*) as total_orders...
Verify all your steps are correctly parsed and visualization types make sense.
Results Section - Tab 3 (Generated YAML):
Shows the complete YAML configuration that will be used by Snow Demo. This includes:
You don't need to edit this manually - it's automatically generated!
Results Section - Tab 4 (Download & Export):
Configuration Name:
Customer_Analytics_Order_Analysis_Revenue_Insights_20250115
Option 1: Save to Database (Recommended)
- Saves the configuration to AI_FRAMEWORK_DB.CONFIG.DEMO_CONFIGURATIONS

Option 2: Download YAML File
- Downloads a .yaml file for uploading to the Snow Demo stage

SQL Analysis:

Visualization Suggestions:
- GROUP BY queries → Bar Chart visualizations

Interactive AI Steps:
- Created from any CORTEX.COMPLETE() calls

YAML Generation:
-- Shows as Table view
SELECT customer_name, order_date, total_amount
FROM my_table
LIMIT 10;
-- Shows as Bar Chart
SELECT product_category, SUM(revenue) as total_revenue
FROM my_table
GROUP BY product_category
ORDER BY total_revenue DESC;
-- Shows as Interactive AI Panel
SELECT
SNOWFLAKE.CORTEX.COMPLETE('mixtral-8x7b',
'Analyze this data: ' || column_name
) as ai_insights
FROM my_table;
Write clear SQL comments: Use -- Step X: format for step detection
Include Cortex AI: Add CORTEX.COMPLETE() for interactive demos
Mix query types: Combine SELECT, GROUP BY, and AI functions
Use descriptive metadata: Clear titles and topics help viewers understand
Test queries first: Run SQL in worksheet before converting
For Persona 2 (SQL Demo Creator):
Your SQL queries are now a professional, interactive demo ready for presentations!
Purpose: Run interactive SQL-driven presentations with live visualizations
Dependencies: Requires YAML configs from App 3 (uploaded to FRAMEWORK_YAML_STAGE)
Output: Live demo orchestration with charts and AI experimentation

Before using Snow Demo, upload your YAML file to Snowflake:
- Upload the YAML file to FRAMEWORK_YAML_STAGE under a project directory such as /analytics/ (or choose: sales_demo, customer_insights, etc.)
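If you prefer to upload from the command line instead of Snowsight, a PUT from SnowSQL (or another client that supports PUT) along these lines should work. The local file path is a placeholder, and the stage's database/schema is not specified in this guide, so qualify it to match your setup:

-- Sketch: upload a generated YAML config to the Snow Demo stage
PUT file:///path/to/your_demo_config.yaml
  @FRAMEWORK_YAML_STAGE/analytics/   -- qualify with database.schema if needed
  AUTO_COMPRESS = FALSE;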
Navigate to Projects → Streamlit → SNOW_DEMO

Left Sidebar: Select the project directory where you uploaded your YAML file (e.g., analytics)
Left Sidebar: Select your YAML configuration file from the dropdown
Left Sidebar: Review the auto-displayed demo metadata, then click [Run Demo]
Main Panel: Each SQL step appears as a section with:
Tips: Use the Display Options dropdown to switch visualizations on the fly
If your SQL includes SNOWFLAKE.CORTEX.COMPLETE() calls, you'll see an interactive panel where you can:
Prepare ahead: Test demo flow before presentations
Use talk tracks: Add presenter notes in YAML for guidance
Practice transitions: Know when to switch visualizations
Engage audience: Ask for prompt suggestions during AI steps
Keep queries fast: Use a LIMIT clause on demo data
For Persona 2 (SQL Demo Creator):
Your demo is complete! You can:
Return to Page 5 to explore other workflows or continue to Page 12 for cleanup instructions.
Purpose: Create dashboard configurations through guided interface
Dependencies: Requires tables from App 1 or 2
Output: YAML files for VISUALIZATION_YAML_STAGE

Navigate to Projects → Streamlit → YAML_WIZARD
Main Panel - Top Section:
◉ Create new (selected by default)
○ Load existing
Database: AI_FRAMEWORK_DB ▼
Schema: SILVER_LAYER ▼
Table: TECHCORP_ORDERS_STRUCTURED ▼
Schema Selection Guide:
Select your structured table from the previous steps.
Configure Dimensions, Metrics, Time Column Section:
Dimensions (Left Column):
Select text/categorical fields to group by:
☑ CUSTOMER_NAME
☑ PRODUCT_NAME
☐ EMAIL
☐ ...
Check 2-5 key categorical fields you want to analyze.
Time Column (Right Column):
Time Column for Trends:
ORDER_DATE ▼
Select your date/timestamp field for time-series analysis.
Metrics (Below Columns):
Auto-generated metrics from your table:
☑ total_rows (COUNT(*))
☑ avg_quantity (AVG(QUANTITY))
☑ sum_total_amount (SUM(TOTAL_AMOUNT))
☑ avg_price (AVG(PRICE))
☐ ...
Check 3-7 key metrics you want to calculate. The app automatically creates aggregation functions.
Tips:
Click the "Dimensions" tab
For each dimension, you can customize:
CUSTOMER_NAME:
Label: Customer Name
Description: Customer who placed the order
Priority: 0
Unique Values: (auto-detected)
PRODUCT_NAME:
Label: Product
Description: Product purchased
Priority: 1
What to customize:
IMPORTANT: After editing, click "Apply All Dimension Changes" button at the bottom!
Click the "Metrics" tab
For each metric, you can customize:
total_rows:
Label: Total Orders
SQL: COUNT(*)
Format: number
Decimals: 0
sum_total_amount:
Label: Total Revenue
SQL: SUM(TOTAL_AMOUNT)
Format: currency
Decimals: 2
avg_price:
Label: Average Price
SQL: AVG(PRICE)
Format: currency
Decimals: 2
What to customize:
IMPORTANT: After editing, click "Apply All Metric Changes" button at the bottom!
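Each metric's SQL expression is what Snow Viz aggregates whenever you pair that metric with a dimension. For intuition, the Total Revenue and Total Orders metrics grouped by the Product dimension correspond to something like the query below (a sketch of the kind of query the dashboard issues, not its exact SQL; substitute your structured table name):

-- Sketch: Total Revenue and Total Orders by product, as configured above
SELECT
    PRODUCT_NAME,
    COUNT(*)          AS total_orders,
    SUM(TOTAL_AMOUNT) AS total_revenue
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED  -- replace with your table
GROUP BY PRODUCT_NAME
ORDER BY total_revenue DESC;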
Click the "Generate" tab, then enter:
App Name:
Acme Corp Customer Orders Dashboard
Description:
Comprehensive analysis of customer order data
YAML Filename:
acme_corp_orders_dashboard.yaml
Click "Generate Customized YAML" → Generates 8 tabs (Overview, Product/Category, VS, Top N, Self Service, Search, AI Assistant, Raw Data)
Click "Download YAML" button
Optional: Click "Save to AI_FRAMEWORK_DB.CONFIGS" to save your customizations for later editing
Upload your YAML file to Snowflake:
- Upload to VISUALIZATION_YAML_STAGE under a project directory such as /customer_orders/ (or your project name)

What You Created:
Why Two Saves?:
These messages are NORMAL for first-time use:
No Cortex Search services found in this database/schema
Create a Cortex Search service first to enable semantic search
Ignore this - Search services are advanced/optional
Table exists but no configurations found
No configs saved yet.
Configuration table has 0 saved configs
Ignore this - Normal until you save your first config
Start simple: Pick 2-3 dimensions and 3-5 metrics for first try
Use clear labels: "Product Category" is better than "PRODUCT_CATEGORY"
Format metrics: Use currency for money, percent for rates
Save your work: Both download AND save to database
Test in Snow Viz: Verify dashboard works as expected
For Persona 1 (Full-Stack Developer):
You now have a dashboard configuration file! Next steps:
Your data is now ready for visual analytics with 8 interactive dashboard tabs!
Purpose: Render advanced interactive dashboards from YAML configurations
Dependencies: Requires YAML configs from App 5 (uploaded to VISUALIZATION_YAML_STAGE)
Output: Multi-tab analytics dashboards with AI integration

Navigate to Projects → Streamlit → SNOW_VIZ
Left Sidebar - Configuration Source:
Load from:
◉ Stage
○ Local file
Select Stage (recommended - loads from VISUALIZATION_YAML_STAGE)
Left Sidebar - After selecting Stage:
Project: [Select Project] ▼
Available: techcorp_orders, analytics, sales_dashboard
YAML File: [Select YAML] ▼
Available: techcorp_orders_dashboard.yaml
The dashboard will automatically load.
Left Sidebar - Navigation Section:
Select Page:
◉ Overview
○ Product / Category
○ VS (Compare)
○ Top N
○ Self Service
○ Search
○ AI Assistant
○ Raw Data
Each tab provides different analytical views of your data.
Main Panel - Overview Tab:
Time Controls (Top):
Time Window: last_3_months ▼
Options: last_7_days, last_30_days, last_3_months, last_6_months, last_year, all_time
Time Grain: month ▼
Options: day, week, month, quarter, year
Key Metrics Cards:
[Total Orders] [Total Revenue] [Average Price]
1,234 $156,789 $127.15
↑ 12% vs prev ↑ 8% vs prev ↓ 3% vs prev
Metric cards are interactive - click to select which metric to visualize below.
Visualizations:
Left Side: Time Series Chart
Shows trend line for selected metric over time
- X-axis: Time periods (based on Time Grain)
- Y-axis: Metric values
- Hover for exact values
Right Side: Ranked Grid
Dimension: [Select Dimension] ▼
Shows top 10 results in table format:
| Product Name | Total Revenue | % of Total |
|----------------|---------------|------------|
| Laptop Pro | $45,678 | 29% |
| Wireless Mouse | $23,456 | 15% |
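Both panels are driven by straightforward aggregations over your structured table. Roughly, and only as a sketch (the app generates its SQL from the YAML configuration), the time series and ranked grid correspond to queries like:

-- Sketch: time series for the selected metric at month grain
SELECT
    DATE_TRUNC('month', ORDER_DATE) AS period,
    SUM(TOTAL_AMOUNT)               AS total_revenue
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
WHERE ORDER_DATE >= DATEADD('month', -3, CURRENT_DATE)   -- last_3_months window
GROUP BY period
ORDER BY period;

-- Sketch: ranked grid (top 10 by the selected metric and dimension)
SELECT
    PRODUCT_NAME,
    SUM(TOTAL_AMOUNT)                                           AS total_revenue,
    ROUND(100 * RATIO_TO_REPORT(SUM(TOTAL_AMOUNT)) OVER (), 1)  AS pct_of_total
FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
GROUP BY PRODUCT_NAME
ORDER BY total_revenue DESC
LIMIT 10;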
Dimension Analysis:
Select Dimension: PRODUCT_NAME ▼
Options: All configured dimensions
Select Metric: Total Revenue ▼
Options: All configured metrics
Shows detailed breakdown by selected dimension with:
Side-by-Side Comparison:
Left Entity: [Select] ▼
Right Entity: [Select] ▼
Metrics to Compare:
☑ Total Orders
☑ Total Revenue
☑ Average Price
☐ ...
Comparison Table:
| Metric | Laptop Pro | Wireless Mouse | Winner | Delta |
|--------------|------------|----------------|---------------|----------|
| Total Orders | 456 | 789 | Wireless Mouse| +73% |
| Total Revenue| $45,678 | $23,456 | Laptop Pro | +95% |
Perfect for comparing products, customers, or any dimension values.
Leaderboard Analysis:
Select Dimension: PRODUCT_NAME ▼
Select Metric: Total Revenue ▼
Top N: 10 ▼
Options: 5, 10, 20, 50, 100
Sort Order:
◉ Descending (highest first)
○ Ascending (lowest first)
Shows ranked list with:
Custom Analysis:
Select Dimensions (grouping):
☑ PRODUCT_NAME
☑ CUSTOMER_NAME
☐ ...
Select Metrics (calculations):
☑ Total Revenue
☑ Average Price
☐ ...
Time Range: last_3_months ▼
Build custom reports by selecting any combination of dimensions and metrics.
Results appear in an interactive data table with:
Powered by Cortex Analyst:
Type your question in natural language:
What are the top 3 products by revenue in the last quarter?
Click [Ask Analyst] → Select view option (Grid, Bar, or Line chart)
AI Narrative (Optional): Generate AI analysis by selecting a model, adjusting temperature, and clicking [Generate Analysis]
Example Questions:
Semantic Search (if configured):
Enter your search query:
laptop with high ratings
Click [Search] → Shows relevant records based on semantic similarity
Note: Requires Cortex Search service to be configured. If not set up, this tab will show a setup message.
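Creating a Cortex Search service is optional and not part of this quickstart's setup script, but if you want to light up the Search tab, the general shape of the DDL is shown below. The service name, search column, and attributes are illustrative; adapt them to your structured table and use the warehouse from your setup:

-- Sketch: a Cortex Search service over product names (illustrative only)
CREATE OR REPLACE CORTEX SEARCH SERVICE AI_FRAMEWORK_DB.SILVER_LAYER.PRODUCT_SEARCH
  ON PRODUCT_NAME
  ATTRIBUTES CUSTOMER_NAME, ORDER_DATE
  WAREHOUSE = CORTEX_FRAMEWORK_WH
  TARGET_LAG = '1 day'
  AS (
    SELECT PRODUCT_NAME, CUSTOMER_NAME, ORDER_DATE, TOTAL_AMOUNT
    FROM AI_FRAMEWORK_DB.SILVER_LAYER.GENERATED_DATA_STRUCTURED
  );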
Shows complete dataset in table format with sortable columns and CSV export option.
Use this tab to:
Interactive Elements:
Time Controls:
AI Integration:
Explore systematically: Start with Overview, then drill into specific tabs
Use AI Assistant: Natural language queries are powerful and intuitive
Compare entities: VS tab helps identify top performers
Export insights: Share findings via CSV export
Adjust time windows: Find the right time range for your analysis
For Persona 1 (Full-Stack Developer):
Your complete analytics pipeline is built! You've created:
Share your dashboard with business users and stakeholders!
For Persona 4 (Dashboard Consumer):
You now have an interactive analytics dashboard! You can:
Return to Page 5 to explore other workflows or continue to Page 12 for cleanup instructions.
When you're ready to remove all the resources created during this quickstart:
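If the quickstart provides a teardown script, use it; otherwise, dropping the top-level objects created during setup achieves the same result. A hedged sketch, run as a role with sufficient privileges:

-- Sketch: remove the framework's top-level objects
USE ROLE ACCOUNTADMIN;
DROP DATABASE IF EXISTS CORTEX_FRAMEWORK_DB;           -- or AI_FRAMEWORK_DB if your setup used that name
DROP WAREHOUSE IF EXISTS CORTEX_FRAMEWORK_WH;
DROP ROLE IF EXISTS CORTEX_FRAMEWORK_DATA_SCIENTIST;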
This will clean up all framework components while preserving any other work in your Snowflake account.
Congratulations! You've successfully built the complete Cortex AI Demo Framework using Snowflake Cortex AI!