This quickstart demonstrates how to build a comprehensive call center analytics solution leveraging Snowflake's Cortex AISQL for intelligent audio transcription (Cortex AI Transcribe), conversation analysis, and speaker identification. The solution integrates Streamlit dashboards to give organizations actionable insights from call center conversations through AI-powered analytics, sentiment analysis, and natural language querying over audio files.
You'll learn to architect an end-to-end analytics pipeline that seamlessly processes audio files, extracts meaningful conversation insights, and delivers an intuitive conversational interface enabling business users to query call center data using natural language—all powered by Snowflake's integrated Cortex AISQL platform.
Source Code - https://github.com/Snowflake-Labs/sfguide-building-cortex-aisql-powered-callcentre-analytics
You will quickly learn how to use Cortex AI Transcribe for automated audio file processing, utilize advanced Cortex AISQL functions including AI_Complete and AI_Sentiment for intelligent data extraction from transcribed conversations, and integrate Cortex Analyst with semantic models to create sophisticated conversational chatbots. Additionally, you will develop interactive Streamlit dashboards that present critical KPIs and comprehensive analytics, empowering business stakeholders with data-driven insights.
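As a rough illustration of how the AISQL functions above are invoked, the sketch below assembles the SQL text for AI_SENTIMENT and AI_COMPLETE calls over transcribed conversations. The table and column names (CALL_TRANSCRIPTS, TRANSCRIPT_TEXT) are hypothetical, not from the repo; in the actual solution the statements run through a Snowpark session.

```python
# Sketch only: builds Cortex AISQL statements as strings.
# Table/column names below are hypothetical examples, not from the repo.

def sentiment_sql(table: str, text_col: str) -> str:
    """SQL that scores sentiment for each transcribed conversation."""
    return (
        f"SELECT {text_col}, AI_SENTIMENT({text_col}) AS sentiment "
        f"FROM {table}"
    )

def complete_sql(table: str, text_col: str, instruction: str) -> str:
    """SQL that asks an LLM to extract structured info from a transcript."""
    prompt = instruction.replace("'", "''")  # escape single quotes for SQL
    return (
        f"SELECT AI_COMPLETE('claude-3-5-sonnet', "
        f"'{prompt}' || {text_col}) AS extraction "
        f"FROM {table}"
    )

print(sentiment_sql("CALL_TRANSCRIPTS", "TRANSCRIPT_TEXT"))
```

In the notebook these statements would be executed with something like `session.sql(...)` against the transcription output table.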
A comprehensive call center analytics solution featuring:
This solution uses the CLAUDE-3-5-SONNET model. If the model is not available in your region, enable Cortex cross-region inference for your account: https://docs.snowflake.com/en/user-guide/snowflake-cortex/cross-region-inference
To set up your Snowflake environment for the call centre analytics solution, perform the following steps:
The script will create:
Ensure the uploaded files have the following folder structure in the stage.
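Before uploading, the layout can be sanity-checked locally. The small sketch below (file names are hypothetical) flags any path that is not under the audio_files/ folder or lacks an audio extension:

```python
# Sketch: validate that staged paths follow the expected layout,
# i.e. every audio file lives under the audio_files/ folder.

def misplaced_files(stage_paths):
    """Return paths not under audio_files/ or without an audio extension."""
    ok_exts = (".mp3", ".wav")
    return [
        p for p in stage_paths
        if not p.startswith("audio_files/") or not p.lower().endswith(ok_exts)
    ]

paths = ["audio_files/call_001.mp3", "call_002.mp3", "audio_files/notes.txt"]
print(misplaced_files(paths))  # → ['call_002.mp3', 'audio_files/notes.txt']
```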
If you are unable to create the API integration and Git repository in Snowflake, upload the following files to the stages created above.
Upload the audio_files folder from your cloned repo to the AUDIO_FILES stage that you created above. The following two files should also be uploaded:

- streamlit_cortex_analyst_dashboard_sis.py - the main Streamlit application
- call_center_analytics_model.yaml - the semantic model for Cortex Analyst

Upload the 'Call-Centre-Analytics-Notebook.ipynb' notebook from Snowsight.
Navigate to Projects → Notebooks, then click ⌄ to import an ipynb file and select Call-Centre-Analytics-Notebook.ipynb. Make sure the notebook location matches the call_centre_analytics_db database and the public schema, and choose the warehouse created earlier, cca_xs_wh, for both the query warehouse and the notebook warehouse.
Run through the notebook, which does the following:
After you complete the notebook, create the Streamlit app by navigating to Projects → Streamlit and creating a new Streamlit application with the following options:
- Database: CALL_CENTRE_ANALYTICS_DB
- Schema: PUBLIC
- Warehouse: cca_xs_wh

Paste the contents of streamlit_cortex_analyst_dashboard_sis.py and run the Streamlit app.
When running the app for the first time, you will get a package-not-found error for plotly. Add plotly to the call_centre_analytics_app Streamlit app packages and rerun the app.
The solution provides multiple analytical views:
Overview Dashboard
Advanced Analytics
The heart of the application leverages Snowflake's Cortex Analyst for natural language querying over the audio-derived data. The chat interface sends the conversation history with each request, so follow-up questions can build on previous ones.
Chat Interface Implementation
from typing import Any, Dict, List

def query_cortex_analyst_rest_api(question: str, conversation_history: List[Dict] = None) -> Dict[Any, Any]:
    """Query Cortex Analyst with conversation context."""
    # session, semantic_model_content, and CORTEX_ANALYST_ENDPOINT are
    # defined elsewhere in the app. Prior turns are passed along so
    # follow-up questions can reference earlier answers.
    request_body = {
        "question": question,
        "semantic_model": semantic_model_content,
        "conversation": conversation_history or []
    }
    response = session.post(CORTEX_ANALYST_ENDPOINT, json=request_body)
    return response.json()
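For follow-up questions to work, each turn's question and answer must be appended to the history that is sent on the next call. A minimal sketch of that bookkeeping, with the Analyst call stubbed out (fake_analyst is a stand-in, not the real API), might look like:

```python
# Sketch: maintain conversation history between Cortex Analyst turns.
# The real REST call is stubbed; only the history bookkeeping is shown.

def fake_analyst(question, conversation_history=None):
    """Stand-in for query_cortex_analyst_rest_api."""
    return {"answer": f"answer to: {question}"}

def ask(question, history):
    """Send a question with prior context, then record the new turn."""
    result = fake_analyst(question, conversation_history=history)
    history.append({"role": "user", "content": question})
    history.append({"role": "analyst", "content": result["answer"]})
    return result

history = []
ask("How many calls were not resolved on the first attempt", history)
ask("give me the list of reps for the above data", history)
print(len(history))  # → 4: two user turns and two analyst turns
```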
Question asked: How many calls were not resolved on the first attempt
and below is the response:
Now I ask a follow-up question: give me the list of reps and the audio info along with date for the above data
and below is the output:
You can see from the screenshot that it returns the rep name and the audio file for every conversation that was not a first-call resolution. Following is the query generated by Cortex Analyst, which filters on first_call_resolution = 'No', showing how powerful Cortex Analyst is.
WITH __call_center_interactions AS (
SELECT
audio_file_name,
representative AS representative_name,
firstcallresolution AS first_call_resolution,
datetime AS call_date
FROM call_centre_analytics_db.public.streamlitapptable
)
SELECT
representative_name,
audio_file_name,
call_date
FROM __call_center_interactions
WHERE
first_call_resolution = 'No'
ORDER BY
call_date DESC NULLS LAST
-- Generated by Cortex Analyst
;
Features:
The application provides comprehensive audio file handling. You can upload a file from your local machine and have the app process the audio file.
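Before an ad-hoc upload is sent to the stage for processing, the app can validate it client-side. A minimal sketch of such a check follows; the size limit, allowed extensions, and helper name are illustrative assumptions, not taken from the repo:

```python
# Sketch: basic client-side checks before uploading an ad-hoc audio file.
# The 50 MB limit and allowed extensions are illustrative assumptions.

MAX_BYTES = 50 * 1024 * 1024
ALLOWED_EXTS = {".mp3", ".wav"}

def validate_upload(filename: str, size_bytes: int) -> str:
    """Return 'ok' or a human-readable rejection reason."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTS:
        return f"unsupported file type: {ext or 'none'}"
    if size_bytes > MAX_BYTES:
        return "file too large"
    return "ok"

print(validate_upload("ztest_9.mp3", 2_000_000))  # → ok
```

A file that passes these checks would then be PUT to the audio stage and picked up by the transcription pipeline.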
Key Features:
You can use the file ztest_9.mp3 to test uploading and processing an ad-hoc audio file from the Streamlit app.
This quickstart demonstrated how to build a comprehensive call center analytics solution powered by Cortex AISQL. By combining Cortex Analyst's natural language capabilities with Streamlit's interactive interface, you've created a powerful tool for extracting insights from call center audio data.
You've built a complete call center analytics solution featuring: