Learning to Prompt Better
Prerequisites: This guide assumes you have:
- Completed the Getting Started tutorial
- Completed at least one domain-specific tutorial
- Familiarity with Workbooks, Sources, Views, @mentions, and slash commands
New to Shadowfax? Start here instead.
You've learned the basics. Now let's master the art of prompting to unlock Shadowfax's full analytical potential. This guide covers advanced techniques, best practices, and strategies for becoming an expert at communicating with the AI agent.
Providing Context: The Foundation of Good Prompts
Context is the most critical factor in getting accurate results from Shadowfax. Without proper context, the AI has to guess what you mean—and guesses lead to incorrect analyses.
Shadowfax has a three-layer context model:
1. Knowledge Layer (Persistent Context)
What is Knowledge?
- Documents stored in your Knowledge Library
- Added to your Workbook for the agent to reference automatically
- Contains business context the agent should always know
What to put in Knowledge:
- KPI and metric definitions: "Customer churn means no purchase in 90 days"
- Business rules: "Fiscal year starts in July"
- Data dictionaries: What columns mean, known data quirks
- Calculation methodologies: Standard formulas your company uses
- Reporting standards: How to group data, default filters
Example Knowledge Document:
# Sales Analytics Guide
## Metric Definitions
- **MRR (Monthly Recurring Revenue)**: Sum of subscription_amount
for all subscriptions where status = 'active'
- **Customer Acquisition Cost (CAC)**: total_marketing_spend / new_customers
- **Churn Rate**: customers_lost / total_customers_at_start_of_period
## Business Rules
- Always exclude test accounts (customer_id < 1000)
- Fiscal quarters: Q1 (July-Sept), Q2 (Oct-Dec), Q3 (Jan-Mar), Q4 (Apr-June)
- Revenue includes only transactions with status = 'completed'
## Data Notes
- The 'region' column may contain nulls → treat as 'Unassigned'
- Amounts in the transaction table are in cents, not dollars
- All timestamps are in UTC
- The 'created_at' field is when the record was entered,
'transaction_date' is when it actually occurred
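The definitions in the example document are plain prose, but the agent compiles them into SQL. As a rough illustration, here is a Python sqlite3 sketch of the kind of query the MRR definition and business rules imply. The table, columns, and data are toy assumptions taken from the example above (amounts in cents, test accounts below customer_id 1000):

```python
import sqlite3

# Toy data following the Knowledge rules above: amounts are in cents,
# and customer_id < 1000 marks test accounts to exclude.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE subscriptions (customer_id INT, subscription_amount INT, status TEXT)"
)
conn.executemany(
    "INSERT INTO subscriptions VALUES (?, ?, ?)",
    [
        (500,  9900, "active"),     # test account -> excluded
        (1001, 4900, "active"),
        (1002, 9900, "active"),
        (1003, 2900, "cancelled"),  # not active -> excluded
    ],
)

# MRR per the Knowledge definition: sum of subscription_amount where
# status = 'active', excluding test accounts; convert cents -> dollars.
(mrr_cents,) = conn.execute(
    """
    SELECT COALESCE(SUM(subscription_amount), 0)
    FROM subscriptions
    WHERE status = 'active' AND customer_id >= 1000
    """
).fetchone()
print(mrr_cents / 100)  # 148.0
```

Because the rules live in Knowledge, every analysis that touches this table applies the same filters without you restating them.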
Benefits of Knowledge:
Define once, use everywhere—don't repeat definitions in every prompt
Consistency across analyses—everyone uses the same metrics
Onboard new team members quickly—Knowledge documents the domain
Focus prompts on specifics—skip the boilerplate context
2. @Mention Layer (Per-Prompt Data References)
Always @mention your data sources. For advanced multi-table analyses, chain multiple @mentions to join datasets:
@[customers] @[orders] @[products]
Join all three and calculate customer lifetime value by product category
3. Prompt Layer (Specific Instructions)
This is where you specify:
- What to calculate or analyze
- How to group, filter, or aggregate
- Time ranges and constraints
- Output format preferences
- One-off adjustments or exceptions
Example combining all three layers:
@[transactions] @[customers]
Calculate CAC for Q3 2024, but for this analysis only,
include digital marketing spend (not the full marketing budget).
- Knowledge layer: CAC definition is in your Knowledge document
- @Mention layer: Specifies which data tables to use
- Prompt layer: Adds specific time range and special constraint
Directing the Agent with Slash Commands
/plan - Strategic Planning for Complex Analyses
Use /plan for multi-step analyses where you want to review the approach before execution:
Example:
/plan @[sales_data] @[customer_data]
I need a comprehensive customer segmentation based on purchase behavior,
frequency, recency, and lifetime value.
What happens:

1. Agent proposes a plan:

   Here's my approach:

   Step 1: Join sales and customer tables
   - Use LEFT JOIN to keep all customers
   - Aggregate purchase data per customer

   Step 2: Calculate RFM metrics
   - Recency: days since last purchase
   - Frequency: total number of purchases
   - Monetary: total amount spent

   Step 3: Score each RFM dimension
   - Divide into quintiles (1-5 scale)
   - Higher scores = better performance

   Step 4: Create segment definitions
   - VIP: RFM scores all 4-5
   - High-Value: Monetary 4-5, others 3+
   - Regular: Scores mostly 2-3
   - At-Risk: Recency 1-2, Frequency/Monetary 3+
   - Inactive: All scores 1-2

   Would you like me to proceed?

2. You review and respond:

   Looks good, but let's use 4 segments instead of 5.
   Combine "Regular" and "At-Risk" into one "Active" segment.

3. Agent adjusts the plan:

   Updated plan:
   - VIP: RFM scores all 4-5
   - High-Value: Monetary 4-5, others 3+
   - Active: All others with at least one purchase
   - Inactive: No purchases in the period

   Proceed with this approach?

4. You approve:

   Yes, proceed

5. Agent executes step by step, creating SQL views at each stage.
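To make the plan's Steps 2-4 concrete, here is a minimal Python sketch with made-up per-customer aggregates. In Shadowfax the agent would do this in SQL views; the quintile and segment rules follow the updated 4-segment plan, and all names and numbers are hypothetical:

```python
from datetime import date

# Hypothetical per-customer aggregates (the plan's Step 2 output):
# name, last purchase date, purchase count, total spent.
customers = [
    ("a", date(2024, 9, 1),  12, 5400.0),
    ("b", date(2024, 6, 15),  3,  420.0),
    ("c", date(2024, 8, 20),  7, 1900.0),
    ("d", date(2024, 1, 5),   1,   60.0),
    ("e", date(2024, 9, 10), 20, 8800.0),
]
today = date(2024, 9, 30)

def quintile_scores(values, higher_is_better=True):
    """Step 3: rank each value and map it to a 1-5 quintile score."""
    n = len(values)
    ranked = sorted(range(n), key=lambda i: values[i])
    if not higher_is_better:
        ranked.reverse()
    scores = [0] * n
    for pos, i in enumerate(ranked):
        scores[i] = 1 + (pos * 5) // n  # worst -> 1, best -> 5
    return scores

recency = [(today - last).days for _, last, _, _ in customers]
r = quintile_scores(recency, higher_is_better=False)  # fewer days = better
f = quintile_scores([c[2] for c in customers])
m = quintile_scores([c[3] for c in customers])

def segment(r, f, m):
    """Step 4 (updated 4-segment version): assign a segment from RFM scores."""
    if min(r, f, m) >= 4:
        return "VIP"
    if m >= 4 and min(r, f) >= 3:
        return "High-Value"
    return "Active"  # everyone in this toy data has at least one purchase

segments = {c[0]: segment(r[i], f[i], m[i]) for i, c in enumerate(customers)}
print(segments)  # {'a': 'VIP', 'b': 'Active', 'c': 'Active', 'd': 'Active', 'e': 'VIP'}
```

Reviewing a sketch like this during the /plan exchange is exactly where you'd catch a scoring rule you disagree with before any views are created.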
Benefits:
- Catch misunderstandings before computation
- Learn analytical techniques from the agent's reasoning
- Ensure alignment with your expectations
- Modify approach based on domain knowledge
/visualize - Advanced Visualization Prompting
Use /visualize to create sophisticated charts. The more specific your prompt, the better the result:
Basic Example:
/visualize @[monthly_sales]
Create a line chart showing revenue trend with a 3-month moving average
Advanced Example:
/visualize @[product_performance]
Create a scatter plot with:
- X-axis: units sold (log scale)
- Y-axis: profit margin (%)
- Bubble size: represents revenue
- Color: product category
- Add quadrant lines at median values
- Highlight products with margin >15%
Complex Visualization:
/visualize @[customer_journey]
Create a Sankey diagram showing:
- Left: traffic sources
- Middle: first page visited
- Right: conversion outcome (purchased, signed up, or bounced)
- Flow thickness represents user count
The Visualization Workflow:

1. You describe what you want:

   /visualize @[regional_sales]
   Compare quarterly performance across regions

2. Agent proposes a visualization:

   I'll create a grouped bar chart with:
   - Regions on X-axis
   - Revenue on Y-axis
   - Bars grouped by quarter
   Would you like any specific color scheme?

3. You iterate:

   Make it a horizontal bar chart instead, and sort regions by Q4 revenue

4. Agent updates the chart.

5. You can continue refining:

   Add data labels showing percentage change from Q1 to Q4
/recommendations - Discovering Next Steps
Use /recommendations when exploring new data or when stuck on what to analyze next:
Best used when:
- Starting exploration of new data
- Completed one analysis and wondering what's next
- Feeling stuck or unsure what to investigate
- Want to ensure you're not missing important patterns
Example:
/recommendations
I've just connected my sales database. What should I analyze first?
Agent might suggest:
Based on your sales data, here are recommended starting points:
1. **Revenue Trends**
- Analyze monthly revenue patterns to identify growth trends
- Command: @[sales] Calculate monthly revenue totals with MoM growth
2. **Top Products**
- Identify your best-selling products and their contribution
- Command: @[sales] Show top 20 products by revenue with market share %
3. **Regional Performance**
- Compare sales performance across different regions
- Command: @[sales] Summarize revenue by region with growth rates
4. **Data Quality Check**
- I notice some null values in the 'region' column
- Command: @[sales] Show records with null regions for review
5. **Customer Analysis**
- Join with customer data to analyze purchase patterns
- Command: @[sales] @[customers] Calculate average order value by customer segment
Which would you like to explore first?
After completing an analysis:
/recommendations
I've calculated customer churn rates. What should I look at next?
Agent might suggest:
Great work on the churn analysis! Here are logical next steps:
1. **Churn Drivers**
- Analyze characteristics of churned customers to identify patterns
- What segments have the highest churn?
2. **Retention Cohorts**
- Look at retention curves by signup month to identify trends over time
3. **Win-Back Opportunities**
- Identify recently churned high-value customers for re-engagement
4. **Predictive Analysis**
- Build a risk score to identify customers likely to churn soon
Would you like me to help with any of these?
/taxonomy - Discovering Data Categories
Use /taxonomy when you need to categorize or classify data but aren't sure what categories exist. The AI automatically explores your data, discovers patterns, and proposes meaningful categories:
Best used when:
- You have unstructured or free-form text that needs categorization
- Creating AI tables for text classification (support tickets, product categories, feedback, etc.)
- You need to understand what natural categories exist in your data
- Building taxonomy-aware classification systems
Example:
/taxonomy @[support_tickets] Classify by customer issue type using subject and message body
AI will:
- Explore the data - Analyze cardinality, frequency distribution, and patterns
- Propose categories - Suggest 3-7 meaningful categories with coverage statistics and examples
- Show samples - Demonstrate how edge cases would be classified
- Request validation - Get your approval before creating the AI table
You might see:
## Proposed Taxonomy for support_tickets
Based on your 12,000 tickets, I've identified 6 natural categories:
| Category | Examples | Count | Coverage |
|----------|----------|-------|----------|
| refund_request | "refund please", "want money back" | 2,160 | 18% |
| account_access | "forgot password", "locked out" | 1,800 | 15% |
| product_question | "how do I...", "feature request" | 1,440 | 12% |
| bug_report | "error message", "app crash" | 1,200 | 10% |
| billing_inquiry | "charge on card", "invoice question" | 960 | 8% |
| other | misc requests | 4,440 | 37% |
Does this taxonomy work for you?
- Approve: Create the AI table with these categories
- Adjust: Tell me what to change
- Cancel: Stop and try a different approach
After approval, the AI creates an AI table with the taxonomy automatically.
Core Principles: The C.L.E.A.R. Framework
Effective prompts follow these five principles. Think of C.L.E.A.R. as your checklist:
C - Concise
Get to the point with clear, direct language. Avoid unnecessary words that can confuse the agent.
Bad: "I was wondering if maybe you could possibly take a look at the sales numbers and perhaps show me something about how they've been doing recently"
Good: "@[sales] Show monthly revenue for the last 6 months with growth rates"
Tips for conciseness:
- Use direct verbs: "Calculate", "Show", "Compare", "Identify"
- Specify exactly what you want: metrics, time periods, groupings
- Cut filler words: "maybe", "possibly", "kind of", "sort of"
L - Logical
Organize your prompt in a logical, step-by-step manner. Break complex requests into clear parts.
Bad: "@[sales] Show me sales and also customer segments and do some comparisons with last year and maybe break it down by region"
Good:
@[sales] @[customers]
1. First, join sales with customer segments
2. Then calculate revenue by segment and region
3. Finally, compare to the same period last year
Tips for logical structure:
- Use numbered steps for multi-part requests
- Separate different concerns with line breaks
- Follow a natural flow: prepare data → transform → analyze → visualize
E - Explicit
State exactly what you want and don't want. Never assume the agent will infer your preferences.
Bad: "@[data] Tell me about customers"
Good: "@[customers] Create a summary table showing: total customers, average purchase value, and purchase frequency. Group by customer tier (Free, Pro, Enterprise)."
Be explicit about:
- Column names to use
- Calculations and formulas
- Groupings and aggregations
- Filters and exclusions
- Time periods and date ranges
- Output formats
Examples:
@[transactions]
Calculate total revenue (sum of amount column, divide by 100 for dollars).
Only include records where status = 'completed'.
Exclude test accounts (customer_id < 1000).
Group by month and show results as a table.
A - Adaptive
Don't settle for the first result if it's not perfect. Iterate and refine.
First attempt:
@[sales] Show revenue by product
If result isn't quite right:
Add a column showing each product's percentage of total revenue
Further refinement:
Sort by revenue descending and show only top 20 products
Final adjustment:
Great! Now visualize this as a horizontal bar chart
Tips for adaptation:
- Review the intermediate SQL views to understand what the agent did
- Provide specific feedback: "The grouping should be weekly, not daily"
- Ask clarifying questions: "Why did you exclude these rows?"
- Use the agent as a collaborator: "This isn't quite right. Can you suggest alternatives?"
R - Reflective
Learn from each interaction. Notice what prompting styles work best for your data and use cases.
After a successful analysis:
- What made this prompt effective?
- Save successful prompt patterns for reuse
- Note which specifics led to good results
After issues:
- What was ambiguous in my prompt?
- What context did I forget to provide?
- How can I be clearer next time?
Build a personal prompt library: Keep a document of your most effective prompts for common tasks. This becomes a valuable reference and time-saver.
Working with SQL Views and Intermediate Models
Inspecting Intermediate Views
Click on any view in your Workbook to:
- See the actual data at that transformation step
- Review the SQL query that created it
- Verify calculations and logic
- Check for unexpected results
This is invaluable for:
- Understanding: "How did the agent approach this problem?"
- Debugging: "Where did the calculation go wrong?"
- Learning: "What SQL patterns does the agent use?"
Building on Previous Transformations
You can reference any view you've created:
@[monthly_revenue]
Now add a 3-month moving average column
This creates a new view that builds on your existing monthly_revenue view.
Best Practices for Views
Name views descriptively:
// Bad
@[data] Filter this and call it 'view1'
// Good
@[transactions] Create a view of completed orders only, call it 'completed_orders'
Reference views by name:
@[completed_orders]
Calculate daily totals from this filtered data
Verify before proceeding:
@[transactions] Clean the data by removing nulls and outliers
[Review the cleaned view]
Looks good! Now @[cleaned_transactions] calculate summary statistics
Advanced Prompting Techniques
Once you're comfortable with the basics, these advanced techniques can help you get even better results.
Zero-Shot vs. Few-Shot Prompting
Zero-Shot Prompting: Ask directly without examples
@[sales] Calculate customer churn rate
The agent uses its general knowledge of what "churn rate" means.
Few-Shot Prompting: Provide an example or explicit template of what you want
@[sales]
Calculate churn rate using this formula:
(customers_lost / total_customers_at_start) × 100
Apply this calculation for each month in 2024.
By showing the specific formula, you ensure the agent uses your exact methodology.
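The formula in that prompt translates directly into code. A minimal Python sketch with hypothetical monthly counts (in Shadowfax this would become a SQL view over your actual tables):

```python
# Hypothetical monthly counts; the formula is the one from the prompt:
# (customers_lost / total_customers_at_start) * 100
monthly = {
    "2024-01": {"start": 1000, "lost": 50},
    "2024-02": {"start": 980,  "lost": 29},
    "2024-03": {"start": 1010, "lost": 101},
}

churn = {
    month: round(counts["lost"] / counts["start"] * 100, 1)
    for month, counts in monthly.items()
}
print(churn)  # {'2024-01': 5.0, '2024-02': 3.0, '2024-03': 10.0}
```

Without the formula, the agent might reasonably pick a different denominator (say, average customers over the period), which is why spelling it out matters.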
When to use few-shot:
- You have a specific calculation method
- There are multiple valid interpretations
- You want a particular output format
- The task is unusual or domain-specific
Setting a Role or Perspective
Tell the agent what perspective to take:
@[financial_data]
Act as a CFO reviewing quarterly performance.
Create an executive summary highlighting key financial metrics,
risks, and opportunities.
Or:
@[customer_data]
Take a marketing analyst perspective.
Identify our most valuable customer segments and suggest
targeting strategies for each.
This helps the agent:
- Choose relevant metrics for that perspective
- Frame insights appropriately
- Use domain-appropriate language
Managing Ambiguity and Ensuring Accuracy
AI can sometimes "hallucinate"—confidently state incorrect information. Reduce this risk:
1. Provide schema details when needed:
@[orders]
Note: The 'status' column contains: 'completed', 'pending', 'cancelled', 'refunded'.
Calculate completion rate as completed / (completed + cancelled).
2. Ask for step-by-step reasoning:
@[data]
Explain your approach before executing. Walk me through how you'll
calculate this metric and what assumptions you're making.
3. Request verification:
After creating the views, show me a sample of 10 rows so I can verify the logic.
4. Be explicit about edge cases:
@[transactions]
Exclude any transactions with negative amounts (those are refunds).
Treat null values in 'region' as 'Unknown'.
Include only transactions from the last complete month.
5. Cross-check important calculations:
Calculate total revenue two ways to verify:
1) Sum of transaction amounts, and
2) Sum of daily totals.
These should match.
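The cross-check idea is simple to demonstrate. A Python sketch with hypothetical transactions, asserting that the direct total equals the total of per-day subtotals:

```python
from collections import defaultdict

# Hypothetical transactions: (date, amount). The cross-check: the total of
# all transaction amounts must equal the total of per-day subtotals.
transactions = [
    ("2024-09-01", 120.0), ("2024-09-01", 80.0),
    ("2024-09-02", 200.0), ("2024-09-03", 55.5),
]

total_direct = sum(amount for _, amount in transactions)

daily = defaultdict(float)
for day, amount in transactions:
    daily[day] += amount
total_via_daily = sum(daily.values())

assert total_direct == total_via_daily  # the two methods should match
print(total_direct)  # 455.5
```

If the two totals diverge on real data, a filter or join is silently dropping or duplicating rows somewhere in the pipeline.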
Leveraging SQL Knowledge (Optional)
If you know SQL, you can be more specific in your prompts:
@[orders] @[customers]
Use a LEFT JOIN on customer_id to keep all customers,
even those without orders. Then calculate order count and
total revenue per customer.
Or:
@[events]
Create a window function using ROW_NUMBER()
PARTITION BY user_id ORDER BY event_date
to number each user's events sequentially.
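If you're curious what the generated SQL actually does, here is a runnable sqlite3 sketch of that window function. It assumes a SQLite build with window-function support (3.25+, bundled with recent Python); the table and column names are the ones used in the prompt, and the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event_date TEXT, event TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("u1", "2024-01-01", "signup"),
    ("u1", "2024-01-03", "purchase"),
    ("u2", "2024-01-02", "signup"),
    ("u1", "2024-01-05", "purchase"),
])

# The window function from the prompt: number each user's events
# sequentially by date, restarting at 1 for every user.
rows = conn.execute("""
    SELECT user_id, event_date,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_date) AS seq
    FROM events
    ORDER BY user_id, seq
""").fetchall()
for row in rows:
    print(row)
```

Note how u1's events get seq 1-3 while u2 starts over at 1: PARTITION BY resets the counter per user.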
But you don't need SQL knowledge! Shadowfax works perfectly well with natural language:
@[events]
Number each user's events in chronological order
The agent will figure out the SQL. However, if you know SQL and have specific requirements, being explicit can ensure you get exactly what you want.
Advanced Visualization with Shadowfax
Creating Effective Visualizations
Start with the story you want to tell:
/visualize @[retention_data]
I want to show that users who complete onboarding have much better
retention than those who don't. Show both cohorts' retention curves
to highlight this difference.
The agent will choose visual elements (colors, annotations, labels) that emphasize the comparison.
Be specific about chart elements when you have preferences:
/visualize @[revenue_by_region]
Create a bar chart with:
- Regions on Y-axis (horizontal bars)
- Revenue on X-axis formatted as currency
- Bars colored by performance: green if above target, red if below
- Sort regions by revenue, highest at top
- Add a vertical line showing the company average
- Title: "Regional Revenue Performance - Q4 2024"
Iterating on Visualizations
Visualization is an iterative process. Start simple and refine:
Round 1:
/visualize @[sales]
Show monthly revenue
Round 2:
Add a trend line showing the overall direction
Round 3:
Change the color scheme to blue tones matching our brand
Round 4:
Add annotations highlighting the two highest revenue months
Round 5:
Perfect! Now add a secondary Y-axis showing transaction count as a line
Each refinement builds on the previous chart—no need to start over.
Example: Complex Layered Visualization
/visualize @[sales_forecast]
Create a layered chart combining:
1. Gray bars: historical monthly revenue
2. Blue line: forecasted revenue
3. Light blue shaded area: 80% confidence interval
4. Vertical dashed line: marking today's date with label "Forecast Start"
5. Horizontal dashed line: showing annual target with label
6. Annotations: highlight actual months that exceeded forecast
Color scheme: Use blues and grays for a professional look.
Title: "Revenue: Historical Performance and 12-Month Forecast"
X-axis: Months abbreviated (Jan, Feb, Mar...)
Y-axis: Format as $XXXk
The agent will create a sophisticated, publication-ready chart from this description.
Tips for Great Visualizations
Choose the right chart for your message:
| Goal | Chart Type |
|---|---|
| Show trend over time | Line chart |
| Compare categories | Bar chart |
| Show distribution | Histogram, box plot |
| Show relationship | Scatter plot |
| Show composition | Stacked bar, treemap |
| Show flow/process | Sankey, funnel chart |
Describe your story:
// Bad
@[data] Make a chart
// Good
/visualize @[campaign_data]
Show how campaign performance improved after the strategy change in July
Request accessibility:
Use colorblind-friendly colors and ensure sufficient contrast for readability
Leverage best practices:
/visualize @[time_series]
Create a time series chart following visualization best practices
The agent will apply proper date formatting, appropriate scales, clear labels, etc.
Multi-Step Analytical Workflows
For complex, multi-step analyses, combine several techniques:
Using /plan for Complex Work
/plan @[customer_database] @[transaction_history] @[product_catalog]
Goal: Identify cross-sell opportunities by analyzing purchase patterns
and customer segments.
Agent proposes:
Here's my proposed approach:
Step 1: Customer Segmentation
- Join customer and transaction data
- Calculate RFM scores
- Create 4 segments: VIP, Active, At-Risk, Inactive
Step 2: Purchase Pattern Analysis
- For each segment, identify most frequently purchased product categories
- Calculate average products per order
- Identify product affinity (what's bought together)
Step 3: Cross-Sell Opportunity Identification
- For each segment, find product categories they haven't purchased
- Rank opportunities by:
a) Popularity in similar segments
b) Average transaction value
c) Purchase propensity scores
Step 4: Prioritization
- Create a matrix: Segment × Product Category × Opportunity Score
- Flag top 5 opportunities per segment
Step 5: Visualization
- Heatmap showing opportunity scores by segment and category
- Bar chart of top 20 cross-sell opportunities overall
Would you like me to proceed with this approach?
You can:
- Approve and proceed
- Adjust steps before execution
- Ask questions about the methodology
- Request alternative approaches
Incremental Building
For complex workflows, build incrementally:
Step 1: Data Preparation
@[raw_data]
Clean the data: remove nulls, handle outliers, standardize formats
[Inspect the cleaned data]
Step 2: Feature Engineering
@[cleaned_data]
Create calculated fields: customer age, tenure, average_purchase_value
[Verify the calculations]
Step 3: Segmentation
@[enriched_data]
Create customer segments based on tenure and purchase value
[Review the segments]
Step 4: Analysis
@[segmented_data]
Compare segments across key metrics: retention, LTV, churn risk
Step 5: Visualization
/visualize @[segment_comparison]
Create a dashboard showing all segment metrics side by side
By breaking it into steps and verifying at each stage, you catch issues early and maintain control.
Common Prompting Patterns for Analytics
Here are proven patterns for common analytical tasks:
Data Exploration
@[table_name]
Show me an overview: column names, data types, row count, and sample rows
@[table_name]
Generate summary statistics for all numeric columns
(count, mean, median, min, max, std dev)
@[table_name]
Identify data quality issues: null percentages, duplicate rows, outliers
Data Cleaning
@[messy_data]
Create a cleaned view that:
- Removes rows where [critical_column] is null
- Deduplicates based on [id_column], keeping the most recent
- Standardizes [text_column] to lowercase and trims whitespace
@[data]
Handle outliers in [numeric_column]:
Flag values more than 3 standard deviations from the mean
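That outlier rule, sketched in Python with a made-up column (the agent would express the same logic in SQL):

```python
import statistics

# Hypothetical numeric column: mostly typical values plus one extreme.
values = [10] * 15 + [12] * 15 + [250]

mean = statistics.mean(values)
sd = statistics.stdev(values)  # sample standard deviation

# Flag values more than 3 standard deviations from the mean,
# as in the prompt above.
flagged = [v for v in values if abs(v - mean) > 3 * sd]
print(flagged)  # [250]
```

One caveat worth knowing: extreme outliers inflate the standard deviation itself, so in small samples a genuine outlier can slip under the 3-sigma bar; asking the agent for a median-based rule instead is a reasonable refinement.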
Aggregation & Summarization
@[transactions]
Calculate total revenue, transaction count, and average transaction value
Group by month and show with growth rates
@[data]
Create a pivot table:
- Rows: [dimension1]
- Columns: [dimension2]
- Values: sum of [metric]
Trend Analysis
@[daily_data]
Calculate 7-day and 30-day moving averages for [metric]
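A trailing moving average like the one requested above can be sketched as follows (hypothetical daily values; in Shadowfax this becomes a SQL view with a window function):

```python
def moving_average(series, window):
    """Trailing moving average; None until a full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

daily_metric = [3, 5, 4, 6, 8, 7, 9, 10, 6, 8]
ma = moving_average(daily_metric, 7)
print(ma)
```

The leading Nones are a deliberate choice: averaging a partial window would understate early volatility, so it helps to tell the agent explicitly how incomplete windows should be handled.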
@[monthly_data]
Add year-over-year comparison: this month vs same month last year
Show both absolute difference and percentage change
Comparative Analysis
@[data]
Compare [groups] side by side across all metrics
Rank them by [key_metric] and show differences from average
@[data]
Show before/after comparison for 30 days before and after [event_date]
Highlight statistically significant changes
Joins & Relationships
@[table1] @[table2]
Join these tables on [key_column] and create a combined view
Include columns: [list specific columns you need]
@[orders] @[customers] @[products]
Join all three tables to show:
Customer name, order date, product name, revenue
Filter to last quarter only
Troubleshooting and Iteration
When results aren't what you expected:
1. Inspect the SQL Views
Click on each view in the transformation pipeline:
- Check the data at each step
- Identify where things went wrong
- Review the SQL to understand the logic
2. Ask the Agent for Clarification
Why did you group by [column] instead of [other_column]?
Show me the SQL for the [view_name] view so I can understand the logic
3. Provide Specific Feedback
Not helpful: "This is wrong"
Helpful: "The revenue totals are too high. I think you're including refunds. Please filter out transactions where status = 'refunded'"
4. Iterate on Specific Views
You don't have to start over:
@[monthly_revenue]
Recalculate this view, but exclude tax amounts from the total
Common Issues and Solutions
| Issue | Solution |
|---|---|
| Wrong time grouping | Be explicit: "group by week starting Monday" or "group by calendar month" |
| Incorrect joins | Specify join type and key: "Use INNER JOIN on customer_id" |
| Missing filters | Add constraints: "Only include records where status = 'active'" |
| Wrong calculation | Provide formula: "Calculate margin as (revenue - cost) / revenue * 100" |
| Unexpected nulls | Specify handling: "Treat null values in [column] as 0" or "Exclude nulls" |
| Data type issues | Be explicit: "Convert [column] to date format YYYY-MM-DD" |
| Aggregation problems | Clarify: "Sum should be at customer level, not transaction level" |
Learning from Iterations
After refining a prompt to get good results:
- Note what made the final prompt effective
- Save it as a template for similar tasks
- Add it to your personal prompt library
- Update your Knowledge if it reveals a standard approach
Power User Tips
Workflow optimization:
- For exploration: /recommendations → pick direction → analyze → /visualize
- For complex work: /plan → review → approve → execute → /visualize
- For quick insights: Direct prompts with @mentions → iterate as needed
Name transformations descriptively:
// Good
'completed_orders_last_quarter'
// Bad
'temp_view_1'
Build a prompt library: Keep a document of your best prompts for common tasks:
- Weekly revenue report prompt
- Customer segmentation prompt
- Churn analysis prompt
- Product performance prompt
Leverage the reactive system:
- Update source data → watch downstream views auto-update
- Modify early transformations → dependent views recalculate
- Experiment with different approaches → easy to revert
Document complex transformations:
@[complex_calculation_view]
Add a description explaining what this view does and why we calculate it this way
Share Knowledge across teams:
- Create shared Knowledge documents for company-wide metrics
- Ensure everyone uses the same definitions
- Onboard new analysts faster
Conclusion
Mastering prompt engineering transforms Shadowfax from a helpful tool into a powerful analytical partner. The key principles:
Context → Clarity → Iteration
- Provide context through Knowledge (persistent), @mentions (data), and prompts (specifics)
- Be clear using the C.L.E.A.R. framework: Concise, Logical, Explicit, Adaptive, Reflective
- Iterate freely—inspect views, refine prompts, leverage the reactive system
Remember:
- Shadowfax is transparent: Every transformation is a SQL view you can inspect
- Shadowfax is reactive: Changes propagate automatically through your pipeline
- Shadowfax is flexible: Unlimited visualization possibilities with Vega
- Shadowfax is collaborative: Use /plan to review approaches, /recommendations for guidance
The Three-Layer Context Model:
- Knowledge Layer: Business rules, KPI definitions, domain context (set once, use everywhere)
- @Mention Layer: Which specific data to analyze (per prompt)
- Prompt Layer: What to do with that data (per request)
Slash Commands:
- /plan: Review approach before execution (complex analyses)
- /visualize: Create sophisticated charts (any visualization need)
- /recommendations: Get suggestions (exploration, next steps)
- /taxonomy: Discover categories and create classifications (text classification, AI tables)
By combining these tools—Knowledge, @mentions, clear prompts, slash commands, and the reactive Workbook—you can perform sophisticated analyses without writing a single line of SQL.
Focus on insights. Let Shadowfax handle the execution.
Happy analyzing!