PostgreSQL · Database · SQL · Performance · CTE

Supercharge Your PostgreSQL Queries with Common Table Expressions (CTEs)

Learn how CTEs can make your PostgreSQL queries easier to read, maintain, and, most importantly, optimize - with practical examples and best practices along the way.

March 20, 2025
ManhDT
8 min read

I'll be honest - I used to hate writing complex SQL queries. You know the feeling when you're staring at a massive nested query that looks like it was written by someone who clearly had too much coffee? Yeah, that was me about 2 years ago. Then I discovered CTEs, and honestly, they changed everything.

If you're struggling with messy, hard-to-read PostgreSQL queries, this post is for you. We'll dive into how Common Table Expressions can not only make your queries way more readable but also potentially faster (which is always nice).

What are Common Table Expressions anyway?

Think of a CTE as a temporary table that lives just for the duration of your query. It's like having a scratchpad where you can store intermediate results and give them a meaningful name. Here's the basic syntax:

WITH cte_name AS (
    SELECT column1, column2
    FROM table_name
    WHERE condition
)
SELECT *
FROM cte_name;

Pretty straightforward, right? The magic happens when you start using them for more complex scenarios.

The Nightmare of Nested Subqueries

Let me show you what I mean. Here's the kind of query that used to make me question my life choices:

-- Traditional nested subquery approach (aka the headache generator)
SELECT 
    u.username,
    u.email,
    (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) as order_count,
    (SELECT AVG(total_amount) FROM orders o WHERE o.user_id = u.id) as avg_order_value
FROM users u
WHERE u.created_at > '2024-01-01'
    AND u.id IN (
        SELECT user_id 
        FROM orders 
        WHERE order_date > '2024-01-01'
    );

This query is a mess for several reasons:

  • We're running the same subquery twice (once for COUNT, once for AVG) - that's just wasteful
  • It's really hard to read - try explaining this to a junior developer
  • Performance can be terrible because PostgreSQL might scan the orders table multiple times

I've seen queries like this bring production databases to their knees. Not fun.

CTEs to the Rescue

Now, let me show you the same logic using CTEs. This is so much cleaner:

WITH recent_users AS (
    SELECT id, username, email
    FROM users
    WHERE created_at > '2024-01-01'
),
user_orders AS (
    SELECT 
        user_id,
        COUNT(*) as order_count,
        AVG(total_amount) as avg_order_value
    FROM orders
    WHERE order_date > '2024-01-01'
    GROUP BY user_id
)
SELECT 
    ru.username,
    ru.email,
    COALESCE(uo.order_count, 0) as order_count,
    COALESCE(uo.avg_order_value, 0) as avg_order_value
FROM recent_users ru
LEFT JOIN user_orders uo ON ru.id = uo.user_id;

See the difference? Each part has a clear purpose and a descriptive name. Your future self will thank you.

Why CTEs are Game Changers

1. They're Actually Readable

Instead of nested madness, you get logical steps that any developer can follow. It's like the difference between a tangled mess of cables and a properly organized server rack.

2. No More Repeating Yourself

Write a complex calculation once, give it a name, and reuse it. Here's a real example I used last month:

WITH monthly_sales AS (
    SELECT 
        DATE_TRUNC('month', order_date) as month,
        SUM(total_amount) as monthly_total
    FROM orders
    WHERE order_date >= '2024-01-01'
    GROUP BY DATE_TRUNC('month', order_date)
)
SELECT 
    month,
    monthly_total,
    LAG(monthly_total) OVER (ORDER BY month) as previous_month,
    monthly_total - LAG(monthly_total) OVER (ORDER BY month) as growth
FROM monthly_sales
ORDER BY month;

Pretty cool, right? I can reference monthly_sales multiple times without PostgreSQL having to recalculate it each time.

3. Better Performance (Usually)

Since PostgreSQL 12, the planner can inline a CTE into the outer query, so the clear, logical structure usually doesn't cost you anything, and a CTE that's referenced more than once is computed just once and reused. Though I should mention - it's not always faster. More on that later.
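PostgreSQL 12 and newer also let you steer this yourself with AS MATERIALIZED and AS NOT MATERIALIZED. Here's a quick sketch of both - the recent_orders CTE and its filters are just illustrative, not from the examples above:

-- Force materialization: compute the CTE once up front and reuse the result
WITH recent_orders AS MATERIALIZED (
    SELECT user_id, total_amount
    FROM orders
    WHERE order_date > '2024-01-01'
)
SELECT user_id, SUM(total_amount) AS total_spent
FROM recent_orders
GROUP BY user_id;

-- Ask the planner to inline the CTE so the outer filter can use indexes on orders
WITH recent_orders AS NOT MATERIALIZED (
    SELECT user_id, total_amount
    FROM orders
    WHERE order_date > '2024-01-01'
)
SELECT user_id, total_amount
FROM recent_orders
WHERE user_id = 42;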

Getting Fancy with Advanced Patterns

Recursive CTEs - Perfect for Hierarchical Stuff

This is where CTEs really show off. Need to traverse a tree structure? Recursive CTEs have your back:

WITH RECURSIVE category_hierarchy AS (
    -- Base case: top-level categories
    SELECT id, name, parent_id, 0 as level
    FROM categories
    WHERE parent_id IS NULL
    
    UNION ALL
    
    -- Recursive case: child categories
    SELECT c.id, c.name, c.parent_id, ch.level + 1
    FROM categories c
    JOIN category_hierarchy ch ON c.parent_id = ch.id
)
SELECT * FROM category_hierarchy
ORDER BY level, name;

I used something similar when I had to build a department hierarchy report for our HR system. Saved me from writing a bunch of application-level recursion.
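A variation I find handy is building the full path as you go. Here's a sketch of that idea, assuming a hypothetical employees table with a manager_id column pointing at each person's boss (not the actual HR schema):

WITH RECURSIVE reporting_chain AS (
    -- Base case: people with no manager (the top of the org chart)
    SELECT id, name, manager_id, name::text AS chain, 0 AS depth
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    -- Recursive case: everyone who reports to someone already in the chain
    SELECT e.id, e.name, e.manager_id, rc.chain || ' > ' || e.name, rc.depth + 1
    FROM employees e
    JOIN reporting_chain rc ON e.manager_id = rc.id
)
SELECT chain, depth
FROM reporting_chain
ORDER BY chain;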

Mixing CTEs with Window Functions

This is where things get really powerful. Here's a query I wrote recently for our analytics dashboard:

WITH daily_revenue AS (
    SELECT 
        DATE(order_date) as order_day,
        SUM(total_amount) as daily_total
    FROM orders
    WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY DATE(order_date)
),
revenue_with_stats AS (
    SELECT 
        order_day,
        daily_total,
        AVG(daily_total) OVER (
            ORDER BY order_day 
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) as seven_day_avg,
        RANK() OVER (ORDER BY daily_total DESC) as revenue_rank
    FROM daily_revenue
)
SELECT 
    order_day,
    daily_total,
    ROUND(seven_day_avg, 2) as seven_day_avg,
    revenue_rank
FROM revenue_with_stats
ORDER BY order_day DESC;

This gives us daily revenue with a 7-day moving average and ranking - all in one readable query. Try doing that with nested subqueries!

The Reality Check: When CTEs Might Not Help

Look, I'm not gonna lie to you - CTEs aren't always the silver bullet. Here's what I've learned the hard way:

When CTEs Can Actually Hurt Performance

  1. Simple Filters: Sometimes a basic subquery is just faster
  2. Large Result Sets: a CTE that gets materialized (referenced more than once, or marked MATERIALIZED) is computed in full before the outer query can filter it, which can be a problem with big datasets
  3. Over-engineering: Don't use a CTE when a simple JOIN would do

Here's an example where a CTE might be overkill:

-- Sometimes this simple approach is actually better
SELECT * FROM users 
WHERE id IN (SELECT user_id FROM orders WHERE order_date > '2024-01-01');

-- vs the CTE version which might be slower for this simple case:

WITH recent_order_users AS (
    SELECT DISTINCT user_id FROM orders WHERE order_date > '2024-01-01'
)
SELECT u.* FROM users u
JOIN recent_order_users rou ON u.id = rou.user_id;

Always test both approaches! I use EXPLAIN ANALYZE religiously for this stuff.

My Rules of Thumb for CTEs

1. Name Things Properly (Please!)

-- This makes me happy
WITH high_value_customers AS (...)

-- This makes me sad
WITH cte1 AS (...)

Seriously, your teammates will love you for descriptive names.

2. One Job Per CTE

Don't try to do everything in one CTE. Keep them focused and simple.
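For example, instead of one giant CTE that filters, aggregates, and joins all at once, I'd break it into focused steps like this (the table and column names are just for illustration):

WITH active_users AS (
    -- Step 1: only the filtering
    SELECT id, username
    FROM users
    WHERE created_at >= '2024-01-01'
),
user_spend AS (
    -- Step 2: only the aggregation
    SELECT user_id, SUM(total_amount) AS total_spent
    FROM orders
    GROUP BY user_id
)
-- Step 3: only the final join and presentation
SELECT au.username, COALESCE(us.total_spent, 0) AS total_spent
FROM active_users au
LEFT JOIN user_spend us ON au.id = us.user_id;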

3. Index Your Base Tables

CTEs are only as fast as the underlying queries. Make sure you have proper indexes!
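For the date-range and per-user lookups used throughout this post, these are the kinds of indexes I'd reach for - adjust them to your own schema, of course:

-- Supports the order_date range filters
CREATE INDEX IF NOT EXISTS idx_orders_order_date ON orders (order_date);

-- Supports the per-user joins and GROUP BY user_id aggregations
CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders (user_id);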

4. Always Check the Execution Plan

This is crucial. Here's how I typically do it:

EXPLAIN ANALYZE
WITH monthly_sales AS (
    SELECT DATE_TRUNC('month', order_date) as month,
           SUM(total_amount) as total
    FROM orders
    GROUP BY DATE_TRUNC('month', order_date)
)
SELECT * FROM monthly_sales ORDER BY month;

If you see "Seq Scan" on a large table, you probably need an index.

A Real Example From My Day Job

Here's something I actually built for our e-commerce analytics (with sensitive data changed, obviously):

WITH customer_segments AS (
    SELECT 
        user_id,
        COUNT(*) as order_count,
        SUM(total_amount) as lifetime_value,
        MAX(order_date) as last_order_date,
        CASE 
            WHEN COUNT(*) >= 10 THEN 'VIP'
            WHEN COUNT(*) >= 5 THEN 'Regular'
            ELSE 'New'
        END as segment
    FROM orders
    WHERE order_date >= '2024-01-01'
    GROUP BY user_id
),
segment_analytics AS (
    SELECT 
        segment,
        COUNT(*) as customer_count,
        AVG(lifetime_value) as avg_lifetime_value,
        AVG(order_count) as avg_orders_per_customer
    FROM customer_segments
    GROUP BY segment
)
SELECT 
    segment,
    customer_count,
    ROUND(avg_lifetime_value, 2) as avg_lifetime_value,
    ROUND(avg_orders_per_customer, 1) as avg_orders_per_customer,
    ROUND(100.0 * customer_count / SUM(customer_count) OVER(), 1) as percentage
FROM segment_analytics
ORDER BY avg_lifetime_value DESC;

This query helped our marketing team understand customer behavior patterns, and it took me about 15 minutes to write instead of the hour it would have taken with nested subqueries. The business loved it because they could actually understand what the SQL was doing!

Wrapping Up

Look, CTEs aren't magic - they're just a tool. But they're a really, really good tool when used properly. I've seen them turn incomprehensible queries into something a junior developer can understand and modify.

Here's what I wish someone had told me when I was starting out:

  • Start simple - don't try to refactor your most complex query on day one
  • Use CTEs to improve readability first, performance second
  • Test everything - always use EXPLAIN ANALYZE to verify your assumptions
  • Name things well - your future self will thank you
  • Don't overdo it - sometimes a simple subquery is just fine

The best part? Once you start using CTEs regularly, you'll wonder how you ever lived without them. They've literally changed how I approach complex database problems.

Go try rewriting one of your messy queries with CTEs today. I bet you'll be surprised at how much cleaner it becomes.


Got questions about PostgreSQL performance tuning? Drop me a line - I love talking about database optimization over coffee!

Start incorporating CTEs into your PostgreSQL queries today, and you'll find your database code becoming more maintainable and often more performant. Your future self (and your teammates) will thank you for the cleaner, more understandable SQL!


Want to dive deeper into PostgreSQL optimization? Check out our other posts on database indexing strategies and query performance tuning.