COUNT OVER in Spark SQL

This page is a quick-reference checkpoint for COUNT OVER in Spark SQL: behavior, syntax rules, edge cases, and a minimal example, plus a link to the official vendor documentation.


Function Details

COUNT(*) used with OVER returns the number of rows in the window frame; COUNT(expr) counts only the rows in the frame where expr is not NULL.

Unlike GROUP BY, which collapses each group into a single result row, COUNT used with OVER returns a value for every row in the window: each row keeps its own columns and carries the count for its partition.
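To make the contrast concrete, here is a minimal sketch assuming a hypothetical sales table with category and amount columns (the same table shape as the example further down). The GROUP BY query returns one row per category; the windowed COUNT returns every sales row, each annotated with its category's count.

-- GROUP BY collapses the table to one row per category
SELECT category, COUNT(*) AS category_count
FROM sales
GROUP BY category;

-- COUNT(*) OVER keeps every row; the count is repeated on each row of its partition
SELECT category, amount, COUNT(*) OVER (PARTITION BY category) AS category_count
FROM sales;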

If this behavior feels unintuitive, the tutorial below explains the underlying pattern step-by-step.

The standard aggregate form count(*) OVER (window_spec) is allowed; the Spark SQL documentation explicitly states that aggregate functions can be used as window functions with an OVER clause.

SELECT category,
       amount,
       COUNT(*) OVER (PARTITION BY category) AS category_count
FROM sales;
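The window_spec is not limited to PARTITION BY. As a hedged sketch against the same hypothetical sales table: adding ORDER BY turns the per-partition total into a running count, because Spark's default frame when ORDER BY is present is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so rows tied on the ordering column share the same count.

-- Running count within each category, ordered by amount
SELECT category, amount,
       COUNT(*) OVER (PARTITION BY category ORDER BY amount) AS running_count
FROM sales;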

What should you do next?

If you came here to confirm syntax, you’re done. If you came here to get better at window functions, choose your next step.

Understand the pattern

COUNT OVER is part of a bigger window-function pattern. If you want the “why”, start here: Aggregate Window Functions
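As a hedged illustration of that broader pattern, the same OVER clause works with other aggregates; against the hypothetical sales table from the example above, SUM and AVG behave exactly like COUNT, producing one value per row rather than one per group.

SELECT category, amount,
       COUNT(*)    OVER (PARTITION BY category) AS category_count,
       SUM(amount) OVER (PARTITION BY category) AS category_total,
       AVG(amount) OVER (PARTITION BY category) AS category_avg
FROM sales;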

Prove it with a real query

Reading docs is useful. Writing the query correctly under pressure is the skill.

Order Volume, Customer by Customer
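As a rough sketch of the kind of query that exercise asks for, assuming a hypothetical orders table with customer_id and order_id columns (the exercise's actual schema may differ):

-- Each order row annotated with its customer's total order count
SELECT customer_id, order_id,
       COUNT(*) OVER (PARTITION BY customer_id) AS orders_per_customer
FROM orders;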

Support Status

  • Supported: yes
  • Minimum Version: 1.4

Official Documentation

For the authoritative spec, use the vendor docs. This page is the fast “sanity check”.

View Spark SQL Documentation →

Looking for more functions across all SQL dialects? Visit the full SQL Dialects & Window Functions Documentation.