This page is a quick reference checkpoint for COUNT OVER in Spark SQL: behavior, syntax rules, edge cases, and a minimal example, plus a link to the official vendor documentation.
COUNT OVER returns the number of rows in the window frame.
COUNT used with OVER returns one value for every input row in its window partition, rather than collapsing rows into a single result per group as GROUP BY does.
If this behavior feels unintuitive, the tutorial below explains the underlying pattern step-by-step.
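For a concrete contrast, here is a minimal sketch assuming a hypothetical sales table with category and amount columns (the same names used in the example further down): the GROUP BY version returns one row per category, while the windowed version keeps every input row and attaches the count to each.

-- Hypothetical sales table: category STRING, amount DOUBLE.
-- GROUP BY collapses rows: one output row per category.
SELECT category, COUNT(*) AS category_count FROM sales GROUP BY category;

-- COUNT(*) OVER keeps every row and attaches the per-category count to each one.
SELECT category, amount, COUNT(*) OVER (PARTITION BY category) AS category_count FROM sales;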
The standard aggregate form count(*) OVER (window_spec) is allowed; the Spark SQL documentation explicitly states that aggregate functions can be used with an OVER clause.
SELECT category, amount, COUNT(*) OVER (PARTITION BY category) AS category_count FROM sales;
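Two edge cases worth checking in the same query shape (again assuming the hypothetical sales table above, with a nullable amount column): adding ORDER BY inside OVER changes the default frame to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which turns the count into a running count (rows tied on the ORDER BY key count as peers), and COUNT(amount) ignores NULL values while COUNT(*) counts every row.

-- Running count: ORDER BY inside OVER defaults the frame to
-- RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
SELECT category, amount,
       COUNT(*) OVER (PARTITION BY category ORDER BY amount) AS running_count
FROM sales;

-- NULL handling: COUNT(amount) skips NULL amounts, COUNT(*) does not.
SELECT category,
       COUNT(*)      OVER (PARTITION BY category) AS all_rows,
       COUNT(amount) OVER (PARTITION BY category) AS non_null_amounts
FROM sales;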
If you came here to confirm syntax, you’re done. If you came here to get better at window functions, choose your next step.
COUNT OVER is part of a bigger window-function pattern. If you want the “why”, start here: Aggregate Window Functions
Reading docs is useful. Writing the query correctly under pressure is the skill.
For the authoritative spec, use the vendor docs. This page is the fast “sanity check”.
View Spark SQL Documentation →
Looking for more functions across all SQL dialects? Visit the full SQL Dialects & Window Functions Documentation.