pyspark.sql.DataFrame.cube#

DataFrame.cube(*cols)[source]#

Create a multi-dimensional cube for the current DataFrame using the specified columns, allowing aggregations to be performed on them.

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
cols : list, str, int or Column

The columns to cube by. Each element should be a column name (string), a column expression (Column), or a column ordinal (int, 1-based), or a list of these.

Changed in version 4.0.0: Supports column ordinal.

Returns
GroupedData

Cube of the data based on the specified columns.

Notes

A column ordinal starts from 1, which is different from the 0-based __getitem__().
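
For instance (assuming the same df as in the Examples below), the 0-based df.columns[0] and the 1-based cube ordinal 1 both refer to the first column ‘name’:

>>> df.columns[0]
'name'
>>> df.cube(1).count().columns
['name', 'count']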

Examples

>>> df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], schema=["name", "age"])

Example 1: Create a cube on ‘name’ and count the number of rows in each dimension.

>>> df.cube("name").count().orderBy("name").show()
+-----+-----+
| name|count|
+-----+-----+
| NULL|    2|
|Alice|    1|
|  Bob|    1|
+-----+-----+

Example 2: Create a cube on ‘name’ and ‘age’ and count the number of rows in each dimension.

>>> df.cube("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL|    2|
| NULL|   2|    1|
| NULL|   5|    1|
|Alice|NULL|    1|
|Alice|   2|    1|
|  Bob|NULL|    1|
|  Bob|   5|    1|
+-----+----+-----+

Example 3: Create the same cube on ‘name’ and ‘age’, but using column ordinals.

>>> df.cube(1, 2).count().orderBy(1, 2).show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL|    2|
| NULL|   2|    1|
| NULL|   5|    1|
|Alice|NULL|    1|
|Alice|   2|    1|
|  Bob|NULL|    1|
|  Bob|   5|    1|
+-----+----+-----+
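
Example 4: Since cube() returns a GroupedData, any aggregation can follow, not just count(). A minimal sketch, assuming the same df as above, that averages ‘age’ within each cube group using pyspark.sql.functions.avg:

>>> from pyspark.sql import functions as sf
>>> df.cube("name").agg(sf.avg("age").alias("avg_age")).orderBy("name").show()
+-----+-------+
| name|avg_age|
+-----+-------+
| NULL|    3.5|
|Alice|    2.0|
|  Bob|    5.0|
+-----+-------+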