Posts in 'spark'

mapPartitions vs mapInPandas

Prior to Spark 3.0, to optimize for performance and take advantage of vectorized operations, you generally had to repartition the dataset and invoke mapPartitions.

This had a major drawback: the performance cost incurred by repartitioning the DataFrame, since repartitioning triggers a shuffle.

With Spark 3.0+, if your underlying function is …
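As a rough sketch of what the Spark 3.0+ approach looks like: the function you hand to df.mapInPandas consumes an iterator of pandas DataFrames and yields transformed ones back, so its logic can be exercised without a cluster. The "amount" column and the doubling transform below are purely illustrative assumptions.

```python
from typing import Iterator
import pandas as pd

def double_amount(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    # Each batch is a pandas DataFrame, so the work stays vectorized --
    # no Python-level loop over rows, and no explicit repartition needed.
    for pdf in batches:
        pdf = pdf.copy()
        pdf["amount"] = pdf["amount"] * 2
        yield pdf

# On a live session (Spark >= 3.0) this would be wired up as:
# df.mapInPandas(double_amount, schema=df.schema)
```

Because the function only depends on pandas, you can unit-test it by feeding it an iterator of small DataFrames.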

Spark Scaling to large datasets

In this post, I will share a few quick tips for scaling your Spark applications to larger datasets without requiring large executor memory.

  • Increase shuffle partitions: The default number of shuffle partitions is 200; for larger datasets, you are better off with a larger number of shuffle partitions. This helps in many ways …
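As a configuration sketch (no test harness here, since it needs a live cluster), the setting in question is spark.sql.shuffle.partitions; the value 2000 below is purely illustrative and should be tuned to your data volume:

```python
from pyspark.sql import SparkSession

# Illustrative only -- pick a partition count suited to your data and cluster.
spark = (
    SparkSession.builder
    .appName("large-dataset-job")
    .config("spark.sql.shuffle.partitions", "2000")  # default is 200
    .getOrCreate()
)

# The setting can also be changed per session at runtime:
spark.conf.set("spark.sql.shuffle.partitions", "2000")
```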

Removing Projection Column Ambiguity in Spark

Column ambiguity is quite common when you join two tables. It poses an unnecessary hassle when you want to select all the columns from both tables while discarding the duplicate columns. This problem is especially difficult to handle if you have wide tables, where you would want …
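For equi-joins on same-named keys, PySpark already dedupes the key column when you pass the column name(s) to join (e.g. df1.join(df2, on="id")). For the wider case, one way to build an unambiguous projection is a small helper that keeps each column name once; this helper is a hypothetical illustration, not part of Spark's API:

```python
def deduped_columns(left_cols, right_cols):
    # All left-side columns, then only the right-side columns whose
    # names do not clash with the left side.
    seen = set(left_cols)
    return list(left_cols) + [c for c in right_cols if c not in seen]

# Against a live session this could drive the select after a join, e.g.:
# cols = deduped_columns(df1.columns, df2.columns)
# joined.select(*[df1[c] if c in df1.columns else df2[c] for c in cols])
```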

Efficient Spark Dataframe Transforms

If you are working with Spark, you will most likely have to write transforms on DataFrames. The DataFrame API exposes the obvious method df.withColumn(col_name, col_expression) for adding a column with a specified expression. Since DataFrames are immutable, each such call returns a newly …
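One common pattern that follows from immutability is to keep each transform as a plain single-argument function and fold the sequence over the DataFrame. The chain helper below is a hypothetical sketch, not part of Spark's API (Spark 3.0+ does ship DataFrame.transform for applying one such function); because every step is just a function, it works for any immutable value:

```python
from functools import reduce

def chain(df, *transforms):
    # Apply each transform in order; each call returns a new value,
    # mirroring how every withColumn returns a new DataFrame.
    return reduce(lambda acc, fn: fn(acc), transforms, df)

# Usage against a live session might look like:
# result = chain(df, add_revenue_col, normalize_dates, drop_temp_cols)
```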

Writing Generic UDFs in Spark

Apache Spark offers the ability to write Generic UDFs. However, for an idiomatic implementation, there are a couple of things that one needs to keep in mind.

  1. You should return a subtype of Option, because Spark automatically treats None as null and extracts the value from Some …
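The post's point is about Scala's Option; the PySpark analogue, sketched below under that assumption, is that returning None from the UDF's Python function yields a null in the resulting column. The parsing function and its name are hypothetical, and the plain function is testable on its own:

```python
def safe_int(s):
    # Hypothetical parsing UDF body: None here becomes null in the
    # resulting column, much like returning None vs Some(x) in Scala.
    try:
        return int(s)
    except (TypeError, ValueError):
        return None

# Registering it requires a live SparkSession:
# from pyspark.sql.functions import udf
# from pyspark.sql.types import IntegerType
# parse_udf = udf(safe_int, IntegerType())
```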