In this section we will look at the different ways in which Spark uses persistence and caching to improve the performance of our application. We will also look at good practices for caching objects in memory and when they should be released for optimum performance of a Spark application.
When we persist or cache an RDD in Spark, it holds some memory (RAM) on the machine or across the cluster. It is usually good practice to release this memory once the work is done. But before we release it, let us see how to inspect it while the Spark application is running. We can go to the URL http://localhost:4040 and look at the Storage tab. This shows all the objects that have been persisted or cached in Spark's memory. Since we persisted just one RDD in the previous example, the Storage tab below shows a single object.
On clicking the object we can see further details, such as the number of partitions and the size occupied by the object.
Once we are sure we no longer need the object in Spark's memory for any iterative optimizations, we can call the unpersist() method. After this, we can check the Storage tab in Spark's UI again and note that the object no longer exists in Spark's memory.
Now that we have seen how to cache or persist an RDD and the benefits of doing so, let us look at RDD storage levels in detail.
Let us see them one at a time:
- MEMORY_ONLY (the default, and what cache() uses): stores the RDD as deserialized objects in JVM memory; partitions that do not fit are recomputed on the fly when needed.
- MEMORY_AND_DISK: stores the RDD in memory, spilling partitions that do not fit to disk and reading them back from there when needed.
- MEMORY_ONLY_SER / MEMORY_AND_DISK_SER: like the levels above, but partitions are stored as serialized byte arrays, which is more space-efficient at the cost of extra CPU to deserialize.
- DISK_ONLY: stores the RDD partitions only on disk.
- OFF_HEAP (experimental): stores the data in off-heap memory.
- MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc.: the same as the corresponding levels, but with each partition replicated on two cluster nodes.
Here, we learnt about the different modes of persistence in Spark. This understanding will help us use them in the different scenarios we may encounter while writing a Spark application.