
Scala Vs Python Vs R Vs Java - Which language is better for Spark & Why?

One of the most important decisions for big data learners and beginners is choosing the right programming language for big data manipulation and analysis. Understanding the business problem and choosing the right model is not enough; implementing it well matters just as much, and choosing the right language (or languages) for the problem goes a long way.

If you search for the top programming languages for big data, you will find the following four come up again and again:

  1. Java
  2. Scala
  3. Python
  4. R

Java

Java is the oldest of the four languages listed here. Traditional big data frameworks such as Apache Hadoop, and most of the tools in its ecosystem, are Java-based, so using Java opens up a large ecosystem of tools in the big data world.

Scala

Scala is a crossover between object-oriented and functional programming; its name is short for "scalable language". It was created by the German computer scientist Martin Odersky, and the first version was released in 2003.

Python

Python was originally conceived by Guido van Rossum in the late 1980s as a successor to the ABC programming language, and it later gained popularity in the big data world. Python was named one of the fastest-growing programming languages in the 2018 Stack Overflow Developer Survey. Many data analysis, data manipulation, machine learning, and deep learning libraries are written in (or provide bindings for) Python, which has earned it a central place in the big data ecosystem. Its biggest advantage is how user-friendly it is.

Fun fact

Python is not named after the snake. It’s named after the British TV show Monty Python.

R

R is the language of statistics. R is a language and environment for statistical computing and graphics. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is currently developed by the R Development Core Team. R is named partly after the first names of the first two R authors and partly as a play on the name of S*. The project was conceived in 1992, with an initial version released in 1995 and a stable beta version in 2000.

*S

S is a statistical programming language developed primarily by John Chambers and R is an implementation of the S programming language combined with lexical scoping semantics, inspired by Scheme.

Every framework is implemented in some underlying programming language: Zend uses PHP, the pandas library uses Python, the Hadoop framework is written in Java, and Spark is written in Scala.

However, Spark officially supports all four languages: Java, Scala, Python, and R. If you browse Apache Spark's official documentation, you will also find other languages used by the open-source community to work with Spark.

When a developer wants to start learning Spark, the first question is: out of this pool of languages, which one should I use and which should I master? Solution architects have a tough time choosing the right language for the Spark framework, and organizations are left wondering which skill sets are relevant to their problem if they don't have the right knowledge of these languages in the context of Spark.

This article tries to answer these questions, so let's start.

Java

Java is the oldest of the four and a popular, widely adopted programming language. A number of features and advantages make it a favourite among big data developers and tool creators:

  1. Java is a platform-agnostic language and can run on almost any system. It is portable thanks to the Java Virtual Machine (JVM). The JVM is the foundation of Hadoop ecosystem tools such as MapReduce, Storm, and Spark, which run on the JVM.
  2. Java has broad community support through platforms such as GitHub and Stack Overflow.
  3. Java is a scalable, backward-compatible, stable, production-ready language, with a large variety of tried and tested libraries.
  4. It is a statically typed language (more on this in the comparison section below).

Java is the default choice for many big data projects, but for the Spark framework one has to ask whether it is really the best fit.

One major drawback of Java is its verbosity: it takes many lines of code to achieve even simple functionality.

Java has also traditionally lacked a Read-Evaluate-Print Loop (REPL) for interactive exploration (JShell only arrived with Java 9, and Spark does not ship an interactive Java shell), which is a major deal-breaker when choosing a programming language for exploratory big data processing.

Scala

Scala is comparatively new to the programming scene but has become popular very quickly. In the Spark context, many experts prefer Scala over other programming languages because Spark itself is written in Scala: it is the native language of Spark, which means any new API is always available in Scala first.

Scala is a hybrid language: it combines object-oriented and functional programming. As an object-oriented language, it treats every value as an object and supports all the usual OOP concepts. As a functional language, it treats functions as first-class values and encourages expressing operations as functions. Scala compiles to JVM bytecode.

Scala and Java are both popular programming languages that run on the JVM, which makes them friendly to JVM-based frameworks. One could say Scala is a more advanced take on Java.

Features/Advantages of Scala:

  1. It is a general-purpose object-oriented language with functional programming features, and it is less verbose than Java.
  2. It runs on the JVM and is therefore portable.
  3. It can use Java APIs and libraries comfortably.
  4. It is fast and robust in the Spark context, since it is Spark's native language.
  5. It is a statically typed language.
  6. Scala supports a Read-Evaluate-Print Loop (REPL); Spark ships one as spark-shell.

Drawbacks / Downsides of Scala:

  1. Scala is complex to learn because of its functional nature.
  2. It has a steep learning curve.
  3. It lacks mature machine learning libraries compared to Python and R.

Python

Python is one of the de facto languages of data science. It is a simple, open-source, general-purpose language and is very easy to learn. It has a rich set of libraries, utilities, and ready-to-use features, and it supports a number of mature machine learning, big data processing, and visualization libraries.

Advantages of Python:

  1. It is an interpreted language with a REPL (Read-Evaluate-Print Loop): you type a command into the interpreter and it responds immediately, which Java traditionally lacks (see the short interactive session sketched after this list).
  2. It is easy to learn and debug, with fewer lines of code.
  3. It is dynamically typed: variable types are determined at runtime rather than declared up front (note that this is not the same as static type safety).
  4. Python is platform agnostic and scalable.
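
To make the REPL point concrete, here is a minimal sketch of an interactive PySpark session (Python is used for all code sketches in this article; the DataFrame contents are made up for illustration):

```python
# Started with the `pyspark` command, which opens a REPL with a SparkSession
# already bound to the name `spark`.
>>> df = spark.createDataFrame([("pen", 10), ("pencil", 5)], ["item", "price"])
>>> df.filter(df.price > 6).show()   # immediate feedback, no compile step
+----+-----+
|item|price|
+----+-----+
| pen|   10|
+----+-----+
>>> x = 42          # dynamically typed: no declared type...
>>> x = "hello"     # ...and the same name can later hold a string
```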

Drawbacks/Disadvantages:

  1. Python is slower. Big data professionals generally find that projects built in Java/Scala run faster and are more robust than the ones built in Python.
  2. When user-defined functions (UDFs) or third-party Python libraries are used with Spark, processing is slower, because the work cannot be delegated to an equivalent native Java/Scala API and has to run in separate Python worker processes (see the sketch after this list).
  3. Python supports heavyweight process forking (for example under uWSGI), but it does not support true multithreading because of the Global Interpreter Lock.
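
As an illustration of the UDF overhead, the sketch below contrasts a plain Python UDF with an equivalent built-in Spark SQL function; the column names and data are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-vs-builtin").getOrCreate()
df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

# Python UDF: each row is serialized, shipped to a Python worker process,
# transformed there, and serialized back to the JVM.
to_upper_udf = F.udf(lambda s: s.upper(), StringType())
df.withColumn("upper_udf", to_upper_udf("name")).show()

# Built-in function: F.upper has a native JVM implementation, so the work
# stays inside the JVM executors and is typically much faster.
df.withColumn("upper_builtin", F.upper("name")).show()
```

As a rule of thumb, prefer the built-in functions in pyspark.sql.functions over a UDF whenever they can express the logic.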

R Language

R is the favourite language of statisticians and is fondly called the language of statistics. It is popular for research, plotting, and data analysis, and together with RStudio it makes a powerful statistics, plotting, and data analytics environment.

R is mainly used for building data models for data analysis.

Advantages/Features of R:

  1. Strong statistical modeling and visualization capabilities.
  2. Support for ‘data science’ related work.
  3. It can be integrated with Apache Hadoop and Spark easily.

Drawbacks/Disadvantages of R:

  1. R is not a general-purpose language.
  2. Code written in R is typically not deployed directly into production; it often needs to be rewritten in Java or Python first.
  3. Not as fast as Java / Scala.

Comparison of four languages for Apache Spark

With these four languages introduced, let's now compare them for the Spark framework.

Broadly, these languages fall into two buckets based on how Spark's high-level architecture supports them:

  1. JVM Languages: Java and Scala
  2. Non-JVM Languages: Python and R

Performance can vary between these two buckets. Let's look at the architecture in a little more depth to understand the performance implications of each language; this will also help answer the question of when to use which one.

[Figure: Spark framework high-level architecture]

An application written in any of these languages is submitted to the driver node, and the driver distributes the workload by dividing execution across multiple worker nodes.
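
For reference, here is a minimal sketch of such a driver program in PySpark (the file name and HDFS paths are hypothetical). The driver only describes the job; Spark schedules the actual work on executors running on the worker nodes:

```python
# wordcount.py -- a hypothetical application submitted to the cluster
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wordcount").getOrCreate()

# The driver builds this query plan; executors on the worker nodes read the
# input partitions and do the splitting and counting in parallel.
lines = spark.read.text("hdfs:///data/input.txt")
counts = (lines
          .select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
          .groupBy("word")
          .count())
counts.write.mode("overwrite").csv("hdfs:///data/wordcounts")
spark.stop()
```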

[Figure: JVM-compatible application execution flow]

Consider applications written in a JVM-compatible language (Java/Scala). Since Spark itself is written in Scala, a JVM-native language, no explicit conversion is required at any point to execute such applications on Spark. This is what makes native-language applications fast on the Spark framework.

There are multiple scenarios for applications written in Python/R:

The Python/R driver talks to the JVM driver through a socket-based API (Py4J, in PySpark's case). When the application language is a non-JVM language, both driver processes are started on the driver node.
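
You can see this bridge from a PySpark session; the attributes below are PySpark internals, shown purely for illustration and not a public API:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bridge-demo").getOrCreate()
sc = spark.sparkContext

# _jvm is a Py4J gateway into the JVM driver process; _jsc is a proxy to the
# JavaSparkContext living inside that JVM.
print(type(sc._jvm))   # Py4J JVM view
print(type(sc._jsc))   # Py4J proxy object for the JVM-side JavaSparkContext
```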

Scenario 1: Applications for which an equivalent Java/Scala driver API exists. These execute much like JVM-compatible applications: the call is translated into the corresponding Java API on the driver node itself. The cost of inter-process communication over sockets is negligible, so performance is comparable. This assumes that the data processed on the worker nodes does not have to be sent back to the driver.
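
A sketch of this scenario: every DataFrame call below has a JVM-side equivalent, so Python only builds the query plan and the row processing never leaves Spark's JVM executors (paths and column names are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scenario1-demo").getOrCreate()

# Each method call is forwarded to the JVM DataFrame API via the Py4J bridge.
result = (spark.read.parquet("hdfs:///data/sales")
          .filter(F.col("amount") > 100)
          .groupBy("region")
          .agg(F.sum("amount").alias("total")))
result.write.mode("overwrite").parquet("hdfs:///data/sales_by_region")
```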

Scenario 1(b): If the assumption in Scenario 1 does not hold, i.e. the data processed on the worker nodes has to be sent back to the driver, then significant transfer and serialization overhead is incurred. This adds to the processing time, and performance deteriorates.
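
For example, the difference between keeping results distributed and pulling them back into the Python driver (a sketch; paths are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scenario1b-demo").getOrCreate()
summary = (spark.read.parquet("hdfs:///data/events")
           .groupBy("event_type")
           .count())

# Stays distributed: executors write their own partitions, and nothing large
# is serialized back into the Python driver process.
summary.write.mode("overwrite").parquet("hdfs:///data/event_counts")

# Pulls every result row through JVM -> Python serialization into the driver:
# fine for a small summary, costly for large results.
rows = summary.collect()
```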

Scenario 2: Applications for which no equivalent Java/Scala driver API exists, e.g. UDFs (user-defined functions) or third-party Python libraries. In such cases, additional Python worker processes are started on the worker nodes; the Python code is serialized, shipped to those workers, and executed there. These Python workers run alongside the JVM executors, and the coordination between them is overhead. The two sets of processes also compete for resources, which adds memory contention.
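
One common way to reduce this overhead (not covered above, but worth knowing) is Spark's vectorized pandas UDFs, which use Apache Arrow to move data between the JVM and Python in column batches rather than row by row. A minimal sketch, assuming Spark 3.x with pandas and PyArrow installed:

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.appName("pandas-udf-demo").getOrCreate()
df = spark.createDataFrame([(1.0,), (2.5,), (4.0,)], ["amount"])

# Arrow ships whole column batches to the Python worker, so the per-row
# serialization cost of a plain Python UDF is amortized.
@pandas_udf("double")
def add_tax(amount: pd.Series) -> pd.Series:
    return amount * 1.18

df.withColumn("amount_with_tax", add_tax("amount")).show()
```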

In addition, if the data also has to be sent back to the driver node, processing takes even longer; the problem grows as the data volume increases, and performance becomes an even bigger concern.

Having looked at performance, let's see a tabular comparison of these languages.

| Comparison Points | Java | Scala | Python | R |
| --- | --- | --- | --- | --- |
| Performance | Faster | Faster (about 10x faster than Python) | Slower | Slower |
| Learning Curve | Easier than Scala, tougher than Python | Steeper learning curve than Java & Python | Easiest | Moderate |
| User Groups | Web/Hadoop programmers | Big data programmers | Beginners & data engineers | Data scientists/statisticians |
| Usage | Web development and Hadoop native | Spark native | Data engineering/machine learning/data visualization | Visualization/data analysis/statistics use cases |
| Type of Language | Object-oriented, general purpose | Object-oriented & functional, general purpose | General purpose | Specifically for data scientists; needs conversion into Scala/Python before productizing |
| Concurrency | Supports concurrency | Supports concurrency | No true multithreading (GIL) | NA |
| Ease of Use | Verbose | Less verbose than Java | Least verbose | NA |
| Type Safety | Statically typed | Statically typed (except for Spark 2.0 DataFrames) | Dynamically typed | Dynamically typed |
| Interpreted Language (REPL) | No | No (compiled, but provides a REPL) | Yes | Yes |
| Mature Machine Learning Libraries | Limited | Limited | Excellent | Excellent |
| Visualization Libraries | Limited | Limited | Excellent | Excellent |
| Web Notebook Support | IJava kernel in Jupyter Notebook | Apache Zeppelin notebook support | Jupyter Notebook support | R Notebook |

Which language is better for Spark and Why?

With the information we have gathered about these languages, let's move to the main question: which language should you choose for Spark?

There is no single, straightforward answer. I will state my point of view on choosing the right language from two perspectives:

  1. You are a beginner choosing a language in order to learn Spark.
  2. You are an organization (or self-employed) choosing a language for a project implementation.

I. If you are a beginner:

  • If you are a beginner with no prior programming education, Python is the language for you: it is easy to pick up, simple to understand, and very user-friendly. It is a good starting point for building Spark knowledge further. Also, if you are looking to get into roles like data engineering, knowledge of Python along with its supporting libraries will go a long way.
  • If you are a beginner but have some programming education, you may find Java familiar and easy to build on; after all, it has influenced many of the other languages on this list.
  • If you are a hardcore big data programmer and love exploring complexity, Scala is the choice for you. It is complex, but experts say that once you come to love Scala, you will prefer it over the other languages any time.
  • If you are a data scientist or statistician looking to work with Spark, R is the language for you; it is more science-oriented than Python.

II. If you are an organization (or self-employed) choosing a language for implementation:

You need to answer the following important questions before choosing the language:

  1. Skills and proficiency: which skill sets and language proficiency do you or your team already have?
  2. Design goals and language capability: which libraries give you better support for the type of problem(s) you are trying to solve?
  3. Performance implications.

Details of these are explained below:

1. Skill set: This is straightforward. Go with whatever skill set is already available within the team, after evaluating the other two questions. If you are self-employed, the language you are most proficient in is the most likely choice.

2. Library support: 
The following summarizes the high-level capabilities of each language:

  • R: Good for research, plotting, and data analysis.
  • Python: Good for small- or medium-scale projects to build models and analyse data, especially for fast start-ups or small teams.
  • Scala/Java: Good for robust programming with many developers and teams; they have fewer machine learning utilities than Python and R but make up for it with easier code maintenance.
    In my opinion, Scala/Java suit larger, robust projects where ease of maintenance matters. Also, if you want the application to scale quickly and need it to be robust, Scala is the choice.
  • Python and R: Python is a more universal language than R, while R is more science-oriented. Broadly, Python fits data engineering use cases and R fits data science-oriented use cases. If you find that both offer roughly the same library support you need, pick the one whose syntax you prefer. You may find that you need both, depending on the situation.

3. Performance: As discussed earlier, Scala/Java can be roughly 10x faster than Python/R because they run natively on the JVM. However, if you write Python/R applications wisely (avoiding plain UDFs, not sending large results back to the driver, and so on), they can perform comparably well.

Conclusion

For learning, depending upon your prior knowledge, Python is the easiest of all to pick up. 

For implementations, the choice is in your hands, but here is a tip: you don't have to stick to one language for the whole project. You can divide the problem into smaller buckets and use the best language for each. This way you can achieve a balance between performance, library availability, skill proficiency, and the sub-problem at hand.

Do let us know in the comments below how you found this language comparison, which language you think is better for Spark, and which one is "the one for you".

Shruti Deshpande

Blog Author

10+ years of data-rich experience in the IT industry, starting with data warehousing technologies, moving into data modelling, and on to BI application architect and solution architect roles.


Big Data enthusiast and data analytics is my personal interest. I do believe it has endless opportunities and potential to make the world a sustainable place. Happy to ride on this tide.


*Disclaimer* - Expressed views are the personal views of the author and are not to be mistaken for the employer or any other organization’s views.

Join the Discussion

Your email address will not be published. Required fields are marked *

1 comments

Rahul 01 Jul 2020

Outstanding explanation... Thanks a lot Sir.....

Suggested Blogs

What are Python KeyError Exceptions and How to Handle Them

There are times when you have written your code but while you execute, it might not run. These types of situations occur when the input is inappropriate or you try to open a file with a wrong path or try to divide a number by zero. Due to some errors or incorrect command the output will not be displayed. This is because of errors and exceptions which are a part of the Python programming language. Learn about such concepts and gain further knowledge by joining Python Programming Course.What is Exception Handling?Python raises exceptions when it encounters errors during execution. A Python Exception is basically a construct that signals any important event, such as a run-time error.Exception Handling is the process of responding to executions during computations, which often interrupts the usual flow of executing a program. It can be performed both at the software level as part of the program and also at hardware level using built-in CPU mechanisms.Why is Exception Handling Important?Although exceptions might be irritating when they occur, they play an essential role in high level languages by acting as a friend to the user.An error at the time of execution might lead to two things— either your program will die or will display a blue screen of death. On the other hand, exceptions act as communication tools. It allows the program to answer the questions — what, why and how something goes wrong and then terminates the program in a delicate manner.In simple words, exception handling protects against uncontrollable program failures and increases the potency and efficiency of your code. If you want to master yourself in programming, the knowledge of exceptions and how to handle them is very crucial, especially in Python.What are the Errors and Exceptions in Python?Python doesn’t like errors and exceptions and displays its dissatisfaction by terminating the program abruptly.There are basically two types of errors in the Python language-Syntax Error.Errors occuring at run-time or Exceptions.Syntax ErrorsSyntax Errors, also known as parsing errors, occur when the parser identifies an incorrect statement. In simple words, syntax error occurs when the proper structure or syntax of the programming language is not followed.An example of a syntax error:>>> print( 1 / 0 )) File "", line 1 print( 1 / 0 ))   ^SyntaxError: invalid syntaxExceptionsExceptions occur during run-time. Python raises an exception when your code has a correct syntax but it encounters a run-time issue which it is not able to handle.There are a number of defined built-in exceptions in Python which are used in specific situations. 
Some of the built-in exceptions are:ExceptionCause Of ErrorArithmeticErrorRaised when numerical computation fails.FloatingPointErrorRaised when floating point calculation fails.AssertionErrorRaised in case of failure of the Assert statement.ZeroDivisionErrorRaised when division or modulo by zero takes place for all numerical values.OverflowErrorRaised when result of an arithmetic operation is very large to be represented.IndexErrorRaised when an index is not found in a sequence.ImportErrorRaised when the imported module is not found.IndentationErrorRaised when indentation is not specified properly.KeyboardInterruptRaised when the user hits interrupt key.RuntimeErrorRaised when a generated error does not fall into any category.SyntaxErrorRaised when there is an error in Python syntax.IOErrorRaised when Python cannot access a file correctly on disk.KeyErrorRaised when a key is not found in a dictionary.ValueErrorRaised when an argument to a function is the right type but not in the right domain.NameErrorRaised when an identifier is not found in the local or global namespace.TypeErrorRaised when an argument to a function is not in the right type.There are another type of built-in exceptions called warnings. They are usually issued in situations where the user is alerted of some conditions. The condition does not raise an exception; rather it  terminates the program.What is a Python KeyError?Before getting into KeyError, you must know the meaning of dictionary and mapping in Python. Dictionary (dict) is an unordered collection of objects which deals with data type key. They are Python’s implementation of data structures and are also known as associative arrays. They comprise key-value pairs, in which each pair maps the key to its associated value.Dictionary is basically a data structure that maps one set of values into another and is the most common mapping in Python.Exception hierarchy of KeyError:->BaseException              ->Exception                         ->LookupError                                       ->KeyErrorA Python KeyError is raised when you try to access an invalid key in a dictionary. In simple terms, when you see a KeyError, it denotes that the key you were looking for could not be found.An example of KeyError:>>> prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} >>> prices['Eraser'] Traceback (most recent call last): File "", line 1, in prices['Eraser'] KeyError: 'Eraser'Here, dictionary prices is declared with the prices of three items. The KeyError is raised when the item ‘Eraser’ is being accessed which is not present in prices.Whenever an exception is raised in Python, it is done using traceback, as you can see in the example code above. It tells why an exception is raised and what caused it.Let’s execute the same Python code from a file. 
This time, you will be asked to give the name of the item whose price you want to know:# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') print(f'The price of {item} is {prices[item]}')You will get a traceback again but you’ll also get the information about the line from which the KeyError is raised:Get price of: Eraser Traceback (most recent call last): File "prices.py", line 5, in print(f'The price of {item} is {prices[item]}') KeyError: 'Eraser'The traceback in the example above provides the following information:A KeyError was raised.The key ‘Eraser’ was not found.The line number which raised the exception along with that line.Where else will you find a Python KeyError?Although most of the time, a KeyError is raised because of an invalid key in a Python dictionary or a dictionary subclass, you may also find it in other places in the Python Standard Library, such as in a zipfile. However, it denotes the same semantic meaning of the Python KeyError, which is not finding the requested key.An example of such:>>> from zipfile import ZipFile >>> my_zip_file = ZipFile('Avengers.zip') >>> my_zip_file.getinfo('Batman')Traceback (most recent call last): File "", line 1, in File "myzip.py", line 1119, in getinfo 'There is no item named %r in the archive' % name) KeyError: "There is no item named 'Batman' in the archive"In this example, the zipfile.ZipFile class is used to derive information about a ZIP archive ‘Batman’ using the getinfo() function. Here, the traceback indicates that the problem is not in your code but in the zipfile code, by showing the line which caused the problem. The exception raised here is not because of a LookUpError but rather due to the zipfile.ZipFile.getinfo()function call.When do you need to raise a Python KeyError?In Python Programming, it might be sensible at times to forcefully raise exceptions in your own code. You can usually raise an exception using the raise keyword and by calling the KeyError exception:>>> raise KeyError('Batman')Here, ‘Batman’ acts as the missing key. However, in most cases, you should provide more information about the missing key so that your next developer has a clear understanding of the problem.Conditions to raise a Python KeyError in your code:It should match the generic meaning behind the exception.A message should be displayed about the missing key along with the missing key which needs to be accessed.How to Handle a Python KeyError?The main motive of handling a Python KeyError is to stop unexpected KeyError exceptions to be raised. There are a number of number of ways of handling a KeyError exception.Using get()The get()is useful in cases where the exception is raised due to a failed dictionary LookupError. It returns either the specified key value or a default value.# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') price = prices.get(item) if price:   print(f'The price of {item} is {prices[item]}')   else:   print(f'The price of {item} is not known')This time, you’ll not get a KeyError because the get() uses a better and safer method to retrieve the price and if not found, the default value is displayed:Get price of: EraserThe price of Eraser is not knownIn this example, the variable price will either have the price of the item in the dictionary or the default value ( which is None by default ).In the example above, when the key ‘Eraser’ is not found in the dictionary, the get() returns  None by default rather than raising a KeyError. 
You can also give another default value as a second argument by calling get():price = prices.get(item,0)If the key is not found, it will return 0 instead of None.Checking for KeysIn some situations, the get() might not provide the correct information. If it returns None, it will mean that the key was not found or the value of the key in Python Dictionary is actually None, which might not be true in some cases. In such situations, you need to determine the existence of a key in the dictionary. You can use the if and in operator to handle such cases. It checks whether a key is present in the mapping or not by returning a boolean (True or False) value:dict = dictionary() for i in range(50):   key = i % 10     if key in dict: dict[key] += 1 else: dict[key] = 1In this case, we do not check what the value of the missing key is but rather we check whether the key is in the dictionary or not. This is a special way of handling an exception which is used rarely.This technique of handling exceptions is known as Look Before You Leap(LBYL).Using try-exceptThe try-except block is one of the best possible ways to handle the KeyError exceptions. It is also useful where the get() and the if and in operators are not supported.Let’s apply the try-except block on our earlier retrieval of prices code:# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known')Here, in this example there are two cases— normal case and a backup case. try block corresponds to the normal case and except block to the backup case. If the normal case doesn’t print the name of the item and the price and raises a KeyError, the backup case prints a different statement or a message.Using try-except-elseThis is another way of handling exceptions. The try-except-else  has three blocks— try block, except block and else block.The else condition in a try-except statement is useful when the try condition doesn’t raise an exception. However, it must follow all the except conditions.Let us take our previous price retrieval code to illustrate try-except-else:# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of:') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known') else: print(f'There is no error in the statement')First, we access an existing key in the try-except block. If the Keyerror is not raised, there are no errors. Then the else condition is executed and the statement is displayed on the screen.Using finallyThe try statement in Python can have an optional finally condition. It is used to define clean-up actions and is always executed irrespective of anything. It is generally used to release external sources.An example to show finally:# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known') finally: print(f'The finally statement is executed')Remember, the finally statement will always be executed whether an exception has occurred or not.How to raise Custom Exceptions in Python?Python comprises of a number of built-in exceptions which you can use in your program. 
However, when you’re developing your own packages, you might need to create your own custom exceptions to increase the flexibility of your program.You can create a custom Python exception using the pre-defined class Exception:def square(x): if x
8273
What are Python KeyError Exceptions and How to Han...

There are times when you have written your code bu... Read More

How to Work With a PDF in Python

Whether it is an ebook, digitally signed agreements, password protected documents, or scanned documents such as passports, the most preferred file format is PDF or Portable Document Format. It was originally developed by Adobe and is a file format used to present and transfer documents easily and reliably. It uses the file extension .pdf. In fact, PDF being the most widely used digital media, is now considered as an open standard which is maintained by the International Standards Organization (ISO). Python has relatively easy syntax which makes it even easier for the ones who are in their initial stage of learning the language. The popular Python libraries are well suited and integrated which allows to easily extract documents from a PDF, rotate pages if required, split pdf to make separate documents, or add watermarks in them.Now an important question rises, why do we need Python to process PDFs? Well, processing a PDF falls under the category of text analytics. There are several libraries and frameworks available which are designed in Python exclusively for text analytics. This makes it easier to play with a PDF in Python. You can also extract information from PDF and use into Natural Language Processing or any other Machine Learning models. Get certified and learn more about Python Programming and apply those skills and knowledge in the real world.History of  pyPDF, PyPDF2, pyPDF4The first PyPDF package was released in 2005 and the last official release in 2010. After a year or so, a  company named Phasit sponsored a branch of the PyPDF called PyPDF2 which was consistent with the original package and worked pretty well for several years.A series of packages were released later on with the name of PyPDF3 and later renamed as PyPDF4. The biggest difference between PyPDF and the other versions was that the later versions supported Python3. PyPDF2 has been discarded recently. But since PyPDF4 is not fully backward compatible with the PyPDf2, it is suggested to use PyPDF2. You can also use a substitute package - pdfrw. Pdfrw was created by Patrick Maupin and allows you to perform all functions which PyPDF2 is capable of except a few such as encryption, decryption, and types of decompression.Some common libraries in PythonLet us look into some of the libraries Python offers to handle PDFs:PdfMiner It is a tool used to extract information from PDF documents. PDFMiner allows the user to analyze text data and obtain the definite location of a text. It provides information such as fonts and lines. We can also use it as a PDF transformer and a PDF parser.PyPDF2PyPDF2 is purely a Python library which allows users to split, merge, crop, encrypt, and transform PDFs. You can also add customized data, view options, and passwords to the documents. Tabula-pyIt is a Python wrapper of tabula-java which can read tables from PDF files and convert into Pandas Dataframe or into CSV/TSV/JSON file formats.SlateIt is a Python package which facilitates the extraction of information and is dependent on the PdfMiner package.PDFQueryA light Python wrapper which uses minimum code to extract data from PDFs.xPDFIt is an open source viewer of PDF which also includes an extractor, converter and other utilities. Out of all the libraries mentioned above, PyPDF2 is the most used to perform operations like extraction, merging, splitting and so on.Installing PyPDF2If you're using Anaconda, you can install PyPDF2 using pip or conda. 
To install PyPDF2 using pip, run the following command in the command line:pip install PyPDF2The module is case-sensitive. So you need to make sure that proper syntax is followed. The installation is really quick since PyPDF2 is free of dependencies.Extracting Document Information from a PDF in PythonPyPDF2 can be used to extract metadata and all sorts of texts from PDF when you are performing operations on preexisting PDF files. The types of data you can extract are:AuthorCreatorProducerSubjectTitleNumber of PagesTo understand it better, let us use an existing PDF in your system or you can go to Leanpub and download a book sample.The code for extracting the document information from the PDF—# get_doc_info.py from PyPDF2 import PdfFileReader def getinfo(path):     with open(path, 'rb') as f:         PDF = PdfFileReader(f)         information = PDF.getDocumentInfo()         numberofpages = PDF.getNumPages()     print(information)     author = information.author     creator = information.creator     producer =information .producer     subject = information.subject     title = information.title if __name__ == '__main__':     path = 'reportlab-sample.pdf'     getinfo(path)The output of the program above will look like—Here, we have firstly imported PdfFileReader from the PyPDF2 package. The class PdfFileReader is used to interact with PDF files like reading and extracting information using accessor methods. Then, we have created our own function getinfo with a PDF file as an argument and then called the getdocumentinfo(). This returned an instance of DocumentInformation. And finally we got extract information like the author, creator, subject or title, etc.getNumPages() is used to count the number of pages in the document. PdfMiner can be used when you want to extract text from a PDF file. It is potent and particularly designed for extracting text from PDF.We have learned to extract information from PDF. Now let’s learn how to rotate a PDF. Rotating pages in PDFA lot of times we receive PDFs which contain pages in landscape orientation instead of portrait. You may also find certain documents to be upside down, which happens while scanning a document or mailing. However, we can rotate the pages clockwise or counterclockwise according to our choice using Python with PyPDF2.The code for rotating the article is as follows—# rotate_pages.py from PyPDF2 import PdfFileReader, PdfFileWriter def rotate(pdf_path):     pdf_write = PdfFileWriter()     pdf_read = PdfFileReader(path)     # Rotate page 90 degrees to the right     page1 = pdf_read.getPage(0).rotateClockwise(90)     pdf_write.addPage(page1)     # Rotate page 90 degrees to the left     page2 = pdf_read.getPage(1).rotateCounterClockwise(90)     pdf_write.addPage(page2)     # Add a page in normal orientation     pdf_write.addPage(pdf_read.getPage(2))     with open('rotate_pages.pdf', 'wb') as fh:         pdf_write.write(fh) if __name__ == '__main__':     path = 'mldocument.pdf'     rotate(path)The output of the code will be as follows—Here firstly we imported the PdfFileReader and the PdfFileWriter so that we can write out a new PDF file. Then we declared a function rotate with a path to the PDF that is to be modified. Within the function, we created a read object pdf_read and write object pdf_write.Then, we used the getPage() to grab the pages. 
Two pages page1 and page2 are taken and rotated to 90 degrees clockwise and 90 degrees counterclockwise respectively using rotateClockwise() and rotateCounterClockwise().We used addPage() function after each rotation method calls. This adds the rotated page to the write object. The last page we add is page3 without any rotation.Lastly, we have used write() with a file-like parameter to write out the new PDF. The final PDF contains three pages, the first two will be in the landscape mode and rotated in reversed direction and the third page will be in normal orientation.Now we will learn to merge different PDFs into one.Merging PDFsIn many cases, we need to merge two PDFs into a single one. For example, suppose you are working on a project report and you need to print it and bind it into a book. It contains a cover page followed by the project report. So you have two different PDFs and you want to merge them into one PDF. You can simply use Python to do so. Let us see how can we merge PDFs into one.The code for merging two PDF documents using PyPDF in mentioned below:# pdf_merging.py from PyPDF2 import PdfFileReader, PdfFileWriter def pdfmerger(paths, output):     pdfwrite = PdfFileWriter()     for path in paths:         pdfread = PdfFileReader(path)         for page in range(pdfread.getNumPages()):             # Add each page to the writer object             pdfwrite.addPage(pdfread.getPage(page))     # Write out the merged PDF     with open(output, 'wb') as out:         pdfwrite.write(out) if __name__ == '__main__':     paths = ['document-1.pdf', 'document-2.pdf']     pdfmerger(paths, output='merged.pdf')Here we have created a function pdfmerger() which takes a number of inputs and a single output. Then we created a PdfFileReader() object for each PDF path and looped over the pages, added each page to the write object. Finally, using the write() function the object’s contents are written to the disk.PyPDF2 makes the process of merging simpler by creating the PdfFileMerger class.Code for merging two documents using PyPDF2—# pdf_merger2.py import glob from PyPDF2 import PdfFileMerger def merger(output_path, input_paths):     pdfmerge = PdfFileMerger()     file_handles = []     for path in input_paths:         pdfmerge.append(path)     with open(output_path, 'wb') as fileobj:         pdfmerge.write(fileobj) if __name__ == '__main__':     paths = glob.glob('d-1.pdf')     paths.sort()     merger('d-2.pdf', paths)The PyPDF2 makes it simpler in the way that we don’t need to loop the pages of each document ourselves.  Here, we created the object pdfmerge and looped through the PDF paths. The PyPDF2 automatically appends the whole document. Finally, we write it out.Let’s perform the opposite of merging now!Splitting PDFsThe PyPDF2 package has the ability to split up a single PDF into multiple PDFs. It allows us to split pages into different PDFs. 
Suppose we have a set of scanned documents in a single PDF and we need to separate the pages into different PDFs as per requirement, we can simply use Python to select pages we want to split and get the work done.Code for splitting a single PDF into multiple PDFs—# pdf_splitter.py import os from PyPDF2 import PdfFileReader, PdfFileWriter def splitpdf(path):     fname = os.path.splitext(os.path.basename(path))[0]     pdf = PdfFileReader(path)     for page in range(pdf.getNumPages()):         pdfwrite = PdfFileWriter()         pdfwrite.addPage(pdf.getPage(page))         outputfilename = '{}_page_{}.pdf'.format(             fname, page+1)         with open(outputfilename, 'wb') as out:             pdfwrite.write(out)         print('Created: {}'.format(outputfilename)) if __name__ == '__main__':     path = 'document-1.pdf'     splitpdf(path)Here we have imported the PdfFileReader and PdfFileWriter from PyPDF2. Then we created a function called splitpdf() which accepts the path of PDF we want to split. The first line of the function takes the name of the input file. Then we open the PDF and create a read object. Using the read object’s getNumPages(), we loop over all the pages.In the next step, we created an instance of PdfFileWriter inside the for loop. Then, we created a PDF write instance and added each page to it for each of the pages in the PDF input. We also created a unique filename using the original filename + the word ‘page’ + the page number + 1.Once we are done with running the script, we will have each of the pages of the input PDF split into multiple PDFs. Now let us learn how to add a watermark to a PDF and keep it secured.Adding Overlays/WatermarksAn image or superimposed text on selected pages in a PDF document is referred to as a Watermark. The Watermark adds security features and protects our rational property like images and PDFs. Watermarks are also called overlays.The PyPDF2 allows us to watermark documents. We just need to have a PDF which will consist of our watermark text, image or signature.Code for adding a watermark in a PDF—# watermarker.py from PyPDF2 import PdfFileWriter, PdfFileReader def watermark(inputpdf, outputpdf, watermarkpdf):     watermark = PdfFileReader(watermarkpdf)     watermarkpage = watermark.getPage(0)     pdf = PdfFileReader(inputpdf)     pdfwrite = PdfFileWriter()     for page in range(pdf.getNumPages()):         pdfpage = pdf.getPage(page)         pdfpage.mergePage(watermarkpage)         pdfwrite.addPage(pdfpage)     with open(outputpdf, 'wb') as fh:         pdfwrite.write(fh) if __name__ == '__main__':     watermark(inputpdf='document-1.pdf',               outputpdf='watermarked_w9.pdf',               watermarkpdf='watermark.pdf')The output of the code will look like— There are three arguments of the function watermark(): inputpdf: The path of the PDF that is to be watermarked. outputpdf: The path where the watermarked PDF will be saved. 
Now let us learn how to add a watermark to a PDF and keep it secured.

Adding Overlays/Watermarks

An image or text superimposed on selected pages of a PDF document is referred to as a watermark. Watermarks add a layer of protection to our intellectual property, such as images and PDFs, and are also called overlays.

PyPDF2 allows us to watermark documents. We just need a PDF that contains our watermark text, image or signature.

Code for adding a watermark to a PDF—

# watermarker.py
from PyPDF2 import PdfFileWriter, PdfFileReader

def watermark(inputpdf, outputpdf, watermarkpdf):
    wmread = PdfFileReader(watermarkpdf)
    watermarkpage = wmread.getPage(0)
    pdf = PdfFileReader(inputpdf)
    pdfwrite = PdfFileWriter()
    for page in range(pdf.getNumPages()):
        pdfpage = pdf.getPage(page)
        # Merge the watermark page onto the current page
        pdfpage.mergePage(watermarkpage)
        pdfwrite.addPage(pdfpage)
    with open(outputpdf, 'wb') as fh:
        pdfwrite.write(fh)

if __name__ == '__main__':
    watermark(inputpdf='document-1.pdf',
              outputpdf='document-1-watermarked.pdf',
              watermarkpdf='watermark.pdf')

The watermark() function takes three arguments:

• inputpdf: the path of the PDF that is to be watermarked.
• outputpdf: the path where the watermarked PDF will be saved.
• watermarkpdf: the PDF which contains the watermark.

First, we open the watermark PDF and extract the page that contains the watermark image or text. Using inputpdf we create a read object, and with pdfwrite we create a write object for the watermarked output, then iterate over the pages. For each page we call the page object's mergePage() to apply the watermark and add the result to the write object pdfwrite. When the loop terminates, the watermarked PDF is written out to disk and it's done!

Encrypting a PDF

In the PDF world, an owner password gives the user administrator-like privileges over a document, while a user password only allows the document to be opened once the password is entered. PyPDF2 does not let you set fine-grained permissions on a PDF, but it does allow you to set both the owner password and the user password.

Code to add a password and encrypt a PDF—

# pdf_encrypt.py
from PyPDF2 import PdfFileWriter, PdfFileReader

def encryption(inputpdf, outputpdf, password):
    pdfwrite = PdfFileWriter()
    pdfread = PdfFileReader(inputpdf)
    for page in range(pdfread.getNumPages()):
        pdfwrite.addPage(pdfread.getPage(page))
    # Encrypt the output with a user password and 128-bit encryption
    pdfwrite.encrypt(user_pwd=password, owner_pwd=None,
                     use_128bit=True)
    with open(outputpdf, 'wb') as fh:
        pdfwrite.write(fh)

if __name__ == '__main__':
    encryption(inputpdf='document-1.pdf',
               outputpdf='document-1-encrypted.pdf',
               password='twofish')

We declare a function named encryption() with three arguments—the input PDF path, the output PDF path and the password we want to set. Then we create one read object, pdfread, and one write object, pdfwrite. We loop over all the pages and add them to the write object, since we need to encrypt the entire document.

Finally, we call the encrypt() function, which accepts three parameters—the user password, the owner password and whether or not to use 128-bit encryption. If use_128bit is set to False, the PDF is encrypted with 40-bit encryption instead. Also, if the owner password is None, it is automatically set to the user password.
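To read an encrypted PDF back in, PdfFileReader exposes an isEncrypted property and a decrypt() method. The sketch below is a minimal example, reusing the file name and password from the encryption example above:

# pdf_decrypt.py
from PyPDF2 import PdfFileReader

def open_encrypted(path, password):
    pdfread = PdfFileReader(path)
    if pdfread.isEncrypted:
        # decrypt() returns 0 if the password is wrong,
        # 1 for the user password and 2 for the owner password
        if pdfread.decrypt(password) == 0:
            raise ValueError('Wrong password for {}'.format(path))
    print('Number of pages: {}'.format(pdfread.getNumPages()))
    return pdfread

if __name__ == '__main__':
    open_encrypted('document-1-encrypted.pdf', 'twofish')

Either non-zero return value from decrypt() means the password was accepted and the pages can be read as usual.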
Reading the Table Data from a PDF

Suppose you want to work with the table data in a PDF; you can use tabula-py to read tables from a PDF. To install tabula-py, run:

pip install tabula-py

Code to extract table data from a PDF using tabula-py:

import tabula

# Read the PDF file that contains the table data;
# read_pdf loads the PDF table into a pandas DataFrame
df = tabula.read_pdf("document.pdf")

# Print the first 5 rows of the table
df.head()

If your PDF file contains multiple tables:

df = tabula.read_pdf("document.pdf", multiple_tables=True)

If you want to extract information from a specific area of a specific page of the PDF:

tabula.read_pdf("document.pdf", area=(126, 149, 212, 462), pages=1)

If you want the output in JSON format:

tabula.read_pdf("document.pdf", output_format="json")

Exporting a PDF into Excel or CSV

Suppose you want to export the table data in a PDF to a spreadsheet. tabula-py can convert it into a CSV file, which opens directly in Excel:

tabula.convert_into("document.pdf", "document_testing.csv", output_format="csv")

Let us sum up what we have learned in this article:

• Extracting data from a PDF
• Rotating pages in a PDF
• Merging PDFs into one PDF
• Splitting a PDF into many PDFs
• Adding watermarks or overlays to a PDF
• Adding a password or encryption to a PDF
• Reading tables from a PDF
• Exporting PDF data into Excel or CSV

As you have seen, PyPDF2 is one of the most useful tools available in Python. Its features make life easier whether you are working on a large project or just want to make some quick changes to your PDF documents. Learn more about such libraries and frameworks: KnowledgeHut offers a Python Certification Course for programmers, developers, Jr./Sr. software engineers and anybody who wants to learn Python.

Web Development Using PHP And MySQL

PHP (or PHP Hypertext Preprocessor) is a server-side scripting language used to create dynamic web pages that can interact with databases. It is a widely used open-source language specifically suited for web application development and can be embedded within HTML.

Why PHP?
The distinguishing feature of PHP is that the scripting code is executed on the server, which generates HTML that is sent back to the client. The client receives the result of executing the script without seeing the underlying code. Developers can configure the web server to process all HTML files containing PHP script. PHP is easy for any newcomer to learn, yet it also offers advanced programming features.

Using PHP with a database system
As a scripting language, PHP is popular among web developers because of its ability to interact with database systems, including Oracle and MySQL. This article discusses the use of PHP with the MySQL database. A website may need a variety of data to display and to retrieve from a database, ranging from a simple list to running the entire website based on stored data. Listed below are some examples where PHP and MySQL can be used together:
• Digital ad banners, where a PHP script retrieves a banner from the database by selecting a random record from the banner table and sends it back to the calling script. The script can also maintain a count of banner views and clicks on the website.
• Internet forums or message boards, which use PHP and MySQL to store and retrieve user messages.
• Website design, where the design of an entire website can be changed with a couple of PHP scripts instead of changing and uploading each web page. The PHP script accesses the MySQL database to retrieve all the information about a web page.

Setting up the MySQL database
The procedure for setting up a MySQL database varies according to the host. Every database requires a user name and password for access. Database administration can be done using PHP scripts or with a program such as phpMyAdmin. The next step is to create the database tables for storing the website information. Creating a database table using phpMyAdmin is simple; alternatively, one can create the table with the following SQL statement:

CREATE TABLE tablename (
    fields
);

where each field is coded as fieldname type(length) extra_info, for example:

first varchar(15) NOT NULL

The following command is used in a PHP script to connect to the MySQL database:

mysql_connect("localhost", $username, $password);

where:
• "localhost" is the address of the server on which the website is running,
• $username is the user name for database access,
• $password is the password for database access.

Executing PHP commands
After configuring and connecting to the MySQL database, you can start executing queries on the server. There are two ways of executing a command:
• Entering the command directly, using the syntax mysql_query($query). This form can be reused simply by changing the variable.
• Defining the command as a variable, with the result of the operation assigned to that variable.

Data input and output
Inserting data using PHP is identical to the procedure of entering data through HTML pages, with the advantage that the script does not need to change for each new piece of input data.
Users can also input their own data on the web page, for example through an HTML form with textboxes. The submitted values can be captured in PHP variables and inserted into the database:

$first = $_POST['first'];
$last = $_POST['last'];
$phone = $_POST['phone'];
$mobile = $_POST['mobile'];
$fax = $_POST['fax'];
$email = $_POST['email'];
$web = $_POST['web'];
...
$query = "INSERT INTO contacts VALUES ('', '$first', '$last', '$phone', '$mobile', '$fax', '$email', '$web')";
mysql_query($query);

This script is saved in the insert.php file, which can be called from the HTML form. Using this method, data entered in the web page form is stored in the defined variables, which are then passed to PHP.

To display (or output) the entered data using PHP, you can use the following MySQL query, with the result assigned to a variable:

$query = "SELECT * FROM contacts";
$result = mysql_query($query);

PHP provides two submission methods, GET and POST, for getting the data submitted by a form into your PHP script. The GET method displays the variables and data in the page address, while they are invisible with the POST method. For example, a script can display different web pages depending on the clicked link:

yourpage.php?user=david (to show David's page)
yourpage.php?user=tom (to show Tom's page)