Unable to install pyspark

I am trying to install pyspark like this:

python setup.py install

I get this error:

Could not import pypandoc - required to package PySpark

pypandoc is already installed.

Any ideas on how I can install pyspark?



Solution 1:[1]

I faced the same issue and solved it as below: install pypandoc before installing pyspark.

pip install pypandoc
pip install pyspark
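
Once both packages are installed, a quick sanity check (just a sketch, assuming you run it in the same Python environment that pip installed into) is to import them:

# Both imports should succeed in the environment pip installed into.
import pypandoc
import pyspark

print("imports OK, pyspark", pyspark.__version__)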

Solution 2:[2]

Try installing pypandoc with Python 3 using pip3 install pypandoc.

Solution 3:[3]

If you are using Windows, follow these steps:
1) Install the JDK on your computer from this link: https://www.oracle.com/technetwork/java/javase/downloads/index.html

2) Set the environment variable JAVA_HOME=/path/where/you/installed/jdk, then add %JAVA_HOME%\bin to PATH.

3) Download Spark from https://spark.apache.org/downloads.html. It comes as an archive named something like spark-2.3.1-bin-hadoop2.7.tgz; extract it, move the extracted folder to the C: drive, and set the environment variable

SPARK_HOME=/path/of/the/spark

4) Download the Scala IDE from http://scala-ide.org/, extract the file, and copy the Eclipse folder to the C: drive.

5) Now open cmd and run spark-shell; it will open the Scala shell for you. (A quick check of the environment variables is sketched below.)
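
If spark-shell is not found, the small Python sketch below only reads back the settings from the steps above (run it from a freshly opened cmd so the new variables are picked up; the exact paths will differ on your machine):

# Rough check that the environment variables from the steps above are visible.
import os
import shutil

for var in ("JAVA_HOME", "SPARK_HOME"):
    print(var, "=", os.environ.get(var, "<not set>"))

# spark-shell only resolves here if the Spark bin folder is on PATH.
print("spark-shell found at:", shutil.which("spark-shell"))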

Solution 4:[4]

2018 version:

Install PySpark on Windows 10 for Jupyter Notebook with Anaconda Navigator.

STEP 1

Download Packages

1) spark-2.2.0-bin-hadoop2.7.tgz

2) Java JDK 8

3) Anaconda v5.2

4) scala-2.12.6.msi

5) hadoop v2.7.1

STEP 2

Create a SPARK folder on the C:\ drive, extract Hadoop and Spark into it, and install Scala into the same directory using scala-2.12.6.msi. You should end up with the hadoop, scala, and spark folders side by side under C:\spark (matching the environment variables in STEP 3).

Note: during the installation of Scala, specify the C:\spark folder.

STEP 3

Now set the Windows environment variables:

  1. HADOOP_HOME=C:\spark\hadoop

  2. JAVA_HOME=C:\Program Files\Java\jdk1.8.0_151

  3. SCALA_HOME=C:\spark\scala\bin

  4. SPARK_HOME=C:\spark\spark\bin

  5. PYSPARK_PYTHON=C:\Users\user\Anaconda3\python.exe

  6. PYSPARK_DRIVER_PYTHON=C:\Users\user\Anaconda3\Scripts\jupyter.exe

  7. PYSPARK_DRIVER_PYTHON_OPTS=notebook

  8. Now add Spark's bin folder to the PATH:

    Click on Edit and add New

    Add "C:\spark\spark\bin" to the Windows "Path" variable (a quick check of these settings is sketched below)

STEP 4

  • Make a folder where you want to store your Jupyter Notebook outputs and files
  • After that, open the Anaconda command prompt and cd into that folder
  • Then enter pyspark

That's it, your browser will pop up with Jupyter on localhost.

STEP 5

Check whether PySpark is working.

Type some simple code and run it:

from pyspark.sql import Row
a = Row(name='Vinay', age=22, height=165)
print("a: ", a)

Solution 5:[5]

Steps to install the PySpark API for Jupyter Notebook:

  1. Go to https://spark.apache.org/downloads.html to download the latest Spark. The file will be downloaded in .tgz format. Extract this .tgz file into the directory where you want to install PySpark.

  2. After extracting the .tgz file, you will need Hadoop because Apache Spark requires it. Download winutils.exe from https://github.com/steveloughran/winutils/blob/master/hadoop-2.7.1/bin/winutils.exe and copy it into the 'bin/' directory of your Spark installation (spark-2.2.0-bin-hadoop2.7/bin).

  3. If you have Anaconda installed, there will be a .condarc file in C:\Users\; open it and change ssl_verify from true to false. This lets you install Python libraries directly from the prompt (useful if you are on a restricted network).

  4. Open the Anaconda prompt and type 'conda install findspark' to install the findspark Python module. If you are not able to install it, go to https://github.com/minrk/findspark, download the ZIP, extract it, then open the Anaconda prompt in the extracted path and run 'python setup.py install'.

  5. Open This PC >> Properties >> Advanced System Settings (you need admin access for that). Click on Environment Variables and then add the new user environment variables.

  6. After creating the 4 user variables and adding the Spark path to the 'PATH' system variable, open a Jupyter notebook and run this code:

    import findspark
    findspark.init()                 # locate the Spark installation and add it to sys.path
    import pyspark
    from pyspark.sql import SQLContext
    from pyspark import SparkContext

    sc = SparkContext("local", "First App")   # start a local Spark context
    sqlContext = SQLContext(sc)

    If you don't get any error, the installation has completed successfully.
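
    As an optional extra check (a tiny sketch, assuming the cell above ran without errors), run one query through the sqlContext you just created:

    # Should print 5 if the local Spark context is healthy.
    print(sqlContext.range(5).count())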

Solution 6:[6]

What worked for me (Windows 10) was:

  1. Install pypandoc with pip install pypandoc
  2. Make sure wheel is on PATH in Windows - if you run pip install wheel and see a yellow warning message (wheel is installed in ... which is not on your PATH), fix that first. An indication of this problem is a message during the pyspark installation saying that setup.py is used because wheel is unavailable (it is unavailable because it is not on PATH). A quick check is sketched below.
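
A rough way to check point 2 (just a sketch, assuming a standard Python layout on Windows) is:

# If this prints None, the directory containing the wheel executable is not on PATH yet.
import shutil
print("wheel executable:", shutil.which("wheel"))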

Solution 7:[7]

As of version 2.2, you can install pyspark directly using pip:

pip install pyspark
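
After the pip install finishes, a short check (it assumes Java is already available on the machine, since pip does not install it for you) is:

# pip only installs the Python side; Java still has to be present for Spark to start.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("pip-install-check").getOrCreate()
print(spark.range(3).count())   # prints 3 if the local session came up cleanly
spark.stop()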

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 viraj ghorpade
Solution 2 geisterfurz007
Solution 3 Sociopath
Solution 4 pvy4917
Solution 5 Ashish C.
Solution 6 pawelek69420
Solution 7 Strick