
Starting Spark

Getting started with the Apache Spark big data analytics engine

Spark in the Cloud


Preferred: Local Installation on Windows

First, set up your Windows machine with the required tools

Verify the prerequisites are installed (some are included above)

choco install 7zip.install -y
choco install openjdk11 -y
choco install miniconda3 --params="/AddToPath:1" -y
refreshenv

Verify:

java --version
python --version

Important: Make sure there is only one Java on your Path - check both the system and user Path variables (running where.exe java lists every copy that is found).

Install Spark locally

Spark is written in Scala (a language for the JVM), but you can interact with it using either Scala or Python.

  1. Read: https://spark.apache.org/
  2. View the Spark downloads page: https://spark.apache.org/downloads.html and download the release (e.g., to your Downloads folder).
  3. NEW: Select the earlier “3.1.2 (Jun 01 2021)” release in the dropdown box before downloading (not the newer default).
  4. Use 7zip to extract the .tgz, then extract the resulting .tar.
  5. Move the folder so you have C:\spark-3.1.2-bin-hadoop3.2\bin - it MUST be the earlier 3.1.2 (not 3.2.1).
  6. View the available winutils builds for Spark at https://github.com/cdarlint/winutils/ - you will download two files from it into your Spark bin folder (next two steps).
  7. UPDATE: Download 3.2.0 winutils.exe from https://github.com/cdarlint/winutils/blob/master/hadoop-3.2.0/bin/winutils.exe into spark bin
  8. UPDATE: Download 3.2.0 hadoop.dll from https://github.com/cdarlint/winutils/blob/master/hadoop-3.2.0/bin/hadoop.dll into spark bin
  9. Recommended: Star the repo to say thanks for providing this helpful service.
  10. Set System Environment Variables - be sure to match the version you actually downloaded (a quick check follows this list).
    • JAVA_HOME = C:\Program Files\OpenJDK\openjdk-11.0.13_8
    • HADOOP_HOME = C:\spark-3.1.2-bin-hadoop3.2
    • SPARK_HOME = C:\spark-3.1.2-bin-hadoop3.2
    • Path - add %SPARK_HOME%\bin
    • Path - add %HADOOP_HOME%\bin
    • Path - verify there is exactly one java bin entry across the user and system Path variables.
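
After setting the variables, open a new terminal so they take effect. As a convenience check (a sketch, assuming Python is already on your Path), you can confirm the variables are visible from any Python prompt:

# print each Spark-related variable, or a placeholder if it is missing
import os
for name in ("JAVA_HOME", "HADOOP_HOME", "SPARK_HOME"):
    print(name, "=", os.environ.get(name, "<not set>"))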

Verify Spark using Scala

  1. In PowerShell as Admin, run spark-shell to launch Spark with Scala (you should get a scala prompt). Be patient - it may take a while.
  2. In a browser, open http://localhost:4040/ to see the Spark shell web UI
  3. Exit the spark shell with CTRL-D (end of input) or by typing :quit.

Verify PySpark (install if needed)

  1. The Spark /bin folder already includes PySpark.
  2. In PowerShell as Admin, run pyspark to launch Spark with Python. You should get a Python prompt >>> (a quick check follows this list).
  3. Quit the Python prompt with the quit() function.
  4. SHOULD NOT BE NEEDED: If you DON’T see a version after the command above, follow instructions at https://anaconda.org/conda-forge/pyspark. Open Anaconda prompt and run: conda install -c conda-forge pyspark
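
If the prompt appears, here is a quick sanity check you can type at the pyspark prompt (the shell pre-defines spark and sc; the version string should match the release you downloaded):

# show the Spark version, then run a trivial job with the RDD API
spark.version
sc.parallelize(range(10)).sum()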

Run a PySpark Script

To run an example, open PowerShell as Admin in the folder shown below and run this command (a sketch of what the example script does follows).

C:\spark-3.1.2-bin-hadoop3.2> bin/spark-submit examples/src/main/python/wordcount.py README.md
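
For reference, the bundled example behaves roughly like the minimal script below (a sketch, not the exact file shipped with Spark; the app name is arbitrary):

# minimal word-count job: read the text file named on the command line,
# split lines into words, count each word, and print the totals
import sys
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()
    lines = spark.read.text(sys.argv[1]).rdd.map(lambda row: row[0])
    counts = (lines.flatMap(lambda line: line.split(" "))
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    for word, count in counts.collect():
        print(word, count)
    spark.stop()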

Required:

Enter the miniconda3 install directory and create a copy of python.exe named python3.exe. Spark looks for the python3 command name.


Required: Copy the log4j template to a properties file - in the Spark conf folder, copy log4j.properties.template to log4j.properties.

Option: Manage App Execution Aliases - in Windows Settings, turn off the python.exe and python3.exe aliases that point to the Microsoft Store so they do not shadow your real Python install.


Spark Scala Examples

Read the example code. What examples are available? What arguments are needed?

.\bin\run-example SparkPi
.\bin\run-example JavaWordCount
.\bin\run-example JavaWordCount README.md

Warnings

If you see a WARN about trying to compute the pagesize, just hit ENTER. That check works on Linux but not on Windows, and the warning can be ignored.


Experiment with Spark & Python
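
The PySpark shell supports the same experiments as the Scala section below; here is a sketch at the pyspark prompt, assuming README.md is in the current folder:

# build an RDD from a text file and explore it with common functions
textFile = sc.textFile("README.md")
textFile.count()
textFile.first()
linesWithSpark = textFile.filter(lambda line: "Spark" in line)
linesWithSpark.count()

# most words in a single line: map each line to its word count, then reduce to the max
textFile.map(lambda line: len(line.split(" "))).reduce(max)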


Experiment with Spark & Java

Build custom apps using Java.

See https://github.com/denisecase/spark-maven-java-challenge

java -cp target/spark-challenge-1.0.0-jar-with-dependencies.jar edu.nwmissouri.isl.App "data.txt"

Experiment with Spark & Scala

Create a new RDD from an existing file. Use common textFile functions.

Implement word count.

val textFile = sc.textFile("README.md")
textFile.count()
textFile.first()
val linesWithSpark = textFile.filter(line => line.contains("Spark"))
val ct = textFile.filter(line => line.contains("Spark")).count()

Find the most words in a single line. All data in -> one out.

  1. First, map each line to words and get the size.
  2. Then, reduce all sizes to one max value.

Which functions should we use? Which version do you prefer?

textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))

val maxWords = textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)

MapReduce in Spark & Scala

  1. First, flatMap each line to words.
  2. Then, map each word to a count (one).
  3. Then, reduceByKey to aggregate a total for each key.
  4. After transformations, use an action (e.g. collect) to collect the results to our shell.

Can you modify it to get the max by key?

val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts.collect()

Python: Conda Environments (we recommend conda rather than pip, since we do not include instructions for pip virtual environments)

Conda environments are typically stored in one of these locations (the username will differ on your machine):

C:\Users\dcase\miniconda3\envs
C:\Users\dcase\.conda\envs
C:\Users\dcase\AppData\Local\conda\conda\envs

A Few Terms

Resources