Category "apache-spark"

VCores used is always less than VCores total in Spark on YARN on AWS EMR?

I'm using Spark to run a grid-search job with the spark-sklearn package. Here's my config: NUM_SLAVES = 14 DRIVER_SPARK_MEMORY=53 # "spark.driver.memory" EXECUTOR_
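
A common explanation is that YARN's default capacity scheduler uses DefaultResourceCalculator, which sizes containers by memory only, so the ResourceManager UI reports one vcore per container regardless of what spark.executor.cores asks for. A minimal sketch of pinning the executor footprint explicitly (all numbers are illustrative placeholders, not tuned values):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("grid-search")
        .config("spark.executor.instances", "14")  # e.g. one executor per slave node
        .config("spark.executor.cores", "4")       # cores each executor requests from YARN
        .config("spark.executor.memory", "10g")
        .getOrCreate()
    )

Switching the scheduler to DominantResourceCalculator in capacity-scheduler.xml is the usual way to make the UI count cores as well as memory.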

Spark dataframe transform multiple rows to column

I am new to Spark, and I want to transform the source dataframe below (loaded from a JSON file): +--+-----+-----+ |A |count|major| +--+-----+-----+ | a| 1| m
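
For this rows-to-columns shape, pivot() is usually the tool; a minimal sketch with toy data matching the excerpt's columns:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pivot-demo").getOrCreate()

    # Toy rows with the same columns as the question's dataframe
    df = spark.createDataFrame(
        [("a", 1, "m"), ("a", 2, "n"), ("b", 3, "m")],
        ["A", "count", "major"],
    )

    # pivot() turns each distinct value of "major" into its own column
    df.groupBy("A").pivot("major").agg(F.first("count")).show()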

Spark on Windows - java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0

In Win10, in IntelliJ, this path ("C:/hive/Orders_[0-9]*.csv") works fine when run as a standalone Java Spark job, but not when run as a Spring Boot Spark job. Seems
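
This UnsatisfiedLinkError usually means Hadoop's Windows native binaries (winutils.exe, hadoop.dll) are missing or don't match the Hadoop version on the classpath. A hedged sketch of the usual workaround, assuming the binaries are placed under C:\hadoop\bin:

    import os
    from pyspark.sql import SparkSession

    # HADOOP_HOME must be set before the JVM starts; the Java equivalent is
    # System.setProperty("hadoop.home.dir", "C:\\hadoop") early in main().
    os.environ["HADOOP_HOME"] = "C:\\hadoop"

    spark = (SparkSession.builder.master("local[*]")
             .appName("windows-native-io").getOrCreate())
    df = spark.read.csv("C:/hive/Orders_[0-9]*.csv", header=True)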

Spark 3.2.1 cache throws NullPointerException

A job that runs for about 1 day throws this exception since I upgraded the Spark version to 3.2.1. I set it up with a driver and 2 executors; each executor is allocated 2g of memory

Increasing Spark application timeout in Jupyter/Livy

I'm using a shared EMR cluster with JupyterHub installed. If my cluster is under heavy load, I get an error. How do I increase the timeout for a Spark applicati
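
Two knobs are typically involved here, sketched below under the assumption that JupyterHub talks to Spark through Livy/sparkmagic (key names per those projects' docs; values are placeholders):

    # /etc/livy/conf/livy.conf on the EMR master node (restart Livy after):
    livy.server.session.timeout = 5h

    # ~/.sparkmagic/config.json on the notebook side (startup wait, seconds):
    #   "livy_session_startup_timeout_seconds": 240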

org.apache.hadoop.hbase.io.ImmutableBytesWritable exception in HBase

We tried to test the following example code for accessing HBase tables (Spark-1.3.1, HBase-1.1.1, Hadoop-2.7.0): import sys from pyspark import SparkContext
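
For Spark 1.x, the stock approach is to read through newAPIHadoopRDD with the Python converter classes that ship in Spark's examples jar (which must be on the classpath, e.g. via --jars); a minimal sketch with placeholder host/table names:

    from pyspark import SparkContext

    sc = SparkContext(appName="HBaseInputFormat")

    conf = {"hbase.zookeeper.quorum": "localhost",
            "hbase.mapreduce.inputtable": "test"}

    rdd = sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter="org.apache.spark.examples.pythonconverters."
                     "ImmutableBytesWritableToStringConverter",
        valueConverter="org.apache.spark.examples.pythonconverters."
                       "HBaseResultToStringConverter",
        conf=conf,
    )
    print(rdd.take(1))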

How to stream data from mongodb in Structured Streaming?

Is it possible to use Spark Structured Streaming to read data from MongoDB with a readStream? For standard use of Structured Streaming, I usually do so: va
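
The MongoDB Spark Connector 10.x added readStream support on top of change streams; a hedged sketch assuming that connector version is on the classpath (option names may differ in other releases, and streaming reads need an explicit schema):

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType

    spark = SparkSession.builder.getOrCreate()

    schema = StructType([StructField("_id", StringType()),
                         StructField("value", StringType())])  # placeholder fields

    stream = (
        spark.readStream
        .format("mongodb")
        .option("spark.mongodb.connection.uri", "mongodb://host:27017")
        .option("spark.mongodb.database", "mydb")      # placeholder names
        .option("spark.mongodb.collection", "mycoll")
        .schema(schema)
        .load()
    )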

Access Apache Spark WebUI running in Vagrant

So I set up a Vagrant environment with Spark 1.5.0 installed. Then I use sbin/start-all.sh to start Spark. Inside the VM I can curl localhost:8080 to get the HTML co
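
The usual fix is to forward the UI port from the guest to the host in the Vagrantfile (assuming the master UI really is on 8080, as the curl test suggests):

    Vagrant.configure("2") do |config|
      config.vm.network "forwarded_port", guest: 8080, host: 8080
    end

If the UI is still unreachable, binding Spark to an externally visible interface via SPARK_MASTER_IP in conf/spark-env.sh (the 1.5-era variable) is the other common step.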

Convert df.apply to Spark to run in parallel using all the cores

We have a pandas dataframe that we are using. We have a function for retail data that runs daily, row by row, to calculate the item-to-item differe
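
One way to parallelize a pandas row-wise apply is a pandas_udf, which runs the pandas code per partition across all cores/executors; a minimal sketch (requires pyspark 3.x with pyarrow installed; item_diff is a stand-in for the real calculation):

    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.master("local[*]").getOrCreate()

    @F.pandas_udf(DoubleType())
    def item_diff(price: pd.Series, baseline: pd.Series) -> pd.Series:
        return price - baseline  # placeholder for the real row-by-row logic

    sdf = spark.createDataFrame(
        pd.DataFrame({"price": [1.0, 2.0], "baseline": [0.5, 1.5]}))
    sdf.withColumn("diff", item_diff("price", "baseline")).show()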

Pyspark-pandas not working on Spark 3.1.2

I am using Spark 3.1.2 and attempting to use pyspark-pandas. However, when attempting from pyspark import pandas as ps, I get the following error: ImportEr
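
pyspark.pandas only ships with Spark 3.2.0 and later, which explains the ImportError on 3.1.2. On 3.1.x the same API lives in the separate koalas package:

    # pip install koalas   (the pre-3.2 home of the pandas-on-Spark API)
    import databricks.koalas as ks

    kdf = ks.DataFrame({"a": [1, 2, 3]})

    # From Spark 3.2 onward the equivalent import is:
    #   import pyspark.pandas as ps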

Job aborted due to stage failure: ShuffleMapStage 20 (repartition at data_prep.scala:87) has failed the maximum allowable number of times: 4

I am submitting a Spark job with the following specification (the same program has been used to run data sizes ranging from 50GB to 400GB): /usr/hdp/2.6.0.3-8/
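
Stage-level retry exhaustion on a repartition shuffle is usually fought with executor memory overhead, shuffle partition count, and the stage attempt limit; a hedged sketch (values illustrative, and on Spark 2.2 and earlier the overhead key is spark.yarn.executor.memoryOverhead):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config("spark.executor.memoryOverhead", "4g")      # off-heap headroom for shuffle
        .config("spark.sql.shuffle.partitions", "2000")     # more, smaller shuffle blocks
        .config("spark.stage.maxConsecutiveAttempts", "8")  # lift the 4-attempt ceiling
        .getOrCreate()
    )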

Databricks display() function equivalent or alternative to Jupyter

I'm in the process of migrating current Databricks Spark notebooks to Jupyter notebooks. Databricks provides the convenient and beautiful display(data_frame) functi
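
There's no built-in equivalent in Jupyter, but rendering the first rows through pandas gets close; a small hypothetical helper:

    from IPython.display import display as ipy_display

    def display(df, n=10):
        """Hypothetical stand-in: render the first n rows as an HTML table."""
        ipy_display(df.limit(n).toPandas())

    # display(data_frame)  # usage mirroring the Databricks call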

How can you parse a string that is json from an existing temp table using PySpark?

I have an existing Spark dataframe with columns like this: -------------------- pid | response -------------------- 12 | {"status":"200"} response is a st
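
from_json with an explicit schema is the usual answer; a minimal sketch reproducing the excerpt's two columns:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(12, '{"status":"200"}')], ["pid", "response"])

    schema = StructType([StructField("status", StringType())])

    # Parse the JSON string into a struct, then pull fields out of it
    parsed = df.withColumn("resp", F.from_json("response", schema))
    parsed.select("pid", "resp.status").show()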

Where to set the S3 configuration in Spark locally?

I've set up a Docker container that starts a Jupyter notebook using Spark. I've integrated the necessary jars into Spark's directory to be able to access
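
With the hadoop-aws jars in place, the S3A keys can be set straight on the session builder; a sketch with placeholder credentials and bucket:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
        .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
        .config("spark.hadoop.fs.s3a.endpoint", "s3.amazonaws.com")
        .getOrCreate()
    )

    df = spark.read.parquet("s3a://your-bucket/path/")  # placeholder bucket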

SparkSQL error: collect_set() cannot have map type data

For SparkSQL on Hive, when I use named_struct in the query, it returns results: SELECT id, collect_set(emp_info) as employee_info FROM ( SELECT t.id,
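
collect_set() gathers values into a hash set, and Spark SQL map values are neither hashable nor orderable, hence the error for map columns; structs are, which is why the named_struct variant works. A sketch against a hypothetical employees table (spark is an active SparkSession):

    result = spark.sql("""
        SELECT id,
               collect_set(named_struct('name', name, 'salary', salary))
                 AS employee_info
        FROM employees            -- hypothetical table/columns
        GROUP BY id
    """)

Serializing a genuine map column with to_json before collecting is the other common workaround.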

How to split a list to multiple columns in Pyspark?

I have: key value a [1,2,3] b [2,3,4] I want: key value1 value2 value3 a 1 2 3 b 2 3 4 It seems that in Scala I can wr
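
The same array indexing works from PySpark via getItem(); a minimal sketch with the excerpt's data:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("a", [1, 2, 3]), ("b", [2, 3, 4])],
                               ["key", "value"])

    df.select(
        "key",
        F.col("value").getItem(0).alias("value1"),
        F.col("value").getItem(1).alias("value2"),
        F.col("value").getItem(2).alias("value3"),
    ).show()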

Save Spark dataframe as dynamic partitioned table in Hive

I have a sample application working that reads from CSV files into a dataframe. The dataframe can be stored to a Hive table in Parquet format using the method df.
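
A hedged sketch of the usual dynamic-partition write; the table and partition columns are placeholders, and the two Hive settings are what lift the static-partition restriction:

    spark.conf.set("hive.exec.dynamic.partition", "true")
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

    (df.write
       .mode("append")
       .format("parquet")
       .partitionBy("year", "month")   # placeholder partition columns
       .saveAsTable("mydb.sales"))     # placeholder table name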

Run spark program locally with intellij

I tried to run simple test code in IntelliJ IDEA. Here is my code: import org.apache.spark.sql.functions._ import org.apache.spark.{SparkConf} import org.apa
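
The detail that usually makes IDE runs work is master("local[*]"), so no cluster is needed; sketched here in PySpark, though the same builder call exists on Scala's SparkSession:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("local[*]")          # run in-process on all local cores
        .appName("intellij-local-test")
        .getOrCreate()
    )
    spark.range(5).show()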

Airflow/Luigi for AWS EMR automatic cluster creation and pyspark deployment

I am new to Airflow automation. I don't know if it is possible to do this with Apache Airflow (or Luigi etc.), or whether I should just write a long bash file to do this. I
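
This is a standard Airflow pattern; a hedged sketch assuming a recent Airflow 2.x with the Amazon provider installed (the job-flow and step dicts are placeholders to be filled with your EMR cluster and spark-submit settings):

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.amazon.aws.operators.emr import (
        EmrAddStepsOperator,
        EmrCreateJobFlowOperator,
        EmrTerminateJobFlowOperator,
    )

    JOB_FLOW_OVERRIDES = {"Name": "pyspark-cluster"}  # placeholder cluster spec
    SPARK_STEPS = []                                  # placeholder spark-submit steps

    with DAG("emr_pyspark", start_date=datetime(2024, 1, 1), schedule=None) as dag:
        create = EmrCreateJobFlowOperator(
            task_id="create_cluster", job_flow_overrides=JOB_FLOW_OVERRIDES)
        add_steps = EmrAddStepsOperator(
            task_id="add_steps", job_flow_id=create.output, steps=SPARK_STEPS)
        terminate = EmrTerminateJobFlowOperator(
            task_id="terminate", job_flow_id=create.output)
        create >> add_steps >> terminate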