Spark saveAsTextFile() results in Mkdirs failed to create for half of the directory
I am currently running a Java Spark application in Tomcat and receiving the following exception:
Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603031703_0001_m_000000_5
on the line
text.saveAsTextFile("/opt/folder/tmp/file.json") //where text is a JavaRDD<String>
The issue is that /opt/folder/tmp/ already exists, and the job successfully creates everything up to /opt/folder/tmp/file.json/_temporary/0/, but then runs into what looks like a permission issue on the remaining part of the path, _temporary/attempt_201603031703_0001_m_000000_5. I have already given the tomcat user ownership and permissions on the tmp/ directory (chown -R tomcat:tomcat tmp/ and chmod -R 755 tmp/). Does anyone know what could be happening?
Thanks
Edit for @javadba:
[root@ip tmp]# ls -lrta
total 12
drwxr-xr-x 4 tomcat tomcat 4096 Mar 3 16:44 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 file.json
drwxrwxrwx 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip tmp]# cd file.json/
[root@ip file.json]# ls -lrta
total 12
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 _temporary
drwxrwxrwx 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip file.json]# cd _temporary/
[root@ip _temporary]# ls -lrta
total 12
drwxr-xr-x 2 tomcat tomcat 4096 Mar 7 20:01 0
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip _temporary]# cd 0/
[root@ip 0]# ls -lrta
total 8
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 2 tomcat tomcat 4096 Mar 7 20:01 .
The exception in catalina.out:
Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603072001_0001_m_000000_5
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:438)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:799)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1193)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Solution 1:[1]
saveAsTextFile is actually processed by the Spark executors. Depending on your Spark setup, the executors may run as a different user than your Spark application driver. I guess the driver prepares the directory for the job fine, but then the executors, running as a different user, have no rights to write in that directory.
Changing to 777 won't help, because permissions are not inherited by child directories: any subdirectory the executors create is still governed by their umask, so you'd end up with 755 anyway.
Try running your Spark application as the same user that runs your Spark executors.
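A quick way to check this is to compare the OS user on the driver with the user the executor tasks run as. A minimal PySpark sketch (the session setup here is an assumption; getpass is standard library):

import getpass
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("whoami-check").getOrCreate()
sc = spark.sparkContext

# OS user running the driver
print("driver user:", getpass.getuser())

# Distinct OS users the executor tasks run as
executor_users = (sc.parallelize(range(sc.defaultParallelism))
                  .map(lambda _: getpass.getuser())
                  .distinct()
                  .collect())
print("executor users:", executor_users)

If the two differ, the executor user is the one that needs write access to the output directory.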
Solution 2:[2]
I suggest temporarily changing the permissions to 777 and seeing whether it works at that point. There have been bugs/issues with permissions on the local file system. If it still does not work, let us know whether anything changed or the result is precisely the same.
Solution 3:[3]
I also had the same problem, and my issue was resolved by using the full HDFS path:
Error
Caused by: java.io.IOException: Mkdirs failed to create file:/QA/Gajendra/SparkAutomation/Source/_temporary/0/_temporary/attempt_20180616221100_0002_m_000000_0 (exists=false, cwd=file:/home/gajendra/LiClipse Workspace/SpakAggAutomation)
Solution
Use the full HDFS path, hdfs://localhost:54310/<filePath>:
hdfs://localhost:54310/QA/Gajendra/SparkAutomation
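For example, in PySpark (the NameNode address localhost:54310 is taken from this answer, and the RDD name text follows the question; adjust both to your setup):

# Write to a fully qualified HDFS URI instead of an ambiguous local path
text.saveAsTextFile("hdfs://localhost:54310/QA/Gajendra/SparkAutomation")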
Solution 4:[4]
Could it be that SELinux or AppArmor is playing a trick on you? Check with ls -Z and the system logs.
Solution 5:[5]
So, I've been experiencing the same issue. In my setup there is no HDFS and Spark is running in standalone mode. I haven't been able to save Spark dataframes to an NFS share using the native Spark writers. The process runs as a local user and tries to write to the user's home folder; even when creating a subfolder with 777 permissions, I cannot write to it.
The workaround is to convert the dataframe with toPandas() and then call to_csv(). This magically works.
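A minimal sketch of that workaround (df and the output path are assumptions; note that toPandas() collects the whole dataframe into driver memory, so it only suits data that fits on one machine):

# Pull the distributed dataframe onto the driver as a pandas DataFrame,
# then write it with pandas instead of Spark's Hadoop-based writer
local_df = df.toPandas()
local_df.to_csv("/home/user/nfs_share/output.csv", index=False)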
Solution 6:[6]
I have the same issue as yours. I also did not want to write to HDFS but to a local shared directory.
After some research, I found that in my case the reason was that several nodes execute the job, but some of those nodes have no access to the directory where you want to write your data.
The solution is therefore to make the directory available to all nodes, and then it works.
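A sketch for verifying that, assuming an existing SparkContext sc: it asks the executors whether they can see and write to the target directory (the path is the one from the question; a busy cluster may not schedule probe tasks on every node, so treat the result as indicative):

target = "/opt/folder/tmp"  # the output directory to verify

def probe(_):
    # Runs on an executor: report hostname and whether the path is writable there
    import os, socket
    return (socket.gethostname(), os.path.isdir(target) and os.access(target, os.W_OK))

results = (sc.parallelize(range(sc.defaultParallelism * 4))
           .map(probe)
           .distinct()
           .collect())
print(results)  # any (host, False) entry points at a node missing the mount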
Solution 7:[7]
We need to run the application in local mode.
val spark = SparkSession
  .builder()
  .config("spark.master", "local")
  .appName("applicationName")
  .getOrCreate()
Solution 8:[8]
Giving the full path works for me. Example:
file:/Users/yourname/Documents/electric-chargepoint-2017-data
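In PySpark that call would look like this (text stands for the RDD being saved):

# An explicit file: URI pins the write to the local filesystem
text.saveAsTextFile("file:/Users/yourname/Documents/electric-chargepoint-2017-data")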
Solution 9:[9]
This is a tricky one, but simple to solve. You must configure the job.local.dir variable to point to the working directory. The following code works fine for writing a CSV file:
import time

from pyspark.sql import SparkSession


def xmlConvert(spark):
    etl_time = time.time()
    # Read the raw XML, then pivot tag values into one column per TagName
    df = spark.read.format('com.databricks.spark.xml').options(rowTag='HistoricalTextData').load(
        '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/dataset/train/')
    df = df.withColumn("TimeStamp", df["TimeStamp"].cast("timestamp")).groupBy("TimeStamp").pivot("TagName").sum(
        "TagValue").na.fill(0)
    # Coalesce to a single output file and write it as CSV
    df.repartition(1).write.csv(
        path="/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/result/",
        mode="overwrite",
        header=True,
        sep=",")
    print("Time taken to do xml transformation: --- %s seconds ---" % (time.time() - etl_time))


if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName('XML ETL') \
        .master("local[*]") \
        .config('job.local.dir', '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance') \
        .config('spark.driver.memory', '64g') \
        .config('spark.debug.maxToStringFields', '200') \
        .config('spark.jars.packages', 'com.databricks:spark-xml_2.11:0.5.0') \
        .getOrCreate()
    print('Session created')
    try:
        xmlConvert(spark)
    finally:
        spark.stop()
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow