pyspark wordcount sort by value

I'm learning PySpark and trying the code below. Can someone help me understand what's wrong?

>>> pairs=data.flatMap(lambda x:x.split(' ')).map(lambda x:(x,1)).reduceByKey(lambda a,b: a+ b)
>>> pairs.collect()
[(u'one', 1), (u'ball', 4), (u'apple', 4), (u'two', 4), (u'three', 1)]

pairs=data.flatMap(lambda x:x.split(' ')).map(lambda x:(x,1)).reduceByKey(lambda a,b: a+ b).map(lambda a,b: (b,a)).sortByKey()

I'm trying to sort by value, but the code above gives me this error:

19/09/25 08:55:07 WARN TaskSetManager: Lost task 1.0 in stage 36.0 (TID 67, dbsld0107.uhc.com, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/mapr/tmp/hadoop-mapr/nm-local-dir/usercache/avenkat9/appcache/application_1565204780647_2728325/container_e07_1565204780647_2728325_01_000003/pyspark.zip/pyspark/worker.py", line 177, in main
    process()
  File "/opt/mapr/tmp/hadoop-mapr/nm-local-dir/usercache/avenkat9/appcache/application_1565204780647_2728325/container_e07_1565204780647_2728325_01_000003/pyspark.zip/pyspark/worker.py", line 172, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 346, in func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 1041, in <lambda>
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 1041, in <genexpr>
TypeError: <lambda>() takes exactly 2 arguments (1 given)
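The root cause: `map` passes each RDD element to the function as a *single* argument, so each `(word, count)` tuple arrives as one value. `lambda a,b: (b,a)` declares two parameters, and Python 3 no longer auto-unpacks tuples into lambda parameters, hence "takes exactly 2 arguments (1 given)". A minimal plain-Python illustration of the same mistake (no Spark needed):

```python
pair = ('apple', 4)  # one (word, count) element as map would pass it

# Broken: declares two parameters, but map supplies only one tuple.
swap_wrong = lambda a, b: (b, a)
try:
    swap_wrong(pair)
except TypeError as e:
    print(type(e).__name__)  # → TypeError

# Fixed: accept the tuple as one argument and index into it.
swap_right = lambda a: (a[1], a[0])
print(swap_right(pair))  # → (4, 'apple')
```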


Solution 1:[1]

I think you are trying to sort by value. Try this:

data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).sortBy(lambda a: a[1]).collect()

If you want your original code fixed, take the tuple as a single argument instead:

data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).map(lambda a: (a[1], a[0])).sortByKey().collect()
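The same split → count → sort-by-value logic can be sanity-checked locally with plain Python before running it on a cluster. A minimal sketch, using hypothetical sample lines chosen to reproduce the counts shown in the question:

```python
from collections import Counter

# Hypothetical input lines (assumed; chosen to match the question's counts).
lines = ["apple ball two apple",
         "ball two apple ball",
         "one two ball apple three two"]

# Mirrors flatMap(split) + map((x, 1)) + reduceByKey(add):
words = [w for line in lines for w in line.split(' ')]
counts = Counter(words)

# Mirrors sortBy(lambda a: a[1]) — ascending by count:
by_value = sorted(counts.items(), key=lambda a: a[1])
print(by_value)  # → [('one', 1), ('three', 1), ('apple', 4), ('ball', 4), ('two', 4)]
```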

Solution 2:[2]

words =  rdd2.flatMap(lambda line: line.split(" "))
counter = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)

print(counter.sortBy(lambda a: a[1], ascending=False).take(10))

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Newbie_Bigdata
Solution 2 Aramis NSR