Running shell script in parallel

I have a shell script which

  1. shuffles a large text file (6 million rows and 6 columns)
  2. sorts the file based on the first column
  3. outputs 1000 files

So the pseudocode looks like this

file1.sh 

#!/bin/bash
for i in $(seq 1 1000)
do

  Generating random numbers here, sorting, and outputting to file$i.txt

done
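
Each iteration essentially does something along these lines (simplified; shuf, sort and the bigfile.txt name are only stand-ins for the real commands and file):

shuf bigfile.txt | sort -k1,1 > "file$i.txt"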

Is there a way to run this shell script in parallel to make full use of multi-core CPUs?

At the moment, ./file1.sh executes runs 1 to 1000 sequentially, and it is very slow.

Thanks for your help.



Solution 1:[1]

Check out bash subshells; these can be used to run parts of a script in parallel.

I haven't tested this, but this could be a start:

#!/bin/bash
for i in $(seq 1 1000)
do
   ( Generating random numbers here, sorting, and outputting to file$i.txt ) &
   if (( $i % 10 == 0 )); then wait; fi # Limit to 10 concurrent subshells.
done
wait
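
If you would rather size the batches to the machine than hard-code 10, you could set N=$(nproc) near the top and test (( i % N == 0 )) instead.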

Solution 2:[2]

Another very handy way to do this is with GNU parallel, which is well worth installing if you don't already have it; it is invaluable when the tasks don't all take the same amount of time.

seq 1000 | parallel -j 8 --workdir $PWD ./myrun {}

will launch ./myrun 1, ./myrun 2, etc., making sure 8 jobs are running at a time. It can also take lists of nodes if you want to run on several nodes at once, e.g. in a PBS job; our instructions to our users for how to do that on our system are here.

Updated to add: You want to make sure you're using gnu-parallel, not the more limited utility of the same name that comes in the moreutils package (the divergent history of the two is described here.)
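
Applied to the loop in the question, one sketch (dojob, input.txt and the shuf/sort pipeline are illustrative; substitute the real per-file work) is to export a small bash function and hand parallel the sequence of file numbers:

dojob() {
    shuf input.txt | sort -k1,1 > "file$1.txt"   # stand-in for the real per-file work
}
export -f dojob
seq 1000 | parallel -j "$(nproc)" dojob {}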

Solution 3:[3]

To make things run in parallel, append '&' to a shell command to run it in the background; wait (with no arguments) then waits until all background processes are finished. So you could kick off 10 in parallel, then wait, then do another ten. You can do this easily with two nested loops.
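
A minimal sketch of that nested-loop idea (the subshell body is a placeholder for whatever each iteration really does):

#!/bin/bash
for batch in $(seq 0 10 990)       # 100 batches of 10 jobs each
do
    for offset in $(seq 1 10)
    do
        i=$((batch + offset))
        ( shuf input.txt | sort -k1,1 > "file$i.txt" ) &   # placeholder for the real work
    done
    wait    # let this batch of 10 finish before starting the next
done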

Solution 4:[4]

The documentation for GNU parallel includes a whole list of programs that can run jobs in parallel from a shell, and even compares them. There are many, many solutions out there. The good news is that they are probably quite efficient at scheduling jobs, so that all the cores/processors are kept busy at all times.

Solution 5:[5]

There is a simple, portable program that does just this for you: PPSS. PPSS automatically schedules jobs for you, by checking how many cores are available and launching another job whenever one finishes.

Solution 6:[6]
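
This keeps at most NCPU - IDLE_CPU jobs from the cmds array running at once (leaving one core free), pipes all of their output through a display command so it can be serialized, forwards Ctrl-C to every background job, and falls back to running the commands sequentially if only one usable core is available: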

IDLE_CPU=1
NCPU=$(nproc)

int_childs() {
    trap - INT
    while IFS=$'\n' read -r pid; do
        kill -s SIGINT -$pid
    done < <(jobs -p -r)
    kill -s SIGINT -$$
}

# cmds is an array that holds the commands to run
# the complex part is display, which handles all command output
# and serializes it correctly

trap int_childs INT
{
    exec 2>&1
    set -m

    if [ $NCPU -gt $IDLE_CPU ]; then
        for cmd in "${cmds[@]}"; do
            $cmd &
            while [ $(jobs -pr |wc -l) -ge $((NCPU - IDLE_CPU)) ]; do
                wait -n
            done
        done
        wait

    else
        for cmd in "${cmds[@]}"; do
            $cmd
        done
    fi
} | display
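
For this to run, the cmds array and the display command have to be defined before the block above; a minimal, hypothetical setup might be:

cmds=()
for i in $(seq 1 1000); do
    cmds+=("sort -k1,1 -o file$i.txt input$i.txt")   # whatever each job should do
done

display() {
    cat    # simplest possible display: just pass the serialized output through
}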

Solution 7:[7]

You might want to take a look at runp. runp is a simple command-line tool that runs (shell) commands in parallel. It's useful when you want to run multiple commands at once to save time. It's easy to install since it's a single binary. It's been tested on Linux (amd64 and arm) and macOS/Darwin (amd64).
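
Assuming runp reads commands from standard input (one per line), the 1000 jobs from the question could be queued up roughly like this (the sort command is only a stand-in for the real work):

for i in $(seq 1 1000); do
    echo "sort -k1,1 -o file$i.txt input.txt"
done | runp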

Solution 8:[8]

While the previous answers do work, IMO they can be hard to remember (except of course GNU parallel).

I am somewhat partial to a similar approach to the above (( $i % 10 == 0 )) && wait. I have also seen this written as ((i=i%N)); ((i++==0)) && wait

where: N is defined as the number of jobs that you want to run in parallel and i is the current job.
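
For context, that idiom usually sits at the top of the loop, roughly like this (task is a placeholder for the real command):

N=8
i=0
for thing in $(seq 1 1000); do
    ((i=i%N)); ((i++==0)) && wait   # every N-th start, wait for the previous batch
    task "$thing" &                 # placeholder for the real command
done
wait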

While the above approach works, it has diminishing returns: you have to wait for all processes in a batch to finish before starting a new one, so CPU time is wasted whenever the tasks do not all finish at the same moment (which in practice is every time). In other words, with the previously described approach the number of running tasks must drop to 0 before any new tasks are started.

For me, this issue became apparent when executing a task with an inconsistent execution time (e.g. a request to purge user information from a database: the requestee might or might not exist, and if they do exist, the number of associated records can differ by orders of magnitude). What I noticed was that some requests were fulfilled immediately, while others sat queued behind a single longer-running task. With the previously defined approach the whole job would take hours or days to complete; with the approach below it took only tens of minutes.

I think the approach below is a better solution for maintaining a constant task load on systems without GNU parallel (e.g. vanilla macOS), and it is hopefully easier to remember than the alphabet soup above:

WORKER_LIMIT=6 # or whatever - remember to not bog down your system

while read -r LINE; do # this could be any kind of loop
    # there's probably a more elegant approach to getting the number of background processes.
    BACKGROUND_PROCESSES="$(jobs -r | wc -l | grep -Eow '[0-9]+')"

    if [[ $BACKGROUND_PROCESSES -eq $WORKER_LIMIT ]]; then
        # wait for 1 job to finish before starting a new one
        wait -n 
    fi

    # run something in a background shell
    python example.py -item "$LINE" &
done < something.list

# wait for all background jobs to finish
wait

Solution 9:[9]

Generating random numbers is easy. Suppose you have a huge file, such as a shop database, and you want to rewrite it on some specific basis. My idea was to work out the number of cores, split the file into that many parts, and create a script.cfg file (the script that actually changes things in the huge file) plus split.sh and recombine.sh. split.sh splits the file into as many parts as there are cores, clones script.cfg once per core, makes the clones executable, does a search-and-replace inside each clone for the variables that tell it which part of the file to process, and runs them all in the background. When a clone finishes it creates a clone$core.ok file, so recombine.sh loops until every .ok file exists and only then recombines the partial results into a single file. It could be done with wait, but I fancy my way.

http://www.linux-romania.com/product.php?id_product=76 (look at the bottom; it is partially translated into English). This way I can process 20,000 articles with 16 columns in 2 minutes (quad core) instead of 8 (single core). You have to keep an eye on CPU temperature, because all cores run at 100%.
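
A rough sketch of that split/process/recombine idea, using wait instead of the .ok marker files (bigfile.txt and the sort command are stand-ins for the real data and script):

#!/bin/bash
NCORES=$(nproc)

split -n l/"$NCORES" bigfile.txt chunk_        # one chunk per core, without splitting lines

for chunk in chunk_*; do
    ( sort -k1,1 "$chunk" > "$chunk.out" ) &   # placeholder for the real per-chunk script
done
wait                                           # all chunks are done

cat chunk_*.out > result.txt                   # recombine the partial results
rm -f chunk_*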

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2
Solution 3 Tony Delroy
Solution 4 Eric O Lebigot
Solution 5 Andiamo Va
Solution 6 Zakaria
Solution 7 jreisinger
Solution 8
Solution 9 Bash Coder