Understanding the Big O for squaring elements in an array

I was working on a LeetCode problem where you have to square the numbers in a sorted array. Here is the original problem:

Given an array of integers A sorted in non-decreasing order, return an array of the squares of each number, also in sorted non-decreasing order.

I am trying to understand the big O for my code and for the code that was given in the solution.

This is my code:

def sortedSquare(A):
    new_A = []
    for num in A:
        num = num * num        # square each element
        new_A.append(num)
    return sorted(new_A)       # sort the squared values


print(sortedSquare([-4, -1, 0, 3, 10]))

Here is the code from the solution (a LeetCode class method, which is why it takes self):

def sortedSquares(self, A):
    return sorted(x*x for x in A)

For the solution, the stated Big O is

O(N log N)

where N is the length of the array. I don't understand why it would be N log N and not just N.

For my solution, I see it as O(N) because I am just iterating through the entire array.

Also, is my solution a good one compared to the solution that was given?



Solution 1:[1]

Your solution does exactly the same thing as the given solution: both square all the elements and then sort the resulting array, with the LeetCode solution just being a bit more concise.

The reason both of these solutions are O(N log N) is the use of sorted(). Python's built-in sort is Timsort, which sorts the array in O(N log N) time. The call to sorted(), not the squaring, provides the dominant factor in the time complexity: O(N log N) + O(N) = O(N log N).

Note, though, that this problem can be solved in O(N) using two pointers (a sketch follows) or by using the merge step from mergesort.
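Here is a minimal two-pointer sketch (the function name is my own, not from the LeetCode editorial). Because the input is sorted, the largest square must come from one of the two ends of the array, so the result can be filled from the back:

def sorted_squares_two_pointers(A):
    n = len(A)
    result = [0] * n
    lo, hi = 0, n - 1
    # Fill the result from largest square to smallest.
    for i in range(n - 1, -1, -1):
        if abs(A[lo]) > abs(A[hi]):
            result[i] = A[lo] * A[lo]
            lo += 1
        else:
            result[i] = A[hi] * A[hi]
            hi -= 1
    return result

print(sorted_squares_two_pointers([-4, -1, 0, 3, 10]))  # [0, 1, 9, 16, 100]

Each element is visited exactly once, so this runs in O(N) time with O(N) space for the output.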

Edit:

David Eisenstat brought up a very good point about Timsort. Timsort scans for non-decreasing and strictly decreasing runs and merges them. The squared array consists of at most two such runs: a decreasing run coming from the negatives, followed by an increasing run from the non-negatives. Timsort will therefore reverse the decreasing run and then merge the two runs in O(N).
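This is easy to check empirically. Below is a small sketch (the Counted wrapper is my own helper, not a library class) that counts how many comparisons sorted() makes on the squared version of a sorted input versus a shuffled copy of the same values:

import random

class Counted:
    count = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):        # sorted() only needs __lt__
        Counted.count += 1
        return self.v < other.v

def comparisons(values):
    Counted.count = 0
    sorted(Counted(v) for v in values)
    return Counted.count

N = 10_000
# One strictly decreasing run, then one increasing run.
squared = [x * x for x in range(-N // 2, N // 2)]
shuffled = squared[:]
random.shuffle(shuffled)

print(comparisons(squared))   # roughly 2N: detect/reverse one run, then a single merge
print(comparisons(shuffled))  # on the order of N * log2(N)

On the two-run input the count stays near 2N, while the shuffled copy needs on the order of N log2 N comparisons.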

Solution 2:[2]

The way complexity works for parts that run one after the other is that the overall complexity of the whole program is the worst complexity of any one part. In your case, you have the part that squares the numbers and the part that sorts the numbers. So which part determines the overall complexity?

The squaring part is O(n) because you touch each element only once in order to square it.

What about the sorting part? Generally it depends on which sorting algorithm you use:

  • Most sort routines are O(n*log(n)) because they use a divide-and-conquer algorithm.
  • Some (like bubble sort) are O(n^2).
  • Some (like counting sort) are O(n), assuming the key range is not much larger than n (see the sketch after this list).
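For reference, here is a minimal counting-sort sketch (illustrative only; it assumes non-negative integer keys no larger than max_key). It runs in O(n + k), where k is the size of the key range, which is O(n) whenever k grows no faster than n:

def counting_sort(values, max_key):
    counts = [0] * (max_key + 1)
    for v in values:                      # O(n): tally each key
        counts[v] += 1
    out = []
    for key, c in enumerate(counts):      # O(k): emit keys in sorted order
        out.extend([key] * c)
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_key=9))
# [1, 1, 2, 3, 4, 5, 6, 9]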

In your case, they say that the given solution is O(n*log(n)), and since the squaring part is O(n), the sorting part must be what makes it O(n*log(n)). And since your code uses the same sorting function as the given solution, your sort must also be O(n*log(n)).

So your squaring part is O(n), your sorting part is O(n*log(n)), and the overall complexity is the worst of those two: O(n*log(n)).

Solution 3:[3]

If extra storage space is allowed (as in your solution), the whole process can be performed in O(N) time. The initial array is already sorted, so you can split it into two subsequences: the negative values and the non-negative values.

Square all the elements (O(N)) and reverse the negative subsequence (O(N) at worst), so that both sequences are sorted in increasing order. If one of the subsequences is empty, you are done.

Otherwise, merge the two sequences in O(N) time (this is the step that uses the extra O(N) space); a sketch follows.
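Here is a short sketch of that split-square-merge idea (the function name is my own). heapq.merge performs the linear-time merge of the two sorted sequences:

from heapq import merge

def sorted_squares_merge(A):
    # Index of the first non-negative value (len(A) if all values are negative).
    split = next((i for i, x in enumerate(A) if x >= 0), len(A))
    neg = [x * x for x in reversed(A[:split])]   # squared negatives, ascending
    pos = [x * x for x in A[split:]]             # squared non-negatives, ascending
    return list(merge(neg, pos))                 # O(N) merge of two sorted lists

print(sorted_squares_merge([-4, -1, 0, 3, 10]))  # [0, 1, 9, 16, 100]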

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2: Jerry Jeremiah
Solution 3: Yves Daoust