Computational Complexity of Higher Order Functions?

Map and filter seem like they would be linear O(n) because they only have to traverse a list once, but is their complexity affected by the function being passed? For example, are the two examples below of the same order?

map (+1) list

map complex_function list


Solution 1:[1]

In virtually all cases, when the documentation of a higher-order function states its complexity is O(f(n)), it is assuming that the function argument passed to it runs in constant time O(1). Further, the exact meaning of n can vary, but when not explicitly stated it should be clear from the context: e.g. the length of a list, the number of elements/associations in a set/map, and so on.

Assume we have a higher-order function g called as g h ..., where h is a function and the ... are first-order data. Without any other information about g, if the documentation states it is O(f(n)), you can get a more realistic worst-case bound of O(f(n)) * CostH, where CostH represents the cost of calling h once.
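To make the combined bound concrete, here is a small illustration (the names slowDouble and doubled are hypothetical, not from the original answer): mapping a function that itself does O(m) work over a list of length n costs O(n * m) overall, even though map traverses the list only once.

```haskell
-- Hypothetical example: 'slowDouble m' does O(m) work per call,
-- so 'map slowDouble xs' costs O(n * m) for a list of length n,
-- not O(n).
slowDouble :: Int -> Int
slowDouble m = sum (replicate m 2)  -- a deliberately O(m) way to compute 2 * m

doubled :: [Int] -> [Int]
doubled = map slowDouble
```

Here map itself still performs only n applications; the extra factor comes entirely from the cost of each application.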

Of course, CostH will also depend on which arguments are being passed to h: here, all we know is that those arguments are built in O(f(n)) time. It's hard to get a useful general bound on the size of the arguments for h since, for instance, if h takes trees as input, then you can build a very large tree in a short time:

data Tree a = Tree [Tree a]

foo :: Int -> Tree ()
foo 0 = Tree []
foo m = Tree [t, t] where t = foo (m - 1)

The above builds a tree having 2^m leaves in O(m) time (thanks to sharing). This can be trivially adapted to 3^m, or b^m in general, while keeping O(b*m) complexity.
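That adaptation can be sketched as follows (fooB is a hypothetical name, using the same Tree type as above):

```haskell
data Tree a = Tree [Tree a]

-- 'fooB b m' builds a tree with b^m leaves in O(b * m) time,
-- because the single subtree 't' is shared b times at each level.
fooB :: Int -> Int -> Tree ()
fooB _ 0 = Tree []
fooB b m = Tree (replicate b t) where t = fooB b (m - 1)
```

Note that while construction is cheap thanks to sharing, any function that actually traverses all b^m leaves will still take exponential time.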

However, there might be some way to exploit parametricity to recover a more useful bound. For instance, if our higher-order function has the type

g :: (a -> Int) -> [a] -> Int

then we know that g h [...] can only call h on arguments taken from the list. Similarly, a call to the library function sortBy h [...] can only use h to compare elements of the supplied list.
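As an illustrative sketch (not part of the original answer): any g with this type can, by parametricity, obtain Ints from values of type a only by applying h to elements of the list, so if h is O(1) and g is documented as O(n), the combined bound really is O(n).

```haskell
-- One possible 'g' with the type above. It has no other way to
-- produce Ints than by applying 'h' to elements of the input list,
-- so its cost is O(n) times the cost of one call to 'h'.
g :: (a -> Int) -> [a] -> Int
g h = sum . map h
```

For example, g length applied to a list of strings calls length exactly once per element.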

However, I have no idea how to formalize and prove the claim sketched above. It's quite possible that there are research papers on the topic.

Solution 2:[2]

You seem to have a (common) misunderstanding that complexity is a function of n.

What is n?

n is just one parameter that measures something about your input. It might not be a sufficient (or even necessary) statistic for describing your input -- you might need other variables to accurately describe the complexity of running your algorithm on that input.

So, map and filter are linear in n. Their dependence on other variables depends on the function you pass, but their dependence on n generally doesn't.
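For example (an illustrative sketch with a hypothetical name, longWords): filter is linear in the list length n, but the total cost also depends on a second variable, here the lengths of the strings being tested.

```haskell
-- 'filter' calls the predicate exactly once per element (linear in n),
-- but each call here costs O(length w), where the string length is a
-- separate parameter of the input, independent of n.
longWords :: Int -> [String] -> [String]
longWords k = filter (\w -> length w > k)
```

The total cost is O(n * m), where m bounds the string lengths; the factor of n comes from filter, the factor of m from the function we passed it.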

(Footnote: yes, you can pass a function to map and filter that actually performs more work as it processes more elements, but that's uninteresting and beside the point I'm trying to make here.)

Solution 3:[3]

For some background on complexity and higher-order functions, see, e.g.,

Hofmann, Martin. "An Application of Category-Theoretic Semantics to the Characterisation of Complexity Classes Using Higher-Order Function Algebras." Bull. Symbolic Logic 3 (1997), no. 4, 469-486. https://doi.org/10.2307/421100

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: chi
Solution 2:
Solution 3: Martin