Hill Climbing: non-global optimum with Manhattan Distance

Consider a shortest-path problem in which the shortest path from the green square to the red square must be found. I would like to solve it with the hill-climbing approach, using the Manhattan distance as my heuristic. I have already calculated some of these distances, as you can see. Moreover, there are walls that green cannot pass through.

In this scenario, the green agent would move to the square with a Manhattan distance of 3, and the algorithm would then terminate: no neighbouring square has a smaller heuristic value, so we do not arrive at the global optimum, i.e. the best possible solution. Now I am looking for a scenario where hill climbing, with the Manhattan distance as heuristic on the path-finding problem described, actually finds a complete path that is NOT globally optimal. I could not come up with any example, which I suspect cannot be right.
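The behaviour described above can be reproduced in a few lines. This is a minimal sketch, not the questioner's actual grid: the 5x5 layout, the wall positions, and the start/goal coordinates are my own assumptions. Hill climbing here means greedily moving to the neighbour with the smallest Manhattan distance to the goal and stopping as soon as no neighbour is strictly better.

```python
# Greedy hill climbing on a small grid, minimising the Manhattan distance
# to the goal.  The grid and wall layout are hypothetical: a vertical wall
# creates a dead-end pocket between start and goal.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hill_climb(start, goal, walls, rows=5, cols=5):
    """Repeatedly move to the strictly best neighbour; stop when none improves."""
    path = [start]
    current = start
    while current != goal:
        r, c = current
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and (r + dr, c + dc) not in walls]
        best = min(neighbours, key=lambda n: manhattan(n, goal))
        if manhattan(best, goal) >= manhattan(current, goal):
            break  # local optimum: no neighbour is strictly better, so we stop
        current = best
        path.append(current)
    return path

# Wall at column 2, rows 1-3, blocking the direct route from (2, 0) to (2, 4).
walls = {(1, 2), (2, 2), (3, 2)}
path = hill_climb((2, 0), (2, 4), walls)
print(path)  # [(2, 0), (2, 1)] -- stuck at Manhattan distance 3, goal not reached
```

The agent takes one step towards the wall and then stops: every open neighbour of `(2, 1)` has a larger Manhattan distance, exactly the dead-end situation from the question.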



Solution 1:[1]

Maybe I did not understand your problem, but my impression is:
Using the Manhattan distance as if there were no walls is wrong. The square below the green square does not have a Manhattan distance of 3 to the red square, but 9; and the squares left and right of the green square do not have a Manhattan distance of 5 to the red square, but 7.
The first step, thus, is to compute the Manhattan distances correctly. Afterwards, you could represent your path-finding problem as a weighted graph and use a standard shortest-path algorithm.
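The suggestion above can be sketched as follows. Since the original image is not available, the grid, wall positions, and goal coordinates below are illustrative assumptions. On a unit-cost grid, a breadth-first search from the goal is the standard shortest-path algorithm and yields the true, wall-respecting distance of every free cell:

```python
# Compute the *true* distance from every free cell to the goal with BFS,
# which respects walls -- unlike the raw Manhattan estimate.
from collections import deque

def true_distances(goal, walls, rows=5, cols=5):
    """Map each reachable cell to its actual shortest-path length to the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in walls and n not in dist):
                dist[n] = dist[(r, c)] + 1
                queue.append(n)
    return dist

# Hypothetical layout: vertical wall at column 2, rows 1-3; goal at (2, 4).
walls = {(1, 2), (2, 2), (3, 2)}
dist = true_distances((2, 4), walls)
print(dist[(2, 1)])  # 7 -- the Manhattan estimate of 3 ignores the wall
```

Running hill climbing on these corrected distances would never walk into the pocket, because the dead-end square no longer looks closer than it really is.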

EDIT:

Your problem is the combination of a heuristic that ignores obstacles and thus favours a dead end with an optimization strategy that only finds the next local optimum: the Manhattan distance (ignoring the obstacles) claims the minimum distance is 3, so hill climbing goes in that direction and is unable to turn back.
So either you use another heuristic that makes realistic claims, or you use an optimization strategy that temporarily accepts worse solutions in order to escape from a local optimum.
In the first case, you would have to find possible paths, which requires traversing your plane with some random component, and then select an acceptably good path.
In the second case, you would have to try worse solutions as new starting points for the optimization until an acceptably good local optimum is found.
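One simple, deterministic way to realize the second strategy is my own assumption, not something from the answer: keep a visited set and always move to the best unvisited neighbour, even when it is worse than the current square. The grid and wall layout are again hypothetical.

```python
# A hill climb that temporarily accepts worse moves: when no neighbour
# improves the heuristic, it still steps to the best *unvisited* neighbour,
# so it can back out of a dead end instead of getting stuck.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def escaping_climb(start, goal, walls, rows=5, cols=5, max_steps=100):
    current, path, visited = start, [start], {start}
    for _ in range(max_steps):
        if current == goal:
            break
        r, c = current
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and (r + dr, c + dc) not in walls
                      and (r + dr, c + dc) not in visited]
        if not neighbours:
            break  # every open neighbour was already tried
        current = min(neighbours, key=lambda n: manhattan(n, goal))  # may be worse!
        visited.add(current)
        path.append(current)
    return path

walls = {(1, 2), (2, 2), (3, 2)}
path = escaping_climb((2, 0), (2, 4), walls)
print(path[-1])  # (2, 4): the search backs out of the pocket and reaches the goal
```

After hitting the dead end at `(2, 1)`, the search accepts a worse square (Manhattan distance 4 instead of 3), walks around the wall, and reaches the goal; plain hill climbing stops at `(2, 1)`.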

EDIT 2:

I am sorry; I think only now do I understand your question.
You want to use the Manhattan distance, ignoring the obstacles, to direct hill climbing towards the locally best direction, and you want to find an example where this algorithm ends up in a local optimum.
Well, your example is exactly what you are looking for: starting at the green square, the heuristic suggests going down, but there the algorithm realizes it is stuck on a square whose actual path to the red square, considering the obstacles, has length 9.
This is the local optimum: the only possible move (considering the obstacles) is from the square with distance 3 back to the green square with distance 4, i.e. downhill. But hill climbing is not able to do that, and thus the algorithm is stuck.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
