Unable to connect to a URL with the Python requests module

I'm unable to connect to the URL using the requests module, but it works fine when opened in a browser. Could it be a robots.txt Allow/Disallow issue?

Below is the code.

import requests

r = requests.get('https://myntra.com')

print(r)


Solution 1:[1]

Some websites block access from non-browser ‘User-Agents’ to prevent web scraping, including the default ‘User-Agent’ sent by Python’s requests library.

So you need to pass a User-Agent header that looks like a web browser’s, for example:

r = requests.get('https://myntra.com/', headers={
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0",
})

The ‘User-Agent’ string contains information about which browser is being used, its version, and the operating system it runs on.
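If you make several requests to the same site, a requests.Session lets you set the User-Agent once so every request carries it. A minimal sketch that stays offline by preparing (rather than sending) a request, so you can inspect exactly which headers would be transmitted (the User-Agent value is illustrative):

```python
import requests

# Session-wide headers: every request made through this session will
# include the browser-like User-Agent below.
session = requests.Session()
session.headers.update({
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0",
})

# Prepare the request without sending it, to see the outgoing headers.
req = requests.Request("GET", "https://myntra.com/")
prepared = session.prepare_request(req)
print(prepared.headers["User-Agent"])
```

In real code you would call session.get('https://myntra.com/') instead of preparing the request by hand; preparing it here just makes the header visible without network access.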

Solution 2:[2]

The URL shown in the question requires that a User-Agent is passed with the GET request.

import requests
AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_5_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15'
headers = {'User-Agent': AGENT}
r = requests.get('https://myntra.com', headers=headers)
r.raise_for_status()
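raise_for_status() converts any 4xx/5xx response into a requests.HTTPError, so a block (such as a 403 when the User-Agent is missing) fails loudly instead of silently. An offline sketch, building a Response object by hand purely for demonstration (real code would get one back from requests.get):

```python
import requests

# Simulate the response a blocked scraper typically receives.
resp = requests.models.Response()
resp.status_code = 403  # Forbidden

try:
    resp.raise_for_status()  # raises requests.HTTPError for 4xx/5xx
    blocked = False
except requests.HTTPError:
    blocked = True

print("blocked:", blocked)
```

With a 2xx status, raise_for_status() returns normally and execution continues.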

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 eshirvana
Solution 2 Albert Winestein