AWS Session Manager can't connect unless the SSH port is open
I'm trying to use AWS Systems Manager Session Manager to connect to my EC2 instances.
These are private EC2 instances with no public IP, sitting in a private subnet of a VPC with Internet access through a NAT Gateway.
Network ACLs are fully open (both inbound and outbound), but there's no Security Group that allows SSH access into the instances.
I went through all the Session Manager prerequisites (SSM Agent, Amazon Linux 2 AMI). However, when I try to connect to an instance through the AWS Console, I get a red warning sign saying: "We weren’t able to connect to your instance. Common reasons for this include".
Then, if I add a Security Group that allows SSH access (inbound port 22) to the instance, wait a few seconds and repeat the same connection procedure, the red warning doesn't come up and I can connect to the instance.
Even though I know these instances are safe (they don't have public IPs and are located in a private subnet), opening the SSH port on them is not a requirement I would expect from Session Manager. In fact, the official documentation says that one of its benefits is: "No open inbound ports and no need to manage bastion hosts or SSH keys".
I searched for related posts but couldn't find anything specific. Any ideas what I might be missing?
Thanks!
Solution 1:[1]
Please make sure you are using the Session Manager console, not the EC2 console, to establish the session.
From my own experience, I know that sometimes the "Connect" option in the EC2 console does not work at first.
However, if you go to the AWS Systems Manager console and then to Session Manager, you will be able to use Start session to connect to your instance. This assumes that your SSM agent, IAM role and internet connectivity are configured correctly. If so, you should be able to see the SSM managed instances for which you can start a session.
Also, the Security Group should allow outbound connections. Inbound SSH is not needed if you set everything up correctly.
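If you want to check from the SDK side whether the instance has actually registered with SSM (the usual root cause when it doesn't show up in the Session Manager console), a minimal boto3 sketch like the one below can help. This is my addition rather than part of the original answer; the instance ID is a placeholder and default credentials/region are assumed.

```python
# Sketch: check whether an instance is registered as an SSM managed instance
# before trying to start a session. The instance ID is hypothetical.
import boto3

ssm = boto3.client("ssm")

resp = ssm.describe_instance_information(
    Filters=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}]
)

instances = resp.get("InstanceInformationList", [])
if instances:
    print("Registered with SSM, ping status:", instances[0]["PingStatus"])
else:
    print("Not registered with SSM: check the agent, the instance profile "
          "(e.g. AmazonSSMManagedInstanceCore) and outbound connectivity.")
```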
Solution 2:[2]
Despite what all the documentation says, you need to enable HTTPS inbound and it'll work.
Solution 3:[3]
Thanks for your response. I tried connecting using the Session Manager console instead of the EC2 console and it didn't work. Actually, I get the red warning only the first time I try to connect without the SSH port open. Then I assign a security group with inbound access to port 22 and can connect. Now, when I remove the security group and try connecting again, I don't get the red warning in the console but a blank screen: nothing happens and I can't get in.
That said, I found that my EC2 instances didn't have any outbound ports open in their security groups. I opened the entire TCP port range for outbound traffic, without opening SSH inbound, and could connect. Then I restricted the outbound port range a bit: I tried opening only the ephemeral range (reserved ports blocked) and the problem came up again.
My conclusion is that the entire TCP port range has to be open outbound. This is better than opening SSH port 22 inbound, but there's something I still don't fully understand. It is reasonable that outbound ports are needed in order to establish the connection and communicate with the instance, but why reserved ports? Does the SSH server side use a reserved port for the return connection?
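When experimenting like this, it can help to dump what a security group actually allows outbound. The following boto3 sketch is my own addition (not part of the original answer) and uses a placeholder group ID:

```python
# Sketch: list the outbound (egress) rules of a security group to verify
# what is actually open while testing. The group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.describe_security_groups(
    GroupIds=["sg-0123456789abcdef0"]
)["SecurityGroups"][0]

for rule in sg["IpPermissionsEgress"]:
    proto = rule["IpProtocol"]                      # "-1" means all protocols
    ports = (rule.get("FromPort"), rule.get("ToPort"))
    cidrs = [r["CidrIp"] for r in rule.get("IpRanges", [])]
    print(proto, ports, cidrs)
```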
Solution 4:[4]
I was stuck on a similar issue. My Security Groups and NACLs had inbound and outbound ports open only to the precise ports and IPs needed, in addition to the ephemeral port range 1024-65535 for all internal IPs.
Finally, what worked was opening up port 443 outbound to all internet IPs. Even restricting 443 outbound to internal IP ranges did not work.
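In boto3 terms, the change that worked here corresponds roughly to adding an outbound HTTPS rule like the sketch below (my addition; the group ID is a placeholder, not from the original answer):

```python
# Sketch: allow outbound HTTPS (443) so the SSM agent can reach the
# Session Manager service endpoints. The group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0",
                      "Description": "HTTPS out to SSM endpoints"}],
    }],
)
```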
Solution 5:[5]
The easiest way to do this would be to create the three VPC interface endpoints that SSM requires in your VPC and associated subnets (service names: com.amazonaws.[REGION].ssm, com.amazonaws.[REGION].ssmmessages and com.amazonaws.[REGION].ec2messages).
Then, you can add an ingress and an egress rule for only port 443 that allows communication within the VPC.
This is more secure than opening up large swathes of the Internet to your private instances, and faster, since the traffic stays on AWS' own network and does not have to traverse NATs or gateways.
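As a rough illustration of creating those three endpoints (my addition; the region, VPC, subnet and security-group IDs are placeholders), a boto3 sketch could look like this:

```python
# Sketch: create the three interface endpoints Session Manager needs.
# All IDs and the region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,  # so the default SSM hostnames resolve to the endpoints
    )
```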
Here are some helpful links to AWS documentation:
- https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-vpc-endpoints/
- https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html
- https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-privatelink.html
- https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html
Solution 6:[6]
Another item that tripped me up: Make sure the security group for your VPC endpoints is open to all inbound connections on 443, and all outbound.
I had mine originally tied to the security group of the EC2 instances I was connecting to (e.g. SG1), and when I created another security group (e.g. SG2), I could not connect. That was the reason: I had originally set up my VPC endpoints' security group to reference SG1 instead of allowing all inbound connections on 443.
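For reference, opening the endpoint security group to HTTPS from the whole VPC (rather than from a single instance security group) might look like the following sketch; this is my addition, and the group ID and CIDR are placeholders:

```python
# Sketch: allow inbound HTTPS (443) from the VPC CIDR on the security group
# attached to the SSM VPC endpoints. The group ID and CIDR are hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0fedcba9876543210",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16",
                      "Description": "HTTPS from instances in the VPC"}],
    }],
)
```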
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Wolfgang Kuehn |
| Solution 2 | A Kingscote |
| Solution 3 | Nicolás García |
| Solution 4 | |
| Solution 5 | eatsfood |
| Solution 6 | j7skov |