From Local to Production

My Journey to Hosting on AWS

Hey there! So, I recently took the plunge and moved my local app to AWS. It’s been quite the adventure, and I thought I'd share the step-by-step process I followed. Let’s dive in!


Setting Up the Domain and SSL

First off, I grabbed a domain—nothing fancy, just the usual process of buying from a registrar. For this setup, I used Cloudflare to manage the DNS. Here’s how it went down:

Pointing the Domain to Cloudflare

I signed up on Cloudflare, added my domain, and followed their instructions to point my domain to their nameservers. Simple enough, right? Just had to update the nameservers with my domain registrar.

Configuring SSL/TLS

Next up, I wanted to make sure everything was secure. In Cloudflare, I set the SSL/TLS encryption mode to "Full (strict)". This mode ensures that traffic between Cloudflare and my origin server is encrypted and that the origin presents a valid certificate.

I also generated a certificate for my origin server. This certificate would come into play later when I set up Nginx.


Dockerizing the App

Alright, with the domain and SSL sorted, it was time to Dockerize my app and get things rolling on AWS.

I kicked things off by Dockerizing my app, and that meant writing up a Docker Compose file. Nothing too crazy—just setting up Nginx alongside my app in the same file.
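To give you an idea, the Compose file looked roughly like this — a minimal sketch, where the service names, ports, and file paths are my illustrative placeholders, not necessarily what your app needs:

```yaml
services:
  app:
    build: .              # builds from the Dockerfile in the project root
    expose:
      - "3000"            # only reachable inside the Compose network, not from outside

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
```

Keeping Nginx in the same Compose file means it can reach the app by its service name (`app`) over the internal Docker network, so the app itself never has to be exposed on a host port.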

Remember that origin certificate I created earlier in Cloudflare? Well, I mounted the .crt and .key files into the Nginx container and used them for SSL.
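The relevant Nginx server block looked something like this — the domain, certificate paths, and upstream port are assumptions for the sake of the example:

```nginx
server {
    listen 443 ssl;
    server_name mydomain.com;

    # Cloudflare origin certificate and key, mounted into the container
    ssl_certificate     /etc/nginx/ssl/origin.crt;
    ssl_certificate_key /etc/nginx/ssl/origin.key;

    location / {
        # "app" is the Compose service name; Docker's DNS resolves it
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```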

Then, I made sure everything was working by opening my app through Nginx. Success!


Setting Up AWS

With Docker set up, it was time to get AWS involved. I launched an EC2 instance, making sure to configure it with the right VPC, subnet, route table, and security groups to allow traffic on ports 22, 80, and 443. I added my PC's SSH key as well.
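If you prefer the CLI over the console, the security group rules can be added like this — the group ID is a placeholder, and these are just the ingress rules, not the full VPC setup:

```shell
# Allow web traffic from anywhere (group ID is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# SSH, temporarily open to the world — this rule gets removed later
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
```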

💡
I will remove port 22 (SSH) later. Don't want to leave that open to the public.

It was time to put Docker to work for what it was built for. So, I went ahead and installed Docker on my EC2 machine and copied my project over using SCP.

Once that was done, I SSHed into the EC2 instance and started up the container. Simple as that!
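Condensed, those steps looked like this — assuming an Ubuntu instance, with the project path and address as placeholders:

```shell
# On the EC2 instance: install Docker via the official convenience script
curl -fsSL https://get.docker.com | sudo sh

# From my PC: copy the project over
scp -r ./myapp ubuntu@<ec2-public-ip>:~/myapp

# Back on the instance: start everything in the background
cd ~/myapp
sudo docker compose up -d
```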

I verified everything was set up correctly by opening the public IP of my EC2 in the browser.

Yahhooo! My hobby project was officially live on the internet. But hold up—was I really going to keep typing out the IP address instead of using mydomain.com?


Final Touches: DNS and Portainer

With the app running, it was time to polish things off.

Configuring Cloudflare DNS

I went back to Cloudflare and added an A Record pointing to my EC2’s public IP. I also added a CNAME Record for the www subdomain. And voilà—my website was live with my own domain!

Thank god I didn't have to use the IP anymore.

Managing Docker Containers with Portainer

To keep an eye on my Docker containers, I went ahead and installed Portainer, hooking it up to the Nginx network.
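Portainer slotted into Compose like this — a sketch, where the network name `nginx_default` is my assumption for whatever the existing Nginx network is actually called:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Portainer manage this Docker host
      - portainer_data:/data
    networks:
      - nginx_default        # join the existing Nginx network so Nginx can proxy to it

volumes:
  portainer_data:

networks:
  nginx_default:
    external: true           # created by the other Compose project, not this one
```

Mounting the Docker socket is what gives Portainer full control over the host's containers — which is also exactly why it shouldn't be publicly reachable, as I found out shortly after.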

Then, I tweaked the Nginx config to route traffic to portainer.mydomain.com. I also added a WebSocket config in Nginx, since accessing a container's shell in Portainer happens over WebSockets.
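The Portainer server block looked roughly like this — certificate paths and the upstream port (Portainer CE serves HTTP on 9000 by default) are illustrative:

```nginx
server {
    listen 443 ssl;
    server_name portainer.mydomain.com;

    ssl_certificate     /etc/nginx/ssl/origin.crt;
    ssl_certificate_key /etc/nginx/ssl/origin.key;

    location / {
        proxy_pass http://portainer:9000;

        # WebSocket support — required for the in-browser container console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```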

After that, I jumped back into Cloudflare and added a subdomain for Portainer, pointing it right at the EC2’s public IP.

I checked to make sure Portainer was set up properly, and yep, everything was working smoothly—I could access everything without a hitch. But, there was a catch: everything was out there on the public internet.

Portainer’s own docs even mention that relying on its default authentication alone isn’t recommended when it’s exposed to the public internet.


Setting Up Tailscale VPN

After setting up Portainer for managing my Docker containers, I realized I didn’t want it to be accessible over the public internet because Portainer has more powers than Superman himself.

The solution? Using Tailscale, a mesh VPN that makes it easy to securely access services over a private network.

First, I installed Tailscale on my EC2 instance. Then I started Tailscale and logged in with my account to connect the EC2 instance to my Tailscale network.

Once Tailscale was up and running, it assigned a private IP to my EC2 instance, something like 100.x.x.x. I noted this IP down because I’d be using it to access Portainer securely.
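The whole thing boils down to three commands on the EC2 instance:

```shell
# Install Tailscale via the official install script
curl -fsSL https://tailscale.com/install.sh | sh

# Bring it up and authenticate (prints a login URL to open in a browser)
sudo tailscale up

# Show the private 100.x.x.x address assigned to this machine
tailscale ip -4
```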

Now that my EC2 instance had a private Tailscale IP, I changed the A record for portainer.mydomain.com in Cloudflare to point at that private IP.

With everything set up, I connected my laptop to the Tailscale network, which gave me access to the private IP of the EC2 instance.

Now, whenever I go to portainer.mydomain.com, it securely routes the traffic through the VPN, keeping Portainer unreachable from the public internet.

There was another hiccup: the Cloudflare origin certificate is only trusted by Cloudflare’s proxy, not by browsers. So once Portainer traffic went straight to the private IP instead of through Cloudflare, SSL went kaput.

To fix that, I grabbed an SSL certificate for the subdomain using ZeroSSL and installed it in Nginx. Problem solved!
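Only the certificate lines in the Portainer server block needed to change — the file paths here are illustrative:

```nginx
# portainer.mydomain.com now uses the browser-trusted ZeroSSL certificate
# instead of the Cloudflare origin certificate
ssl_certificate     /etc/nginx/ssl/portainer.crt;
ssl_certificate_key /etc/nginx/ssl/portainer.key;
```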

💡
I also blocked the SSH connection in my AWS security group, so I can only SSH into my EC2 through Tailscale.

Final Words

Finally, I set up GitHub Actions for automatic deployments.
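For the curious, here's the shape of such a workflow — a minimal sketch, not my exact setup; the secret names, paths, and the `appleboy/ssh-action` action are assumptions you'd adapt to your own repo:

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/myapp
            git pull
            sudo docker compose up -d --build
```

One wrinkle worth noting: if SSH is reachable only over Tailscale, the runner needs to join the tailnet too (Tailscale publishes a GitHub Action for that) before it can connect.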

And that’s a wrap! From Dockerizing my app to setting up AWS and automating deployments, it’s been a wild ride.

I hope this helps if you’re on a similar journey. Feel free to drop any questions or thoughts in the comments. Cheers to seamless deployments!