* This post may have affiliate links. Please see my disclosure
Nginx Proxy Manager and Cloudflare with a custom domain on a Raspberry Pi 4
Since I started learning Docker, I discovered many lightweight and useful apps that I could easily self-host on my home server, such as a Plex media server, Nextcloud, and many other microservice apps for both work and leisure. With all those cool applications running on different ports of my Docker host server, I caught myself thinking: what if I decide to make some of these services reachable outside of my network? Perhaps publish a small website, share my media server with friends, or create a public web service to showcase my work.
What would be a good approach to safely configure and expose my network to the internet?
The most straightforward solution would be to create port forwards on the modem or router to publish the services that I want public, right? Although this path seems to be the simplest way to achieve the purpose, it also comes with some downsides, limitations, and additional security concerns that could introduce vulnerabilities into my home network. Even though I’m not hosting critical applications, I want to keep my home network secure by avoiding unnecessary open ports on my firewall.
That’s when the idea of a reverse proxy came up!
A reverse proxy is an excellent option that lets me publish services without the security compromises mentioned above.
But what is a reverse proxy?
A reverse proxy is an intermediary server between users and back-end servers that forwards client traffic to the real web servers or microservice applications hosted inside your internal network. This can be achieved without opening a different port for each service, and it is a secure way to expose your services and applications to the public internet.
The graphic below illustrates what a reverse proxy is:
Here are the main benefits:
- Security – As in a corporate network, the less you expose your network, the less vulnerable you are to attack. The proxy server will be the only server open to the internet, instead of exposing your entire server infrastructure.
- Centralized SSL certificates – These days it is highly recommended to use SSL certificates, even if you don’t record visitors’ data. Without a reverse proxy, you would need to install an SSL certificate on each individual server or microservice.
- Only one public IP needed – If you intend to publish several application services, managing a single public IP is cheaper and requires less maintenance. A reverse proxy comes in handy for this purpose alone.
Publish all your domain traffic through your public IP on ports 80 and 443, and the reverse proxy server will handle all the forwarding to the applications. In this article, I will teach you how to set up an open-source reverse proxy solution called Nginx Proxy Manager, combined with Cloudflare’s free DNS service.
I believe that the topics I outlined above have given you a better understanding of a reverse proxy and the benefits of adopting one.
But which reverse proxy should you use? There are a lot of options out there to choose from. The one I will cover in this article is Nginx Proxy Manager, built on Nginx, one of the most popular open-source reverse proxy projects. It is super easy to set up and implement.
Why Nginx Proxy Manager?
Nginx Proxy Manager provides all the main benefits mentioned above, fulfilling the reverse proxy needs of home server users like me. It is easy to set up and maintain, and free to use!
The major features:
- Simple and secure web admin interface.
- Intuitive configuration panel to create forwarding domains, redirections, streams, and 404 hosts.
- Free SSL using Let’s Encrypt, or bring your own custom SSL certificates.
- Access Lists and basic HTTP Authentication.
- User management, permissions, and audit logs.
You can visit the official Nginx Proxy Manager website for more information.
I’m going to walk you through the process of setting up a complete scenario with Nginx Proxy Manager, from the installation with Docker to the configuration of a proxy host application using a domain on Cloudflare.
Scenario installation example:
My Docker server is a Raspberry Pi 4, and it works very well! I have a few other apps running on the same Raspberry Pi in Docker containers. For this article, I will only focus on Nginx Proxy Manager and the application exposed to the internet. For the demonstration, I selected Cloud Commander, a web-based file manager.
My host server is a Raspberry Pi 4 (8 GB), and the gateway router of my network is a Sophos UTM, which will handle the port forwarding so that ports 80 and 443 (HTTP/HTTPS) are the only publicly facing services in my network.
Installing Nginx Proxy Manager
One of the things I love about containerized services is how easy it is to deploy a new application. I like to use Portainer, a UI for managing and deploying containers, which lifts away any remaining complexity you might find when installing from the command line.
Now, let’s get started with the Nginx Proxy Manager installation.
First, go to the Nginx Proxy Manager setup guide and grab the docker-compose code, which has pretty much everything you need to install your container.
The docker-compose file looks like this:
On Portainer, you select stacks:
Before you start copying and pasting the code, let me explain a few things you should adjust to make this work correctly.
1 – At the time of writing this article, Portainer (2.0.1) does not support compose file version “3”.
In this case, you can change the version to “2.1”. This has no impact on the end result.
2 – The Nginx Proxy Manager developers enabled the option to use SQLite, which is great since SQLite doesn’t require another container: it is a self-contained, serverless database that needs no additional configuration, making our installation even more lightweight than it already is.
Side note: the MariaDB-Aria image is not compatible with the ARM architecture used on the Raspberry Pi. Avoid it and skip the hassle of finding another compatible DB container.
3 – Make sure no other container on your server is already using ports 80, 443, or 81. Those ports will be bound to the Nginx Proxy Manager container and the host server.
Side note: In most cases, you can map a container port to a different port on the Docker host. However, for Nginx Proxy Manager you need to keep ports 80 and 443 identical on both the container and the Docker host, because Let’s Encrypt requires the host server to answer on ports 80 and 443 to issue the SSL certificates for your proxy host applications.
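Before deploying, you can quickly confirm that nothing on the host is already bound to ports 80, 443, or 81. Here is a minimal Python sketch (the port list is simply the three ports Nginx Proxy Manager needs; run it with sudo, since binding ports below 1024 requires root, and a “in use” result without root may just be a permissions error):

```python
import socket

def port_free(port, host="0.0.0.0"):
    """Return True if we can bind to the port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in (80, 443, 81):
        print(f"port {port}: {'free' if port_free(port) else 'in use'}")
```

If any of the three ports reports “in use”, stop the conflicting container (or service) before deploying the stack.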
With all the adjustments complete, the docker-compose stack will look like the snippet below:

```yaml
version: "2.1"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      # Public HTTP Port:
      - '80:80'
      # Public HTTPS Port:
      - '443:443'
      # Admin Web Port:
      - '81:81'
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```
Paste the code into the stack editor, and hit Deploy the stack:
A simple and clean installation, without all those MySQL lines.
Under the container list, you should see your new Nginx Proxy Manager container running with a healthy status:
Once the installation is complete, you should be able to reach the Nginx Proxy Manager web UI at http://hostserverip:81.
The default login is “admin@example.com” and the password is “changeme”.
After you enter the temporary credentials, you will be prompted to change them:
Although the steps above complete the Nginx Proxy Manager installation, to make your new proxy host operational you need to create a DNAT policy that opens ports 80 and 443 to your host server IP. This allows external requests to reach your applications and satisfies the prerequisites of Let’s Encrypt.
Create the NAT on Sophos Firewall
Even if you use another firewall or router, these steps will be pretty similar. In my case, I have a Sophos UTM as my gateway router. To create the NAT rule, go to Network Protection > NAT, then add a new NAT rule:
| Setting | Value |
|---|---|
| Rule type | DNAT (destination) |
| For traffic from | Any (in my case, only IPv4 is enabled) |
| Using service | 443 HTTPS |
| Going to | WAN IP of your network |
| Change destination to | IP address of your Raspberry Pi server, 192.168.22.8 |
| And the service to | 443 HTTPS |
Repeat the same steps to create a second NAT rule for HTTP; you just need to change the port to 80.
NOTE: Some ISPs might block incoming traffic on ports 80 and 443. They usually don’t like home users hosting services and applications behind their home routers. To check whether your ISP is blocking those ports, you can use an online port forwarding tester:
The result should look like this:
If your ISP blocks the ports, you can try contacting them to see if they will open those ports for your router, or you can take another path and use WireGuard with a VPS to bypass the ISP port blocking. You can follow this written guide or, if you prefer, watch the YouTube video showing the step-by-step process.
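If you’d rather script the check than use a web tool, a small sketch like the one below tells you whether a TCP port answers. Run it from a machine outside your network, and note that the WAN IP shown is a documentation placeholder, not a real address:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Attempt a TCP connection; True means something accepted the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    wan_ip = "203.0.113.10"  # placeholder: replace with your own WAN IP
    for port in (80, 443):
        status = "open" if port_reachable(wan_ip, port) else "blocked or closed"
        print(f"{wan_ip}:{port} -> {status}")
```

A “blocked or closed” result can mean either that the ISP is filtering the port or that your NAT rule isn’t forwarding it yet, so double-check the firewall rule before blaming the ISP.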
Create DNS Records on Cloudflare
I’m going to assume you already have a Cloudflare account and a purchased domain name. In case you don’t, you can follow this guide to set up your free account. It will only take a few minutes.
Under the DNS settings, you can create either an A record or a CNAME record. I usually prefer A records for domains and subdomains, but it is up to you.
For my example, I will create a subdomain with an A record, adding the name cloudcmd.mysitetest.com and my public IP 126.96.36.199. Before you save, make sure to set the proxy status to DNS only.
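Once the record is saved, you can verify that the name actually resolves to your public IP before moving on. A quick sketch (the hostname and IP below mirror my example and stand in for yours):

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True if the name currently resolves to the expected IPv4 address."""
    try:
        addresses = {info[4][0]
                     for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
    except socket.gaierror:
        # The name does not resolve at all (yet).
        return False
    return expected_ip in addresses

if __name__ == "__main__":
    print(resolves_to("cloudcmd.mysitetest.com", "126.96.36.199"))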
Now, under the SSL/TLS settings, set the SSL/TLS encryption mode to Full.
With all those settings out of the way, you can finally create your first proxy host!
Navigate to your Nginx Proxy Manager admin panel at http://hostserverip:81 (in my example, http://192.168.22.8:81) and use your newly created login and password.
Under Hosts > Proxy Hosts, click Add Proxy Host.
Add the subdomain record you created on Cloudflare, followed by the IP and the port where the application responds, in this case 7000.
Next, under the SSL section, select the option “Request a New SSL Certificate”, and, as a best practice, toggle on the options “Force SSL” and “HTTP/2 Support”.
Lastly, agree to the Let’s Encrypt Terms of Service to issue your certificate.
Hopefully, after all those steps are done, your application will be publicly accessible via HTTP and HTTPS at cloudcmd.mysitetest.com.
I hope you have enjoyed this article. Let me know in the comments section below if you have any questions about Nginx Proxy Manager.