This content originally appeared on DEV Community and was authored by 이관호(Gwanho LEE)
TL;DR: I was calling my Rust SMS proxy on a Google Cloud VM by IP:8080, which caused Cloudflare 1003 errors and browser warnings. I mapped a hostname (`example-proxy.example.com`) to the VM, opened ports 80/443, put Caddy in front of my Rust app, and let it auto-issue a Let’s Encrypt certificate. Result: a clean HTTPS endpoint that “just works,” and an easy env var for my Cloudflare Worker: `SMS_API_URL=https://example-proxy.example.com/sms/send`.
Why this post?
I built a Rust program that proxies SMS requests. It ran fine at `http://203.0.113.45:8080`, but when I integrated it with a Cloudflare Worker and the frontend, I started seeing:
- `404 Not Found` (from my origin on routes I hadn’t defined yet)
- Cloudflare error 1003 (when requests were made by IP instead of a hostname)
This post documents what the errors actually mean, how the web stack sees your request, and the exact commands/configs I used to fix it—for real production use.
The stack (before & after)
Before
- Rust SMS proxy (Axum/Actix/Hyper) → listening on 0.0.0.0:8080
- Calls from Cloudflare Worker / browser sometimes used IP:8080 directly
- No HTTPS on origin → mixed content and proxy restrictions
After
- DNS hostname: `example-proxy.example.com` → A record → `203.0.113.45`
- Caddy on the VM listens on 80/443, terminates TLS with Let’s Encrypt
- Caddy reverse_proxies to Rust at 127.0.0.1:8080
- Worker ENV: `SMS_API_URL=https://example-proxy.example.com/sms/send`
The symptoms and what they actually mean
- 404 Not Found: The origin (your Rust app or upstream server) doesn’t have a route for the path you requested (e.g., `/health`). It’s not Cloudflare’s fault; it’s your app/router.
- Cloudflare 1003: Cloudflare does not allow you to access its network by raw IP or with a mismatched Host header. If your code `fetch()`es `http://203.0.113.45:8080/...` through a CF zone, expect this. Cloudflare wants a domain that the zone knows about.
- Mixed content / TLS issues: If your frontend is HTTPS but your API is HTTP, browsers can block or warn. You want HTTPS → HTTPS.
Key vocabulary (fast, practical definitions)
- Hostname / FQDN: A human-readable name (e.g., `example-proxy.example.com`) that maps to an IP via DNS.
- A record: DNS record that maps a hostname to an IPv4 address.
- Reverse proxy: A public-facing server that forwards incoming requests to a private backend (our Rust app).
- TLS (SSL): Encryption for HTTP. Gives you `https://` and the browser’s padlock.
- Let’s Encrypt: A free CA that issues certificates automatically. Caddy handles it for you.
- SNI: The TLS extension that lets one IP serve many hostnames by telling the server which cert to use.
- HTTP-01 challenge: A Let’s Encrypt check that proves you control a domain by answering a request on port 80.
- Origin: Your actual application server (here: Rust on 127.0.0.1:8080).
- Edge: The public-facing entrypoint (here: Caddy on 80/443; sometimes Cloudflare or a load balancer).
- CORS: Browser security policy for cross-origin requests. If a Cloudflare Worker calls your origin server-to-server, you usually don’t need CORS on the origin.
- Project ID (GCP): The string identifier for your Google Cloud project (not the numeric project number).
The fix — exact steps (copy/paste friendly)
1) Point a hostname to your VM
I used DuckDNS to create `example-proxy.example.com`, pointing to `203.0.113.45`.
Quick check:
dig +short example-proxy.example.com
# Expect: 203.0.113.45
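If you want that check scripted (for a deploy pipeline, say), here is a minimal sketch. The `check_dns` helper is mine, and the hostname/IP are the placeholder values from this post; `getent` queries the system resolver, so it sees the same answer your VM would.

```shell
# Verify that a hostname resolves to the expected IPv4 address
# before asking Let's Encrypt for a certificate.
check_dns() {
  host="$1"; expected="$2"
  resolved=$(getent hosts "$host" | awk '{print $1; exit}')
  if [ "$resolved" = "$expected" ]; then
    echo "OK: $host -> $resolved"
  else
    echo "MISMATCH: $host -> ${resolved:-<none>} (expected $expected)" >&2
    return 1
  fi
}

# Usage: check_dns example-proxy.example.com 203.0.113.45
```

A nonzero exit here is a good reason to abort a deploy: Caddy cannot pass the HTTP-01 challenge until DNS points at the VM.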
2) Open ports 80 and 443 in Google Cloud
We need 80 for Let’s Encrypt (HTTP-01) and 443 for real HTTPS traffic.
gcloud compute firewall-rules create allow-http --allow tcp:80 --description="Allow HTTP"
gcloud compute firewall-rules create allow-https --allow tcp:443 --description="Allow HTTPS"
If `gcloud` complains about your project, set it first:
gcloud config set project YOUR_PROJECT_ID
3) Install Caddy
sudo apt update
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | \
sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | \
sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy
4) Configure Caddy as a reverse proxy to Rust
Create `/etc/caddy/Caddyfile`:
example-proxy.example.com {
reverse_proxy 127.0.0.1:8080
}
Reload Caddy and check status:
sudo systemctl reload caddy
sudo systemctl status caddy --no-pager
What happens here: Caddy automatically obtains a Let’s Encrypt certificate for `example-proxy.example.com` (using the HTTP-01 challenge on port 80), then starts serving HTTPS on port 443 with that cert.
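The two-line Caddyfile is genuinely all you need. If you also want compression and an access log, a sketch using standard Caddy 2 directives (the log path is my assumption, not from the original setup):

```
example-proxy.example.com {
    encode gzip
    log {
        output file /var/log/caddy/access.log
    }
    reverse_proxy 127.0.0.1:8080
}
```

Reload with `sudo systemctl reload caddy` after any Caddyfile change, same as in the step above.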
5) Bind your Rust app to localhost
Keep port 8080 private behind Caddy:
./your_proxy_binary --host 127.0.0.1 --port 8080
If you had previously opened 8080 to the internet, you can remove/disable that firewall rule for security.
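To double-check the bind, you can parse the output of `ss` (from iproute2, preinstalled on most Debian/Ubuntu images). The function name is mine; a sketch:

```shell
# Check that a listening TCP port is bound to loopback only.
# Fails if anything listens on 0.0.0.0, [::], or * for that port.
is_loopback_only() {
  port="$1"
  listeners=$(ss -ltn | awk -v p=":$port" '$4 ~ p"$" {print $4}')
  [ -n "$listeners" ] || { echo "nothing listening on :$port"; return 1; }
  if printf '%s\n' "$listeners" | grep -Eq '^(0\.0\.0\.0|\[::\]|\*):'; then
    echo "WARNING: :$port is exposed on all interfaces"
    return 1
  fi
  echo "OK: :$port is loopback-only"
}

# Usage: is_loopback_only 8080
```

If this warns, your app is still binding `0.0.0.0:8080` and the firewall rule is the only thing protecting it.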
6) Test like a pro
If you have a health route:
curl -I https://example-proxy.example.com/health
If not, hit your real endpoint:
curl -i -X POST \
-H "content-type: application/json" \
-d '{"to":"+821012345678","text":"hello"}' \
https://example-proxy.example.com/sms/send
You should see a clean HTTPS response from your Rust service through Caddy.
7) Point your Cloudflare Worker / backend env to the hostname
In code:
const endpoint = env.SMS_API_URL || "https://example-proxy.example.com/sms/send";
In wrangler.toml:
[vars]
SMS_API_URL = "https://example-proxy.example.com/sms/send"
Deploy your Worker/Pages and you’re done.
Why this works (the request flow explained)
1. DNS resolves `example-proxy.example.com` → `203.0.113.45`.
2. The browser (or Worker) connects to port 443 and announces via SNI: “I’m `example-proxy.example.com`.”
3. Caddy presents the valid Let’s Encrypt certificate for that hostname.
4. Caddy reverse-proxies the request to `http://127.0.0.1:8080` (your Rust app).
5. The response streams back over the secure TLS connection to the client.
Result: a trusted padlock, no mixed content, no Cloudflare 1003, and a single stable URL for your app.
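You can watch steps 2–3 from the outside with `openssl s_client`, which sends the `-servername` value as SNI exactly like a browser would. A small wrapper sketch (the function name is mine):

```shell
# Print the subject, issuer, and expiry of the certificate an edge
# presents for a given SNI hostname.
show_cert() {
  ip="$1"; sni="$2"
  openssl s_client -connect "$ip:443" -servername "$sni" </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -enddate
}

# Usage: show_cert 203.0.113.45 example-proxy.example.com
# Once Caddy has a cert, the issuer line should mention Let's Encrypt.
```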
Troubleshooting & FAQs
Q: Caddy failed to get a certificate.
- Ensure `example-proxy.example.com` resolves to `203.0.113.45`.
- Make sure ports 80/443 are open and reachable from the internet.
- Check logs: `journalctl -u caddy -n 100 --no-pager`.
- Confirm nothing else is already listening on port 80.
Q: I still get 404 on `/health`.
- That’s your Rust router, not Caddy. Either add a route or test `/sms/send`.
Q: Do I need CORS?
- If the browser calls your Rust origin directly, yes. If your Worker calls the origin server-to-server, you typically don’t need CORS on the origin.
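If you do end up calling the origin from a browser, the CORS headers can live in Caddy instead of Rust. A sketch using standard Caddyfile directives; `https://app.example.com` is a placeholder for your frontend’s origin:

```
example-proxy.example.com {
    @preflight method OPTIONS
    header Access-Control-Allow-Origin "https://app.example.com"
    header Access-Control-Allow-Methods "POST, OPTIONS"
    header Access-Control-Allow-Headers "content-type"
    respond @preflight 204
    reverse_proxy 127.0.0.1:8080
}
```

Answering the `OPTIONS` preflight at the edge keeps that traffic off your Rust app entirely.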
Q: Could I use a Google HTTPS Load Balancer instead?
- Yes. It’s managed TLS and scales nicely, but setup is heavier and may add cost. Caddy is a sweet spot for a single VM.
Q: Could I avoid opening ports with Cloudflare Tunnel?
- Yes. A tunnel maps a public hostname to your private `127.0.0.1:8080` without exposing ports. It’s also a good option.
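For completeness, the tunnel route looks roughly like this in `cloudflared`’s `config.yml` (the tunnel ID and credentials path are placeholders; the tunnel itself is created with `cloudflared tunnel create <name>` and DNS-routed with `cloudflared tunnel route dns <name> example-proxy.example.com`):

```
# ~/.cloudflared/config.yml
tunnel: <TUNNEL_ID>
credentials-file: /home/you/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: example-proxy.example.com
    service: http://127.0.0.1:8080
  # cloudflared requires a catch-all as the last ingress rule:
  - service: http_status:404
```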
Security & production hardening
- Bind Rust to 127.0.0.1 so only Caddy can reach it.
- Close 8080 from the internet if you previously opened it.
- Use env vars for endpoints and secrets.
- Rate limit / auth your `/sms/send` endpoint to prevent abuse.
- Monitor logs (`journalctl`, app logs) and set alerts.
- Backups & updates: keep Caddy and OS packages updated.
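One cheap layer on the rate-limit/auth point: Caddy can reject requests that lack a shared secret before they ever reach Rust. A sketch using Caddyfile named matchers; the matcher name and the `SMS_API_KEY` env var are mine, and a static header is a stopgap, not a substitute for real auth or rate limiting:

```
example-proxy.example.com {
    @noauth not header X-Api-Key {$SMS_API_KEY}
    respond @noauth 401
    reverse_proxy 127.0.0.1:8080
}
```

`{$SMS_API_KEY}` is substituted from Caddy’s environment when the config loads, so the secret stays out of the Caddyfile itself.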
Bonus: Minimal health route in Rust
Axum
use axum::{routing::{get, post}, Router};

async fn health() -> &'static str { "ok" }
async fn send_sms() { /* your handler */ }

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(health))
        .route("/sms/send", post(send_sms));
    // Bind to loopback only; Caddy is the public entrypoint.
    // axum 0.7+ replaced axum::Server with axum::serve + a Tokio listener.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
Actix‑web
use actix_web::{get, post, App, HttpServer, Responder, HttpResponse};
#[get("/health")]
async fn health() -> impl Responder { HttpResponse::Ok().body("ok") }
async fn send_sms() -> impl Responder { HttpResponse::Ok().finish() }
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| App::new().service(health).route("/sms/send", actix_web::web::post().to(send_sms)))
.bind(("127.0.0.1", 8080))?
.run()
.await
}
Final checklist
- [ ] DNS: `example-proxy.example.com` → `203.0.113.45`
- [ ] GCP firewall: ports 80/443 allowed
- [ ] Caddy installed with `reverse_proxy 127.0.0.1:8080`
- [ ] Rust bound to 127.0.0.1:8080
- [ ] `curl -I https://example-proxy.example.com/…` works
- [ ] Worker env: `SMS_API_URL=https://example-proxy.example.com/sms/send`
Takeaways
- Don’t call your backend by raw IP—use a hostname.
- Put a reverse proxy (Caddy/Nginx) at the edge to handle TLS.
- Keep your app private on localhost; only expose 80/443.
- Prefer env-driven endpoints so deployments are clean.
Now your Rust SMS proxy is production-ready with a proper HTTPS URL that your frontend and Workers can trust.