
Deploying Nebula Mesh VPN on RHEL 8/9/10

Jun 4, 2025

VPN Mesh For Everyone

A mesh VPN is a type of virtual private network (VPN) where each device (or node) in the network connects directly to every other device. This is different from a traditional “hub-and-spoke” VPN, where all devices connect to a central server. Nebula is designed to create this kind of mesh network.

To understand how a mesh VPN like Nebula operates, it’s important to define two key concepts: overlay and underlay networks:

  • Underlay Network: This is the physical network infrastructure that your data travels over. It could be your home network, a corporate network, the internet, or a combination of these. In simpler terms, it’s the “real” network.
  • Overlay Network: This is a virtual network that is built on top of the underlay network. The mesh VPN created by Nebula is an overlay network. Devices in the overlay network communicate as if they were directly connected, even though their traffic is actually being routed through the underlay network.

In the context of this document:

  • Our RHEL 9 servers and clients are connected to an underlay network (e.g., your local network and the internet).
  • Nebula creates an overlay network on top of that. Each RHEL 9 machine running Nebula becomes a node in this overlay network. The nodes communicate with each other using their Nebula IP addresses (e.g., 10.100.0.1, 10.100.0.2) as if they were on a private, dedicated network. The actual network traffic, however, travels through your existing physical network (the underlay).

The Lighthouse

A very important note: the server running the lighthouse does not take part in the VPN mesh communication itself, so the actual mesh is formed between all of the nodes that are not lighthouse servers.
As you can see in the diagram at the beginning of the tutorial, the lighthouse is there to make sure connections can be established, but once the VPN is up and running there is no further communication with the lighthouse (not even on its internal IP address).
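You can check this behavior yourself once the mesh is up. A minimal sketch, assuming the lighthouse's underlay interface is named eth0 (adjust to your environment) and the addressing used later in this tutorial:

# On the lighthouse: watch UDP traffic on the Nebula port
tcpdump -ni eth0 udp port 4242

# Meanwhile, on client1, send traffic to client2 over the mesh:
ping -c 5 10.100.0.3

# After the initial handshake, the ping traffic flows directly between
# the two clients; apart from periodic lighthouse updates, the capture
# on the lighthouse should stay quiet.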

Technical Tutorial

This tutorial outlines the steps to deploy both the lighthouse (server) and client components of Nebula, a mesh VPN, on RHEL 9; the same steps apply to RHEL 8 and RHEL 10.

Reference: This tutorial is based on the official Nebula documentation and best practices from the Nebula GitHub repository: https://github.com/slackhq/nebula.git

Introduction

Nebula is an open-source, scalable, and secure mesh VPN designed for ease of use. It allows you to create a private network over untrusted networks, such as the internet. Unlike traditional VPNs, Nebula nodes connect directly to each other, improving performance and scalability.

Prerequisites

  • RHEL 9 installed on the machines you intend to use as the Nebula lighthouse server and the two clients.
  • Basic understanding of networking concepts.
  • root or sudo privileges on all machines.
  • Firewall configured to allow necessary traffic.

Part 1: Installing Nebula

Nebula provides pre-built binaries, which simplifies the installation process.

1. Download the Binary:

  • On both the server and client machines, download the appropriate Nebula binary for your architecture from the official GitHub releases page. For RHEL 9, you’ll likely use the linux-amd64 version.
# Example for amd64 architecture (requires curl and jq):
export NEBULA_VERSION=$(curl -s "https://api.github.com/repos/slackhq/nebula/releases/latest" | jq -r .tag_name)
export NEBULA_VERSION=$(echo $NEBULA_VERSION | sed -e 's/^v//')
export ARCH="amd64"
# Download the release archive and extract it straight into /usr/local/bin
# (-L follows the GitHub redirect to the actual download host)
curl -L https://github.com/slackhq/nebula/releases/download/v${NEBULA_VERSION}/nebula-linux-${ARCH}.tar.gz \
-o - | tar \
-zxvf - \
-C /usr/local/bin/
  • (Note: the snippet above resolves the latest release automatically; set NEBULA_VERSION manually if you want to pin a specific version.)

2. Extract the Binary:

If you downloaded the archive as a file instead of streaming it as shown above, extract it to /usr/local/bin:

tar -C /usr/local/bin -zxvf nebula-linux-amd64.tar.gz

3. Set Permissions:

Ensure the binary is executable:

chmod +x /usr/local/bin/nebula
chmod +x /usr/local/bin/nebula-cert
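As a quick sanity check that the binaries landed in place and are executable, print their versions (both tools accept a -version flag):

nebula -version
nebula-cert -version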

Part 2: Configuring Nebula

Nebula uses a single configuration file (typically config.yaml) on every node. The layout is the same for the lighthouse and the clients; the key differences are the lighthouse settings and the static_host_map section, which maps the Nebula IP addresses of the lighthouses to their real (underlay) addresses.

1. Generate Certificates:

  • Nebula uses certificates for authentication. You’ll need to generate a certificate authority (CA) certificate and key, and then use that CA to sign certificates for your server and clients.
  • All of the following commands should be run from the /etc/nebula/pki/ directory. If the directory does not exist, create it first.

Generate CA Certificate and Key (Only ONCE):

Create and change into the required directory:

mkdir -p /etc/nebula/pki && cd /etc/nebula/pki

And create the CA for signing the certificate:

nebula-cert ca -name nebula-ca -out-crt ca.crt -out-key ca.key
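If you want to double-check what you just created, nebula-cert can print the details of any certificate, including the CA's name and validity window:

nebula-cert print -path ca.crt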

Generate the Lighthouse Server Certificate and Key:

nebula-cert sign \
-ca-key ca.key \
-name <server_hostname> \
-ip <server_ip>/<subnet_mask> \
-out-crt <name>.crt \
-out-key <name>.key
  • Replace <server_hostname> with the hostname of your server (e.g., nebula-server).
  • Replace <server_ip> with the Nebula IP address you want to assign to the server (e.g., 10.100.0.1). This IP address MUST be within the Nebula network range you plan to use.
  • Replace <subnet_mask> with the subnet mask for the Nebula network (e.g., 24).
  • Replace <name> with the name of the certificate (in our example we will generate certificates for the lighthouse, client1, and client2).
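For example, with the values used throughout this tutorial (Nebula IP 10.100.0.1/24, and simply using lighthouse as both the certificate name and the file name to match the config.yaml shown later), the command becomes:

nebula-cert sign \
-ca-key ca.key \
-name lighthouse \
-ip 10.100.0.1/24 \
-out-crt lighthouse.crt \
-out-key lighthouse.key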

Generate the Client Certificate and Key (for each client):

nebula-cert sign \
-ca-key ca.key \
-name <client_hostname> \
-ip <client_ip>/<subnet_mask> \
-out-crt <name>.crt \
-out-key <name>.key
  • Replace <client_hostname> with the hostname of the client (e.g., nebula-client1).
  • Replace <client_ip> with the Nebula IP address for the client (e.g., 10.100.0.2 or 10.100.0.3). Ensure this is unique within your Nebula network.
  • Replace <subnet_mask> with the subnet mask (e.g., 24).
  • Replace client1.crt and client1.key with appropriate names for each client.
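Using the tutorial's addressing (client1 at 10.100.0.2, client2 at 10.100.0.3), the two client certificates would be generated like this:

nebula-cert sign -ca-key ca.key -name client1 -ip 10.100.0.2/24 -out-crt client1.crt -out-key client1.key
nebula-cert sign -ca-key ca.key -name client2 -ip 10.100.0.3/24 -out-crt client2.crt -out-key client2.key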

2. Create the Nebula Configuration File (config.yaml):

  • Create a config.yaml file on both the server and client machines. The contents will be similar, but with some key differences.
  • Lighthouse Server Configuration (config.yaml):
pki:
  ca: /etc/nebula/pki/ca.crt
  cert: /etc/nebula/pki/lighthouse.crt
  key: /etc/nebula/pki/lighthouse.key

lighthouse:
  am_lighthouse: true
  interval: 5  # seconds between updates to the lighthouses (ignored on the lighthouse itself)

listen:
  host: 0.0.0.0
  port: 4242

logging:
  level: info
  format: text

tun:
  disabled: false
  dev: nbl1
  mtu: 1400
  # Note: the lighthouse's Nebula IP address (10.100.0.1/24) comes from
  # its certificate, not from the config file.

# Nebula security group configuration
firewall:
  outbound_action: drop
  inbound_action: drop
  default_local_cidr_any: true
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow all inbound traffic from any nebula host
    - port: any
      proto: any
      host: any
  • Client Configuration (config.yaml):

The client configuration looks very similar to the lighthouse configuration, but with a few small changes:

pki:
  ca: /etc/nebula/pki/ca.crt
  cert: /etc/nebula/pki/client1.crt
  key: /etc/nebula/pki/client1.key

static_host_map:
  # Maps the lighthouse's Nebula IP to its real (underlay) address;
  # replace 192.168.122.12 with the actual IP of your lighthouse
  "10.100.0.1": ["192.168.122.12:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "10.100.0.1"  # The Nebula IP of your lighthouse

listen:
  host: 0.0.0.0
  port: 0

logging:
  level: info
  format: text

tun:
  disabled: false
  dev: nbl1
  mtu: 1400
  # The client's Nebula IP (e.g., 10.100.0.2/24) comes from its certificate.

firewall:
  default_local_cidr_any: true
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow all inbound traffic from any nebula host
    - port: any
      proto: any
      host: any

Key Points:

  • pki: Specifies the paths to the certificate and key files. The CA certificate is needed by both the lighthouse and the clients.
  • listen: The lighthouse listens on a fixed port (4242 by default). Clients typically use port 0 (a random port).
  • tun: dev: The name of the tunnel interface (e.g., nbl1).
  • The Nebula IP address and subnet mask of each node come from its certificate (the -ip flag passed to nebula-cert sign), not from the config file. That address is how the machine is reached on the Nebula network.
  • static_host_map: This is CRUCIAL. It maps the Nebula IP addresses of the lighthouses to their real (underlay) addresses. Every client needs an entry for each lighthouse; the other nodes are then discovered through the lighthouse.
  • unsafe_routes: (Optional) These are routes that Nebula will push to the host's routing table. Use with caution.
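Before starting anything, it is worth validating each config file. The nebula binary has a -test flag that parses the configuration and exits non-zero if it is faulty, which catches YAML indentation mistakes early:

nebula -test -config /etc/nebula/config.yaml && echo "config OK"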

3. Distribute Files:

  • Securely copy the following files:
  • ca.crt: To all machines (lighthouse and clients). Keep ca.key private; it should never leave the machine where you generated the certificates.
  • lighthouse.crt, lighthouse.key, and config.yaml (lighthouse version): To the lighthouse machine.
  • client1.crt, client1.key, and config.yaml (client1 version): To the client1 machine.
  • Repeat for each client, ensuring each gets its own certificate/key pair and a correctly populated config.yaml.
  • Place these files in a consistent location on each machine (e.g., /etc/nebula/).

Part 3: Configuring the Firewall

You need to configure the RHEL 9 firewall to allow Nebula traffic.

  • Allow Nebula Port:
  • On both the server and client machines, allow traffic on the port Nebula is using (4242 by default). Clients configured with port 0 pick a random port, so the rule mainly matters on the lighthouse:
firewall-cmd --permanent --add-port=4242/udp
firewall-cmd --reload
  • Nebula uses UDP for its traffic.
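You can confirm the rule took effect:

firewall-cmd --list-ports
# the output should include 4242/udp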

Part 4: Starting Nebula

1. Start Nebula:

Start Nebula using the following command on both the lighthouse and the client machines:

nebula -config /etc/nebula/config.yaml

If you chose another path, replace /etc/nebula/config.yaml with the actual path to your configuration file.

2. Verify the Connection:

  • Once Nebula is running on both the server and client, you should be able to ping the Nebula IP address of the server from the client, and vice-versa.
  • On the client1: ping 10.100.0.1 or ping 10.100.0.3
  • On the client2: ping 10.100.0.2 or ping 10.100.0.1
  • If the ping is successful, your Nebula VPN is working!
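If the pings fail, first check that the tunnel interface came up with the expected address (the device name nbl1 comes from the tun: dev setting above):

ip addr show nbl1
ip route | grep 10.100.0.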

Part 5: Setting up systemd (Optional but Recommended)

To ensure Nebula starts automatically on boot, create a systemd service file:

  1. Create the Service File:
cat > /etc/systemd/system/nebula.service << EOF
[Unit]
Description=Nebula Mesh VPN
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  2. Enable and Start the Service:
systemctl daemon-reload && systemctl enable --now nebula
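Verify that the service is running and follow its logs:

systemctl status nebula
journalctl -u nebula -f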

Part 6: Creating a Nebula RPM Package (Optional)

For easier distribution and management, you can create an RPM package for Nebula. This involves creating a spec file and using RPM build tools.

  1. Install RPM Build Tools:
dnf group install -y rpm-development-tools

2. Create Directories:

mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS}

3. Create a Spec File (nebula.spec):

Name: nebula
Version: 1.9.5
Release: 2%{?dist}
Summary: Nebula Binaries for VPN Mesh Network

Group: Utilities
License: MIT
Source0: https://github.com/slackhq/nebula/archive/refs/tags/v%{version}.tar.gz
BuildArch: x86_64
BuildRequires: golang
BuildRequires: systemd-rpm-macros

%description
The Nebula binary tool for creating a VPN Mesh Network

%global debug_package %{nil}
%define _build_id_links none

%prep
%autosetup

%build
make bin

%install
rm -rf $RPM_BUILD_ROOT
install -d -m 0755 $RPM_BUILD_ROOT/usr/local/bin/
install -m 0755 nebula $RPM_BUILD_ROOT/usr/local/bin/nebula
install -m 0755 nebula-cert $RPM_BUILD_ROOT/usr/local/bin/nebula-cert

mkdir -p $RPM_BUILD_ROOT/etc/nebula/
mkdir -p $RPM_BUILD_ROOT/etc/nebula/pki
mkdir -p $RPM_BUILD_ROOT/usr/lib/systemd/system/

install -m 0644 config.yaml $RPM_BUILD_ROOT/etc/nebula/config.yaml
install -Dpm 644 %{name}.service %{buildroot}%{_unitdir}/%{name}.service

%clean
rm -rf $RPM_BUILD_ROOT
rm -f debugfiles.list debuglinks.list debugsourcefiles.list debugsources.list elfbins.list


%files
%defattr(-,root,root,-)
/usr/local/bin/nebula
/usr/local/bin/nebula-cert
%config(noreplace) /etc/nebula/config.yaml
%{_unitdir}/%{name}.service

%post
%systemd_post %{name}.service

%preun
%systemd_preun %{name}.service

%changelog
* Thu Feb 06 2025 Oren Oichman <two.oes@gmail.com>
- Initial RPM packaging of the Nebula binaries

4. Create the Source Tarball:

# Work from a scratch directory; we repackage the upstream source together
# with our unit file and a sample config
wget https://github.com/slackhq/nebula/archive/refs/tags/v1.9.5.tar.gz
tar -zxvf v1.9.5.tar.gz
cp /etc/systemd/system/nebula.service nebula-1.9.5/

# Add the sample config.yaml that the spec file installs to /etc/nebula/
wget https://raw.githubusercontent.com/slackhq/nebula/refs/heads/master/examples/config.yml -O nebula-1.9.5/config.yaml
tar -czvf v1.9.5.tar.gz nebula-1.9.5
mv v1.9.5.tar.gz ~/rpmbuild/SOURCES/
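A quick sanity check that the repacked tarball contains the unit file and the sample config the spec file expects:

tar -tzf ~/rpmbuild/SOURCES/v1.9.5.tar.gz | grep -E 'nebula.service|config.yaml'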

5. Build the RPM:

Save the spec file as ~/rpmbuild/SPECS/nebula.spec, then build:

rpmbuild -ba ~/rpmbuild/SPECS/nebula.spec

This will create an RPM that installs the Nebula binary and configuration files to the appropriate locations. You can then distribute and install this RPM on other RHEL 9 systems.
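Before distributing the package you can inspect its payload and scriptlets (the path below assumes the x86_64 build from this tutorial):

rpm -qlp ~/rpmbuild/RPMS/x86_64/nebula-1.9.5-2.el9.x86_64.rpm
rpm -qp --scripts ~/rpmbuild/RPMS/x86_64/nebula-1.9.5-2.el9.x86_64.rpm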

6. Install the RPM (Optional):

Install the RPM package:

dnf install \
-y ~/rpmbuild/RPMS/x86_64/nebula-1.9.5-2.el9.x86_64.rpm

* Replace the file name with the actual name of your RPM file (the architecture and dist tag may differ on your system).

The RPM ships its own unit file under /usr/lib/systemd/system/, so remove the manually created one, reload systemd, and enable (or restart) the service:

rm -f /etc/systemd/system/nebula.service
systemctl daemon-reload && systemctl enable --now nebula.service

That is it !!

If you have any questions, feel free to respond / leave a comment.
You can find me on LinkedIn at: https://www.linkedin.com/in/orenoichman
Or on Twitter at: https://twitter.com/ooichman


Written by Oren Oichman

Open source contributor for the past 15 years
