BGP Worker Node

This page covers deploying BGP-enabled anycast worker nodes using the netactuate-ansible-bgp-bird2 playbook. Nodes are provisioned across multiple NetActuate Points of Presence, configured with BIRD2 for BGP peering, and announce your anycast prefix to NetActuate's edge routers. For FRR instead of BIRD2, see the FRR Alternative section below.

Prerequisites

  • A NetActuate account with API access (Account → API Access in the portal)
  • BGP group ID (see Networking → BGP or Networking → Anycast groups in the portal)
  • ASN (your own, or request a NetActuate-assigned ASN)
  • IPv4 and/or IPv6 prefix(es) to announce
  • Contract ID (scroll to the bottom of Account → API Access in the portal)
  • SSH key pair for node access

Playbook Repository

Clone the playbook from GitHub:

git clone https://github.com/netactuate/netactuate-ansible-bgp-bird2
cd netactuate-ansible-bgp-bird2

The repo uses a roles-based structure. createnode.yaml provisions VMs via the netactuate.compute.node module. bgp.yaml fetches BGP peer details from the NetActuate API for each node, then runs three roles: ssh (key distribution), rc.local (anycast IP binding to loopback and sysctl tuning), and bird (BIRD2 installation and configuration).
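Based on the files named above, the repository layout is approximately as follows (the exact tree may differ between releases):

```text
netactuate-ansible-bgp-bird2/
├── createnode.yaml      # provisions VMs via netactuate.compute.node
├── bgp.yaml             # fetches BGP peer details, runs the three roles
├── hosts                # inventory of worker nodes
├── keys.pub             # SSH public keys distributed by the ssh role
├── group_vars/
│   └── all.example      # copy to group_vars/all and edit
└── roles/
    ├── ssh/             # key distribution
    ├── rc.local/        # anycast IP binding and sysctl tuning
    └── bird/            # BIRD2 installation and configuration
```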

Configuration

group_vars/all

Copy group_vars/all.example to group_vars/all and fill in your values:

auth_token: "YOUR_API_KEY"
bgp_group: YOUR_BGP_GROUP_ID
contract_id: YOUR_CONTRACT_ID
operating_system: "Ubuntu 24.04 LTS (20240423)"

bgp_networks:
  IPv4:
    - net: "203.0.113.0/24"
      ips: ["203.0.113.1"]
      origin: "65000"
  IPv6:
    - net: "2001:db8::/32"
      ips: ["2001:db8::1"]
      origin: "65000"

Inventory (hosts)

Each node is one line under [nodes] with a hostname, location code, BGP flag, and plan:

[nodes]
worker-LAX.example.com location=LAX bgp_enabled=True plan='VR8x4x50'
worker-AMS.example.com location=AMS bgp_enabled=True plan='VR8x4x50'
worker-SYD.example.com location=SYD bgp_enabled=True plan='VR8x4x50'

Add your SSH public key to keys.pub.
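For example, assuming an Ed25519 key at the default path (adjust to whichever key you deploy with):

```shell
cat ~/.ssh/id_ed25519.pub >> keys.pub
```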

Deployment

ansible-playbook createnode.yaml
ansible-playbook bgp.yaml

Nodes are provisioned in parallel. Each node reboots once after initial provisioning — the playbook waits for login availability before proceeding. Re-run createnode.yaml if any nodes time out during provisioning; it is idempotent.
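Because the playbook is idempotent, a full re-run is safe, but you can also target just the affected hosts with ansible-playbook's standard --limit flag (hostname taken from the example inventory):

```shell
# Re-run provisioning only for a node that timed out
ansible-playbook createnode.yaml --limit worker-SYD.example.com
```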

What Gets Configured

The BIRD2 configuration is generated per node from a Jinja2 template. For each node, the playbook produces:

  • Router ID set to the node's public IPv4 address
  • Static announcement protocols for each prefix (blackhole routes that BIRD announces via BGP)
  • Export filters that only announce prefixes from the announcement protocols and reject everything else
  • Two IPv4 BGP sessions and two IPv6 BGP sessions per node (redundant peers from NetActuate)
  • Graceful restart enabled on all sessions

A representative snippet of the generated bird.conf:

router id 192.0.2.10;

protocol static announcement4 {
    ipv4;
    route 203.0.113.0/24 reject;
}

filter export_upstream_v4 {
    if proto = "announcement4" then { accept; } else { reject; }
}

protocol bgp {
    local as 65000;
    source address 192.0.2.10;
    neighbor 192.0.2.1 as 36236;
    graceful restart on;
    ipv4 { import all; export filter export_upstream_v4; };
}

The rc.local role binds your anycast IPs to the loopback interface (ip addr add 203.0.113.1/32 dev lo) and configures sysctl for IPv4/IPv6 forwarding.
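As a sketch, the commands the rc.local role applies are equivalent to the following (the exact sysctl set is defined by the playbook; addresses are from the example configuration above):

```shell
# Bind anycast service IPs to loopback so local daemons can answer on them
ip addr add 203.0.113.1/32 dev lo
ip -6 addr add 2001:db8::1/128 dev lo
# Enable forwarding for both address families
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
```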

Verification

SSH to any node and check BGP status:

birdc show protocols

All BGP sessions should show state Established. You will typically see four sessions — two IPv4 peers and two IPv6 peers.
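Abridged, illustrative output (protocol names, dates, and BIRD version will differ in your deployment):

```text
BIRD 2.0.12 ready.
Name           Proto      Table      State  Since         Info
announcement4  Static     master4    up     2024-05-01
upstream4_1    BGP        ---        up     2024-05-01    Established
upstream4_2    BGP        ---        up     2024-05-01    Established
upstream6_1    BGP        ---        up     2024-05-01    Established
upstream6_2    BGP        ---        up     2024-05-01    Established
```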

birdc show route

Your prefix should appear in the route table as a static route being exported to BGP peers.

FRR Alternative

For FRR (FRRouting) instead of BIRD2, use the FRR variant:

git clone https://github.com/netactuate/netactuate-ansible-bgp-frr

The same variables, inventory format, and deployment commands apply. Verify with:

vtysh -c "show bgp summary"

Teardown

ansible-playbook deletenode.yaml

This terminates all nodes in the inventory and cancels billing. BGP sessions are automatically cleaned up when the server is deleted.

Need Help?

If you need assistance with BGP worker node configuration, visit our support page.