How to set up pi-hole HA
Scenario
I used to use pfBlocker-NG on my pfSense installation. It worked quite well, but it caused some issues and I noticed some instability while pfBlocker was installed. As soon as I removed pfBlocker, the issues disappeared. I tried to track them down but eventually gave up; it might be a problem with my installation, my configuration or a combination with other things. The second reason is that I am considering switching to OPNsense. Netgate does not put a lot of resources into pfSense anymore and I fear they will abandon the system. OPNsense will first have to prove it covers all the features I need, but that is something for another post. A friend of mine recommended pi-hole, so I gave it a try, but on steroids :-D
High availability setup
Why should I use a more complicated setup to provide high availability when I could just enter two pi-hole instances as DNS servers? That is a really good question, and the answer is pretty simple. As soon as one DNS server fails (due to updates, a reboot, hardware failure etc.), many systems start to behave strangely. Some switch to the working DNS server, others run into timeouts over and over. If you have a machine strong enough to handle all the queries to pi-hole, there is no need for any kind of load balancing. This setup is about availability and fast switching to the second server.
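As an illustration of that timeout problem (not part of the setup itself): a Linux client that simply lists both pi-holes in /etc/resolv.conf behaves like this:

```
nameserver 192.168.1.11
nameserver 192.168.1.12
# glibc retries the first server for `timeout` seconds (default 5, see
# resolv.conf(5)) before it falls back to the second one - that delay
# on every query is exactly what a single virtual IP avoids
options timeout:5 attempts:2
```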
Network diagram
The following diagram gives you an overview of the final setup I use. Your setup might be simpler; even then you should be able to follow this guide for the installation process. This is not a complete map of my network segments and of course it does not show my real internal IP addresses.
graph TD
subgraph PiHole
virtIP -->|master| pihole1[pihole1<br>master<br>192.168.1.11]
virtIP -->|slave| pihole2[pihole2<br>slave<br>192.168.1.12]
end
subgraph Firewall
pfSense[pfSense<br>192.168.0.1<br>192.168.1.1<br>192.168.3.1] -->|forward| virtIP{VirtualIp<br>192.168.1.10}
end
subgraph Internet
pihole1 -->|forward| provider[provider / public dns]
pihole2 -->|forward| provider
end
subgraph LAN
Client[Client<br>192.168.0.30] -->|dns query| pfSense
end
subgraph GUEST
ClientGuest[Guest client<br>192.168.3.15] -->|dns query| pfSense
end
Setup the pi-hole hosts
Base setup
To set up this scenario we need two systems. The following base setup has to be done on both of them.
Install OS
I use two normal virtual machines running on my two Proxmox hosts, so one host should always be available unless I shut everything down or the power fails.
These two VMs get a current Debian Buster 10.3 installation. I just install the basic tools and the SSH server using tasksel during setup.
After rebooting I also make sure that rsync is installed. This can be done with apt install rsync.
Set your IP addresses to static
As a first step we have to configure static IP addresses. This is required because these are servers and their addresses should not change. I also have a reserved IPv6 prefix delegated to me, so I assign those addresses here as well.
pihole1:
allow-hotplug ens18
iface ens18 inet static
address 192.168.1.11
netmask 255.255.255.0
broadcast 192.168.1.255
network 192.168.1.0
gateway 192.168.1.1
dns-search example.com
iface ens18 inet6 static
address xxx:yyy:zzz::11
netmask 64
pihole2:
allow-hotplug ens18
iface ens18 inet static
address 192.168.1.12
netmask 255.255.255.0
broadcast 192.168.1.255
network 192.168.1.0
gateway 192.168.1.1
dns-search example.com
iface ens18 inet6 static
address xxx:yyy:zzz::12
netmask 64
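To apply these settings without a reboot, something like the following should work, assuming the classic ifupdown setup that matches the /etc/network/interfaces snippets above:

```shell
# re-read the configuration for ens18 (connectivity drops briefly,
# so run this from the console, not over ssh)
ifdown ens18 && ifup ens18

# verify that the static IPv4 and IPv6 addresses are assigned
ip -br addr show ens18
```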
Also edit the hosts file and add the IPv4 addresses of both pi-holes, as they are used to build the SSH connection. This makes sure the pi-hole hosts can be resolved without issues.
nano /etc/hosts
The file should look something like this:
127.0.0.1 localhost
192.168.1.11 pihole1.example.com pihole1
192.168.1.12 pihole2.example.com pihole2
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
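A quick sanity check on both machines to confirm the entries are picked up:

```shell
# both names should resolve via /etc/hosts
getent hosts pihole1 pihole2
```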
Add user to sync the hosts
Next we need a synchronization user. This user has to exist on both systems and will be used to ensure the same state on both hosts.
adduser gemini
visudo
Add the line below after # User alias specification to enable sudo without a password for this user.
gemini ALL=(ALL) NOPASSWD:ALL
Save and exit the editor to activate the changes.
Generate SSH keys
Next you have to switch to the newly created user. Use su gemini to change the context, or log in as the user gemini directly.
On pihole1:
cd
ssh-keygen
ssh-copy-id gemini@192.168.1.12
On pihole2:
cd
ssh-keygen
ssh-copy-id gemini@192.168.1.11
Now you should be able to log in without any password using the user gemini on both systems. Test this using the following commands:
on pihole1: ssh gemini@192.168.1.12
on pihole2: ssh gemini@192.168.1.11
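To verify the key-based login and the passwordless sudo in one go (shown here for pihole1; swap the IP on pihole2), something like this should work:

```shell
# BatchMode forbids password prompts, so this only succeeds with the key;
# 'sudo whoami' should print 'root' thanks to the NOPASSWD rule
ssh -o BatchMode=yes gemini@192.168.1.12 'sudo whoami'
```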
Backup your VM
When you reach this point and everything is set up on both hosts, you should probably back up your VMs, just in case something goes really wrong, so that you have a point to restore to.
Install pi-hole
As I am a lazy guy, I just used the automated installer of pi-hole to set up the systems. Use the command below to start the installation.
curl -sSL https://install.pi-hole.net | bash
Several questions will show up during installation. One of the first is which upstream DNS you would like to use. As Init7 is my provider, I just use their DNS servers as my upstream. You may also use one of the public servers from the list shown during installation. If you would like to use your own, select custom and enter the IPs of the hosts like this: 77.109.128.2,213.144.129.20. The IPv6 DNS servers can be added later in the web UI. The DNS servers of your provider should be listed on their support page, or just use one of the big default providers.
When everything was successful, you should be able to log in to the pi-hole admin page on both installations. You should also set a password for the web UI on both systems. On the example system here, http://192.168.1.11/admin opens the admin page.
Now you have two pi-hole servers, but you need to make them work together. The next chapter shows the steps to make this work.
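Before wiring them together, you may want to confirm that both instances answer and block on their own. doubleclick.net is just an example domain found on the default blocklists; with pi-hole's default blocking mode it should come back as 0.0.0.0:

```shell
dig @192.168.1.11 doubleclick.net +short
dig @192.168.1.12 doubleclick.net +short
```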
Setup sync for pi-hole hosts
Setup the gemini script
First we create the gemini script that is responsible for synchronizing the two systems.
touch /usr/local/bin/pihole-gemini
chmod +x /usr/local/bin/pihole-gemini
nano /usr/local/bin/pihole-gemini
Now paste the content of the file you downloaded from the link above, or fetch the file directly on the system you want to run it on. Before closing and saving the file, make sure the IP addresses are correct and the local IP is set to the right value.
PIHOLE1=192.168.1.11
SSHPORT1=22
# Secondary Pi-hole ip and ssh port
PIHOLE2=192.168.1.12
SSHPORT2=22
# the IP of the local system
LOCALIP=$PIHOLE1
On the secondary pi-hole, LOCALIP has to be set to $PIHOLE2.
Make sure the gemini script gets executed
The last step here is to make sure the script runs when some lists change. To make this work, we will have to modify a script of the pi-hole installation.
Important notice: If you update the pi-hole installation, it might be necessary to modify this file again, or your systems will run out of sync.
cp /opt/pihole/gravity.sh /opt/pihole/gravity.sh.bak
nano /opt/pihole/gravity.sh
#... other stuff
su -c /usr/local/bin/pihole-gemini - gemini
# this should be the last line of the script.
# insert command before this line
"${PIHOLE_COMMAND}" status
We could probably use cron to run this script every 5 or 10 minutes to sync up the backup. This would generate more traffic between the two systems but would not require the modification above. I have not had a chance to test this yet.
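Untested, as said, but such a periodic sync could look like the snippet below in /etc/cron.d (file name and schedule are my own choice; the script path and user match the setup above):

```
# /etc/cron.d/pihole-gemini - run the sync as user gemini every 10 minutes
*/10 * * * * gemini /usr/local/bin/pihole-gemini
```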
If you just want two DNS servers that synchronize their settings, you can stop here and add their two IP addresses to your DHCP server config to be used as DNS servers. If you are like me and want a highly available forwarder for your pfSense firewall, there is more to do. We need a virtual IP that automatically switches hosts.
Configure high availability for your pi-hole
First we need to declare a virtual IP address that is used as the DNS server address. We will use 192.168.1.10 for this setup.
Script to monitor pi-hole ftl service
mkdir /etc/scripts
nano /etc/scripts/chk_ftl
chmod 755 /etc/scripts/chk_ftl
#!/bin/sh
# keepalived health check: exit 0 (healthy) if pihole-FTL is running,
# exit 1 otherwise
STATUS=$(ps ax | grep -v grep | grep pihole-FTL)
if [ "$STATUS" != "" ]
then
exit 0
else
exit 1
fi
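You can run the check by hand to make sure it behaves as keepalived expects (exit code 0 while pihole-FTL is running, 1 otherwise):

```shell
/etc/scripts/chk_ftl && echo "pihole-FTL is running" || echo "pihole-FTL is down"
```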
Install required packages
Keepalived monitors the systems and makes sure the virtual IP moves to the slave if the master host has issues or is not reachable.
apt install keepalived libipset11 -y
Configure keepalived
The next step is to create the keepalived.conf file and tell the service what to do. The example config files are shown below. When the file is saved, we have to enable the service and start it.
NOTICE: Do NOT forget to change the password and the network interface name!
nano /etc/keepalived/keepalived.conf
systemctl enable keepalived.service
systemctl restart keepalived.service
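To see which node currently holds the virtual IP, something like this helps:

```shell
# on the active node the virtual address shows up on ens18,
# on the backup it is absent
ip addr show ens18 | grep '192.168.1.10'

# follow keepalived state changes (MASTER/BACKUP transitions)
journalctl -u keepalived -f
```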
Configuration for the primary host: pihole1
global_defs {
router_id pihole1
script_user root
enable_script_security
}
vrrp_script chk_ftl {
script "/etc/scripts/chk_ftl"
interval 1
weight -10
}
vrrp_instance PIHOLE_ipv4 {
state MASTER
interface ens18
virtual_router_id 55
priority 150
advert_int 1
unicast_src_ip 192.168.1.11
unicast_peer {
192.168.1.12
}
authentication {
auth_type PASS
auth_pass xxxXXxxx
}
virtual_ipaddress {
192.168.1.10/24
}
track_script {
chk_ftl
}
}
vrrp_instance PIHOLE_ipv6 {
state MASTER
interface ens18
virtual_router_id 55
priority 150
advert_int 1
unicast_src_ip xxx:yyy:zzz::11
unicast_peer {
xxx:yyy:zzz::12
}
virtual_ipaddress {
xxx:yyy:zzz::10/64
}
track_script {
chk_ftl
}
}
Configuration for the backup host: pihole2
global_defs {
router_id pihole2
script_user root
enable_script_security
}
vrrp_script chk_ftl {
script "/etc/scripts/chk_ftl"
interval 1
weight -10
}
vrrp_instance PIHOLE_ipv4 {
state BACKUP
interface ens18
virtual_router_id 55
priority 145
advert_int 1
unicast_src_ip 192.168.1.12
unicast_peer {
192.168.1.11
}
authentication {
auth_type PASS
auth_pass xxxXXxxx
}
virtual_ipaddress {
192.168.1.10/24
}
track_script {
chk_ftl
}
}
vrrp_instance PIHOLE_ipv6 {
state BACKUP
interface ens18
virtual_router_id 55
priority 145
advert_int 1
unicast_src_ip xxx:yyy:zzz::12
unicast_peer {
xxx:yyy:zzz::11
}
virtual_ipaddress {
xxx:yyy:zzz::10/64
}
track_script {
chk_ftl
}
}
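With both nodes configured, a small failover drill (my own test procedure, not from the original tutorial) confirms the switchover works:

```shell
# on pihole1: stop FTL so chk_ftl fails and the priority drops below 145
systemctl stop pihole-FTL

# from any machine: queries against the virtual IP should still be
# answered, now by pihole2
dig @192.168.1.10 example.com +short

# bring the master back; the virtual IP returns to pihole1
systemctl start pihole-FTL
```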
Finishing your pfSense Setup
Now we have a highly available DNS server that automatically fails over to the backup system without causing strange behavior on the hosts.
Why should I use pfSense as the DNS server for the clients? The answer is easy: the pfSense box also provides DHCP and some DNS overrides for the internal zone. I don't want to lose these internal DNS entries, and it also makes network segments easier to handle. Currently I have six different network segments, and using pfSense as DNS is straightforward. At the same time, all segments will have the DNS blocks of pi-hole active.
General Setup
First we have to enter the virtual IP addresses as DNS servers. To do this, log in to your pfSense box and go to System->General Setup.
Remove the existing DNS servers and add two records: one for the IPv4 address and one for the IPv6 address.
If you did not set up the HA system, add four records instead: the IPv4 and IPv6 addresses of both systems.
Also make sure the option Allow DNS server list to be overridden by DHCP/PPP on WAN is unchecked, or your pi-hole DNS will not be used whenever you get a DNS server from your provider.
Save and apply the changes.
Configure DNS Forwarder
If you use your pfSense box as the DNS server for the LAN clients, as I do in my setup, we have to make sure the DNS resolver uses the pi-hole systems as forwarders.
To do this, navigate to Services->DNS Resolver and make sure the option Enable Forwarding Mode is checked. If it is not, activate the feature and save it. Do not forget to apply the changes for them to take effect.
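From a client you can then check that the whole chain (client -> pfSense -> virtual IP -> pi-hole) works; the gateway IP below is the one from my LAN segment:

```shell
# a normal domain should resolve as usual
dig @192.168.0.1 debian.org +short

# a blocklisted domain should be filtered by pi-hole
# (0.0.0.0 with the default blocking mode)
dig @192.168.0.1 doubleclick.net +short
```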
Credits
I found this tutorial on reddit. Thanks to Panja0 for the original post.
Also big thanks to GeorgeT for the pihole-gemini sync script, which I slightly modified to make configuration easier on the two nodes.
The official pi-hole documentation page is also a good source for more information on pi-hole.