Hosting WordPress Locally

Having used WordPress for many websites over the years, I have hosted them in a variety of public shared environments, and mitigating the challenges of such arrangements has been interesting. Given the rising price of hosting, I decided to move my hosting in-house. While this is not ideal for everyone, it presented an opportunity to rewrite my initial blog posts covering hosting a site within an environment managed by Hosting Controller.

I had initially received negative feedback on those blog posts from individuals within the community, and much of their frustration related to the corporate nature of the site. That criticism is easily addressed: the purpose of writing them was to wind down my employment with a hosting provider and leave clear, concise direction in response to common questions resulting from misconfiguration or ambiguous vendor documentation. Looking back, those posts can be summarized as hosting configuration, WordPress configuration, and email configuration for Microsoft's cloud offering, Office 365.

Much has changed since then, resulting in this updated post. For the prior few years, my website had been simple pages written in Markdown and served from a public-facing Git repository. While that approach is functional and quick to write for, it offers little in the way of blog conveniences such as date sorting and image embedding.

At present, this site is hosted on an Ubuntu base box built with OpenLiteSpeed's one-click automated install (run here without a single click). The Vagrantfile describing the box follows. It pulls the cloud-image/ubuntu-24.04 image from HashiCorp's public Vagrant Cloud catalog of base OS images. Internally, I am using a GNU/Linux server running KVM/QEMU, Vagrant, and the vagrant-libvirt plugin. Machine configuration can also be handled manually with virsh commands. The same configuration could be pointed at the cloud by swapping the provider for the appropriate vendor-supplied plugin; I have used this approach to deploy DigitalOcean droplets, Linode hosts, VMware vSphere environments running ESXi, and AWS EC2 instances.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
config.vm.define :ubuntu_litespeed do |ubuntu_litespeed|
ubuntu_litespeed.vm.box = "cloud-image/ubuntu-24.04"
ubuntu_litespeed.vm.provider :libvirt do |libvirt|
libvirt.id_ssh_key_file = "/home/jason/projects/kali_remote/id_ssh.key"
#libvirt.uri = "qemu+ssh://10.130.10.50/system"
libvirt.host = '10.130.10.50'
libvirt.connect_via_ssh = true
libvirt.username = "jason"
libvirt.password = ''
libvirt.memory = 2048
libvirt.cpus = 2
end
end

  # Default false
  config.ssh.forward_agent = true
  config.ssh.forward_x11 = true

  # Default true
  config.ssh.keep_alive = false

  # Default 300s
  config.vm.boot_timeout = 900

  # Create a bridged public network so the machine is reachable on the LAN
  config.vm.network "public_network",
    :dev => "bridge0",
    :mode => "bridge",
    :type => "bridge",
    :ip => "192.168.100.12"

config.vm.provision "file", source: "~/projects/gm-com-chain.pem", destination: "~/"

config.vm.provision "shell", inline: <<-SHELL
echo foo
sudo cp /home/vagrant/gm-com-chain.pem /etc/ssl/certs/
#sudo mv ~/gdroot-g2.pem /etc/pki/ca-trust/source/anchors/
#sudo update-ca-trust
# Base requirements for installing X11
#sudo dnf update -y
#sudo dnf install -y git
# Add desktop environment
#sudo dnf install -y xorg-x11-xauth
#sudo dnf install -y xclock
#sudo dnf install -y gnome-shell
#sudo dnf install xorg-x11-apps
#sudo dnf install -y xorg*
#sudo dnf install -y @xfce-desktop-environment
sudo sed -i '/#X11Forwarding no/s//X11Forwarding yes/' /etc/ssh/sshd_config
sudo rm -rf /root/.Xauthority
sudo rm -rf /root/.serverauth.*
SHELL

config.vm.provision "shell", privileged: false, inline: <<-SHELL
echo bar
echo export PATH="$PATH:$HOME/.cargo/bin:$HOME/veilid/target/release:$HOME/.local/bin" >> ~/bash.bashrc
echo export PATH="$PATH:$HOME/.cargo/bin:$HOME/veilid/target/release:$HOME/.local/bin" >> ~/.bash_profile
echo $PATH
SHELL


config.vm.provision "shell", privileged: false, inline: <<-SHELL
echo basby
cd ~/
#wget https://openlitespeed.org/packages/openlitespeed-1.7.16.tgz
#tar -zxvf openlitespeed-*.tgz
#cd openlitespeed
#sudo ./install.sh
#sudo /usr/local/lsws/bin/lswsctrl start
#sudo /usr/local/lsws/bin/lswsctrl status
sudo ufw allow http
sudo ufw allow https
sudo ufw allow 7080/tcp
wget https://raw.githubusercontent.com/litespeedtech/ols1clk/master/ols1clk.sh
sudo chmod +x ols1clk.sh
echo y | sudo ./ols1clk.sh -A 123456789! -R 123456789! --wordpressplus www.jasonbreitwieser.com
SHELL

  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: [".git/", ".vagrant/"]
end
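
Bringing the box up and checking on it is then a matter of a few commands. The snippet below is a quick sketch assuming the vagrant-libvirt plugin is installed on the workstation; the lswsctrl status check mirrors the one commented out in the provisioner above, and the virsh connection string matches the host and user given in the Vagrantfile.

# Build the guest with the libvirt provider and confirm the web server is running
vagrant up ubuntu_litespeed --provider=libvirt
vagrant ssh ubuntu_litespeed -c 'sudo /usr/local/lsws/bin/lswsctrl status'

# The same machine can also be inspected manually with virsh, as mentioned earlier
virsh -c qemu+ssh://[email protected]/system list --all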

Rather than giving the internet direct access to these hosts, I place an Nginx proxy in front of them. This provides a chicane in the form of a bastion host. The configuration can be hardened further, but the basics covered here are enough for deployment.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
config.vm.define :ubuntuproxy do |ubuntuserver|
ubuntuserver.vm.box = "generic-x64/ubuntu2310"
#ubuntuserver.vm.network :public_network,
# dev: 'enp9s0',
# auto_config: true
ubuntuserver.vm.network :forwarded_port, :guest => 80, :host => 8080, :host_ip => "0.0.0.0", :gateway_ports => true
ubuntuserver.vm.provider :libvirt do |libvirt|
libvirt.id_ssh_key_file = "/home/jason/projects/kali_remote/id_ssh.key"
#libvirt.uri = "qemu+ssh://10.130.10.50/system"
libvirt.host = '10.130.10.50'
libvirt.connect_via_ssh = true
libvirt.username = "jason"
libvirt.password = ''
libvirt.memory = 2048
libvirt.cpus = 2
end
end

  # Default false
  config.ssh.forward_agent = true
  config.ssh.forward_x11 = true

  # Default true
  config.ssh.keep_alive = false

  # Default 300s
  config.vm.boot_timeout = 900

  # Create a bridged public network so the machine is reachable on the LAN
  config.vm.network "public_network",
    :dev => "bridge0",
    :mode => "bridge",
    :type => "bridge",
    :ip => "192.168.100.11"

config.vm.provision "file", source: "~/projects/gm-com-chain.pem", destination: "~/"

config.vm.provision "shell", inline: <<-SHELL
echo foo
sudo cp /home/vagrant/gm-com-chain.pem /etc/pki/fwupd/
#sudo mv ~/gdroot-g2.pem /etc/pki/ca-trust/source/anchors/
#sudo update-ca-trust
#/etc/pki/fwupd/
# Base requirements for installing X11
#sudo dnf update -y
#sudo dnf install -y git
# Add desktop environment
#sudo dnf install -y xorg-x11-xauth
#sudo dnf install -y xclock
#sudo dnf install -y gnome-shell
#sudo dnf install xorg-x11-apps
#sudo dnf install -y xorg*
#sudo dnf install -y @xfce-desktop-environment
sudo sed -i '/#X11Forwarding no/s//X11Forwarding yes/' /etc/ssh/sshd_config
sudo rm -rf /root/.Xauthority
sudo rm -rf /root/.serverauth.*
SHELL

config.vm.provision "shell", privileged: false, inline: <<-SHELL
echo foo
Install the prerequisites:

sudo apt install -y curl gnupg2 ca-certificates lsb-release ubuntu-keyring

#Import an official nginx signing key so apt could verify the packages authenticity. Fetch the key:

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

#Verify that the downloaded file contains the proper key:

gpg --dry-run --quiet --no-keyring --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg

#The output should contain the full fingerprint 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 as follows:

#pub rsa2048 2011-08-19 [SC] [expires: 2024-06-14]
# 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
#uid nginx signing key <[email protected]>

#If the fingerprint is different, remove the file.

#To set up the apt repository for stable nginx packages, run the following command:

#echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
#http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
# | sudo tee /etc/apt/sources.list.d/nginx.list

#If you would like to use mainline nginx packages, run the following command instead:

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/mainline/ubuntu `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list

#Set up repository pinning to prefer our packages over distribution-provided ones:

echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
| sudo tee /etc/apt/preferences.d/99nginx

#To install nginx, run the following commands:

sudo apt update
sudo apt-get install -y net-tools
sudo apt-get install -y nginx
sudo apt-get install -y locate
sudo updatedb
sudo ufw enable
sudo ufw allow http
sudo ufw reload
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl status nginx

SHELL

config.vm.provision "shell", privileged: false, inline: <<-SHELL
echo bar
#vi /etc/nginx/conf.d/default.conf

SHELL

config.vm.provision "shell", privileged: false, inline: <<-SHELL
echo bas
SHELL

  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: [".git/", ".vagrant/"]
end
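
The reverse proxy configuration itself is applied by hand on the bastion, as the commented vi line above suggests. What follows is a minimal sketch of /etc/nginx/conf.d/default.conf, written here as a shell snippet so it could later be folded into a provisioner. It assumes the OpenLiteSpeed box answers on 192.168.100.12 as configured above, and it leaves TLS termination and further hardening for another day.

# Minimal reverse proxy definition for the bastion (sketch, not hardened)
sudo tee /etc/nginx/conf.d/default.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name www.jasonbreitwieser.com;

    location / {
        # Hand requests to the OpenLiteSpeed host on the internal bridge
        proxy_pass http://192.168.100.12;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF

# Validate the configuration and reload the proxy
sudo nginx -t && sudo systemctl reload nginx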

Using these hosts together lets me constrain access to specific web hosts while allowing for redeployment in the event of an attack. By taking precautions such as regularly backing up the contents of the website and the configuration of the Nginx host, we can redeploy the site if we suspect compromise. This represents a subtle shift in my perception of the challenge: if we begin with disaster response in mind, we are prepared for the inevitable problems when they arrive. By automating the redeployment of the hosts, we can focus on restoring service faster.
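
To make that redeployment practical, the backups themselves should be scripted. The sketch below is illustrative only: the WordPress document root and database name are assumptions, so substitute whatever the one-click installer actually created, and the last command pulls the proxy configuration from the Nginx host at its bridged address.

#!/bin/sh
# Backup sketch: the paths and names below are placeholders, not guaranteed installer defaults
WP_ROOT=/usr/local/lsws/wordpress   # assumed WordPress document root
DB_NAME=wordpress                   # assumed database name
STAMP=$(date +%Y%m%d)

# Archive the site files
tar -czf "$HOME/wp-files-$STAMP.tar.gz" -C "$WP_ROOT" .

# Dump the database (prompts for the MySQL root password set during the one-click install)
mysqldump -u root -p "$DB_NAME" > "$HOME/wp-db-$STAMP.sql"

# Capture the proxy configuration from the Nginx bastion
ssh [email protected] 'sudo tar -czf - /etc/nginx' > "$HOME/nginx-conf-$STAMP.tar.gz"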