This is an updated version of my two previous tutorials on how to set up a Ghost blog on a DigitalOcean CentOS 7.6 VPS Droplet and make the server both secure and as responsive as possible using HTTP/2.0. They can be found here and here.

In this tutorial I will go through some of the security measures I take and how I set up a very responsive web app/site served over HTTP/2, which automatically includes HTTPS. I will use NGINX as the server, and everything will be done on a CentOS 7.6 VPS hosted on DigitalOcean. As an example I will be installing a Ghost blog, but if you are mainly looking for how to get HTTP/2 rolling on your CentOS 7.6 server, skip ahead to the Certbot and NGINX sections below.

To start, I will assume that you have a fresh Droplet with CentOS 7.5 installed. At the time of writing, CentOS 7.5 is the highest version available on DigitalOcean even though CentOS 7.6 has been out for a while; the update below will bring you up to 7.6.

Start by logging in to your VPS using ssh in your terminal of choice, and change your root password.

# ssh root@host

First of all we'll update CentOS with the following command.

# yum update

Then, to check that we are on the latest version of CentOS, type the following.

# cat /etc/centos-release

Which at the time of writing outputs:

CentOS Linux release 7.6.1810 (Core)


Next up we'll install a firewall to get some added control over what we want to have open to the public. Yum is CentOS's package manager, an "app store" of sorts where you can download and install software. systemctl starts and stops installed services, systemctl enable makes sure a service starts again if the server is rebooted, and systemctl status checks its current state. The following commands will install, start, enable and check the status of firewalld, the firewall we'll be using.

# yum install firewalld
# systemctl start firewalld
# systemctl enable firewalld
# systemctl status firewalld

Status should output something similar to the following.

firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-12-28 15:01:12 UTC; 11s ago
     Docs: man:firewalld(1)
 Main PID: 8509 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─8509 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Dec 28 15:01:12 temp-test systemd[1]: Starting firewalld - dynamic firewall daemon...
Dec 28 15:01:12 temp-test systemd[1]: Started firewalld - dynamic firewall daemon.

Change ssh port

As an extra security measure I like to change the standard port used for ssh connections. I do this because if I keep port 22 open with remote root access enabled, I get hundreds of unsuccessful attempts to log in to the root account on my server. Statistically it's not likely anyone will get in if you have a secure password, but let's be a little safer.

Let's edit the ssh config file. My preferred text editor is "vi", but you can use whichever you prefer. If you are new to "vi", there are just a few commands you need to know to use it.

Once the file is open, press "i" to start editing it. You can move around using the arrow keys. Once you are finished editing, press the ESC key, type :wq and press return/enter. The w stands for write and the q for quit. Sometimes, if you have made changes you don't want to save, add a ! after q to force quit without saving (like so: :q!).

# vi /etc/ssh/sshd_config

Find and uncomment #Port 22 (remove the # at the beginning of the line) and change the port number to anything you want; in this example I will use port 2244.
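If you'd rather script this edit than do it in vi, the same change can be made with sed. This is just a sketch, assuming the file still contains the default commented-out #Port 22 line:

```shell
# Keep a backup of the config before touching it.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Uncomment "Port 22" (if commented) and change it to 2244.
sudo sed -i 's/^#\?Port 22$/Port 2244/' /etc/ssh/sshd_config

# Confirm the result.
grep '^Port' /etc/ssh/sshd_config
```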

CentOS comes with SELinux (Security-Enhanced Linux), a kernel security module that manages access-control policies in Linux. We need to allow sshd to use port 2244, which we can do like this. (If the semanage command is missing, install it first with sudo yum install policycoreutils-python.)

# semanage port -a -t ssh_port_t -p tcp 2244

We also need to open this port in the firewall and then reload the firewall to apply the changes.

# firewall-cmd --permanent --zone=public --add-port=2244/tcp
# firewall-cmd --reload

And as a last thing we'll restart the ssh service so it starts listening on port 2244 instead of port 22.

# systemctl restart sshd

Now you can log out and try to reconnect the same way as the first time to confirm that port 22 is blocked, then log in using the new port.
To use ssh with a different port, add "-p ####" to the end when logging in.

# ssh root@host -p 2244

New user

Next up we will add another user, as it's not good practice to log in to your server remotely as root. Let's call this user toor.

# useradd toor
# passwd toor
# usermod -aG wheel toor

useradd creates the user. passwd lets you set a password for the user. And usermod -aG wheel adds the user to the wheel group, which grants your new user sudo privileges.
Now log out and log in with your new user to check that it works.

# ssh toor@host -p 2244

Now we will enable certificate (public key) logins in ssh. To do this we need to modify the sshd_config file again.

# sudo vi /etc/ssh/sshd_config

As we are no longer the root user, you will need to put sudo in front of commands that change anything outside your home directory. You will be prompted for your password.

Find the line with PubkeyAuthentication, remove the # at the beginning to uncomment it, and make sure it says yes after it.

PubkeyAuthentication yes

Save and quit vi. And restart the sshd service.

# sudo systemctl restart sshd

If this works fine, log out again and we will create a certificate on your local machine to allow passwordless logins to your server, to make life slightly better.
To create a new certificate, type the following.

# ssh-keygen -t rsa

When it asks for a password, just hit enter to create a passwordless certificate. This command works on both Mac and Linux. Windows has slightly different commands, but there is a way to get an Ubuntu Bash terminal natively in Windows 10. If you don't have this yet, I recommend you try it out. You can follow this tutorial on how to enable it.

Next up we want to copy the certificate to the server. There is a pretty simple command for this.

# ssh-copy-id -i ~/.ssh/id_rsa -p 2244 toor@host

If this is successful, we'll try to log in again.

# ssh toor@host -p 2244

This should bring you straight in to your server without the need for a password.
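As a convenience on your local machine, you can store the host, user, port and key in your ssh client config so you don't have to type them every time. A sketch, using a hypothetical alias myblog; substitute your server's real hostname or IP:

```shell
# Run this on your LOCAL machine, not the server.
# It appends a host alias to your ssh client config.
cat >> ~/.ssh/config <<'EOF'
Host myblog
    HostName host
    User toor
    Port 2244
    IdentityFile ~/.ssh/id_rsa
EOF
```

After this, a plain `ssh myblog` is all you need to log in.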
Next we will make sure root logins are no longer allowed. If you want, you can also disable password logins to your server. This means the only way to log in is to use your certificate; if you lose it, or if you try to access your server from another computer, you are locked out. DigitalOcean does have a backup plan though: you can log in to their control panel and get root ssh access that way if you get locked out. I locked myself out while making this tutorial by doing some of the steps in the wrong order, but regained access through the online terminal.

Now let's disable remote root logins.

# sudo vi /etc/ssh/sshd_config

Find the line with PermitRootLogin, uncomment it and make sure it says no after it.

#PermitRootLogin yes -> PermitRootLogin no

Then, if you want, you can find the line which says PasswordAuthentication yes and change it to

PasswordAuthentication no

which will disable any attempt to log in to your server through ssh with a password.
Save and quit. And restart the sshd service.

# sudo systemctl restart sshd

If you need to access your root account, log in with your normal account, then just type su and enter your root password.


Next up we will install node.js. There are multiple ways to do this: via yum, via nvm, and more. Personally I prefer nvm (node version manager), because it makes it easier to change node versions later if needed. nvm only installs node for the currently logged-in user, which means different users can run different node versions, which can be handy if you are into that.

Start by heading over to the nvm GitHub page and check what the latest version is, v0.34.0 at the time of writing. Then install it by running the install script with the following command.

# curl -o- | bash

Then choose your node version. I usually go with the latest LTS release, which at the time of writing is v10.15.1.

# nvm install 10.15.1
# node -v

node -v should output the version.


Ghost needs a MySQL database to run, so next we'll install that. Did you know that the "My" in MySQL is actually the name of the creator's daughter, and not pronounced like the English "my" (mine)? He has two daughters, and you might have heard of a MySQL alternative called MariaDB, named after his second daughter.

Start by heading to the MySQL Yum repository download page, scroll to the bottom and find the latest version. Which Linux distro each package is for is written in pretty small text in parentheses below it; for CentOS, check under Red Hat Enterprise Linux. At the time of writing the version is mysql80-community-release-el7-2.noarch.rpm.

To download the installation package, type the following.

# wget

If wget isn't installed, you can get it with the command below and then try the MySQL download again.

# sudo yum install wget

Just as an extra security measure you can check the MD5 checksum of the downloaded package with

# md5sum mysql80-community-release-el7-2.noarch.rpm

then compare the result with the checksum listed next to the package on the website where we checked the latest MySQL version. If it's the same, continue by installing MySQL, then start and enable the database with systemctl.
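If you'd rather not compare two long hex strings by eye, the check can be scripted. A sketch; EXPECTED stands in for the checksum you copy from the MySQL download page:

```shell
# Paste the published checksum from the MySQL site here.
EXPECTED="paste-the-published-md5-here"

# Compute the checksum of the downloaded package.
ACTUAL=$(md5sum mysql80-community-release-el7-2.noarch.rpm | awk '{print $1}')

# Compare and print a clear verdict.
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "Checksum OK"
else
    echo "Checksum MISMATCH - do not install this package"
fi
```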

# sudo rpm -ivh mysql80-community-release-el7-2.noarch.rpm
# sudo yum install mysql-server
# sudo systemctl start mysqld
# sudo systemctl status mysqld
# sudo systemctl enable mysqld

During the installation MySQL generates a temporary password for the root user, which we need to find. We can get it like this.

# sudo grep 'temporary password' /var/log/mysqld.log

Copy the password and continue with the installation.

# sudo mysql_secure_installation

First enter the temporary password. Then create a new password; write it down and store it somewhere safe, because if you lose it, it's a pain to create a new one without losing data from the db.
For some reason I'm asked to enter the new root password twice. Once you are done with the password part, just answer yes to all the remaining questions.

Then we will create a new user and database for ghost. Start by logging in to the MySQL shell as root.

# mysql -u root -p

Then we'll create a user called ghost with the password @ghost3spookY. Choose whatever password you want. And we'll grant this user all privileges.

mysql> CREATE USER 'ghost'@'localhost' IDENTIFIED BY '@ghost3spookY';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ghost'@'localhost' WITH GRANT OPTION;

Ghost was written for a slightly older version of MySQL, and MySQL 8 changed its default authentication method, but we can switch both users back to the old native password method by doing the following (use your own root and ghost passwords).

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '@SecretPassword2';
mysql> ALTER USER 'ghost'@'localhost' IDENTIFIED WITH mysql_native_password BY '@ghost3spookY';

After that we'll logout from the MySQL shell. And login with the new user and create a new database for ghost.

mysql> \q
# mysql -u ghost -p
mysql> CREATE DATABASE ghost_blog;
mysql> \q


Time to install Ghost, and for that we need ghost-cli, which we can install like this.

# npm install -g ghost-cli

As ghost-cli doesn't officially support CentOS, I usually do a little hack to make it work. To get it working we're going to edit one of the ghost-cli files.

# vi .nvm/versions/node/v10.15.1/lib/node_modules/ghost-cli/extensions/systemd/systemd.js

Find the following lines

isRunning() {
        return this.ui.sudo(`systemctl is-active ${this.systemdName}`)
            .then(() => true)
            .catch((error) => {
                // Systemd prints out "inactive" if service isn't running
                // or "activating" if service hasn't completely started yet
                if (error.stdout && error.stdout.match(/inactive|activating/)) {

In the last line above you can see inactive|activating; there we are going to add |unknown so it looks like this.

    if (error.stdout && error.stdout.match(/inactive|activating|unknown/))

Save and quit.
Unfortunately you need to redo this hack every time you update ghost-cli.
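Since it's a one-line change, you can also apply the patch with sed instead of opening vi, which makes re-applying it after a ghost-cli update quicker. A sketch, assuming the same file path as above:

```shell
# Path to ghost-cli's systemd extension (adjust the node version if yours differs).
FILE=~/.nvm/versions/node/v10.15.1/lib/node_modules/ghost-cli/extensions/systemd/systemd.js

# Add |unknown to the status regex, unless it is already there.
grep -q 'inactive|activating|unknown' "$FILE" || \
    sed -i 's@inactive|activating@inactive|activating|unknown@' "$FILE"

# Confirm the change.
grep -n 'inactive|activating|unknown' "$FILE"
```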

ghost-cli creates a new user, and we need to let this new user access the folder where you want Ghost to be installed, so we'll change the permissions on that folder. In my case I'm installing Ghost in a new folder inside my current user's home folder, so we'll change the permissions on the home folder so the ghost user can access it.

# sudo chmod 755 /home/toor

Next create a folder to install ghost in.

# mkdir ghost && cd ghost
# ghost install

Ghost-cli will complain that you aren't on Ubuntu; just say yes and continue anyway, then answer the questions as below, matching the password, user and database used in the MySQL installation and whatever URL you have for your site.

? Enter your blog URL:
? Enter your MySQL hostname: localhost
? Enter your MySQL username: ghost
? Enter your MySQL password: [hidden]
? Enter your Ghost database name: ghost_blog
? Sudo Password [hidden]
? Do you wish to set up "ghost" mysql user? No
? Do you wish to set up Nginx? No
? Do you wish to set up Systemd? Yes
? Do you want to start Ghost? No

Done! Ghost is installed. But before we can start it we need to set up the firewall, SELinux and NGINX. We will also get a certificate so we can use HTTPS and HTTP/2.0 to access the blog.
In the firewall we want to open port 80, the default port for HTTP, and port 443, the default for HTTPS, and we want to add the http and https services to the firewall as well. Run the following commands and check that the output looks similar to the results below.

# sudo firewall-cmd --state

# sudo firewall-cmd --get-default-zone

# sudo firewall-cmd --get-active-zones
public
  interfaces: eth0

# sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  services: ssh dhcpv6-client
  ports: 2244/tcp
  masquerade: no
  rich rules: 

# sudo firewall-cmd --zone=public --permanent --add-service=http

# sudo firewall-cmd --zone=public --permanent --add-service=https

# sudo firewall-cmd --zone=public --permanent --add-port=80/tcp

# sudo firewall-cmd --zone=public --permanent --add-port=443/tcp

# sudo firewall-cmd --reload

# sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  services: ssh dhcpv6-client http https
  ports: 2244/tcp 80/tcp 443/tcp
  masquerade: no
  rich rules:

And that's it, the firewall is ready.

Let's Encrypt & Certbot

In this tutorial I will use Let's Encrypt to get free certificates for the site to be used with HTTPS, but you can obviously use whichever provider you want. The following two commands will install certbot on your server.

# sudo yum install epel-release
# sudo yum install certbot-nginx

Next we will request a certificate for the domain this server is connected to and where you want your blog to be. In this step I assume you already have a domain and that it's pointed at the Droplet you're currently connected to.

# sudo certbot certonly

Here you get three options; I usually go with the one that spins up a temporary web server. Fill in all the info for you and your domain. If all goes well, you will get a long success message including the location where your certificates are stored.
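Let's Encrypt certificates are only valid for 90 days, so it's worth automating renewal. A sketch: first verify that a renewal would succeed using certbot's --dry-run flag, then add a cron entry (the schedule here is just an example; certbot only actually renews certificates that are close to expiry):

```shell
# Verify renewal works without changing the real certificate.
sudo certbot renew --dry-run

# Then run `sudo crontab -e` and add a line like this one,
# which attempts a quiet renewal every day at 03:00:
# 0 3 * * * certbot renew --quiet
```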


Now it's time to install NGINX and set it up to serve your Ghost blog over HTTP/2.0, which includes HTTPS by default, and we'll redirect all normal HTTP traffic to HTTPS.
Yum doesn't come with the NGINX repo in it, so we'll start by adding a new repo file like this

# sudo vi /etc/yum.repos.d/nginx.repo

and paste the following text into the file (this is the stable-branch repo definition from nginx.org).

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

Save and quit, then install nginx.

# sudo yum install nginx
# sudo systemctl status nginx

We won't start NGINX just yet, so the status should be inactive. Then we will edit the NGINX config file. What we want to do is raise the maximum allowed file upload size to 50 MB (the default is 1 MB) so you can upload photos or files larger than that to your blog. Then we will add some settings for https specifying which protocols and session options to use. And the last three lines activate a cache so your blog can be served faster. Look out for any duplicate lines.

# sudo vi /etc/nginx/nginx.conf

Add the following inside the http block, after the line with #gzip on; and before the line include /etc/nginx/conf.d/*.conf; (the proxy_cache_path settings must sit at the http level, not inside a server block):

    client_max_body_size 50M;

    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;
    # Forward secrecy settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    ssl_stapling on;
    ssl_stapling_verify on;

    proxy_cache_path /home/toor/ghostcache levels=1:2 keys_zone=ghostcache:60m max_size=300m inactive=24h;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_methods GET HEAD;

Save and quit.

Next we will add a config file specifically for the domain you want to use, which I will call yourdomain.conf here. Change the name to whatever you want.

# sudo vi /etc/nginx/conf.d/yourdomain.conf

What you see below are the settings I use on my domain. I'm mainly defining everything which should happen on port 443 (https), and as a last thing I redirect any requests on port 80 (http) to port 443. Ghost runs on port 2368 by default, and since it runs on this same server we will tell NGINX to proxy requests to localhost on that port; if you use a different port, just change it in the two places below where requests are proxied to Ghost.

Then you can see ssl_certificate and ..._key; these point to the certificate and key that were created during the certbot step. Put the correct file locations in below. We will again specify some ciphers and protocols for https.

Then, in the four sections starting with location, we're setting up some caching, some response headers, and the physical file locations to be used on the server for serving some of the blog content.

So copy and paste the part below with your own changes (domain name, certificate paths and file locations).

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    listen 443 ssl http2;
    listen [::]:443 ipv6only=on ssl http2;

    # Replace with your own domain.
    server_name yourdomain.com;

    gzip off;

    # The paths certbot printed at the end of the previous step.
    ssl_certificate        /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key    /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_dhparam            /etc/nginx/ssl/dhparam.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    add_header Strict-Transport-Security max-age=31536000;
    add_header X-Frame-Options SAMEORIGIN;

    location / {
        proxy_cache ghostcache;
        proxy_cache_valid 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_bypass $http_cache_control;
        proxy_ignore_headers Set-Cookie Cache-Control;
        proxy_hide_header Set-Cookie;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
        add_header X-Cache-Status $upstream_cache_status;

        limit_req zone=one burst=20 nodelay;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_pass http://localhost:2368;
    }

    location ^~ /assets/ {
        root /home/toor/ghost/content/themes/casper;
    }

    location ^~ /content/images/ {
        root /home/toor/ghost;
    }

    location ^~ /ghost/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:2368;
    }
}

server {
    listen 80;
    listen [::]:80 ipv6only=on;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}
Then save and quit.

In the file above you might have seen a reference to a file called dhparam.pem. This is a set of Diffie-Hellman parameters used for some added security over https, which we need to generate ourselves. That can be done with the two following steps.

# sudo mkdir /etc/nginx/ssl
# sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096

This can take a while so go and grab yourself something to drink.
When done we also need to create the folder we have specified above for the caching.

# sudo mkdir /home/toor/ghostcache

Done! Now we want to check that the NGINX configuration is correct. Just type nginx -t; -t is for test.

# sudo nginx -t

Which hopefully will output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

At this moment you can try to start NGINX, but it will most likely fail.

# sudo systemctl start nginx

Job for nginx.service failed because a configured resource limit was exceeded. See "systemctl status nginx.service" and "journalctl -xe" for details.

Check the NGINX status to find the problem.

# sudo systemctl status nginx.service

● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: resources) since Wed 2019-02-13 11:59:42 UTC; 19s ago
  Process: 28161 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)

Feb 13 11:59:41 temp-test systemd[1]: Starting nginx - high performance web server...
Feb 13 11:59:42 temp-test nginx[28161]: nginx: [emerg] open() "/var/run/" failed (13: Permission denied)
Feb 13 11:59:42 temp-test systemd[1]: Failed to read PID from file /var/run/ Invalid argument
Feb 13 11:59:42 temp-test systemd[1]: Failed to start nginx - high performance web server.
Feb 13 11:59:42 temp-test systemd[1]: Unit nginx.service entered failed state.
Feb 13 11:59:42 temp-test systemd[1]: nginx.service failed.


As you can see, SELinux is blocking NGINX, so we need to let SELinux know that NGINX is fine. For that we install policycoreutils-devel, generate a policy module from the audit log with audit2allow, and load it.

# sudo yum install -y policycoreutils-devel
# sudo grep nginx /var/log/audit/audit.log | audit2allow -M nginx
# sudo semodule -i nginx.pp

Try again to start NGINX.

# sudo systemctl start nginx
# sudo systemctl status nginx.service

● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-02-13 12:05:49 UTC; 5s ago
  Process: 28917 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 28918 (nginx)
   CGroup: /system.slice/nginx.service
           ├─28918 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           ├─28919 nginx: worker process
           ├─28920 nginx: cache manager process
           └─28921 nginx: cache loader process

Feb 13 12:05:49 temp-test systemd[1]: Starting nginx - high performance web server...
Feb 13 12:05:49 temp-test systemd[1]: PID file /var/run/ not readable (yet?) after start.
Feb 13 12:05:49 temp-test systemd[1]: Started nginx - high performance web server.

Now nginx is running. Try to visit your domain!

It should show you a 502 Bad Gateway error, because we haven't started Ghost yet. But before we do that, we need to tell SELinux to allow http to use the port Ghost runs on.

# sudo semanage port --add --type http_port_t --proto tcp 2368

We also need to allow NGINX to serve Ghost's files directly and to cache them, which again we need to tell SELinux about.

# sudo chcon -R -t httpd_sys_content_t /home/toor/ghost

And finally, let's start Ghost.

# cd ghost
# ghost start

Visit your domain again!

Let me know if you liked this tutorial.