Tuning Nginx for Ghost

Ghost is a fast blogging application out of the box and can handle a good number of requests per second, but there is still a lot you can do to improve its performance and availability.

I’ve written this post to share my experience and show how I increased the performance of my own blog.

Gzip

The latest Ghost version already serves gzipped content, but older versions do not, and it will probably become a configurable option in the future. Either way, I prefer to manage gzip in Nginx.

/etc/nginx/nginx.conf

gzip on;
gzip_disable "MSIE [1-6]\.";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 1000;
gzip_buffers 16 128k;
gzip_http_version 1.0;
gzip_types image/svg+xml text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript text/x-component font/truetype font/opentype;

Cache

The biggest impact of this tuning will probably come from caching. Nginx is able to cache almost everything in a very efficient and easy way.

Static files

In Ghost, static content is versioned using a query-string parameter in the GET request, for example:

<link rel="stylesheet" href="/assets/css/main.min.css?v=26e134506f"/>

So we can cache all the static content without any problem.
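Because the version hash lives in the query string, a cache key that includes `$request_uri` (which contains the query string) stores every asset revision as its own cache entry, so a new deploy never serves stale CSS or JS:

```nginx
# $request_uri contains "/assets/css/main.min.css?v=26e134506f",
# so bumping ?v= produces a new cache key and a fresh fetch from Ghost.
proxy_cache_key "$scheme$host$request_uri";
```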

Pages

I’m using Disqus as the comment system, so comments are loaded via Ajax after the page loads. With this setup I can configure Nginx to cache the full page for a longer period, and everything will still work just fine.

API and Admin

Your admin area, which uses the API (in the latest Ghost version), shouldn’t be cached.

Get your hands dirty

The first step is to configure your cache zone:

/etc/nginx/conf/cache.conf

# Create a cache zone called "APP" stored on "/data/nginx/cache"
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=APP:100m inactive=24h max_size=2g;

proxy_cache_key "$scheme$host$request_uri";

# In case of proxy errors (i.e. ghost is not running), use the cache
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_http_version 1.1;

# Add header for debugging
add_header X-Cache $upstream_cache_status;

And now we can create the virtual host configuration:

/etc/nginx/sites-enabled/site.conf

server {
	listen *:80;
    
    # Send all necessary headers to ghost
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    
    # Static files
    location ~* \.(jpg|jpeg|svg|png|gif|ico|css|js|eot|woff)$ {
    	# Use the nginx cache zone called APP
        proxy_cache APP;
        # For valid responses, cache it for 30 days
        proxy_cache_valid 200 30d;
        # For not found, cache it for 10 minutes
        proxy_cache_valid 404 10m;

        # Ghost sends Cache-Control max-age=0 on CSS/JS for now
        # See https://github.com/TryGhost/Ghost/issues/1405?source=c#issuecomment-28196957
        proxy_ignore_headers "Cache-Control";
        access_log off;
        # Allow the browser to cache static files for 30 days
        expires 30d;
        proxy_pass http://proxy_app;
    }
    
    # API
    location ~ ^/(?:ghost/api) {
		# Tell the browser not to cache API calls
        expires 0;
        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
        proxy_pass http://proxy_app;
    }
    
    # Admin
    location ~ ^/(?:ghost) {
    	# For extra security, add basic auth for the admin area
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/include/htpasswd;

		# Tell the browser not to cache API calls
        expires 0;
        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
        proxy_pass http://proxy_app;
    }
    
    # Pages
    location / {
	    # Use the nginx cache zone called APP
        proxy_cache APP;
        # For valid responses, cache it for 1 day
        proxy_cache_valid 200 1d;
        # For not found, cache it for 10 minutes
        proxy_cache_valid 404 10m;

		# Ghost sends cookies and cache headers that break the nginx caching, so we have to ignore them
        proxy_ignore_headers "Set-Cookie";
        proxy_hide_header "Set-Cookie";
        proxy_ignore_headers "Cache-Control";
        proxy_hide_header "Cache-Control";
        proxy_hide_header "Etag";

		# Allow the browser to cache the full page for 10 minutes
        expires 10m;

        proxy_pass http://proxy_app;
    }
}

upstream proxy_app {
	# Use the default ghost port (2368)
    server 127.0.0.1:2368;
}
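The `auth_basic_user_file` referenced in the admin block has to exist. One way to create it without installing apache2-utils is with `openssl passwd`; the path, username, and password below are examples (the vhost above points at /etc/nginx/include/htpasswd):

```shell
# Create an htpasswd entry for user "admin" with password "secret".
# -apr1 produces the Apache MD5 format that nginx's auth_basic accepts.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > /tmp/htpasswd
cat /tmp/htpasswd
```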

SPDY

SPDY is a Google project that aims to form the basis of HTTP/2.0, and it’s already available and stable enough for use. By design, SPDY runs over HTTPS, so you will need an SSL certificate to make it work.

One of the big features of SPDY is multiplexing, which speeds up the download of static content; you also increase your security, since everything runs over HTTPS.

SSL

I’m using a free SSL certificate from StartSSL, but you can use whichever provider you prefer. Follow the normal process to generate your certificate.

Nginx

SPDY versions before 3 are not well supported, and only Nginx 1.5 and newer support SPDY 3.

This depends a lot on which Linux distribution you are using, but the packaged version of Nginx is usually older than 1.5, so before anything else we have to upgrade Nginx.

The easiest way to do this is to use the official stable repository; you can read more about it in the official Nginx documentation. In my case (Ubuntu), I used the ppa:nginx/stable repo.

After adding the repo, just run apt-get update and apt-get upgrade (on Ubuntu/Debian) and you should get the latest stable version of Nginx, which at the moment is 1.6.0.
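On Ubuntu, the steps above look like this (package and repository names may differ on other distributions):

```shell
# Add the official stable PPA and pull the newest Nginx from it
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get upgrade
# Confirm the installed version (it should be 1.5 or newer for SPDY 3)
nginx -v
```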

Now it’s a very simple change. At the vhost file, just change from:

server {
	listen *:80;
    
    # ...
}

To:

server {
	# Add support for SPDY and HTTPS
	listen *:443 ssl spdy;

	# Add your certificate
    ssl_certificate /etc/nginx/ssl/your_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/your_cert.key;
    
    # Some extra optimizations for SSL
    # SSLv3 is insecure, so only enable TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:!ADH:!AECDH:!MD5;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 10m;
    
    # ...
}
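If you just want to try SPDY locally before getting a real certificate, a self-signed one is enough (browsers will warn about it). The paths and CN below are examples; use a CA-issued certificate in production:

```shell
# Generate a throwaway self-signed certificate and key for local testing
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout /tmp/your_cert.key -out /tmp/your_cert.crt
```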

Results

I’ve used ApacheBench (ab) to run some simple tests (very simple — I’m not controlling for a lot of variables, but it’s enough to get an overall idea of the performance).

I’m using a local VM with 2 GB of RAM and 1 CPU, not running over SSL, and just testing the difference between the homepage with and without caching.

Each run makes 5000 requests with 500 concurrent users.

Vanilla Ghost

ab -c 500 -n 5000 http://10.10.0.100/
Server Software:        nginx
Server Hostname:        10.10.0.100
Server Port:            80

Document Path:          /
Document Length:        8224 bytes

Concurrency Level:      500
Time taken for tests:   93.831 seconds
Complete requests:      5000
Failed requests:        4149
   (Connect: 0, Receive: 0, Length: 4149, Exceptions: 0)
Write errors:           0
Non-2xx responses:      3677
Total transferred:      15495649 bytes
HTML transferred:       14480858 bytes
Requests per second:    53.29 [#/sec] (mean)
Time per request:       9383.117 [ms] (mean)
Time per request:       18.766 [ms] (mean, across all concurrent requests)
Transfer rate:          161.27 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   4.3      0      22
Processing:    99 8808 2487.3   9998   10297
Waiting:       99 8808 2487.3   9998   10297
Total:         99 8809 2487.4   9998   10318

Percentage of the requests served within a certain time (ms)
  50%   9998
  66%  10000
  75%  10003
  80%  10020
  90%  10154
  95%  10200
  98%  10245
  99%  10283
 100%  10318 (longest request)

Optimized Ghost

ab -c 500 -n 5000 http://10.10.0.100/
Server Software:        nginx
Server Hostname:        10.10.0.100
Server Port:            80

Document Path:          /
Document Length:        8171 bytes

Concurrency Level:      500
Time taken for tests:   1.911 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      42370000 bytes
HTML transferred:       40855000 bytes
Requests per second:    2616.64 [#/sec] (mean)
Time per request:       191.085 [ms] (mean)
Time per request:       0.382 [ms] (mean, across all concurrent requests)
Transfer rate:          21653.72 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   3.8      0      20
Processing:    39  182  44.5    177     561
Waiting:       38  181  44.5    177     561
Total:         48  183  44.9    177     578

Percentage of the requests served within a certain time (ms)
  50%    177
  66%    184
  75%    193
  80%    199
  90%    219
  95%    231
  98%    233
  99%    234
 100%    578 (longest request)

Conclusion

500 concurrent users is quite a big number; I hope my blog gets that many visitors at some point, but it will probably take a while.

Without Nginx caching, around 83% of the requests simply fail: Node.js is not able to answer all of them in time. It handled only 53.29 requests per second.

With caching, 100% of the requests were successful, with 2616.64 requests per second.

It’s an easy change that will let your blog handle a large volume of visitors while speeding it up and making it more secure.
