Strategies for serving static content with reactive JavaScript Single Page Applications

There are a number of popular and excellent JavaScript web frameworks to choose from when creating a single page application (SPA).

Developers benefit from using these frameworks through a combination of:

  • powerful and productive tooling
  • data binding
  • reactive interfaces
  • publish and subscribe mechanisms
  • separation of concerns

What all of these frameworks have in common is that they are used to create a single page application. A single page application is a web application that provides a more fluid user experience by presenting the entire application as a single page. In an SPA, all HTML, JavaScript, CSS, and assets are retrieved with a single page load or dynamically loaded and added to the page as necessary. As a visitor interacts with the page, the page does not reload and control is not transferred to other pages.

Single Page, Single URL?

Single page does not mean that an SPA only responds to a single URL. In fact, mapping URLs to routes is an important part of almost all SPAs. For many applications, routing is used to provide:

  • bookmarking - so that users can bookmark a URL in their web browser to save content they want to come back to later.
  • sharing - so that users can share content with others by sending a link to a certain "page".
  • navigation - URLs are used to drive the web browser's back/forward functions.
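Whatever framework you choose, client-side routing ultimately boils down to mapping URL paths to views. Here is a minimal, framework-agnostic sketch of that idea; the function names and routes are hypothetical, not taken from any particular framework:

```javascript
// Minimal sketch of SPA route matching.
// Pattern segments starting with ":" capture path parameters.
function matchRoute(pattern, path) {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null;
    }
  }
  return params;
}

// A routing table maps bookmarkable, shareable URLs to views.
const routes = [
  { pattern: "/articles/:id", view: "article" },
  { pattern: "/about", view: "about" },
];

function resolve(path) {
  for (const route of routes) {
    const params = matchRoute(route.pattern, path);
    if (params) return { view: route.view, params };
  }
  return { view: "notFound", params: {} };
}
```

Real frameworks layer history management (pushState, hashchange events) on top of this kind of matching, which is how the browser's back/forward buttons work without triggering a page reload.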

Static content

In addition to the dynamically generated views provided by your SPA, you will often need to serve static content alongside or within your SPA. Examples of static content include:

  • downloads
  • PDF or other files
  • static HTML pages

In this article we will discuss strategies for serving static content with your single page application. In particular we will focus on:

  • modularity of the application
  • separation of concerns
  • maintainability of your deployment

There are three typical strategies for serving static content within or alongside an SPA:

  • using the routing mechanism of the view framework
  • using the routing capabilities of your controller
  • using a reverse proxy to pre-process URL requests

In almost all cases the first two approaches will seem easier to implement up front, but they lead to issues with modularity, separation of concerns, and, ultimately, the maintainability of the deployment.

  • modularity - embedding routing logic into your SPA runs counter to modularity. If the URL to your static resource changes, you need to update and redeploy your entire application.
  • separation of concerns - adding routing logic to your controller layer just moves the modularity problem to a different layer without solving the problem.
  • maintainability - needing to redeploy either your front- or back-end service to update a static URL impacts maintainability. Ask yourself: in six months' time, are you going to remember where the routing logic is located? Do you want to schedule downtime to update a single PDF or download URL?

Using an Nginx reverse-proxy to serve static content

A sensible approach to managing static content is to use a reverse proxy to tie together all of the layers of your application including:

  • the single page application
  • any back-end controllers
  • static content (documents, downloads, HTML pages, etc.)

Example configuration (/etc/nginx/conf.d/default.conf):

server {
    listen       80;
    server_name  _;

    root /usr/share/nginx/html;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Proxy "";

    location / {
        # proxy this request to the single page application
        proxy_pass http://127.0.0.1:8000/;
    }

    location /controller {
        # proxy this request to the controller
        proxy_pass http://127.0.0.1:8001/;
    }

    location /downloads/ {
        # do nothing and let nginx handle this as usual
        # the static content should be located in /usr/share/nginx/html/downloads
    }
}

In the example above, we have a stripped down default.conf file for Nginx. This could also be in a site-specific configuration file in /etc/nginx/sites-enabled/some_site.conf - please refer to the Nginx Beginner's Guide for more details.

Let's break this down:

Nginx is listening on port 80 for HTTP requests. The server name is set to _, which acts as a catch-all, so this server block will respond to any request arriving on port 80 regardless of the hostname or IP address used to reach the server.

listen       80;
server_name  _;

Nginx will look for static content starting in the /usr/share/nginx/html/ folder.

root /usr/share/nginx/html;

When we proxy requests we want to set the appropriate proxy headers. We are also setting the Proxy header to "" to mitigate the HTTPoxy vulnerability.

proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";

The following directives tell Nginx to proxy requests to / (e.g. www.mysite.com) and /controller (e.g. www.mysite.com/controller) to other web applications on this server that are listening on ports 8000 and 8001 respectively. Depending on your choice of framework these could be other Nginx servers, Node.js applications, Apache, or other web or application servers.

location / {
    # proxy this request to the single page application
    proxy_pass http://127.0.0.1:8000/;
}

location /controller {
    # proxy this request to the controller
    proxy_pass http://127.0.0.1:8001/;
}

The next directive tells Nginx to serve static content itself when it receives a request under /downloads/.

location /downloads/ {
    # do nothing and let nginx handle this as usual
    # the static content should be located in /usr/share/nginx/html/downloads
}

A request to www.mysite.com/downloads/file.pdf would serve file.pdf if there is a file called file.pdf located at /usr/share/nginx/html/downloads/file.pdf (the root directive above resolves /downloads/ relative to /usr/share/nginx/html).
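If you would rather keep downloads outside the document root (for example in /usr/share/nginx/downloads, a sibling of html), the alias directive maps the URL prefix onto that folder instead. A sketch, assuming that layout:

```nginx
location /downloads/ {
    # map /downloads/file.pdf to /usr/share/nginx/downloads/file.pdf
    alias /usr/share/nginx/downloads/;
}
```

Unlike root, which appends the full request URI to the configured path, alias replaces the matched location prefix with the configured path.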

Sidebar: Running multiple applications on different ports

You may be wondering how this is possible. It can get quite complicated to run multiple web servers and applications. This is where running web applications inside of containers is a real game changer.

Need apache and nginx to coexist on a server? Put them in their own containers.

Need to run an application that runs best on Ubuntu on a CoreOS server? Containers to the rescue.

By using containers you increase the portability of your application and make it much easier to evolve and scale. For example, if you wanted to offload your controller to another server, you would simply deploy the container to the new server and change the address in the proxy_pass directive for the /controller location!

In the example above, you could use three containers:

  • one running Nginx with the configuration file above (listening on port 80)
  • one running Nginx that serves your single page application (listening on port 8000)
  • one running Node.js that handles your controller logic (listening on port 8001)
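One way to wire the three containers together is Docker Compose. This is only a sketch; the SPA and controller image names are hypothetical placeholders for whatever you build:

```yaml
# Sketch of the three-container layout; my-spa-nginx and
# my-controller-node are hypothetical image names.
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./downloads:/usr/share/nginx/html/downloads:ro
  spa:
    image: my-spa-nginx
    ports:
      - "8000:80"
  controller:
    image: my-controller-node
    ports:
      - "8001:8001"
```

Note that on a shared Compose network the proxy could also reach the other containers by service name (e.g. http://spa:80 instead of http://127.0.0.1:8000), which removes the need to publish the internal ports on the host at all.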

Conclusion

The benefits of this approach are:

  • Enforcing modularity, separation of concerns and maintainability.
  • Updating static content can be as simple as uploading a new file to a single file location without having to change your code.
  • Making changes without planning any downtime for your application.

If you package your application in containers you also get the benefit of portability and even greater modularity, separation of concerns, and maintainability.