S3 Media Server Login

Introduction

First, it has been a while since I’ve written anything. A lot has been happening in the last few months: I’ve changed jobs and we’ve had the holidays to contend with. However, I’ve still been working on some projects; I just haven’t been good about blogging about them.

In short, I’ve been using the S3 media server the way I had left it, and started to think about how I might want to improve it. Specifically, I have been running it locally on my laptop, and wanted to move it onto one of my web servers. When I considered that, I realized I didn’t want to get nasty lawyer-letters telling me to stop sharing my media with the world, so I will need some sort of authentication to ensure that only valid users can access my content.

The Sketch

Adding authentication surely complicates things. In the spirit of learning, I decided to try to do this in the “right-ish” way, which meant Docker-izing my setup so that I could separate the front-end UI from the back-end logic. I settled on three Docker containers: nginx as a reverse proxy to direct traffic to the right places, my Go backend server, and the front-end.

For the front-end I somewhat arbitrarily chose ReactJS: it’s fairly lightweight and easy to work with, and it is supported by my WebStorm IDE (again, by JetBrains).

Security

I decided that I wanted to implement two types of authentication: Google OAuth and U2F with my Yubikey. This was somewhat arbitrary, but I figured Google OAuth would be pretty simple and likely the one I’d want to use, and the U2F authentication seemed fun. I bought a Yubikey a while ago and I’ve enjoyed learning how to use it.

The Docker Setup

Backend

Moving the Go backend into a Docker container was fairly straightforward. As I proceeded, however, it struck me that I wanted to move to a REST API using JSON objects to describe content. This conversion was pretty simple at the outset. I decided to use the npm-based content server for the front-end (at least to start), so I could drop the HTML serving from the backend and serve only JSON.

I realized that this backend would be responsible for two things: validating a user and serving up JSON responses for media. Using gorilla/mux in conjunction with negroni and go-jwt-middleware, I was able to configure two “domains”: an authentication path and an authenticated path. The authentication path requires no prior authorization and is used to validate a user, while the authenticated path is used to serve up media results.
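
To make that concrete, here is a rough sketch of how that wiring might look. This is not my exact code: the handler bodies, route names, and signing key below are placeholders, and the token that the middleware checks is the one our backend will issue (more on that below).

package main

import (
	"encoding/json"
	"log"
	"net/http"

	jwtmiddleware "github.com/auth0/go-jwt-middleware"
	jwt "github.com/dgrijalva/jwt-go"
	"github.com/gorilla/mux"
	"github.com/urfave/negroni"
)

// Placeholder secret used to sign and verify the tokens our backend issues.
var signingKey = []byte("change-me")

// Authentication path: exchanges a Google sign-in response for one of our tokens.
func authenticateHandler(w http.ResponseWriter, r *http.Request) {
	// Validate the Google JWT here, then respond with our own token.
}

// Authenticated path: serves JSON descriptions of media instead of HTML.
func listMediaHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode([]string{"example.mp4"}) // placeholder payload
}

func main() {
	// Middleware that rejects any request without a valid Bearer token.
	jwtMiddleware := jwtmiddleware.New(jwtmiddleware.Options{
		ValidationKeyGetter: func(token *jwt.Token) (interface{}, error) {
			return signingKey, nil
		},
		SigningMethod: jwt.SigningMethodHS256,
	})

	r := mux.NewRouter()

	// No prior authorization required here; this is where you get a token.
	r.HandleFunc("/api/v1/authenticate", authenticateHandler).Methods("POST")

	// Everything under /api/v1/media requires a valid token.
	media := mux.NewRouter().PathPrefix("/api/v1/media").Subrouter()
	media.HandleFunc("/list", listMediaHandler).Methods("GET")
	r.PathPrefix("/api/v1/media").Handler(negroni.New(
		negroni.HandlerFunc(jwtMiddleware.HandlerWithNext),
		negroni.Wrap(media),
	))

	log.Fatal(http.ListenAndServe(":8081", r))
}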

Frontend

I set up a fairly generic ReactJS front-end for development and went to work figuring out how to connect everything together. At a high level, I was able to use the react-google-login package and its <GoogleLogin> component to issue the authorization request. This let me set up callback functions for receiving the JWT (JSON Web Token) from Google after the user has signed in. Aside from following the very helpful Google guide for integrating Google Sign-In for Websites, the Google-side setup was quite boring, so I’ve left that part out.

In short, Google will respond with a JWT that contains user information that we can use to authenticate someone; in my case, I’ll simply check that the JWT is signed by Google for my G-Mail address. In reading about user authentication, some folks seem pretty adamant that you should stay away from JWTs for user sessions, while others feel that although there are corner cases JWTs aren’t great for, in general they are fine. And given that they are what Google issues, I’m just going to use them for now, consequences be damned. If I need to change it in the future, I will.
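
Here is a rough sketch of what that check could look like on the backend; the function and type names are mine, and this version just hands the token to Google’s tokeninfo endpoint rather than verifying the signature locally, which is fine for a single-user hobby server but not something you’d want for real traffic. I’ll cover the real backend flow in the next post.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// The subset of the tokeninfo response we care about; all values come back as strings.
type tokenInfo struct {
	Audience      string `json:"aud"`
	Email         string `json:"email"`
	EmailVerified string `json:"email_verified"`
}

// validateGoogleToken returns true only if the token was issued for our client ID
// and belongs to the one (verified) e-mail address we allow.
func validateGoogleToken(idToken, clientID, allowedEmail string) (bool, error) {
	resp, err := http.Get("https://www.googleapis.com/oauth2/v3/tokeninfo?id_token=" + url.QueryEscape(idToken))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("tokeninfo returned status %d", resp.StatusCode)
	}

	var info tokenInfo
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return false, err
	}
	return info.Audience == clientID &&
		info.EmailVerified == "true" &&
		info.Email == allowedEmail, nil
}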

Reverse Proxy (nginx)

Using a reverse proxy makes my life easier by allowing me to split functionality across containers while still serving everything from a single IP address and port. This will be important later when I consider U2F authentication. Basically, I use an nginx container to route traffic to the correct container (e.g. to the backend for authentication).

Here are the contents of my /etc/nginx/nginx.conf file:

error_log /dev/stdout info;

events { worker_connections 1024; }

http {
    upstream apiserver {
        # These are references to our backend containers, facilitated by
        # Compose, as defined in docker-compose.yml
        server apiserver1:8081;
    }

    upstream frontendserver {
        # These are references to our front-end containers, facilitated by Compose.
        server frontend1:3000;
    }

    server {
        listen 8080;
        server_name localhost;

        location / {
            proxy_pass http://frontendserver;
            proxy_set_header Host $host;
        }

        location /api {
            proxy_pass http://apiserver;
            proxy_set_header Host $host;
        }
    }
}

Note that this setup is intended to use docker-compose to bring up the various Docker containers that work together as my server. Compose allows the containers to find each other on a shared network (note the references to apiserver1, for example).

Looking at this configuration file, you can see that this container listens on port 8080 and proxies the base URL to the frontend server (the ReactJS stuff) on port 3000. Calls from the frontend to the backend go through the /api path, which is forwarded on to the apiserver on port 8081.

docker-compose.yml

Finally, here is the docker-compose file that I use to bring up the various containers together.

version: '2'

services:
  # Front-end code lives in its own container, and depends on the API
  frontend1:
    build: static
    environment:
      - PUBLIC_URL=https://jfisher.tv
    links:
      - apiserver1

  # nginx reverse proxy
  nginx:
    build: nginx
    ports:
      - "127.0.0.1:8080:8080"
    links:
      - frontend1

  # API code runs in its own container
  apiserver1:
    build: .
    environment:
      - APP_ID=https://jfisher.tv
    env_file:
      - aws-creds.env

Of note here is that I am setting environment variables for the API server and the frontend. Specifically, the public URL of the service matters because these services are running in Docker containers that aren’t really aware of the outside world, and of course the AWS credentials are needed for accessing the S3 bucket.
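
(I haven’t shown aws-creds.env; it would just hold the standard AWS SDK environment variables, something like the following, with the real values kept out of version control.)

# aws-creds.env (placeholder values)
AWS_ACCESS_KEY_ID=<access key id>
AWS_SECRET_ACCESS_KEY=<secret access key>
AWS_REGION=<bucket region>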

Google OAuth 2.0

On the frontend, this step is pretty simple. As I mentioned before, using the react-google-login package takes care of the setup for this so that I can simply serve up the <GoogleLogin> component with a callback:

// Called by <GoogleLogin> on success; POST the Google response to our
// backend and keep the token it returns.
googleResponse = (response) => {
	fetch(BASE_USER_ENDPOINT, {
	    method: 'POST',
	    body: JSON.stringify(response),
	}).then(res => res.text())
	  .then((token) => {
		this.setState({isAuthenticated: true, token: token});
	  });
};

onFailure = (error) => {
	console.log("ERROR: " + error);
};

render() {
	return (
		<div>
			<GoogleLogin
				onSuccess={this.googleResponse}
				onFailure={this.onFailure}
				clientId={config.GOOGLE_CLIENT_ID}
				buttonText="Login"
			/>
		</div>
		);
}

This simplified code presents a “Login” button that, when clicked, prompts the user to log in to their Google account. When they have finished, the callback is triggered with a response parameter from Google. Note the GOOGLE_CLIENT_ID value passed to the clientId prop; Google uses this to determine which app is requesting the user’s login.

After the user has logged in, this code will then post the response to the BASE_USER_ENDPOINT URL, which is set as follows:

var BASE_USER_ENDPOINT = process.env.PUBLIC_URL + "/api/v1/authenticate";

If you refer to the Docker setup, PUBLIC_URL is set in the docker-compose configuration.

The last step is to save our backend’s response token in the React state for use in further calls. It will be sent as a Bearer token so that the API can verify that we are authorized to make privileged calls. We’ll see more of that in the backend code. Note that React state is probably the wrong place to keep this, since a page refresh or reload will lose the value and force the user to log back in; we probably want to persist the token somewhere more durable.

Of note here is that the token in the response from our API is not the same token as the Google JWT; this is to allow for a unified experience after login. Regardless of whether you authenticated with Google or with U2F, once you’re logged in, we no longer want to be beholden to the method you used to log in.
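
I’ll get into this properly next time, but roughly speaking, minting our own token might look something like the sketch below. The claims, lifetime, and secret are all placeholders, and signingKey is the same placeholder secret the JWT middleware checks against in the earlier backend sketch.

package main

import (
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

// Same placeholder secret used by the JWT middleware in the router sketch.
var signingKey = []byte("change-me")

// issueToken mints our own JWT once the Google (or, later, U2F) check has passed.
func issueToken(email string) (string, error) {
	claims := jwt.MapClaims{
		"sub": email,                                 // who this token belongs to
		"exp": time.Now().Add(24 * time.Hour).Unix(), // placeholder lifetime
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
}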

Summary

Ok, that is all for tonight! Next time we will look more at the backend and what it will take to authenticate a Google JWT, then to generate our own token for the frontend. If I have enough time, I will start looking into the U2F stuff, which is pretty interesting in its own right!