Quick Webserver with Nginx and Docker

Need to serve files on localhost? Do it in 10 seconds using nginx, Docker, and the following one-liner:

docker run --name webserver -d -p 8080:80 -v /some/content/to/serve:/usr/share/nginx/html nginx

See your files in your browser by navigating to localhost:8080. By default, the server looks for an index.html in the mounted volume, so add one to serve as a homepage. See here for more information about the nginx Docker image
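If the mounted directory has no index.html yet, a quick placeholder (using the same /some/content/to/serve path as the command above) is enough to get a homepage:

echo '<h1>Hello from nginx</h1>' > /some/content/to/serve/index.html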

-d – Detached mode. Runs the container as a background process

-v – Mount a volume. Maps files from your local machine to a location inside the container. The above example mounts /some/content/to/serve onto the directory nginx serves files from by default.

-p – Publish ports. Maps port 80 inside the container to port 8080 on the host machine.
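To sanity-check the server from a terminal, and to stop and remove the container when you are done (webserver is the name given in the run command above):

curl localhost:8080
docker stop webserver
docker rm webserver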

The result

Want to take it to the next level and serve these files over the web? Follow the steps here!

Using sshuttle

Problem: You need access to a machine on a private network, and the machine's IP address is NOT public

Solution: If you have ssh access to a machine on the target network, use sshuttle to create a proxy that allows access to the rest of the network

Block diagram of an ssh connection
  1. Establish the ssh tunnel: sshuttle -r <USER>:<PASSWORD>@<Host IP Address> <Allowable-Connections> -D (a concrete example follows this list)
    1. -r – flag to specify the remote host and user/password on the command line
    2. USER – user of the host machine
    3. PASSWORD – password to the host machine
    4. Host IP Address – IP address of the host proxy server
    5. Allowable-Connections – A range of IP addresses, defined in CIDR notation, whose traffic should be routed through the ssh tunnel. Pass 0/0 to route all traffic.
    6. -D – flag to run sshuttle in a background process
  2. The tunnel is an open connection into the private network, routed through the ssh connection to a server on that network
  3. Access a machine on the private network!
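A minimal sketch, assuming key-based ssh access as the user deploy to a reachable host at 203.0.113.10 (both hypothetical) and a private subnet of 10.0.0.0/24:

sshuttle -r deploy@203.0.113.10 10.0.0.0/24 -D

Once the tunnel is up, addresses in 10.0.0.0/24 are reachable as if you were on the private network, e.g. ssh 10.0.0.5 or curl http://10.0.0.5.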

Helpful links:

Sshuttle Docs

Sshuttle Video Tutorial

Sshuttle Manual Page

Dockerize your runnable application

Ever run into environment issues when running other people’s code? Don’t have the proper modules in your environment? Don’t want to modify your existing environment just to run a CLI program?

Dockerize it! Make the application runnable on any machine using Docker. The steps below use date, a program included in the default Alpine image, as an example.

Step 1: Create a Dockerfile wrapping your application

# Base image
FROM alpine

# Copy your runnable program into the image with the COPY
# instruction (not needed here, since date ships with Alpine)

# The first element of the ENTRYPOINT is the command executed when
# the docker container is run
ENTRYPOINT ["date"]

Step 2: Build the Dockerfile, tagging the image so it can be run by name

docker build -t [image-name] .

Step 3: Run the program with docker run. Any arguments after the image name are passed to the ENTRYPOINT

docker run [image-name] -h
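For a CLI program of your own, a minimal sketch might look like the following (the script name my-tool.sh and the tag my-tool are hypothetical placeholders):

# Dockerfile
FROM alpine
# Copy your program into the image and make it the entrypoint
COPY my-tool.sh /usr/local/bin/my-tool
RUN chmod +x /usr/local/bin/my-tool
ENTRYPOINT ["my-tool"]

Build and run it the same way:

docker build -t my-tool .
docker run my-tool --help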

Full example

Slack Stats

I wanted an app that showed how often messages in a Slack channel were interacted with, whether through replies in threads or Slack reactions. The linked application is a dockerized program that downloads Slack conversations and user information into a database, which can then be queried for statistics.

Tech Stack

  • JavaScript
  • PostgreSQL
  • Docker / Docker Compose

Implementing a Concourse Resource

I felt a few pains using Concourse to manage my team’s very large testing & release structure. One major pain was the way script dependencies were used within tasks and jobs. To give every task access to a consistent set of scripts and programs, we built a docker image containing those scripts and executables.

After a while the docker image bloated to a few GBs, which wasn’t ideal: whenever a task ran for the first time, it had to pull that large image, slowing pipeline runs. Overall it’s probably an anti-pattern; a Concourse image should contain only what’s required to run its associated task.

I had a “novel” idea: a custom Concourse resource that downloads the executable binaries a task needs, instead of preloading them into docker images. Source code here
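For context, a Concourse resource type is a container image that provides check, in, and out executables under /opt/resource. A minimal sketch of what such a resource's in script could look like for this idea (the url and ref fields and the tool name are hypothetical, and this is a sketch rather than the linked implementation):

#!/bin/sh
set -e

# Concourse passes the destination directory as the first argument
dest="$1"

# The resource's source configuration and requested version arrive as JSON on stdin
# (assumes jq and wget are present in the resource image)
payload="$(cat)"
url="$(echo "$payload" | jq -r '.source.url')"
ref="$(echo "$payload" | jq -r '.version.ref')"

# Fetch the requested binary into the task's input directory
wget -q -O "$dest/tool" "$url/$ref/tool"
chmod +x "$dest/tool"

# Report the fetched version back to Concourse on stdout
printf '{"version": {"ref": "%s"}}\n' "$ref"

A task that lists the resource as an input can then call the binary from that input directory instead of relying on it being baked into the task image.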
