
My setup for downloading & streaming movies and TV

I recently signed up for Netflix and am retiring my headless home media PC. This blog post will have to serve as its obituary. The box spent about half of its life running FreeNAS and half running Arch Linux. I’ll briefly talk about my experience with FreeNAS and the migration, then I’ll get to the robust setup I ended up with.
SilverStone DS380
The machine itself cost around $1000 in 2014, powered by an AMD A4-7300 3.8 GHz CPU with 8 GB of memory. The SilverStone DS380 case is functional, quiet, and looks great. The hard drives have been upgraded over the last two years until it had a full complement of six WD Green 4 TB drives - all spinning bits of metal, though.

Initially I had the BSD-based FreeNAS operating system installed, with a single hard drive in its own ZFS pool for TV and movies, and a second ZFS pool made up of five hard drives for documents and photos.

FreeNAS is straightforward to set up and use, provided you only want to do things supported out of the box or by plugins. Each plugin is installed into its own jail, giving you full control over what data is accessible to it. There were several things that I really liked about FreeNAS: the web administration interface worked a charm, and I had no problems using ZFS - it was quite mind-blowing to take disks offline and grow the pool by swapping in larger drives. However, I hadn’t used any BSD systems before this, so making custom jails, installing custom software, and running jails with a VPN were all quite frustrating tasks.

Eventually these frustrations got too annoying, and I decided to use what I knew from working with Docker on my normal operating system: Arch Linux. I had read a lot about btrfs when I first set up FreeNAS, so I was keen to switch filesystems at the same time. The migration was an exercise in care. I had approximately 6 TB of data. The ZFS pool could operate in degraded mode with any two of its disks missing - so in theory I could have made the migration work without using external disks. Long story short, I backed everything up to two disks, replaced the ZFS pool with a btrfs volume, and migrated the data. The ability to add additional disks to the live btrfs volume was very impressive. I almost got caught out by not removing the ZFS partition labels with wipefs. I’ve been warned that raid5 and raid6 aren’t very well tested in btrfs, but I’ve gotten away with it so far.
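The btrfs side of the migration boils down to a handful of commands. This is a rough sketch, with /dev/sdb and /dev/sdc standing in for whichever disks were freed up from the old pool, and raid profiles chosen for illustration:

```shell
# Clear old ZFS partition labels - skipping this is what nearly caught me out
wipefs --all /dev/sdb

# Create the initial btrfs volume on the first freed-up disk and mount it
mkfs.btrfs /dev/sdb
mount /dev/sdb /mnt/drive

# ...copy data across, free another disk from the old pool, then grow the volume live:
wipefs --all /dev/sdc
btrfs device add /dev/sdc /mnt/drive

# Rebalance to spread existing data over the new disk
# (and optionally convert data/metadata to the desired raid profiles)
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/drive
```

The `device add` and `balance` steps can be repeated as each old disk is emptied, which is what made the incremental migration possible.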

I had these services running on both operating systems:
  • Emby for streaming content to Chromecast and other devices
  • CouchPotato for finding movie torrents
  • SickRage for finding TV show torrents
  • Headphones for finding music torrents
  • Transmission for downloading the torrents
  • OpenVPN for basic hiding of downloading activity
  • nginx as a webserver & reverse proxy to help access the above services and host local content
The Arch setup had:
  • systemd-controlled Docker containers for each service
  • btrfs subvolumes created for each Docker data volume
  • anything that even thinks about torrents accessing the internet via the VPN
With systemd it is very easy to set up dependencies between services, and with Docker it is easy to link containers together. I’ve had a bit of experience running various Docker containers under CoreOS, so it wasn’t much effort to get these services running under systemd.

They all follow the same general template:

[Unit]
Description=Some Dockerized Service
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill container-name
ExecStartPre=-/usr/bin/docker rm container-name
ExecStartPre=/usr/bin/docker pull user/upstream-container-name

ExecStart=/usr/bin/docker run --net=host --rm \
    -e TZ="Australia/Sydney" \
    -v /mnt/drive/server-configs/container-name:/config \
    -v /mnt/drive/Video:/media \
    -v /mnt/drive/Music:/music \
    -v /mnt/drive/Downloads:/downloads \
    --name=container-name \
    user/upstream-container-name

ExecStop=/usr/bin/docker stop -t 2 container-name

[Install]
WantedBy=multi-user.target

  • I’m directly mounting volumes from /mnt/drive.
  • This setup pulls a fresh image on every restart; a more robust approach would be to pin the container version.
  • Note that directives prefixed with =- are allowed to fail without consequence.
  • This template doesn’t link to any other Docker containers.
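Pinning would only change two lines of the template - a sketch, with the tag 1.2.3 standing in for whatever release you have actually tested:

```
ExecStartPre=/usr/bin/docker pull user/upstream-container-name:1.2.3

ExecStart=/usr/bin/docker run --net=host --rm \
    ...
    user/upstream-container-name:1.2.3
```

The trade-off is that updates then happen when you choose to bump the tag, not silently on every restart.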
An example where one service depends on another is transmission depending on the VPN:

[Unit]
Description=Transmission Server
After=vpn.service
Requires=vpn.service

[Service]
ExecStart=/usr/bin/docker run \
    --net=container:vpn \
    ...
Management is all done with the systemctl tools. I’ve enabled all the services
to start at boot.
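In practice that management is just a few commands per service (unit names are illustrative):

```shell
# Start the service at boot, and start it now
systemctl enable transmission.service
systemctl start transmission.service

# Check on it, or follow its logs
systemctl status transmission.service
journalctl -fu transmission.service
```

Because of the After=/Requires= directives, starting transmission will also pull up the VPN unit it depends on.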

Since each service is really just a server running inside Docker, setting them up was mostly a matter of choosing which container image to settle on for each one.
The nginx proxy is a nice addition because it lets you decide how to expose each service. Instead of using a different port for each service (port 8096 for Emby, 9091 for Transmission, etc.), you can visit http://your-machine/emby or http://your-machine/transmission. I set up a simple home page, served by nginx, that points users at the correct services.
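The nginx side of this is essentially one location block per service. A minimal sketch, assuming the services listen on their default ports on the same host and that each app is configured with a matching base URL where needed:

```
server {
    listen 80;

    # The simple home page linking to each service
    root /srv/http;

    # Transmission's web UI already lives under /transmission/
    location /transmission/ {
        proxy_pass http://127.0.0.1:9091;
    }

    location /emby/ {
        proxy_pass http://127.0.0.1:8096;
    }
}
```

Everything is then reachable through port 80, and the per-service ports never need to be exposed beyond localhost.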

I set up the mount points so that each service could only access what it needs. For example, Transmission can only write files in two download directories - one for music, and one for TV/movies. (Actually that isn’t 100% accurate: in order not to lose in-progress downloads when the Docker container is restarted, the incomplete-downloads folder is also a mount point.)
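Concretely, the Transmission container’s mounts look something like this (the host paths are illustrative):

```
ExecStart=/usr/bin/docker run --rm \
    --net=container:vpn \
    -v /mnt/drive/Downloads/music:/downloads/music \
    -v /mnt/drive/Downloads/video:/downloads/video \
    -v /mnt/drive/Downloads/incomplete:/incomplete \
    --name=transmission \
    ...
```

Nothing else on the drive - photos, documents, configs for other services - is visible inside the container at all.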

You can see the service files for each container on GitHub.
