Synology and Docker are a great combination, as long as you have purchased the correct platform. Make sure you're using an Intel chipset if you want to take full advantage of Docker functionality.

Adding hardware resources (like extra memory) is also a great way to maximize your Synology Docker host. To find out if your Synology Diskstation has an Intel chipset, look no further than the Synology wiki. Below are some useful development tools that you can run on your Synology NAS.


If you haven't already installed Docker on your DSM platform, you should do so by logging into your DSM and opening up Package Center. Once there, you can search for the Docker application and easily install it onto your system.

The first thing you will want to consider is backend data persistence. Pretty convenient that we're running this whole Docker environment on a NAS, right? You can pick your own locations, or you can use my examples.

Docker Installation on DSM

The next thing you'll want to do is enable CLI connectivity to your Diskstation (if you don't already have this enabled). My assumption is that if you're interested in Docker, you probably already have this enabled, but if not, I'll provide the steps below:

  1. Go to your Synology Diskstation Control Panel and select "Terminal and SNMP". Once there, check "Enable SSH Service", and I highly suggest picking a random port within the TCP ephemeral port range.

SSH High Port

There's no sense in documenting this process much further, but you get the idea. My main concern is that you're enabling SSH and using a random high port. For further reading, follow the instructions at Synology's wiki.
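If you want a quick way to pick that random port, here's a small sketch. It assumes `shuf` from GNU coreutils is available on whatever Linux box you're on; the IANA ephemeral range is 49152-65535.

```shell
# Pick a random port from the IANA ephemeral range (49152-65535).
# Assumes GNU coreutils' shuf is on the PATH.
SSH_PORT=$(shuf -i 49152-65535 -n 1)
echo "Suggested SSH port: $SSH_PORT"
```

Whatever number it prints is what you'd enter in the "Terminal and SNMP" panel.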

  2. Next, you'll want to create the persistent data directories for Docker to use. I am going to stick to Synology's method as closely as possible, so if you haven't already run Docker, create a directory at /volume1/docker/.

     DISKSTATION01> mkdir -p /volume1/docker/
     DISKSTATION01> chown root:root /volume1/docker/

Anything directly in /volume1/ should be owned by root.

  3. Now you want to create the directory for your Docker containers. I'll show you how to do that for each example.
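Every example below follows this same mkdir-then-chown pattern. Here's a minimal sketch you can dry-run anywhere without root; `/tmp/docker-demo` is a stand-in for the real `/volume1/docker` path on the NAS.

```shell
# Sketch of the persistent-data layout. BASE stands in for /volume1/docker
# so this can run anywhere; on the NAS, use the real path and run the
# chown as root.
BASE=/tmp/docker-demo
mkdir -p "$BASE/jinkit/ghost"
# chown 1000:1000 "$BASE/jinkit/ghost"   # on the NAS: needs root
ls -ld "$BASE/jinkit/ghost"
```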
Container Examples on Synology

Below are some examples that you can use. These are more developer-focused, but you'll get the idea.

Ghost Blog
(Not this Ghost blog)

The first example is a pretty simple one: a Ghost blog! Ghost is what I'm using to present you with this walk-through. I like it because it's simple and attractive, and it uses Markdown, which is easy to write. Markdown is the same lightweight markup used by GitHub/Gist, GitLab, Atlassian Stash and many other tools, so if I create a blog walk-through, in most cases I can use the same text for my GitHub repositories, like I've done for my Kubelab Examples. Enough with the benefits of Ghost; let's get to the part where we run it.

  1. Create the directory for Ghost and change permission for Docker to use it correctly.

     DISKSTATION01> mkdir -p /volume1/docker/jinkit/ghost
     DISKSTATION01> chown 1000:1000 /volume1/docker/jinkit/ghost

Notice that I've created a /volume1/docker/jinkit/ directory. I recommend doing this so you have one place to store all of your custom Docker configuration data. This keeps your personalized Docker containers separate from what Synology would include, should you choose to use their default Docker applications (like GitLab or Redmine).

I also changed the ownership to 1000:1000, which is the user Synology uses for their default Docker data folder mappings. This is something I want to look into further, though, especially after reading Alexander Morozov's (LK4D4 on GitHub) namespace example, because I'd really like to limit running containers in privileged mode, and so should you (where possible).

  2. Next, run the container.

     DISKSTATION01> docker run -d --name jinkit_ghost --restart=always -p <tcp-high-port>:2368 -v /volume1/docker/jinkit/ghost:/var/lib/ghost ghost:latest

So what I've done is tell Docker to always restart this container (pretty obvious), publish a TCP high port (which you will want to choose yourself) and map the data folder we've created to the default Ghost configuration data folder within the container.

The really nice thing about Docker is that the mapped host directory takes priority over what exists in the container (in my experience). So if there is no data in /volume1/docker/jinkit/ghost, the jinkit_ghost container will create it. If there is data in that folder (like your previous settings and Ghost blog), then it will prioritize that over the container's contents. This is great for destroying and re-creating the container.
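To illustrate that destroy-and-recreate cycle, here's a small helper sketch. The name, port mapping and data path mirror the Ghost example above; the `DRY_RUN` switch and the 32768 port are made-up conveniences so you can preview the commands before running them for real.

```shell
# Helper sketch for destroying and re-creating the Ghost container.
# DRY_RUN=1 (the default here) only prints the docker commands;
# set DRY_RUN=0 on the NAS to actually run them.
NAME=jinkit_ghost
DATA=/volume1/docker/jinkit/ghost
PORT=32768                      # stand-in TCP high port; pick your own
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

run docker stop "$NAME"
run docker rm "$NAME"
run docker run -d --name "$NAME" --restart=always \
    -p "$PORT:2368" -v "$DATA:/var/lib/ghost" ghost:latest
```

Because the data lives in the bind mount, the re-created container picks up right where the old one left off.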

  3. Lastly, go to your Diskstation's address on <tcp-high-port> to access jinkit_ghost.

Ghost Blog

GitLab

GitLab is an extremely powerful development tool. I'm not going to say that I like it better than GitHub, but there are some features that I really like about it. One of those features is the ability to mirror my GitHub repos to my local GitLab instance, and then run Jenkins build tests against the repo. (I keep telling myself that I'm just a network security guy, and not a developer.) More on Jenkins later, because I want to talk about bringing up GitLab first.

  1. Start off by creating a few directories and changing the file permissions for each of them to match what we used above for our Ghost blog.

     DISKSTATION01> mkdir -p /volume1/docker/jinkit/gitlab/postgresql
     DISKSTATION01> mkdir -p /volume1/docker/jinkit/gitlab/redis
     DISKSTATION01> mkdir -p /volume1/docker/jinkit/gitlab/gitlab
     DISKSTATION01> chown -R 1000:1000 /volume1/docker/jinkit/

GitLab Postgres Container

  2. Next, start each of the containers, beginning with Postgres.

     docker run --name jinkit_postgres -d \
         -h db.gitlab.<> \
         --env 'DB_NAME=<db-name>' \
         --env 'DB_USER=gitlab' --env 'DB_PASS=<db-pass>' \
         --volume /volume1/docker/jinkit/gitlab/postgresql:/var/lib/postgresql \
         sameersbn/postgresql:latest

Make sure that it's running correctly with a docker ps command at the command line. If you don't see it, run docker ps -a to see all of your containers (running or stopped), and run docker logs <container-id> to read the log output for further troubleshooting.

It's very important to change the values for <db-name> and <db-pass>, and to make sure they match across each of the containers that reference your jinkit_postgres container.

GitLab Redis Container

  3. Start the Redis container.

     docker run --name jinkit_redis -d \
         -h redis.gitlab.<> \
         --volume /volume1/docker/jinkit/gitlab/redis:/var/lib/redis \
         sameersbn/redis:latest

Well, that's easy enough! Now let's string it all together with the GitLab container.

GitLab GitLab Container

  4. Start the GitLab container.

     docker run --name jinkit_gitlab -d \
         -h gitlab.<> \
         --link jinkit_postgres:postgresql --link jinkit_redis:redisio \
         --publish <ssh-port>:22 --publish <http-port>:80 \
         --env 'GITLAB_PORT=<http-port>' --env 'GITLAB_SSH_PORT=<ssh-port>' \
         --env 'GITLAB_SECRETS_DB_KEY_BASE=<secrets-key>' \
         --env 'GITLAB_HOST=gitlab.<>' \
         --env 'SMTP_ENABLED=true' \
         --env 'SMTP_DOMAIN=www.<>' \
         --env 'SMTP_HOST=smtp.office365.com' \
         --env 'SMTP_PORT=587' \
         --env 'SMTP_USER=<user>@<>' \
         --env 'SMTP_PASS=<smtp-pass>' \
         --volume /volume1/docker/jinkit/gitlab/gitlab:/home/git/data \
         --volume /volume1/docker/jinkit/gitlab/gitlab/config:/home/git/gitlab/config \
         sameersbn/gitlab:latest

OK, first, a couple of tripping points (this is really important).

  • Change <http-port> and <ssh-port> to your own TCP high ports.
  • The <secrets-key> is a long and random alpha-numeric string that you will need to keep secure somewhere. Don't lose it! I would make it very long, and very random.
  • I have included an example for an Office 365 Email account. There are other documented examples out on the world inter-webs for you to explore, should you need something different.
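One way to generate such a long, random secrets key, assuming `openssl` is available on your system:

```shell
# Generate a 128-character random hex string suitable for <secrets-key>.
# Assumes openssl is on the PATH.
SECRETS_KEY=$(openssl rand -hex 64)
echo "$SECRETS_KEY"
```

Store the printed value in your password manager before passing it to GITLAB_SECRETS_DB_KEY_BASE.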

    That's it! Use the same troubleshooting steps with docker ps, docker ps -a, and docker logs if you're having trouble. My original reference was the awesome Docker Registry Hub examples provided by Sameersbn. And for those who are wondering, this is the exact same repo that Synology uses for their Docker packages for GitLab and Redis!

  5. Go to your Diskstation's address on <http-port> to access jinkit_gitlab.

To login, the default user/pass are:

User: root

Pass: 5iveL!fe

GitLab Blog

Jenkins

Oh Jenkins...what an awesome tool! Developers know it well, as it's similar to Travis CI, Bamboo and others. But what makes Jenkins awesome, other than the fact that it's open source, is that it can actually be used to spin up Docker workers based on load! That's pretty powerful, and it's something we'll get to later (when I have more time).

Additionally, you can use Jenkins to take a build declaration file (known as a Dockerfile), test-build the Docker image, and report back any errors. This process [in a nutshell] is known as Continuous Integration. It's pretty cool that we can do this on our own Synology Diskstation!
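As a toy illustration of what such a build test operates on, here's a minimal, made-up Dockerfile (the nginx base image and the page contents are purely for the example):

```shell
# Write a minimal example Dockerfile that a CI job could test-build.
mkdir -p /tmp/ci-demo
cat > /tmp/ci-demo/Dockerfile <<'EOF'
# Toy image: serve one static page with nginx
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF
echo '<h1>hello from CI</h1>' > /tmp/ci-demo/index.html
# On a box with Docker available, the CI build step would be roughly:
#   docker build -t demo/ci-test /tmp/ci-demo
cat /tmp/ci-demo/Dockerfile
```

Jenkins would run that build on every push and flag any step that fails.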

  1. First, let's create the directory and make sure we have the correct permissions.

     DISKSTATION01> mkdir -p /volume1/docker/jinkit/jenkins
     DISKSTATION01> chown 1000:1000 /volume1/docker/jinkit/jenkins
  2. Next, run Jenkins.

     docker run --name jinkit_jenkins -d \
         -h jenkins.<> \
         -p <http-port>:8080 -p <ci-worker-port>:50000 \
         -v /volume1/docker/jinkit/jenkins:/var/jenkins_home jenkins:latest

Make sure to change the <http-port> and <ci-worker-port> to your own values.
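Before settling on those values, it can help to confirm a candidate port isn't already bound on the host. Here's a small sketch using bash's /dev/tcp redirection (a bash-only feature, so run it with bash; the port number is just an example):

```shell
# Check whether a TCP port on localhost is already in use,
# using bash's /dev/tcp virtual files (no extra tools needed).
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 50000; then
  echo "port 50000 is taken - pick another"
else
  echo "port 50000 looks free"
fi
```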

  3. Next, go to your Diskstation's address on <http-port> to access jinkit_jenkins.


Wrapping Up

Synology makes it really nice to use Docker. I have to admit that the inclusion of Docker in my Synology platform has given me a whole new sense of understanding around containers as well. If you've ever wanted to explore containers, perhaps the amazing Synology Diskstation could be your gateway drug into next generation computing. Well done Synology!

I hope that in some small way, these examples provide a little help to someone out there! After you've completed them, check out the awesome little management tool that Synology has included (making use of Docker's powerful API). Have fun with it!

Docker on Synology