What are the pros and cons of using Named vs Anonymous volumes in Docker for self-hosting?

I’ve always used “regular” host-path mappings (which are technically bind mounts, not anonymous volumes), and that’s what is usually in official docker-compose.yml examples for various apps:

volumes:
  - ./myAppDataFolder:/data

where myAppDataFolder/ is in the same folder as the docker-compose.yml file.

As a self-hoster I find this neat and tidy; my docker folder has a subfolder for each app. Each app folder has a docker-compose.yml, .env and one or more data-folders. I version-control the compose files, and back up the data folders.

However some apps have docker-compose.yml examples using named volumes:

services:
  mealie:
    volumes:
      - mealie-data:/app/data/
volumes:
  mealie-data:

I had to dig through the documentation (https://docs.docker.com/engine/storage/volumes/) to find that the volume is actually named mealie_mealie-data, because Compose prefixes the project name:

$ docker volume ls
DRIVER    VOLUME NAME
...
local     mealie_mealie-data

and it is stored in /var/lib/docker/volumes/mealie_mealie-data/_data

$ docker volume inspect mealie_mealie-data
...
  "Mountpoint": "/var/lib/docker/volumes/mealie_mealie-data/_data",
...
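
Apparently a top-level volume can also be given an explicit name: to skip the project-name prefix, something like:

```yaml
volumes:
  mealie-data:
    name: mealie-data   # shows up as "mealie-data" instead of "mealie_mealie-data"
```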

I tried googling the why of named volumes, but most answers talked about things that sounded very enterprise-y: Docker Swarm, and how all state should live in “the database” so you shouldn’t ever need to touch the actual files backing a container’s volume.

So to summarize: named volumes, why or why not? What are your preferences, given that we are self-hosting and not running huge enterprise clusters?

  • Semi-Hemi-Lemmygod@lemmy.world · 10 points · 2 hours ago

    Named volumes let you specify more details like the type of driver to use.

    For example, say you wanted to store your data in Minio, which is like S3, rather than on the local file system. You’d make a named volume and use the s3 driver.

    Plus it helps with cross-container stuff. Like if you wanted sabnzbd, sonarr and radarr to use the same directory, you’d only need to specify it once.
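
    A sketch of that shared-volume case (the image names and mount path are just placeholders):

    ```yaml
    services:
      sonarr:
        image: lscr.io/linuxserver/sonarr
        volumes:
          - media-downloads:/downloads
      radarr:
        image: lscr.io/linuxserver/radarr
        volumes:
          - media-downloads:/downloads
      sabnzbd:
        image: lscr.io/linuxserver/sabnzbd
        volumes:
          - media-downloads:/downloads

    volumes:
      media-downloads:
    ```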

    • mbirth@lemmy.ml · 4 points · 1 hour ago

      Or just something as simple as using an SMB/CIFS share for your data. Instead of mounting the share on the host before running your container, you can have Docker do it by specifying the volume like this:

      services:
        my-service:
          ...
          volumes:
            - my-smb-share:/data:rw
      
      volumes:
        my-smb-share:
          driver_opts:
            type: "smb3"
            device: "//mynas/share"
            o: "rw,vers=3.1.1,addr=192.168.1.20,username=mbirth,password=supersecret,cache=loose,iocharset=utf8,noperm,hard"
      

      For type you can use anything for which you have a mount.<type> tool available, e.g. on my Raspberry Pi this would be:

      $ ls /usr/sbin/mount.*
      /usr/sbin/mount.cifs*  /usr/sbin/mount.fuse3*       /usr/sbin/mount.nilfs2*  /usr/sbin/mount.ntfs-3g@  /usr/sbin/mount.ubifs*
      /usr/sbin/mount.fuse@  /usr/sbin/mount.lowntfs-3g@  /usr/sbin/mount.ntfs@    /usr/sbin/mount.smb3@
      

      And the o parameter takes everything you would pass as options to the mount command (i.e. the 4th field in /etc/fstab). In the case of smb3, you can run mount.smb3 --help to see a list of available options.

      Doing it this way, Docker makes sure the share is mounted before starting the container. And if you move the compose file to a different host, it’ll just work, as long as the share is reachable from the new location.
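
      The same trick works for other filesystems, e.g. NFS if mount.nfs is available. A sketch, with the server address and export path being assumptions:

      ```yaml
      volumes:
        my-nfs-share:
          driver_opts:
            type: "nfs"
            device: ":/export/data"
            o: "addr=192.168.1.20,rw,nfsvers=4.1"
      ```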

    • just_another_person@lemmy.world · 4 points · 2 hours ago

      On a simpler level, it’s just an organizational thing. Lots of other tools consume data from Docker, and digging through a bunch of random hashes to figure out what is what is insane.

  • peregus@lemmy.world · 3 points · 2 hours ago

    Good question, I’m interested too. Personally I use this kind of mapping:

    volumes:
      - /var/docker/container_name/data:/data
    

    because it helps me with backups, while I keep all the docker-compose.yaml files in /home/user/docker-compose/container_name so I can mess with the compose folder without worrying too much about what’s inside it 🙈
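
    A backup along those lines can be a rough loop that tars each app's data directory into its own archive. A sketch; DATA_ROOT would be /var/docker in the layout above, but here it's a throwaway directory with sample data so the sketch runs anywhere:

    ```shell
    #!/bin/sh
    # Rough sketch: one tarball per app data directory.
    # DATA_ROOT stands in for /var/docker; a temp dir with sample
    # data is used so the sketch is self-contained.
    DATA_ROOT=$(mktemp -d)
    BACKUP_DIR=$(mktemp -d)
    mkdir -p "$DATA_ROOT/myapp/data"
    echo "sample" > "$DATA_ROOT/myapp/data/file.txt"

    for app in "$DATA_ROOT"/*/; do
      name=$(basename "$app")
      tar -czf "$BACKUP_DIR/$name.tar.gz" -C "$app" .
    done

    ls "$BACKUP_DIR"
    ```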

  • BrianTheeBiscuiteer@lemmy.world · 1 point · 1 hour ago

    I like named volumes, externally created, because they’re less likely to be cleaned up without explicit deletion. There are also a few occasions where I need to jump into a volume to edit files, but the regular container doesn’t have the tools I need, so it’s easier to mount the volume by name rather than by hash value.
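
    The compose side of that might look like this, assuming the volume was created once up front with docker volume create mealie-data; external volumes survive docker compose down -v:

    ```yaml
    volumes:
      mealie-data:
        external: true   # pre-created, so compose never deletes it
    ```

    And for editing files with better tooling, something like docker run --rm -it -v mealie-data:/mnt alpine sh gives a throwaway shell with the volume’s files under /mnt.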

  • tofuwabohu@slrpnk.net · 1 point · 2 hours ago

    I choose depending on whether I’ll ever have to touch the files in the volume (e.g. for configuration); for debugging I just spawn a shell instead. If I don’t need to touch the files, I don’t want to see them in the config folder where the compose file lives. I usually check my compose folders into git, and this way I don’t have to add the volumes to .gitignore.