

There is an update on the RSS situation of selfh.st; TL;DR: it seems to be related to ways to monetize, so the full feed is now available with a paid subscription, while for free you have to visit the site to read.
As others have already commented, what you need is a Dynamic DNS service: you register a subdomain and set up a small program or script on your computer that pings the DDNS server every few minutes. You leave that running in the background, and if the program detects that the IP making the request has changed, it will update the subdomain to point to it automatically.
You could access the blog from the DDNS subdomain directly, or, if you get your own domain, you can point it to the DDNS one.
If you want a recommendation, I have been using DuckDNS for years, and it has been pretty reliable.
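For reference, the background updater can be as simple as a cron entry hitting the DuckDNS update URL every few minutes; the subdomain and token below are placeholders for your own values, and it assumes curl is installed:

```
# crontab entry: refresh the DuckDNS record every 5 minutes
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=myblog&token=YOUR_TOKEN&ip=" >/dev/null
```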
What is a good solution to keep a music folder backed up?
Syncthing (file sync; update: removed this, not needed, I actually need a backup solution)
For a backup solution you could use Borg or Restic; they are CLI tools, but there are also GUIs for them.
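For example, a minimal restic sketch, assuming the music folder lives at ~/Music and the backups go to a drive mounted at /mnt/backup (both paths are just placeholders):

```
restic init --repo /mnt/backup/music-repo         # one-time repository setup
restic -r /mnt/backup/music-repo backup ~/Music   # run this on a schedule (cron/systemd timer)
restic -r /mnt/backup/music-repo snapshots        # list the snapshots you have
```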
How can I back up my Docker setup in case I screw it up and need to set it all up again?
Learn to use Dockge to replace Portainer (done, happy with this)
If you did the switch to Dockge, it might be because you prefer having your docker compose files easily accessible on the filesystem; the question is whether you also keep the persistent data of your containers in bind mounts, so that it is easy to back up.
I have a git repo of my stacks folder with all my docker compose files (secrets in env files that are ignored), so that I can track all changes made to them.
Also, I have a script that stops every container while I'm sleeping and triggers backups of the stacks folder and all my bind mount folders. That way I have a daily/weekly backup of all my stuff, and in case something breaks I can roll back to any of these backups, docker compose up, and I'm back on track.
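As a rough sketch (paths and the repo location are placeholders, not my actual layout), the nightly job looks something like this, assuming one compose project per subfolder under ~/stacks, bind mounts under ~/appdata, and a Borg repo already initialized at /mnt/backup/borg:

```
#!/bin/sh
# stop every stack so the data on disk is consistent
for d in ~/stacks/*/; do (cd "$d" && docker compose stop); done
# snapshot the compose files and the bind-mounted data
borg create /mnt/backup/borg::'docker-{now}' ~/stacks ~/appdata
# bring everything back up
for d in ~/stacks/*/; do (cd "$d" && docker compose start); done
```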
An important step is to frequently check that the backups are good; I do this by stopping my main service and running a secondary instance from a different folder, using the backed-up compose file and bind mounts.
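A sketch of that restore test, again with placeholder paths and archive name, and assuming the compose file uses relative paths for its bind mounts so the restored copies (not the live data) get mounted:

```
mkdir -p ~/restore-test && cd ~/restore-test
# extract one backup; borg recreates the original paths under the current folder
borg extract /mnt/backup/borg::docker-2024-05-01T03:00:00
# start the secondary instance from the restored compose file and bind mounts
cd home/user/stacks/nextcloud && docker compose up
```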
Used Gitea for a while and decided to switch to Forgejo before the hard fork split (no more code from Gitea); I've been using it since. In my opinion both work well, but I prefer Forgejo.
Having the ability to shut down the main instance of an app and run a secondary instance from backups without much hassle is the best feeling ever. I recently updated from Nextcloud v26 to v31, and being able to just go back to a working version if anything went wrong saved me from so much stress.
Yeah, this is pretty solid advice; I would say you should be safe with patch version updates, like from 1.17.1 to 1.17.4.
You should be able to jump from 1.17.4 to 2.0.1, and from 2.0.1 to 2.1.3, etc., going straight to the last patch of the next version, but you should go one minor version at a time, paying close attention to the versions that have breaking changes in the release notes. And always back up and test before each version jump.
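With a compose-based setup, that just means taking a backup, bumping the image tag one minor version, and re-deploying; the image name here is made up:

```
# in docker-compose.yml:  image: example/app:1.17.4  ->  image: example/app:2.0.1
docker compose pull
docker compose up -d
# verify the app still works, then repeat for 2.0.1 -> 2.1.3, and so on
```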
This is probably the issue: when you download a script or binary from the internet, it doesn't have execute permission. You can right-click the folder to open it in a terminal (that way you don't have to cd into it) and check the permissions with ls -la.
If it doesn't have the permission, change it with chmod.
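For example (the filename is just a placeholder):

```
ls -la ./some-script.sh     # check the current permissions
chmod +x ./some-script.sh   # add the execute bit
./some-script.sh            # now it runs
```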
In my mind it would be super useful: I could sync my photos when my PC is on, and when it's off rely on my local photos only, since my main goal is having a backup of them.
You could do this perfectly with the Docker version, so just curiosity here: why not use Docker?
Is it because you don't want to install Docker only for Immich? (You could also install other self-hosted servers/apps as a bonus.) Would you be against Snap? As someone already mentioned, there is a snap version.
If the important thing is having backups of your photos, there are alternative apps with different packaging formats.
You could make a request for a Flatpak and see if other users would also like it, but you would have to wait for feedback from the devs and understand if they don't have the resources or willingness to maintain it.
Am I crazy, or does that make sense?
If I'm interested in a specific app, I check what packaging formats it has, see how to install it, and try it out. Only if I'm having issues with it (that can't be solved), or can't run it on my specific distro with the available packaging formats, do I try to suggest/request a different format.
I have no experience with this app in particular, but most of the time when there is an issue like this, where you can't reach anything in the app besides the index, it's because the app itself doesn't work well with path redirection under a subfolder, meaning the app expects paths to be something like
domain.tld/index.html
instead of domain.tld/subfolder/index.html
for all its routes. Some apps let you add a prefix to all their routes so they can work this way; then you not only have to configure nginx but also the app itself to use the same subfolder.
Other apps will work with the right configuration in nginx if they do a new full page load every time the page changes its path/route.
If it is a PWA that doesn't do a page load every time the path changes, it's not going to work with subfolders, since it never does a page refresh that goes through nginx and just rewrites the visible URL in the browser.
What I can recommend is to switch to a subdomain like
2fa.domain.tld
instead of a subfolder and test if it works; subdomains are the modern standard for this kind of thing these days, precisely to avoid this type of issue. Edit: looking at the app demo, it seems to be a Vue.js PWA that doesn't do any full page refreshes on a path change, so as stated you will probably have to switch to a subdomain to make it work.
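As a rough sketch of the subdomain approach on the nginx side (server name, backend port, and certificate paths are placeholders for whatever your setup uses):

```
server {
    listen 443 ssl;
    server_name 2fa.domain.tld;

    ssl_certificate     /etc/ssl/certs/2fa.domain.tld.pem;
    ssl_certificate_key /etc/ssl/private/2fa.domain.tld.key;

    location / {
        # the app is assumed to listen on port 8080 on the same host
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```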