

That’s a neat little tool that seems to work pretty well. Turns out the files I thought I’d need it for already have embedded OCR data, so I didn’t end up needing it. Definitely one I’ll keep in mind for the future though.
🇨🇦
That works magnificently. I added -l so it spits out a list of files instead of listing each matching line in each file, then set it up with an alias. Now I can ssh in from my phone and search the whole collection for any string with a single command.
Thanks again!
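For anyone curious, the alias boils down to something like this small shell function (the function name and manual path are my own placeholders, not from the original setup):

```shell
# search_manuals: recursively search the manual collection for a string,
# printing only the names of matching files (grep -l) instead of every
# matching line. -r recurses, -i is case-insensitive.
search_manuals() {
    grep -ril -- "$1" "${2:-/srv/manuals}"
}
```

With that in the server’s shell config, one ssh session and one command searches the whole collection.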
Started a new job as a tool tech in a rental center: maintaining, repairing, and simply showing people how to operate a ton of different tools, some of which I’ve never even seen before.
First thing I did was set up a file share on my server, which I’ve populated with 70+ manuals and growing by the day…
Read through them all myself to understand the nuances of each machine and be able to explain the details to customers; plus I can print them a fresh copy on demand just for good measure.
Interesting; that would be much simpler. I’ll give that a shot in the morning, thanks!
Yeah, your home server is still able to reach plex.tv so there’s no problem there.
It’s people actually hosting there that got screwed over.
Plex blocked Hetzner IPs, so servers hosted there can’t reach plex.tv to authenticate users or validate Plex Pass.
DNS-01 is in the pipeline at least, so hopefully we’ll see that bring wildcard certs along with it.
It’s nice to see this being integrated into nginx. I’ve been using ACME.sh for around a decade instead. It just triggers through a script on a crontab schedule grabbing a new cert via DNS-01 if necessary, then refreshing nginx to recognize the new file.
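Concretely, that setup is roughly this shape (domain, DNS provider, and paths here are made-up placeholders):

```shell
# crontab entry: let acme.sh check daily whether renewal is due
# 0 2 * * * /root/.acme.sh/acme.sh --cron

# initial issuance via DNS-01 (dns_cf = Cloudflare's API; yours may differ),
# with nginx reloaded whenever a fresh cert is installed:
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com' \
  --reloadcmd 'systemctl reload nginx'
```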
You’ve always got the human element, bypassing security features; but extra little hurdles like a password manager refusing to autofill an unknown url is at least one more opportunity for the user to recognize that something’s wrong and back away.
If you’re already used to manually typing in the auth details, you may not even notice you’re not on the site you were expecting.
Our new AI has dubbed itself ‘MechaHitler’, we should give it a body to control… (or a few thousand bodies)
Note; that project is no longer being maintained.
https://github.com/filebrowser/filebrowser/discussions/4906
There is a fork working its way out of beta though.
You have to explicitly enable directory indexing; but then it will automatically generate simple HTML pages listing directory contents.
https://nginx.org/en/docs/http/ngx_http_autoindex_module.html
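A minimal sketch of what that looks like (server name and root path are assumptions):

```nginx
server {
    listen 80;
    server_name files.example.com;

    location / {
        root /srv/share;
        autoindex on;   # generate an HTML listing for each directory
    }
}
```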
An $11/yr domain pointed at my IP. Port 443 is open to nginx, which proxies to the desired service depending on subdomain. (and explicitly drops any connection that uses my raw ip or an unrecognized name to connect, without responding at all)
ACME.sh automatically refreshes my free SSL certificate every ~2 months via DNS-01 verification and Let’s Encrypt.
And finally, I’ve got a dynamic IP, so DDClient keeps my domain pointed at the correct IP when/if it changes.
There’s also Pi-hole on the local network, overriding external DNS so that LAN devices get the server’s local IP instead of the WAN IP. But that’s very much optional, especially if your router performs NAT hairpinning.
This setup covers all ~24 of the services/web applications I host, though most other services have some additional configuration to make them only accessible from LAN/VPN despite using the same ports and nginx service. I can go into that if there’s interest.
Only Emby/Jellyfin, Ombi, and Filebrowser are made accessible from WAN; so I can easily share those with friends/family without having to guide them through/restrict them to a vpn connection.
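The “drop unrecognized names” part can be sketched in nginx like this (domain, cert paths, and backend port are placeholders):

```nginx
# Connections using the raw IP or an unknown SNI name get no response at all:
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;   # nginx >= 1.19.4: abort the TLS handshake
}

# Recognized subdomains proxy through to the matching service:
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/letsencrypt/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```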
You can use Cloudflare’s DNS and not use their WAF (the proxy bit) just fine. I have been for almost a decade.
The thing is, until someone actually faces consequences in modern times for atrocities such as these, simply saying how bad they are has become meaningless.
I’m not sure whether this is specific to this project, docker, or YAML in general.
Looking through my other 20 or so compose files, I use the array notation for most of my environment variables, but I don’t have any double quotation marks elsewhere. Maybe they’re not supposed to work in this format, idk.
Good to keep in mind I guess.
Dev replied to my github discussion.
Apparently it’s an issue with array style env variable layout.
environment:
  key: "value"
Instead of
environment:
  - key=value
Trying to set that up to try out, but I can’t get it to see/use my config.yaml.
volumes:
  - /srv/filebrowser-new/data:/config
Says ‘/config/config.yaml’ doesn’t exist and will not start. Same thing if I mount the config file directly, instead of just its folder.
If I remove the env var, it changes to “could not open config file ‘config.yaml’, using default settings” and starts at least. From there I can ‘ls -l’ through docker exec and see that my config is mounted exactly where it’s supposed to be ‘/config/config.yaml’ and has 777 perms, but filebrowser insists it doesn’t exist…
My config is just the example for now.
I don’t understand what I could possibly be doing wrong.
/edit: three hours of messing around and I figured it out:
The config path env var must not have quotation marks. Removed them and now it’s working.
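For anyone hitting the same thing, this is the shape of the fix as I understand it (the variable name here is illustrative; check the project docs for the exact one it expects):

```yaml
environment:
  # broken: the quotes end up treated as part of the path,
  # so the file "doesn't exist"
  # FILEBROWSER_CONFIG: "/config/config.yaml"

  # working:
  FILEBROWSER_CONFIG: /config/config.yaml
```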
FolderSync selectively syncs files/folders from my phone back to my server via SSH. Some folders are on a schedule, some monitor for changes and sync immediately; most are one-way, some are two-way (files added to the server sync back to the phone as well as uploading data to the server). There’s even one that automatically drops files into paperless-ngx’s consume folder for automatic document importing.
From there, BorgBackup makes a daily backup of the data, keeping historical backups for years with absolutely incredible efficiency. I currently have 21 backups of ~550 GB each; Borg stores all of this in 447 GB of total disk space.
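The daily Borg job is essentially this shape (repo path and retention policy are placeholders, not my actual values):

```shell
# crontab: run the backup at 03:00 daily
# 0 3 * * * /usr/local/bin/backup-sync.sh

# archive the synced data, then thin out old archives; deduplication and
# compression are what keep 21 x ~550 GB down near the size of one copy
borg create --stats /srv/backups/borg::sync-{now:%Y-%m-%d} /srv/sync
borg prune /srv/backups/borg --keep-daily 7 --keep-weekly 4 --keep-monthly 24
```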
Without authentication, it’s possible to randomly generate UUIDs and use them to retrieve media from a Jellyfin server. That’s about the only actually concerning issue on that list, and it’s incredibly minor IMO.
With authentication, users (i.e., the people you have trusted to access your server) can potentially attack each other by changing each other’s settings and viewing each other’s watch history/favorites/etc.
That’s it. These issues aren’t even worth talking about for 99.9% of Jellyfin users.
Should they be fixed? Sure, eventually. But these issues aren’t cause to yell about how insecure Jellyfin is in every single conversation, or to go trying to scare everyone off of hosting it publicly. Stop spreading FUD.
Connecdicut or Connecticud?