

After some googling:
Some Linux distributions (at least Debian, Ubuntu) enable init_on_alloc option as security precaution by default. This option can help to prevent possible information leaks and make control-flow bugs that depend on uninitialized values more deterministic.
Unfortunately, it can lower ARC throughput considerably (see bug).
If you’re ready to cope with these security risks, you may disable it by setting init_on_alloc=0 in the GRUB kernel boot parameters.
I think it’s set to 1 on Raspberry Pi OS as well; since the Pi doesn’t use GRUB, you’d add the flag to /boot/cmdline.txt instead.
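A rough sketch of how you might check and flip it (the sed one-liner and file paths are assumptions about a stock setup, double-check them before rebooting):

```
# see what the running kernel is doing; "heap alloc:on" means init_on_alloc is active
dmesg | grep "mem auto-init"

# Raspberry Pi OS: append the flag to the single line in cmdline.txt
# (newer releases keep it at /boot/firmware/cmdline.txt instead)
sudo sed -i 's/$/ init_on_alloc=0/' /boot/cmdline.txt

# GRUB-based distros: add init_on_alloc=0 to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then regenerate the config
sudo update-grub

sudo reboot
```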
sync=disabled makes ZFS ignore sync requests from software, so data only hits disk with the regular transaction group commit (roughly every 5 seconds by default) instead of when the software demands it, which maybe explains your LED behavior.
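For what it’s worth, checking and reverting that property is a one-liner (the pool name tank is just a placeholder):

```
# show the current setting (standard / always / disabled)
zfs get sync tank

# go back to honouring sync requests (the default)
zfs set sync=standard tank
```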
Jeff Geerling found that writes with Z1 were around 74 MB/sec using the Radxa Penta SATA HAT with SSDs. Any HDD should be that fast; the SATA HAT is likely the bottleneck.
Are you performing writes locally, or over SMB?
You could try iostat or zpool iostat to monitor drive writes and latencies; might give a clue.
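Something like the following (pool name is a placeholder) prints per-disk throughput plus average wait times every 5 seconds, so a single struggling drive stands out:

```
# -v breaks the stats down per disk, -l adds average latency columns
zpool iostat -vl tank 5
```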
How much RAM does the Pi 5 have?
My understanding is that it’s technically against their TOS but loosely enforced. They don’t specify precise limits since they probably change over time and region. Once you get noticed, they’ll block your traffic until you pay. Hence you can find people online that have been using it for years no problem, while other folks have been less lucky.
Basically their business strategy is to offer too-good-to-be-true free services that people start using and relying on, then charging once the bandwidth gets bigger.
It used to be worse: all of Cloudflare’s services were technically limited to HTML files, but that was selectively enforced. They’ve since changed and clarified their policy a bit. As far as I’ve ever heard, they don’t give a toss about the legality of your content, unless you’re a neo-Nazi.
I’m guessing the cloudflared daemon isn’t connecting to Jellyfin. You want to use http:// in the URL. Also, is jellyfin the hostname of the VM? Using localhost or 127.0.0.1 might be a better way to point at the same VM without relying on DNS for anything.
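For reference, a minimal ingress section in cloudflared’s config.yml might look like this; the hostname, tunnel ID, and Jellyfin’s default port 8096 are assumptions to adjust for your setup:

```
# /etc/cloudflared/config.yml (tunnel ID and paths are placeholders)
tunnel: <your-tunnel-id>
credentials-file: /etc/cloudflared/<your-tunnel-id>.json

ingress:
  # plain http to the local Jellyfin instance (8096 is its default port)
  - hostname: jellyfin.example.com
    service: http://127.0.0.1:8096
  # mandatory catch-all rule
  - service: http_status:404
```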
Personal opinion, but I wouldn’t bother with fail2ban, it’s a bit of effort to get it to work with cloudflare tunnel and easy to lock yourself out. Cloudflare’s own zero trust feature would be more secure and only need fiddling around cloudflare’s dashboard.
They run basically the same PebbleOS, so they’ll work with any app that works with the original Pebbles. They plan to keep using the community app hosting at https://apps.rebble.io/. There’s also GadgetBridge, which is compatible. Eric mentioned on HN that they intend to release an official open-source library that can be used to build other companion apps too.
Yeah the mobile app is open source too https://github.com/pebble-dev/mobile-app
I had a 5 II too, used LineageOS for years, worked great. It doesn’t totally solve the battery or fingerprint reader issues. My screen got the dreaded green lightsaber too. The nail in the coffin was Australia turning off 3G, so it can’t make calls anymore. (It wasn’t officially sold here, so they didn’t bother loading it with VoLTE profiles.)
Seems weird to have a separate app read sent and received messages? Is it poking holes in the Messages app sandbox?
Consider something like the aoostar R1 with Intel N100. Small and low power like a commercial consumer NAS but cheaper and you can chuck whatever OS you want.
I’ve been using pcloud. They do one time upfront payments for ‘lifetime’ cloud storage. Catch a sale and it’s ~$160/TB. For something long term like backups it seems unbeatable. To the point I sort of don’t expect them to actually last forever, but if they last 2-3 years it’s a decent deal still.
I use rclone to upload my files; honestly not ideal though, since it’s meant for file synchronisation rather than backups. Also they’re dog slow: downloading my 4 TB takes ~10 days.
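Roughly what my uploads look like, assuming a pcloud remote already set up with rclone config and named pcloud (the remote and paths here are placeholders):

```
# one-way copy: never deletes anything on the remote, so a bit safer than `rclone sync`
rclone copy /tank/backups pcloud:backups --transfers 8 --progress
```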
Ah kay, definitely not a RAM size problem then.
iostat -x 5

will print out per-drive stats every 5 seconds. The first output is an average since boot. Check that all of the drives have similar values while performing a write; it might be that one drive is having problems and slowing everything down, hopefully unlikely if they are brand-new drives.

zpool iostat -w

will print out a latency histogram. Check whether a lot of requests land above 1s, and whether that’s in the disk or sync queues. Here’s mine with 4 HDDs in z1 working fairly happily for comparison:

The init_on_alloc=0 kernel flag I mentioned below might still be worth trying.