

A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.


Uh. I’d really prefer if people experimented with new technology a bit more cautiously and not directly jump to “the biggest release […] ever done”.


I feel anti-DDoS services and Cloudflare as a web application firewall have traditionally been a lot of snake oil as well. Sure, there are applications for it. Especially the paid plans with all the enterprise features. And all the way at the other end of the spectrum, where it serves as a means to work around NAT and replace DynDNS. But there’s a lot in between where I (personally) don’t think it’s needed in any way. Especially before AI.
From my own experience, personal blogs, websites of your local club or church, random smaller projects, small businesses… rarely need professional DDoS protection. I’ve been fine hosting things myself for decades now. And I’m not sure if people know what they’re paying with. I mean, every time we get a Cloudflare hiccup (or AWS…) we can see how centralised the internet has become. Half of it goes down for an hour or so, because we all rely on the same few big tech services. And if you’re terminating SSL there, or using it to look inside the packets to prevent attacks, you’re giving away all information about you and your audience/customers. They don’t just get all the metadata, they can also read all the transferred content/data.
It all changed a bit with the AI crawlers. We definitely need countermeasures these days. I’m still fine without Anubis or Cloudflare. I block the crawlers’ IP ranges and that seems to do most of the job. I think we need to pay a bit more attention to what’s really happening: which problems we’re actually faced with, which tools we have and which of them are effective, instead of always going with the market leader with the biggest marketing budget. I don’t think there’s a one-size-fits-all solution. And you can’t just roll out random things without analyzing the situation properly. Maybe the correct answer is Cloudflare, but there are also other, far less intrusive and very effective means available. And maybe you’re not even a target of script kiddies or annoyed users. And maybe your convoluted WordPress setup isn’t even safe with a standard web application firewall in front.
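To make the IP-range blocking concrete: with nginx (an assumption — the comment doesn’t name a web server), it can be as small as a `deny` list included into the server block. The ranges below are TEST-NET placeholders, not real crawler ranges — substitute the published ranges of whichever crawlers you actually want to block.

```nginx
# Hypothetical include file, e.g. /etc/nginx/snippets/block-crawlers.conf,
# pulled into a server block with: include snippets/block-crawlers.conf;
# Placeholder ranges (TEST-NET) -- replace with real crawler ranges.
deny 192.0.2.0/24;
deny 198.51.100.0/24;
allow all;
```

Blocked clients then get a 403 without your application ever seeing the request, and nothing about the setup depends on a third party.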
Anubis is an entirely different story. It’s okay concerning privacy and centralisation. It doesn’t come without downsides, though. I personally hate it when that thing pops up instead of the page I requested. I don’t like how JavaScript is now mandatory to do anything on the web. And certain kinds of crawler protection contribute to the situation where we can’t google anything anymore. With everyone locking down everything and constructing walled gardens, the internet becomes far less useful and almost impossible to navigate. Those are all direct consequences of how we decide to do things.


Hmmh. I’m not entirely satisfied with any of them. CrowdSec is a bit too complex and involved for my taste. And oftentimes there’s no good application config floating around on the internet, nor do I get any sane defaults from my Linux distribution. Whereas fail2ban is old and eats up way too many resources for what it’s doing. And all of it is a bit too error-prone(?) As far as I remember, I had several instances where I thought I had set it up correctly, but it didn’t match anything. Or it was looking for some logfile by default while my program wrote to the systemd journal. So nowadays I’ll double-check everything. I wish programs like sshd and web apps came with that kind of security built in, in some foolproof way.
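The journal-vs-logfile trap mentioned above has a direct fix: fail2ban can read the systemd journal instead of a file via its `backend` option. A minimal sketch (the file path is a conventional example, not from the comment):

```ini
# /etc/fail2ban/jail.d/sshd.local -- minimal sshd jail reading the
# systemd journal, so it works even when no auth logfile exists.
[sshd]
enabled  = true
backend  = systemd
maxretry = 5
bantime  = 1h
```

And to double-check it is actually matching, `fail2ban-client status sshd` shows the current failure and ban counters.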


For remote management, I just enable SSH, configure it to run on some non-standard port and enable fail2ban… I make sure to use keys/certificates or secure passwords, and I also check that fail2ban is actually doing its job. I’ve never had any issues with that setup.
For the services, I’ll either use a reverse proxy plus configure the applications not to allow infinite login attempts, or WireGuard / a VPN.
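The SSH part of that setup can be sketched as a drop-in config. The port number is an arbitrary example, and key-only auth is one way to do the “keys or secure passwords” part; all directives are standard sshd_config options:

```
# /etc/ssh/sshd_config.d/hardening.conf -- example values, adjust to taste.
Port 2222                      # non-standard port; cuts down drive-by scans
PasswordAuthentication no      # key-based auth only
PubkeyAuthentication yes
PermitRootLogin no
MaxAuthTries 3
```

Worth testing with `sshd -t` before restarting, and keeping an existing session open while you verify the new port works.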


Continuwuity. I’m using it. And unlike the other projects, it’s a community effort. So I have my hopes up that it’ll last and not depend on any single person.
And I wouldn’t recommend Conduit or Conduwuit. Conduit development is very slow; that’s why we got the forks in the first place. And Conduwuit is discontinued, so it wouldn’t be a wise choice at all. So you’re left with two choices, Tuwunel and Continuwuity. One is a one-man show, and they’re calling it the “official” successor. The other is a community project… They both work fine.


I follow a similar strategy. I back up my important stuff. And I’m gonna have to re-rip my DVD collection and redownload the Linux ISOs in the unlikely case the RAID falls apart. That massively cuts down on the amount of storage needed.


Good question! That’s exactly one of the major issues with biometric authentication. And there’s no way around it: you need a second factor. Configure your phone so it only unlocks if you also input something you know, like a password or PIN.


Yeah, they often get quite warm. Some day I’ll be in the same situation as OP. And I can’t wait to throw out that stupid modem. No clue, though, what kind of SFP module the fiber provider requires. I mean, there’s quite a selection available…


Maybe correct? Though my cable modem gobbles down some 15 W… without even doing the Wi-Fi… So I bet this isn’t a universal truth, as a mini PC will consume less and provide all kinds of extra services: networking, NAS…


You should have all kinds of options on Lemmy… You can edit a post and change it to whatever you like. Or delete it and optionally post a new one…


Just delete the post if it’s a mess-up.


LiveKit can be used to build voice assistants. But it’s more of a framework for building an agent yourself, not a ready-made solution.


And there’s another custom component that integrates any server with an OpenAI-compatible API endpoint: https://github.com/jekalmin/extended_openai_conversation


I think there’s a lot of nuance here. I mean, the Fediverse isn’t super efficient. But it manages to do what it’s supposed to do. And it really depends: which Fediverse software? How many people are on those servers, and how are they distributed? Do groups of people mingle on certain servers? Do they all subscribe to the same content out there? Are there really big groups on servers which happen to have a slow internet connection?… And then, of course, whether we can come up with improvements if we need to.
I think we’re going to find out once (or if) the Fediverse grows substantially. Some design decisions of the Fediverse are indeed a bit of a challenge for unlimited growth. Oftentimes technical challenges can be overcome, though, with clever solutions. Or things turn out differently than we anticipated. So I don’t think there’s a good, practical and straightforward answer to the question.


HA isn’t the only option. I think there are a couple of other open-source smarthome solutions out there(?) And you could probably get by with just an MQTT broker and a Python script, or something like that…
But HA isn’t a bad choice. They’re doing a phenomenal job. And related projects like ESPHome make it really easy to integrate microcontrollers. And if you want to do more smarthome stuff, it has a plethora of features, integrations, an app…
Extra hardware isn’t absolutely necessary. I have one server at home which does NAS, and I use 4 GB of its RAM to run a virtual machine with Home Assistant. That’s enough for it, including a bunch of add-ons.
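To illustrate the “just an MQTT broker and a Python script” idea: the actual glue logic can be tiny. This sketch only shows the topic-to-action dispatch; the topic names and the 19 °C threshold are made up for illustration, and in a real setup an MQTT client library (e.g. paho-mqtt — my assumption, not from the comment) would feed incoming broker messages into this function.

```python
# Minimal dispatch logic for a DIY smarthome script.
# Topics, payloads and the temperature threshold are hypothetical.

def handle_message(topic: str, payload: str) -> str:
    """Map an incoming MQTT message to an action string."""
    if topic == "home/livingroom/button" and payload == "pressed":
        return "toggle_light"
    if topic.startswith("home/") and topic.endswith("/temperature"):
        room = topic.split("/")[1]
        # Example rule: turn on heating below 19 degrees.
        if float(payload) < 19.0:
            return f"heat_{room}"
        return "noop"
    return "noop"

# In a real script, the MQTT client's on_message callback would call
# this after connecting and subscribing to "home/#" on the broker.
print(handle_message("home/livingroom/button", "pressed"))  # toggle_light
print(handle_message("home/bedroom/temperature", "17.5"))   # heat_bedroom
```

That’s the whole “smarthome”: a broker routing messages, plus a few rules like these.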


You could try to debug the permission issue… Take note of the current permissions, chmod the certificates to 666 and the parent directories to 777 and see if that works. Then progressively cut them down again and see when it fails. And/or add caddy to all the relevant groups (ssl, acme, certwarden…) and then check which one makes it fail or work.
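A sketch of that narrowing-down approach, using throwaway paths under /tmp rather than your real certificate directory (all paths and modes here are examples, not your actual setup) — the point is that a parent directory missing the execute (“x”) bit will block a service user even when the file itself looks readable:

```shell
#!/bin/sh
# Demo: a directory without the "x" bit for a user blocks access to
# files inside it -- a common cause of cert permission errors.
set -eu
base=$(mktemp -d)
mkdir -p "$base/ssl"
printf 'dummy cert\n' > "$base/ssl/site.crt"

chmod 644 "$base/ssl/site.crt"
chmod 755 "$base/ssl"
cat "$base/ssl/site.crt" >/dev/null && echo "readable with 755 dir"

chmod 700 "$base/ssl"   # owner still gets in; a service user like
                        # caddy would now fail despite the 644 file
# Inspect every component's permissions along the path:
ls -ld "$base" "$base/ssl" "$base/ssl/site.crt"

rm -rf "$base"
```

On Linux, `namei -l /path/to/cert` (from util-linux) prints the same per-component permission listing in one go, which makes the culprit directory easy to spot.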


Kind of the reason why I quit Netflix. For one, it got more expensive each year. And at some point there were fewer and fewer of my favorite shows on there, so I’d have needed to subscribe to a second service for Star Trek… then a third one for all the good stuff that’s Disney… And I don’t even watch that much TV. So instead, I just quit. Maybe one day I’m gonna read a book on a Friday evening 😆 Or watch the stuff the government forces me to pay for.


Phew, that’s pretty focused on video. For me it’s more about replacing chat, cloud office, social media… But the return on investment there isn’t really the subscription fee 😅


Sure. I’m not entirely sure how PCIe works these days. But in the good old days we had methods to read pretty much arbitrary memory regions via PCIe or early Thunderbolt(?).
I just figured it’d be massively complicated to wait for the user to pull something up on the screen, do computationally expensive OCR and some AI image detection to puzzle documents back together… and then you’d only get a fraction of what’s really stored on the computer, and you’d still need a way to send that information home… when you could just pick from a plethora of easy options, like reading all the files from the hard disk and sending them somewhere. I think it’s far more likely they’d use some easy and straightforward approach. And it’d be more effective as well.
Thanks for the link! As a short aside for the other people here: try not to spam developers. That usually achieves the opposite and makes them miserable, when we want them not to burn out and to write good software for us. A thumbs-up emoji is the correct reaction for the average person. Or, for the pros, a code review highlighting specific issues in the code.