• 0 Posts
  • 511 Comments
Joined 3 years ago
Cake day: June 16th, 2023

  • Again, get off your high horse.

    They just came out swinging, for no reason.

You already know how most self-hosted folks feel about vibe coding, or you wouldn’t have taken immediate offence at the initial comment (which is valid, btw: you did not mark the project as vibe-coded or AI-assisted). MARK YOUR PROJECT AS AI-ASSISTED.

    Explain where you expect inefficiency and how I can fix it, and I will.

    I’m looking to replace my cron-timed ffmpeg bash and ash scripts for encoding. Three of the four projects I looked at have double- and triple-work loops for work that should be done once. This seems to be a theme in vibe-coded projects.
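For contrast, the pattern I’m after is a single pass that never revisits finished work. A minimal sketch (paths and ffmpeg settings are placeholders, not from any of those projects):

```shell
#!/bin/sh
# Single-pass encode loop: scan once, skip anything already encoded,
# and touch each input exactly once. SRC/DST defaults are placeholders.
encode_once() {
    [ -d "$SRC" ] || return 0          # nothing to scan
    mkdir -p "$DST"
    find "$SRC" -type f -name '*.mkv' | while read -r f; do
        out="$DST/$(basename "${f%.mkv}").mp4"
        [ -e "$out" ] && continue      # finished work is never redone
        ffmpeg -nostdin -loglevel error -i "$f" \
            -c:v libx265 -crf 22 -c:a copy "$out"
    done
}

SRC="${SRC:-/media/incoming}"
DST="${DST:-/media/encoded}"
encode_once
```

The point is the `continue`: the done/not-done test happens once, inside the only loop, instead of a second pass re-checking everything.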

    And incidentally, the fact that this is a personal project I shared in case someone might find it useful is another reason that coming in here and throwing shade is a shitty thing to do.

    Once again, I’m interested in the project, but I have my own thresholds of quality and security. If you can’t handle questions about your project, personal or not, then maybe don’t share it.

    But why try to make me feel bad about it, because you don’t like the way I built it?

Sir/Madam, your feelings are your responsibility, not mine. I did not utter any pejoratives your way. Grow up.


  • No one is being a jerk here, stop being defensive.

What fixes did you apply? That’s what we want to know. It’s not a trick question.

• Did you use unit tests?
• Did you check the logic flow so that if I run this code 10,000 times on a ton of media, it isn’t using terribly inefficient settings that turn my 40-hour workload into two weeks?
• How are you deploying this thing?

    If you want to present your project, be prepared to explain it. That is completely above board for us to ask.




• cgroups is not really a security feature (from what I understand). It is about controlling process priority, hierarchy, and resource limiting (among other things).

With respect, I think you misunderstand what gVisor does, and containerization in general. cgroups2 is the isolation mechanism used by most modern Linux containers, including both Docker and LXC. It is similar to the jail concept in BSD, and loosely to chroot. It limits a child process’s access to files, devices, and memory, and is the basis for how subprocesses are prevented from accessing host resources without permission to do so.

gVisor adds more layers of control on top of this system by adding a syscall control plane that prevents a container from reaching functions in the host’s kernel that might not be protected by cgroups2 policy. This lessens the security risk of a host running a cutting-edge or custom kernel and gives more predictable results, but it comes with caveats.
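To make that concrete, here is roughly how the extra layer gets wired in with Docker. The runtime registration follows gVisor’s documented setup; the binary path is an assumption (use wherever your package manager put `runsc`), and note this overwrites any existing `daemon.json` rather than merging with it:

```shell
# Register runsc (gVisor's runtime) with Docker.
# CAUTION: this clobbers an existing /etc/docker/daemon.json -- merge by hand
# if you already have one. The runsc path is an assumption.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF
sudo systemctl restart docker

# Syscalls from this container now hit gVisor's user-space kernel first,
# instead of going straight to the host kernel:
docker run --rm --runtime=runsc alpine uname -a
```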

gVisor is not a universally “better” option, especially for a homelab, where workloads vary a lot. It comes with an I/O performance penalty and incompatibility with SELinux, and its very strength can prevent containers from using newer syscalls on a cutting-edge host kernel.

My original point was that, ultimately, there is no blanket answer to “how secure is my virtualization stack?”, because such a decision should be made case by case. And any choice made by a homelabber, or anyone else, should involve some understanding of the differences between each type.




  • For context, I’ve also been using ZFS since Solaris.

    I was wrong about compression on datasets vs pools, my apologies.

By “almost no impact” (for compression), I meant well under a 1% penalty for zstd, and almost unmeasurable for lz4 fast, with compression efficiency being roughly the same for both lz4 and zstd. Here is some data on that.

lz4 compression on modern (post-Haswell) CPUs is actually so fast that lz4 can beat uncompressed writes in some workloads (see this). And that data is from 2015.

    Today, there is no reason to turn off compression.
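For anyone following along, turning it on is a one-liner (the dataset name `tank/media` is a placeholder; the properties are standard OpenZFS):

```shell
# Enable compression and check how well it's doing.
zfs set compression=lz4 tank/media        # or compression=zstd
zfs get compression,compressratio tank/media
```

One caveat: the setting only applies to blocks written after it is set; existing data stays as it was until rewritten.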

I will definitely look into the NFS integrations for ZFS; I use NFS (exports and mounts) extensively, and I wonder what I’ve been missing.
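If the `sharenfs` property is the integration in question, a minimal sketch (dataset name and subnet are placeholders):

```shell
# ZFS manages the export itself; no /etc/exports entry needed.
# Options are passed through to the system's NFS export machinery.
zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/media
zfs get sharenfs tank/media
showmount -e localhost    # the export shows up here automatically
```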

    Anyway, thanks for this.


  • non_burglar@lemmy.worldtoSelfhosted@lemmy.worldRaid Z2 help
    8 days ago

    With respect, most of this comment is wrong.

    • Both lz4 and zstd have almost no performance impact on modern hardware.
    • compression acts on blocks in ZFS, therefore it is enabled at the pool level
• ZFS does indeed need to allocate some space at the front and end of a pool for slop, metaslabs, and metadata. I think you are confusing filesystems and datasets.

Also remember that many permissions, like NFS export settings, are done on a per-filesystem basis.

• I’m not sure what you’re trying to say about NFS and ZFS here, but this is completely false, even if you mean datasets.


Running Suricata on your WAN interface just generates a ton of noise, and it will be really confusing for you if you haven’t reviewed packet-inspection alerts before. There’s not a lot of value in it unless you have many users “phoning home”.

Just run it on the LAN interface.
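For example (the interface name `lan0` is an assumption; substitute your LAN-side NIC):

```shell
# Capture on the LAN side only, using the main config.
sudo suricata -c /etc/suricata/suricata.yaml -i lan0

# Then watch what actually fires before you start tuning rules:
tail -f /var/log/suricata/fast.log
```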

    Your approach of deny all until something complains is pretty much the most solid way to get a grip on security.

    I assess and recommend security practices for a living, and I would say the most important first step is understanding where your data lives and where it goes. Once you know that, the rest is relatively easy with the tools available to us.





  • non_burglar@lemmy.worldtoSelfhosted@lemmy.worldRaid Z2 help
    10 days ago

That’s still true, but performance has changed a lot since Jim Salter wrote that. There was a time when 2x mirrored vdevs (the equivalent of RAID 10) would have been preferable to raidz2, but the performance of both ZFS and disks themselves has improved enough that there wouldn’t be much difference between the two in a home lab.

    Personally, I agree with you in that mirrors are preferable, mostly because I don’t really need high availability as much as I want an easier time restoring if a disk fails.