I take my shitposts very seriously.

  • 3 Posts
  • 278 Comments
Joined 2 years ago
Cake day: June 24th, 2023

  • Linux has two different kinds of “used” memory. One is memory allocated for or by running processes that cannot be reclaimed or handed to another process; this memory is unavailable. The other kind is memory used for caching (ZFS, write-back cache, etc.) that can be reclaimed and reallocated as needed. Memory that is not allocated in any way is free. Memory that is either free or allocated to cache is available.

    It looks like htop only shows unavailable memory as “used”, while Proxmox shows the sum of unavailable and cached memory. Proxmox “uses” 11 GB, but it’s not running out of memory because most of that is “available”. The sketch below pulls the same numbers straight from the kernel.
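
    If you want to see where the two tools disagree, everything is sitting in /proc/meminfo. A minimal sketch that reads the standard fields (ZFS ARC accounting is a separate story; this only covers the regular page cache):

    ```python
    # Minimal sketch: read /proc/meminfo on the host and print the four
    # numbers discussed above (field names are the kernel's own).
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            fields[key.strip()] = int(value.split()[0])  # values are in kB

    total = fields["MemTotal"]
    free = fields["MemFree"]      # not allocated in any way
    cached = fields["Cached"] + fields["SReclaimable"]  # reclaimable cache
    available = fields["MemAvailable"]  # kernel's estimate: free + reclaimable

    print(f"total:     {total // 1024} MiB")
    print(f"free:      {free // 1024} MiB")
    print(f"cached:    {cached // 1024} MiB (reclaimable)")
    print(f"available: {available // 1024} MiB")
    ```

    Roughly speaking, htop’s “used” bar tracks total minus available, while a naive “used = total minus free” calculation counts the cache as consumed.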

  • Proxmox is a great starting point. I use it on my home server and at work. It’s built on Debian, with a web interface for managing your virtual machines and containers, the virtual network (trivial unless you need advanced features), virtual disks, and installer images. Advanced features like clustering and high availability exist, but you don’t have to touch them until you actually need them.

  • THEN (and this is the part you don’t seem to understand) the client process has to either waste time solving the challenge or cancel the request. Issuing the challenge is, by the way, orders of magnitude cheaper for the server than serving the actual meaningful content. If a new request is sent in the meantime, it still has to solve a challenge of its own first. The scraper will get through eventually, but the challenge delays the response and reduces the load on the server: while the scrapers are busy computing, the server doesn’t have to serve meaningful content to them. The toy sketch below shows the asymmetry.
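
    A toy sketch of that asymmetry, using the semiprime-factoring example (purely illustrative: the prime size, the naive trial-division solver, and factoring as the puzzle are my choices here, not what Anubis actually ships):

    ```python
    import random
    import time

    def is_prime(n: int) -> bool:
        # Deterministic Miller-Rabin; these bases are exact for n < 3.3e24.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False
        return True

    def random_prime(bits: int) -> int:
        # Draw random odd numbers with the top bit set until one is prime.
        while True:
            c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if is_prime(c):
                return c

    # Server side: issuing a challenge costs two prime draws and one multiply.
    t0 = time.perf_counter()
    p, q = random_prime(24), random_prime(24)
    challenge = p * q
    print(f"server issued challenge in {time.perf_counter() - t0:.4f}s")

    # Client side: naive trial division grinds up to the smaller factor.
    t0 = time.perf_counter()
    f = 3
    while challenge % f:
        f += 2
    print(f"client factored {challenge} = {f} * {challenge // f} "
          f"in {time.perf_counter() - t0:.2f}s")
    ```

    The server’s side of the exchange finishes in microseconds; the client burns seconds of CPU. Multiply that across every request a scraper fires and the load shifts onto the scraper.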

  • It’s not client-side because validation happens on the server side. The content won’t be displayed until and unless the server receives a valid response, and the challenge is formulated in such a way that calculating a valid answer will always take a long time. It can’t be spoofed because the server will know that the answer is bullshit. In my example, the server will know that the prime factors returned by the client are wrong because their product won’t be equal to the original semiprime. Delegating to a sub-process won’t work either, because what’s the parent process supposed to do? Move on to another piece of content that is also protected by Anubis?

    The point is to waste the client’s time and thus reduce the number of requests the server has to handle, not to prevent scraping altogether. The toy verifier below shows how cheap it is to catch a spoofed answer.
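
    To make the verification point concrete, here’s a toy verifier for the semiprime example above (again illustrative, not the actual Anubis check):

    ```python
    def verify(challenge: int, answer: tuple[int, int]) -> bool:
        a, b = answer
        # Reject trivial factors and wrong products alike; there is no way
        # to fake this short of actually doing the factoring work.
        return a > 1 and b > 1 and a * b == challenge

    assert verify(35, (5, 7))        # honest client did the work
    assert not verify(35, (3, 11))   # spoofed answer is caught instantly
    assert not verify(35, (1, 35))   # trivial "factorization" doesn't count
    ```

    Checking an answer is one multiplication and two comparisons, so the server never does meaningful work for a client that hasn’t paid up.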