• 1 Post
  • 45 Comments
Joined 3 years ago
Cake day: June 7th, 2023

  • I can think of a couple of reasons off the top of my head.

    You don’t say, but I assume you are working on-site with your work system. So, the first consideration would be a firewall at your work’s network perimeter. A common security practice is to block outbound connections on unusual ports. This usually means anything not 80/tcp or 443/tcp. Other ports will be allowed on an exception basis. For example, developers may be allowed to access 22/tcp outbound, though that may also be limited to only specific remote IP addresses.
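    Purely for illustration, that sort of egress policy boils down to rules along these lines (shown here as Linux iptables rules, though real perimeter firewalls are usually dedicated appliances; the subnet and destination address are made up):

    # Allow normal web traffic out, plus one carved-out exception, and drop the rest
    iptables -A FORWARD -o eth0 -p tcp --dport 80 -j ACCEPT
    iptables -A FORWARD -o eth0 -p tcp --dport 443 -j ACCEPT
    # Exception: the developer subnet may reach one specific host on 22/tcp
    iptables -A FORWARD -o eth0 -p tcp -s 10.0.5.0/24 -d 203.0.113.10 --dport 22 -j ACCEPT
    iptables -A FORWARD -o eth0 -j DROP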

    You may also have some sort of proxy and/or Cloud Access Security Broker (CASB) software running on your work system. This setup would be used to inspect the network connections your work system is making and allow/block them based on various policy settings. For example, a CASB might be configured to look at a domain reputation service and block connections to any domain whose reputation is considered suspect or malicious. Domains may also be blocked based on things like age or category. For this type of block, the port used won’t matter. It will just be “domain something.tld looks sketchy, so block all the things”, with “sketchy” being defined by the company in its various access policies.

    A last reason could be application control. If the services you are trying to connect to rely on a local program running on your work system, it’s possible that the system is set to prevent unknown applications from running. This setup is less common, but it is growing in popularity (it just sucks big old donkey balls to get set up and maintained). The idea is that only known and trusted applications are allowed to run on the system, and everything else is blocked by default. To the end user (you), this just looks like an application crashing, but it provides a pretty nice layer of protection for the network defenders.

    Messing with the local PC is of course forbidden.

    Ya, that’s pretty normal. If you have something you really need to use, talk with your network security team. Most of us network defenders are pretty reasonable people who just want to keep the network safe without impacting the business. That said, I suspect you’re going to run into issues with what you are trying to run. Something like SyncThing or some cloud based storage is really useful for businesses. But businesses aren’t going to be so keen on you backing their data up to your home server. Sure, that might not be your intention, but this is now another possible path for data to leave the network which they need to keep an eye on, all because you want to store your personal data on your work system. That’s not going to go over well. Even worse, you’re probably going to be somewhat resistant when they ask you to start feeding your server’s logs into the business’s log repository, since that is what they would need to prove that you aren’t sending business data to it. It’s just a bad idea all around.

    I’d suspect Paperless is going to run into similar issues. It’s a pretty obvious way for you to steal company data. Sure, this is probably not your intention, but the network defenders have to consider that possibility. Again, they are likely to outright deny it. Though if you and enough folks at your company want to use something like this, talk with your IT teams; it might be possible to get an instance hosted by the business for business use. There is no guarantee, but if it’s a useful productivity package, maybe you will have a really positive project under your belt to talk about.

    FreshRSS you might be able to get going. Instead of segregating services by port, stand up something like NGinx on port 443 and configure it as a reverse proxy. Use host headers to separate services, so that sync.yourdomain.tld maps to your SyncThing instance, office.yourdomain.tld maps to your Paperless instance and rss.yourdomain.tld maps to FreshRSS. This gets you around issues with port blocking and makes managing TLS certificates easier. You can have a single cert sitting in front of all your services, rather than needing to configure TLS for each service individually.
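    A rough sketch of what those NGinx server blocks could look like (the hostnames, certificate paths and upstream ports are placeholders, not anything from your setup):

    # Name-based reverse proxy: one cert, one port, multiple services
    server {
        listen 443 ssl;
        server_name rss.yourdomain.tld;
        ssl_certificate     /etc/ssl/yourdomain.tld/fullchain.pem;
        ssl_certificate_key /etc/ssl/yourdomain.tld/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8081;   # wherever FreshRSS is listening
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    server {
        listen 443 ssl;
        server_name sync.yourdomain.tld;
        ssl_certificate     /etc/ssl/yourdomain.tld/fullchain.pem;
        ssl_certificate_key /etc/ssl/yourdomain.tld/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8384;   # Syncthing's default GUI port
            proxy_set_header Host $host;
        }
    }

    The same cert (a wildcard, or one with all the names on it) gets reused in each server block, which is what makes the TLS side easier to manage.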



  • I run Pi-Hole in a docker container on my server. I never saw the point in having a dedicated bit of hardware for it.
    That said, I don’t understand how people use the internet without one. The times I have had to travel for work, trying to do anything on the internet without it reminded me of the bad old days of the '90s, with pop-ups and flashing banners enticing me to punch the monkey. It’s just sad to see one of the greatest communications platforms we have ever created reduced to a fire-hose of ads.
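    For what it’s worth, the Docker side of it is about as simple as containers get. Something along these lines (the port mappings, password and volume path are just examples; check the Pi-hole docs for the current environment variables, which have changed between releases):

    docker run -d \
      --name pihole \
      -p 53:53/tcp -p 53:53/udp \
      -p 8080:80/tcp \
      -e TZ=America/New_York \
      -e WEBPASSWORD=changeme \
      -v "$(pwd)/etc-pihole:/etc/pihole" \
      --restart unless-stopped \
      pihole/pihole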







  • I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container, something along the lines of the sketch below. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
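    To give a feel for it, here is a rough sketch of that kind of Dockerfile, assuming the public steamcmd/steamcmd base image; the AppId, ports and launch script are placeholders (896660 is the Valheim dedicated server AppId, other games need their own values and start commands):

    # Pull the game server files with SteamCMD at image build time
    FROM steamcmd/steamcmd:ubuntu
    ARG APP_ID=896660
    RUN steamcmd +force_install_dir /server +login anonymous +app_update ${APP_ID} validate +quit
    WORKDIR /server
    # Game ports vary per title; Valheim defaults to 2456-2457/udp
    EXPOSE 2456/udp 2457/udp
    # Keep save data outside the image so rebuilding for updates doesn't wipe worlds
    VOLUME /server/saves
    ENTRYPOINT ["/server/start_server.sh"]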



  • a Bambu Labs compatible heat sink, an E3D V6 ring heater, and a heat break assembly are required
    a fan was sacrificed to mount a Big Tree Tech control board. Most everything ended up connecting to the new board without issue, except for the extruder.
    made a custom mount for the ubiquitous Orbiter extruder.
    The whole project was nicely tied up with a custom-made screen mount.

    So, other than the enclosure and print bed, what’s actually left of the original printer? It seems like the way to get a Bambu printer to run FOSS is to open the box from Bambu Labs, toss everything inside the box in the trash, drop a custom built printer in the box, and then proceed with your unboxing.


    I have to agree with @paf@jlai.lu on this. I’d much rather have those models as part of the ecosystem than not. I do think part of the 3d printing hobby is learning to look at a model and recognize what can be printed on what type of printer, where supports are needed and where modifications may need to be made. For example, I recently purchased a model through TitanCraft, and the models they create are clearly designed with a resin printer in mind. They have some small features which are difficult or impossible to print on an FDM printer. While I knew that mini-figure models can be challenging on FDM, I went ahead with the purchase anyway, and the resulting mini-fig’s staff was so thin my printer just couldn’t print it cleanly. I had to load the STL into Blender and spend an hour or two separating the staff out from the rest of the model, then thickening it considerably. Sure, the haft of the staff is a bit thick for the proportions of the model, but not too bad.

    I make a similar evaluation of stuff I see on the various model sharing sites, before I try to print it. Does it need supports? Are some of the details going to be very hard or impossible for my printer to make? Should I split the model? And, while I am pretty crap at Blender, I may consider doing some simple edits to make a model easier to print and/or make changes I want. For example, I liked these ghosts but didn’t care for the spring and just wanted them hollow so I could stuff a UV LED inside them. With glow in the dark PLA, these look neat at night. So, I beat my head against Blender until I had them how I wanted them.

    So, I wouldn’t want to stifle other people’s creativity. Let them create, and enjoy the fact that people are willing to create and release this stuff for you to print. If it doesn’t work out, fix it and re-release it.




  • I’d suggest looking into some sort of auto bed leveling upgrade. My previous printer (Monoprice MP10 Mini) had its bed leveling sensor fail, and it wasn’t replaceable. The amount of futzing with first layer settings was a nightmare, even with a glass bed. My new printer (Creality K1C) does automatic bed leveling with a load sensor, and the difference has been night and day. For most prints, I can hit start and not have to fight anything (except TPU, holy hell TPU has been a fight). The sensor won’t guarantee perfect first layers, but goddamn it’s a lot easier to get something reasonable.





  • It’s been a few years since I did my initial setup (8 apparently, just checked); so, my info is definitely out of date. Looking at the Ubuntu site, they still list Ubuntu 16.04, but I think the info on setting it up is still valid. Though, it looks like they only cover setting up a mirror or a stripe set without parity. A mirror is fine, but you trade half your storage space for complete data redundancy. That can make sense, but usually not for a self hosting situation. A stripe set without parity is only useful for losing data, never use this. The option you’ll want is a raidz, which is a stripe set with parity. The command will look like:

    zpool create zpool raidz /dev/sdb /dev/sdc /dev/sdd
    

    This would create a zpool named “zpool” from the drives at /dev/sdb, /dev/sdc and /dev/sdd.
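    Once it’s created, a couple of read-only commands will confirm it came out the way you expect:

    zpool status zpool    # shows the raidz vdev and the health of each member disk
    zfs list              # shows the pool's dataset and how much usable space you ended up with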

    I would suggest spending some time reading up on the setup. It was actually pretty simple to do, but it’s good to have a foundation to work with. I also have this link bookmarked, as it was really helpful for getting rolling snapshots set up. As with the data redundancy given by RAID, snapshots do not replace backups, but they can be used as part of a backup strategy. They also help when you make a mistake and delete/overwrite a file.
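    The rolling snapshot setup is all in that link, but the underlying pieces are just ZFS built-ins. For example, against the pool from above (the snapshot name and file path here are just placeholders):

    # Take a snapshot of the pool's root dataset (a cron job or tool just does this on a schedule)
    zfs snapshot zpool@2024-05-01
    # List snapshots, roll back to one, or copy a single file out of the hidden .zfs directory
    zfs list -t snapshot
    zfs rollback zpool@2024-05-01
    cp /zpool/.zfs/snapshot/2024-05-01/some_file ./some_file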

    Finally, to answer your question about hardware, my recollection and experience have been that ZFS is not terribly demanding of CPU. I ran an Intel Core i3 for most of the server’s life and only upgraded when I realized that I wanted to run game servers on it. Memory is more of an issue. The minimum requirement most often cited is 8GB, but I also saw a rule of thumb that you want 1GB of memory for each TB of storage. In the end, I went with 8GB of RAM, as I only had 4TB of storage (3 2TB disks in a RAIDZ1). But also think about what other workloads you have on the system. When it was built, I was only running NextCloud, NGinx, Splunk, PiHole and WordPress (all in docker containers), and the initial 8GB of RAM was doing just fine. When I started running game servers, I started to run into issues. I now have 16GB and am mostly fine. Some game servers can be a bit heavy (e.g. Minecraft, because fucking Java), but I don’t normally see problems. Also, since the link I provided mentioned it, skip ECC memory. It’s almost never worth the cost, and for home use that “almost never” gets much closer to “actually never”.

    When choosing disks, keep in mind that you will want a minimum of three disks for a raidz1, and you effectively lose the storage space of one of the disks in the pool to parity (assuming all disks are the same size). It is also best for all of the disks to be the same size. You can technically use different size disks in the same pool, but the larger disks get treated as the same size as the smaller ones. So long as the pool is healthy, read speeds are better than a single disk, as reads can be spread out among the pool. But write speeds can be slower, as the parity needs to be calculated at write time. Otherwise, you’re pretty free to choose any disks which will be recognized by the OS. You mention that 1TB is filling up; so, you’ll want to pick something bigger. I mentioned using spinning disks, as they can provide a lot more space for the money. Something like a 14TB WD Red drive can be had for $280 ($20/TB). With three of those in a RAIDZ1 pool, you get ~28TB of storage and can tolerate one disk failure without losing data. With solid state disks, you can expect costs closer to $80/TB. Though, there is a tradeoff in speed. So, you need to consider what type of workloads you expect the storage pool to handle. Video editing on spinning rust is not going to be fun. Streaming video at 4k is probably OK, though 8k is going to struggle.

    A couple of other things to think about are space in the chassis, drive connections and power. Chassis space is pretty obvious, you gotta put the disks in the box. Technically, you don’t have to mount the disks, they can just be sitting at the bottom of the case, but the resulting heat can shorten the lifespan of the drives. It’s best to have them properly mounted with fans pushing air over them. Drive connections are one of those things where you either have the headers or you don’t. Make sure your motherboard can support 3 more drives with the chosen interface (SATA, NVMe, etc.) before you get the drives. Nothing sucks more than having a fancy new drive only to be unable to plug it into the motherboard. Lastly, drives (and especially spinning drives) can be power hungry. Make sure your power supply can support the extra power requirements.

    Good luck with whatever route you pick.