

Intriguing, but not within the scope of this post. I’m not asking for KVM solutions.
Also find me on sh.itjust.works and Lemmy.world!
https://sh.itjust.works/u/lka1988
https://lemmy.world/u/lka1988


I’m already very familiar with the AMT portion of vPro; all three of my Proxmox nodes have it enabled and locked down. Really handy to get in there when needed. The KVM route is rather expensive, as I would need one that supports at least 5 systems.
vPro’s out-of-band management is the entire reason I use it, because my little lab is tucked in a utility room all the way in the basement, where I would have to cross the treacherous lands of scattered children’s toys.
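For anyone curious what that out-of-band access looks like in practice, here’s a rough sketch using the `amtterm` package on Linux (the IP and password are placeholders, not my actual setup):

```shell
# Open a serial-over-LAN console to an AMT-enabled node.
# IP address and password below are made up; use your node's AMT credentials.
amtterm -u admin -p 'amt-password' 192.168.1.50
```

AMT also exposes a small web UI for power control on port 16992 (16993 for TLS), which is handy when you just need to force a reboot.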


I’ve already built the “new” NAS. Just trying to figure out the CPU situation before I take the plunge and swap the data drives over.
As for documentation, it really depends on the vendor, but the general process is the same overall. Here’s a PDF guide from MeshCentral that goes into more detail.
I use the CPU lists on Wikichip (Kaby Lake linked) to figure out which CPUs support vPro. Something to keep in mind: both the CPU and the motherboard must support vPro for it to work properly.


Also a good point. Speaking of, that generation Optiplex SFF had a 300W PSU as an option in the XE3 variant (basically a 7050 meant for point-of-sale use) vs the stock 180W PSU. It’s plug-and-play, too. One of my Proxmox nodes runs a 7050 SFF with that PSU. It’s rock solid.


Yes, exactly. The shop was probably thinking of the cheap Molex ones.


I went to a local computer store and they were not very helpful. I asked if I could use a splitter for the power port and they said I would fry my board.
They aren’t wrong. Those SATA power splitters can be problematic due to subpar wiring and have been known to burn/melt.


Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There’s also a licensing system in place with limitations on what kind of systems you can connect to in the community edition as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.
I appreciate the up-front attitude here, legitimately.


Sounds like it’s time to fire up another dedicated VM.


Card games and board games with people. New Year’s isn’t really the time or place for this kind of thing.


Planka is fantastic kanban software. There is a 3rd-party mobile app, but it hasn’t yet been updated to support Planka v2. Planka’s own mobile web UI is better than it used to be, but it’s not quite there yet.


The mobile app is 3rd party and has not yet been updated for Planka v2.
Source: I use Planka a LOT.
No need to be antagonistic. I merely suggested the method I use for my home lab after learning the “hard way” to containerize and separate certain things.


OpenMediaVault is based on Debian. I think it’s currently OMV 7, but I’m not at home at the moment so I can’t check. Very solid system though.
This is absolutely overkill
Hardly. Keeping the file server separate is good for reliability in case you bork an unrelated service, so you don’t take out everything else with it. That’s also partly why things like VMs, LXC, and Docker exist.


My NAS is a 2014 Mac Mini running OMV. It works great; very capable little Linux machine. Don’t bother with macOS.
There you go, that’s another option.
For the file server conundrum, something to keep in mind is that Proxmox is not NAS software and isn’t really set up for that kind of thing. Plus, the Proxmox devs have been very clear about not installing anything on the Proxmox host itself that isn’t absolutely necessary.
However, you can set up a file server inside an LXC and share that through an internal VLAN inside Proxmox. Just treat that LXC as a NAS.
For your *arr stack, fire up an exclusive VM just for them. Install Docker on the VM, too, of course.
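A rough sketch of that LXC-as-NAS idea, for reference — the container ID, template name, bridge, and paths are all placeholders you’d swap for your own setup:

```shell
# Create an unprivileged Debian container to act as the file server.
# Template name and IDs are illustrative; vmbr1 stands in for the internal VLAN bridge.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname nas --unprivileged 1 --memory 1024 \
  --net0 name=eth0,bridge=vmbr1,ip=dhcp

# Bind-mount the host's storage into the container:
pct set 200 --mp0 /mnt/tank,mp=/srv/share

pct start 200
# Inside the container, install Samba or NFS and export /srv/share as usual.
```

The bind mount keeps the data on the host’s storage, so the container itself stays disposable.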
LLMs
If you’re gonna use that, please make sure you comb through the output and understand it before implementing it.
I use a Ryzen 3600X and a 5600 or 5700 XT with 16GB of RAM
Solid. My gaming PC runs a 5800X3D, 7900XTX, and 32GB RAM.
My idea was to get a new, smaller case to fit my mITX board and PSU in, and use the old one with a CPU which supports “all” codecs, 32GB RAM.
Fair. For what it’s worth, the 3600X will easily support 4K streaming.
The old case has enough space for everything I’ll ever need, but the question is whether it would be worth the effort.
“worth the effort” is highly subjective. IMO, never take on a hobby with the expectation of a return on your investment - you’ll never see it. Do it to learn and further your knowledge.
As far as the case goes, most “standard” ATX cases should fit your needs. I harvested the case from an old HP Proliant ML110 G2 from 2004 which, shockingly, is (mostly) ATX-compliant, and will become the new home of my NAS…at some point.
With transcoding ticked off my issue list, my last remaining question is storage: whether using USB-C-connected direct-attached storage (DAS) enclosures to set up a file server is inherently problematic or not.
My NAS is strictly just a NAS: a 2014 Mac Mini running OpenMediaVault, with a Sabrent DS-SC4B 4-bay hard drive enclosure connected via USB. All 4 drives are in a RAID5 array.
My Plex and Jellyfin instances run within a VM under Proxmox, on an entirely separate machine. It works pretty well for what it is. Though, like I mentioned above, I plan on moving the whole NAS to a larger case where I can mount the drives inside and directly connect them to the motherboard, instead of relying on USB.
I don’t understand ZFS
Neither do I, and I have neither the time nor energy to figure it out. Solidarity!
and Docker would yield a ton of chaos if I used it.
Docker is actually pretty sweet once you get the hang of it. I would recommend skipping `docker run` commands entirely and going straight to Docker Compose. You write a “compose” file (in YAML) that defines the service, container, volumes, ports, and other (optional) environment variables. Keep your compose files in a central place, separated by folder, because they all tend to be named compose.yaml; it can live anywhere as long as you have the right permissions. Then point your terminal at whichever compose file you want to run and tell Docker to fire it up with `docker compose up -d`. And the neat part is that most self-hosted projects already ship an “example” compose file that is easily tweaked to fit your own use case.
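For illustration, a minimal compose.yaml might look like this (the Uptime Kuma image and port are real, but the paths are placeholders to adapt):

```yaml
# compose.yaml - example single-service stack; adjust paths to taste.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    ports:
      - "3001:3001"          # host:container
    volumes:
      - ./data:/app/data     # persists app data next to the compose file
    restart: unless-stopped
```

From that folder, `docker compose up -d` starts it and `docker compose down` tears it down.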
There is also a project called “Dockge” (not a typo) that really helps to streamline that whole process with a simple web UI. Made by the same dude who created Uptime Kuma. I run Dockge on everything that runs Docker, including my laptop and gaming PC. They can all be linked together.
Setting up shared network directories in a somewhat polished user interface seems more achievable for me without causing a bottleneck.
For this, I use NFS mounts. OMV makes it pretty easy. Those mounts are then mapped to the appropriate containers inside of my compose.yaml files.
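As a sketch of that mapping (the NAS address and export path here are made up), an NFS share can be declared as a named volume right in compose.yaml:

```yaml
# Hypothetical NAS address and export; requires the NFS client on the Docker host.
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"
      device: ":/export/media"

services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - media:/media        # the NFS share appears inside the container at /media
```

Docker mounts the share when the container starts, so the container never needs to know it’s talking to a separate machine.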
But I had issues when I rebooted VMs/containers with USB passthrough, which took too long to recover. A dedicated NAS would mitigate that issue but would be more costly.
I always advocate for a dedicated NAS, because you can reboot VMs and containers on a separate hypervisor (even the hypervisor itself) willy-nilly without affecting the actual files.
At the moment, I am looking at a TerraMaster D5 DAS to give my file server a trial…
Looks nice, but I wouldn’t recommend hardware RAID. If the hardware dies, your data is fucked. With software RAID like mdadm, you can move the array between machines with zero issues as long as the new machine has mdadm installed; it recognizes the array immediately. Really handy.
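To illustrate that portability (device names and mount point are examples, not prescriptions), reassembling the array on a replacement machine is basically:

```shell
# On the new machine, with the drives attached and mdadm installed:
sudo mdadm --assemble --scan       # scans the drives' superblocks and reassembles the array
cat /proc/mdstat                   # confirm the md device came up and is healthy
sudo mount /dev/md0 /mnt/array     # mount point is illustrative
```

The array metadata lives on the drives themselves, which is why no per-machine configuration is needed.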
What hardware is your current PC running? Are you intending to replace the aforementioned Thinkpad with this PC? Or are they one and the same?
Something to remember is that “NAS” is just an acronym for Network-Attached Storage, i.e. a fileserver. That’s it. It is not specific to any particular software or hardware, so long as whatever you implement functions as a fileserver. But unfortunately, like most things, that acronym has been co-opted by many companies as a catch-all marketing term for their proprietary “home server that does all the things” systems.
Fair points. My entire homelab setup of five PCs pulls a total of 90-120W at any given time.
I’m gonna go check that 6th gen now that I’m home…