Conversation

a tiny mouse lost in a labyrinth of tables in a Postgres database

Hngh. I really, really wanna build a home NAS, which would also kind of double as "the rest of my self-hosted services".

It's... ugh. Expensive! Even if I cheap out on buying storage, and get the bare minimum, it's still a hefty price.

Refurbished ThinkCentres would be cheaper, but... I need more storage space than ThinkCentres have, I think. I'm currently using ~4TiB, 3 of that on HDD, 1 on SSD. I don't want to spread it over N+1 computers, I want a nice little raid in one, and an off-site backup (but that's already taken care of).

I also need to have space for a few more drives, both SSD and HDD. So... yeah, ThinkCentres are not suitable for this purpose.

I couuuld maybe buy a cheaper ThinkCentre, just so I can move my services into the homelab, and then invest into a NAS later. But once I have a NAS, the ThinkCentre would become rather pointless.

Like, the setup I had in mind, with a motherboard that has 6 SATA + 2 M.2 (ASRock B550M Pro4), a nice case (JONSBO N4), 32GB RAM, and 128GB + 1TB SSDs, clocks in at ~€750.

This setup is enough to move the rest of my self-hosted services into the homelab, and thanks to 6xSATA, I can add more drives later and turn it into a NAS too.

A ThinkCentre with 16GB RAM and a 1TB SSD (~€250) would be capable enough for my self-hosting needs, but I'd still need another box for storage then, so the total price would go up considerably.

In the long run, I do want my services & my NAS separated. That's not practical in the short term, though.

My desktop doubles as the family NAS at the moment, and that's fine for now. Far from ideal, but it works.

Moving the rest of my self-hosted services home is much more pressing, and more important. But I don't want to move that to a setup that is hard to extend.

On the other hand, the difference between a ~€250 ThinkCentre that is sufficient for services, and a ~€750 custom build that will be able to act as a NAS too in the longer run, is quite noticeable.

Hrm. Maybe I'll go with the ThinkCentre first anyway. Put it in a nice rack, and then later, I can place the NAS either in the rack, or on top of it. And then I'd have a usable switch, too.

Or, as multiple people suggested in the thread, something like a refurbished Dell PowerEdge R730. That's ~€325 with 32GB RAM and dual Xeons.

The custom build I clicked together would be ~€640 without drives, so the PowerEdge is half that. Less CPU power, but I don't care much about that right now. The most CPU-intensive thing I plan to do is CI builds, and I can set up my desktop & perhaps my work laptop to provide runners.

The Dell is likely louder, but... so is my desktop, so who cares.

I mean, as long as it's not louder than my desktop, it should be ok.

Hm. This shop here has the HP ProLiant DL380p too. 8-core Xeon, 16GB DDR3 RAM, 8x 3.5" hot-swap HDD bays, for like ~€140.

With a drive thrown in, that's comparable to a ThinkCentre, and can be turned into a NAS. CPU & RAM should be enough for the self-hosted services too, until I move those to a dedicated host.

From what I can tell, I can put an SSD into this thing (HP ProLiant DL380p Gen8) too?

That'd be perfect.

There are also a number of IBM, Intel, Cisco and other vendors here, all refurbished, with various amounts of RAM and CPUs.

Nice, nice.

I think I found the place to buy from, now I wait until I have the means!

~€300 for a server that can be turned into a NAS, plus enough drives to move the self-hosted stuff onto, feels like a better deal than a ~€250 ThinkCentre that's never gonna be a NAS.

Heck, €300 server + €250 ThinkCentre is still cheaper than my original custom build.

a tiny mouse lost in a labyrinth of tables in a Postgres database

Okay, let's do a quick chart!

  • ThinkCentre M910q Tiny: 16GiB RAM, 1TB SSD, 1Gbit ethernet, ~65W/90W PSU, 4-core Intel i5-7500T 2.7GHz CPU, ~€275.
  • HP ProLiant DL380p Gen8: 16GiB RAM, no disks, 1Gbit ethernet, 2x750W PSU, 8-core Xeon E5-2650 CPU, ~€130 + SSD. 8x 3.5" drives.
  • Dell PowerEdge R630: 32GiB RAM, no disks, 4x1GbE ethernet, 2x495W PSU, 6-core Intel Xeon E5-2620 CPU, ~€240 + SSD. 8x 2.5" drives.
  • Dell PowerEdge R730: 32GiB RAM, no disks, 4x1GbE ethernet, 2x750W PSU, 2x 8-core Intel Xeon E5-2630 CPU, ~€260 + SSD. 8x 2.5" drives.
  • Intel Rwhatever: 16GiB RAM, no disks, 2x750W PSU, 6-core Xeon E5-2620 CPU, 4x1GbE ethernet, ~€235 + SSD. 12x 3.5" drives.
  • IBM System x3550 M5: 16GiB RAM, no disks, 2x550W PSU, 8-core E5-2620 CPU, 4x1GbE ethernet, ~€210 + SSD. 8x 2.5" drives.
  • Custom: 32GiB RAM, 128GB + 1TB SSD, 750W PSU, AMD Ryzen 5 CPU, 1GbE ethernet, 6 drives. ~€750.

Well, that's not a terrible selection.

Well, apart from the custom, because that feels like huge overkill, and is quite pricey compared to the rest.

Okay, so... hm. There aren't many 3.5" SSDs, so I guess the 3.5" options are out? I do need at least one, but preferably two SSDs, and for the NAS, I don't need more than 1TB of SSD.

So, uhh, yeah, the 3.5" options are buh-bye.

That leaves me with the ThinkCentre M910q Tiny (not for NAS, but for services), the Dell PowerEdges, the IBM System x3550 M5, and... yeah, let's not count the custom for now.

@algernon there are conversion caddies that you can screw a 2.5" drive into, to put it in a 3.5" bay

@algernon also, it's hard to get large non-shingled HDDs in 2.5" form factor

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl FML! 3.5"s back on the table then, 2.5"s out, I guess.

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl ack, came up with the same results. 2.5"s are definitely out then.

@algernon oh, and speaking of servers and disks

Servers tend to have the disks connect to a SAS backplane (works with SATA disks too), which is then connected to either a hardware RAID controller or an HBA card.

If it's an HBA, that just passes through each disk as a separate SCSI device to the host OS.

If it's a RAID controller, you have to define RAID arrays and add disks to them in the controller's settings, either through the BIOS menu, or vendor tools on Linux.
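
(A quick, rough way to tell which of the two you ended up with once the box boots — a sketch assuming stock Linux tooling:)

```
lsblk                 # HBA: every physical disk shows up as its own block device
lsscsi                # HBA: one SCSI device per disk; RAID: one "virtual disk" per array
smartctl -i /dev/sda  # talks directly to HBA-attached disks
```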

@algernon Of course, you can do a single-disk RAID0 for every disk, and handle the rest in the OS, but it doesn't pass through all the stuff, like smartctl, etc.
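
(Though for SMART specifically, smartmontools can often still reach disks behind a RAID controller if you tell it the controller type — a sketch, assuming a MegaRAID-family or HP Smart Array card:)

```
# first physical disk behind a MegaRAID-family controller
smartctl -a -d megaraid,0 /dev/sda
# HP Smart Array controllers (like the P420i) use the cciss type instead
smartctl -a -d cciss,0 /dev/sg0
```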

On some RAID cards you can flash alternative firmware that makes them behave like an HBA (e.g. "IT mode" firmware for Dell PERC controllers).

Although there are also "cheaper" (originally) servers that don't have a backplane, where you have to connect a SATA cable to each HDD, and it goes straight to the motherboard.

@algernon
Best to check the server's manual before buying.
They're available as PDFs on vendors' websites, and at least the Dell manuals are very comprehensive, with lots of pictures and instructions on how to replace even the parts that you're not supposed to replace yourself.

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl Oh. Lovely. Why can't things be easy sigh.

@algernon because you're jumping head-first into an unfamiliar field that is optimized for a bit of a different use case...

(Windows can't boot from software RAID)

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl But but but but but...

Oh. Windows mentioned. I can blame Windows?! I WILL BLAME WINDOWS! Fuck windows for making my life hard!

Now I feel better.

@algernon see? :D

Also (re "I don't know these words"), I don't know how much you're complaining vs. how much you actually want answers...

I could keep replying but I don't want to overflow you with information

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl It's mostly venting. I have a relative who is very good at hardware, and enjoys it, so once I have something reasonable-looking, I can ping him to verify it makes sense, and will work for what I want it for.

So I won't need to understand any of those fancy words!

(Half my brain would love to be more comfortable with hardware, but the other half is sabotaging it. Whenever I try to dive deeper into hardware, it shuts off.)

@algernon server hardware has many peculiarities on top of PC hardware, but if your relative has experience with that, you should be good.

I was lucky enough that while I was still at uni, I got a part-time job at an organization that still had its own servers, and I managed everything from the hardware to the Python webapp.

I can tell you some hardware "war stories" if you want, though I guess now might not be a great time.

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl Oh, I'm always happy to hear stories! If you're in a story telling mood, please go ahead!

I worked at a couple of places where they had their own server hardware, but luckily for me, in every single case, there was someone else who dealt with any and all hardware things, and I could avoid all that, and focus on the software parts.

The closest I ever got to hardware is when I replaced hot-swappable switches in my keyboard.

No, I did not put my desktop PC together either.

@algernon
Ok, so story time.

At $previousJob we had a bunch of old PowerEdge 1950 servers acting as a worker pool for some app.

However, some of them were down. Not only did their main IPs not respond to anything, even their management IPs weren't reachable. You couldn't ask the BMC whether a server was on or off, or try to restart it.

So I thought, Dell probably made a buggy BMC that hung for some reason. Gonna have to go to the server room and restart it.

1/

@algernon
So the next time we were able to go to the server room, we looked at those unreachable servers, and most of them were on, fans spinning... we may've even been able to plug a monitor into some of them, and see them stuck in a kernel panic or sth...

Anyway, we unplugged the power from them, plugged it back in, and the BMC became reachable.

But later, we noticed that when we tried to reboot those servers with Linux's reboot(8), they got stuck in that state again.

2/

@algernon
Curiously, they don't get stuck if you hard-reset them using the BMC.

Oh, and fun fact: there was only one ethernet cable plugged into each of those servers.

The same network card was being shared by both the BMC and the host system.

One NIC, two MAC addresses, two IPs.

And of course we couldn't see what was going on on the screen when we tried to reboot one of those by watching the serial console through the BMC, because the BMC lost access as soon as we typed `reboot`.

3/

@algernon

We just lived with that issue, only doing hard resets, until one day I tried running ifdown on one of those servers.

I think I ran it through serial-over-LAN, knowing that I'd lose SSH. Or maybe I ran it through ssh with a follow-up sleep and ifup, knowing I could restart the server with the BMC if anything went wrong.

Well... that ifdown disconnected the BMC too.

It turned out that `ip link down` on the shared NIC actually brings the link down, even though the BMC is still using it.
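
The whole footgun, condensed (assuming the shared NIC shows up as eth0):

```
ip link set eth0 down   # takes the link down for the host *and* the BMC
ip link set eth0 up     # brings it back -- if you still have some way in
```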

@algernon
Oh, and also, ip link up would bring it back.

And also, there was an ifdown being run as part of Debian's shutdown/reboot sequence. And since the rootfs was mounted over the network via NFS, it would get an I/O error after that, unable to finish rebooting, so it'd never bring the interface back up.

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl :D :D :D :D

Now this is the kind of story that is great fun to read, but was probably very annoying to live through. Love it that each WTF discovery led to an even worse WTF discovery.

Thank you!

Oh, wow. This shop has a Sun SPARC T3-1 for ~€530.

How well is that thing supported on Linux? I assume poorly. Maybe a BSD? But... this would be a vanity project, and... let's not go there.

So... uhhh... I'm told 2.5" may not be a smart choice for HDDs. Nor do I need 3.5" SSDs, because conversion caddies exist. I should've known this, I have one in my desktop PC.

This is why I hate hardware. I utterly suck at it. I don't even know what half the words mean.

@algernon what is the purpose of your NAS? Is it leaning more towards performance or more towards capacity? What kind of software do you plan to run as the NAS?

a tiny mouse lost in a labyrinth of tables in a Postgres database

@ij In the long run: mostly cold-ish storage. Music, picture & video collection of the family, backups, and long-lived assets (which will be cached elsewhere, so the access speed isn't very relevant).

I plan to run NixOS on it, with either SeaweedFS or Garage (or a combination of those), and maaaaybe a few other things.

Movies will likely not be streamed directly from there, I plan to use another server for Jellyfin. But that server would cache the active movies from the NAS. Although, if I can put an SSD into the NAS, then I guess I can use that to cache the relevant things, and stream from there.

In the shorter term, I might run the rest of my self-hosted things on it too: GotoSocial, Forgejo, email (postfix + dovecot + rspamd, etc), and XMPP, and my old Mastodon server (though, I'm tempted to wind that down). These will eventually run elsewhere, but my #1 priority atm is moving them to my homelab.

So, leaning more towards capacity.

@algernon Well, if you plan to use NixOS, then TrueNAS wouldn't be an option for you, I guess. It would basically cover all the requirements, either by installing something like Jellyfin from the Apps catalog, or by running your desired software in a VM or container...

a tiny mouse lost in a labyrinth of tables in a Postgres database

@ij Yup, TrueNAS isn't an option. Their hardware is waaay out of my budget, and on the OS/software side, I have things covered.

@algernon actually, it was in the period of my life when I was like "cursed shit? yes please, tell me all about it!"

so it was fun to live through

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl my "cursed shit? yes please" period was limited to getting Linux running on somewhat cursed hardware (without fiddling with the hardware much).

My main workstation was an... uhh... I think it was an IBM ThinkPad 755C or so. 486DX-2, 2 or 3MiB RAM... not sure about the hard disk, but it had no ethernet, nor a parallel port (or if it had one, it didn't work), so I had to do some SLIP crimes to connect to the internet. This was in ~2002-2003. Upgrading Debian on that was a PAIN (at one point, I had swap on NFS, because I ran out of disk space to use for swap...). Still have it somewhere, and it still booted 2003-era Debian sid the last time I found it and turned it on (~2 years ago).

I also had a PowerMac G4 (Quicksilver, don't remember the CPU & RAM, but it is sitting under my table at the moment, right next to my leg :P), which wasn't as cursed as the laptop, but... it had some Problems™. Like, it refused to show anything on the screen until booted into an operating system, no matter how hard I tried. It also defaulted to trying to boot from CD. Installing Linux on that was a pain in the backside, and I sadly do not remember how I did it anymore. Last time I tried to turn it on, it didn't boot into Linux, and as the screen is blank until then, my only hope would be to attach something to it on serial. Except it has a proprietary serial port.

Also ran a DEC Alpha that was almost thrown out by the university, because it crashed all the time, and they didn't have the budget to get it repaired, nor to replace the faulty part. A friend of mine made some cursed hardware hacks, which he explained, but I never understood. That resulted in it only crashing about once a week. So I used it as a server for a couple of years, rebooting it from a cron job every 2-3 days.

Aaah, good times.

Not nearly as cursed as yours, but... closest to cursed hardware I ever got to =)

@algernon idk, that hardware fix for the DEC Alpha and a cron job to reboot it are probably up there...

The only cursed thing in my story is what the NIC vendor did, and what Debian did.

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl NIC vendor + Debian's stuff => doubly cursed!

@algernon
Also, I was thinking about why my patience for this stuff was much higher before I graduated...

Maybe because university was more stimulating, and I was used to learning new stuff every day

Maybe because whenever I got frustrated, I could always switch to "the other thing I'm supposed to be doing"

Maybe because university assignments were designed to be cleanly solvable

Maybe because at that point I still believed everything is the way it is for a reason, even if the reason is stupid

@algernon like, I wasn't the one that made it cursed

a tiny mouse lost in a labyrinth of tables in a Postgres database

@wolf480pl Of course, I didn't mean to imply otherwise, sorry. You only suffered the double curse!

HP Proilant DL380p Gen8 665552-B21 Intel Xeon Eight Core E5-2650 2GHz 8Core HT 16Threads maxTurbo 2,8GHz FCLGA2011 20MB Cache 8GT/s 95W CPU SR0KQ Processzor, 16GB DDR3 RAM, 8x LFF 3,5' HDD Hot Swap bay, 0GB HDD, HP Smart Array P420i 1GB FBWC RAID RAID 0/1/1+0/5/5+0/6/6+0, HP Smart Array 1GB Flash Backed Write Cache FBWC Module P420 P421 P430 P431 P822 P222 HP 633542-001 610674-001, Optional HP Flash Backed Write Cache FBWC Battery Pack Capacitor Module P222, P420i, P420, P421, P430, P822 HP 660093-001, Intel C600 Chipset, HP 4port 1GbE 331FLR Adapter, Integrated Matrox G200 video standard, HP iLO (Firmware: HP iLO 4), 2x 750Watt Power Supply Redundans PSU, 2U Rack

I mean... excuse me, I understood the parentheses. What the flying fuck do all of these words mean.

Ok, so... let's see the 3.5" options?

That's the HP ProLiant DL380p Gen8, an Intel Rwhatever, and... that's about it? Hm. I may have to look again.

Oh, I will also need to figure out what filesystem I want to use. I don't want ext4; the LUKS+LVM+ext4 trio makes me a little sad, for... reasons. Anything else that needs LUKS & LVM is a no-go too. LUKS alone is... fine.

That pretty much leaves me with btrfs as the only in-tree filesystem, I believe.

I generally try to avoid out-of-tree stuff, but... $work infected me with a hint of ZFS appreciation. Maybe I should give it a go.

The Nix store will be on btrfs, and so will the persistence drive ('cos it is easier to boot that way), but the main storage? That's where I'm considering ZFS.
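
Something like this is what I'm imagining for the main pool (a rough, untested sketch, with made-up device names, LUKS sitting underneath):

```
# mirrored pool for the bulk storage; ashift=12 for 4K-sector drives,
# zstd compression is a cheap win on mostly-cold data (OpenZFS 2.0+)
zpool create -o ashift=12 -O compression=zstd -O atime=off \
  tank mirror /dev/mapper/luks-disk-a /dev/mapper/luks-disk-b
```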

@algernon I've recently finished rebuilding my NAS! What a coincidence. Some notes:

* Went from Silverstone CS380 to Fractal Design Node 304 case, nominally for six drives, but I can only fit in three. (Big old clumsy fingers.)
* Any cheap PC case will do, if it fits your drives.
* Any CPU will work. More RAM is more cache, but you'll be HDD-constrained for large I/O anyway, so maybe save money.
* An SSD for system is a good idea.
* Used hardware is fine.
* Wake-on-LAN is lovely.
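
(The gist of the WoL bit, as a sketch — assuming the NAS's NIC is eth0 and the usual tools are installed:)

```
ethtool -s eth0 wol g          # on the NAS: enable wake on magic packet
wakeonlan AA:BB:CC:DD:EE:FF    # from another box: wake it with the NAS's MAC
```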

a tiny mouse lost in a labyrinth of tables in a Postgres database

@liw Heh, I was eyeing the Fractal Design Node 304 case too, but I'm currently in love with the JONSBO N4. Same price range, supports 6 drives too, but a bit smaller, and has a nice, decorative wood panel in front.

CPU-wise I'm limited by what the motherboard supports, and I need a motherboard with 6 SATA + 2 M.2 (I want a smaller, ~128GB SSD for the OS, and a 1TB one for stuff that needs faster storage, like my GtS media, CI images & assets, that kind of stuff).

Since I plan to use SSD for speed-sensitive storage, the HDD constraints will be less severe, I expect.

The plan is to have stuff like pictures, music, video, on-site backups on HDD, because HDD speed is perfectly fine for those. I'd put those in raid, because they're important.

I don't need raid for the SSDs, because there's nothing super important there. If they fail, I have backups on the HDDs & off-site.

Wake-on-LAN is something I hadn't thought of! That's a great idea, thank you!

@algernon I'd probably use btrfs. If you do use ZFS, I recommend using LUKS rather than ZFS's encryption because you can't really ever change the password with ZFS (you can sort-of change it, but it's CoW so the old key isn't guaranteed to be removed and may be recoverable)

a tiny mouse lost in a labyrinth of tables in a Postgres database

@noisytoot Uh-huh. That's not great.

But using LUKS would mean I have to unlock each drive separately, before bringing up ZFS, right? Not hard, not even challenging, but inconvenient.

And if I don't use ZFS's encryption, then it has little to offer on top of btrfs for me. (My needs are rather modest, I believe.)

Thanks!

@algernon Yes, it might also reduce performance a bit, since it'd need to do the encryption separately for each drive it reads from or writes to. You can use the same password on all disks and have your initramfs only ask you once and cache the password. Debian apparently has decrypt_keyctl for this.
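
(Roughly like this in /etc/crypttab — the third column is a key identifier, and entries sharing it get unlocked with a single prompt; the names and UUIDs here are made up:)

```
# both drives use the same passphrase; decrypt_keyctl caches it per identifier
data_a UUID=1111-aaaa cryptkey luks,initramfs,keyscript=decrypt_keyctl
data_b UUID=2222-bbbb cryptkey luks,initramfs,keyscript=decrypt_keyctl
```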

Another thing you could maybe do is use ZFS encryption, but instead of using your password directly, use a long random keyfile which you store encrypted with your actual password on another filesystem, and if you want to change your password, just re-encrypt the keyfile instead of changing it in ZFS.
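
(Sketched out with made-up paths — the raw key never changes, only its encrypted wrapper does:)

```
# raw ZFS keys are exactly 32 bytes
dd if=/dev/urandom of=/run/keys/tank.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw \
  -o keylocation=file:///run/keys/tank.key tank/data
# "changing the password" then means re-encrypting tank.key (age, gpg, ...),
# without touching ZFS at all
```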

What I plan to do for my new server is btrfs RAID-1 in LUKS on 2 SSDs, with a weird boot process (coreboot loading linux as a payload with an initramfs containing sshd so I can SSH into it to unlock the disks remotely).

a tiny mouse lost in a labyrinth of tables in a Postgres database

@noisytoot Oh, the weird boot process I have mastered!

initramfs boots up with network preconfigured, and wireguard set up, then it tries to do some tang & clevis magic to get half the secret from my server; if that fails, I can SSH in, or unlock manually.

Also, / is tmpfs. There is no /, there is only /nix/store & /persist (both of which are LUKS+btrfs for convenience).

Great idea about the keyfile! I already use a secrets system where the passwords I need to give to various services end up as files under /run (with appropriate permissions and ownerships, of course). I should be able to point ZFS's keyfile at one of those, and done. That works!
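
(The tang & clevis half of that is basically one bind per LUKS device — the URL here is a stand-in for my wireguard-reachable server:)

```
# at boot, clevis fetches its half of the secret from tang automatically;
# the regular LUKS passphrase still works as a manual fallback
clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.wg.internal:7500"}'
```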

@algernon yeah, but I wish I made something cursed myself! or with friends!

@algernon tang & clevis seem interesting, I hadn't heard of them before.

@algernon 3.5" SSDs aren't really a thing. And frankly, the circuit boards in 2.5" drives barely occupy any of the available space

a tiny mouse lost in a labyrinth of tables in a Postgres database

@directhex yeah, I noticed =)

But caddies exist, so I can shove a 2.5" SSD into a conversion caddy, and into a 3.5" slot it goes, whee!

@algernon my 2023 build https://apebox.org/wordpress/tech/1325

Sadly the cost of the drives I used has gone from $200 to nearly $600, and I didn't buy any cold spares

a tiny mouse lost in a labyrinth of tables in a Postgres database

@directhex Heh, I see many similarities with the setup I came up with, but also a few possibilities where I could tweak mine.

Thanks!

@algernon the biggest change since then is I added a GPU last week, for H.265 and to enable doing image processing on a future planned security camera setup. Obviously this means fewer PCIe lanes available. PCIe lanes are really the biggest problem for DIY, as consumer CPUs only have around 24-28 lanes, and those lanes are allocated for maximum performance rather than maximum number of devices.
