A NAS build: part 4 - Getting new ideas and parts

| Jun 26, 2025

Howdy!

Rejoice people, rejoice! My homelab is taking shape: with the arrival of my first 2 HDDs and the components to hook them up, I was able to review and correct some of the previous steps and fix a few trivial mistakes.

Using my magic skill of transforming money into products, this time 2 Seagate IronWolf drives and a PCIe/SATA card landed on my table, together with a slim SATA cable to make good use of the case's backplane.

After dealing with the usual shenanigans related to the lack of space inside the case (mount this, move that, remove this, move that again and so on), I booted the homelab and… surprise, no sign of life from the Proxmox WebUI. I quickly hooked up my monitor and keyboard, and inside the machine everything seemed to be working. Everything except the LAN connectivity.

Wait a moment, I had already had this problem with the old GPU. With my 1050 Ti I could blame the card, but this time I was having the exact same problem with a brand new card in a different slot.

So, after some debugging, it turned out that the interface shown by ip a was different from the one in /etc/network/interfaces: I had an entry for enp0s7, but the actual interface was enp0s8.

I fixed the configuration, restarted the networking services with ifreload -a and systemctl restart networking just to be on the safe side, and it worked. Well, being the nerd I am, I had to try the GPU now.

I plugged everything back in, invoked a couple of ancient gods to assist me with the cable management, switched the machine on and… same problem! Although now I had enp0s8 in the config and enp0s9 in the output of ip a. I applied the same fix, and I wanted to cry. Of joy, of course! Everything was now working as intended and my heart felt so light knowing I wouldn't have to toss a GPU in the garbage.
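
For reference, this is roughly what the relevant part of /etc/network/interfaces ended up looking like after the second rename. A sketch, of course: the bridge name is the Proxmox default, and the addresses are placeholders, not my real ones.

auto lo
iface lo inet loopback

# the physical NIC, renamed from enp0s8 to enp0s9 after the GPU went back in
iface enp0s9 inet manual

# the Proxmox bridge has to point at the renamed interface too
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp0s9
        bridge-stp off
        bridge-fd 0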

Let’s talk a bit more about the goodies I got:

  • About the hard disks, not much to say, aside from the fact that they use a technology called PMR (perpendicular magnetic recording), which is equivalent to CMR (conventional magnetic recording). Alternatively, on the market there are disks (actually almost all consumer-grade disks) which use SMR (shingled magnetic recording): brilliant for the manufacturer, as it increases the density of the platters by slightly overlapping the magnetic tracks, but detrimental for ZFS users, because rewriting a track means rewriting its overlapped neighbours too. Unfortunately, due to the nature of ZFS, this behaviour can, and sooner or later will, lead to data loss. Someone in the ZFS community refers to SMR as the “slow kiss of death”;
  • A PCIe card equipped with the ASM1068 chip. I was very specific about this chip, because other ones, especially those pairing an ASM1064 with a JMB575, act as a port multiplier, which is not ideal for ZFS. Actually, let me tell you a little secret: ASM1068 is sometimes just a commercial name for the ASM1064, and often you simply find 2 ASM1064 chips on the same card. Anyway, before even thinking of hooking up the disks, I did a quick check with lspci -vnn | grep -A20 1064, and here are the results:
05:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1064 Serial ATA Controller [1b21:1064] (rev 02) (prog-if 01 [AHCI 1.0])
    Subsystem: ZyDAS Technology Corp. ASM1064 Serial ATA Controller [2116:2116]
    Flags: bus master, fast devsel, latency 0, IRQ 127, IOMMU group 15
    Memory at 82c82000 (32-bit, non-prefetchable) [size=8K]
    Memory at 82c80000 (32-bit, non-prefetchable) [size=8K]
    Expansion ROM at 82c00000 [disabled] [size=512K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [80] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [130] Secondary PCI Express
    Kernel driver in use: vfio-pci
    Kernel modules: ahci

06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1064 Serial ATA Controller [1b21:1064] (rev 02) (prog-if 01 [AHCI 1.0])
    Subsystem: ZyDAS Technology Corp. ASM1064 Serial ATA Controller [2116:2116]
    Flags: bus master, fast devsel, latency 0, IRQ 147, IOMMU group 16
    Memory at 82b82000 (32-bit, non-prefetchable) [size=8K]
    Memory at 82b80000 (32-bit, non-prefetchable) [size=8K]
    Expansion ROM at 82b00000 [disabled] [size=512K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [80] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [130] Secondary PCI Express
    Kernel driver in use: vfio-pci
    Kernel modules: ahci

So, no multipliers, no multiplexers… life is good! A quick mention of the fact that I seriously considered buying a SAS card, to further expand my storage capabilities beyond the 8 slots of the case, maybe with an external stack of hard disks. But I quickly abandoned the idea: I do not have that much data, and those cards heat up really quickly. Even if a pure SATA solution is not the best in terms of performance, it is more than enough for my use case.

Haste is a bad counselor

What kind of article would this be if I did not screw something up halfway through? So, here I did ;)

Basically, after connecting the hard disks and switching everything back on, I spun up a VM with Openmediavault, created the ZFS pool with the OMV plugin and started tinkering with Samba and rsync. Files were transferring from the old instance on my Raspberry Pi, but when I tried to access the SMART data from OMV, I could not see my IronWolf hard disks: they had become a QEMU EMULATED DRIVE, and of course there was no SMART data for them!

So, after reading some literature and feeding a couple of good prompts to ChatGPT, it turned out I was missing the whole point of a full passthrough. For what I had in mind to work exactly as I had imagined, I needed a PCIe passthrough instead of a disk passthrough. That way, the disks would be invisible to Proxmox and fully managed by OMV (the sketch after the list below shows the difference between the two approaches). It wasn’t just my fixation, this painful road has its benefits of course:

  • SMART read and managed by OMV;
  • Disks can idle properly, as Proxmox would not do its usual ZFS mumbo jumbo on them. Plus, with disk passthrough, even if the VM using the disks is idling, Proxmox would not spin them down;
  • Conflict avoidance between Proxmox processes and OMV on those disks.
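
To make the difference concrete, here is a rough sketch of the two approaches on the Proxmox side. The VMID 100 and the disk ID are placeholders, not my real ones.

# disk passthrough: Proxmox hands a single block device to the VM,
# which then sees it as an emulated QEMU drive (no SMART, no spindown control)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_ID

# PCIe passthrough: the whole SATA controller goes to the VM,
# and the disks behind it disappear from the host entirely
qm set 100 -hostpci0 05:00.0,pcie=1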

Thanks to ZFS, the first thing I did, after stopping any transfer and disabling Samba, was exporting the pool from the OMV WebUI. Basically, exporting a pool means safely ejecting it. Of course, doing this for the first time made me say a couple of prayers. In any case, losing what I had just copied would not have been a problem, as the original data was still intact.
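
For the record, what the WebUI does under the hood boils down to something like this on the command line (the pool name tank is a placeholder for whatever you called yours):

# safely detach the pool: unmount its datasets and mark it exportable
zpool export tank

# later, from inside the VM that now owns the controller:
# list the pools that can be imported, then bring it back
zpool import
zpool import tank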

Later on, after also switching off the OMV VM, I started tinkering to enable a full PCIe passthrough. This list of commands is more for me than for you, as your configuration might be different, so I do not take any responsibility if you blindly copy and paste and destroy your machine :P

  • Enable Intel VT-d or AMD IOMMU in your BIOS;
  • Edit GRUB with nano /etc/default/grub, updating this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  • Apply the settings and reboot with update-grub && reboot
  • After the reboot, find your PCIe card's address and vendor/device ID with lspci -nn | grep 1064. This should output something like 05:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1064 [1b21:1064]. In my case, 05:00.0 was the PCI address of my card and 1b21:1064 was the vendor/device ID (the snippet after this list shows how to dump the full IOMMU groups)
  • Bind the device to vfio-pci by creating or editing /etc/modprobe.d/vfio.conf (nano /etc/modprobe.d/vfio.conf) and adding the line options vfio-pci ids=1b21:1064
  • Blacklist the drivers from the kernel, so Proxmox won't load the modules and hence won't see the peripheral, with echo -e "blacklist ahci\nblacklist libahci" > /etc/modprobe.d/blacklist-ahci.conf. In my case the driver was ahci; to see which drivers your peripheral uses, check lspci -nnk | grep -A3 1064
  • Regenerate initramfs and reboot with update-initramfs -u -k all && reboot
  • After the reboot, check the driver status with lspci -nnk | grep -A3 1064. If you see Kernel driver in use: vfio-pci, congrats! You can pop the most expensive bottle of wine. If you do not, my friend… you are in for a long troubleshooting and debugging session.
  • Enter the config of your VM with nano /etc/pve/qemu-server/<VMID>.conf and add hostpci0: 05:00.0,pcie=1. In my case 05:00.0 was the PCI address of my card; check the previous steps to see how to find yours.
  • BEFORE BOOTING THE VM, make sure your VM config file has machine: q35, as PCIe passthrough is only supported there! It is worth noting that you can still use the default machine type without the pcie flag, even if it is a PCIe card: the device will simply be exposed to the guest as a legacy PCI device.
  • Save, pour your wine, and spin up the VM: everything should go smoothly.
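
As a final sanity check (and to answer the "which IOMMU group am I actually in?" question from a few steps above), this little loop lists every IOMMU group and the devices inside it, and qm config shows what the VM will actually get. A sketch, of course: adapt the VMID to your own.

# list all IOMMU groups and the devices inside them
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

# confirm the VM picked up the machine type and the passed-through device
qm config <VMID> | grep -E 'machine|hostpci'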

Of course this is a very low-level mix of concepts, as we are touching delicate aspects of our machines, and even if I grasped the basic ideas, the way everything works still looks like magic to me!

Final considerations

An authentic rollercoaster of nerd-motions! Being able to centralize the management of my NAS hard disks, knowing they are safe under Openmediavault and the rock-solid ZFS, makes me sleep a bit better at night. Also, I noticed that with the lower overhead, and maybe also by eliminating the virtualization layer for disk I/O, I gained a solid 12MB/s during my rsync. And the 1050 Ti? I think I will remove it, as it restricts the airflow too much, making my case's fan spin too fast for too long. But again, I learnt something, so for me it is still a great success!

That is everything for today, so…

Happy building ⚒