ZFS

SmartOS Home Datacenter - Update 1

 

Update 1: Alternative Migration Method via DD

It turns out that when it came time to put my previous backup/restore doco to the test, it failed with a "disk.*.0" JSON error from vmadm receive. I haven't looked into the cause yet, but I thought I would list the steps I used to recover my firewall image.

 

Recover /opt and /usbkey files 

 

1) All the hardware migration was complete, so I had one of the two "zones" disks sitting in a USB 2 caddy. However, now you have two zpools called "zones": what to do? You can rename the pool as you import it:

 

  zpool import zones usbpool

  

  NOTE: I had to use -f, but I have omitted it here so you can read up on the implications yourself first!

  

2) Mounting a ZFS dataset at a mountpoint (my GNU/Linux heritage showing) is a two-step process:

 

  zfs set mountpoint=legacy usbpool/opt

  mount -F zfs usbpool/opt /opt/usb

  

3) Fortunately, at this point I had a backup copy of the firewall's JSON (created with "vmadm get UUID >> vm.json").

 

  vmadm create -f /opt/usb/firewall.json

  

For me, step three recreated the firewall image with a new UUID rather than the one specified in the JSON. It didn't really impact me, but it was not something I expected.
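Since the UUID can change on create, it is worth comparing what the saved JSON says against what actually landed on the host. A quick sed one-liner pulls the uuid field out of the dump; the sketch below runs against a throwaway stand-in file (the path and contents are placeholders, not my real firewall.json), and on the real host you would compare the result against `vmadm list -p -o uuid,alias`:

```shell
# Write a minimal stand-in for the saved vmadm JSON (placeholder values only).
cat > /tmp/vm.json <<'EOF'
{
  "uuid": "19a7af4f-5441-4b3e-bedf-fd3a2c6223df",
  "alias": "firewall"
}
EOF

# Extract the "uuid" field with sed; on a real SmartOS host, compare this
# against `vmadm list -p -o uuid,alias` to spot a mismatch after vmadm create.
saved_uuid=$(sed -n 's/.*"uuid": *"\([^"]*\)".*/\1/p' /tmp/vm.json)
echo "JSON says: $saved_uuid"
```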

 

Recover VM images

 

1) Now that my VM has been created with a new UUID, we can migrate the disk image:

 

  dd if=/dev/zvol/dsk/usbpool/19a7af4f-5441-4b3e-bedf-fd3a2c6223df-disk0 of=/dev/zvol/dsk/zones/471c4e6c-8feb-11e2-8bff-e33695415b08-disk0
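A copy like this can be rehearsed on ordinary files first, to confirm the if/of plumbing before pointing dd at the real zvols. The temp files below are throwaway stand-ins, and a block size such as bs=1M speeds up the real transfer considerably over dd's 512-byte default:

```shell
# Rehearse the dd copy on throwaway files; the real run swaps these paths
# for the /dev/zvol/dsk/<pool>/<uuid>-disk0 devices shown above.
src=$(mktemp)
dst=$(mktemp)
printf 'pretend disk image payload' > "$src"

# bs=1M is a sensible block size for zvol-to-zvol copies (dd defaults to 512 bytes).
dd if="$src" of="$dst" bs=1M 2>/dev/null

# Verify the copy is byte-identical before trusting it.
if cmp -s "$src" "$dst"; then
  result="copy verified"
fi
echo "$result"
rm -f "$src" "$dst"
```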

  

2) Once complete, you can disconnect your USB disk with:

 

  umount /opt/usb

  zpool export usbpool

Building a SmartOS home data center

For the main bulk of my short career as a Systems Administrator I have sat squarely in the Red Hat Enterprise Linux (and friends) camp. However, I recently became disheartened with a QNAP TS-410 hosting all of my production guests and data at home, given a number of problems with the device, such as:

  • Weak hardware leading to sad NFS and iSCSI response times
  • Administration that is relatively painful via its GUI interface
  • Generally suffering from security issues in the firmware
  • Uses no battery-backed cache for the RAID10 (write cache disabled, furthering the performance pain)
  • I am paranoid about losing things, so the appeal of # zpool scrub is very high!

Finally the primary reason for looking into a solution to replace my existing/old kit was:

  1. Wife aggro - My existing kit makes far too much heat in our small office during a typical Queensland summer's day. It had to be consolidated!

So I needed to consolidate the following hardware:

  • 2x Intel E8200 hosts running as KVM hosts on CentOS
  • 1x Intel E4200 host running as a dedicated Sophos UTM 9 gateway
  • 1x QNAP TS-410 with 4x 1TB SATA disks, plus a further 2x external 1TB standalone disks
