Adding NFS Storage to the Data Virtual Appliance

Note: If you are using Horizon Workspace 1.5, check the notes at the end of the post.

Before letting users add their files to Horizon Workspace it is important to dedicate enough storage space.

To add space to the data-va you have 2 options:

  • Add a larger VMDK
  • Add an NFS mount

Since it’s not advisable to go the VMDK route if you have more than 6TB of data, I always prefer an NFS mount. Another advantage is that files won’t be sitting inside the data-va itself, so whatever happens to that virtual machine, we don’t have to worry about the data.

In my lab environment I use Nexenta as the primary means of storage for all needs, including VMware datastores, so I just added another share to export via NFS for the data-va.

I added the data-va IP address as root access for the share and also added the Extra Option “anon=0”.
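
Before running the mount script, it can be worth a quick check that the export is actually visible over the network. This is purely an optional sanity check and assumes the showmount and rpcinfo utilities are available on the host you run it from (they may not be present on every appliance):

# list the exports published by the Nexenta box (adjust the IP to your environment)
showmount -e 192.168.110.15
# confirm the NFS services are registered with the portmapper
rpcinfo -p 192.168.110.15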

SSH to the data-va with the user ‘sshuser’:

su -
cd /opt/vmware-hdva-installer/bin
./mount-nfs-store.pl --nfs 192.168.110.15:/volumes/vsphere_01/data-va

Note: “192.168.110.9” is my data-va interface on the Nexenta network segment, “192.168.110.15” is my Nexenta interface, and “/volumes/vsphere_01/data-va” is the path of my NFS export on Nexenta.

You should get an output like this:

NFS: 192.168.110.15:/volumes/vsphere_01/data-va
HOST: 192.168.110.15
192.168.110.15 is alive.
mount.nfs: timeout set for Wed Jul 31 11:47:39 2013
mount.nfs: trying text-based options 'hard,rsize=32768,wsize=32768,intr,addr=192.168.110.15'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.110.15 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.110.15 prog 100005 vers 3 prot UDP port 33327
192.168.110.15:/volumes/vsphere_01/data-va on /opt/zimbra/store29 type nfs (rw,sync,noatime,hard,rsize=32768,wsize=32768,intr)
Error occurred: directory does not exist or is not writable: /opt/zimbra/store29
zmvolume failed at ./mount-nfs-store.pl line 49.

See the error? Well, it’s kind of normal. I don’t know why, but this script always fails at the same point. After a while, and with the help of the VMTN Community, I figured out where it fails and how to finish the job manually.

Let’s first check that the NFS share is at least mounted:

df -h

Filesystem Size Used Avail Use% Mounted on
/dev/sda3 39G 1.9G 35G 5% /
udev 2.0G 152K 2.0G 1% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sda1 128M 21M 101M 17% /boot
/dev/mapper/zimbra_vg-zimbra 9.9G 1.1G 8.3G 12% /opt/zimbra
/dev/mapper/store_vg-store 9.9G 151M 9.2G 2% /opt/zimbra/store
/dev/mapper/db_vg-db 30G 1.2G 27G 5% /opt/zimbra/db
/dev/mapper/index_vg-index 9.9G 151M 9.2G 2% /opt/zimbra/index
/dev/mapper/redolog_vg-redolog 12G 159M 12G 2% /opt/zimbra/redolog
/dev/mapper/log_vg-log 9.9G 153M 9.2G 2% /opt/zimbra/log
/dev/mapper/backup_vg-backup 20G 174M 19G 1% /opt/zimbra/backup
/dev/mapper/data_vg-data 30G 173M 28G 1% /opt/zimbra/data
192.168.110.15:/volumes/vsphere_01/data-va 87G 32K 87G 1% /opt/zimbra/store29
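
The share is mounted. If you also want to double-check the options it was mounted with (hard, rsize and wsize in particular), you can read them back from the mount table; grepping for “store29” is just an example based on my mount point:

mount | grep store29
# or read the same information straight from the kernel
grep store29 /proc/mounts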

Let’s change permissions on the mount point so the zimbra user can write to the path:

chown -R zimbra:zimbra /opt/zimbra/store29
su - zimbra -c 'zmvolume -l'

The output would be something similar to this:

Volume id: 1
name: message1
type: primaryMessage
path: /opt/zimbra/store
compressed: false
current: false

Volume id: 2
name: index1
type: index
path: /opt/zimbra/index
compressed: false
current: true

Volume id: 3
name: store78
type: primaryMessage
path: /opt/zimbra/store29
compressed: false
current: true

If you see “type: primaryMessage” and “current: true” for the NFS mount point, you are good to go: the new path is now the primary storage for files.
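
Since the original error complained about the directory not being writable, an extra sanity check I like is to make sure the zimbra user can really create files on the new path. This is just a quick sketch and the test file name is arbitrary:

# create and remove a throwaway file as the zimbra user on the NFS mount point
su - zimbra -c 'touch /opt/zimbra/store29/.write_test && rm /opt/zimbra/store29/.write_test && echo writable'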

You might find yourself with the NFS share mounted but with the entry missing from the output of ‘zmvolume -l’; in that case, after changing the permissions, we create the entry manually:

su - zimbra -c 'zmvolume -a -n store78 -t primaryMessage -p /opt/zimbra/store29 --compress false'
su - zimbra -c 'zmvolume -l | tail -7 | head -1 | cut -f2 -d:'
su - zimbra -c 'zmvolume -sc -id 3'
su - zimbra -c 'zmvolume -l'

The output of the last command should now look like the one above. Let me explain the commands:

  • ‘zmvolume -a -n store78 -t primaryMessage -p /opt/zimbra/store29 --compress false’: creates an uncompressed store of type primaryMessage on the NFS mount point;
  • ‘zmvolume -l | tail -7 | head -1 | cut -f2 -d:’: finds the ID of the store we just created;
  • ‘zmvolume -sc -id 3’: sets the newly created store as current using its store ID (3 in my case); see the sketch below for a way to avoid hardcoding the ID.
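
If you prefer not to hardcode the volume ID, the lookup and the “set current” step can be combined. This is only a sketch and assumes, like the pipe above, that the newly created volume is the last one listed by ‘zmvolume -l’:

# capture the ID of the new volume and set it as current in one go (run as root)
NEW_ID=$(su - zimbra -c 'zmvolume -l' | tail -7 | head -1 | cut -f2 -d: | tr -d ' ')
su - zimbra -c "zmvolume -sc -id $NEW_ID"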

Now files added by users to the data-va should be written to the NFS share.
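
A simple way to verify this is to upload a file through the Horizon Workspace web interface and then watch the mount point from the data-va; the exact subdirectory layout is up to the appliance, so treat this just as a rough check:

# space used on the NFS mount point should grow as users add files
df -h /opt/zimbra/store29
# files stored by the data-va should show up somewhere under the mount point
find /opt/zimbra/store29 -type f | head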

Take a look at the CLI Command for Horizon Workspace Data Guide for more info on data-va commands.

8/7/13 UPDATE1: I fixed some IP addresses and paths in the outputs that didn’t match my configuration. I have two labs and I seem to have captured part of the outputs in one lab and part in the other. Sorry for the inconvenience.

8/7/13 UPDATE2: It would seem they fixed the NFS mount script in Horizon Workspace version 1.5: I used it to mount an NFS share in my lab today and had no issues at all.