Clustered ONTAP… vservers

The next step is to set up a vserver.  In fact, a few special-purpose vservers are set up as soon as you install the first node in the cluster.  These are the cluster and node vservers, each with its own management network and IP as we configured earlier.

There is one more type of vserver, the data vserver, which actually takes care of serving data in our storage environment.  Data vservers are highly configurable and can exist wherever you wish in your storage network (by virtue of where you place the associated data LIFs).

kitt::*> vserver show
                    Admin   Root                  Name    Name
Vserver     Type    State   Volume     Aggregate  Service Mapping
----------- ------- ------- ---------- ---------- ------- -------
kitt        admin   -       -          -          -       -
kitt-01     node    -       -          -          -       -
kitt-02     node    -       -          -          -       -
3 entries were displayed.

How many vservers do you need?  As with many things in the IT world, the answer is “it depends”.  If you want separate vservers managed by different departments (or even different customers, i.e. multi-tenancy), then you create however many vservers you have customers or tenants.  The limit depends on the hardware in your cluster, but a few hundred is easily accomplished.

Here we are going to go through the CLI setup wizard and look at some of the things we run into:

kitt::*> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

You can enter the following commands at any time:
“help” or “?” if you want to have a question clarified,
“back” if you want to change your answers to previous questions, and
“exit” if you want to quit the Vserver Setup Wizard. Any changes
you made before typing “exit” will be applied.

You can restart the Vserver Setup Wizard by typing “vserver setup”. To accept a default
or omit a question, do not enter a value.

Vserver Setup wizard creates and configures only data Vservers.
If you want to create a repository Vserver use the vserver create command.

No data aggregates exist in the cluster. You must first create an aggregate.
Enter the new aggregate name [aggr3]: aggr1_data
Enter the number of disks to use for the aggregate [10]: 3
Aggregate creation might take some time to finish…

Initializing…
A 744.9GB aggregate named aggr1_data was created.

Step 1. Create a Vserver.
You can type “back”, “exit”, or “help” at any question.

Enter the Vserver name: vsNFS1
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}: nfs
Choose the Vserver client services to be configured {ldap, nis, dns}:

Enter the Vserver’s root volume aggregate [aggr1_data]:
Enter the Vserver language setting, or “help” to see all languages [C]:

Enter the Vserver root volume’s security style {unix, ntfs, mixed} [unix]:

Vserver creation might take some time to finish….

Vserver vsNFS1 with language set to C created. The permitted protocols are
nfs.

 

Here we have created a vserver named “vsNFS1” using a 3-disk aggregate “aggr1_data”, taking all default options.
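If you want to sanity-check what the wizard just built, the cluster shell can show both objects directly from another session (the wizard is still running in this one).  Just the commands are shown here as a sketch; the exact output columns vary by ONTAP release:

kitt::*> storage aggregate show -aggregate aggr1_data
kitt::*> vserver show -vserver vsNFS1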

Step 2: Create a data volume
You can type “back”, “exit”, or “help” at any question.

Do you want to create a data volume? {yes, no} [yes]: yes
Enter the volume name [vol1]: vsNFS1_vol_data1
Enter the name of the aggregate to contain this volume [aggr1_data]:

Enter the volume size: 500G
Enter the volume junction path [/vol/vsNFS1_vol_data1]: /vsNFS1_vol1_data1
It can take up to a minute to create a volume…
Volume vsNFS1_vol_data1 of size 500GB created on aggregate aggr1_data
successfully.
Do you want to create an additional data volume? {yes, no} [no]: no

Now we have created “vsNFS1_vol_data1” on aggregate “aggr1_data” to serve out to NFS clients.  Junction paths are a feature specific to clustered Data ONTAP that lets cDOT maintain the namespace, so the mount point for your clients stays the same no matter where the data actually lives in the cluster.  Junction paths deserve a better explanation than this, and that will be taken up in a future post.  For now, understand that this is your mount point.
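A handy way to see how volumes are stitched into the namespace is to list each volume’s junction path.  This is just a sketch of the kind of check you might run once the wizard finishes (output omitted here):

kitt::*> volume show -vserver vsNFS1 -fields junction-path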

Step 3: Create a logical interface.
You can type “back”, “exit”, or “help” at any question.

Do you want to create a logical interface? {yes, no} [yes]:
Enter the LIF name [lif1]: vsNFS1_data_lif1
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-01
Enter the home port [e0a]:
Enter the IP address: 10.3.3.131
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

LIF vsNFS1_data_lif1 on node kitt-01, on port e0a with IP address 10.3.3.131
was created.
Do you want to create an additional LIF now? {yes, no} [no]: yes
Enter the LIF name [lif2]: vsNFS1_data_lif2
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-02
Enter the home port [e0a]:
Enter the IP address: 10.3.3.132
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

Warning: Network route setup failed. duplicate entry
LIF vsNFS1_data_lif2 on node kitt-02, on port e0a with IP address 10.3.3.132
was created.
Do you want to create an additional LIF now? {yes, no} [no]: no
Step 4: Configure NFS.
You can type “back”, “exit”, or “help” at any question.

NFS configuration for Vserver vsNFS1 created successfully.

Vserver vsNFS1, with protocol(s) nfs has been configured successfully.

 

Now we have configured a data LIF on both kitt-01 and kitt-02 so that data can be served from either node.  (The “duplicate entry” route warning on the second LIF is expected here: both LIFs sit on the same subnet, so the default gateway route already existed after the first LIF was created.)
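To confirm where those LIFs live, which ports they are on, and whether they are home, something like the following does the trick (a quick sketch, output not shown):

kitt::*> network interface show -vserver vsNFS1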

 

kitt::*> vserver show
                    Admin   Root                  Name    Name
Vserver     Type    State   Volume     Aggregate  Service Mapping
----------- ------- ------- ---------- ---------- ------- -------
kitt        admin   -       -          -          -       -
kitt-01     node    -       -          -          -       -
kitt-02     node    -       -          -          -       -
vsNFS1      cluster running root_vol   aggr1_data file    file
4 entries were displayed.


We now see our vserver for NFS running.

kitt::vserver nfs*> show
Virtual  General
Server   Access   v2       v3       v4.0     UDP      TCP      SpinAuth
-------- -------- -------- -------- -------- -------- -------- --------
vsNFS1   true     disabled enabled  disabled -        enabled  disabled

By default, we are serving NFSv3 only.
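If you want NFSv4.0 later on, it can be enabled per vserver.  This is only a sketch of the idea; v4 also needs an ID domain and matching client-side settings that are beyond the scope of this post:

kitt::*> vserver nfs modify -vserver vsNFS1 -v4.0 enabled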

The next thing to understand is how this is exported and who has access:

From a linux client, here is the output when we do a showmount:

hhh@hhh-bigt:~$ showmount -e 10.3.3.131
Export list for 10.3.3.131:
/ (everyone)
hhh@hhh-bigt:~$ showmount -e 10.3.3.132
Export list for 10.3.3.132:
/ (everyone)

The exported path is “/” even though we defined the junction as “/vsNFS1_vol1_data1”.  We are able to mount this, but we can’t do anything else yet:

hhh@hhh-bigt:~$ sudo mount 10.3.3.131:/ nfs1

10.3.3.131:/ on /home/hhh/nfs1 type nfs (rw,addr=10.3.3.131)
hhh@hhh-bigt:~$

But the mountpoint blocks us:

drwx—— 3 root daemon 4096 2013-03-31 13:47 nfs1

hhh@hhh-bigt:~$ cd nfs1
bash: cd: nfs1: Permission denied
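Getting past this belongs in the future post on exports, but for orientation: access in cDOT is governed by the export policy applied to each volume plus the ordinary UNIX permissions on the volumes themselves, and both are worth checking when you hit a wall like this.  Here is a rough sketch of inspecting the vserver’s default policy and adding a rule for the lab subnet (the 10.3.3.0/24 clientmatch is just an example for this network):

kitt::*> vserver export-policy rule show -vserver vsNFS1 -policyname default
kitt::*> vserver export-policy rule create -vserver vsNFS1 -policyname default -ruleindex 1 -protocol nfs -clientmatch 10.3.3.0/24 -rorule sys -rwrule sys -superuser sys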

 

 

Can’t remove an NFS datastore even though it’s already offline?

Ran into this tonight, wanted to make a quick post for others who may or may not have seen this before:

This is the message I got after right-clicking the NFS datastore and selecting unmount.  Most commonly, this happens because VMware thinks a file from the datastore is still in use by a VM running on this host.  Most likely, this is where you kept the ISOs used to build your VMs and you forgot to disconnect them.  Edit the properties of your VMs and make sure no CD/DVD drives still point to this datastore.  The easier way to do this is with a PowerCLI script; it will take care of things like this much faster than the GUI.
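For example, a rough PowerCLI sketch along these lines will find any VMs that still have an ISO attached from the datastore and then disconnect them (the datastore name nfs_datastore1 is made up for illustration; substitute your own and test it before pointing it at production):

# Hypothetical datastore name used for illustration
$ds = "nfs_datastore1"

# List VMs that still have an ISO mounted from that datastore
Get-VM | Get-CDDrive | Where-Object { $_.IsoPath -like "*$ds*" } | Select-Object Parent, IsoPath

# Disconnect those ISOs so the datastore can be unmounted
Get-VM | Get-CDDrive | Where-Object { $_.IsoPath -like "*$ds*" } | Set-CDDrive -NoMedia -Confirm:$false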

 

Building a lab datacenter: Setting up shared storage

In this post, we will setup the initial shared storage.

For this demonstration, I am using a Thecus M3800 with firmware 1.00.05.1.  This was my first NAS, purchased about 3 years ago.  At the time, my thinking was to hook it up directly to a TV for movies and such, but those thoughts quickly exited.  While the unit does come with a remote control, which formats it will and won’t play is an issue, as is the clunky interface.  It has 3x 1TB drives in it; take care to make sure the drives you intend to use are on the HCL (hardware compatibility list, which most vendors have these days) or you could end up with interesting and unexpected results.

Here is our basic interface:

From here I am going to destroy the current RAID on the system and recreate it, since this is now the oldest of my NAS units and is being repurposed for this.  After clicking ‘Remove’ above, we get this:

RAID destroyed.

Now we go back to the RAID interface and must select the RAID type and stripe size:

I selected a stripe size of 4K and RAID 5.  Here is the build time for this specific system.  This will vary depending on the stripe size, the RAID level, the size of your hard drives, and your NAS system.  Expect it to take a while.  Later I will have a demo of this on a NetApp system.

The RAID build is complete.  As we are using RAID 5, we lose one disk’s worth of capacity to parity.

Since our intention is to use VMware, we need to add an NFS share.  With VMware you can use LUNs (block storage over iSCSI or Fibre Channel) or NAS storage over NFS (CIFS is not supported for datastores).  This Thecus does not support iSCSI, so we will be creating 2 NFS shares on it: one for virtual machines and a separate share for data.

After the share is created, we have to add NFS permissions so VMware can mount the share.  This is like any other NFS share, except that VMware ESXi requires root access (no root squash) as well.  Here we are saying the host 10.5.5.240 is allowed to mount the share with read/write and root access.
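Under the hood this is just standard NFS export semantics.  On a plain Linux NFS server, the equivalent /etc/exports entry would look roughly like the line below (the share path here is made up, since the Thecus manages its own internal paths):

/raid/data/vm    10.5.5.240(rw,no_root_squash,sync)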

Another share was created as ‘data1’ with similar permissions.
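Looking ahead, once ESXi 5 is installed these shares will be mounted as NFS datastores.  From the ESXi shell that would look something like this sketch (the NAS IP, export paths, and datastore names are assumptions for this lab):

~ # esxcli storage nfs add -H 10.5.5.10 -s /raid/data/vm -v vm_datastore1
~ # esxcli storage nfs add -H 10.5.5.10 -s /raid/data/data1 -v data_datastore1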

The storage is now set up.  In a future post, I will walk through the installation of ESXi 5.