Clustered ONTAP… vservers

The next step is to set up a vserver.  In fact, a few special-purpose vservers are set up as soon as you install the first node in the cluster: the cluster vserver and the node vservers, each with their own management network and IP as we configured earlier.
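If you want to see those built-in management interfaces for yourself, a query along these lines should work (the -role values follow the 8.x naming; adjust for your release):

kitt::*> network interface show -role cluster-mgmt
kitt::*> network interface show -role node-mgmt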

There is one more type of vserver, the data vserver, which actually takes care of serving data in our storage environment.  Data vservers are highly configurable and can live wherever you want them in your storage network (by virtue of where you place their associated data LIFs).

kitt::*> vserver show
                    Admin     Root              Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
kitt        admin   -         -          -          -       -
kitt-01     node    -         -          -          -       -
kitt-02     node    -         -          -          -       -
3 entries were displayed.

How many vservers do you need?  As with many things in the IT world, the classic answer is "it depends".  If you want separate vservers managed by different departments, or even by different customers (i.e. multi-tenancy), then you create as many vservers as you have customers or tenants; a sketch of creating one by hand follows below.  The limit depends on the hardware in your cluster, but a few hundred is easily accomplished.
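Incidentally, if you would rather script this than walk the wizard, a data vserver can also be created directly with "vserver create".  A minimal sketch for 8.x-era cDOT, assuming a tenant called vsTenantA and the aggregate we create below (these names are made up for illustration):

kitt::*> vserver create -vserver vsTenantA -rootvolume vsTenantA_root -aggregate aggr1_data -ns-switch file -nm-switch file -rootvolume-security-style unix -language C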

Here we are going to go through the CLI setup wizard and look at some of the things we run into:

kitt::*> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

You can enter the following commands at any time:
"help" or "?" if you want to have a question clarified,
"back" if you want to change your answers to previous questions, and
"exit" if you want to quit the Vserver Setup Wizard. Any changes
you made before typing "exit" will be applied.

You can restart the Vserver Setup Wizard by typing "vserver setup". To accept a default
or omit a question, do not enter a value.

Vserver Setup wizard creates and configures only data Vservers.
If you want to create a repository Vserver use the vserver create command.

No data aggregates exist in the cluster. You must first create an aggregate.
Enter the new aggregate name [aggr3]: aggr1_data
Enter the number of disks to use for the aggregate [10]: 3
Aggregate creation might take some time to finish...

Initializing...
A 744.9GB aggregate named aggr1_data was created.

Step 1. Create a Vserver.
You can type "back", "exit", or "help" at any question.

Enter the Vserver name: vsNFS1
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:
nfs
Choose the Vserver client services to be configured {ldap, nis, dns}:

Enter the Vserver’s root volume aggregate [aggr1_data]:
Enter the Vserver language setting, or "help" to see all languages [C]:

Enter the Vserver root volume’s security style {unix, ntfs, mixed} [unix]:

Vserver creation might take some time to finish....

Vserver vsNFS1 with language set to C created. The permitted protocols are
nfs.


Here we have created a vserver named "vsNFS1" using a three-disk aggregate, "aggr1_data", taking all the default options.

Step 2: Create a data volume
You can type "back", "exit", or "help" at any question.

Do you want to create a data volume? {yes, no} [yes]: yes
Enter the volume name [vol1]: vsNFS1_vol_data1
Enter the name of the aggregate to contain this volume [aggr1_data]:

Enter the volume size: 500G
Enter the volume junction path [/vol/vsNFS1_vol_data1]: /vsNFS1_vol1_data1
It can take up to a minute to create a volume...
Volume vsNFS1_vol_data1 of size 500GB created on aggregate aggr1_data
successfully.
Do you want to create an additional data volume? {yes, no} [no]:
no

Now we have created a data volume, "vsNFS1_vol_data1", on aggregate "aggr1_data" to serve out to NFS clients.  Junction paths are a feature specific to clustered Data ONTAP: cDOT maintains a single namespace, so the mount point your clients use stays the same no matter where in the cluster the data actually lives.  Junction paths deserve a better explanation than this, and that will be taken up in a future post.  For now, understand that the junction path is your mount point; a short sketch follows.
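As a taste of what junction paths let you do, you can list them and even re-junction a volume later without disturbing the data inside it.  These are standard cDOT volume commands; the new path /projects/data1 is just an illustration:

kitt::*> volume show -vserver vsNFS1 -fields junction-path
kitt::*> volume unmount -vserver vsNFS1 -volume vsNFS1_vol_data1
kitt::*> volume mount -vserver vsNFS1 -volume vsNFS1_vol_data1 -junction-path /projects/data1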

Step 3: Create a logical interface.
You can type "back", "exit", or "help" at any question.

Do you want to create a logical interface? {yes, no} [yes]:
Enter the LIF name [lif1]: vsNFS1_data_lif1
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-01
Enter the home port [e0a]:
Enter the IP address: 10.3.3.131
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

LIF vsNFS1_data_lif1 on node kitt-01, on port e0a with IP address 10.3.3.131
was created.
Do you want to create an additional LIF now? {yes, no} [no]: yes
Enter the LIF name [lif2]: vsNFS1_data_lif2
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-02
Enter the home port [e0a]:
Enter the IP address: 10.3.3.132
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

Warning: Network route setup failed. duplicate entry
LIF vsNFS1_data_lif2 on node kitt-02, on port e0a with IP address 10.3.3.132
was created.
Do you want to create an additional LIF now? {yes, no} [no]: no
Step 4: Configure NFS.
You can type "back", "exit", or "help" at any question.

NFS configuration for Vserver vsNFS1 created successfully.

Vserver vsNFS1, with protocol(s) nfs has been configured successfully.


Now we have configured a data LIF on both kitt-01 and kitt-02 so that data can be served from either node.  (The "duplicate entry" warning on the second LIF is harmless here: both LIFs sit on the same subnet, so the default route we asked for already existed.)  A quick way to verify, and to see the failover this buys us, is sketched below.
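The following should confirm where the LIFs live and let you exercise a non-disruptive move to the partner node and back (migrate and revert are standard LIF commands; port e0a is assumed from the wizard above):

kitt::*> network interface show -vserver vsNFS1
kitt::*> network interface migrate -vserver vsNFS1 -lif vsNFS1_data_lif1 -destination-node kitt-02 -destination-port e0a
kitt::*> network interface revert -vserver vsNFS1 -lif vsNFS1_data_lif1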


kitt::*> vserver show
                    Admin     Root              Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
kitt        admin   -         -          -          -       -
kitt-01     node    -         -          -          -       -
kitt-02     node    -         -          -          -       -
vsNFS1      cluster running   root_vol   aggr1_data file    file
4 entries were displayed.


We now see our vserver for NFS up and running.

kitt::vserver nfs*> show
Virtual      General
Server       Access  v2       v3       v4.0     UDP      TCP      SpinAuth
------------ ------- -------- -------- -------- -------- -------- --------
vsNFS1       true    disabled enabled  disabled -        enabled  disabled

By default, we are serving NFS v3 only; enabling v4.0 is sketched below.
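If you also want NFSv4.0, it can be switched on per vserver with "vserver nfs modify" (check the option names on your release):

kitt::*> vserver nfs modify -vserver vsNFS1 -v4.0 enabled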

The next thing to understand is how this is exported and who has access:

From a Linux client, here is the output when we do a showmount:

hhh@hhh-bigt:~$ showmount -e 10.3.3.131
Export list for 10.3.3.131:
/ (everyone)
hhh@hhh-bigt:~$ showmount -e 10.3.3.132
Export list for 10.3.3.132:
/ (everyone)

The exported path is "/" even though we defined the junction as "/vsNFS1_vol1_data1".  We are able to mount this, but we can't do anything else yet:

hhh@hhh-bigt:~$ sudo mount 10.3.3.131:/ nfs1

10.3.3.131:/ on /home/hhh/nfs1 type nfs (rw,addr=10.3.3.131)
hhh@hhh-bigt:~$
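A client can also mount the junction path directly rather than the namespace root; assuming the junction we defined above, something like:

hhh@hhh-bigt:~$ sudo mount 10.3.3.131:/vsNFS1_vol1_data1 nfs1

Either way the mount itself succeeds.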

But the mount point blocks us:

drwx------ 3 root daemon 4096 2013-03-31 13:47 nfs1

hhh@hhh-bigt:~$ cd nfs1
bash: cd: nfs1: Permission denied
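Why?  Look back at that directory listing: the vserver's root volume, which is what "/" maps to, is mode 700 and owned by root, so non-root users can't traverse it.  One likely fix is to loosen the root volume's UNIX permissions; a sketch, assuming the root volume is named root_vol as the vserver show output above suggests:

kitt::*> volume modify -vserver vsNFS1 -volume root_vol -unix-permissions 0755

After that, clients should be able to cd into the mount and down through the junctions.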