Clustered Ontap…vservers

The next step is to set up a vserver.  In fact, a few special-purpose vservers are set up as soon as you install the first node in the cluster.  These are the cluster and node vservers, each with its own management network and IP, as we configured earlier.

There is one more type of vserver, the data vserver, which actually takes care of serving data in our storage environment.  Data vservers are highly configurable and can live wherever you wish in your storage network (by virtue of where you place the associated data lifs).

kitt::*> vserver show
                              Admin     Root                   Name     Name
Vserver     Type    State     Volume    Aggregate    Service  Mapping
----------- ------- --------- --------- ------------ -------- --------
kitt        admin   -         -         -            -        -
kitt-01     node    -         -         -            -        -
kitt-02     node    -         -         -            -        -
3 entries were displayed.

How many vservers do you need?  As with many things in the IT world, the answer is “it depends”.  If you want separate vservers managed by different departments (or even by different customers, i.e. multi-tenancy), then you create as many vservers as you have customers or tenants.  The limit depends on the hardware in your cluster, but a few hundred is easily accomplished.

Here we are going to go through the CLI setup wizard and look at some of the things we run into:

kitt::*> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

You can enter the following commands at any time:
“help” or “?” if you want to have a question clarified,
“back” if you want to change your answers to previous questions, and
“exit” if you want to quit the Vserver Setup Wizard. Any changes
you made before typing “exit” will be applied.

You can restart the Vserver Setup Wizard by typing “vserver setup”. To accept a default
or omit a question, do not enter a value.

Vserver Setup wizard creates and configures only data Vservers.
If you want to create a repository Vserver use the vserver create command.

No data aggregates exist in the cluster. You must first create an aggregate.
Enter the new aggregate name [aggr3]: aggr1_data
Enter the number of disks to use for the aggregate [10]: 3
Aggregate creation might take some time to finish…

Initializing…
A 744.9GB aggregate named aggr1_data was created.

Step 1. Create a Vserver.
You can type “back”, “exit”, or “help” at any question.

Enter the Vserver name: vsNFS1
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:
nfs
Choose the Vserver client services to be configured {ldap, nis, dns}:

Enter the Vserver’s root volume aggregate [aggr1_data]:
Enter the Vserver language setting, or “help” to see all languages [C]:

Enter the Vserver root volume’s security style {unix, ntfs, mixed} [unix]:

Vserver creation might take some time to finish….

Vserver vsNFS1 with language set to C created. The permitted protocols are
nfs.

 

Here we have created a vserver named “vsNFS1” using a 3-disk aggregate “aggr1_data”, taking all the default options.
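
For reference, the wizard is just stringing individual commands together.  If you would rather skip it, something like the following achieves the same result.  Treat this as a rough sketch only: exact parameter names vary a little between clustered ONTAP releases, so check the built-in help (“vserver create ?”) on your version first.

kitt::*> storage aggregate create -aggregate aggr1_data -diskcount 3
kitt::*> vserver create -vserver vsNFS1 -rootvolume root_vol -aggregate aggr1_data -ns-switch file -rootvolume-security-style unix -language C
kitt::*> vserver modify -vserver vsNFS1 -allowed-protocols nfs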

Step 2: Create a data volume
You can type “back”, “exit”, or “help” at any question.

Do you want to create a data volume? {yes, no} [yes]: yes
Enter the volume name [vol1]: vsNFS1_vol_data1
Enter the name of the aggregate to contain this volume [aggr1_data]:

Enter the volume size: 500G
Enter the volume junction path [/vol/vsNFS1_vol_data1]: /vsNFS1_vol1_data1
It can take up to a minute to create a volume…
Volume vsNFS1_vol_data1 of size 500GB created on aggregate aggr1_data
successfully.
Do you want to create an additional data volume? {yes, no} [no]:
no

Now we have created “vsNFS1_vol_data1” inside “aggr1_data” to serve out to NFS clients.  Junction paths are a feature specific to clustered Data ONTAP that lets cDOT maintain a single namespace, so the mount point for your clients stays the same no matter where in the cluster the data being served actually lives.  Junction paths deserve a better explanation than this, and that will be taken up in a future post.  For now, understand that this is your mount point.
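
To give a small taste of that flexibility, the junction is managed separately from the volume itself, so it can be inspected and re-pointed without recreating anything.  A minimal sketch of the relevant commands, using the names from above (a volume has to be unmounted from the namespace before it can be mounted at a different junction):

kitt::*> volume show -vserver vsNFS1 -fields junction-path
kitt::*> volume unmount -vserver vsNFS1 -volume vsNFS1_vol_data1
kitt::*> volume mount -vserver vsNFS1 -volume vsNFS1_vol_data1 -junction-path /vsNFS1_vol1_data1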

Step 3: Create a logical interface.
You can type “back”, “exit”, or “help” at any question.

Do you want to create a logical interface? {yes, no} [yes]:
Enter the LIF name [lif1]: vsNFS1_data_lif1
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-01
Enter the home port [e0a]:
Enter the IP address: 10.3.3.131
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

LIF vsNFS1_data_lif1 on node kitt-01, on port e0a with IP address 10.3.3.131
was created.
Do you want to create an additional LIF now? {yes, no} [no]: yes
Enter the LIF name [lif2]: vsNFS1_data_lif2
Which protocols can use this interface [nfs]:
Enter the home node {kitt-01, kitt-02} [kitt-02]: kitt-02
Enter the home port [e0a]:
Enter the IP address: 10.3.3.132
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 10.3.3.254

Warning: Network route setup failed. duplicate entry
LIF vs_NFS1_data_lif2 on node kitt-02, on port e0a with IP address 10.3.3.132
was created.
Do you want to create an additional LIF now? {yes, no} [no]: no
Step 4: Configure NFS.
You can type “back”, “exit”, or “help” at any question.

NFS configuration for Vserver vsNFS1 created successfully.

Vserver vsNFS1, with protocol(s) nfs has been configured successfully.

 

Now we have configured a data lif on both kitt-01 and kitt-02 so that data can be served from either node.
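
These data lifs are also mobile.  As a hedged sketch (option names may differ slightly on your release), you can check where each lif currently lives, push one over to the partner node, and then send it home again, which NFS clients generally ride through without interruption:

kitt::*> network interface show -vserver vsNFS1 -fields home-node,curr-node,address
kitt::*> network interface migrate -vserver vsNFS1 -lif vsNFS1_data_lif1 -destination-node kitt-02 -destination-port e0a
kitt::*> network interface revert -vserver vsNFS1 -lif vsNFS1_data_lif1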

 

kitt::*> vserver show
                              Admin     Root                   Name     Name
Vserver     Type    State     Volume    Aggregate    Service  Mapping
----------- ------- --------- --------- ------------ -------- --------
kitt        admin   -         -         -            -        -
kitt-01     node    -         -         -            -        -
kitt-02     node    -         -         -            -        -
vsNFS1      cluster running   root_vol  aggr1_data   file     file
4 entries were displayed.


We now see our NFS vserver running.

kitt::vserver nfs*> show
Virtual      General
Server       Access    v2        v3        v4.0      UDP       TCP       SpinAuth
------------ --------- --------- --------- --------- --------- --------- --------
vsNFS1       true      disabled  enabled   disabled  -         enabled   disabled

By default, only NFS v3 is enabled.
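
If we wanted NFSv4.0 as well, it can be switched on per vserver.  A rough sketch only, since NFSv4 also needs an ID domain that matches what the clients use (example.com below is just a placeholder):

kitt::*> vserver nfs modify -vserver vsNFS1 -v4.0 enabled
kitt::*> vserver nfs modify -vserver vsNFS1 -v4-id-domain example.com
kitt::*> vserver nfs show -vserver vsNFS1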

The next thing to understand is how this is exported and who has access.

From a Linux client, here is the output when we do a showmount:

hhh@hhh-bigt:~$ showmount -e 10.3.3.131
Export list for 10.3.3.131:
/ (everyone)
hhh@hhh-bigt:~$ showmount -e 10.3.3.132
Export list for 10.3.3.132:
/ (everyone)

The exported path is “/” even though we defined the junction as “/vsNFS1_vol1_data1”.  We are able to mount this, but we can’t do anything else yet:

hhh@hhh-bigt:~$ sudo mount 10.3.3.131:/ nfs1

10.3.3.131:/ on /home/hhh/nfs1 type nfs (rw,addr=10.3.3.131)
hhh@hhh-bigt:~$

But the mountpoint blocks us:

drwx------ 3 root daemon 4096 2013-03-31 13:47 nfs1

hhh@hhh-bigt:~$ cd nfs1
bash: cd: nfs1: Permission denied
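
That permission denied is coming from the vserver, not the Linux box.  Two places are worth checking: the rules in the export policy attached to our volumes, and the unix permissions on the vserver root volume (the drwx------ above is the root volume as seen through the junction).  A hedged sketch of where to look and what a fix might look like; the clientmatch network and the 0755 mode below are lab placeholders, not recommendations:

kitt::*> vserver export-policy rule show -vserver vsNFS1 -policyname default
kitt::*> vserver export-policy rule create -vserver vsNFS1 -policyname default -clientmatch 10.3.3.0/24 -rorule sys -rwrule sys -superuser sys
kitt::*> volume modify -vserver vsNFS1 -volume root_vol -unix-permissions 0755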


Clustered Ontap…Epsilon in 2 nodes?

Yes, we are not supposed to have epsilon in a 2-node cluster.  To disable it, we need to run “cluster ha modify -configured true”.  We will do this now and then verify the status of SFO (storage failover):

 

kitt::*> cluster ha show
High Availability Configured: false

kitt::*> cluster ha modify -configured true

Warning: High Availability (HA) configuration for cluster services requires
that both SFO storage failover and SFO auto-giveback be enabled. These
actions will be performed if necessary.
Do you want to continue? {y|n}: y

Notice: modify_imp: HA is configured in management.

kitt::*> cluster ha show
High Availability Configured: true

kitt::*> storage failover show
                              Takeover
Node           Partner        Possible State
-------------- -------------- -------- -------------------------------------
kitt-01        kitt-02        true     Connected to kitt-02
kitt-02        kitt-01        true     Connected to kitt-01
2 entries were displayed.

 

At this point we have a 2-node cluster configured properly.  The next step is to configure a vserver (or 100), as this is the entity that serves data.

 

Clustered Ontap…NVRAM slot on a 60x0, part 2

So now I have moved the NVRAM card on kitt-01 to slot 1:

kitt::> system node run -node kitt-01 sysconfig -a 1

slot 1: NVRAM (NVRAM VI)
Revision: F0
Memory Size: 2048 MB
DIMM Size: 2048 MB
Battery1 Status: Battery sufficiently charged (3996 mV)
Charger1 Status: OFF
Battery2 Status: Battery sufficiently charged (3996 mV)
Charger2 Status: OFF
Running Firmware: 11 (4.8.940)
Cluster Interconnect Port 1: disconnected
Cluster Interconnect Port 2: 4x fiber
LIDs: 0x11dc[0x0000], 0x11dc[0x2460]

 

Verifying the same on kitt-02:

kitt::> system node run -node kitt-02 sysconfig -a 1

slot 1: NVRAM (NVRAM VI)
Revision: G0
Memory Size: 2048 MB
DIMM Size: 2048 MB
Battery1 Status: Battery sufficiently charged (3960 mV)
Charger1 Status: ON
Battery2 Status: Battery fully charged (4014 mV)
Charger2 Status: OFF
Running Firmware: 11 (4.8.940)
Cluster Interconnect Port 1: disconnected
Cluster Interconnect Port 2: 4x fiber
LIDs: 0x2460[0x0000], 0x2460[0x11dc]

 

Now we verify the interconnect status:

kitt::> system node run -node kitt-02 ic status
kitt::> system node run -node kitt-01 ic status

So, we can’t use this command without being in advanced mode, same as 7G:
kitt::> set advanced

Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

kitt::*> system node run -node kitt-01 ic status

Link 0: down
Link 1: up
cfo_rv connection state : CONNECTED
cfo_rv nic used : 0

kitt::*> system node run -node kitt-02 ic status

Link 0: down
Link 1: up
cfo_rv connection state : CONNECTED
cfo_rv nic used : 0

Our nodes can see each other.

kitt::*> storage failover show
                              Takeover
Node           Partner        Possible State
-------------- -------------- -------- -------------------------------------
kitt-01        kitt-02        true     Connected to kitt-02
kitt-02        kitt-01        true     Connected to kitt-01
2 entries were displayed.

Our HA pair failover situation looks good.

kitt::*> cluster show
Node                 Health  Eligibility   Epsilon
-------------------- ------- ------------- -------------
kitt-01              true    true          true
kitt-02              true    true          false
2 entries were displayed.

kitt::*> node show
Node      Health Eligibility Uptime        Model       Owner    Location
--------- ------ ----------- ------------- ----------- -------- ---------------
kitt-01   true   true        00:52         FAS6070              lab1
kitt-02   true   true        00:38         FAS6070              lab1
2 entries were displayed.

 

Our cluster still appears to be in good shape, although there is one interesting thing here: one of the nodes holds epsilon even though we have a 2-node cluster.  What to do about this was touched on in a previous post, and we’ll walk through it again in a future post.


Clustered Ontap…NVRAM slot on a 60x0

At the end of the last post, we were having an issue with storage failover.

kitt::> event log show
Time                Node             Severity      Event
------------------- ---------------- ------------- ---------------------------
3/28/2013 01:36:32  kitt-01          ERROR         cmds.sysconf.logErr: sysconfig: NetApp NVRAM6 2GB card (PN X3148) in slot 2 must be in slot 1.

 

kitt::> system node run -node kitt-01 sysconfig -a 2

slot 2: NVRAM (NVRAM VI)
Revision: F0
Serial Number: 604572
Memory Size: 2048 MB
DIMM Size: 2048 MB
Battery1 Status: Battery partially discharged (3816 mV)
Charger1 Status: OFF
Battery2 Status: Battery sufficiently charged (3996 mV)
Charger2 Status: OFF
Running Firmware: 11 (4.8.940)
Cluster Interconnect Port 1: disconnected
Cluster Interconnect Port 2: 4x fiber
LIDs: 0x11dc[0x0000], 0x11dc[0x0000]

 

So it seems clear the system wants the NVRAM card in the other slot.  Let’s move it to slot 1 and reassess the situation.