Building a lab datacenter: Setting up shared storage

In this post, we will set up the initial shared storage.

For this demonstration, I am using a Thecus M3800 with firmware 1.00.05.1.  This was my first NAS, purchased about 3 years ago.  At the time I got it, my thinking was to hook it up directly to a TV for movies and such, but those thoughts quickly exited.  While the unit does come with a remote control, the formats it will and won't play are an issue, as is the clunky interface.  It has 3x 1TB drives in it.  Take care to make sure the drives you intend to use are on the HCL (hardware compatibility list) that most vendors publish these days, or you could end up with interesting and unexpected results.

Here is our basic interface:

From here I am going to destroy the current RAID on the system and recreate it, since this is now the oldest of my NAS units and is being repurposed for the lab.  After clicking ‘Remove’ above, we get this:

RAID destroyed.

Now we go back to the RAID interface and must select the RAID type and stripe size:

I selected a stripe size of 4KB and RAID 5.  Here is the build time for this specific system.  This will vary depending on the stripe size, the RAID level, the size of your hard drives, and your NAS system.  Expect it to take a while.  Later I will have a demo of this on a NetApp system.
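The Thecus web UI handles the array creation for you, but for reference, the same operation on a generic Linux software-RAID box would look roughly like this (device names are illustrative; --chunk is in KB, so 4 means a 4KB stripe):

    # Create a 3-disk RAID 5 array with a 4KB chunk (stripe) size
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1

    # Watch the build progress
    cat /proc/mdstat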

The RAID build is complete.  Since we are using RAID 5, we lose one disk's worth of capacity to parity.
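To put a number on that: RAID 5 usable capacity is (number of disks − 1) × disk size, so for this box (3 − 1) × 1TB gives roughly 2TB of usable space before filesystem overhead.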

Since our intention is to use VMware, we need to add an NFS share.  With VMware you can have LUNs (block storage over iSCSI or Fibre Channel) or NAS storage, which ESXi talks to over NFS only (no CIFS).  This Thecus does not support iSCSI, so we will be creating two NFS shares on it: one for virtual machines and a separate share for data.

After the share is created, we have to add NFS permissions so that VMware is able to mount the share.  This works like any other NFS share, except that VMware ESXi requires root access as well.  Here we are saying the host 10.5.5.240 is allowed to mount the share with read/write and root access.
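Under the hood this is ordinary Unix NFS exporting.  On a plain Linux NFS server, an equivalent export entry would look something like the following (the share path is illustrative; no_root_squash is what grants the root access ESXi needs):

    # /etc/exports: read/write plus root access for the ESXi host
    /raid/data/vm1    10.5.5.240(rw,no_root_squash,sync)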

Another share was created as ‘data1’ with similar permissions.
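Once ESXi 5 is installed (covered in an upcoming post), mounting these shares from the host is a one-liner per datastore.  A sketch, assuming placeholder values for the NAS address and share path (adjust to match what the Thecus actually exports):

    # Mount the VM share on the ESXi host as a datastore named 'nfs_vm1'
    esxcli storage nfs add --host=<nas-ip> --share=/raid/data/vm1 --volume-name=nfs_vm1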

The storage is now set up.  In a future post, I will walk through the installation of ESXi 5.


Building a lab datacenter from the bottom up

In the coming weeks I am going to be making several posts, videos, and screenshots about building a lab datacenter.  The whys and hows of this are pretty lengthy, but I will keep the answer to the whys short: basically, it is for education.  If your goal in this field is to excel (rather than just succeed), then you must have a desire for education and knowledge, and most of us learn best by doing.  In the coming labs we will be doing a lot of doing: setting up storage, installing VMware ESXi, installing Windows 2008 R2 (several times), installing SQL Server on Windows 2008 (for vCenter, and later for VMware View), configuring DNS, and maybe some other things depending on what comes up during the process.

It is expected that the lab will change; as we get new hardware to incorporate, we'll make the necessary adjustments and push forward.  Along the way we'll take a look at the importance of planning your environment to avoid some potential “gotchas”.

One of the first things to think about in that regard: I plan to leverage a new feature in VMware ESXi 5 known as Auto Deploy, which allows you to boot servers with no local storage at all.  This is done via PXE booting; its roots are in BOOTP, and it all basically still works the same way.  However, what happens in a total outage?  Your Auto Deploy environment needs DHCP, DNS, and vCenter all available in order to operate.  Therefore we will architect around this issue by creating a Tier-1 environment with local storage.
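To make the dependency concrete: Auto Deploy hosts find their boot image through DHCP.  A minimal ISC dhcpd sketch, assuming a made-up lab subnet and an Auto Deploy/TFTP server at 10.5.5.10:

    subnet 10.5.5.0 netmask 255.255.255.0 {
        range 10.5.5.100 10.5.5.199;
        # TFTP server hosting the gPXE boot image (DHCP option 66)
        next-server 10.5.5.10;
        # Boot file name that ships with vSphere 5 Auto Deploy (option 67)
        filename "undionly.kpxe.vmw-hardwired";
    }

If that DHCP server is itself a VM running on Auto Deployed hosts, nothing can boot after a full power loss, which is exactly why the Tier-1 hosts keep local storage.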