Monday, February 28, 2011

Step-by-step instructions for setting up the NetApp (Data ONTAP) Simulator

Those who are new to NetApp can use the NetApp Data ONTAP Simulator to get comfortable with NetApp commands. This tool gives you the experience of administering and using a NetApp storage system with all the features of Data ONTAP. The simulator can be downloaded from the NetApp NOW site (you need NOW access), and it comes with fully functional license keys for all NetApp functionality.

The simulator can be loaded onto a Red Hat or SuSE Linux box and looks and feels exactly like Data ONTAP. Almost anything you can do with Data ONTAP can be done with the simulator. Without purchasing new hardware or impacting your production environment, you can test functionality, export NFS and CIFS shares, etc.

System Requirement:

Data ONTAP 7G (7.x.x) simulators

Server/PC with a single network card, at least 128 MB of main memory (512 MB recommended), and at least 250 MB of free hard disk space; around 5 GB is better for simple testing, and if you want the maximum number of disks you will need roughly 30 GB
Linux installed, running, and networked (works on Red Hat Linux 7.1 through 9.0 and SUSE 8.1/8.2; any 32-bit Linux operating system)
The installer must be logged in as root


This is not a production version of Data ONTAP and should not be used in your production environment. There are inefficiencies (for example, a 1GB disk file will be much larger than 1GB on the host), and performance running on another OS without a real disk subsystem behind it will obviously be considerably lower than with Data ONTAP. The simulator cannot hold more than 28 disks, roughly 28GB in total size. Finally, the simulator can't emulate environments where specific hardware is required (for example, Fibre Channel). It is recommended that the Data ONTAP Simulator be installed on a non-production Linux system: the installation scripts may replace the Red Hat libc library with an older, more stable one, and it is unlikely but possible that other applications will be affected.
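Before starting, you can sanity-check the host against the requirements above. This is a rough sketch; the /home path and thresholds are assumptions, so adjust them for your environment.

```shell
#!/bin/sh
# Pre-install sanity check (sketch). Adjust paths/thresholds as needed.
[ "$(id -u)" -eq 0 ] || echo "warning: the installer must be run as root"

# Free space in MB on /home, where the installer tar file will be unpacked
# (5 GB+ recommended for simple testing).
free_mb=$(( $(df -Pk /home | awk 'NR==2 {print $4}') / 1024 ))
echo "Free space on /home: ${free_mb} MB"

# Total RAM in MB (512 MB recommended).
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
echo "Total RAM: ${mem_mb} MB"
```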

Steps to install Simulator:

Step I:
o Download the Data ONTAP simulator and place it under the /home directory

linux-sesl-184-54:/home # ls

Step II:
o Now untar the simulator installer.

linux-sesl-184-54:/home # tar -xvf 7.3.1-tarfile-v22.tar

Step III :
o Once you have untarred the installer, you will find a new folder called simulator where the files were extracted.

linux-sesl-184-54:/home # ls
7.3.1-tarfile-v22.tar simulator ===========================> Extracted under a folder called simulator

Step IV:
o Change directory to the extracted path

linux-sesl-184-54:/home # cd simulator/
linux-sesl-184-54:/home/simulator # ls
Vmware, Linux and Simulator installation.doc disks.tgz disks2.tgz doc license.htm readme.htm sim.tgz

Step V:
o Now run the installer script to create a single-node simulator. If you wish to install a cluster pair, skip this step and perform Step VII.

linux-sesl-184-54:/home/simulator # ./
Script version 22 (18/Sep/2007)
Where to install to? [/sim]: =====================> Choose your simulator install path.
Would you like to install as a cluster? [no]:
Would you like full HTML/PDF FilerView documentation to be installed [yes]:

Continue with installation? [no]: yes ===================================================> Enter "yes" to continue the installation

Creating /sim
Unpacking sim.tgz to /sim
Configured the simulators mac address to be [00:50:56:1:cd:eb]
Please ensure the simulator is not running.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 2
Too high. Must be between 0 and 25.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 25 =====================> At most 25 additional disks can be added (choose the number of disks and their size based on your Linux disk space)

The following disk types are available in MB:
Real (Usable)
a - 43 ( 14)
b - 62 ( 30)
c - 78 ( 45)
d - 129 ( 90)
e - 535 (450)
f - 1024 (900)
If you are unsure choose the default option a

What disk size would you like to use? [a]: f ===========================================> Choose a larger disk size based on your needs and the available disk space
Disk adapter to put disks on? [0]:
Use DHCP on first boot? [yes]: no ===================================================> Say "no" if you want to configure a static IP address
Ask for floppy boot? [no]:
Your default simulator network interface is already configured to eth0.
Which network interface should the simulator use? [eth0]: ==============================> Choose the interface you want to use for data traffic

Another simulator is running. Cannot give good advise about memory.
How much memory would you like the simulator to use? [512]: =============================> Accept the default RAM size
Create a new log for each session? [no]:
Overwrite the single log each time? [yes]:
Adding 25 additional disk(s).
Complete. Run /sim/ to start the simulator.
linux-sesl-184-54:/home/simulator #
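As a sanity check on the sizing above: with type "f" disks (1024 MB "real" size each), the 3 default disks plus 25 additional ones need roughly 28 GB of host disk, matching the ~28 GB ceiling mentioned earlier. A quick sketch:

```shell
# Rough host-disk estimate for the install above:
# 3 default disks + 25 added, each a type "f" file of ~1024 MB "real" size.
disks=$((3 + 25))
real_mb=1024
echo "~$(( disks * real_mb )) MB host disk needed"   # prints "~28672 MB host disk needed"
```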

Step VI:
o That's it: start the simulator by running the startup script under /sim/ and configure the setup as per your needs.

Step VII:

Network Appliance Clustered Failover delivers a robust and highly available data service for business-critical environments. Installed on a pair of NetApp filers, NetApp Clustered Failover ensures data availability by transferring the data service of an unavailable filer to the other filer in the cluster. Data ONTAP simulator also supports the Clustered Failover.

o To configure the Data ONTAP Simulator as an Active-Active (cluster) pair, do the following:

CFO Step I:

Run the setup and, when it asks the following question, say yes and continue the setup
Would you like to install as a cluster? [no]: yes ====================================> Say yes to install the Active-Active (cluster) pair node

CFO Step II: Now you will find node1 & node2 simulators installed in the given path.

CFO Step III: Run the setup script for each node and configure the interface which needs to take over a partner IP address during failover.
Please enter the new hostname []: cfo1
Do you want to configure virtual network interfaces? [n]:
Please enter the IP address for Network Interface ns0 []: ==================> Primary IP address of node1
Please enter the netmask for Network Interface ns0 []:
Should interface ns0 take over a partner IP address during failover? [n]: y ============> Say "Y" to enable Cluster Failover
The clustered failover software is not yet licensed. To enable network failover, you should run the 'license' command for clustered failover.
Please enter the IP address or interface name to be taken over by ns0 []: ============> Partner IP address of node2

CFO Step IV: Add the cluster license. After the reboot (mandatory once the cluster is licensed), just enable the cluster from the CLI.

CFO Step V: Check the status via the cf status command. It should say that cluster failover is enabled.
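On each node's console, CFO Steps IV and V look roughly like this (the license code below is a placeholder; use the cluster key from the simulator's license.htm file):

```
node1> license add XXXXXXX     <== placeholder; enter the cluster license key
node1> reboot                  <== reboot is mandatory after licensing cluster
node1> cf enable
node1> cf status
```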

Bringing the Virtual Filer Up
# cd /sim

# /sim/
Script version 22 (18/Sep/2007)
This session is logged in /netapp/7.3/sessionlogs/log
NetApp Release 7.3: Thu Jul 24 12:55:28 PDT 2008
Copyright (c) 1992-2008 Network Appliance, Inc.
Starting boot on Tue Dec 9 11:45:37 GMT 2008
Tue Dec 9 11:45:42 GMT [fmmb.current.lock.disk:info]: Disk v4.16 is a local HA mailbox disk.
Tue Dec 9 11:45:42 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Tue Dec 9 11:45:43 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.
Tue Dec 9 11:45:43 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.
…. Boot message
Please enter the new hostname []: - Specify Filer hostname
Do you want to configure virtual network interfaces? [n]:n
Please enter the IP address for Network Interface ns0 []: -- Provide Filer ip
Please enter the netmask for Network Interface ns0 []: -- Provide Netmask
Please enter media type for ns0 {100tx-fd, auto} [auto]:
Please enter the IP address for Network Interface ns1 []:
Would you like to continue setup through the web interface? [n]:n
Please enter the name or IP address of the default gateway: -- Provide default gateway
The administration host is given root access to the filer's
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host: -- Provide admin hostname
Please enter the IP address for adminserver : -- Provide admin ip
Please enter timezone [GMT]:Asia/Calcutta
Where is the filer located? []:Mumbai
What language will be used for multi-protocol files (Type ? for list):en_US
Setting language on volume vol0
The new language mappings will be available after reboot
Tue Dec 9 11:47:03 GMT [vol.language.changed:info]: Language on volume vol0 changed to en_US
Language set on volume vol0
Do you want to run DNS resolver? [n]: -- Say yes if you want to configure DNS
Do you want to run NIS client? [n]: y
Please enter NIS domain name []: - Provide the NIS domain name
Please enter list of preferred NIS servers [*]: - Provide the NIS server IPs
Setting the administrative (root) password for [hostname]
New password: - Set root password here
Retype new password:
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.
Your filer does not have WINS configured and is visible only to
clients on the same subnet.
Do you want to make the system visible via WINS? [n]: n -- Say yes if you want to configure WINS
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer
(1) Multiprotocol filer
(2) NTFS-only filer
Selection (1-2)? [1]: 1
CIFS requires local /etc/passwd and /etc/group files. NIS services,
which normally take the place of the local /etc files, are enabled on
this filer. However, if NIS is ever unavailable, it may be useful to
have a rudimentary /etc/passwd and /etc/group file for CIFS
authentication. This default passwd file would contain 'root',
'pcuser', and 'nobody'.
Should CIFS create default /etc/passwd and /etc/group files? [n]:
NIS is currently enabled but NIS group caching is disabled. This may
have a severe impact on CIFS authentication if the NIS servers are
slow to respond or unavailable. It is highly recommended that you
enable NIS group caching.
Would you like to enable NIS group caching? [y]:
By default, the NIS group cache is updated once a day at midnight. If
you would like to update the cache more often or at a different time,
specify a list of hours (1-24, representing the hours in a day) that
describe when the update should be performed.
Enter the hour(s) when NIS should update the group cache [24 ]:
Would you like to specify additional hours? [n]:
The default name for this CIFS server is 'FILERNAME'.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.
(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication
Selection (1-4)? [1]: 4
What is the name of the Workgroup? [WORKGROUP]:
Tue Dec 9 11:48:34 GMT [rc:info]: NIS: Group Caching has been enabled
CIFS - Starting SMB protocol...
Tue Dec 9 11:48:34 GMT [nis.lclGrp.updateSuccess:info]: The local NIS group update was successful.
Welcome to the WORKGROUP Windows(R) workgroup
CIFS local server is running.
filername> -- Filer is up


Perform filer-related activities from the admin host via rsh, or from the command prompt at the end of the previous step

filername> df

Filesystem kbytes used avail capacity Mounted on
/vol/vol0/ 164552 71264 93288 43% /vol/vol0/
/vol/vol0/.snapshot 0 0 0 ---% /vol/vol0/.snapshot

filername> vol status -r

Aggregate aggr0 (online, raid0) (zoned checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
data v4.16 v4 1 0 FC:B - FCAL N/A 120/246784 127/261248
data v4.17 v4 1 1 FC:B - FCAL N/A 120/246784 127/261248

Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for zoned checksum traditional volumes or aggregates only
spare v4.18 v4 1 2 FC:B - FCAL N/A 36/74752 43/89216
spare v4.19 v4 1 3 FC:B - FCAL N/A 36/74752 43/89216
spare v4.20 v4 1 4 FC:B - FCAL N/A 36/74752 43/89216
spare v4.21 v4 1 5 FC:B - FCAL N/A 36/74752 43/89216
spare v4.22 v4 1 6 FC:B - FCAL N/A 36/74752 43/89216
spare v4.24 v4 1 8 FC:B - FCAL N/A 36/74752 43/89216
spare v4.25 v4 1 9 FC:B - FCAL N/A 36/74752 43/89216
spare v4.26 v4 1 10 FC:B - FCAL N/A 36/74752 43/89216
spare v4.27 v4 1 11 FC:B - FCAL N/A 36/74752 43/89216
spare v4.28 v4 1 12 FC:B - FCAL N/A 36/74752 43/89216
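With the spare disks shown above you can carve out test storage. A minimal, hypothetical sketch using Data ONTAP 7G commands (the aggregate and volume names, and the disks chosen, are examples only):

```
filername> aggr create aggr1 -d v4.18 v4.19 v4.20   <== build an aggregate from three spares
filername> vol create vol1 aggr1 100m               <== 100 MB flexible volume on it
filername> exportfs -p rw /vol/vol1                 <== export it read-write over NFS
```

The same commands can also be issued from the admin host via rsh, e.g. rsh filername vol status.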

NetApp ONTAP Simulator and ESXi 4.1 Server

If you can't get any network connectivity whatsoever after installing and configuring the simulator, try the following steps:

The network interface used by the simulator has to be in promiscuous mode, and ESXi Server, by default, doesn’t allow NICs in guest operating systems to be in promiscuous mode.

The fix is this:

Enable “Promiscuous Mode” for the vSwitch Port Group that the simulator VM’s NIC resides on.

In the ESXi configuration,
- Select your ESXi server in the tree view on the left
- Select the “Configuration” tab
- Find the “Virtual Switch” where the vnic of your VM connects to
- Click on the “Properties” link for that Virtual Switch
- Select the “Virtual Machine Port Group”
- Click “Edit”
- Go to the “Security” tab
- Check the “Promiscuous Mode” box, then set the value in the combobox to “Accept”
- Press the “OK” button in the “Virtual Machine Port Group” dialog
- Press the “Close” button in the “Virtual Switch” dialog

Why enable Promiscuous Mode?
A router or bridge does more with traffic than a normal NIC, so it needs to see packets that are not addressed to it; promiscuous mode enables that.
