EMC Celerra

EMC Celerra 101


Celerra is the NAS offering from EMC.

Control station:

https://celerra-cs0.myco.com/ 	# web GUI URL.  Most features are available there, including a console.

ssh celerra-cs0.myco.com	# ssh (or rsh, telnet in) for CLI access


Layers:
  
VDM (vdm2) / DM (server_2)		
  |
Export/Share
  |
Mount
  |
File System
  |
(AVM stripe, volume, etc)
  |
storage pool (nas_pool)
  |
disk

An export can cover a subdirectory within a File System.
All FS are native Unix FS.  CIFS features are added thru Samba (and other EMC add-ons?).

CIFS shares are recommended thru a VDM, for easier migration, etc.
NFS shares go thru a normal DM (server_X).  A physical DM can mount/export an FS already shared by a VDM,
but a VDM can't access the "parent" export done by a DM.
VDM mounts are accessible by the underlying DM via /root_vdm_N
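
A minimal sketch of exporting a subdirectory (the subdirectory names are made up; the server names, mount points, and command forms follow the Sample Setup below):

server_export VDM2 -name proj1 /cifs1/proj1					# CIFS share of just a subdir of the FS mounted at /cifs1
server_export server_2 -Protocol nfs -option root=10.10.91.44 /nfshome/proj1	# NFS export of just a subdir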

Quota can be set per tree (directory), per user, and/or per group.



Commands are issued thru the "control station" (ssh),
or the web GUI (Celerra Manager), or the Windows MMC snap-in (Celerra Management).

Most commands are of the form:
server_...
nas_...
fs_...
/nas/sbin/...

typical options can be abbreviated, albeit not listed in command usage:
-l = -list
-c = -create
-n = -name
-P = -Protocol
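
For example, these are equivalent to their long forms used later in this doc:

nas_fs -l					# same as:  nas_fs -list
server_setup server_2 -P cifs -o start		# -P is short for -Protocol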


IMHO Admin Notes

Celerra sucks as compared to the NetApp. Comments apply to DART 5.5 (as of Feb 2008).
  1. Windows files are stored on a Unix-style FS, plus some hacked-on side storage for metadata.
    This means that from the get-go you need to decide how to store the userid and gid. UserMapper is a very different beast than the usermap.cfg used in NetApp.
  2. Quota is a nightmare. Policy change is impossible. Turning it off requires removing all files on the path.
  3. The Web GUI is heavy Java, slow and clunky.
  4. The CLI is very unforgiving about the specification of parameters and their order.
Some good stuff, but only marginally:
  1. CheckPoint is more powerful than NetApp's Snapshot, but it requires a bit more setup. Arguably it does not eat into mainstream production file system space for snapshots, and checkpoints can be deleted individually, so it is worth all the extra work it brings. :-)

Sample Setup

Below is a sample config for a brand new setup from scratch. The general flow is:
  1. Setup network connectivity, EtherChannel, etc
  2. Define Active/Standby server config
  3. Define basic network servers such as DNS, NIS, NTP
  4. Create Virtual CIFS server, join them to Windows Domain
  5. Create a storage pool for use with AVM
  6. Create file systems
  7. Mount file systems on DM/VDM, export/share them
# Network configurations
server_sysconfig server_2 -pci cge0 -o "speed=auto,duplex=auto"
server_sysconfig server_2 -pci cge1 -o "speed=auto,duplex=auto"

# Cisco EtherChannel (PortChannel)
server_sysconfig server_2 -virtual -name TRK0 -create trk -option "device=cge0,cge1"
server_sysconfig server_3 -virtual -name TRK0 -create trk -option "device=cge0,cge1"
server_ifconfig  server_2 -c -D TRK0 -n TRK0 -p IP 10.10.91.107 255.255.255.0 10.10.91.255
# ip, netmask, broadcast

# Create default routes
server_route server_2 -add default 10.10.91.1

# Configure standby server
server_standby server_2  -create mover=server_5 -policy auto

# DNS, NIS, NTP setup
server_dns  server_2 oak.net  10.10.91.47,162.86.50.204
server_nis  server_2 oak.net  10.10.89.19,10.10.28.145
server_date server_2 timesvc start ntp 10.10.91.10

 
server_cifs ALL -add security=NT

# Start CIFS services
server_setup server_2 -P cifs -o start

# Create primary VDM and VDM file system in one step.
nas_server -name VDM2 -type vdm -create server_2 -setstate loaded

# Define the CIFS environment on the VDM
server_cifs VDM2 -add compname=vdm2,domain=oak.net,interface=TRK0,wins=162.86.25.243:162.86.25.114
server_cifs VDM2 -J compname=vdm2,domain=oak.net,admin=hotin,ou="ou=Computers:ou=EMC Celerra" -option reuse
# ou is the default location where the object will be added to the AD tree (read bottom to top)
# reuse option allows the AD domain admin to pre-create the computer account in AD, then join it as a regular (pre-granted) user


# Confirm d7 and d8 are the smaller LUNs on RG0
nas_pool -create -name clar_r5_unused -description "RG0 LUNs" -volumes d7,d8

 
# FS creation using AVM (Automatic Volume Management), which uses pre-defined pools:
# archive pool     = ATA drives
# performance pool = FC drives

nas_fs -name cifs1           -create size=80G pool=clar_archive
server_mountpoint VDM2         -c  /cifs1			# mkdir 
server_mount      VDM2       cifs1 /cifs1 			# mount (fs given a name instead of traditional dev path) 
server_export     VDM2 -name cifs1 /cifs1			# share, on VDM, automatically CIFS protocol
## A mount on a VDM is accessible from the physical DM as /root_vdm_N (but N is not an obvious number)
## If the FS is exported by NFS first, using the DM /mountPoint as the path,
## then the VDM won't be able to access that FS, and CIFS sharing would be limited to the actual physical server

nas_fs -name nfshome            -create size=20G pool=clar_r5_performance
server_mountpoint server_4       -c  /nfshome
server_mount      server_4   nfshome /nfshome
server_export     server_4 -Protocol nfs -option root=10.10.91.44 /nfshome

nas_fs -name MixedModeFS -create size=10G pool=clar_r5_performance
server_mountpoint VDM4               -c  /MixedModeFS
server_mount      VDM4       MixedModeFS /MixedModeFS
server_export     VDM4 -name MixedModeFS /MixedModeFS
server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_6/MixedModeFS
##  Because the VDM shares the FS, the mount path used by the physical DM (NFS) needs to account for the /root_vdm_X prefix

## See additional notes in Config Approach

---

Config Approach

  • Decide whether to use USERMAPPER (okay in a CIFS-only world, but if there is any UNIX, most likely NO).
  • Decide on Quotas policy
  • Plan for Snapshots...
  • An IP address can be used by 1 NFS server and 1 CIFS server. server_ifconfig -D cge0 -n cge0-1 can be done for the DM; cge0-1 can still be the interface for CIFS in the VDM. Alternatively, the DM can have another IP (eg cge0-2) if it is desired to match the IP/hostname of another CIFS/VDM (see the sketch after this list).
  • Export the FS thru the VDM first, then the NFS export uses the /root_vdm_N/mountPoint path.
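
    A minimal sketch of the two-IP idea (the IPs are hypothetical, reusing the server_ifconfig form shown in the Network interface config section below):

    # cge0-1: logical interface/IP for the DM (NFS); can also be the CIFS interface of the VDM
    server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158
    # cge0-2: optional second IP on the same NIC, eg to give the CIFS/VDM its own hostname
    server_ifconfig server_2 -c -D cge0 -n cge0-2 -p IP 10.10.53.153 255.255.255.224 10.10.53.158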

    Use a VDM instead of the DM (server_2) for the CIFS server. A VDM is really just a file system, so it can be copied/replicated. Because Windows groups and much other system data are not stored in the underlying Unix FS, the VDM exists to make it easy to back up/migrate a CIFS server.
    For multi-protocol, it is best to have 1 VDM provide CIFS access, with NFS riding on the physical DM.
    CAVA complication: the antivirus scanning feature must be connected to a physical CIFS server, not to a VDM. This is because there is 1 CAVA for the whole DM, not multiple instances for the multiple VDMs that may exist on a DM. A global CIFS share is also required. One may still want to just use the physical DM with a limited Windows user/group config, even though that may not readily migrate or back up.
    Overall, I still think there is a need for 2 IPs per DM. Maybe the VDM and the NFS DM share the same IP so they can have the same hostname, while the global CIFS share rides on the physical DM with a separate IP that users don't need to know. Finally, one could scrap the idea of VDMs altogether, but then one may pay dearly in replication/backup...

    Celerra Howto

    Create a Server

    * Create an NFS server 
    	- Really just ensuring a DM (eg server_2) is acting as primary, and
    	- Creating a logical network interface (server_ifconfig -c -n cge0-1 ...)
    	  (The DM always exists, but if it is doing CIFS thru a VDM only, then it has no IP and thus can't do NFS exports).
    
    * Create a physical CIFS server (server_setup server_2 -P cifs ...)  
        OR
      a VDM to host the CIFS server (nas_server -name VDM2 -type vdm -create server_2 -setstate loaded)
        + Start the CIFS service (server_setup server_2 -P cifs -o start)
        + Join the CIFS server to the domain (server_cifs VDM2 -J ...)
      (see the consolidated sketch below)
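
    Putting the above together, a minimal sketch (values reuse the Sample Setup above; the mount point and the cge0-1 IP are placeholders):

    # NFS: give the DM a logical interface/IP, then it can export
    server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158
    server_export   server_2 -Protocol nfs -option root=10.10.91.44 /SomeMountPoint

    # CIFS: VDM-hosted CIFS server, joined to the domain
    nas_server -name VDM2 -type vdm -create server_2 -setstate loaded
    server_setup server_2 -P cifs -o start
    server_cifs VDM2 -add compname=vdm2,domain=oak.net,interface=cge0-1,wins=162.86.25.243:162.86.25.114
    server_cifs VDM2 -J compname=vdm2,domain=oak.net,admin=hotin -option reuse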
    

    Create FS and Share

    1. Find space to host the FS (nas_pool for AVM, nas_disk for masochistic MVM)
    2. Create the FS (nas_fs -n FSNAME -c ...)
    3. Mount the FS on the VDM, then the DM (server_mountpoint -c, server_mount)
    4. Share it on Windows via the VDM (server_export -P cifs VDM2 -n FSNAME /FsMount)
    5. Export the share "via the vdm path" (server_export -o root=... /root_vdm_N/FsMount)
    Note that for server creation, the DM for NFS is created first, then the VDM for CIFS.
    But for FS sharing, the FS is first mounted/shared on the VDM (CIFS), then the DM (NFS).
    This is because the VDM mount dictates the path used by the DM as /root_vdm_N.
    It is kinda backward, almost like the lower-level DM needs to go thru the higher-level VDM; blame it on how the FS mount path ended up... (see the sketch below)
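
    An end-to-end sketch of steps 1-5, using only commands that appear elsewhere in these notes (FSNAME, VDM2, the pool, and the root= IP are placeholders):

    nas_pool -size all								# 1. check space in the AVM pools
    nas_fs -name FSNAME -create size=20G pool=clar_r5_performance		# 2. create the FS
    server_mountpoint VDM2     -c   /FSNAME					# 3. mkdir + mount on the VDM
    server_mount      VDM2  FSNAME  /FSNAME
    server_export     VDM2 -name FSNAME /FSNAME					# 4. CIFS share via the VDM
    server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_N/FSNAME	# 5. NFS export via the vdm path
    										#    (find N from server_mount server_2)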

    File System, Mounts, Exports

    
    nas_fs -n FSNAME -create size=800G pool=clar_r5_performance	# create fs
    nas_fs -d FSNAME						# delete fs
    nas_fs -size FSNAME		# determine size
    nas_fs -list			# list all FS, including private root_* fs used by DM and VDM
    
    server_mount server_2		# show mounted FS for DM2
    server_mount VDM1		# show mounted FS for VDM1
    server_mount ALL		# show mounted FS on all servers
    
    server_mountpoint VDM1    -c  /FSName	# create mountpoint (really mkdir on VDM1)
    server_mount      VDM1 FSNAME /FSName	# mount the named FS at the defined mount point/path.
    					# FSNAME is name of the file system, traditionally a disk/device in Unix
    					# /FSName is the mount point, can be different than the name of the FS.
    
    server_mount server_2 -o accesspolicy=UNIX FSNAME /FSName
    # Other Access Policy (training book ch11-p15)
    # NT     (both unix and windows access check NTFS ACL)
    # UNIX   (both unix and windows access check NFS permission bits)
    # NATIVE (default, unix and nt perm kept independent, security hole!)
    # SECURE (check ACL on both Unix and Win before granting access)
    # MIXED - Both NFS and CIFS client rights checked against ACL; Only a single set of security attributes maintained 
    # MIXED_COMPAT - MIXED with compatible features 
     
    NetApp Mixed Mode is like EMC NATIVE.  Any sort of mixed mode is likely asking for problems.
    Sticking to either only NT or only UNIX is the best bet.
    
    
    server_export ALL		# show all NFS export and CIFS share, vdm* and server_*
    server_export VDM1 -name FSNAME /FSName
    server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_6/FSName
    ##  Because the VDM shares the FS, the mount path used by the physical DM (NFS) needs to account for the /root_vdm_X prefix
    
    # NFS is the default protocol if not specified
    # On a VDM, export is only for the CIFS protocol
    
    

    unshare/unmount

    
    server_export  VDM1 -unexport -p -name FSNAME	# -p for permanent (-unexport = -u)
    server_umount  VDM1 -p /FSName			# -p = permanent; if omitted, the mount point remains
     						# (marked "unmounted" when listed by server_mount ALL),
    						# and then the FS can't be mounted elsewhere, the server can't be deleted, etc!
    						# with -p it really is an rmdir on VDM1
    
    

    Server DM, VDM

    nas_server -list		# list physical server (Data Mover, DM)
    nas_server -list -all		# include Virtual Data Mover (VDM)
    server_sysconfig server_2 -pci
    
    nas_server -info server_2
    nas_server -v -l				# list vdm
    
    
    nas_server -v vdm1 -move server_3				# move vdm1 to DM3
    		# disruptive; the IP changes to the logical IP on the destination server
    		# the logical interface (cge0-1) needs to exist on the destination server (with a diff IP)
    
    
    server_setup server_3 -P cifs -o start		# create CIFS server on DM3, start it
    						# requires DM3 to be active, not standby (type 4)
    
    
    
    server_cifs  server_2  -U compname=vdm2,domain=oak.net,admin=administrator	# unjoin CIFS server from domain
    server_setup server_2 -P cifs -o delete		# delete the cifs server
    
    nas_server -d vdm1				# delete the vdm (and all the CIFS servers and user/group info contained in it)
    
    
    

    Storage Pool, Volume, Disk

    AVM = Automatic Volume Management
    MVM = Manual Volume Management
    MVM is very tedious and requires a lot of understanding of the underlying infrastructure, disk striping, and concatenation. If not done properly, it can create performance imbalance and degradation. Not really worth the headache. Use AVM, and all FS creation can be done via nas_fs ... pool=...

    
    
    nas_pool -size all		# show the size/space of all storage pools managed by AVM
    nas_pool -info all		# show which FSes are defined on each storage pool
    
    
    server_df			# df...
    
    nas_volume -list
    nas_disk -l
    
    
    /nas/sbin/rootnas_fs -info root_fs_vdm_vdm1 | grep _server 	# find which DM host a VDM
    
    
    

    Usermapper

    Usermapper in EMC is substantially different from the one in NetApp. RTFM!

    It is a program that generates a UID for each new Windows user it has never seen before. Files are stored Unix-style by the DM, thus SIDs need a translation DB, and Usermapper provides this. A single Usermapper is used for the entire cabinet (server_2, _3, _4, VDM2, VDM3, etc) to provide consistency. If you are a Windows-ONLY shop with only 1 Celerra, this may be okay. But if there is any Unix, this is likely going to be a bad solution:
    if a user also has a Unix UID, then the same user accessing files on Windows or Unix is viewed as two different users, as the UID from NIS will differ from the UID created by Usermapper!
    
    server_usermapper server_2 -enable	# enable usermapper service
    server_usermapper server_2 -disable
    # even with usermapper disabled, and a passwd file in /.etc/passwd,
    # somehow files created by a Windows user get a strange GID of 32770 (albeit the UID is fine).
    # There is a /.etc/gid_map file, but it is not a text file; not sure what is in it.
    
    server_usermapper server_2 -u -E passwd.txt	# dump out usermapper db info for USER, storing it in .txt file
    server_usermapper server_2 -g -E group.txt	# dump out usermapper db info for GROUP, storing it in file 
    # the usermapper database should be backed up periodically!
    
    server_usermapper server_2 -remove -all		# remove usermapper database
    
    
    
    

    General Command

    nas_version			# show the Celerra (DART) version
    				# older versions are only compatible with older JREs (eg 1.4.2 on 5.5.27 or older)
    
    server_log server_2		# read log file of server_2
    
    

    Config Files

    
    A number of config files are stored in the DM's /.etc folder.
    Retrieve/upload them using:
    server_file server_2 -put/-get   ...
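
    For example (I believe the argument order is source then destination, but double-check the server_file man page):

    server_file server_2 -get passwd passwd.server_2	# pull the DM's passwd file to the control station
    server_file server_2 -put passwd.server_2 passwd	# push an edited copy back to the DM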
    
    
    
    Celerra Management
    The Windows MMC snap-in thing...
    
    
    

    CheckPoint

    Snapshots are known as CheckPoint in EMC speak.
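
    A minimal sketch from memory (the fs_ckpt syntax is not covered elsewhere in these notes, so verify against the man page for your DART version; FSNAME and the checkpoint name are placeholders):

    fs_ckpt FSNAME -name FSNAME_ckpt1 -Create		# create a checkpoint of FSNAME
    fs_ckpt FSNAME -list				# list checkpoints of FSNAME
    nas_fs -d FSNAME_ckpt1				# checkpoints can be deleted individually
    							# (I believe a checkpoint is deleted like any other FS)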

    
    
    
    

    Backup and Restore, Disaster Recovery

    Quota

    Change to the filesize policy during initial setup, as Windows does not support the block policy (which is the Celerra default). It is a param change, and a reboot is required: param quota policy=filesize
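
    If memory serves, the param is changed with server_param (verify the facility/param names before relying on this):

    server_param server_2 -facility quota -info policy				# show the current quota policy
    server_param server_2 -facility quota -modify policy -value filesize	# change it; reboot the DM afterwards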

    Celerra Manager is the easiest way to manage quotas.
    
    nas_quotas -user  -on -fs    FSNAME	# enable user quota on FSNAME.  Disruptive. (ch12, p22)
    nas_quotas -group -on -mover server_2	# enable group quota on the whole DM.  Disruptive.
    
    nas_quotas -both -off -mover server_2	# disable both group and user quota at the same time.
    
    ++ disruption...  ??? really?  just slow down?  or FS really unavailable?? ch 12, p22.
    
    nas_quotas -report -user -fs FSNAME 
    nas_quotas -report -user -mover server_2
    
    
    nas_quotas -edit -config -fs FSNAME 	# Define default quotas for a FS.
    
    
    nas_quotas -list -tree -fs FSNAME	# list quota trees on the specified FS.
     
    nas_quotas -edit -user -fs FSNAME user1 user2 ...	# edit quota (vi interface)
    
    nas_quotas -user -edit -fs FSNAME -block 104 -inode 100 user1	# no vi!
    
    nas_quotas -u -e -mover server_2 501	# user quota, edit, for uid 501, whole DM
    
    nas_quotas -g -e -fs FSNAME 10		# group quota, edit, for gid 10, on a FS only.
    
    nas_quotas -user -clear -fs FSNAME	# clear quota: reset to 0, turn quota off.
    
    

    Tree Quota

    
    nas_quotas -on -fs FSNAME -path /tree1		# create qtree on FS                (for user???) ++
    nas_quotas -on -fs FSNAME -path /subdir/tree2	# qtree can be a lower level dir
    
    nas_quotas -off -fs FSNAME -path /tree1		# disable user quota (why user?)
    						# does it req dir to be empty??
    nas_quotas -e -fs FSNAME -path /tree1 user_id	# -e,  -edit user quota
    nas_quotas -r -fs FSNAME -path /tree1		# -r = -report
    
    
    nas_quotas -t -on -fs FSNAME -path /tree3	# -t = tree quota; this eg turns it on
    						# if no -t given, is it for the user??
    nas_quotas -t -list -fs FSNAME			# list tree quota
    
    
    To turn off Tree Quotas:
    - Path MUST BE EMPTY !!!!!	ie, delete all the files, or move them out.  
    				can one ask for a harder way of turning something off??!!
    				The only alternative is to set the quota values to 0 so it becomes tracking only,
    				but that is not fully off.
    
    
    Quota Policy change:
    - Quota check of block size (default) vs file size (Windows only supports the latter).
    - Exceeding quota :: deny disk space or allow to continue.
    The policies need to be established from the get-go.  They can't really be changed, as:
    	- a param change requires a reboot
    	- all quotas need to be turned OFF  (which requires the path to be empty).
    
    Way to go EMC!  NetApp is much less draconian about such changes.  
    Probably best to just not use quotas at all on EMC!
    If everything is set to 0 and just used for tracking, maybe it's okay.  
    God forbid if you change your mind!
    
    	
    
    

    Other Seldom Changed Config

    Network interface config

    The physical network port doesn't get an IP address (from the Celerra external perspective).
    All network config (IP, trunk, route, dns/nis/ntp servers) applies to the DM, not the VDM.


    # define local network: ie assign IP 
    server_ifconfig server_2  -c      -D cge0  -n cge0-1     -p IP  10.10.53.152 255.255.255.224 10.10.53.158
    #      ifconfig of serv2  create  device  logical name  protocol      svr ip    netmask        broadcast
    
    
    server_ifconfig server_2 cge0-2 down 	??	# ifconfig down for cge0-2 on server_2
    server_ifconfig server_2 -d cge0-2		# delete logical interfaces (ie IP associated with a NIC).
    ...
    
    server_ping  server_2 ip-to-ping		# run ping from server_2 
    
    server_route server_2 -a default 10.10.20.1			# route add default 10.10.20.1  on DM2
    server_dns server_2    corp.hmarine.com ip-of-dns-svr		# define a DNS server to use.  It is per DM
    server_dns server_2 -d corp.hmarine.com				# delete DNS server settings
    server_nis server_2 hmarine.com ip-of-nis-svr			# define NIS server, again, per DM.
    server_date server_2 timesvc start ntp 10.10.91.10		# set to use NTP
    server_date server_2 0803132059			# set server date; format is YYMMDDhhmm, sans space
    						# good to use cron to set the standby server's clock once a day,
    						# as the standby server can't get time from NTP.
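
    A sketch of that cron idea, run from the control station's crontab (the /nas/bin path, the nightly schedule, and whether server_date can target a standby mover are my assumptions; the time simply reuses the control station's own clock):

    59 23 * * * /nas/bin/server_date server_3 $(date +\%y\%m\%d\%H\%M)	# % must be escaped in crontab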
    	
    
    

    Standby Config

    Server failover:

    When server_2 fails over to server_3, DM3 assumes the role of server_2. Any VDM that was running on DM2 moves over to DM3 as well. All IP addresses of the DM and its VDMs are transferred, including the MAC addresses.

    Note that when moving a VDM from server_2 to server_3 outside of a failover, the IP address changes. This is because such a move is from one active DM to another.

    IPs are kept only when failing over from Active to Standby.
    server_standby server_2 -c mover=server_3 -policy auto
    # assign server_3 as standby for server_2, using auto fail over policy
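
    If memory serves, manual failover/failback also go thru server_standby (the -activate/-restore forms below are from memory; verify before use):
    server_standby server_2 -activate mover		# fail server_2 over to its standby
    server_standby server_2 -restore  mover		# fail back to the original server_2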
    
    Lab 6 page 89
    
    

    Links

    1. EMC PowerLink
    2. EMC Lab access VDM2


    History

    
    DART 5.6	expected 2008.03
    DART 5.5	mainstream in 2007, 2008
    
    


    [Doc URL: http://sn50.users.sonic.net/psg/emcCelerra.html ]
    [Doc URL: http://www.cs.fiu.edu/~tho01/psg/emcCelerra.html]

    (cc) Tin Ho. See main page for copyright info.
    Last Updated: 2008-03-22

    "LYS on the outside, LKS in the inside"
    "AUHAUH on the outside, LAPPLAPP in the inside"