Export the FS through the VDM first; the NFS export then uses the /root_vdm_N/mountPoint path.
Use a VDM instead of the physical DM (server_2) for the CIFS server.
A VDM is really just a file system, so it can be copied/replicated.
Because Windows group information and other CIFS system data are not stored in the underlying Unix FS, VDMs were introduced to make it easy
to back up or migrate a CIFS server.
For multi-protocol access, it is best to have one VDM provide CIFS access, while NFS rides on the physical DM.
CAVA complication: the antivirus scanning feature must be connected to a physical CIFS server, not to a VDM.
This is because there is one CAVA instance for the whole DM, not one per VDM that may exist on that DM.
A global CIFS share is also required.
One may still want to use just the physical DM with a limited Windows user/group config, accepting that this config does not migrate or back up readily.
Overall, there is still a need for 2 IPs per DM. Perhaps the VDM and the NFS DM share the same IP so they can have the same hostname, while the global CIFS share rides on the physical DM with a separate IP that users don't need to know about. Or scrap the idea of the VDM altogether, but then one may pay dearly in replication/backup...
Celerra Howto
Create a Server
* Create an NFS server
- Really just ensuring a DM (eg server_2) is acting as primary, and
- Create a logical network interface (server_ifconfig -c -n cge0-1 ...)
(The DM always exists, but if it does CIFS through a VDM only, then it has no IP and thus can't do NFS exports.)
* Create a physical CIFS server (server_setup server_2 -P cifs ...)
OR
a VDM to host the CIFS server (nas_server -name VDM2 -type vdm -create server_2 -setstate loaded)
+ Start the CIFS service (server_setup server_2 -P cifs -o start)
+ Join the CIFS server to the domain (server_cifs VDM2 -J ...); see the sketch after this list.
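A minimal sketch of standing up a VDM-backed CIFS server end to end, reusing names from these notes (cge0-1, VDM2, domain oak.net); the IP/mask values and the -add option string are illustrative assumptions, so verify them against the server_cifs man page for your DART release.
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158  # logical interface (IP) on the physical DM
nas_server -name VDM2 -type vdm -create server_2 -setstate loaded                              # create and load the VDM on server_2
server_setup server_2 -P cifs -o start                                                         # start the (one) CIFS service on the DM
server_cifs VDM2 -add compname=vdm2,domain=oak.net,interface=cge0-1                            # assumed: define the CIFS server inside the VDM
server_cifs VDM2 -J compname=vdm2,domain=oak.net,admin=administrator                           # join it to the domain (prompts for the admin password)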
Create FS and Share
- Find space to host the FS (nas_pool for AVM, nas_disk for masochistic MVM)
- Create the FS (nas_fs -n FSNAME -c ...)
- Mount the FS on the VDM, then the DM (server_mountpoint -c, server_mount)
- Share it on Windows via the VDM (server_export VDM2 -P cifs -n FSNAME /FsMount)
- Export it via NFS on the physical DM, using the VDM path (server_export server_2 -o root=... /root_vdm_N/FsMount)
Note that for server creation, the DM for NFS is created first, then the VDM for CIFS.
But for FS sharing, the FS is first mounted/shared on the VDM (CIFS), then exported on the DM (NFS).
This is because the VDM mount dictates the path seen by the DM as /root_vdm_N.
It is kind of backward, almost as if the lower-level DM has to go through the higher-level VDM; blame it on how the FS mount path ends up. A worked sketch follows below.
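A hedged walk-through of that ordering, reusing FSNAME, the clar_r5_performance pool, VDM2 and the root=10.10.91.44 export option from the command reference below; the /root_vdm_6 prefix is whatever root_vdm file system backs your VDM (check server_mount server_2 for the real path).
nas_fs -n FSNAME -create size=800G pool=clar_r5_performance          # carve the FS out of an AVM pool
server_mountpoint VDM2 -c /FSName                                    # mountpoint (mkdir) on the VDM
server_mount VDM2 FSNAME /FSName                                     # mount on the VDM; it surfaces on the DM under /root_vdm_N
server_export VDM2 -P cifs -n FSNAME /FSName                         # CIFS share from the VDM
server_export server_2 -P nfs -o root=10.10.91.44 /root_vdm_6/FSName # NFS export from the physical DM via the VDM path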
File System, Mounts, Exports
nas_fs -n FSNAME -create size=800G pool=clar_r5_performance # create fs
nas_fs -d FSNAME # delete fs
nas_fs -size FSNAME # determine size
nas_fs -list # list all FS, including private root_* fs used by DM and VDM
server_mount server_2 # show mounted FS for DM2
server_mount VDM1 # show mounted FS for VDM1
server_mount ALL # show mounted FS on all servers
server_mountpoint VDM1 -c /FSName # create mountpoint (really mkdir on VDM1)
server_mount VDM1 FSNAME /FSName # mount the named FS at the defined mount point/path.
# FSNAME is name of the file system, traditionally a disk/device in Unix
# /FSName is the mount point, can be different than the name of the FS.
server_mount server_2 -o accesspolicy=UNIX FSNAME /FSName
# Other access policies (training book ch11 p15):
#   NT           - both Unix and Windows access check the NTFS ACL
#   UNIX         - both Unix and Windows access check the NFS permission bits
#   NATIVE       - default; Unix and NT permissions kept independent (security hole!)
#   SECURE       - ACL checked on both the Unix and Windows side before granting access
#   MIXED        - both NFS and CIFS client rights checked against the ACL; only a single set of security attributes is maintained
#   MIXED_COMPAT - MIXED with compatibility features
NetApp Mixed Mode is like EMC NATIVE. Any sort of mixed mode is likely asking for problems.
Sticking to either NT only or UNIX only is the best bet.
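For an NT-only shop, a hedged variant of the mount above (same placeholder FS and mount point) would pin the policy accordingly:
server_mount server_2 -o accesspolicy=NT FSNAME /FSName # both NFS and CIFS access checked against the NTFS ACL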
server_export ALL # show all NFS export and CIFS share, vdm* and server_*
server_export VDM1 -name FSNAME /FSName
server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_6/FSName
## Because the VDM owns the FS mount, the path used by the physical DM (NFS) needs the /root_vdm_X prefix
# NFS is the default protocol if none is specified
# On a VDM, exports are CIFS only
unshare/unmount
server_export VDM1 -unexport -p -name FSNAME # -p for permanent (-unexport = -u)
server_umount VDM1 -p /FSName # -p = permanent; if omitted, the mount point remains
# (marked "unmounted" when listed by server_mount ALL)
# and then the FS can't be mounted elsewhere, the server cannot be deleted, etc!
# a permanent umount really is an rmdir on VDM1
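Putting those together, a hedged teardown sequence for retiring an FS (same placeholder names; unexport any NFS export on the physical DM the same way first):
server_export VDM1 -unexport -p -name FSNAME # permanently drop the CIFS share
server_umount VDM1 -p /FSName                # permanent umount, removes the mountpoint
nas_fs -d FSNAME                             # delete the now-unmounted file system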
Server DM, VDM
nas_server -list # list physical server (Data Mover, DM)
nas_server -list -all # include Virtual Data Mover (VDM)
server_sysconfig server_2 -pci
nas_server -info server_2
nas_server -v -l # list vdm
nas_server -v vdm1 -move server_3 # move vdm1 to DM3
# disruptive; the IP changes to the logical IP on the destination server
# the logical interface (cge0-1) needs to exist on the destination server (with a different IP)
#
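A hedged sketch of such a move; the destination IP here is purely illustrative, the point being that a logical interface with the same name must already exist on server_3:
server_ifconfig server_3 -c -D cge0 -n cge0-1 -p IP 10.10.53.160 255.255.255.224 10.10.53.158 # same interface name, different IP (illustrative)
nas_server -v vdm1 -move server_3                                                             # move the VDM; clients reconnect to the new IP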
server_setup server_3 -P cifs -o start # create CIFS server on DM3, start it
# req DM3 to be active, not standby (type 4)
server_cifs server_2 -U compname=vdm2,domain=oak.net,admin=administrator # unjoin CIFS server from domain
server_setup server_2 -P cifs -o delete # delete the cifs server
nas_server -d vdm1 # delete the VDM (and all the CIFS servers and local user/group info contained in it)
Storage Pool, Volume, Disk
AVM = Automatic Volume Management
MVM = Manual Volume Management
MVM is very tedious and requires a lot of understanding of the underlying infrastructure, disk striping, and concatenation. If not done properly, it can create performance imbalance and degradation. Not really worth the headache. Use AVM, and all FS creation can be done via nas_fs ... pool=... (short sketch below).
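A hedged sketch, assuming the clar_r5_performance pool name used elsewhere in these notes and that nas_pool -size accepts a pool name as well as "all":
nas_pool -size clar_r5_performance                          # check free space in the pool first
nas_fs -n FSNAME -create size=800G pool=clar_r5_performance # let AVM pick and lay out the volumes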
nas_pool -size all # show the size/free space of all storage pools managed by AVM
nas_pool -info all # show which FSs are defined on each storage pool
server_df # df...
nas_volume -list
nas_disk -l
/nas/sbin/rootnas_fs -info root_fs_vdm_vdm1 | grep _server # find which DM hosts a VDM
Usermapper
Usermapper in EMC is substantially different from its NetApp counterpart. RTFM!
It is a service that generates a UID for each new Windows user it has never seen before. Files are stored Unix-style by the DM, so SIDs need a translation DB; Usermapper provides it. A single Usermapper is used for the entire cabinet (server_2, _3, _4, VDM2, VDM3, etc.) to provide consistency.
If you are a Windows-ONLY shop with only one Celerra, this may be okay.
But if there is any Unix, this is likely going to be a bad solution.
If users also have Unix UIDs, then the same user accessing files from Windows and from Unix is seen as two different users, because the UID from NIS will differ from the UID created by Usermapper!
server_usermapper server_2 -enable # enable usermapper service
server_usermapper server_2 -disable
# even with usermapper disabled and a passwd file in /.etc/passwd,
# Windows file creation somehow gets a strange GID of 32770 (albeit the UID is fine).
# There is a /.etc/gid_map file, but it is not a text file; not sure what is in it.
server_usermapper server_2 -u -E passwd.txt # dump out usermapper db info for USER, storing it in .txt file
server_usermapper server_2 -g -E group.txt # dump out usermapper db info for GROUP, storing it in file
# the usermapper database should be backed up periodically!
server_usermapper server_2 -remove -all # remove usermapper database
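A hedged sketch of that periodic backup, assuming the -E export files land in the current directory on the Control Station and that backuphost:/celerra-backups/ is a made-up destination:
server_usermapper server_2 -u -E passwd.txt           # export the user (UID) mappings
server_usermapper server_2 -g -E group.txt            # export the group (GID) mappings
scp passwd.txt group.txt backuphost:/celerra-backups/ # copy them off the Control Station (hypothetical host/path)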
General Command
nas_version # version of Celerra
# older versions are only compatible with an older JRE (eg 1.4.2 on 5.5.27 or older)
server_log server_2 # read log file of server_2
Config Files
A number of config files are stored in the Data Mover's /.etc folder.
Retrieve or upload them using
server_file server_2 -put/-get ...
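A hedged example, editing the /.etc/passwd file mentioned in the Usermapper section; I'm assuming server_file treats the remote filename as relative to the Data Mover's /.etc directory, so verify before pushing anything back:
server_file server_2 -get passwd passwd # copy the DM's passwd file to the Control Station
vi passwd                               # edit the local copy
server_file server_2 -put passwd passwd # push it back to the DM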
Celerra Management
Windows MMC plug-in thing...
CheckPoint
Snapshots are known as CheckPoints in EMC speak.
Backup and Restore, Disaster Recovery
Quota
Change to the filesize quota policy during initial setup,
as Windows does not understand the block policy (which is the Celerra default).
This is a param change; a reboot is required (hedged sketch below).
param quota policy=filesize
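A hedged sketch of that change via server_param; the facility/parameter names (quota, policy) are taken from the note above rather than verified, so list them first, and note that the Data Mover reboot is disruptive:
server_param server_2 -facility quota -list                          # confirm the parameter name (assumed facility)
server_param server_2 -facility quota -modify policy -value filesize # switch quota accounting to file size
server_cpu server_2 -reboot now                                      # param takes effect after the DM reboots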
Celerra manager is easiest to use.
nas_quotas -user -on -fs FSNAME # enable user quotas on FSNAME. Disruptive. (ch12, p22)
nas_quotas -group -on -mover server_2 # enable group quota on whole DM . Disruptive.
nas_quotas -both -off -mover server_2 # disable both group and user quota at the same time.
++ disruption... ??? really? just slow down? or FS really unavailable?? ch 12, p22.
nas_quotas -report -user -fs FSNAME
nas_quotas -report -user -mover server_2
nas_quotas -edit -config -fs FsNAME # Define default quota for a FS.
nas_quotas -list -tree -fs FSNAME # list quota trees on the specified FS.
nas_quotas -edit -user -fs FSNAME user1 user2 ... # edit quota (vi interface)
nas_quotas -user -edit -fs FSNAME -block 104 -inode 100 user1 # no vi!
nas_quotas -u -e -mover server_2 501 # user quota, edit, for uid 501, whole DM
nas_quotas -g -e -fs FSNAME 10 # group quota, edit, for gid 10, on a FS only.
nas_quotas -user -clear -fs FSNAME # clear quota: reset to 0, turn quota off.
Tree Quota
nas_quotas -on -fs FSNAME -path /tree1 # create qtree on FS (for user???) ++
nas_quotas -on -fs FSNAME -path /subdir/tree2 # qtree can be a lower level dir
nas_quotas -off -fs FSNAME -path /tree1 # disable user quota (why user?)
# does it req dir to be empty??
nas_quotas -e -fs FSNAME -path /tree1 user_id # -e, -edit user quota
nas_quotas -r -fs FSNAME -path /tree1 # -r = -report
nas_quotas -t -on -fs FSNAME -path /tree3 # -t = tree quota; this eg turns it on
# if no -t defined, it is for the user??
nas_quotas -t -list -fs FSNAME # list tree quota
To turn off tree quotas:
- The path MUST BE EMPTY !!!!! ie, delete all the files or move them out.
Can one ask for a harder way of turning something off??!!
The only alternative is to set the quota values to 0 so it becomes tracking-only,
but that is not fully off.
Quota policy choices:
- Quota accounting by block size (the default) vs file size (Windows only supports the latter).
- On exceeding quota: deny disk space, or allow the write to continue.
The policy needs to be established from the get-go. It can't really be changed later, because:
- The param change requires a reboot
- All quotas need to be turned OFF (which requires the paths to be empty).
Way to go EMC! NetApp is much less draconian about such changes.
Probably best to just not use quotas at all on EMC!
If everything is set to 0 and quotas are used for tracking only, maybe okay.
God forbid you change your mind!
Other Seldom Changed Config
Network interface config
A physical port doesn't get an IP address itself (from the Celerra external perspective); IPs go on logical interfaces (eg cge0-1).
All network config (IP, trunk, route, dns/nis/ntp server) applies to the DM, not the VDM.
# define local network: ie assign IP
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158
# ie on server_2: -c create, -D device cge0, -n logical name cge0-1, -p protocol IP, then address netmask broadcast
server_ifconfig server_2 cge0-2 down ?? # ifconfig down for cge0-2 on server_2
server_ifconfig server_2 -d cge0-2 # delete logical interfaces (ie IP associated with a NIC).
...
server_ping server_2 ip-to-ping # run ping from server_2
server_route server_2 a default 10.10.20.1 # route add default 10.10.20.1 on DM2
server_dns server_2 corp.hmarine.com ip-of-dns-svr # define a DNS server to use. It is per DM
server_dns server_2 -d corp.hmarine.com # delete DNS server settings
server_nis server_2 hmarine.com ip-of-nis-svr # define NIS server, again, per DM.
server_date server_2 timesvc start ntp 10.10.91.10 # set to use NTP
server_date server_2 0803132059 # set the server date; format is YYMMDDhhmm (no spaces)
# good to use cron to set the standby server's clock once a day,
# as a standby server can't get time from NTP.
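Putting the pieces together, a hedged order-of-operations for networking a fresh DM (gateway and DNS/NIS/NTP addresses are placeholders from the examples above):
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158 # logical interface: IP, netmask, broadcast
server_route server_2 a default 10.10.20.1                                                    # default gateway
server_dns server_2 corp.hmarine.com 10.10.91.10                                              # DNS domain and server (placeholder IP)
server_nis server_2 hmarine.com 10.10.91.11                                                   # NIS domain and server (placeholder IP)
server_date server_2 timesvc start ntp 10.10.91.10                                            # then sync the clock via NTP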
Standby Config
Server failover:
When server_2 fails over to server_3, DM3 assumes the role of server_2.
Any VDM that was running on DM2 moves over to DM3 as well.
All IP addresses of the DM and its VDMs are transferred, including the MAC addresses.
Note that when moving a VDM from server_2 to server_3 outside of failover, the IP addresses change.
This is because such a move is from one active DM to another.
IPs are kept only when failing over from active to standby.
server_standby server_2 -c mover=server_3 -policy auto
# assign server_3 as standby for server_2, using auto fail over policy
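A hedged sketch of the failover lifecycle; -activate and -restore are my recollection of the standard server_standby options, so confirm with the man page before relying on them:
server_standby server_2 -c mover=server_3 -policy auto # assign server_3 as standby (as above)
server_standby server_2 -activate mover                # fail server_2 over onto its standby (assumed option)
server_standby server_2 -restore mover                 # fail back once server_2 is healthy again (assumed option)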
Lab 6 page 89
Links
- EMC PowerLink
- EMC Lab access VDM2
History
DART 5.6 expected 2008.03
DART 5.5 mainstream in 2007, 2008
"LYS on the outside, LKS in the inside"
"AUHAUH on the outside, LAPPLAPP in the inside"