EMC Unity XT380

 

 

We recently obtained the following equipment to enhance service availability:

1.      EMC Unity XT380 Disk Processor Enclosure (DPE) with Disk Array Enclosure (DAE)

2.      Connectrix 6610 Fibre Channel switches (FC Sw)

3.      2x Dell PowerEdge R730 Servers 

Previously we were using a Dell IP SAN (PS6210) for the backend database, which had throughput issues. To resolve the issue, we asked Dell EMC to suggest a suitable upgrade without breaking the bank!

Deploying this hardware was a lot of fun. The whole process included:

1.      Rack placement and power provisioning for EMC SAN, DAE & FC Switches with Power Cabling

2.      Mounting of EMC SAN XT380 DPE, DAE and Connectrix FC Switches in Rack 

3.      FC Switches Configurations & Cabling as per EMC Recommendation

4.      FC Cabling, connecting DAE with DPE and DPE with FC switches

5.      EMC SAN initial setup, management IP scheme assignment, and verification of firmware (upgrade if required)

6.      Installation of Host Bus Adapters on Dell PE R730 servers to enable FC protocol

7.      Pool creation, snapshot quotas and thin provisioning

8.      Partitioning of storage and zoning of LUNs

9.      Installation of Red Hat OS on the Dell PE R730 servers

10.   Multipath configuration on servers

11.   Final stress testing of storage from servers & verification
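Step 11, the final stress test, can be sketched with fio. This is only an illustration: the multipath device name and the job parameters below are placeholder assumptions, not values from the actual deployment.

```shell
# Stress-test sketch using fio.
# Assumptions: the device path and job parameters are placeholders.
cat > unity-stress.fio <<'EOF'
[global]
ioengine=libaio
# direct=1 bypasses the page cache so the SAN itself is exercised
direct=1
runtime=300
time_based=1
group_reporting=1

[random-read-8k]
# multipath device of a test LUN
filename=/dev/mapper/mpatha
rw=randread
# 8k matches a typical Oracle block size
bs=8k
iodepth=32
numjobs=4
EOF

# To run (note: write workloads such as rw=randwrite would destroy data on the LUN):
#   fio unity-stress.fio
```

Running both a random-read and a sequential job against every LUN, and watching IOPS/latency in Unisphere at the same time, gives a reasonable before/after comparison against the old PS6210.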

1.      Power Requirement

a.      2 x power inputs for DPE [each from a separate UPS]

b.      2 x power inputs for DAE [each from a separate UPS]

c.      1 x power input for FC Switch 1

d.      1 x power input for FC Switch 2

2.      Rack Placement

Space Requirement: 7U  [2U DPE, 3U DAE and 2x 1U FC Switches]

1.      Switches have been placed on top

2.      DAE below the switches

3.      DPE below the DAE; this way capacity can be added later by installing a new DAE beneath the existing DPE

 

Rack 1 from top to bottom: 1: FC Sw-1, 2: FC Sw-2, 3: DAE, 4: Unity XT380 DPE


3.      Connectivity

FC Switches with DPE

 

                         SP A      SP B
FC Switch 1 – Port 0     Port 0    -
FC Switch 1 – Port 1     -         Port 0
FC Switch 2 – Port 0     Port 1    -
FC Switch 2 – Port 1     -         Port 1


DAE with DPE

Figure 1: Connecting DAE with DPE

Servers with FC Switches

The PE R730 rack servers are located in Rack 2, whereas the Unity XT380 and FC switches are placed in Rack 1. For connectivity, FC cables of at least 3 meters and HBAs are required.
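Once the HBAs are installed, each server port's WWPN is needed for zoning on the switches. A small sketch for collecting them on each R730 node, followed by the general shape of the Brocade FOS zoning commands (the alias and zone names are illustrative assumptions, not from this setup):

```shell
# Print the WWPN of every FC HBA port on this host (via sysfs).
# On a machine without FC HBAs this prints a notice instead.
found=0
for p in /sys/class/fc_host/host*/port_name; do
    [ -e "$p" ] || continue
    found=1
    printf '%s: %s\n' "$p" "$(cat "$p")"
done
[ "$found" -eq 1 ] || echo "no FC HBAs found"

# On the Connectrix DS-6610 (Brocade FOS) zoning would then look roughly
# like this -- names and WWPNs below are placeholders:
#   zonecreate "node1_spa", "10:00:xx:xx:xx:xx:xx:xx; 50:06:xx:xx:xx:xx:xx:xx"
#   cfgcreate  "san_cfg", "node1_spa"
#   cfgenable  "san_cfg"
```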



 

                         R730 Node 1    R730 Node 2
FC Switch 1 – Port 9     Port 0         -
FC Switch 1 – Port 10    -              Port 1
FC Switch 2 – Port 9     -              Port 0
FC Switch 2 – Port 10    Port 1         -

 

Management IPs

The following management IPs have been assigned to the equipment:

1.      192.168.17.60 (EMC Unity XT380 DPE)

2.      192.168.17.61 (Connectrix DS-6610 FC Switch)

3.      192.168.17.62 (Connectrix DS-6610 FC Switch)

Storage Pools

This is a hybrid array with a combination of Flash and rotating disks. EMC recommends creating fewer pools to increase performance. We have two options here:

1.      Use Mixed Pool (Combination of Flash and Rotating Disks)

2.      Use an all-Flash pool and a traditional pool separately

If a mixed pool is used, we can utilize the auto-tiering feature of the storage, which works as follows:

1.      Start by placing all data on the Flash disks

2.      Monitor how frequently the data is accessed, then move colder data down to the next tier

3.      Repeat step two until the least-accessed data sits on the lowest tier available

We have three performance tiers available in this SAN

1.      Extreme Performance Tier (Flash)

2.      Performance Tier (10k RPM)

3.      Capacity Tier (7.2K RPM)

The second option is to create a separate pool for the Flash storage and carve high-IOPS LUNs from it, with a separate pool for the rotating disks.

RAID Configuration

The first 4 drives of every Dell EMC Unity system are called the system drives (DPE Disk 0 through DPE Disk 3).  Dell EMC Unity uses capacity from these 4 drives to store copies of configuration information and other critical system data.  Therefore, the available capacity from each of these drives is about 107GB less than from other drives.  System drives can be added to storage pools like any other drive, but offer less usable capacity due to the system partitions.  To reduce the capacity difference when adding the system drives to a pool, Dell Technologies recommends using a smaller RAID width for the tier which will contain the system drives.


Tier                       Technology      RPM    MTBF (years)   No. of Disks   Hot Spare   Available Capacity (TB)
Capacity Tier              Nearline-SAS    7.2K   132            8              2           6 x 3.66 TB = 21.96
Performance Tier           SAS             10K    186            6              1           5 x 1.65 TB = 8.25
Extreme Performance Tier   Flash           NA     -              2              1           1 x 0.73 TB = 0.73
Flash (System Drives)      Flash           NA     -              4              -           4 x 0.6265 TB = 2.506
Total                                                            20                         33.446 TB

 

RAID Widths and Disk requirements

 


RAID level   Number of drives   RAID Width
RAID 5       6 to 9             4+1
RAID 5       10 to 13           8+1
RAID 5       14 or more         12+1
RAID 6       7 to 8             4+2
RAID 6       9 to 10            6+2
RAID 6       11 to 12           8+2
RAID 6       13 to 14           10+2
RAID 6       15 to 16           12+2
RAID 6       17 or more         14+2
RAID 1/0     3 to 4             1+1
RAID 1/0     5 to 6             2+2
RAID 1/0     7 to 8             3+3
RAID 1/0     9 or more          4+4

 


Suggested RAID settings (Hybrid Pool):

Option 1

·        RAID10 on Extreme Performance Tier (2+2) [Usable Space: 1.2 TB]

·        RAID5 on Performance Tier (4+1) [Usable Space: 6.4 TB]

·        RAID6 on Capacity Tier (4+2) [Usable Space: 14.3 TB]

Option 2

·        RAID5 on Extreme Performance Tier (4+1) [Usable Space: 2.4 TB] (RAID5 is recommended by EMC)

·        RAID5 on Performance Tier (4+1) [Usable Space: 6.4 TB]

·        RAID6 on Capacity Tier (4+2) [Usable Space: 14.3 TB]

Separate Pools (Flash & Rotating Disks)

All Flash Pool

·        RAID10 on Extreme Performance Tier (2+2) [Usable Space: 1.2 TB]

Traditional Disk Pool

·        RAID5 on Performance Tier (4+1) [Usable Space: 6.4 TB]

·        RAID6 on Capacity Tier (4+2) [Usable Space: 14.3 TB]
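The usable-space figures above can be sanity-checked from the RAID widths: only the data portion of each stripe (e.g. 4 of the 5 drives in a 4+1 RAID 5, or 2 of the 4 mirrored drives in a 2+2 RAID 10) contributes capacity. A rough check, which ignores pool metadata overhead and therefore comes out slightly higher than the real figures listed above:

```shell
# Rough usable capacity = number of data drives x drive size in TB.
# Pool metadata overhead is ignored, so these are upper bounds.
raid_usable() {
    # $1 = data drives in the RAID width, $2 = drive size in TB
    awk -v d="$1" -v s="$2" 'BEGIN { printf "%.2f TB\n", d * s }'
}
raid_usable 2 0.73   # RAID10 2+2 on Extreme Performance Tier -> 1.46 TB
raid_usable 4 1.65   # RAID5  4+1 on Performance Tier         -> 6.60 TB
raid_usable 4 3.66   # RAID6  4+2 on Capacity Tier            -> 14.64 TB
```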

Pool Partitioning

Current utilization and volumes are tabulated below:

#    VOLUME NAME   SIZE      Free Space as per Oracle
1    FLASH         700 GB    550 GB
2    OCR1          5.01 GB   14.5 GB
3    OCR2          5.01 GB
4    OCR3          5.01 GB
5    ARCHIVE       1 TB      525 GB
6    DATA1         250 GB    490 GB
7    DATA2         250 GB
8    DATA3         250 GB
9    DATA4         250 GB
10   DATA5         250 GB
11   DATA6         250 GB
12   LOG1          100 GB    99.9 GB
13   LOG2          100 GB    99 GB
14   LOG3          100 GB    99 GB
     Total Size    5.5 TB    4.689 TB

 

LUNs of the same size can be created on the new SAN; since only around 15% of the currently assigned space is used and 85% is still free, there is plenty of room to grow. Note that multiple LUNs are created on the SAN and these raw disks are then combined into one disk group using Oracle's ASM, so it is very easy to increase usable capacity for Oracle later on by adding new LUNs to the group.
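Growing capacity later would follow this pattern. It is only a sketch: the disk group name DATA, the multipath device path, and the rebalance power are illustrative assumptions, and the SQL is written to a file here rather than executed.

```shell
# Sketch of adding a new LUN to an Oracle ASM disk group.
# All names below are placeholders, not from this deployment.
cat > add_lun.sql <<'EOF'
-- Run as SYSASM once the new multipath device is visible on both nodes:
ALTER DISKGROUP DATA ADD DISK '/dev/mapper/mpathb' REBALANCE POWER 4;
EOF

# After creating and zoning the LUN, rescan and verify on each host:
#   rescan-scsi-bus.sh        # from sg3_utils
#   multipath -ll             # confirm the new mpath device appears
#   sqlplus / as sysasm @add_lun.sql
```

ASM then rebalances data across the enlarged disk group online, so no downtime is needed to grow the database storage.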

Connectivity Options

Apart from Fibre Channel, the Unity XT380 also supports connectivity over iSCSI via its 10G Ethernet ports. It also has a built-in option for configuration as a file storage server (NAS) and as a VMware Virtual Volumes provider.

 


 

High Availability

In this configuration, the LUN is owned by SPA. Two ports are connected on each SP for a total of four available paths to the storage system. Dual switches are used to provide redundancy at the network or SAN level. Each host has two connections, one to each switch, in order to access all four available paths. Two hosts are configured as a cluster to provide failover capabilities in case of a host fault. In case of SP failure, the LUN fails over to the surviving SP and continues to service I/O since it is connected to the same switches. In case of switch failure, the remaining switch provides access to both SPs, eliminating the need to use the non-optimized path. In case of host failure, the cluster initiates a failover to the other host and brings the application online. Any path failure due to a bad cable or port does not cause any issues since the second optimized path can be used.

This configuration can also survive multiple failures, as long as they are not within the same component. For example, failure of Host B, Switch A, and SPA can be tolerated since the surviving components can be used to access the LUN. In this case, Host A can connect through Switch B, and access the LUN that’s trespassed to SPB.
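The path failover behaviour described above depends on the host-side multipath configuration from step 10. A minimal /etc/multipath.conf sketch for a Unity array is shown below; the settings reflect common DM-multipath recommendations for ALUA-capable Dell EMC arrays and should be verified against the host connectivity guide for your OS release.

```shell
# Minimal device-mapper-multipath config sketch for a Unity array.
# Written to a sample file here; review before installing it.
cat > multipath.conf.sample <<'EOF'
defaults {
    user_friendly_names yes
}
devices {
    device {
        # Unity/VNX arrays report vendor "DGC"
        vendor               "DGC"
        product              ".*"
        # group paths by ALUA priority so optimized paths are preferred
        path_grouping_policy group_by_prio
        prio                 alua
        hardware_handler     "1 alua"
        path_checker         tur
        failback             immediate
        no_path_retry        12
    }
}
EOF

# Then, on each node:
#   cp multipath.conf.sample /etc/multipath.conf
#   systemctl restart multipathd && multipath -ll
```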

 

 
