Multipathing with RHEL7 on Dell R730 & EMC XT380 SAN Storage

 

Multipath Configuration & Testing on Dell PE R730 Servers

 

Multipath configuration has been performed after consulting:

1.      DELL-EMC recommendations.

Configuration File Overview

The multipath configuration file is divided into the following sections:

blacklist

To prevent the device mapper from mapping /dev/sda, which is used by the operating system, an entry has been made in the /etc/multipath.conf file to include this device in the blacklist.


The WWID of /dev/sda (36141877065e99b002672b17112ac0164) was added to the blacklist in multipath.conf.
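A minimal sketch of the resulting blacklist stanza, assuming the WWID above:

blacklist {
        wwid 36141877065e99b002672b17112ac0164
}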

blacklist_exceptions

A listing of multipath candidates that would otherwise be blacklisted according to the parameters of the blacklist section. None in PMD's case.

defaults

General default settings for DM Multipath. Settings used for PMD’s deployment:

user_friendly_names    yes -> enables human-readable names (mpathN) instead of the long WWIDs seen in the blacklist configuration above.

find_multipaths        yes -> in essence, ignores any device that does not have more than one access path.
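Put together, a sketch of the defaults section as used here:

defaults {
        user_friendly_names yes
        find_multipaths yes
}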

multipaths

Settings for the characteristics of individual multipath devices. These values overwrite what is specified in the defaults and devices sections of the configuration file.

We have specified device-specific settings in /etc/multipath/conf.d/emc.conf.

devices

Settings for the individual storage controllers. These values overwrite what is specified in the defaults section of the configuration file.  

Specified in /etc/multipath/conf.d/emc.conf

 

devices {
        device {
                vendor "DGC"
                product ".*"
                product_blacklist "LUNZ"
                path_grouping_policy "group_by_prio"
                path_selector "queue-length 0"
                path_checker "emc_clariion"
                features "1 queue_if_no_path"
                hardware_handler "1 emc"
                prio "emc"
                failback immediate
                rr_weight "uniform"
                no_path_retry 60
                retain_attached_hw_handler yes
                detect_prio yes
        }
}
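Once the configuration files are in place, multipathd needs to re-read them. One way to do this on RHEL 7 (a generic step, not specific to this deployment):

# re-read /etc/multipath.conf and /etc/multipath/conf.d/*.conf
systemctl reload multipathd
# force a reload of the multipath device maps
multipath -r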

DM MPIO Parameters

path_selector = round-robin 0
        Loops through every path in the path group, sending the same amount of I/O to each.

path_selector = queue-length 0
        Sends the next bunch of I/O down the path with the least number of outstanding I/O requests.

rr_min_io = 1
        Specifies the number of I/O requests to route to a path before switching to the next path in the current path group (for kernels older than 2.6.31).

rr_min_io_rq = 1
        Specifies the number of I/O requests to route to a path before switching to the next path in the current path group (for kernels 2.6.31 and above).

path_grouping_policy = multibus
        All paths are in a single group (all paths have the same priority).

path_checker = tur
        Specifies TEST UNIT READY as the default method used to determine the state of the paths.

failback = immediate
        Manages the path group failback; immediate refers to immediate failback to the highest-priority path group that contains active paths.

fast_io_fail_tmo = 15
        Specifies the number of seconds between detection of a problem on an FC remote port and failing I/O to devices on the remote port by the SCSI layer.
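To confirm which values are actually in effect once the defaults, devices and multipaths sections have been merged, the running multipathd configuration can be dumped (standard device-mapper-multipath tooling on RHEL 7; the exact invocation may vary by version):

# query the running daemon for its effective configuration
multipathd -k"show config"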

 

The multipath configuration helper tool provided by Red Hat was also utilized:

https://access.redhat.com/labs/multipathhelper/

LUN Access

Any port connected to the SAN switch becomes visible on the SAN Storage automatically if the zoning policy on the switch allows it.

All 4 ports of Server 1 and Server 2 are visible on the SAN but were not configured to access any storage. The first step is to create hosts. This can be done either by registering the detected World Wide Names (WWNs) individually or by grouping them. We have chosen to group the WWNs by server name, so the two groups are:

G1 & G2
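To see which WWNs a given server presents when forming these groups, the HBA port names can be read from sysfs on each host (a standard RHEL 7 location, not specific to this deployment):

# list the WWPNs of all FC HBA ports on this server
cat /sys/class/fc_host/host*/port_name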


 

Before testing multipath, a LUN needs to be configured and the hosts must be allowed access to it. We created a test LUN named L1 to carry out the testing and allowed G1 & G2 access.
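Once the LUN is presented, each server has to rescan its SCSI bus before the new device appears. One common way on RHEL 7, assuming the sg3_utils package is installed:

# scan all SCSI hosts for newly presented LUNs
rescan-scsi-bus.sh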

 

 

Testing Multipath

The testing command is multipath -ll. This should list all currently detected disks that have more than one path.

The output format is as follows:

alias (wwid_if_different_from_alias) dm_device_name_if_known vendor,product size=size features='features' hwhandler='hardware_handler' wp=write_permission_if_known

For each path group:

-+- policy='scheduling_policy' prio=prio_if_known status=path_group_status_if_known

For each path:

 `- host:channel:id:lun devnode major:minor dm_status_if_known path_status online_status

As can be seen, we currently have two path groups with four paths each.
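In the format described above, output of that shape would look roughly like the following sketch (the alias, WWID, size, device names and priorities are illustrative only, not values captured from this deployment):

mpatha (36006016000000000000000000000000a) dm-2 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:0 sdb 8:16  active ready running
| |- 2:0:0:0 sdd 8:48  active ready running
| |- 3:0:0:0 sdf 8:80  active ready running
| `- 4:0:0:0 sdh 8:112 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 1:0:1:0 sdc 8:32  active ready running
  |- 2:0:1:0 sde 8:64  active ready running
  |- 3:0:1:0 sdg 8:96  active ready running
  `- 4:0:1:0 sdi 8:128 active ready running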

1.      If the path is up and ready for I/O, the status of the path is ready or ghost.

2.      If the path is down, the status is faulty or shaky.

3.      The path status is updated periodically by the multipathd daemon based on the polling interval defined in the /etc/multipath.conf file.

4.      The dm status has two states: failed, which is analogous to faulty, and active which covers all other path states.

5.      The possible values for online_status are running and offline. A status of offline means that this SCSI device has been disabled.

Bandwidth Results


