Open-E Knowledgebase

Set up an Ubuntu Server with iSCSI and MPIO to connect to an iSCSI Target in DSS V7

Article ID: 1653
Last updated: 28 Mar, 2013

1) Install Ubuntu Server 12.04 with 3 NICs (one for management, two for the iSCSI paths)

2) Open a terminal and switch to root:

sudo -i

Then run the following to refresh the package database.

apt-get update

3) Install open-iscsi, multipath-tools and Midnight Commander (or any other editor):

apt-get install open-iscsi multipath-tools mc

4) Set up both NICs eth1 and eth2 on Ubuntu

ifconfig eth1 172.10.1.10/24 up
ifconfig eth2 172.10.2.10/24 up
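
Note that addresses set with ifconfig are lost on reboot. To make them persistent on Ubuntu 12.04, you can describe both interfaces in /etc/network/interfaces, for example (a minimal sketch, assuming the interface names and addresses used in this article):

auto eth1
iface eth1 inet static
        address 172.10.1.10
        netmask 255.255.255.0

auto eth2
iface eth2 inet static
        address 172.10.2.10
        netmask 255.255.255.0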

5) Set up both NICs eth1 and eth2 on DSS

eth1 172.10.1.11/24
eth2 172.10.2.11/24


6) In DSS V7, create the iSCSI volume and attach it to an iSCSI target

7) On the Ubuntu server, run the following commands to discover the iSCSI targets:

iscsiadm --mode discovery --type sendtargets --portal 172.10.1.11

iscsiadm --mode discovery --type sendtargets --portal 172.10.2.11

Both commands should return something like this:

172.10.1.11:3260,1 iqn.2012-07:dss.target0
172.10.2.11:3260,1 iqn.2012-07:dss.target0

where iqn.2012-07:dss.target0 is the target name.

8) Log in to the iSCSI target through both portals:

iscsiadm --mode node --targetname iqn.2012-07:dss.target0 --portal 172.10.1.11 --login
iscsiadm --mode node --targetname iqn.2012-07:dss.target0 --portal 172.10.2.11 --login

You should now see two new disks in the output of fdisk -l (sdb and sdc).
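
You can also confirm that both iSCSI sessions are established:

iscsiadm --mode session

This should list one session per portal (172.10.1.11 and 172.10.2.11).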

9) Run the multipath -ll command; you should get something like this:

multipath -ll

26b4c687936416b46 dm-0 SCST_BIO,kLhy6AkFv06cPH7c
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 4:0:0:0 sdc 8:32 active ready running

where 26b4c687936416b46 is the Worldwide Identifier (WWID) of the disk.
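
If you want to double-check the WWID, you can also read it directly from one of the paths with the same helper that multipath uses in its getuid_callout (assuming scsi_id is located in /lib/udev, as it is on Ubuntu 12.04):

/lib/udev/scsi_id --whitelisted --device=/dev/sdb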

10) Now create or edit the /etc/multipath.conf file, for example with Midnight Commander:

mcedit /etc/multipath.conf

defaults {
         udev_dir                /dev
         polling_interval        10
         selector                "round-robin 0"
         path_grouping_policy    multibus
         getuid_callout   "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
         prio_callout            /bin/true
         path_checker            readsector0
         prio                    const
         rr_min_io               100
         rr_weight               priorities
         failback                immediate
         no_path_retry           fail
         user_friendly_names     yes
}
blacklist {
         devnode "sda"
         devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
         devnode "^hd[a-z][[0-9]*]"
         devnode "^vd[a-z]"
         devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
multipaths {
         multipath {
                 wwid 26b4c687936416b46
                 alias open-e-test
         }
}

where the wwid is the Worldwide Identifier value you got from the multipath -ll command.


11) Restart the multipath tools:

/etc/init.d/multipath-tools restart

After the restart, the multipath -ll output should be similar to this:

multipath -ll

open-e-test (26b4c687936416b46) dm-0 SCST_BIO,kLhy6AkFv06cPH7c
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 9:0:0:0  sdb 8:16 active ready running
  `- 10:0:0:0 sdc 8:32 active ready running
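
If the alias does not show up after the restart, you can flush the existing maps and let multipath rebuild them (a sketch; only do this while nothing on the multipath device is mounted):

multipath -F
multipath -v2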


12) You should now see the new multipath device in fdisk -l:

Disk /dev/mapper/open-e-test: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 524288 bytes
Disk identifier: 0x00000000


13) Let's perform some testing.

a) Open a second console; we will use it to monitor the data flow. Log in and run the following command:

watch -n1 "echo eth1 && ifconfig eth1 | grep bytes && echo eth2 && ifconfig eth2 | grep bytes"

b) Now run the write test below and watch the data flow on the second console - the TX bytes counters on eth1 and eth2 should grow:

dd if=/dev/zero of=/dev/mapper/open-e-test bs=1M count=1024

c) Now run the read test below and watch the data flow on the second console - the RX bytes counters on eth1 and eth2 should grow:

dd if=/dev/mapper/open-e-test of=/dev/null bs=1M count=1024
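
Note that both dd tests above may be served partly from the Linux page cache, which inflates the results. To measure the actual iSCSI paths, you can bypass the cache with direct I/O (assuming your dd supports the oflag/iflag options, as the coreutils dd on Ubuntu 12.04 does):

dd if=/dev/zero of=/dev/mapper/open-e-test bs=1M count=1024 oflag=direct
dd if=/dev/mapper/open-e-test of=/dev/null bs=1M count=1024 iflag=direct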

d) Now format and mount the volume:

mkfs.ext3 /dev/mapper/open-e-test
mkdir /mnt/open-e-test
mount /dev/mapper/open-e-test /mnt/open-e-test
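
If the volume should come up automatically at boot, you can also add it to /etc/fstab with the _netdev option, so mounting is delayed until the network and iSCSI are up (a sketch, assuming the alias and mount point from this article):

/dev/mapper/open-e-test /mnt/open-e-test ext3 _netdev 0 0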

e) While copying data to the mounted volume (for example with Midnight Commander), you will see the data flow in the watch command started in step a):

watch -n1 "echo eth1 && ifconfig eth1 | grep bytes && echo eth2 && ifconfig eth2 | grep bytes"

f) If you want path failures to be detected immediately (or after a specified time), edit the /etc/iscsi/iscsid.conf file and find the entry:

node.session.timeo.replacement_timeout 

which specifies the length of time to wait for session re-establishment before failing SCSI commands back to the application when running the Linux SCSI Layer error handler. The value is in seconds and the default is 120 seconds.

Set it to 0, or, if short network interruptions can occur in your environment, to a somewhat higher value such as 5.
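
After the change the entry should look like this (assuming you chose 5 seconds):

node.session.timeo.replacement_timeout = 5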

There are also some other entries that should be modified:

# Time interval to wait for on connection before sending a ping.
node.conn[0].timeo.noop_out_interval = 5

# To specify the time to wait for a Nop-out response before failing
# the connection, edit this line. Failing the connection will
# cause IO to be failed back to the SCSI layer. If using dm-multipath
# this will cause the IO to be failed to the multipath layer.
node.conn[0].timeo.noop_out_timeout = 5

# To specify the time to wait for abort response before
# failing the operation and trying a logical unit reset edit the line.
# The value is in seconds and the default is 15 seconds.
node.session.err_timeo.abort_timeout = 15

# To specify the time to wait for a logical unit response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.lu_reset_timeout = 20

It is strongly recommended to experiment with these values before going into production, as they determine how quickly traffic is switched to the remaining path(s) after a path failure.

After modifying the /etc/iscsi/iscsid.conf file, you should unmount the iSCSI volume (if mounted) and restart both open-iscsi and multipath-tools:

umount /mnt/open-e-test
/etc/init.d/open-iscsi restart
/etc/init.d/multipath-tools restart
mount /dev/mapper/open-e-test /mnt/open-e-test
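
Finally, if you want the iSCSI sessions to be re-established automatically at boot, you can switch the node records to automatic startup (shown here with the example target name from this article):

iscsiadm --mode node --targetname iqn.2012-07:dss.target0 --op update -n node.startup -v automatic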
