Open-E Knowledgebase

Tuning recommendations for volume replication over 1GbE

Article ID: 1603
Last updated: 28 Mar, 2013

1) Jumbo frames generally do not significantly improve performance with a 1Gb NIC, and it is recommended to use the default MTU of 1500. However, if your particular application requires jumbo frames, use an MTU of 9000 on both nodes as well as on the initiator system (a brief example for a Linux initiator follows the links below). Please note that any switches in between must support MTU 9000 as well.
http://kb.open-e.com/Does-Open-E-support-Jumbo-Frames_28.html
kb.open-e.com/Jumbo-Frames_121.html
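
As a reference, on a generic Linux initiator host jumbo frames could be enabled and verified roughly as follows (the interface name eth0 and the peer address are placeholders; on DSS V6 itself the MTU is set through the product's own configuration interface):

# Set the MTU on the initiator NIC (eth0 is an example name)
ip link set dev eth0 mtu 9000

# Verify that 9000-byte frames pass end-to-end without fragmentation:
# 8972 = 9000 minus 28 bytes of IP and ICMP headers
ping -M do -s 8972 <peer-IP>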


2) DRBD tuning - please try the following values, set from the DSS V6 console (CTRL+ALT+W, DRBD Tuning):

max-buffers 8000
max-epoch-size 8000
unplug-watermark 8000
no-disk-flushes
no-disk-barrier

Note that these values are hardware-dependent, so it is best to experiment with different values. A good guide can be found here:
http://www.drbd.org/users-guide/s-throughput-tuning.html
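
For reference, here is a minimal sketch of how the same settings would look in a hand-edited drbd.conf, using the older no-* option spellings that match the console values above (the resource name r0 is a placeholder):

resource r0 {
  net {
    max-buffers 8000;
    max-epoch-size 8000;
    unplug-watermark 8000;
  }
  disk {
    no-disk-flushes;   # only safe with a battery-backed controller cache
    no-disk-barrier;
  }
  ...
}

The throughput-tuning section of the DRBD User's Guide is reproduced below.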

=================

15.3. Tuning recommendations

DRBD offers a number of configuration options which may have an effect on your system’s throughput. This section lists some recommendations for throughput tuning. However, since throughput is largely hardware dependent, the effects of tweaking the options described here may vary greatly from system to system. It is important to understand that these recommendations should not be interpreted as "silver bullets" which would magically remove any and all throughput bottlenecks.

15.3.1. Setting max-buffers and max-epoch-size

These options affect write performance on the secondary node. max-buffers is the maximum number of buffers DRBD allocates for writing data to disk, while max-epoch-size is the maximum number of write requests permitted between two write barriers. Under most circumstances, these two options should be set in parallel, and to identical values. The default for both is 2048; setting them to around 8000 should be fine for most reasonably high-performance hardware RAID controllers.

resource <resource> {
  net {
    max-buffers 8000;
    max-epoch-size 8000;
    ...
  }
  ...
}

15.3.2. Tweaking the I/O unplug watermark

The I/O unplug watermark is a tunable which affects how often the I/O subsystem’s controller is "kicked" (forced to process pending I/O requests) during normal operation. There is no universally recommended setting for this option; this is greatly hardware dependent.

Some controllers perform best when "kicked" frequently, so for these controllers it makes sense to set this fairly low, perhaps even as low as DRBD’s allowable minimum (16). Others perform best when left alone; for these controllers a setting as high as max-buffers is advisable.

resource <resource> {
  net {
    unplug-watermark 16;
    ...
  }
  ...
}

15.3.3. Tuning the TCP send buffer size

The TCP send buffer is a memory buffer for outgoing TCP traffic. By default, it is set to a size of 128 KiB. For use in high-throughput networks (such as dedicated Gigabit Ethernet or load-balanced bonded connections), it may make sense to increase this to a size of 512 KiB, or perhaps even more. Send buffer sizes of more than 2 MiB are generally not recommended (and are also unlikely to produce any throughput improvement).

resource <resource> {
  net {
    sndbuf-size 512k;
    ...
  }
  ...
}

DRBD also supports TCP send buffer auto-tuning. After enabling this feature, DRBD will dynamically select an appropriate TCP send buffer size. TCP send buffer auto-tuning is enabled simply by setting the buffer size to zero:

resource <resource> {
  net {
    sndbuf-size 0;
    ...
  }
  ...
}

15.3.4. Tuning the Activity Log size

If the application using DRBD is write intensive in the sense that it frequently issues small writes scattered across the device, it is usually advisable to use a fairly large activity log. Otherwise, frequent metadata updates may be detrimental to write performance.

resource <resource> {
  disk {
    al-extents 3389;
    ...
  }
  ...
}
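
For a sense of scale: in DRBD, each activity log extent covers 4 MiB of the backing device, so the example value of 3389 extents keeps roughly 3389 × 4 MiB ≈ 13 GiB of the device in the active set, at the cost of a longer resynchronization after a primary node crash.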

15.3.5. Disabling barriers and disk flushes

Warning

The recommendations outlined in this section should be applied only to systems with non-volatile (battery-backed) controller caches.

Systems equipped with a battery-backed write cache come with built-in means of protecting data in the face of power failure. In that case, it is permissible to disable some of DRBD’s own safeguards created for the same purpose. This may be beneficial in terms of throughput:

resource <resource> {
  disk {
    disk-barrier no;
    disk-flushes no;
    ...
  }
  ...
}
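
Note on syntax: the disk-barrier no; / disk-flushes no; form shown here is DRBD 8.4-style configuration. Older DRBD 8.3 releases express the same settings as the bare keywords no-disk-barrier; and no-disk-flushes; in the disk section, which matches the spellings used by the DSS V6 console values listed earlier in this article.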
