When DRX is installed locally on an operating machine, it requires sufficient free physical memory for its buffering. If the machine does not have enough free physical memory, consider adding physical memory to the production machine.

Also, if you use the compression feature to accelerate replication, be aware that compression places CPU load on the operating node. Compression can be used locally if the local I/O load is not heavy and the compression load on the operating node is insignificant. However, if the compression load affects performance across the local system, you should reconsider using compression. Compression can add roughly 20-30% to the local I/O load. If the DRX that performs compression is configured as a dedicated machine, separate from the operating environment, the compression load can be kept off the operating node. These DRX operational policies should be based on preliminary data about the local I/O load, determined by investigating the following items in the configuration environment in advance.

2.1. Preliminary investigation

2.1.1. Operating System

64-bit platforms are supported: Windows 2008 or later, CentOS 6.4 or later, and Ubuntu 12.04 LTS or later.

2.1.2. Operating machine minimum specification

  • At least a 1 GHz x86/x64 compatible processor (2 GHz or higher recommended), at least 4 cores recommended
  • At least 4 GB of physical memory
  • At least 10 GB of disk space

2.1.3. Replication resources

Replication resources can be configured to any number that available memory allows, up to a maximum of 100 replication channels.

2.1.4. Measure I/O load on the server

Use the following procedure to measure the I/O load on the server.

  • Measure the read/write I/O load of the server's replication target disk (average I/O and maximum I/O over a period of at least one to four weeks)
  • How to measure (a minimal collection sketch for the Linux case follows this list)
    • Windows: collect disk I/O statistics using the Performance Monitor tool
    • Linux: collect disk I/O statistics using utilities such as iostat
  • The buffer size, compression, and encryption policies are determined from the measurement results. See 2.3. Buffer sizing policy.
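
The following is a minimal measurement sketch for the Linux case, assuming the statistics are read from /proc/diskstats; the device name, sampling interval, and run length are placeholders to adjust for the actual environment.

measure_io.py (example)
import time

DEVICE = "sdb"      # replication target disk (placeholder)
INTERVAL = 1.0      # seconds between samples
SAMPLES = 3600      # one hour; extend the collection window for a real survey

def sectors_transferred(device):
    # /proc/diskstats fields: [2]=device name, [5]=sectors read, [9]=sectors written (512-byte sectors)
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[5]) + int(fields[9])
    raise ValueError("device not found: " + device)

prev = sectors_transferred(DEVICE)
total = peak = 0.0
for n in range(1, SAMPLES + 1):
    time.sleep(INTERVAL)
    cur = sectors_transferred(DEVICE)
    mb_per_s = (cur - prev) * 512 / (1024 * 1024) / INTERVAL
    prev = cur
    total += mb_per_s
    peak = max(peak, mb_per_s)
    print("sample %d: %.1f MB/s (average %.1f, maximum %.1f)" % (n, mb_per_s, total / n, peak))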

2.1.5. Replication Bandwidth

A replication bandwidth of at least 10 Mbps to 100 Mbps is required.

2.2. DRX configuration method

Determine how to configure the operating environment based on the I/O load and whether compression is enabled. The local configuration is common, but if the replication load is large and WAN-section acceleration is required, a dedicated configuration is recommended.

2.3. Buffer sizing policy

  • Specifying DRX's physical buffer requires a preliminary investigation of the network bandwidth and the operating machine's I/O load.
  • Items to investigate in advance
    • Average I/O per replication resource on the operating machine
    • Maximum I/O
    • Maximum I/O duration
  • The operating machine's average I/O and maximum I/O values are the basis for building an appropriate buffering environment.

  • Case 1: average I/O < maximum I/O < network bandwidth
    • Buffer: 1 GB or more recommended
    • Remarks: Ex) 1 Gbps bandwidth, 1 GB buffer = up to 100 MB/s of I/O can be sustained for about 10 seconds
  • Case 2: average I/O < network bandwidth < maximum I/O
    • Buffer: (maximum I/O - bandwidth) * maximum I/O duration
    • Remarks: Ex) average 50 MB/s I/O, 100 Mbps bandwidth, maximum I/O of 200 MB/s lasting 10 seconds: (200 MB/s - about 10 MB/s) * 10 seconds = about 2 GB
  • Case 3: network bandwidth < average I/O < maximum I/O
    • Buffer: consider network bandwidth expansion and compression
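
The three cases above can be summarized as a short calculation. Below is a minimal sketch, assuming the document's convention of treating a 100 Mbps link as roughly 10 MB/s of effective bandwidth; the function name and units are illustrative only.

buffer_sizing.py (example)
def recommended_buffer_gb(avg_io_mb_s, max_io_mb_s, bandwidth_mb_s, max_io_duration_s):
    # Case 3: network bandwidth < average I/O -- buffering alone cannot keep up;
    # consider bandwidth expansion and compression instead.
    if bandwidth_mb_s <= avg_io_mb_s:
        return None
    # Case 1: even the maximum I/O fits within the bandwidth -- 1 GB or more is the baseline.
    if max_io_mb_s <= bandwidth_mb_s:
        return 1.0
    # Case 2: the peak exceeds the bandwidth for a bounded time -- buffer the difference.
    return (max_io_mb_s - bandwidth_mb_s) * max_io_duration_s / 1000.0

# Example from case 2: 50 MB/s average, ~10 MB/s effective bandwidth (100 Mbps),
# 200 MB/s peak lasting 10 seconds -> about 1.9 GB (the table rounds this to about 2 GB).
print(recommended_buffer_gb(50, 200, 10, 10))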


The DRX buffer should be set to a size large enough to accommodate the I/O load values of the operating node obtained in the preliminary investigation. If I/O data from a preliminary investigation cannot be obtained, configure the buffer according to the recommendation for case 1, run a trial operation, and then tune the buffer size.

If the I/O load on the operating node is very large and the maximum I/O is sustained for a long time (several minutes to several tens of minutes), DRX buffering alone may not be able to handle it. In this case, consider data compression.

2.3.1. Congestion policy

The congestion state means that buffering is impossible because the replication load has increased and there is no free space left in the DRX buffer. In this state, DRX takes no special action and concentrates on transmitting the replicated data in the buffer to the remote side. The response to the congestion state is left to DRBD's congestion policy.

The congestion policy is the policy DRBD applies when the DRX buffer enters the congestion state. It is configured as follows.

drbd.conf
resource r0 {
	proxy {
		memlimit 1G; # DRX buffer
	}
	net {
		on-congestion pull-ahead; # Congestion policy setting (Ahead mode)
		congestion-fill 950M; # Congestion detection point (congestion is declared when 950 MB of data is buffered)
	}
}

DRBD's congestion policies are as follows. Ahead mode (pull-ahead) is recommended for asynchronous replication over a WAN.

  • block: I/O waits until the buffer has free space (until the data can be queued in the buffer). This is the default when no congestion policy is set.
  • disconnect: The replication connection is disconnected and the node enters the StandAlone state.
  • pull-ahead: The node enters delayed replication mode. The replication connection is maintained, but replication is suspended and local I/O is recorded as out-of-sync; when the congestion is released, the recorded out-of-sync blocks are resynchronized.

2.3.2. Buffer Size Tuning

  • This assumes a DRBD asynchronous replication and Ahead mode (delayed replication) configuration.
  • Use only I/O measurements from intervals during which the replication connection was maintained; measurements from intervals where replication was disconnected are not considered.
  • Collect the number of times DRBD enters Ahead mode (the number of congestion entries) by the following methods (a minimal counting sketch follows this list):
    • Count occurrences of "Congestion-fill threshold reached" in the drbd log
    • Check the number of Ahead entries with the drbdsetup events2 command
  • Resize the buffer based on the number of congestion entries collected. If congestion occurs frequently, increase the buffer size.
  • If the congestion intervals do not decrease even after the buffer has been enlarged, consider compression.
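
The following is a minimal counting sketch for the collection methods above. The log file path and the "replication:Ahead" field in the drbdsetup events2 output are assumptions about the logging setup and DRBD version in use; adjust them to the actual environment.

count_congestion.py (example)
import subprocess

LOG_FILE = "/var/log/kern.log"   # wherever DRBD kernel messages are written (placeholder)

def congestion_entries_from_log(path):
    # Count the "Congestion-fill threshold reached" messages noted above.
    with open(path, errors="replace") as f:
        return sum(1 for line in f if "Congestion-fill threshold reached" in line)

def peers_in_ahead(resource="r0"):
    # Snapshot check: list peer devices whose replication state is currently Ahead.
    # Assumes the DRBD 9 "drbdsetup events2 --now" output format.
    out = subprocess.run(["drbdsetup", "events2", "--now", resource],
                         capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if "replication:Ahead" in line]

print("congestion entries in log:", congestion_entries_from_log(LOG_FILE))
for line in peers_in_ahead():
    print(line)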


2.4. Physical Memory Specifications

DRX's physical memory requirement varies with the number of resources, the maximum I/O, and the network bandwidth. The following formula gives DRX's physical memory requirement when the maximum I/O is greater than the bandwidth.

  • resource count * (max I/O (MB/s) - bandwidth (MB/s)) * max I/O duration (sec) + compression/encryption buffer (2 GB)

If the maximum I/O is always lower than the bandwidth, the physical memory requirement is calculated with a per-resource buffer size of 1 GB.

  • resource count * 1 GB + compression/encryption buffer (2 GB)

WAN bandwidth is, in practice, variable. Because WAN bandwidth is generally not guaranteed and varies widely with network conditions, it is appropriate to estimate it conservatively at about 1 to 10 MB/s. Also, it is not recommended to disable DRX buffering just because the maximum I/O measured over a certain period is lower than the replication bandwidth. Depending on the characteristics of the applications in the operating environment, the maximum I/O may surge at an unspecified point in time, so it is preferable to keep free space in the DRX buffer for such situations.

The following is an example of determining the physical memory specification assuming a 100 Mbps WAN bandwidth, a maximum I/O of 100 MB/s, and a maximum I/O duration of 30 seconds. In this example, the memory required to operate one replication resource is 6.7 GB (16 GB recommended).

I/O load level: normal speed configuration (100 MB/s maximum I/O), system memory 2 GB, compression/encryption buffer max 2 GB

  • 1 resource: BAB size 1 * (100 MB - 10 MB) * 30 sec = 2.7 GB, memory requirement 6.7 GB, recommended 16 GB
  • 10 resources: BAB size 10 * (100 MB - 10 MB) * 30 sec = 27 GB, memory requirement 31 GB, recommended 32 GB
  • 50 resources: BAB size 50 * (100 MB - 10 MB) * 30 sec = 135 GB, memory requirement 139 GB, recommended 160 GB
  • 100 resources: BAB size 100 * (100 MB - 10 MB) * 30 sec = 270 GB, memory requirement 274 GB, recommended 320 GB
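
The rows above follow directly from the formula in 2.4 plus the 2 GB of system memory shown in the table. Below is a minimal sketch of that calculation; it uses the document's convention of taking the 100 Mbps WAN as about 10 MB/s effective bandwidth and decimal rounding of GB values, and the names are illustrative only.

memory_sizing.py (example)
COMPRESSION_ENCRYPTION_BUFFER_GB = 2   # fixed allowance from the formula in 2.4
SYSTEM_MEMORY_GB = 2                   # operating system allowance shown in the table

def required_memory_gb(resources, max_io_mb_s, bandwidth_mb_s, duration_s):
    if max_io_mb_s > bandwidth_mb_s:
        # resource count * (max I/O - bandwidth) * max I/O duration
        bab_gb = resources * (max_io_mb_s - bandwidth_mb_s) * duration_s / 1000.0
    else:
        # peak never exceeds the link: 1 GB of buffer per resource
        bab_gb = resources * 1.0
    return SYSTEM_MEMORY_GB + bab_gb + COMPRESSION_ENCRYPTION_BUFFER_GB

# First row of the table: 1 resource, 100 MB/s peak, ~10 MB/s effective WAN, 30 s peak
# -> 2 GB + 2.7 GB + 2 GB = 6.7 GB (16 GB recommended in practice).
print(required_memory_gb(1, 100, 10, 30))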

The following is an example of determining the physical memory specification assuming a 100 Mbps WAN bandwidth, a maximum I/O of 500 MB/s, and a maximum I/O duration of 30 seconds. In this example, the memory required to operate one replication resource is 18.7 GB (32 GB recommended).

I/O load level: high speed configuration (500 MB/s maximum I/O), system memory 2 GB, compression/encryption buffer 2 GB

  • 1 resource: BAB size 1 * (500 MB - 10 MB) * 30 sec = 14.7 GB, memory requirement 18.7 GB, recommended 32 GB
  • 10 resources: BAB size 10 * (500 MB - 10 MB) * 30 sec = 147 GB, memory requirement 151 GB, recommended 160 GB
  • 50 resources: BAB size 50 * (500 MB - 10 MB) * 30 sec = 735 GB, memory requirement 739 GB, recommended 800 GB
  • 100 resources: BAB size 100 * (500 MB - 10 MB) * 30 sec = 1.47 TB, memory requirement 1.51 TB, recommended 1.6 TB


