...

The new-peer command creates a connection within a resource. The resource must have been created with bsrsetup new-resource. The net-options command changes the network options of an existing connection. Before the connection can be activated with the connect command, at least one path must be added with the new-path command. The available options are:
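
As a rough illustration of how these commands fit together, the sketch below creates a resource and brings up one connection. It assumes bsrsetup follows a drbdsetup-style argument order (resource name, peer node id, addresses); the node ids, addresses, port, and exact argument forms here are placeholders, not verified bsr syntax.

    # hypothetical sketch -- argument order and option spelling are assumptions
    bsrsetup new-resource r0 0                             # create resource r0 on this node (node id 0)
    bsrsetup new-peer r0 1 --protocol=C                    # define the connection to peer node id 1
    bsrsetup new-path r0 1 10.1.1.10:7789 10.1.1.11:7789   # add a network path (local address, remote address)
    bsrsetup connect r0 1                                  # activate the connection
    bsrsetup net-options r0 1 --ping-int=10                # adjust network options of the existing connection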

  • --after-sb-0pri policy Define how to react if a split-brain scenario is detected and none of the two nodes is in primary role. (We detect split-brain scenarios when two nodes connect; split-brain decisions are always between two nodes.) The defined policies are:

    • disconnect No automatic resynchronization; simply disconnect.

    • discard-younger-primary, discard-older-primary Resynchronize from the node which became primary first (discard-younger-primary) or last (discard-older-primary). If both nodes became primary independently, the discard-least-changes policy is used.

    • discard-zero-changes If only one of the nodes wrote data since the split-brain situation was detected, resynchronize from this node to the other. If both nodes wrote data, disconnect.

    • discard-least-changes Resynchronize from the node with more modified blocks.

    • discard-node-nodename Always resynchronize to the named node.

  • --after-sb-1pri policy Define how to react if a split-brain scenario is detected, with one node in primary role and one node in secondary role. (We detect split-brain scenarios when two nodes connect, so split-brain decisions are always between two nodes.) The defined policies are:

    • disconnect No automatic resynchronization, simply disconnect.

    • consensus Discard the data on the secondary node if the after-sb-0pri algorithm would also discard the data on the secondary node. Otherwise, disconnect.

    • violently-as0p Always take the decision of the after-sb-0pri algorithm, even if it causes an erratic change of the primary's view of the data. This is only useful if a single-node file system (i.e., not OCFS2 or GFS) with the allow-two-primaries flag is used. This option can cause the primary node to crash, and should not be used.

    • discard-secondary Discard the data on the secondary node.

    • call-pri-lost-after-sb Always take the decision of the after-sb-0pri algorithm. If the decision is to discard the data on the primary node, call the pri-lost-after-sb handler on the primary node.

  • --after-sb-2pri policy Define how to react if a split-brain scenario is detected and both nodes are in primary role. (We detect split-brain scenarios when two nodes connect, so split-brain decisions are always between two nodes.) The defined policies are:

    • disconnect No automatic resynchronization, simply disconnect.

    • violently-as0p See the violently-as0p policy for after-sb-1pri.

    • call-pri-lost-after-sb Call the pri-lost-after-sb helper program on one of the machines unless that machine can demote to secondary. The helper program is expected to reboot the machine, which brings the node into a secondary role. Which machine runs the helper program is determined by the after-sb-0pri strategy.

    For a two-primary split brain, only manual recovery via disconnect is available.
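
    To illustrate the three after-sb options above, the sketch below shows how they might be set together in a resource's net section. The resource-file syntax is assumed to follow the DRBD-style format consumed by bsradm; treat it as a sketch rather than verified bsr syntax, and note that this particular combination of policies is only one common choice.

        resource r0 {
            net {
                after-sb-0pri discard-zero-changes;   # no primaries: resync from the node that wrote data; disconnect if both did
                after-sb-1pri discard-secondary;      # one primary: keep the primary's data, discard the secondary's
                after-sb-2pri disconnect;             # two primaries: give up and wait for manual recovery
            }
        }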

  • --always-asbp Normally the automatic after-split-brain policies are only used if current states of the UUIDs do not indicate the presence of a third node. With this option you request that the automatic after-split-brain policies are used as long as the data sets of the nodes are somehow related. This might cause a full sync, if the UUIDs indicate the presence of a third node. (Or double faults led to strange UUID sets.)

  • --connect-int time As soon as a connection between two nodes is configured with bsrsetup connect, bsr immediately tries to establish the connection. If this fails, bsr waits for connect-int seconds and then repeats. The default value of connect-int is 10 seconds.

  • --cram-hmac-alg hash-algorithm Configure the hash-based message authentication code (HMAC) or secure hash algorithm to use for peer authentication. The kernel supports a number of different algorithms, some of which may be loadable as kernel modules. See the shash algorithms listed in /proc/crypto. By default, cram-hmac-alg is unset. Peer authentication also requires a shared-secret to be configured.

  • --csums-alg hash-algorithm Normally, when two nodes resynchronize, the sync target requests a piece of out-of-sync data from the sync source, and the sync source sends the data. With many usage patterns, a significant number of those blocks will actually be identical. When a csums-alg algorithm is specified, when requesting a piece of out-of-sync data, the sync target also sends along a hash of the data it currently has. The sync source compares this hash with its own version of the data. It sends the sync target the new data if the hashes differ, and tells it that the data are the same otherwise. This reduces the network bandwidth required, at the cost of higher cpu utilization and possibly increased I/O on the sync target. The csums-alg can be set to one of the secure hash algorithms supported by the kernel; see the shash algorithms listed in /proc/crypto. By default, csums-alg is unset.

  • --csums-after-crash-only Enabling this option (and csums-alg, above) makes it possible to use the checksum based resync only for the first resync after a primary crash, but not for later "network hiccups". In most cases, blocks that are marked as need-to-be-resynced are in fact changed, so calculating checksums, and reading and writing the blocks on the resync target, is pure overhead. The advantage of checksum based resync is mostly after primary crash recovery, where the recovery marked larger areas (those covered by the activity log) as need-to-be-resynced, just in case. Introduced in 8.4.5.

  • --data-integrity-alg alg bsr normally relies on the data integrity checks built into the TCP/IP protocol, but if a data integrity algorithm is configured, it will additionally use this algorithm to make sure that the data received over the network match what the sender has sent. If a data integrity error is detected, bsr will close the network connection and reconnect, which will trigger a resync. The data-integrity-alg can be set to one of the secure hash algorithms supported by the kernel; see the shash algorithms listed in /proc/crypto. By default, this mechanism is turned off. Because of the CPU overhead involved, we recommend not to use this option in production environments. Also see the notes on data integrity below.
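
    Since cram-hmac-alg, csums-alg, and data-integrity-alg all have to name an algorithm the kernel actually provides, it can help to list the available shash algorithms first. The one-liner below does that on a typical Linux /proc/crypto layout; the net-section snippet that follows is a sketch in the assumed DRBD-style resource-file syntax, and the shared-secret value is obviously a placeholder.

        # list kernel shash algorithms (name/type lines as found in /proc/crypto)
        awk '/^name/ {n=$3} /^type.*shash/ {print n}' /proc/crypto | sort -u

        net {
            cram-hmac-alg   sha256;        # peer authentication; also requires shared-secret
            shared-secret   "replace-me";  # placeholder secret
            csums-alg       sha1;          # checksum-based resync
            # data-integrity-alg sha256;   # extra integrity check; CPU-heavy, not recommended in production
        }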

  • --fencing fencing_policy Fencing is a preventive measure to avoid situations where both nodes are primary and disconnected. This is also known as a split-brain situation. bsr supports the following fencing policies:

    • dont-care No fencing actions are taken. This is the default policy.

    • resource-only If a node becomes a disconnected primary, it tries to fence the peer. This is done by calling the fence-peer handler. The handler is supposed to reach the peer over an alternative communication path and call 'bsradm outdate minor' there.

    • resource-and-stonith If a node becomes a disconnected primary, it freezes all its IO operations and calls its fence-peer handler. The fence-peer handler is supposed to reach the peer over an alternative communication path and call 'bsradm outdate minor' there. In case it cannot do that, it should stonith the peer. IO is resumed as soon as the situation is resolved. In case the fence-peer handler fails, I/O can be resumed manually with 'bsradm resume-io'.
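
    To make the resource-only and resource-and-stonith policies concrete, here is a deliberately minimal sketch of a fence-peer handler that reaches the peer over an out-of-band path (ssh here) and outdates it, as described above. The real handler interface — how bsr passes the resource name and peer host, and which exit codes it expects — is not covered on this page, so the script path, the RES and PEER variables, and the ssh transport below are hypothetical placeholders, not the actual contract.

        #!/bin/sh
        # hypothetical fence-peer handler sketch -- not the real bsr handler interface
        RES="r0"              # resource to outdate (placeholder; a real handler would derive this)
        PEER="peer-host"      # peer reachable over an alternative communication path (placeholder)

        # ask the peer to mark its data as outdated, per the 'bsradm outdate' call described above
        ssh "$PEER" "bsradm outdate $RES"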

  • --ko-count number If a secondary node fails to complete a write request in ko-count times the timeout parameter, it is excluded from the cluster. The primary node then sets the connection to this secondary node to Standalone. To disable this feature, you should explicitly set it to 0; defaults may change between versions.

  • --max-buffers number Limits the memory usage per bsr minor device on the receiving side, or for internal buffers during resync or online-verify. Unit is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible setting is hard coded to 32 (=128 KiB). These buffers are used to hold data blocks while they are written to/read from disk. To avoid possible distributed deadlocks on congestion, this setting is used as a throttle threshold rather than a hard limit. Once more than max-buffers pages are in use, further allocation from this pool is throttled. You want to increase max-buffers if you cannot saturate the IO backend on the receiving side.

  • --max-epoch-size number Define the maximum number of write requests bsr may issue before issuing a write barrier. The default value is 2048, with a minimum of 1 and a maximum of 20000. Setting this parameter to a value below 10 is likely to decrease performance.

  • --on-congestion policy

  • --congestion-fill threshold

  • --congestion-extents threshold By default, bsr blocks when the TCP send queue is full. This prevents applications from generating further write requests until more buffer space becomes available again. When bsr is used together with bsr-proxy, it can be better to use the pull-ahead on-congestion policy, which can switch bsr into ahead/behind mode before the send queue is full. bsr then records the differences between itself and the peer in its bitmap, but it no longer replicates them to the peer. When enough buffer space becomes available again, the node resynchronizes with the peer and switches back to normal replication. This has the advantage of not blocking application I/O even when the queues fill up, and the disadvantage that peer nodes can fall behind much further. Also, while resynchronizing, peer nodes will become inconsistent. The available congestion policies are block (the default) and pull-ahead. The congestion-fill parameter defines how much data is allowed to be "in flight" in this connection. The default value is 0, which disables this mechanism of congestion control, with a maximum of 10 GiBytes. The congestion-extents parameter defines how many bitmap extents may be active before switching into ahead/behind mode, with the same default and limits as the al-extents parameter. The congestion-extents parameter is effective only when set to a value smaller than al-extents. Ahead/behind mode is available since bsr 8.3.10.
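
    The congestion options above might be combined as in the sketch below, which switches to ahead/behind mode once roughly 1 GiB of data is in flight. The net-section syntax and the unit suffix are again assumptions in the DRBD-style format, and the thresholds are illustrative only.

        net {
            on-congestion      pull-ahead;   # record differences in the bitmap instead of blocking I/O
            congestion-fill    1G;           # enter ahead/behind mode after ~1 GiB in flight (documented maximum 10 GiB)
            congestion-extents 500;          # or after this many active bitmap extents (must be smaller than al-extents)
        }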

  • --ping-int interval When the TCP/IP connection to a peer is idle for more than ping-int seconds, bsr will send a keep-alive packet to make sure that a failed peer or network connection is detected reasonably soon. The default value is 10 seconds, with a minimum of 1 and a maximum of 120 seconds. The unit is seconds.

  • --ping-timeout timeout Define the timeout for replies to keep-alive packets. If the peer does not reply within ping-timeout, bsr will close and try to reestablish the connection. The default value is 0.5 seconds, with a minimum of 0.1 seconds and a maximum of 3 seconds. The unit is tenths of a second.

  • --protocol name Use the specified protocol on this connection. The supported protocols are:

    • A Writes to the bsr device complete as soon as they have reached the local disk and the TCP/IP send buffer.

    • B Writes to the bsr device complete as soon as they have reached the local disk, and all peers have acknowledged the receipt of the write requests.

    • C Writes to the bsr device complete as soon as they have reached the local and all remote disks. 

  • --rcvbuf-size size Configure the size of the TCP/IP receive buffer. A value of 0 (the default) causes the buffer size to adjust dynamically. This parameter usually does not need to be set, but it can be set to a value up to 10 MiB. The default unit is bytes.

  • --sndbuf-size size Configure the size of the TCP/IP send buffer. Since bsr 8.0.13 / 8.2.7, a value of 0 (the default) causes the buffer size to adjust dynamically. Values below 32 KiB are harmful to the throughput on this connection. Large buffer sizes can be useful especially when protocol A is used over high-latency networks; the maximum value supported is 10 MiB.
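
    Tying the protocol and buffer options together: for asynchronous replication over a high-latency link, the text above suggests protocol A with a larger send buffer. A sketch, with the same caveat that the syntax and the 8M value are assumptions (the documented maximum is 10 MiB):

        net {
            protocol    A;    # writes complete once they reach the local disk and the TCP send buffer
            sndbuf-size 8M;   # fixed send buffer; 0 (the default) would auto-tune
            rcvbuf-size 8M;   # fixed receive buffer; 0 (the default) would auto-tune
        }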

  • --tcp-cork By default, bsr uses the TCP_CORK socket option to prevent the kernel from sending partial messages; this results in fewer and bigger packets on the network. Some network stacks can perform worse with this optimization. On these, the tcp-cork parameter can be used to turn this optimization off.

  • --timeout time Define the timeout for replies over the network: if a peer node does not send an expected reply within the specified timeout, it is considered dead and the TCP/IP connection is closed. The timeout value must be lower than connect-int and lower than ping-int. The default is 6 seconds; the value is specified in tenths of a second.
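
    Because timeout must stay below both connect-int and ping-int, and because timeout and ping-timeout are given in tenths of a second while connect-int and ping-int are in seconds, a consistent set of values looks roughly like this (sketch, assumed resource-file syntax; the numbers are the defaults quoted above):

        net {
            timeout      60;   # 6.0 s, in tenths of a second -- must be lower than connect-int and ping-int
            ping-timeout  5;   # 0.5 s, in tenths of a second
            ping-int     10;   # seconds
            connect-int  10;   # seconds
        }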

  • --use-rle Each replicated device on a cluster node has a separate bitmap for each of its peer devices. The bitmaps are used for tracking the differences between the local and peer device: depending on the cluster state, a disk range can be marked as different from the peer in the device's bitmap, in the peer device's bitmap, or in both bitmaps. When two cluster nodes connect, they exchange each other's bitmaps, and they each compute the union of the local and peer bitmap to determine the overall differences. Bitmaps of very large devices are also relatively large, but they usually compress very well using run-length encoding. This can save time and bandwidth for the bitmap transfers. The use-rle parameter determines if run-length encoding should be used. It is on by default since bsr 8.4.0.

  • --verify-alg hash-algorithm Online verification (bsradm verify) computes and compares checksums of disk blocks (i.e., hash values) in order to detect if they differ. The verify-alg parameter determines which algorithm to use for these checksums. It must be set to one of the secure hash algorithms supported by the kernel before online verify can be used; see the shash algorithms listed in /proc/crypto. We recommend to schedule online verifications regularly during low-load periods, for example once a month. Also see the notes on data integrity below.
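
    Online verification needs verify-alg set before it can run. The sketch below sets it and then triggers a verification run with the bsradm verify command mentioned above; the exact bsradm argument form and the cron line are assumptions, shown only to illustrate the advice to schedule verification during low-load periods.

        net {
            verify-alg sha1;   # must be a kernel-supported shash algorithm (see /proc/crypto)
        }

        # run an online verify of resource r0 (argument form assumed)
        bsradm verify r0

        # example /etc/crontab entry: verify monthly at 03:00 on the 1st (placeholder schedule)
        0 3 1 * * root bsradm verify r0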

...