
The Basics

bsr synchronizes and replicates the volumes of hosts in a cluster in real time over the network.

Based on Windows DRBD (wdrbd), which was forked from drbd (http://www.drbd.org), bsr is an open source project built as a cross-platform common engine to support both the Windows and Linux platforms. It inherits all the basic concepts and functions of Windows DRBD, and provides solutions for building a more stable and efficient replication environment by supplementing the problems and missing functions of DRBD9.

bsr is open to contribution and participation in development through the open source community. (For technical support or inquiries regarding bsr, please use the bsr GitHub issues page or contact Mantech at dev3@mantech.co.kr.)

Info

bsr is distributed under the GPL v2 license. 

Basic

...

Synchronization and Replication

To replicate, volume data on both hosts must first match. To achieve this, bsr performs a process of copying data from the source to the target using disk blocks as a unit, which is called synchronization.

Once synchronization is complete, both volumes will be in a completely identical state, and if data changes occur on the source side, only the changes will be reflected to the target side to maintain the consistency of both volumes.

Here, when data on the source side changes, the operation of reflecting the change in real time to the target side is called replication. Synchronization operates slowly in the background, while replication occurs quickly in the context of local I/O.

Replication works in the following way:

  • The application writes data to the block device, and bsr replicates it in real time as it is written.

  • Real-time replication does not affect other application services or system elements.

  • Replication is performed synchronously or asynchronously.

    • The synchronous protocol treats replication as complete when the replication data has been written to both the local disk and the target host's disk.

    • The asynchronous protocol treats replication as complete when the replication data has been written to the local disk and handed to the socket's TX buffer.
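The choice between these two completion semantics is made per resource. bsr inherits drbd-style configuration syntax, so selecting the protocol should look roughly like the following sketch (the resource name is illustrative; consult the configuration reference for the exact form):

```
resource r0 {
    # "A" = asynchronous  (complete at local disk + socket TX buffer)
    # "C" = synchronous   (complete at local disk + target host's disk)
    protocol C;
}
```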

Kernel Driver

The core engine of bsr was developed as a kernel driver.

The kernel driver is located at the disk volume layer and controls write I/O from the file system in block units. Because it performs replication below the file system layer, it provides a transparent replication environment that is independent of the file system and the application, which makes it well suited for high availability. However, since it sits below the file system, it cannot control general file-level operations. For example, it cannot detect file system corruption or operate on file data itself; it only replicates blocks as they are written to disk.

...

    • transmitted to the target host.

Synchronization and replication operate separately within bsr, but can occur at the same point in time. In other words, replication can be processed while synchronization is in progress (the operating node processes synchronization and simultaneously replicates write I/O that occurs during operation), so the throughput of each must be appropriately balanced within the maximum network bandwidth. For information on setting the sync band, see https://mantech.jira.com/wiki/spaces/BUGE/pages/1419935915/Working#Adjusting-the-synchronization-speed.


Management tools

bsr provides management tools for organizing and managing resources. It consists of bsradm, bsrsetup, bsrmeta, and bsrcon described below. Administrator-level privileges are required to use administrative commands.

bsradm

  • It is a utility that provides high-level commands abstracting detailed functions of bsr. bsradm allows you to control most of the behavior of bsr.

  • bsradm gets all configuration parameters from the configuration file etc/bsr.conf, and serves to pass commands by giving appropriate options to bsrsetup and bsrmeta. That is, the actual operation is performed in bsrsetup, bsrmeta.

  • bsradm can be run in dry-run mode via the -d option. This provides a way to see in advance which bsrsetup and bsrmeta commands bsradm would run, and with which combination of options, without actually invoking them.

  • For more information about the bsradm command options, refer to bsradm in Appendix B. System Manual.

bsrsetup

  • bsrsetup can set option values for the bsr kernel engine. All parameters to bsrsetup must be passed as text arguments.

  • The separation of bsradm and bsrsetup provides a flexible command system.

    • bsradm translates the parameters it accepts into the more complex parameter sets required by bsrsetup, and then invokes bsrsetup.

    • bsradm prevents user errors by checking resource configuration files for syntax errors, among other things; bsrsetup performs no such checks.

  • In most cases you do not need to use bsrsetup directly; it is used for individual functions or for fine-grained control between nodes.

  • For details on the bsrsetup command options, refer to bsrsetup in Appendix B. System Manual.

bsrmeta

  • Creates metadata files for the replication configuration, and provides dump, restore, and modification capabilities for the metadata. As with bsrsetup, in most cases you do not need to use bsrmeta directly; the metadata is controlled through the commands provided by bsradm.

  • For details on the bsrmeta command options, refer to bsrmeta in Appendix B. System Manual.

bsrcon

  • Check bsr related information or adjust other necessary settings.

  • For more information about the bsrcon command options, refer to bsrcon in Appendix B. System Manual.

Resource

Resources are the abstraction of everything needed to construct a replicated data set. Users configure a resource and then control it to operate a replication environment.

In order to configure resources, the following basic options (resource name, volume, network connection) must be specified.

Resource Name

  • Name it in US-ASCII format without spaces.

Volume

  • A resource is a replication group consisting of one or more volumes that share a common replication stream, ensuring write consistency of all volumes in the resource.

  • The volume is described as a single device and is designated as a drive letter in Windows.

  • One replication set requires one volume for data replication and a separate volume for storing metadata associated with the volume. Meta volumes are used to store and manage internal information for replication.

    • Metadata is divided into External Meta Type and Internal Meta Type according to the storage location. For example, if meta data is located on the disk of the replication volume, it is the internal meta, and if it is located on another device or another disk, it is the external meta.

    • In terms of performance, the external meta type has an advantage over the internal meta type, because bsr can perform replication I/O and metadata writes simultaneously during operation. Since the I/O performance of the meta disk directly affects replication performance, it is recommended to use as fast a disk as possible.

    • Note that the volume for the meta should be configured as RAW, without formatting with a file system (such as NTFS).

Network Connection

  • Connection is a communication link for a replication data set between two hosts.

  • Each resource can be defined across multiple hosts, including full-mesh connections between them.

  • The connection name is automatically assigned as the resource name at the bsradm level, unless otherwise specified.
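The three elements above come together in a single resource definition. Since bsr inherits drbd-style configuration syntax, a minimal two-node resource should look roughly like the following sketch (host names, drive letters, and addresses are hypothetical; check the configuration reference for the exact keywords):

```
resource r0 {
    on nodeA {
        disk        e;              # replicated data volume (drive letter on Windows)
        meta-disk   f;              # separate RAW volume -> external meta type
        address     10.0.0.1:7789;  # replication network endpoint
    }
    on nodeB {
        disk        e;
        meta-disk   f;
        address     10.0.0.2:7789;
    }
}
```

With the internal meta type, the metadata would instead live on the replicated volume's own disk rather than on a separate meta volume.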

Resource Role

Resources have the role of a Primary or Secondary.

  • Primary can read and write without limitation on resources.

  • Secondary receives and records any changes made to the disk from the other node and does not allow access to the volume. Therefore, the application cannot read or write to the secondary volume.

The role of a resource can be changed through the bsr utility command. When changing the resource role from Secondary to Primary, it is called Promotion, and the opposite is Demotion.

Main Features

Replication Cluster

bsr defines a set of nodes for replication as a replication cluster, and by default supports single primary mode, in which only one node among the replication cluster members can act as the primary for a resource. Dual or multiple primary modes are not supported. Single primary mode, the Active-Passive model, is the standard approach for handling data storage media in a highly available cluster for failover.

Replication Protocol

bsr supports three replication methods.

Protocol A. Asynchronous

The asynchronous method considers replication complete when the primary node finishes writing to its local disk and places the replication data in the TCP send buffer. Therefore, on fail-over, data still in the buffer may not have completely reached the standby node. The data on the standby node is consistent after the transfer, but some of the most recent updates that occurred during the transfer may be lost. This method has good local I/O responsiveness and is suitable for WAN remote replication environments.

Protocol B. Semi Synchronous

The semi-synchronous method considers replication complete when the local disk write has occurred on the primary node and the replication packet has been received by the peer node.

Normally, data loss does not occur during a fail-over, but if both nodes are powered off simultaneously or irreparable damage occurs in the primary storage, the most recently recorded data in the primary may be lost.

Protocol C. Synchronous

The synchronous method is considered complete when the primary node has completed writing to both the local and remote disks. This ensures that no data is lost in the event of loss on either node.

Of course, loss of data is inevitable if both nodes (or a node's storage subsystem) suffer irreparable damage at the same time.

In general, Protocol C is the most commonly used method with bsr.

The replication method should be determined by data consistency, local I / O latency performance, and throughput.

Info

Synchronous replication completely guarantees the consistency of the active and standby nodes, but because each local write I/O completes only after the write to the standby node has completed, there is a performance penalty in local I/O latency. Depending on the I/O depth, latency can increase by several times to tens of times or more; in terms of throughput, it averages about 70 MB/s on a 1 Gbps network.

For an example of configuring the replication mode, refer to create resources.

Replication Transport Protocol

bsr's replication transport network supports the TCP/IP transport protocol.

TCP(IPv4/v6)

It is the basic transport protocol of bsr and is a standard protocol that can be used on all systems that support IPv4/v6.

Efficient synchronization

In bsr, replication and (re)synchronization are separate concepts. Replication is the process of reflecting, in real time, all disk writes of a resource in the primary role to a secondary node; synchronization is the process of copying the block data of the entire block device, excluding real-time write I/O. Replication and synchronization work independently, but they can be processed simultaneously.

If the connection between the primary and secondary is maintained, replication continues. However, if the replication connection is interrupted due to a failure of the primary or secondary node, or the replication network is disconnected, synchronization between the primary and secondary is required.

When synchronizing, bsr does not synchronize blocks in the order in which the original I/O was written to disk. Synchronization sequentially synchronizes only the areas that are not synchronized from sector 0 to the last sector based on the information in the metadata and efficiently processes as follows.

  • Synchronization is performed block by block according to the block layout of the disk, so disk seeks are rarely performed.

  • It is efficient because it synchronizes only once for blocks in which multiple writes have been made in succession.

During synchronization, part of the standby node's dataset is from the past and part is updated to the latest. This data state is called Inconsistent, and the state in which all blocks are synchronized with the latest data is called UpToDate. A node in the Inconsistent state generally cannot use the volume, so it is desirable to keep this state as short as possible.

Of course, even while synchronization is performed in the background, the application service on the Active node can continue to operate with little or no interruption.

Fixed-rate synchronization

In fixed-rate synchronization, the amount of data synchronized to the peer node per second (the synchronization rate) is fixed at the resync-rate value.

Variable-rate synchronization

In variable-rate synchronization, bsr detects the available network bandwidth, compares it with the I/O received from the application, and automatically calculates an appropriate synchronization rate between the minimum (c-min-rate) and maximum (c-max-rate) values. bsr defaults to variable-rate synchronization.
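As a sketch, the synchronization rate parameters belong to the resource's disk options in the drbd-style syntax bsr inherits (values are illustrative; exact section placement and defaults may vary by version):

```
resource r0 {
    disk {
        resync-rate  100M;  # rate used for fixed-rate sync (and as the
                            # starting value for variable-rate sync)
        c-min-rate   10M;   # lower bound for variable-rate sync
        c-max-rate   500M;  # upper bound for variable-rate sync
    }
}
```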

Checksum-based synchronization

Checksum-based synchronization can further improve the efficiency of the synchronization algorithm. It reads a block before synchronizing, obtains a hash digest of what is currently on disk, and compares it with a digest obtained by reading the same sector on the peer node. If the hash values match, the re-write of that block is skipped. This can be faster than simply overwriting every block that needs synchronization: if the file system rewrote the same data to a sector while disconnected, resynchronization of that sector is skipped, shortening the overall synchronization time.
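Checksum-based synchronization needs a digest algorithm to be configured. Assuming bsr keeps the drbd-style option name for this (an assumption; verify against the configuration reference), the setting would look roughly like:

```
resource r0 {
    net {
        # digest used to compare source and target blocks during resync;
        # algorithm availability depends on the platform/build
        csums-alg md5;
    }
}
```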

Congestion Mode

bsr provides a congestion mode feature that detects congestion on the replication network during asynchronous replication and responds proactively. Congestion mode provides three operation modes: Blocking, Disconnect, and Ahead.

  • If nothing is set, Blocking mode is the default. In Blocking mode, bsr waits until there is enough space in the TX send buffer to transmit the replication data.

  • The disconnect mode can be set to temporarily relieve the local I/O load by disconnecting the replication connection.

  • Ahead mode responds to congestion by writing the primary node's I/O to the local disk first, while maintaining the replication connection, and recording the affected areas as out-of-sync; when the congestion is released, it resynchronizes them automatically. A primary node in the Ahead state is in the Ahead data state relative to the secondary node; at that point the secondary is in the Behind data state, but the data on the standby node remains consistent and usable. When the congestion is released, replication to the secondary automatically resumes, and background synchronization is performed for the out-of-sync blocks that were not replicated while in the Ahead state. Congestion mode is generally useful on network links with variable bandwidth, such as wide-area replication environments over shared links between data centers or cloud instances.
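Assuming drbd-style option names carry over (an assumption; thresholds are illustrative), a congestion policy for an asynchronous resource might be sketched as:

```
resource r0 {
    protocol A;                      # congestion mode applies to async replication
    net {
        on-congestion   pull-ahead;  # enter Ahead mode on congestion
                                     # (alternatives: block, disconnect)
        congestion-fill 100M;        # send-buffer fill level that triggers it
    }
}
```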

Online Verification

Online Verification is a function that checks the integrity of block-specific data between nodes during device operation. Integrity checks efficiently use network bandwidth and do not duplicate checks.

Online Verification computes a cryptographic digest of every data block of a specific resource's storage on one node (the verification source), and transmits the digest to the verification target, where it is compared with the contents of the same block location. If the digests do not match, the block is marked out-of-sync and is synchronized later. Network bandwidth is used effectively because only the small digests, not the entire blocks, are transmitted.

Since online verification checks integrity while the resource is in service, replication performance may degrade slightly when verification and replication run at the same time. However, it has the advantage that the service does not need to be stopped, and there is no system downtime during the scan or the synchronization that follows it. And since bsr applies FastSync as its basic logic, it is more efficient, performing the online scan only on the disk areas used by the file system.

It is common practice to register online verification as a scheduled task at the OS level and run it periodically during times of low operational I/O load. For more information on how to configure online integrity checking, see Using on-line device verification.
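Online verification also needs a digest algorithm configured for the resource. Assuming the drbd-style option name is retained (an assumption; the algorithm name is illustrative):

```
resource r0 {
    net {
        # digest used to compare block contents during online verification
        verify-alg crc32c;
    }
}
```

A verification pass would then be started from the management tool against the configured resource, typically by the scheduled task described above.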

Replication traffic integrity checking

bsr can use cryptographic message digest algorithms to verify the integrity of replication traffic between nodes in real time.

When this feature is used, the primary generates a message digest of every data block and passes it to the secondary node, which uses it to verify the integrity of the replication traffic. If a digest does not match, the secondary requests retransmission of the block. Through this integrity checking of replication traffic, bsr protects the source data against the following error conditions; without a countermeasure in place, these situations can cause potential data corruption during replication.

  • Bit errors (bit flips) occurring in data transferred between the main memory and the network interface of the transmitting node (if the TCP checksum offload provided by recent LAN cards is enabled, such hardware bit flips may not be detected in software).

  • Bit errors that occur on data transferred from the network interface to the receiving node's main memory (the same consideration applies to TCP checksum offloading).

  • Bugs or race conditions in the network interface firmware and drivers.

  • Bit flips or random corruption injected by linked network components between nodes (if direct connections or back-to-back connections are not used).
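To guard against these cases, a digest algorithm is configured for the replication traffic itself. Assuming the drbd-style option name is retained (an assumption; verify against the configuration reference):

```
resource r0 {
    net {
        # per-packet digest for replication traffic; a mismatch
        # triggers retransmission of the affected block
        data-integrity-alg md5;
    }
}
```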

Split-Brain notification and recovery

Split brain refers to a situation in which, through manual intervention by cluster management software or an administrator during a temporary failure in which all network links between cluster nodes are down, two or more nodes come to have the Primary role. It implies that modifications were made to the data on each node without being replicated to the other side, which can lead to potential problems: two diverged data sets are created that may not be mergeable.

bsr provides the functions to automatically detect and recover split brains. For more information on this, see the split brain topic in Troubleshooting.

Disk I/O Error Policy

When a disk device fails, bsr handles the I/O error according to a preconfigured disk failure policy: it either simply passes the error up to the upper layer (usually the file system) for handling, or detaches the replicated disk and stops replication. The former is the passthrough policy, the latter is the detach policy.

Passthrough

When an error occurs in the lower disk layer, the error is passed up to the upper (file system) layer without any special handling; appropriate handling is left to the upper layer. For example, the file system may see the error and retry the disk write, or attempt to remount in read-only mode. By passing errors up in this way, the file system itself becomes aware of them and is given the opportunity to deal with them on its own.

Detach

If the error policy is configured as detach, bsr automatically detaches the disk when an error occurs in the lower layer. Once the disk is detached, the resource enters the diskless state and I/O to the disk is blocked; the disk failure must then be recognized and followed up on. In bsr, the diskless state is defined as the state in which I/O to the disk is blocked. How to configure this in the configuration file is described in the I/O error handling policy settings.
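As a sketch, the disk error policy is selected in the resource's disk options (keywords follow the drbd-style syntax bsr inherits; verify against the configuration reference):

```
resource r0 {
    disk {
        # "passthrough": hand the I/O error up to the file system layer
        # "detach":      drop the failing disk into the diskless state
        on-io-error passthrough;
    }
}
```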

Outdated data policy

bsr distinguishes between Inconsistent data and Outdated data. Inconsistent data is data that cannot be accessed or used in any meaningful way; the typical example is the data on the target side while synchronization is in progress. Target data under synchronization is partly up to date and partly from an earlier point in time, so it cannot be regarded as data from any single point in time. In this state, the file system on the device may be unmountable, and may not even pass an automatic file system check.

The Outdated disk state describes data whose consistency is guaranteed, but which has not been synchronized with the Primary node's latest data (or is presumed not to have been). This occurs when the replication link is interrupted, whether temporarily or permanently. Since disconnected Outdated data is ultimately data from a past point in time, bsr by default does not allow promoting a resource on a node with Outdated data, in order to prevent services from running on such data. However, if necessary (in an emergency), Outdated data can be forcibly promoted. For this purpose, bsr provides an interface that allows the application side to immediately put the Secondary node into the Outdated state as soon as the network is disconnected. When the replication link of an Outdated resource is reconnected, the Outdated flag is automatically cleared, and background synchronization completes and brings the data up to date (UpToDate).

A Secondary node whose Primary has crashed, or whose connection has been lost, may have an Outdated disk state.

Truck-based synchronization

Truck-based synchronization, in which the disk is physically transported and configured directly, is suitable for the following situations:

  • The amount of data to synchronize initially is very large (tens of terabytes or more)

  • The rate of change of the data to be replicated is expected to be small relative to the huge data size

  • The available network bandwidth between sites is limited

In the above cases, performing bsr's normal initial synchronization instead of physically transporting the disk would take a very long time. If the disk is large and can be initialized by direct physical copy, this method is recommended. See Using truck-based synchronization.

[Diagram: bsr_basic_architecture.drawio]



Partial synchronization

Once a full synchronization has been performed, subsequent synchronizations always operate as partial synchronizations: only the out-of-sync (OOS) areas are synchronized, which is efficient.

Fast synchronization (FastSync)

bsr implements FastSync, which synchronizes only the parts of the volume that are in use by the file system. Without FastSync, the entire volume would have to be synchronized, which can take a long time if the volume is large. FastSync is a powerful feature of bsr that can significantly reduce synchronization time.


Specify synchronization bandwidth

If you reserve part of the replication network bandwidth for synchronization, the remainder is used for replication; when no synchronization is in progress, the full bandwidth is available for replication. The synchronization bandwidth can be bounded by a minimum value (c-min-rate) and a maximum value (c-max-rate).

Fixed-rate synchronization

In fixed-rate synchronization, the rate at which data is synchronized to the peer node is fixed at the resync-rate value.

Variable-rate synchronization

Variable-rate synchronization detects the available network bandwidth, weighs it against the I/O being generated by the application, and automatically adjusts the synchronization rate between the minimum (c-min-rate) and maximum (c-max-rate) values. In this mode, resync-rate only serves as the initial synchronization rate. bsr uses variable-rate synchronization by default.
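As a sketch, these rates are usually set as disk options in DRBD-style configuration, which bsr inherits; the option names (`resync-rate`, `c-min-rate`, `c-max-rate`) come from DRBD, and the resource name `r0` and the values are illustrative only:

```
resource r0 {
    disk {
        resync-rate  100M;   # initial rate, before the variable-rate
                             # controller takes over
        c-min-rate   10M;    # lower bound for variable-rate sync
        c-max-rate   500M;   # upper bound for variable-rate sync
    }
}
```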

Congestion mode

BSR provides a congestion mode feature that allows asynchronous replication to detect and proactively deal with congestion on the replication network. Congestion Mode provides three modes of operation: Blocking, Disconnect, and Ahead.

  • If nothing is configured, Blocking mode is the default. In Blocking mode, replication waits until space frees up in the TX (send) buffer before transmitting replication data.

  • Disconnect mode temporarily relieves the local I/O load by dropping the replication connection.

  • Ahead mode keeps the replication connection but writes the primary node's I/O to the local disk first, recording those areas as out-of-sync, and resynchronizes them automatically once the congestion clears. While in the Ahead state, the primary node's data is Ahead of the secondary's and the secondary's data is Behind, but the standby node's data remains consistent and usable. When the congestion is relieved, replication to the Secondary resumes automatically, and background synchronization catches up on the out-of-sync blocks that could not be replicated while Ahead. Congestion mode is typically useful on network links with variable bandwidth, such as wide-area replication between data centers or cloud instances over shared connections.
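A hedged configuration sketch of congestion mode, using the DRBD-style net options that bsr inherits (`on-congestion` with the policies block / disconnect / pull-ahead, and `congestion-fill` as the threshold); the exact option spellings in bsr may differ, and the resource name and values are illustrative:

```
resource r0 {
    net {
        on-congestion    pull-ahead;  # Ahead mode; "block" (the default)
                                      # and "disconnect" are alternatives
        congestion-fill  100M;        # amount of in-flight data that
                                      # counts as congestion
    }
}
```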

Online data integrity checks

Online integrity verification is a feature that verifies the integrity of block-by-block data between nodes during device operation. Integrity checks make efficient use of network bandwidth and avoid redundant checks.

During online integrity verification, one node (the verification source) sequentially computes a cryptographic digest of every data block on a given resource's storage and sends the digests to the peer node (the verification target), which compares them against digests of the same block locations. If a digest does not match, the block is marked out-of-sync and will be synchronized later. Because only a compact digest is sent rather than the entire block contents, network bandwidth is used efficiently.

Because the integrity check runs online, replication performance may degrade slightly while a check and replication run at the same time. However, the check requires no service interruption, and no system downtime during the check or the post-check synchronization. And because bsr uses FastSync as its underlying logic, verification is performed only on the disk areas actually in use by the filesystem, which makes it more efficient.

A common usage for online integrity checks is to register them as scheduled tasks at the OS level and perform them periodically during times of low operational I/O load. For more information on how to configure online integrity checks, see Using on-line device verification.
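For illustration, online verification in DRBD-derived systems is usually enabled by naming a digest algorithm and then invoking a verify command against the resource; the net option `verify-alg` and the `bsradm verify` invocation below follow DRBD conventions, and the resource name `r0` is hypothetical:

```
resource r0 {
    net {
        verify-alg crc32c;   # digest used to compare block contents
                             # between the two nodes
    }
}
```

A scheduled task could then run, for example, `bsradm verify r0` during off-peak hours; blocks whose digests differ are marked out-of-sync for a later background resync.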

Replication traffic integrity checks

BSR can perform real-time integrity verification of the replication traffic between two nodes using a cryptographic message digest algorithm.

When this feature is enabled, the primary generates a message digest of every data block and sends it along to the secondary node, which uses it to verify the integrity of the replication traffic. If a block does not match its digest, the secondary requests a retransmission. BSR uses these replication traffic integrity checks to protect the source data against the following error situations, which, if not addressed proactively, can lead to undetected data corruption during replication.

  • Bit errors (bit flips) in data in transit between main memory and the network interface of the sending node (these hardware bit flips can go undetected by software if the TCP checksum offload feature offered by recent network cards is enabled).

  • Bit errors occurring in the data being transferred from the network interface to the receiving node's main memory (the same considerations apply to TCP checksum offloading).

  • Corruption caused by bugs or race conditions within the network interface firmware and drivers.

  • Bit flips or random corruption injected by intermediate network components that reassemble packets on the path between the nodes (unless direct, back-to-back connections are used).
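As a sketch, enabling replication traffic integrity checking in DRBD-style configuration (which bsr inherits) means naming a digest algorithm with the net option `data-integrity-alg`; the option name follows DRBD, and since stronger digests raise CPU cost, this check is often enabled temporarily to diagnose suspected network corruption rather than left on permanently:

```
resource r0 {
    net {
        data-integrity-alg md5;  # digest every replicated block;
                                 # a mismatch triggers retransmission
    }
}
```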

Split-brain

A split-brain is a situation in which, during a temporary failure where all network links between the cluster nodes are disconnected, two or more nodes come to hold the primary role due to intervention by the cluster management software or an administrator. This is a potentially serious situation, because it implies that modifications were made to the data on each node separately rather than being replicated to the other side. The result can be two data sets that cannot be merged.

BSR provides the ability to automatically detect split brains and repair them. For more information about this, see the Split brain topic in Troubleshooting.

Disk status

The disk status in BSR is represented by one of the following states, depending on the situation.

  • Diskless The state before a backing device is attached as a replication disk (Attach), or after the disk has been detached due to an I/O failure (Detach).

  • UpToDate The disk data is up to date. If the target's disk is UpToDate, the node is eligible for failover.

  • Outdated The data is consistent as of some point in time, but may not be up to date. When the mirror connection is explicitly disconnected, the target's disk state defaults to Outdated.

  • Inconsistent The data is broken; its consistency is not guaranteed. If the target's disk is Inconsistent, the node cannot, by default, be promoted.

BSR distinguishes between inconsistent and outdated data. Inconsistent data is data that is inaccessible or unusable in some way. Typically, the data on the target side of an ongoing synchronization is Inconsistent: it is partly current and partly outdated, so it cannot be regarded as a snapshot from any single point in time. A filesystem on such a device may not be mountable, and may not even pass an automatic filesystem check.

The Outdated disk state denotes data that is consistent but no longer synchronized with the primary node's most recent data, or data suspected to be so. This happens when a replication link goes down, whether temporarily or permanently. Since disconnected Outdated data is ultimately data from a past point in time, BSR by default disallows promoting a resource on a node with outdated data, to prevent such data from being put into service. However, promotion of outdated data can be forced if necessary (in an emergency). In this regard, BSR provides an interface that lets applications mark a secondary node's data Outdated on their side as soon as a network disconnect occurs. Once the replication link to the Outdated resource is re-established, the Outdated status flag is cleared automatically, and a background synchronization brings the data up to date (UpToDate). A secondary node whose primary has crashed, or a disconnected secondary, may have an Outdated disk status.

Handling disk I/O errors

When a disk device fails, BSR follows the configured disk failure policy: it either simply passes the I/O error up to a higher layer (usually the filesystem) to handle, or detaches the replication disk and stops replication. The former is the pass-through policy, the latter the detach policy.

Pass-through

When an error occurs at the lower disk tier, it is passed to the upper (filesystem) tier without further processing. The corresponding handling of the error is left to the higher tier. For example, the filesystem might see the error and attempt to retry writing to the disk or remount in a read-only fashion. This way of passing errors to higher layers allows the filesystem to recognize errors on its own, giving it a chance to react on its own.

Detach

If the error policy is configured as Detach, BSR automatically detaches the disk when an error occurs at the lower layer. A detached disk becomes Diskless and I/O to it is blocked, which means the disk failure has been recognized and follow-up action should be taken. BSR defines the Diskless state as a state in which I/O to the disk is blocked. This is discussed in more detail in Disk failures in Troubleshooting.
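A hedged configuration sketch: in DRBD-style syntax, which bsr inherits, the policy is selected with the disk option `on-io-error`; the keywords below mirror the two policies described above, but the exact spellings supported by bsr should be checked against its man pages:

```
resource r0 {
    disk {
        on-io-error detach;   # detach the backing disk on a lower-level
                              # I/O error and continue Diskless;
                              # "passthrough" would instead hand the
                              # error up to the filesystem
    }
}
```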