Configuration environments
...
BSR is designed as a versatile block-replication engine that can be deployed across a wide range of IT infrastructures without environmental constraints. It delivers a consistent replication model regardless of the underlying compute architecture—physical servers, virtualized environments, or hyper-converged infrastructure (HCI)—and can be flexibly integrated into various operational scenarios.
Since BSR operates at the block device layer, independent of operating system or hardware architecture, it can be reliably deployed in the following environments:
...
Physical Servers
Supports general x86 servers as well as diverse storage backends such as DAS, SAN, and NVMe
Provides stable replication performance for high-throughput, enterprise-grade workloads
Suitable for OS-level replication, database servers, and mission-critical applications
...
Virtualized Environments (VMs)
BSR performs block-level replication inside virtual machines across major hypervisors.
Fully operable in VMware, Hyper-V, KVM, and other virtualization platforms
Supports replication of virtual disks (VMDK, VHDX, QCOW2, and others)
Enables P2V (physical→virtual), V2P (virtual→physical), and V2V (virtual→virtual) transition scenarios
Well-suited for implementing HA/DR strategies for VM-based services
...
Hyper-Converged Infrastructure (HCI)
BSR functions as an independent replication layer even in environments where storage and compute are tightly integrated.
Compatible with platforms such as Nutanix, vSAN, and Red Hat HCI
Operates reliably on distributed, local-storage–based clusters
Integrates cleanly without conflicting with storage policies or cluster-management frameworks
...
Development / Test / Large-Scale Service Environments
BSR can be adopted in environments ranging from single-server deployments to large, multi-host clusters.
Simple data-protection scenarios between individual hosts
Enterprise systems running tens to hundreds of terabytes
Distributed service architectures requiring continuous data availability
Mixed-OS and multi-platform service environments
Specifications
The following are the physical requirements for the platforms and target systems supported by BSR.
Platforms
BSR supports x64 environments running Windows 2012 or higher, and Linux distributions such as CentOS 6.4 or higher and Ubuntu 16.04 LTS or higher.
| OS | CPU Architecture |
|---|---|
| Windows 2012 or higher | x64 |
| RHEL / CentOS 6.4 ~ 8.4 | x64 |
| RHEL / Rocky 8.5 or higher | x64 |
| Ubuntu 16 LTS or higher | x64 |
| ProLinux 7 or higher | x64 |
| SUSE 12, 15 | x64 |
File systems
Block replication solutions generally work regardless of the file system type; however, bsr defines a support specification for file systems because of how its fast synchronization works.
In recent years, the size of replication volumes has grown significantly (tens to hundreds of terabytes), and as a result, the initial synchronization of these volumes now requires a substantial amount of time. When performing initial synchronization over the entire volume, the process may take several days to even several weeks. For example, with a 1 Gbps network link, synchronizing a 10 TB volume requires at least 27 hours, while a 100 TB volume can take more than ten days.
To address this long initial-sync problem, we implemented FastSync (Filesystem Active Sector Transfer Sync), which synchronizes only the sectors actually allocated by the filesystem. FastSync dramatically reduces the initial synchronization time by transferring only the filesystem-used areas instead of the entire volume. For instance, if a 100 TB volume contains only 10 GB of actual data, FastSync can complete the initial sync in under one minute over a 1 Gbps network.
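As a rough illustration of these figures, the sketch below estimates transfer time from the amount of data to be moved and an assumed effective throughput of about 100 MB/s on a 1 Gbps link; the helper function and the throughput figure are illustrative assumptions, not part of BSR.

```python
# Rough initial-sync time estimate. The ~100 MB/s effective throughput of a
# 1 Gbps link is an assumed figure for illustration, not a measured BSR value.

TB = 1024 ** 4
GB = 1024 ** 3

def sync_time_hours(data_bytes, throughput_mb_s=100):
    """Hours needed to transfer data_bytes at throughput_mb_s MB/s."""
    return data_bytes / (throughput_mb_s * 1024 ** 2) / 3600

# Full-volume initial sync over the whole device.
print(f"10 TB full sync : {sync_time_hours(10 * TB):.1f} h")    # ~29 h
print(f"100 TB full sync: {sync_time_hours(100 * TB):.1f} h")   # ~291 h, about 12 days

# FastSync transfers only the allocated area, e.g. 10 GB used on a 100 TB volume.
print(f"FastSync, 10 GB used: {sync_time_hours(10 * GB) * 60:.1f} min")
```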
FastSync supports the following filesystems. For all other filesystems, synchronization is performed over the full volume:
NTFS on Windows
Ext3, Ext4, XFS, Btrfs on Linux
| Note |
|---|
Precautions for ReFS (Non-Journaling Filesystem) Volumes
Scope: This applies when using a Windows ReFS (Resilient File System)-formatted data volume as a BSR replication target.
Precautions
Recommendation |
CPU
An x64-compatible processor of at least 2 GHz with 4 or more cores is recommended. BSR will operate on lower-specification processors, but considering the I/O performance of the system, the host should be provisioned with as much CPU capacity as possible.
Memory
The system typically starts paging virtual memory when memory usage exceeds 70%, depending on kernel settings. Because paging degrades system I/O performance, it is beneficial for replication to keep at least 30% of physical memory free at all times so that paging is suppressed.
The memory used by BSR is primarily required for buffering purposes and is determined by the maximum write request value (max-req-write-count) in the BSR settings and the size of the transmit buffer. Below is an example for a Windows environment.
For synchronous replication without a send buffer:
At the default write request setting (10,000), a maximum of 1.5 GB is used per resource.
At the maximum write request setting (100,000), a maximum of 3 GB is used per resource.
For asynchronous replication with a 1 GB send buffer:
At the default write request setting, a maximum of 2.5 GB is used per resource.
At the maximum write request setting, a maximum of 4 GB is used per resource.
For example, on a server with 64 GB of physical memory, approximately 20 GB (about 30%) should be kept free, and on top of that, asynchronous replication at the default settings requires up to 2.5 GB of the used memory per resource.
If you don't have that 30% free memory, you'll have to accept a degradation in basic I/O performance due to paging.
The 3 to 4 GB of non-paged (NP) memory per resource required by replication must remain available; otherwise the system can run out of memory, which can lead to failures.
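The guideline above can be turned into a simple estimate. The sketch below combines the 30% paging headroom with the per-resource figures listed above; the helper function and its names are illustrative assumptions, not a BSR tool.

```python
# Sketch of the memory budgeting above: ~30% of RAM kept free for paging
# headroom plus the per-resource replication buffers. The per-resource figures
# come from the text (Windows); the helper itself is an illustrative assumption.

PER_RESOURCE_GB = {
    ("sync", "default"): 1.5,   # synchronous, no send buffer, 10,000 write requests
    ("sync", "max"): 3.0,       # synchronous, no send buffer, 100,000 write requests
    ("async", "default"): 2.5,  # asynchronous, 1 GB send buffer, default write requests
    ("async", "max"): 4.0,      # asynchronous, 1 GB send buffer, maximum write requests
}

def required_free_memory_gb(total_gb, resources, mode="async", setting="default"):
    """Memory (GB) to keep available: paging headroom plus replication buffers."""
    return total_gb * 0.30 + resources * PER_RESOURCE_GB[(mode, setting)]

# Example from the text: 64 GB server, one resource, asynchronous, defaults.
print(required_free_memory_gb(64, resources=1))   # 19.2 + 2.5 = 21.7 GB
```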
Transmit buffer size
The size of the transmit (TX) buffer on the local side is ideally set to a value that allows locally generated I/O data to be transferred to the remote side without stalling. It can be estimated with the following equation:
Maximum transmit bandwidth (MB/s) * transmit timeout (s)
For example, on a 1 Gbps link this is approximately 100 MB/s * 5 s = 500 MB, so a buffer of 500 MB to 1 GB can be configured.
If the variable bandwidth of a WAN segment needs to be considered, add capacity to this size for the number of seconds of WAN bottleneck delay you want to tolerate, or buffer the WAN leg with a proxy (DRX). WAN-leg buffering is typically sized at 5 to 10 times the TX buffer.
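As a sketch of this sizing rule, assuming the bandwidth and timeout values above, the helper below computes the TX buffer and a WAN-leg buffer; the function names and defaults are illustrative, not BSR parameters.

```python
# Sketch of the sizing rule above: TX buffer ~ maximum transmit bandwidth x
# transmit timeout, with an optional multiplier for WAN-leg (proxy/DRX)
# buffering. The function names and defaults are illustrative assumptions.

def tx_buffer_mb(bandwidth_mb_s, timeout_s=5):
    """Local TX buffer size (MB) from bandwidth (MB/s) and transmit timeout (s)."""
    return bandwidth_mb_s * timeout_s

def wan_buffer_mb(tx_mb, factor=5):
    """WAN-leg buffering, typically 5 to 10 times the TX buffer."""
    return tx_mb * factor

tx = tx_buffer_mb(100)           # 1 Gbps ~ 100 MB/s -> 500 MB
print(tx, wan_buffer_mb(tx))     # 500 MB TX buffer, 2500 MB for the WAN leg (5x)
```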
| Info |
|---|
When paging occurs can vary depending on the system's memory capacity, platform, and OS version; the 70% figure described above is typical and should be interpreted in the context of your environment. BSR memory usage on Linux is similar to or lower than on Windows; it uses somewhat more memory on Windows due to differences in the replication architecture. |
| Info |
|---|
| Recently, replication configurations on VMs have become increasingly common in virtualized environments. A characteristic of these environments is that the CPU and memory resources allocated to individual VMs are not always sufficient. For example, a VM configured with 2 cores and 2 to 4 GB of memory may not meet the minimum configuration specifications for bsr. When a replication environment is built on such low-spec VMs, performance delays inside the bsr engine and inter-node communication (keep-alive) delays become more likely due to frequent CPU context switching. Basic operation is not affected, but as the I/O load or the frequency of hardware-layer interrupts increases, the performance of the VM as a whole decreases, and bsr is affected accordingly. In short, building a replication environment on low-specification VMs has limitations, and there is no remedy other than provisioning sufficient system resources. |
Disk
Basic installation space
Installing all of the bsr binary modules requires about 200 MB, and storing bsr logs requires about 1 GB, so roughly 2 GB of disk space should be reserved for installation.
Mirror disks
Theoretically, the capacity of a BSR mirror disk is unlimited, but in practice the disk capacity is limited to 10 TB. At higher capacities, the metadata area grows along with the mirror disk capacity, and operations that must process the entire meta disk (such as Attach) become time-consuming and impede operations.
| Info |
|---|
Multi-volume: In many cases, the storage for a single service task is composed of multiple volumes, and these volumes need to be treated as one resource. Operating multiple volumes under a single resource in this way is called a multi-volume configuration. In a multi-volume configuration, the buffer queue for replicated data is operated as a single queue, serializing the order of the service's write I/O to the disk volumes in order to ensure service-side consistency. The number of volumes per resource is theoretically unlimited (up to 65,535), but in practice, as the number of volumes grows, the delay in the buffer queue can become severe and difficult to control. As a guideline, a configuration of three volumes or less is appropriate on a 1 Gbps network, and no more than 4 to 5 volumes on a 10 Gbps network. |
Meta Disk
The capacity of the meta disk must be estimated based on the capacity of the replication volume. About 33 MB of meta disk space is required per 1 TB of replication volume; for a more precise size, see Metadata Size Estimation.
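For a quick estimate based on the 33 MB per TB rule of thumb, a minimal sketch is shown below; the helper is illustrative and should be replaced by the Metadata Size Estimation formula for exact sizing.

```python
# Quick estimate from the ~33 MB per 1 TB rule of thumb above. For exact
# sizing, use the Metadata Size Estimation formula; this helper is only an
# illustrative approximation.

def meta_disk_mb(volume_tb, mb_per_tb=33):
    """Approximate meta disk space (MB) for a replication volume of volume_tb TB."""
    return volume_tb * mb_per_tb

print(meta_disk_mb(10))    # 10 TB replication volume -> ~330 MB of metadata
print(meta_disk_mb(0.5))   # 500 GB volume -> ~16.5 MB
```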
Network
Throughput
In recent corporate environments, mirroring over a local network is generally configured with 1 Gbps to 10 Gbps of bandwidth, while remote replication for DR (Disaster Recovery) is generally operated over lower bandwidth. In other words, BSR is applied across a wide range of network environments, from very low bandwidth (10 to 100 Mbps) up to high bandwidth. However, because bandwidth limitations in a low-bandwidth network inevitably affect replication performance, consider options such as pairing the replication link with a replication accelerator to improve performance.
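As a rough way to judge whether a given band is adequate, the sketch below compares the usable replication bandwidth with an assumed average write rate of the service; the 80% utilization ceiling and the helper are illustrative assumptions.

```python
# Rough check of whether a given band can keep up with the service's average
# write rate. The 80% utilization ceiling and the helper are illustrative
# assumptions, not BSR parameters.

def replication_headroom(bandwidth_mbps, write_rate_mb_s, utilization=0.8):
    """Usable replication bandwidth (MB/s) and whether it covers the write rate."""
    usable_mb_s = bandwidth_mbps / 8 * utilization
    return usable_mb_s, usable_mb_s >= write_rate_mb_s

print(replication_headroom(1000, 60))  # 1 Gbps LAN, 60 MB/s writes -> (100.0, True)
print(replication_headroom(100, 60))   # 100 Mbps DR link -> (10.0, False): consider an accelerator
```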
Ports
The mirroring ports used for replication (as specified in the configuration file) must be open. On Windows, TCP loopback ports 5678 and 5679 must additionally be open for control.
Storage provisioning
Only the thick provisioning method of storage provisioning is supported. Thin-provisioned environments with disk space reclamation are not compatible with BSR.
| Info |
|---|
Disk space reclamation in THIN provisioning can cause data inconsistency issues in certain situations. To use a THIN environment, you must disable space reclamation. |