Storage Spaces Direct Windows 2022 Step by Step
This blog post is cross-posted on arnaudpain.com and tech-addict.fr, as we (Arnaud Pain and Samuel Legrand) have decided to work together to present this topic at multiple events in 2019. We have put this presentation together to help users understand how they can rely on Microsoft for their data protection. Below is some more information on the implementation, along with our feedback.

What is Storage Replica

Storage Replica is a Windows Server technology that enables replication of volumes between servers or clusters for disaster recovery. It also enables you to create stretch failover clusters that span two sites, with all nodes staying in sync.

Supported configurations

Stretch Cluster allows configuration of computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share another, then synchronously or asynchronously replicate with site awareness. This scenario can use Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and allows for automated workload failover.

Cluster to Cluster allows replication between two separate clusters, where one cluster synchronously or asynchronously replicates with another. This scenario can use Storage Spaces Direct, Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. It is managed with Windows Admin Center and PowerShell, and requires manual intervention for failover.

Server to Server allows synchronous and asynchronous replication between two standalone servers, using Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs, or local drives. It is managed with Windows Admin Center and PowerShell, and requires manual intervention for failover.

The Lab

For the purpose of this article, we decided to work with a Cluster to Cluster configuration with:
Storage Spaces Direct Installation

First of all, you will need to decide whether you want to use a File Share Witness or a Cloud Witness for the cluster:
As we decided to use a Cloud Witness account, there are some prerequisites on the S2D servers. Run the following commands in an elevated PowerShell session on each node:

```powershell
# Querying the PSGallery triggers installation of the NuGet provider if it is missing
Find-Module -Repository PSGallery -Verbose -Name NuGet
# Install the Azure PowerShell module (needed for the Cloud Witness)
Install-Module Az
```

You will then need to restart each node. Here are the PoSH commands to run:

```powershell
$nodes = ("server-1", "server-2")
icm $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}
icm $nodes {Install-WindowsFeature FS-FileServer}
```

Restart each node, then:

```powershell
Test-Cluster -Node $nodes
New-Cluster -Name Cluster-Name -Node $nodes -NoStorage -StaticAddress Cluster-IP
Connect-AzAccount
Set-ClusterQuorum -CloudWitness -AccountName Cloud-Witness-Account -AccessKey Key-1
Enable-ClusterS2D -SkipEligibilityChecks
```

Restart each node.

SOFS Role Configuration

We will install the Scale-Out File Server role; however, some preparation is required first. You will need to ensure that the created cluster has permission to create computer objects in the OU where it resides.

After installing the SOFS role, the next step is to create a file share; however, depending on your configuration, you will need to wait for replication to occur before continuing.

Storage Replica Installation and Configuration

Before starting the Storage Replica steps, you will need to have created the other cluster. In our example, all the above steps need to be done on Samuel's infrastructure in France.
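The SOFS role and file share creation can also be scripted. Here is a minimal sketch, assuming a role name of SOFS and a share called Data (both names are illustrative, as are the volume path and the access group):

```powershell
# Add the Scale-Out File Server role to the cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "Cluster-Name"

# Create a folder on a cluster volume and share it out over SMB
New-Item -Path "C:\ClusterStorage\Volume1\Shares\Data" -ItemType Directory
New-SmbShare -Name "Data" -Path "C:\ClusterStorage\Volume1\Shares\Data" -FullAccess "Domain\Domain Users"
```

The same operations can of course be performed from the Failover Cluster Manager console instead.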
Storage Replica is a Windows Server feature that must be installed on each node from an elevated session, after which the nodes must be restarted. Once this has been done on both clusters, you will need to ensure that both clusters can connect and communicate with each other, granting Storage Replica access in both directions.

On the US S2D cluster: within the Failover Cluster Manager console, ensure that US-Storage and US-Storage-Logs are owned by the local server and are not assigned to Cluster Shared Volumes. Within the Computer Management console, assign drive letter L: to US-Storage-Logs.

Note: Verify that you have a C:\Temp folder on the source and destination computers (if not, create it).

On the FR S2D cluster: within the Failover Cluster Manager console, ensure that FR-Storage and FR-Storage-Logs are owned by the local server and are not assigned to Cluster Shared Volumes. Within the Computer Management console, assign drive letter L: to FR-Storage-Logs.

Then, from the US side, create the Storage Replica partnership between the two clusters.

Note: As our log disk is smaller than the minimum requirement (at least 8 GB), we need to specify the -LogSizeInBytes parameter.

During the initial synchronization, the replication status is Initial block copy.

Change Replication Mode

As our two sites (US and France) have medium-to-high latency between them, we decided to switch the replication mode from synchronous to asynchronous.
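As the exact listings are environment-specific, here is a minimal sketch of the Storage Replica cmdlet sequence described above. The cluster names (US-Cluster, FR-Cluster), replication group names (US-RG, FR-RG) and volume paths are illustrative placeholders for our lab:

```powershell
# Install the Storage Replica feature on every node, then restart them
$nodes = ("server-1", "server-2")
icm $nodes {Install-WindowsFeature Storage-Replica -IncludeManagementTools}

# Allow each cluster to replicate with the other (run once per direction)
Grant-SRAccess -ComputerName "server-1" -Cluster "FR-Cluster"
Grant-SRAccess -ComputerName "fr-server-1" -Cluster "US-Cluster"

# Create the cluster-to-cluster partnership from the US side.
# Our log volumes are under the 8 GB minimum, hence -LogSizeInBytes.
New-SRPartnership -SourceComputerName "US-Cluster" -SourceRGName "US-RG" `
    -SourceVolumeName "C:\ClusterStorage\US-Storage" -SourceLogVolumeName "L:" `
    -DestinationComputerName "FR-Cluster" -DestinationRGName "FR-RG" `
    -DestinationVolumeName "C:\ClusterStorage\FR-Storage" -DestinationLogVolumeName "L:" `
    -LogSizeInBytes 2GB
```

Test-SRTopology can also be run beforehand to validate that the source and destination volumes, log sizing and network meet the requirements.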
To do so, we ran a PoSH command to change the partnership's replication mode. To validate the change, you can run the PoSH command Get-SRGroup.

Now that replication is enabled, if you open the Failover Cluster Manager console, you can see that some volumes are marked as source or destination. A new tab called Replication is added, where you can check the replication status. The destination volume is no longer accessible until you reverse the Storage Replica direction. The first status will be Initial block copy.

Validation of replication

The status of the replication can be checked with Get-SRGroup. We can also see traffic between the firewalls on the IPsec interface. Once the initial synchronization is finished, the replication status is Continuously replicating.

Test Storage Replica

Now that everything is in place, we need to ensure that it is working as expected. We will reverse the Storage Replica direction to allow FR to become the source and its disk to be mounted. In US we have the following:

After the reversal, we can see the files in FR, with the same sizes, and confirm the data is consistent with what we had before switching replication. To simulate an outage during which data is written, we added some data to the folder and waited for replication to occur before switching back to US. We can see below that we had 3 files and 83 MB. We then switched replication back to US.

Validation

Notes from the Field

The disk used for the log should have a minimum recommended size of 8 GB (however, you can test with a smaller size). During the initial replication, the full size of the disk will be copied between the source and the destination; in our example, with 30 GB of disk plus logs, we had the following:

Replication mode selection: Storage Replica supports synchronous and asynchronous replication:
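A sketch of the mode change and the direction reversal described above, using the same illustrative cluster and replication group names (US-Cluster, FR-Cluster, US-RG, FR-RG):

```powershell
# Switch the partnership from synchronous to asynchronous replication
Set-SRPartnership -ReplicationMode Asynchronous

# Check the current replication mode and status of the local groups
(Get-SRGroup).Replicas | Select-Object ReplicationMode, ReplicationStatus

# Reverse the direction: FR becomes the source, US the destination
Set-SRPartnership -NewSourceComputerName "FR-Cluster" -SourceRGName "FR-RG" `
    -DestinationComputerName "US-Cluster" -DestinationRGName "US-RG"
```

Running the reversal a second time with the names swapped switches replication back to US once the simulated outage is over.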
Windows Server 2016 and later includes an option for a cloud (Azure)-based witness. You can choose this quorum option instead of the file share witness.

How does Storage Spaces Direct work? Storage Spaces Direct writes additional copies of data to other nodes in the cluster. Each node acts as a fault domain, and data is spread across the fault domains to prevent data loss if a disk fails. If a disk fails, its data is rebuilt onto another disk in the cluster, so that three copies of the data are present at all times.
What is the first step in deploying disaggregated S2D? The first step in deploying disaggregated S2D is creating a failover cluster.
How many nodes can take part in Storage Spaces Direct? A Storage Spaces Direct cluster can scale from 2 up to 16 servers. Whether all-flash or hybrid, Storage Spaces Direct can exceed 13.7 million IOPS per server.