Commit eda4b98

Merge pull request #312498 from Xelu86/hagluster
[Update] GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
2 parents ff79e49 + 11560cc commit eda4b98

1 file changed: 83 additions & 73 deletions
@@ -1,14 +1,14 @@
 ---
-title: GlusterFS on Azure VMs on RHEL for SAP NetWeaver | Microsoft Docs
-description: GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
+title: GlusterFS on Azure VMs on RHEL for SAP NetWeaver
+description: Learn about deploying GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver.
 services: virtual-machines-windows,virtual-network,storage
-author: rdeltcheva
-manager: juergent
 ms.service: sap-on-azure
 ms.subservice: sap-vm-workloads
-ms.topic: article
-ms.date: 07/03/2023
+ms.topic: how-to
+manager: juergent
+author: rdeltcheva
 ms.author: radeltch
+ms.date: 03/02/2026
 ms.custom:
   - linux-related-content
   - sfi-image-nochange
@@ -17,43 +17,25 @@ ms.custom:
 
 # GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
 
-[dbms-guide]:dbms-guide-general.md
-[deployment-guide]:deployment-guide.md
-[planning-guide]:planning-guide.md
+This article describes how to deploy the virtual machines (VMs), configure the VMs, and install a GlusterFS cluster. The GlusterFS cluster is used to store the shared data of a highly available SAP system. This guide describes how to set up GlusterFS that is used by two SAP systems, NW1 and NW2. The names of the resources (for example, VMs and virtual networks) in this example assume that you used the [SAP file server template][template-file-server] with the resource prefix `glust`.
 
-[2002167]:https://launchpad.support.sap.com/#/notes/2002167
-[2009879]:https://launchpad.support.sap.com/#/notes/2009879
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
+As documented in [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs), Red Hat Gluster Storage reaches end of life at the end of 2024. The configuration is supported for SAP on Azure until it reaches the end of life stage. GlusterFS shouldn't be used for new deployments. We recommend deploying the SAP shared directories on NFS on Azure Files or Azure NetApp Files volumes, as documented in [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md). See also [HA for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md).
 
-[template-file-server]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-file-server-md%2Fazuredeploy.json
-
-[sap-hana-ha]:sap-hana-high-availability-rhel.md
-
-This article describes how to deploy the virtual machines, configure the virtual machines, and install a GlusterFS cluster that can be used to store the shared data of a highly available SAP system.
-This guide describes how to set up GlusterFS that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **glust**.
-
-Be aware that as documented in [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs) Red Hat Gluster Storage will reach end of life at the end of 2024. The configuration will be supported for SAP on Azure until it reaches end of life stage. GlusterFS should not be used for new deployments. We recommend to deploy the SAP shared directories on NFS on Azure Files or Azure NetApp Files volumes as documented in [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md) or [HA for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md).
-
-Read the following SAP Notes and papers first
+Read the following SAP Notes and papers first:
 
 * SAP Note [1928533], which has:
   * List of Azure VM sizes that are supported for the deployment of SAP software
   * Important capacity information for Azure VM sizes
   * Supported SAP software, and operating system (OS) and database combinations
   * Required SAP kernel version for Windows and Linux on Microsoft Azure
 
-* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux
+* SAP Note [2015553] has prerequisites for SAP-supported SAP software deployments in Azure.
+* SAP Note [2002167] has the recommended OS settings for Red Hat Enterprise Linux
 * SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux
 * SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
 * SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure.
 * SAP Note [2243692] has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351] has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* SAP Note [1999351] has extra troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
 * [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
 * [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
 * [Azure Virtual Machines deployment for SAP on Linux (this article)][deployment-guide]
@@ -70,36 +52,36 @@ Read the following SAP Notes and papers first
 
 ## Overview
 
-To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems.
+To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS runs in a separate cluster and supports access by multiple SAP systems.
 
-![SAP NetWeaver High Availability overview](./media/high-availability-guide-rhel-glusterfs/rhel-glusterfs.png)
+![A diagram of GlusterFS shared storage with three nodes configured for high availability for SAP NetWeaver.](./media/high-availability-guide-rhel-glusterfs/rhel-glusterfs.png)
 
-## Set up GlusterFS
+## Prerequisites
 
 In this example, the resources were deployed manually via the [Azure portal](https://portal.azure.com/#home).
 
-### Deploy Linux manually via Azure portal
+This document assumes that you deployed:
 
-This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
+* A resource group.
+* An [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) and subnet.
 
-Deploy virtual machines for GlusterFS. Choose a suitable RHEL image that is supported for Gluster storage. You can deploy VM in any one of the availability options - scale set, availability zone or availability set.
+When you deploy VMs for GlusterFS, choose a suitable RHEL image that supports Gluster storage. You can deploy the VMs in any of the availability options: scale set, availability zone, or availability set.
 
 ### Configure GlusterFS
 
 The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1, **[2]** - only applicable to node 2, **[3]** - only applicable to node 3.
 
 1. **[A]** Set up host name resolution
 
-    You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
-    Replace the IP address and the hostname in the following commands
+    You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands:
 
     ```bash
     sudo vi /etc/hosts
    ```
 
-    Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
+    Insert the following lines to `/etc/hosts`. Change the IP address and hostname to match your environment:
 
-    ```text
+    ```
     # IP addresses of the Gluster nodes
     10.0.0.40 glust-0
     10.0.0.41 glust-1
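The host-name step above can also be scripted so that rerunning it doesn't duplicate entries. This is a hedged sketch, not part of the original guide: `add_host_entry` and `HOSTS_FILE` are illustrative names, the IPs and hostnames are the example values from this guide, and the demo writes to a temp file so it is safe to run; point `HOSTS_FILE` at `/etc/hosts` on the real nodes.

```bash
#!/usr/bin/env bash
# Illustrative sketch: add the Gluster node entries idempotently.
set -euo pipefail

# Demo target is a temp file; set HOSTS_FILE=/etc/hosts on a real node.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host_entry() {
    local ip="$1" name="$2"
    # Append only if the hostname is not already present (idempotent).
    grep -qw "$name" "$HOSTS_FILE" || printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
}

add_host_entry 10.0.0.40 glust-0
add_host_entry 10.0.0.41 glust-1
add_host_entry 10.0.0.42 glust-2
add_host_entry 10.0.0.40 glust-0   # second call is a no-op

echo "entries: $(wc -l < "$HOSTS_FILE")"   # prints: entries: 3
```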
@@ -108,7 +90,7 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 
 1. **[A]** Register
 
-    Register your virtual machines and attach it to a pool that contains repositories for RHEL 7 and GlusterFS
+    Register your VMs and attach them to a pool that contains repositories for RHEL 7 and GlusterFS:
 
     ```bash
     sudo subscription-manager register
@@ -117,39 +99,39 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 
 1. **[A]** Enable GlusterFS repos
 
-    In order to install the required packages, enable the following repositories.
+    To install the required packages, enable the following repositories:
 
     ```bash
     sudo subscription-manager repos --disable "*"
     sudo subscription-manager repos --enable=rhel-7-server-rpms
     sudo subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
     ```
 
 1. **[A]** Install GlusterFS packages
 
-    Install these packages on all GlusterFS nodes
+    Install these packages on all GlusterFS nodes:
 
     ```bash
     sudo yum -y install redhat-storage-server
     ```
 
     Reboot the nodes after the installation.
 
-1. **[A]** Modify Firewall
+1. **[A]** Modify firewall
 
-    Add firewall rules to allow client traffic to the GlusterFS nodes.
+    Add firewall rules to allow client traffic to the GlusterFS nodes:
 
     ```bash
     # list the available zones
     firewall-cmd --get-active-zones
 
     sudo firewall-cmd --zone=public --add-service=glusterfs --permanent
     sudo firewall-cmd --zone=public --add-service=glusterfs
     ```
 
 1. **[A]** Enable and start GlusterFS service
 
-    Start the GlusterFS service on all nodes.
+    Start the GlusterFS service on all nodes:
 
     ```bash
     sudo systemctl start glusterd
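Because every **[A]** step must be repeated on all three nodes, it can help to collect the per-node commands once and replay them on each host. This is a hedged sketch, not from the original guide: `glusterfs_node_setup_cmds` is an illustrative helper name, `systemctl enable glusterd` is an assumption (the guide shows only `start`), and the loop only prints the commands (dry run) rather than executing them over `ssh`.

```bash
#!/usr/bin/env bash
# Illustrative dry-run: emit the firewall + service commands for each node.
NODES=(glust-0 glust-1 glust-2)   # example hostnames from this guide

glusterfs_node_setup_cmds() {
    cat <<'EOF'
sudo firewall-cmd --zone=public --add-service=glusterfs --permanent
sudo firewall-cmd --zone=public --add-service=glusterfs
sudo systemctl start glusterd
sudo systemctl enable glusterd
EOF
}

for node in "${NODES[@]}"; do
    echo "## $node"
    glusterfs_node_setup_cmds
    # From a jump host you might instead run:
    # glusterfs_node_setup_cmds | ssh "$node" bash
done
```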
@@ -158,32 +140,40 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 
 1. **[1]** Create GlusterFS cluster
 
-    Run the following commands to create the GlusterFS cluster
+    Create the GlusterFS cluster by running the following commands:
 
     ```bash
     sudo gluster peer probe glust-1
     sudo gluster peer probe glust-2
-
-    # Check gluster peer status
+    ```
+
+    Check the GlusterFS peer status:
+
+    ```bash
     sudo gluster peer status
-
+    ```
+
+    ```output
     # Number of Peers: 2
     #
     # Hostname: glust-1
     # Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
     # State: Accepted peer request (Connected)
     #
     # Hostname: glust-2
     # Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
     # State: Accepted peer request (Connected)
     ```
 
 1. **[2]** Test peer status
 
-    Test the peer status on the second node
+    Test the peer status on the second node:
 
     ```bash
     sudo gluster peer status
+    ```
+
+    ```output
     # Number of Peers: 2
     #
     # Hostname: glust-0
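The peer checks above can be automated by counting the `(Connected)` lines in the status output. A hedged sketch, not part of the original guide: here the sample output from this guide is fed in through a here-doc so the snippet is self-contained; on a real node you would replace the here-doc with the output of `sudo gluster peer status`.

```bash
#!/usr/bin/env bash
# Illustrative check: are all expected peers connected?
# Sample output below is the example from this guide.
peer_output=$(cat <<'EOF'
Number of Peers: 2

Hostname: glust-1
Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
State: Accepted peer request (Connected)

Hostname: glust-2
Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
State: Accepted peer request (Connected)
EOF
)

expected=2
connected=$(printf '%s\n' "$peer_output" | grep -c '(Connected)')
if [ "$connected" -eq "$expected" ]; then
    echo "all $expected peers connected"
else
    echo "only $connected of $expected peers connected" >&2
fi
```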
@@ -197,10 +187,13 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 
 1. **[3]** Test peer status
 
-    Test the peer status on the third node
+    Test the peer status on the third node:
 
     ```bash
     sudo gluster peer status
+    ```
+
+    ```output
     # Number of Peers: 2
     #
     # Hostname: glust-0
@@ -214,9 +207,9 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 
 1. **[A]** Create LVM
 
-    In this example, the GlusterFS is used for two SAP systems, NW1 and NW2. Use the following commands to create LVM configurations for these SAP systems.
+    In this example, GlusterFS is used for two SAP systems, **NW1** and **NW2**. Use the following commands to create the LVM configurations for these SAP systems.
 
-    Use these commands for NW1
+    Use these commands for **NW1**:
 
     ```bash
     sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun0
@@ -229,35 +222,35 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
     sudo lvcreate -l 50%FREE -n rhgs-NW1/ascs
     sudo lvcreate -l 100%FREE -n rhgs-NW1/aers
     sudo lvscan
 
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sapmnt
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/trans
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sys
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/ascs
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/aers
 
     sudo mkdir -p /rhs/NW1/sapmnt
     sudo mkdir -p /rhs/NW1/trans
     sudo mkdir -p /rhs/NW1/sys
     sudo mkdir -p /rhs/NW1/ascs
     sudo mkdir -p /rhs/NW1/aers
 
     sudo chattr +i /rhs/NW1/sapmnt
     sudo chattr +i /rhs/NW1/trans
     sudo chattr +i /rhs/NW1/sys
     sudo chattr +i /rhs/NW1/ascs
     sudo chattr +i /rhs/NW1/aers
 
     echo -e "/dev/rhgs-NW1/sapmnt\t/rhs/NW1/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW1/trans\t/rhs/NW1/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW1/sys\t/rhs/NW1/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW1/ascs\t/rhs/NW1/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW1/aers\t/rhs/NW1/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
 
     sudo mount -a
     ```
 
-    Use these commands for NW2
+    Use these commands for **NW2**:
 
     ```bash
     sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun1
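The NW1 and NW2 blocks differ only in the SID and LUN, so the repetitive `/etc/fstab` lines can be generated from a helper. A hedged sketch, not from the original guide: `emit_fstab_lines` is an illustrative function name, the mount options are exactly the ones used in this guide, and the helper only prints to stdout; on a node you would pipe its output into `sudo tee -a /etc/fstab`.

```bash
#!/usr/bin/env bash
# Illustrative generator for the per-SID fstab entries used in this guide.
emit_fstab_lines() {
    local sid="$1" vol
    for vol in sapmnt trans sys ascs aers; do
        # device, mount point, fs type, options, dump, pass
        printf '/dev/rhgs-%s/%s\t/rhs/%s/%s\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2\n' \
            "$sid" "$vol" "$sid" "$vol"
    done
}

emit_fstab_lines NW1
# On a node: emit_fstab_lines NW1 | sudo tee -a /etc/fstab
```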
@@ -269,62 +262,62 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
     sudo lvcreate -l 10%FREE -n rhgs-NW2/sys
     sudo lvcreate -l 50%FREE -n rhgs-NW2/ascs
     sudo lvcreate -l 100%FREE -n rhgs-NW2/aers
 
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sapmnt
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/trans
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sys
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/ascs
     sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/aers
 
     sudo mkdir -p /rhs/NW2/sapmnt
     sudo mkdir -p /rhs/NW2/trans
     sudo mkdir -p /rhs/NW2/sys
     sudo mkdir -p /rhs/NW2/ascs
     sudo mkdir -p /rhs/NW2/aers
 
     sudo chattr +i /rhs/NW2/sapmnt
     sudo chattr +i /rhs/NW2/trans
     sudo chattr +i /rhs/NW2/sys
     sudo chattr +i /rhs/NW2/ascs
     sudo chattr +i /rhs/NW2/aers
     sudo lvscan
 
     echo -e "/dev/rhgs-NW2/sapmnt\t/rhs/NW2/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW2/trans\t/rhs/NW2/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW2/sys\t/rhs/NW2/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW2/ascs\t/rhs/NW2/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
     echo -e "/dev/rhgs-NW2/aers\t/rhs/NW2/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
 
     sudo mount -a
     ```
 
 1. **[1]** Create the distributed volume
 
-    Use the following commands to create the GlusterFS volume for NW1 and start it.
+    Use the following commands to create the GlusterFS volume for **NW1** and start it:
 
     ```bash
     sudo gluster vol create NW1-sapmnt replica 3 glust-0:/rhs/NW1/sapmnt glust-1:/rhs/NW1/sapmnt glust-2:/rhs/NW1/sapmnt force
     sudo gluster vol create NW1-trans replica 3 glust-0:/rhs/NW1/trans glust-1:/rhs/NW1/trans glust-2:/rhs/NW1/trans force
     sudo gluster vol create NW1-sys replica 3 glust-0:/rhs/NW1/sys glust-1:/rhs/NW1/sys glust-2:/rhs/NW1/sys force
     sudo gluster vol create NW1-ascs replica 3 glust-0:/rhs/NW1/ascs glust-1:/rhs/NW1/ascs glust-2:/rhs/NW1/ascs force
     sudo gluster vol create NW1-aers replica 3 glust-0:/rhs/NW1/aers glust-1:/rhs/NW1/aers glust-2:/rhs/NW1/aers force
 
     sudo gluster volume start NW1-sapmnt
     sudo gluster volume start NW1-trans
     sudo gluster volume start NW1-sys
     sudo gluster volume start NW1-ascs
     sudo gluster volume start NW1-aers
     ```
 
-    Use the following commands to create the GlusterFS volume for NW2 and start it.
+    Use the following commands to create the GlusterFS volume for **NW2** and start it:
 
     ```bash
     sudo gluster vol create NW2-sapmnt replica 3 glust-0:/rhs/NW2/sapmnt glust-1:/rhs/NW2/sapmnt glust-2:/rhs/NW2/sapmnt force
     sudo gluster vol create NW2-trans replica 3 glust-0:/rhs/NW2/trans glust-1:/rhs/NW2/trans glust-2:/rhs/NW2/trans force
     sudo gluster vol create NW2-sys replica 3 glust-0:/rhs/NW2/sys glust-1:/rhs/NW2/sys glust-2:/rhs/NW2/sys force
     sudo gluster vol create NW2-ascs replica 3 glust-0:/rhs/NW2/ascs glust-1:/rhs/NW2/ascs glust-2:/rhs/NW2/ascs force
     sudo gluster vol create NW2-aers replica 3 glust-0:/rhs/NW2/aers glust-1:/rhs/NW2/aers glust-2:/rhs/NW2/aers force
 
     sudo gluster volume start NW2-sapmnt
     sudo gluster volume start NW2-trans
     sudo gluster volume start NW2-sys
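Once the volumes are started, the SAP application servers consume them with the `glusterfs` mount type. A hedged sketch, not from the original guide: this only prints the mount commands (dry run); `glust-0` is the example node name (any of the three nodes can serve the mount), and the `/mnt/NW1/...` mount points are placeholders, not the directories your SAP installation will actually use.

```bash
#!/usr/bin/env bash
# Illustrative dry-run: print client mount commands for the NW1 volumes.
sid=NW1
for vol in sapmnt trans sys ascs aers; do
    # Hypothetical mount points; substitute the paths your SAP setup needs.
    echo "sudo mount -t glusterfs glust-0:/${sid}-${vol} /mnt/${sid}/${vol}"
done
```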
@@ -340,3 +333,20 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 * [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
 * To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery).
 * To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+
+[dbms-guide]:dbms-guide-general.md
+[deployment-guide]:deployment-guide.md
+[planning-guide]:planning-guide.md
+
+[2002167]:https://launchpad.support.sap.com/#/notes/2002167
+[2009879]:https://launchpad.support.sap.com/#/notes/2009879
+[1928533]:https://launchpad.support.sap.com/#/notes/1928533
+[2015553]:https://launchpad.support.sap.com/#/notes/2015553
+[2178632]:https://launchpad.support.sap.com/#/notes/2178632
+[2191498]:https://launchpad.support.sap.com/#/notes/2191498
+[2243692]:https://launchpad.support.sap.com/#/notes/2243692
+[1999351]:https://launchpad.support.sap.com/#/notes/1999351
+
+[template-file-server]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-file-server-md%2Fazuredeploy.json
+
+[sap-hana-ha]:sap-hana-high-availability-rhel.md
