Optimizing Storage: Deploying Cloud SAN in Advanced Azure Landing Zones

When following best practices for building an Azure Landing Zone (such as Azure Cloud Adoption Framework), deploying a shared SAN storage solution can introduce challenges, particularly from security and FinOps perspectives. This post presents a deployment architecture pattern designed to mitigate these challenges, optimizing both security and cost efficiency while maintaining operational flexibility.

This post is a follow-up to my previous post about the pricing changes introduced at Microsoft Build 2024 and their cost implications for cloud storage solutions.

Motivation

The Azure Cloud Adoption Framework (CAF) is Microsoft’s proven methodology for guiding organizations through their cloud adoption journey. It provides a structured approach to designing, implementing, and optimizing Azure environments based on industry best practices. CAF helps businesses align their cloud strategy with operational, security, and compliance requirements, ensuring a scalable and well-governed foundation.

Native cloud storage solutions often struggle to meet the high performance, low latency, and resilience demands of enterprise workloads. SAN solutions in the cloud address these challenges by providing highly available, scalable, and feature-rich block storage, similar to on-premises SAN architectures. SAN solutions like Pure Cloud Block Store enable seamless replication, disaster recovery, and multi-cloud mobility, making them ideal for mission-critical applications, databases, and virtualized environments that require consistent performance and data integrity.

Challenges

Adopting a SAN solution into an advanced Azure landing zone (inspired by CAF) comes with a few challenges that need to be addressed.

Hidden Costs for the Data Path – Cloud networking and storage transactions can introduce unforeseen costs, especially when data is transferred across regions or via multiple peered vNETs (see more details). Understanding these costs is critical for FinOps efficiency.

No TLS Inspection or Additional Latency – Security policies enforcing TLS decryption can impact SAN replication traffic, potentially introducing performance overhead and requiring additional network configuration. A poorly designed data path can make TLS inspection impossible, or in other cases add latency and reduce storage performance.

Network Segregation – Proper network isolation and segmentation are crucial for security and compliance, but, on the other hand, a shared SAN solution might become a single point of failure from this perspective.

Deployment Architecture Pattern for a Shared SAN Solution

In this post, I’ll demonstrate this pattern using Pure Cloud Block Store (CBS), a SAN block storage solution designed for both Azure and AWS.

Info

Please note that the best performance and the lowest costs for cloud SAN storage solutions are achieved with deployments within a single availability zone and a single virtual network.

This post describes deployments where a single SAN instance must be shared between multiple workloads deployed across multiple virtual networks, while adhering to CAF’s design principles and leveraging cloud benefits.

Key Features

  • Optimal SAN performance: The data path is optimized, ensuring no additional latency or performance degradation is introduced
  • Security isolation: Storage access is strictly limited to host VMs, preventing host VMs from communicating with each other
  • Support for TLS Inspection of traffic: All standard traffic from host VMs can be subject to TLS inspection. The only exception is the CBS data path, which is routed directly for performance optimization
  • FinOps optimization: This architecture pattern avoids unnecessary vNET peering configurations, minimizing the hidden costs associated with inter-vNET data transfer.

Diagram

Deployment diagram of cloud SAN using dual-NIC setup in advanced Azure landing zones

Components

  • SAN solution (Pure Cloud Block Store in our example)
  • Virtual network (vNET)
  • Network interface (NIC)
  • Azure Firewall
  • Network Security Group (NSG)
  • Azure VM

Data path(s)

iSCSI Data Path

Each Host VM uses a dual-NIC setup with a primary and a secondary NIC. The primary NIC is dedicated to standard traffic, routed through Azure Firewall, and the secondary NIC is dedicated to iSCSI communication for block storage access.

Host VMs connect directly to CBS over iSCSI via vNET peering between the host VM vNET and the CBS vNET.
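
If you provision this peering yourself, a minimal Azure CLI sketch could look like the following; the resource group and vNET names (and the subscription ID) are illustrative assumptions, not values from this setup:

# Peer the host VM (spoke) vNET with the CBS vNET in both directions.
# Resource groups, vNET names, and the subscription ID are placeholders.
az network vnet peering create \
  --name spoke1-to-cbs1 \
  --resource-group rg-spoke1 \
  --vnet-name vnet-spoke1 \
  --remote-vnet "/subscriptions/<subscription-id>/resourceGroups/rg-cbs1/providers/Microsoft.Network/virtualNetworks/vnet-cbs1" \
  --allow-vnet-access

az network vnet peering create \
  --name cbs1-to-spoke1 \
  --resource-group rg-cbs1 \
  --vnet-name vnet-cbs1 \
  --remote-vnet "/subscriptions/<subscription-id>/resourceGroups/rg-spoke1/providers/Microsoft.Network/virtualNetworks/vnet-spoke1" \
  --allow-vnet-access

The same pattern applies later to the replication peering between the CBS #1 and CBS #2 vNETs.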

Replication Data Path (Between Multiple CBS Instances)

CBS supports various replication types between storage arrays, ranging from ActiveCluster (synchronous) to multiple asynchronous options such as ActiveDR and CloudSnap. These replication processes utilize dedicated network interfaces to ensure efficient data transfer.

In this setup, CBS instances replicate to each other directly via vNET Peering between the CBS #1 vNET and the CBS #2 vNET.

Details

For the following examples, consider these IP ranges (the ranges are chosen to make the example easier to follow; you don’t need to use the same ones):

- CBS #1 vNET: 192.168.96.0/21
   |- subnet system: 192.168.100.0/24
   |- subnet iscsi: 192.168.101.0/24
   |- subnet mngmt: 192.168.102.0/24
   |- subnet replication: 192.168.103.0/24

- Spoke #1 vNET: 192.168.0.0/22
   |- subnet default: 192.168.1.0/24
   |- subnet cbs-nics: 192.168.2.0/24

- Hub vNET: 10.0.0.0/16
   |- subnet AzureFirewallSubnet: 10.0.1.0/26

- CBS #2 vNET: 192.168.200.0/21
   |- subnet system: 192.168.200.0/24
   |- subnet iscsi: 192.168.201.0/24
   |- subnet mngmt: 192.168.202.0/24
   |- subnet replication: 192.168.203.0/24
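
If you are building these networks from scratch, the CBS #1 vNET and its subnets can be sketched with the Azure CLI as follows; the resource group, vNET name, and location are assumptions:

# Create the CBS #1 vNET (names, resource group, and location are placeholders).
az network vnet create \
  --resource-group rg-cbs1 \
  --name vnet-cbs1 \
  --location westeurope \
  --address-prefixes 192.168.96.0/21

# Create the iscsi subnet; repeat for the system, mngmt, and replication subnets.
az network vnet subnet create \
  --resource-group rg-cbs1 \
  --vnet-name vnet-cbs1 \
  --name iscsi \
  --address-prefixes 192.168.101.0/24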

NSG configurations

To meet all the requirements described above, you need to configure the following Network Security Groups:

CBS #1 vNET - iSCSI subnet

Purpose: This NSG allows iSCSI traffic only from the host VM NICs (in the cbs-nics subnet) and blocks any further communication from other vNETs/subnets.

Inbound: 
Allow 3260 from <<host VM IPs>> to Any
Deny all


Outbound:
Allow 3260 from Any to <<host VM IPs>>
Deny all

Example:

NSG configuration on CBS vNET, the iSCSI subnet

Tip

For a port configuration of inbound security rules, configure the Destination port range only and keep Source port ranges as wildcards.
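
The same rules can be sketched with the Azure CLI; the resource group and NSG name are assumptions, and the source prefix assumes the host VM NICs live in the cbs-nics subnet (192.168.2.0/24) from this example:

# Allow iSCSI (TCP 3260) only from the host VM NICs, then deny everything else inbound.
az network nsg rule create --resource-group rg-cbs1 --nsg-name nsg-cbs1-iscsi \
  --name allow-iscsi-in --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes 192.168.2.0/24 --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges 3260

az network nsg rule create --resource-group rg-cbs1 --nsg-name nsg-cbs1-iscsi \
  --name deny-all-in --priority 4096 --direction Inbound --access Deny \
  --protocol '*' --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'

# The outbound rules follow the same pattern with --direction Outbound,
# and the NSGs for the Spoke #1 vNET subnets are created the same way.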

Spoke #1 vNET - NSG for VM NICs within default subnet

This NSG is for testing purposes, to make sure the iSCSI traffic to CBS will not be routed via the incorrect interface.

Outbound - add the following rule:
Deny 3260 from Any to <<CBS iSCSI subnet range>>
...
Spoke #1 vNET - NSG for VM NICs within cbs-nics subnet

Thanks to this NSG, iSCSI traffic can be routed via the secondary NIC to communicate with CBS, while the host VM NICs cannot communicate with each other.

Inbound Security Rules: 
Allow 3260 from <<CBS iSCSI subnet range>> to Any
Deny all

Outbound Security Rules:
Allow 3260 from Any to <<CBS iSCSI subnet range>>
Deny all

Example:

NSG configuration on host VM vNET, the cbs-nics subnet

Azure Firewall configuration

All non-data-path traffic from CBS (the system and management interfaces) can be routed through Azure Firewall or another type of network virtual appliance.

Because SSL pinning might be in place, the following example uses an application rule that allows traffic to the FQDNs *.purestorage.com and management.azure.com on port 443 (HTTPS) without TLS inspection.

Application rules in Azure Firewall (TLS inspection)
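
As a hedged sketch, a similar application rule can be created with the Azure CLI (classic firewall rules; this requires the azure-firewall CLI extension, and the firewall, resource group, and collection names are assumptions). If the firewall is managed through a Firewall Policy, use the corresponding az network firewall policy commands instead:

# Allow CBS system/management traffic to Pure Storage and Azure management endpoints
# over HTTPS without TLS inspection. Names are placeholders; the source addresses
# are the CBS system and mngmt subnets from this example.
az network firewall application-rule create \
  --resource-group rg-hub \
  --firewall-name azfw-hub \
  --collection-name cbs-outbound \
  --name allow-cbs-endpoints \
  --priority 200 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 192.168.100.0/24 192.168.102.0/24 \
  --target-fqdns '*.purestorage.com' management.azure.com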

Routing configurations

Forced Tunneling

Routing to Azure Firewall is achieved via a route table with a User-Defined Route (UDR) for 0.0.0.0/0 whose next hop is the Azure Firewall private IP (10.0.1.4 in our example).

Forced tunneling to Azure Firewall via UDR
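
A minimal Azure CLI sketch of this route table (the resource group and route table name are assumptions; the next hop matches the firewall private IP from this example):

# Create the route table and a 0.0.0.0/0 UDR pointing at the Azure Firewall private IP.
az network route-table create \
  --resource-group rg-connectivity \
  --name rt-forced-tunneling

az network route-table route create \
  --resource-group rg-connectivity \
  --route-table-name rt-forced-tunneling \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4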

CBS vNET

To achieve forced tunneling to Azure Firewall, this routing table is attached to the system and management subnets. Traffic from these CBS interfaces will be routed via Azure Firewall.

Routing table assignment for CBS vNET
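
Assuming the route table sketched above, attaching it to the CBS system and mngmt subnets could look like this (use the route table’s resource ID if it lives in a different resource group than the vNET):

# Associate the route table with the CBS system and mngmt subnets.
# Resource group, vNET, and route table names are placeholders.
for SUBNET in system mngmt; do
  az network vnet subnet update \
    --resource-group rg-cbs1 \
    --vnet-name vnet-cbs1 \
    --name "$SUBNET" \
    --route-table rt-forced-tunneling
done

The same association is applied to the default subnet of the host VM vNET in the next step.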

Host VM vNET

To achieve forced tunneling to Azure Firewall, this routing table is attached to the default subnet. Standard traffic from the Host VM (except for the iSCSI data path) will be routed via Azure Firewall. The iSCSI path connects directly to the CBS iSCSI subnet via vNET peering.

Routing table assignment for Host VM vNET

Host VM Routing Table

To ensure the operating system on the Host VM knows when to use the secondary NIC for iSCSI traffic, you need to configure a route to the CBS iSCSI subnet.

Windows

Before configuring a route, we need to identify the interface index (Idx) of the network interface within the cbs-nics subnet.

The following commands provide information about the host VM’s network interfaces:

PS> ipconfig
...
Ethernet adapter Ethernet 3:

   Connection-specific DNS Suffix  . : 
   Link-local IPv6 Address . . . . . : fe80::5fe4:1cd8:2983:16ad%11
   IPv4 Address. . . . . . . . . . . : 192.168.2.5
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

...


PS> netsh interface ipv4 show interfaces

Idx     Met         MTU          State                Name
---  ----------  ----------  ------------  ---------------------------
  1          75  4294967295  connected     Loopback Pseudo-Interface 1
  4          10        1500  connected     Ethernet
 11          10        1500  connected     Ethernet 3

To configure a persistent route, use the following command:

route -p add <<IP range of subnet iscsi>> MASK 255.255.255.0 <<gateway IP of subnet cbs-nics>> METRIC 5015 IF <<network interface Idx>>

Tip

Use x.x.x.1 as the subnet gateway IP address; Azure reserves the first usable address in each subnet as the default gateway.

Example:

PS> route -p add 192.168.101.0 MASK 255.255.255.0 192.168.2.1 METRIC 5015 IF 11
Linux

First, we need to identify the name of the network interface within the cbs-nics subnet.

The following command provides information about the host VM’s network interfaces:

$ ip addr show

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 7c:1e:52:5d:6d:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.4/24 metric 200 brd 192.168.2.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::7e1e:52ff:fe5d:6d9c/64 scope link
       valid_lft forever preferred_lft forever

To configure the route, use the following command. Note that, unlike route -p on Windows, this route is not persistent across reboots; to persist it, add it to your distribution’s network configuration (for example netplan or ifcfg files):

$ route add -net <<IP range of subnet iscsi>> netmask 255.255.255.0 gw <<gateway IP of subnet cbs-nics>> dev <<ntw interface>> metric 5015

Tip

Use x.x.x.1 as the subnet gateway IP address; Azure reserves the first usable address in each subnet as the default gateway.

Example:

$ route add -net 192.168.101.0 netmask 255.255.255.0 gw 192.168.2.1 dev eth1 metric 5015
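
On modern distributions where the legacy route utility (part of net-tools) is not installed, the equivalent iproute2 command can be used instead; like route add, it does not survive a reboot on its own:

$ sudo ip route add 192.168.101.0/24 via 192.168.2.1 dev eth1 metric 5015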

iSCSI target configuration

To connect CBS as an iSCSI target, make sure you select the right network interface in the cbs-nics subnet to be used by the iSCSI initiator.

In our example it’s 192.168.2.5:

For Windows - the PowerShell option:

PS> New-IscsiTargetPortal -TargetPortalAddress 192.168.101.5 -InitiatorPortalAddress 192.168.2.5 -TargetPortalPortNumber 3260

#ct0
Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress 192.168.2.5 -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress 192.168.101.5

#ct1
Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress 192.168.2.5 -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress 192.168.101.8

Or in the Windows iSCSI Initiator wizard:

Windows iSCSI wizard configuration
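
For Linux hosts, a possible approach using open-iscsi is sketched below; the iface binding forces the initiator to use the secondary NIC (eth1), and the target portal IP follows the example above. Combine this with dm-multipath according to Pure Storage’s Linux recommendations:

# Create an iSCSI interface bound to the secondary NIC (eth1).
$ sudo iscsiadm -m iface -I cbs-iface --op new
$ sudo iscsiadm -m iface -I cbs-iface --op update -n iface.net_ifacename -v eth1

# Discover the CBS targets via that interface (discovery returns both controller portals)
# and log in to them.
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.101.5:3260 -I cbs-iface
$ sudo iscsiadm -m node --login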
