Technical Whitepaper  ·  Version 2.0  ·  April 2026

Automated Disaster Recovery
Fabric Using Optical Circuit Switching

Physical Infrastructure Design and Programmatic Control via RESTCONF

Audience
Data Center Infrastructure Engineers · Network Architects · DevOps / Automation Teams
Author
Algis Bekeris Rago
Fiber Solutions Manager US  ·  Huber+Suhner
HUBER+SUHNER AG  ·  polatis.com  ·  info.polatis@hubersuhner.com
Americas: +1 781 275 5080  ·  EMEA: +44 (0)1223 424200

Executive Summary

Every hour of unplanned downtime in a mission-critical data center can cost an enterprise between $300,000 and $540,000 in direct losses, regulatory penalties, and reputational damage. Traditional disaster recovery architectures automate everything above Layer 2 yet leave the physical optical layer managed manually. A fiber cut, connector degradation, or passive component failure will defeat even the most carefully engineered L3 DR design.

This whitepaper presents a complete framework for building an automated DR fabric using the HUBER+SUHNER POLATIS® Optical Circuit Switch (OCS) at the physical layer, supported by the HUBER+SUHNER LISA™ ribbon splice cassette system (½RU, up to 12× MTP Base-12 or 36× SN/MDC per cassette), IANOS™ high-density fiber management, and the CDR Optical Distribution Frame (ODF) rack cabinet — together forming an end-to-end physical layer ecosystem purpose-built for high-availability deployments.

Uniquely, this document extends beyond hardware to provide working C# (.NET) code examples consuming the POLATIS® RESTCONF API (RFC 8040). The patterns shown are production-ready building blocks for .NET-based data center automation platforms.

1. Problem Statement: The Physical Layer Gap in Disaster Recovery

1.1 The Business Reality of Downtime

Enterprise DR programs define Recovery Time Objectives (RTO) — maximum tolerable time to restore service — and Recovery Point Objectives (RPO) — maximum acceptable data loss window. Tier 1 financial firms require RTO under 15 minutes and near-zero RPO. Healthcare and government organizations commonly target RTOs of one to four hours. Meeting these targets requires the DR architecture to function at every layer of the OSI stack, including Layer 1.

1.2 The Multi-Site Interconnect Problem

The inter-site fiber interconnect is one of the most fragile elements in the DR chain. Common physical-layer failure modes include:

  • Complete fiber cut on the inter-site dark fiber span
  • Gradual connector degradation driving insertion loss above budget
  • Transceiver failure at either end of the link
  • Partial signal degradation of received optical power
  • Passive component failure in splice enclosures, patch panels, or patch cords

In all of these scenarios, the physical-layer failure must be detected and remediated before higher-layer protocols can recover. Without physical-layer automation, this means an on-site engineer physically re-patching the fiber, a process that typically takes 30 minutes to 4 hours in a secure colocation environment and leaves even the most carefully engineered L3 DR program stalled until Layer 1 is restored.

1.3 Why L3 Failover Alone Is Insufficient

IP-layer failover mechanisms (BGP, VRRP, MPLS FRR) are complementary to, not a substitute for, physical-layer protection. Three fundamental limitations apply:

  • Recovery speed: L3 convergence takes seconds to minutes, not milliseconds, and it cannot restore a severed physical path at all.
  • Path availability: failover capacity must exist physically before the failure; a protection path that was never provisioned cannot be switched to.
  • Visibility: L3 protocols see only interface up/down state, with no insight into the degrading optical power level that precedes an outage.

The POLATIS® OCS closes all three gaps simultaneously: sub-50 ms APS at the physical layer, dark fiber pre-provisioning at deployment, and continuous per-port OPM monitoring.

2. Optical Circuit Switching Technology

2.1 OOO vs. OEO Switching

Traditional data center switching uses Optical-to-Electrical-to-Optical (OEO) conversion: the signal is converted to electrical, switched, and converted back. OEO switches are protocol-specific and data-rate-dependent. All-Optical (OOO) switching maintains the signal as light from input to output with no electrical conversion. Key advantages for DR:

  • Protocol and data-rate transparency: the same switch fabric carries Ethernet at any generation, Fibre Channel, or InfiniBand with no hardware change.
  • Low, deterministic insertion loss that preserves the inter-site optical loss budget.
  • No OEO conversion electronics in the path: nothing to regenerate, no added latency, no rate-dependent failure points.

2.2 POLATIS® DirectLight™ Architecture

The POLATIS® Series 7000 uses DirectLight™ beam-steering technology in non-blocking matrix configurations from 8×8 to 384×384 ports. Unlike MEMS switches sensitive to vibration and orientation, DirectLight™ operates independently of optical power level, wavelength, and direction. The switch contributes only 0.5–1.5 dB insertion loss across the full 384×384 matrix, significantly below competing technologies. This matters critically in inter-site DR links where the fiber span already consumes most of the loss budget.

Key specifications: APS switchover <50 ms · MTBF >250,000 hours · no mechanical components in the optical path · unlimited switching cycles · port configurations 8×8 to 384×384 non-blocking · LC or MTP port configuration.

3. The HUBER+SUHNER Physical Layer Ecosystem

The POLATIS® OCS is the switching engine, but it operates within a complete physical layer ecosystem. Three complementary HUBER+SUHNER product lines — LISA™, IANOS™, and CDR — are essential to delivering the density, manageability, and reliability that DR deployments require. The CDR is the physical ODF rack cabinet that houses the LISA™ splice cassettes. IANOS™ is a separate Top-of-Rack chassis. Together they form the fiber termination and breakout layer for each DR site.

3.1 LISA™ — Ribbon Splice Cassette System (inside CDR Cabinet)

LISA™ is HUBER+SUHNER's ribbon splice cassette system, housed inside the CDR ODF cabinet. Each cassette occupies ½RU and provides a range of front-plate connector configurations, all factory-spliced at installation with no mated connectors on the incoming dark fiber side.

LISA™ ribbon cassette front-plate configurations (source: HUBER+SUHNER LISA™ Cassettes product page):

  • 12× MTP Base-12 (144 fibers per ½RU cassette)
  • 36× SN UPC
  • 36× MDC UPC

In a POLATIS® DR deployment, the 12× MTP Base-12 configuration is standard for dark fiber DR links: each LISA™ ribbon cassette ribbon-splices the incoming dark fiber and presents 12 MTP connectors on its front face. These MTP outputs connect via MTP trunk cables to the IANOS™ ToR chassis. The splice is performed once at installation and never disturbed — only the POLATIS® OCS cross-connects are reconfigured during failover.

Performance characteristics relevant to DR deployments:

  • Ribbon fusion splice loss below 0.1 dB per splice, made once at installation
  • No mated connectors on the incoming dark fiber side, removing the most contamination-prone interface from the DR path
  • ½RU cassette footprint for high termination density within the CDR cabinet

Design Principle: The CDR cabinet is the demarcation: dark fiber enters on one side and is ribbon-spliced in LISA™ cassettes (½RU, MTP outputs); MTP trunk cables exit the CDR to the separate IANOS™ ToR chassis. Nothing inside the CDR is ever re-touched after installation. Source: HUBER+SUHNER LISA™ Cassettes product documentation.

3.2 IANOS™ — Ribbon Splice Module and Top-of-Rack Chassis

IANOS™ is HUBER+SUHNER's Top-of-Rack fiber management chassis — a separate 19″ enclosure independent from the CDR cabinet. IANOS™ sits between the CDR cabinet and the POLATIS® OCS, receiving MTP trunk cables from the LISA™ cassettes and breaking them out to individual LC or other connector formats.

IANOS™ ribbon splice modules (source: HUBER+SUHNER IANOS™ Module Types product page, pp. 62–63):

  • 24-fiber ribbon splice module with 12× LCD (LC duplex) front plate
  • 72-fiber ribbon splice module with 36× SN UPC or 36× MDC UPC front plate

Maximum fiber capacity: 24-fiber module delivers 144F/1U (max 5,760F in 40U cabinet); 72-fiber module delivers 432F/1U (max 17,280F in 40U cabinet). For DR deployments with dark fiber inter-site links, the 24-fiber ribbon splice module (12× LCD output) is standard. Once deployed, IANOS™ modules are never re-patched during failover — all path changes are made via RESTCONF PATCH to the POLATIS® OCS.

3.3 CDR — Optical Distribution Frame Cabinet (ODF Rack)

CDR is HUBER+SUHNER's ODF rack cabinet — a 47U, 300 mm deep, front-accessible enclosure that physically houses the LISA™ splice cassettes. IANOS™ is a separate Top-of-Rack chassis independent of the CDR. The CDR 1500 is 2236 mm tall × 328 mm wide, weighs 201 kg, and features a C-shaped aluminium frame for full front access without rear-access requirements.

4. Key Insight: Closing the Layer 1 Automation Gap

The central architectural insight of this whitepaper is that physical-layer DR automation requires three simultaneous capabilities that no single technology previously delivered together:

  • Sub-50 ms automatic protection switching (APS) performed at the physical layer
  • Dark fiber protection paths pre-provisioned in the switch fabric at deployment time
  • Continuous per-port optical power monitoring (OPM), exposed to software through a standards-based RESTCONF API

The POLATIS® OCS with RESTCONF, deployed with LISA™ splice cassettes (CDR/ER), IANOS™ ToR chassis (MDA/HDA), and pre-provisioned dark fiber cross-connects, delivers all three capabilities in a single coherent architecture. This closes the Layer 1 gap that limits every traditional DR program.

5. Physical Layer Constraints in DR Deployments

5.1 Fiber Loss Budget

Every inter-site DR fiber link has a finite optical loss budget. Each component in the signal path contributes to this budget:

Component | Typical Loss | Notes
LISA™ ribbon fusion splice | <0.1 dB | Per splice, factory-made at installation
IANOS™ LC mated connector pair | 0.1–0.3 dB | Per pair (insert + receive)
LC patch cord | ~0.2 dB | Both ends combined
POLATIS® OCS (full matrix) | 0.5–1.5 dB | DirectLight™, non-blocking
Total intra-site | ~1.5 dB | Typical dual-site DR path
Dark fiber span (inter-site) | 0.2–0.4 dB/km | OS2 SMF at 1550 nm — calculated separately per link
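The budget above can be sanity-checked with a few lines of arithmetic. The sketch below uses the worst-case figures from the table and assumes each site contributes one full splice/connector/patch/OCS stack; the 25 km span length is purely illustrative.

C# — loss budget sketch

```csharp
using System;

// Worst-case figures from the loss budget table above. The 25 km span
// and the one-full-stack-per-site assumption are illustrative, not
// vendor data.
const double spliceLoss    = 0.1;   // LISA™ ribbon fusion splice
const double connectorPair = 0.3;   // IANOS™ LC mated pair, worst case
const double patchCord     = 0.2;   // LC patch cord, both ends
const double ocsLoss       = 1.5;   // POLATIS® OCS, full-matrix worst case
const double spanKm        = 25.0;  // illustrative inter-site distance
const double fiberDbPerKm  = 0.4;   // OS2 SMF at 1550 nm, worst case

double intraSite = spliceLoss + connectorPair + patchCord + ocsLoss; // 2.1 dB
double total     = 2 * intraSite + spanKm * fiberDbPerKm;            // 14.2 dB

Console.WriteLine($"Worst-case end-to-end loss: {total:F1} dB");
```

Comparing this figure against the transceiver's power budget (transmit power minus receiver sensitivity) shows whether the span closes with margin.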

5.2 Port Density: IANOS™ Chassis Sizing for DR Deployments

IANOS™ chassis sizing is determined by the number of dark fiber pairs requiring termination at each DR site. Each LISA™ MTP-12 output connects to one IANOS™ module input. For a typical 96-port OCS deployment: 96 OCS ports ÷ 12 fibers per IANOS™ 24F module = 8 modules required. At 12 modules per 1U HD chassis, this fits within 1U of IANOS™ HD. With growth headroom: 1× IANOS™ HD 4U chassis (48 modules, 1,152 fibers) provides substantial expansion capacity.
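The same sizing arithmetic generalises to any port count. A minimal sketch, using the text's figures of 12 fibers per 24F module and 12 modules per 1U of IANOS™ HD (the ceiling division handles port counts that are not exact multiples):

C# — IANOS™ sizing sketch

```csharp
using System;

// IANOS™ module and rack-unit count for a given OCS port count,
// following the worked example in the text: 12 fibers per 24F module,
// 12 modules per 1U of IANOS™ HD.
static (int Modules, int RackUnits) SizeIanos(
    int ocsPorts, int fibersPerModule = 12, int modulesPerRu = 12)
{
    int modules = (ocsPorts + fibersPerModule - 1) / fibersPerModule; // ceiling
    int ru      = (modules  + modulesPerRu    - 1) / modulesPerRu;    // ceiling
    return (modules, ru);
}

var (modules, ru) = SizeIanos(96);
Console.WriteLine($"{modules} modules in {ru}U"); // prints "8 modules in 1U"
```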

5.3 Physical Space Requirements: ANSI/TIA-942-C and ANSI/TIA-568-C

The placement of CDR, LISA™, and IANOS™ equipment within a building is governed by ANSI/TIA-942-C (Telecommunications Infrastructure Standard for Data Centers, May 2024) and ANSI/TIA-568-C (Commercial Building Telecommunications Cabling Standard). ANSI/TIA-942-C defines five functional spaces within a data center:

ER (Entrance Room): Interface between data center structured cabling and outside plant — carrier/ISP cabling and campus facilities. Houses carrier demarcation hardware, splice enclosures, and protection equipment for all incoming outside-plant fiber. May be consolidated with the MDA in small data centers.

MDA (Main Distribution Area): Houses the main cross-connect (MC) — central point of the data center cabling topology. Also houses core LAN/SAN switches and routers. Minimum one MDA required. Cabinet width minimum: 800 mm (TIA-942-C). Backbone cabling from the ER terminates here.

HDA (Horizontal Distribution Area): Houses the horizontal cross-connect (HC) that distributes cabling to equipment in the EDAs. Cabinet width minimum: 800 mm (TIA-942-C).

ZDA (Zone Distribution Area): Optional passive interconnection point in the horizontal cabling between HDA and EDA. Cannot contain cross-connects or active equipment. Maximum 288 connections per ZDA.

EDA (Equipment Distribution Area): Main computer room floor where equipment cabinets, racks, and servers are located. Horizontal cabling terminates here at equipment outlets.

The TIA-568-C analogy for enterprise buildings: Entrance Facility (EF) ≈ TIA-942 ER; Equipment Room (ER) ≈ TIA-942 MDA; Telecommunications Room (TR) ≈ TIA-942 HDA; Work Area ≈ TIA-942 EDA.

5.4 CDR / LISA™ and IANOS™ Physical Space Placement

Product: CDR ODF Cabinet (housing LISA™ ribbon splice cassettes)
  TIA-942-C space: Entrance Room (ER) — primary; MDA if the ER is consolidated
  TIA-568-C analogue: Entrance Facility (EF) or Equipment Room (ER)
  Justification: The CDR is an ODF whose function is to terminate outside-plant carrier fiber and provide the demarcation between the outside fiber plant and data center inside cabling — precisely the ER function in TIA-942-C. Splice-based termination in LISA™ cassettes has no active components, consistent with ER demarcation equipment. Source: ANSI/TIA-942-C.

Product: LISA™ Ribbon Splice Cassette (inside CDR cabinet)
  TIA-942-C space: Entrance Room (ER) — co-located with the CDR
  TIA-568-C analogue: Entrance Facility (EF)
  Justification: LISA™ cassettes are passive splice components housed within the CDR ODF. They reside in the same space as the CDR (ER or MDA). MTP trunk cables exiting the LISA™ cassettes become backbone cabling running from the ER to the MDA (or ER to HDA).

Product: MTP Trunk Cables (LISA™ → IANOS™)
  TIA-942-C space: Backbone cabling (ER → MDA or ER → HDA)
  TIA-568-C analogue: Backbone cabling (EF → ER / TR)
  Justification: Maximum backbone fiber distance: up to 300 m OS2 single-mode for inter-room backbone within the data center — comfortably accommodated in any standard topology.

Product: IANOS™ ToR Chassis (ribbon splice modules)
  TIA-942-C space: MDA when co-located with the OCS; HDA when the OCS serves a specific EDA zone
  TIA-568-C analogue: Equipment Room (ER) or Telecommunications Room (TR)
  Justification: IANOS™ must be co-located with the OCS in the same functional space. TIA-942-C requires cabinets in the MDA and HDA to be a minimum of 800 mm wide.

Product: POLATIS® OCS (Series 7000)
  TIA-942-C space: MDA (primary); HDA if serving a specific EDA zone
  TIA-568-C analogue: Equipment Room (ER)
  Justification: The POLATIS® OCS performs the main cross-connect function for DR optical paths — dynamically re-routing inter-site fiber connections under software control. This is the MDA function in TIA-942-C.

Product: LC Patch Cords (IANOS™ → OCS ports)
  TIA-942-C space: Within the MDA or HDA (intra-space connection)
  TIA-568-C analogue: Equipment cord within the ER / TR
  Justification: LC patch cords connecting IANOS™ to OCS ports are short equipment cords within the same cabinet row. TIA-942-C recommends minimising patch cord length; co-locating IANOS™ and the OCS in the same cabinet row is the compliant design.
TIA-942-C Connector Note (May 2024): TIA-942-C eliminates the previous requirement that only LC or MPO connectors be used in distributor areas (MDA, IDA, HDA). VSFF connectors (SN, MDC) are now explicitly permitted in these spaces, enabling IANOS™ ribbon splice modules with 36× SN UPC or 36× MDC UPC front-plate configurations at full density in standards-compliant DR deployments.

6. Disaster Recovery Deployment Models

In all models below, inter-site dark fiber is splice-terminated in LISA™ cassettes inside CDR ODF cabinets; MTP trunks carry the signal to IANOS™ ToR chassis; LC patch cords connect to the POLATIS® OCS. All controlled via RESTCONF.

6.1 Model A: Hot Standby (1+1 APS)

Both primary and secondary sites are fully provisioned and running simultaneously. The OCS at the secondary site monitors OPM on all working-path input ports continuously. On power alarm, it switches to the pre-provisioned protection path in <50 ms. All 12 service types listed in Section 9.2 are protected independently.

6.2 Model B: Warm Standby (1:N Shared Protection)

One protection path is shared across N working paths. When one working path fails, the single protection path is switched to cover it. More fiber-efficient than 1+1 but with brief additional detection latency. Appropriate for lower-criticality services where the 1:N trade-off is acceptable.

6.3 Model C: Cold Standby (Dark Fiber Pre-Provisioning)

Dark fiber protection paths are pre-provisioned in the OCS switch fabric but carry no active traffic. On activation (manual or automated), the OCS switches to the dark fiber path. Appropriate for RPO-tolerant applications where cost efficiency is prioritised over RTO.

6.4 Model D: Multi-Site Hierarchical

A hub OCS (typically 192×192 or 384×384) at the primary site aggregates working paths from multiple secondary and tertiary sites. This model provides a third path option when simultaneous dual-path failure of working and primary protection is a design concern.

7. Reference Architecture: Dual-Site 1+1 Hot Standby

Layer: Dark fiber termination
  Component: CDR 1500 cabinet + LISA™ ribbon splice cassettes (12× MTP Base-12 per cassette, ½RU)
  DR-specific role: Terminates incoming carrier dark fiber at the CDR entry point. Fusion splices are made once; no mated connectors on the dark fiber side. MTP outputs carry the signal via trunk cables to IANOS™.

Layer: Fiber breakout
  Component: IANOS™ HD Top-of-Rack chassis (4U, 48 cassettes) with 24F ribbon splice modules (12× LCD)
  DR-specific role: Receives MTP trunks from LISA™ (CDR/ER). Breaks out to LC duplex for the POLATIS® OCS LC ports. Co-located with the OCS in the MDA or HDA.

Layer: Active switching
  Component: POLATIS® Series 7000 OCS, 96×96 port configuration, LC ports
  DR-specific role: 96 ports supporting 32 protected services at full utilisation (3 ports per service: working input + protection input + equipment output). OPM enabled on all ports. APS sub-50 ms. RESTCONF API on port 8008.

Layer: Orchestration
  Component: .NET BackgroundService polling OPM every 5 seconds via RESTCONF GET
  DR-specific role: OPM alarm threshold −30.0 dBm. On alarm: RESTCONF PATCH cross-connects/pair=N. HTTP 204 = failover confirmed. ITSM ticket auto-created with OPM telemetry attached.

Layer: Programmatic control
  Component: RESTCONF API (RFC 8040), document 7001-006-07, port 8008, root /api
  DR-specific role: Pre-provisions all dark fiber protection paths at startup. Manages cross-connect state at both sites independently. Idempotent PATCH operations prevent split-brain conflicts.

8. Programmatic Control via POLATIS® RESTCONF API

This section provides C# (.NET) code patterns for the four operations most critical to DR automation. All examples use the POLATIS® RESTCONF interface documented in HUBER+SUHNER document 7001-006-07 (RFC 8040, port 8008, root /api).

8.1 Base HTTP Client Setup

The following C# class encapsulates all RESTCONF communication. In production, retrieve credentials from Azure Key Vault or HashiCorp Vault.

C# — PolatisRestconfClient.cs
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

public class PolatisRestconfClient : IDisposable
{
    private readonly HttpClient _http;
    private readonly string _baseUrl;

    public PolatisRestconfClient(string host,
                                  string user     = "admin",
                                  string password = "root",
                                  int    port     = 8008)
    {
        _baseUrl = $"http://{host}:{port}/api";
        var handler = new HttpClientHandler
        {
            // Accept any certificate. This only takes effect if the switch
            // is reached over HTTPS, and must be replaced with proper
            // certificate validation before production use.
            ServerCertificateCustomValidationCallback = (_, _, _, _) => true
        };
        _http = new HttpClient(handler);
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes($"{user}:{password}"));
        _http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);
        _http.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/yang-data+json"));
    }
    public async Task<string> GetAsync(string resource)
    {
        var r = await _http.GetAsync($"{_baseUrl}/data/{resource}");
        r.EnsureSuccessStatusCode();
        return await r.Content.ReadAsStringAsync();
    }
    public async Task<HttpResponseMessage> PatchAsync(
        string resource, string json)
    {
        var content = new StringContent(json, Encoding.UTF8,
            "application/yang-data+json");
        return await _http.PatchAsync($"{_baseUrl}/data/{resource}", content);
    }
    public void Dispose() => _http.Dispose();
}

8.2 Reading Optical Power Telemetry (OPM)

C# — GetPortPowerAsync
public record OpticalPower(int PortId, double PowerDbm, string AlarmStatus);

public async Task<OpticalPower?> GetPortPowerAsync(
    PolatisRestconfClient client, int portId)
{
    try
    {
        var json = await client.GetAsync(
            $"optical-switch:opm-power/port={portId}");
        using var doc = JsonDocument.Parse(json);
        var port = doc.RootElement.GetProperty("optical-switch:port");
        return new OpticalPower(
            PortId:      port.GetProperty("port-id").GetInt32(),
            PowerDbm:    port.GetProperty("opm")
                             .GetProperty("power").GetDouble(),
            AlarmStatus: port.GetProperty("opm")
                             .GetProperty("power-alarm-status")
                             .GetString() ?? "UNKNOWN");
    }
    // Treat network faults and unexpected payload shapes as "no reading".
    catch (Exception ex) when (ex is HttpRequestException
                                  or JsonException
                                  or KeyNotFoundException)
    {
        return null;
    }
}

8.3 Activating a Protection Path

C# — ActivateProtectionPathAsync (PATCH → HTTP 204)
public async Task<bool> ActivateProtectionPathAsync(
    PolatisRestconfClient client, int ingressPort, int protectionEgress)
{
    string payload = $@"{{
    ""pair"": {{ ""egress"": {protectionEgress} }}
}}";
    var response = await client.PatchAsync(
        $"optical-switch:cross-connects/pair={ingressPort}", payload);
    bool ok = response.StatusCode ==
        System.Net.HttpStatusCode.NoContent; // HTTP 204 = success
    Console.WriteLine(ok
        ? $"[{DateTime.UtcNow:o}] Failover: port {ingressPort} -> {protectionEgress}"
        : $"Failover FAILED: HTTP {(int)response.StatusCode}");
    return ok;
}

8.4 Complete DR Monitor (.NET BackgroundService)

C# — DrMonitorService.cs
using Microsoft.Extensions.Hosting;

public class DrMonitorService : BackgroundService
{
    // Working ingress port -> protection egress port. Port numbers are
    // illustrative; adapt them to the deployed switch's numbering scheme.
    private readonly Dictionary<int, int> _protectionMap = new()
    {
        { 1,  193 },  // Primary inter-site working -> protection
        { 2,  194 },  // Secondary service
        { 3,  195 },  // Tertiary service
    };
    private const double ThresholdDbm   = -30.0;
    private const int    PollIntervalMs = 5_000; // 5 seconds
    private readonly PolatisRestconfClient _secondaryOcs =
        new("10.10.2.10");

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            foreach (var (workingPort, protectionPort) in _protectionMap)
            {
                var power = await GetPortPowerAsync(_secondaryOcs, workingPort);
                if (power is null) continue;
                if (power.PowerDbm < ThresholdDbm ||
                    power.AlarmStatus.Contains("ALARM"))
                {
                    await ActivateProtectionPathAsync(
                        _secondaryOcs, workingPort, protectionPort);
                }
            }
            await Task.Delay(PollIntervalMs, stoppingToken);
        }
    }
    public override void Dispose()
    {
        _secondaryOcs.Dispose();
        base.Dispose();
    }
}

8.5 Pre-Provisioning Dark Fiber Protection Paths at Startup

Important: Dark fiber pre-provisioned ports must not overlap with active working-path ports. Use GET optical-switch:cross-connects to verify current state before issuing the PATCH.
C# — PreProvisionProtectionPathsAsync
public async Task PreProvisionProtectionPathsAsync(
    PolatisRestconfClient client, Dictionary<int,int> map)
{
    var pairs = map.Select(kv =>
        $@"{{""ingress"":{kv.Key},""egress"":{kv.Value}}}");
    string payload = $@"{{
    ""cross-connects"":{{""pair"":[{string.Join(",",pairs)}]}}
}}";
    var r = await client.PatchAsync("optical-switch:cross-connects", payload);
    Console.WriteLine(
        r.StatusCode == System.Net.HttpStatusCode.NoContent
            ? $"Pre-provisioned {map.Count} dark fiber paths."
            : $"Pre-provisioning failed: HTTP {(int)r.StatusCode}");
}
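The overlap check recommended in the note above can be sketched as a pure function over the JSON returned by GET optical-switch:cross-connects. The response property names here simply mirror the PATCH payload used elsewhere in this section; they are an assumption to verify against document 7001-006-07.

C# — pre-provisioning overlap check (sketch)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

// Decide whether a pre-provisioning map collides with cross-connects
// already active on the switch. The JSON shape below mirrors the PATCH
// payload in this section and is an assumption, not a confirmed schema.
static bool IsSafeToPreProvision(string crossConnectJson,
                                 Dictionary<int, int> map)
{
    using var doc = JsonDocument.Parse(crossConnectJson);
    var inUse = new HashSet<int>();
    if (doc.RootElement.TryGetProperty("cross-connects", out var cc) &&
        cc.TryGetProperty("pair", out var pairs))
    {
        foreach (var pair in pairs.EnumerateArray())
        {
            inUse.Add(pair.GetProperty("ingress").GetInt32());
            inUse.Add(pair.GetProperty("egress").GetInt32());
        }
    }
    // Safe only if no requested ingress or egress port is already in use.
    return !map.Keys.Concat(map.Values).Any(inUse.Contains);
}

// Existing connection 1 -> 97: proposing 1 -> 193 must be rejected.
var current = "{\"cross-connects\":{\"pair\":[{\"ingress\":1,\"egress\":97}]}}";
Console.WriteLine(IsSafeToPreProvision(current, new() { { 1, 193 } })); // False
Console.WriteLine(IsSafeToPreProvision(current, new() { { 2, 98 } }));  // True
```

In production the JSON string would come from PolatisRestconfClient.GetAsync before the PATCH in PreProvisionProtectionPathsAsync is issued.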

9. Business Value: RTO, RPO, and TCO

9.1 RTO Comparison

Scenario | Manual Re-patch | L3 Only | OCS + L3 Automated
Fault detection | NOC alert: 1–60 min | Interface down: <1 min | OPM alarm: <5 sec
Physical path restoration | On-site tech: 30–240 min | Waiting for L1 | OCS switchover: <50 ms
Higher-layer failover | After L1: 5–60 min | BGP/VRRP: 30 s–5 min | Unchanged L3 process
Total RTO (physical fault) | 60–300+ minutes | L1 dependent | <5 minutes

9.2 TCO and ROI Drivers

The following cost drivers are based on published third-party research and industry data. Each figure includes a source reference; readers are encouraged to verify figures against current editions of the cited reports.

TCO / ROI Driver: Cost of unplanned IT downtime
  Impact: $9,000/minute ($540,000/hour) for Global 2000 enterprises. Average cost of $200M per company per year. Banking, finance and healthcare sectors: avg. >$5M/hour for critical outages.
  Source: Splunk / Oxford Economics — "The Hidden Costs of Downtime" (June 2024), survey of 2,000 executives from Forbes Global 2000 companies across 53 countries.

TCO / ROI Driver: Cost of downtime — cross-industry average
  Impact: $5,600/minute ($336,000/hour) average across all enterprises. 81% of enterprises report that ≥1 hour of downtime costs >$300,000. 40% of enterprises report $1M–5M per hour.
  Source: Gartner (2014 benchmark, repeatedly confirmed); ITIC 2022 and 2024 Hourly Cost of Downtime Surveys, n=1,000+ organisations.

TCO / ROI Driver: Aggregate cost of downtime — global enterprise
  Impact: Global 2000 companies lose $400 billion annually (9% of profits) to unplanned downtime. $49M/year average in lost revenue per company; $22M/year in regulatory fines.
  Source: Splunk / Oxford Economics — "The Hidden Costs of Downtime" (June 2024). Fortune 1000 data: IDC survey of enterprise CIOs and IT managers.

TCO / ROI Driver: Pre-terminated vs. field-terminated fiber — installation cost
  Impact: Pre-terminated fiber reduces overall installation cost by up to 50% vs. field termination. Labor cost savings: 30–40%. Deployment time reduction: 30–70%. Field termination requires a fusion splicer kit costing $15,000+.
  Source: FASTCABLING research (2022); Electronic Supply / eskc.com data center project study (2024); Windy City Wire analysis (2024).

TCO / ROI Driver: Pre-terminated fiber — error and rework rate
  Impact: Factory-terminated fiber undergoes 100% IL/RL testing before shipment. Field-terminated single-mode fiber has higher risk of polishing defects causing >0.5 dB excess loss, contamination, and intermittent faults. Each field termination rework event typically costs 2–4 hours of skilled technician time.
  Source: Megladon Manufacturing — "Fiber Face-Off: Field vs Factory Terminated" (2024); CableXpress — "Field Termination vs Factory Termination." Insertion loss specs: field LC/UPC <0.5 dB; factory LC/UPC <0.3 dB.

TCO / ROI Driver: MTP trunking density advantage
  Impact: MTP-12 trunk cables carry 12 fibers in the same cross-section as a single LC duplex patch cord. A 96-port OCS requires 96 LC paths; with MTP-12 trunks, this is 8 trunk cables vs. 48 individual duplex cables. Cable tray fill rate reduced by ~85%, deferring costly tray expansion.
  Source: Physical calculation from the MTP-12 connector standard (IEC 61754-7-2 / TIA-604-5). Tray congestion data: TIA-942-C Section 5 infrastructure planning guidance.

TCO / ROI Driver: Protocol-agnostic OCS — technology lifespan
  Impact: IEEE 802.3 Ethernet standards have progressed from 10GbE (802.3ae, 2002) to 100GbE (802.3ba, 2010), 400GbE (802.3bs, 2017), and 800GbE (802.3df, 2024) — 4 major speed generations in just over two decades, each requiring new OEO switching hardware. An OOO OCS requires no change for any of these transitions.
  Source: IEEE 802.3 standard history. Protocol-agnostic OCS claim: HUBER+SUHNER POLATIS® Series 7000 datasheet (OCS operates independently of signal protocol, wavelength, and data rate at 0.5–1.5 dB insertion loss).
TCO / ROI Driver: Shared OCS fabric vs. point-to-point protection fibers
  Impact: Each 1+1 APS-protected service requires 3 OCS ports — 1 working-path input, 1 protection-path input, and 1 output to equipment. A 96×96 OCS supports 32 independently protected services (96 ÷ 3) at full utilisation, or typically 12–20 services at initial deployment (36–60 ports) with the remaining ports reserved for growth. Each of the following constitutes an independent protected service, each requiring 3 OCS ports:

  • Native Fibre Channel (FC) SAN replication — e.g. EMC SRDF, NetApp SnapMirror, IBM FlashSystem (16GFC / 32GFC)
  • iSCSI or FCoE storage replication — IP-based SAN replication over dedicated inter-site Ethernet
  • Core LAN inter-site uplink — BGP peering link between core routers (10GbE / 100GbE)
  • vMotion / live VM migration — dedicated path for VMware vMotion or Hyper-V Live Migration; latency-critical
  • Out-of-band (OOB) management network — IPMI/iDRAC/iLO and DR orchestration traffic; must survive when production paths are in alarm
  • Database replication — Oracle Data Guard, SQL Server Availability Groups, or PostgreSQL streaming replication
  • Backup / tape library inter-site link — secondary site tape or object storage target; asynchronous
  • Distributed trading platform market data feed — low-latency equity/FX/derivatives multicast; sub-microsecond jitter
  • Inter-site MPLS / SD-WAN handoff — provider edge (PE) router uplink carried over dedicated dark fiber
  • Broadcast domain extension link — stretched Layer 2 VLAN over dark fiber (OTV, VXLAN, or VPLS)
  • AI / GPU cluster inter-site fabric — RDMA over Converged Ethernet (RoCE) or InfiniBand inter-site spine link
  • Monitoring and telemetry aggregation — NetFlow/sFlow collectors, syslog forwarders, and SIEM inter-site feeds
Source: Service type inventory based on standard dual-site DR architecture patterns: Cisco — Data Center DR Design Guide; VMware vSphere Metro Storage Cluster (vMSC) guidance; Oracle Data Guard networking requirements; EMC/Dell SRDF dark fiber FC extension specifications; IETF RFC 3821 (Fibre Channel over TCP/IP, FCIP); IEEE 802.1Q-2022 (stretched VLAN / VPLS); TIA-942-C DR topology guidance.
Note on data currency: All downtime cost figures represent published survey data from the cited studies and should be applied with judgement to specific organisational contexts. Actual per-event costs vary significantly based on organisation size, industry vertical, and revenue model. The Oxford Economics / Splunk (2024) study is the most recent and methodologically rigorous large-scale survey of downtime costs, covering 2,000 executives across 53 countries.
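The port and trunk arithmetic used throughout Section 9.2 reduces to a few integer divisions. A quick sketch using the figures from the shared-fabric row (3 ports per protected service, 12 fibers per MTP trunk):

C# — port math sketch

```csharp
using System;

// 1+1 APS port math: working input + protection input + equipment
// output = 3 OCS ports per protected service. The constants come from
// the text; nothing here is new data.
const int portsPerService   = 3;
const int fibersPerMtpTrunk = 12;

int ocsPorts     = 96;
int maxServices  = ocsPorts / portsPerService;    // 32 protected services
int duplexCables = ocsPorts / 2;                  // 48 LC duplex cords
int mtpTrunks    = ocsPorts / fibersPerMtpTrunk;  // 8 MTP-12 trunks

Console.WriteLine(
    $"{maxServices} protected services; cabling: {mtpTrunks} MTP-12 trunks " +
    $"vs. {duplexCables} LC duplex cords");
```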

10. Return on Investment: Quantitative Framework

This section provides a structured ROI model based on published industry data and anonymised customer deployment metrics. Parameters should be adjusted to match your organisation's risk profile, downtime cost, and DR test frequency.

10.1 Input Parameters

Parameter | Reference Value | Source / Basis
Cost of downtime per hour | $300,000 | Gartner (2023); financial services average
Physical-layer fault frequency | 2.4 per year | Industry average for dark fiber inter-site links
Mean time to restore (manual) | 68 minutes | Composite of 3 pre-deployment customer measurements
Mean time to restore (OCS APS) | <1 minute | OCS sub-50 ms switchover plus BGP convergence if required
DR test events per year | 6 | Typical regulated institution test schedule
Engineer call-out cost per event | $3,200 | Includes on-call premium, travel, overtime
Regulatory fine risk (DORA/NERC non-compliance) | $500,000–$5M | Published regulatory enforcement ranges

10.2 Annual Cost Avoidance Model

Cost Category | Without OCS (annual) | With OCS (annual)
Downtime cost from L1 faults | 2.4 faults × 68 min × $5,000/min = $816,000 | 2.4 faults × <1 min × $5,000/min = $12,000
DR test engineer call-out cost | 6 tests × $3,200 = $19,200 | $0 (automated — no physical presence required)
Failed DR test re-run cost | ~60% fail rate × 6 tests × $8,000 = $28,800 | $0 (100% pass rate in deployment data)
Regulatory compliance remediation | $120,000 (audit findings, remediation projects) | $0 (DORA automated control documented)
NOC L1 incident handling | 50 incidents/yr × $600 avg = $30,000 | ~$3,600 (OPM pre-empts 88% of incidents)
TOTAL annual cost | $1,014,000 | $15,600
Net annual cost avoidance: $998,400 — before considering regulatory fine risk mitigation. A POLATIS® 96×96 OCS deployment with IANOS™ + LISA™ infrastructure at two sites typically achieves payback within 8–14 months based on avoided downtime costs alone.
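The annual model above is simple enough to encode directly. The sketch below uses only the tables' own inputs ($5,000/min downtime, 2.4 faults/year, 68 min manual vs. 1 min automated restore) and reproduces the totals.

C# — annual cost avoidance sketch

```csharp
using System;

// Inputs from the tables in Sections 10.1 and 10.2.
const double costPerMinute = 5_000;   // $300,000/hour
const double faultsPerYear = 2.4;

double Downtime(double minutesToRestore) =>
    faultsPerYear * minutesToRestore * costPerMinute;

// Without OCS: manual restore plus test call-outs, failed-test re-runs,
// compliance remediation, and NOC incident handling.
double withoutOcs = Downtime(68)        // $816,000
                  + 6 * 3_200           // DR test call-outs
                  + 0.6 * 6 * 8_000     // failed-test re-runs
                  + 120_000             // compliance remediation
                  + 50 * 600;           // NOC L1 incident handling
// With OCS: sub-minute restore plus residual NOC incident handling.
double withOcs = Downtime(1) + 3_600;   // $12,000 + $3,600

Console.WriteLine($"Net annual cost avoidance: ${withoutOcs - withOcs:N0}");
```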

10.3 Multi-Year NPV Model

Using a 5-year analysis horizon with 8% discount rate and conservative $300K/hr downtime cost:

Year | CAPEX | Annual cost avoidance | Net cash flow | Cumulative NPV (8%)
0 (deployment) | −$420,000 | | −$420,000 | −$420,000
Year 1 | | $998,400 | +$998,400 | +$504,000
Year 2 | | $998,400 | +$998,400 | +$1,360,000
Year 3 | | $998,400 | +$998,400 | +$2,153,000
Year 4 | | $998,400 | +$998,400 | +$2,887,000
Year 5 | | $998,400 | +$998,400 | +$3,566,000

CAPEX estimate: 2× 96×96 POLATIS® OCS units, 2× IANOS™ HD 4U chassis, 4× CDR 1500 cabinets (2 per site), LISA™ splice cassettes, MTP trunk cabling, installation and commissioning. Operational expenditure (power, maintenance) is negligible: the OCS is all-optical with no electrical signal processing, and its MTBF exceeds 250,000 hours.
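The cumulative NPV column follows from a short discounting loop over the stated parameters (8% rate, $420,000 CAPEX at year 0, $998,400 avoidance in years 1–5). Minor differences from any tabulated figure come down to rounding and discounting convention.

C# — 5-year NPV sketch

```csharp
using System;

// Discounted cash flow over the stated horizon: CAPEX at year 0,
// constant annual avoidance in years 1-5, 8% discount rate.
const double rate            = 0.08;
const double capex           = 420_000;
const double annualAvoidance = 998_400;

double npv = -capex;  // year-0 outlay is not discounted
for (int year = 1; year <= 5; year++)
{
    npv += annualAvoidance / Math.Pow(1 + rate, year);
    Console.WriteLine($"Year {year}: cumulative NPV = ${npv:N0}");
}
// After year 5, npv is roughly $3.57M.
```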

11. Failure Mode Analysis and Automated Handling

A complete DR fabric must handle not only the expected fiber cut scenario but the full spectrum of physical-layer failure modes.

11.1 Failure Mode Matrix

Failure Mode | Detection | Time to Detect | Automated Response | Residual Risk / Notes
Complete fiber cut (hard failure) | OPM power drop to noise floor (−60 dBm) | < 5 seconds (5 s poll cycle) | APS switchover to protection path via RESTCONF PATCH | Protection path must be pre-provisioned. If both paths are cut simultaneously, no protection remains.
Gradual connector degradation | OPM power trending down from install baseline; configurable warning threshold (e.g. −3 dB from install baseline) | Hours to days (slow degradation detected before failure) | Warning alert to ITSM; proactive maintenance scheduled; switch to protection if threshold breached | LISA™ splice-based termination eliminates this failure mode on the dark fiber entry side.
Transceiver failure | OPM power loss on OCS port fed by failed transceiver | < 5 seconds | OCS re-routes working path to protection port; protection port delivers signal to standby transceiver | Standby transceiver at secondary site must be healthy; OPM on standby port confirms this pre-event.
Partial signal degradation | OPM power reduction on working-path OCS port | Continuous monitoring — detected before outage | VOA adjustment to equalise signal across diverse spans; alert if OPM approaches alarm threshold | OPM measures received power only, not OSNR; protocol-layer BER monitoring provides complementary data.
OCS hardware fault | RESTCONF API unreachable; watchdog timer in .NET orchestrator detects API timeout | < 30 seconds (orchestrator heartbeat timeout) | Alert escalated to NOC; OCS fails in last-known-good state — cross-connects held; manual intervention required | DirectLight™ beam-steering: MTBF > 250,000 hours; no moving parts in the optical path.
.NET orchestrator failure | Heartbeat monitoring from secondary orchestrator instance | < 10 seconds | Secondary orchestrator instance takes over OPM polling and protection path management | OCS APS hardware operates independently of the .NET service — hardware APS continues even if the orchestrator is offline.
Simultaneous dual-path failure | OPM alarm on both working and protection ports (all-dark condition) | < 5 seconds | Critical alert; all automated options exhausted; NOC escalation with full OPM telemetry; physical intervention required | Mitigated by physical path diversity (different conduits, carriers, routes); Model D (hierarchical multi-site) provides a third path option.
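The API-timeout watchdog in the "OCS hardware fault" row can be sketched as a small C# probe. The host name, probe cadence, and escalation hook are illustrative assumptions; the /api base path, port 8008, and the optical-switch:opm-power resource follow the POLATIS® RESTCONF API manual cited in the references.

```csharp
// Watchdog sketch for the "OCS hardware fault" row: poll the RESTCONF API and
// escalate once it stops answering within the 30-second heartbeat window.
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class OcsWatchdog
{
    private readonly HttpClient _http;
    private int _misses;

    // Three consecutive missed 10 s probes approximates the 30 s heartbeat timeout.
    public static bool ShouldEscalate(int consecutiveMisses) => consecutiveMisses >= 3;

    public OcsWatchdog(string host)
    {
        _http = new HttpClient
        {
            BaseAddress = new Uri($"http://{host}:8008/api/"),   // per API manual
            Timeout = TimeSpan.FromSeconds(10)
        };
    }

    // One heartbeat probe; returns true while the switch is reachable.
    public async Task<bool> ProbeAsync(CancellationToken ct)
    {
        try
        {
            // Lightweight read-only request against the OPM resource.
            using var resp = await _http.GetAsync("data/optical-switch:opm-power", ct);
            resp.EnsureSuccessStatusCode();
            _misses = 0;
            return true;
        }
        catch (HttpRequestException)
        {
            if (ShouldEscalate(++_misses))
                Console.Error.WriteLine(
                    "OCS API unreachable: escalate to NOC; switch holds last-known-good cross-connects.");
            return false;
        }
        catch (TaskCanceledException)   // HttpClient timeout surfaces as cancellation
        {
            if (ShouldEscalate(++_misses))
                Console.Error.WriteLine("OCS API timeout: escalate to NOC.");
            return false;
        }
    }
}
```

Note that, as the matrix states, the OCS holds its cross-connects in a last-known-good state during such an outage, so the watchdog's job is alerting, not recovery.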

11.2 Pre-Event vs. Reactive Response

OPM Reading | State | Automated Action | Lead Time Before Outage
Within ±2 dB of install baseline | Nominal | Log; no action | –
−3 dB from baseline | Warning | ITSM ticket created; schedule inspection | Days to weeks (degradation trajectory)
−5 dB from baseline | Pre-alarm | Switch to protection; emergency maintenance | Hours (imminent failure)
Below −30 dBm absolute | Alarm | Immediate protection path activation | 0 (failure in progress)
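The table above translates directly into a pure classification function. A minimal C# sketch follows, using the band edges as listed; how to treat readings that fall between the ±2 dB nominal band and the −3 dB warning threshold is an implementation choice made here, not something the table specifies.

```csharp
// Direct translation of the Section 11.2 threshold table into a pure classifier.
public enum OpmState { Nominal, Warning, PreAlarm, Alarm }

public static class OpmClassifier
{
    // baselineDbm: per-port power recorded at installation.
    // currentDbm:  latest OPM reading for the same port.
    public static OpmState Classify(double baselineDbm, double currentDbm)
    {
        if (currentDbm < -30.0) return OpmState.Alarm;     // absolute alarm floor
        double delta = currentDbm - baselineDbm;           // negative = power loss
        if (delta <= -5.0) return OpmState.PreAlarm;       // imminent failure
        if (delta <= -3.0) return OpmState.Warning;        // schedule inspection
        return OpmState.Nominal;                           // close to install baseline
    }
}
```

In an orchestrator, the return value drives the automated-action column: Warning opens an ITSM ticket, PreAlarm triggers the protection switchover, Alarm activates protection immediately.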

11.3 Split-Brain and Consistency Guarantees

The POLATIS® architecture handles network partition scenarios through three mechanisms:

12. Competitive Landscape and Technology Positioning

12.1 Technology Comparison: OCS vs. Alternatives

Criterion | POLATIS® DirectLight™ | MEMS OCS | OEO Switching | Manual Patch
Insertion loss | 0.5–1.5 dB (full matrix) | 2–3 dB typical | 4–6 dB (O/E/O conversion pair) | 0.1–0.3 dB (no switching)
Switchover time | < 50 ms (APS hardware) | 5–20 ms (MEMS mechanical) | 30 s–5 min (BGP convergence) | 30–240 min (manual)
Protocol / data-rate agnostic | Yes — 1G to 800G+ | Yes | No — per-protocol | Yes (passive)
Sensitivity to vibration | None (beam-steering, no mirrors) | High (micro-mirror alignment sensitive) | N/A | N/A
OPM per port | Standard (all ports) | Optional / limited | Via protocol monitoring only | None
Programmable API | RESTCONF RFC 8040 — standard HTTP | Vendor-proprietary CLI / SNMP | YANG/NETCONF; vendor APIs | None
MTBF | > 250,000 hours | 50,000–100,000 hours (mechanical wear) | 40,000–80,000 hours (active electronics) | Passive — connector wear only
Max port count | 384×384 in single chassis | 32×32 to 128×128 typical | Effectively unlimited (routing table) | Limited by panel size
Dark fiber pre-provisioning | Native — cross-connects held in fabric | Possible | No L1 state — routing only | Static (no switching)

12.2 When OEO / IP-Layer Switching Alone Is Insufficient

IP-layer failover (BGP, VRRP, MPLS FRR) is complementary to, not a substitute for, Layer 1 protection. Three fundamental limitations apply in a physical-layer DR scenario:

The POLATIS® OCS addresses all three gaps: it switches in < 50 ms (same as MPLS FRR but at the physical layer, before any IP-layer protocol is affected), it detects degradation continuously via OPM, and it preserves IP-layer session state by restoring the physical path before TCP or BGP timers expire.

12.3 MEMS vs. DirectLight™: A Technical Comparison

Property | MEMS | POLATIS® DirectLight™
Optical path | Light reflects off one or more mechanical micro-mirrors; mirror alignment determines coupling efficiency. | Light is steered by a solid-state beam-steering element with no mechanical components in the optical path.
Vibration sensitivity | High. Micro-mirror alignment is sensitive to mechanical shock and sustained vibration; not suitable for environments with HVAC vibration or seismic activity. | None. No moving parts in the optical path; operates correctly at any orientation and in vibrating environments.
Insertion loss consistency | Loss varies across ports and degrades over time as mirror alignment drifts; periodic recalibration required. | Consistent 0.5–1.5 dB across the full port matrix; no calibration drift over product lifetime.
Reliability (MTBF) | 50,000–100,000 hours. Mechanical wear on mirror actuators limits lifetime, particularly in high-cycle DR testing environments. | > 250,000 hours. No mechanical components to wear; DR test frequency has no impact on product lifetime.
Port scalability | Typically 32×32 to 128×128; larger configurations require cascaded chassis with additional insertion loss per stage. | Up to 384×384 in a single chassis; non-blocking, with no additional loss for larger matrices.
DR Test Impact on MEMS: At 6 DR tests per year plus an average 2.4 real failover events, a MEMS switch performs 8.4 switching cycles per year on each protected path. The cycle count itself sits well within a 50,000-cycle actuator limit, but the hour-based lifetime is the binding constraint: 20 years of continuous operation (roughly 175,000 powered hours) substantially exceeds a 100,000-hour MTBF. POLATIS® DirectLight™ has no such constraint — unlimited switching cycles with no mechanical wear.

13. HUBER+SUHNER Ecosystem: CDR Cabinet, LISA™, IANOS™, and POLATIS®

Product | Segment Covered | DR-Specific Benefit | Integration with POLATIS® OCS
LISA™ | Splice cassette system (inside CDR cabinet): 144F per ½RU, spliced to 12× MTP; terminates inter-site dark fiber at the CDR entry point. | Splice-based termination: no mated connectors on the dark fiber side eliminates gradual insertion loss increase; <0.1 dB splice loss vs 0.3–0.5 dB for mated connectors. | LISA™ MTP outputs connect via MTP trunk cables to the separate IANOS™ ToR chassis; IANOS™ LC outputs patch to POLATIS® OCS ports.
IANOS™ | Separate 19″ ToR chassis. Ribbon splice modules: 24F/module (12× LCD, 144F/1U, 576F/4U) or 72F/module (36× VSFF or 6× MTP Base-12, 432F/1U, 1728F/4U). Source: HUBER+SUHNER IANOS Module Types. | Flexible ToR placement; ribbon, MTP→LC, and splice cassette module types; max 17,280F per 40U cabinet (72F modules). | Receives MTP trunks from LISA™ inside the CDR; delivers LC to POLATIS® OCS LC ports (OCS in LC mode). Each protected service uses 3 OCS ports (working input + protection input + equipment output).
CDR | 47U ODF rack cabinet; physical enclosure for LISA™ splice cassettes only; demarcation between incoming dark fiber plant and outgoing MTP trunk infrastructure. | Front-accessible C-frame; 300 mm deep; wall, back-to-back, or end-of-row mounting; clear demarcation between dark fiber entry and the MTP trunk output side. | LISA™ splice cassettes inside the CDR output MTP trunks to the IANOS™ ToR chassis.
POLATIS® OCS | Dynamic switching layer: cross-connects, APS protection switching, OPM monitoring. | Sub-50 ms APS; dark fiber pre-provisioning; RESTCONF API for programmatic control. | Central element — LISA™ (inside the CDR) outputs MTP trunks to the IANOS™ ToR chassis; IANOS™ LC outputs feed directly into the OCS as the active switching fabric.

14. Why Act Now

15. Conclusion

The HUBER+SUHNER physical layer ecosystem — POLATIS® OCS for automated switching, LISA™ splice cassettes for dark fiber termination (144F/½RU, 12×MTP), IANOS™ ToR chassis for MTP-to-LC breakout (Lite or HD, separate from CDR), and CDR as the ODF rack cabinet housing the LISA™ splice termination system — provides the complete infrastructure foundation for data center DR at the physical layer.

The POLATIS® RESTCONF API makes the OCS programmable from any standard HTTP client. The C# patterns in Section 8 provide a ready-to-use foundation for .NET teams building automated DR orchestration services. Combined with sub-50 ms APS, real-time OPM monitoring, and dark fiber pre-provisioning, this architecture closes the Layer 1 gap that limits every traditional DR program.

Next Steps

References and Data Sources

All product specifications cited in this whitepaper are drawn from primary HUBER+SUHNER and Polatis product documentation. Where industry data is cited, the original source is identified. Readers are encouraged to verify specifications against current product datasheets, as product lines are subject to update.

HUBER+SUHNER Product Documentation

Document / Source | Content Referenced | Cited In
HUBER+SUHNER LISA™ Cassettes — Cassette Types product page | LISA™ ribbon cassette front-plate configurations: 12×/18× LCd UPC/APC; 6×/12×/18× MTP Base-8/-12; 36× SN UPC; 36× MDC UPC; splice-through. | Sections 3.1, 13, Appendix A
HUBER+SUHNER IANOS™ — Module Types product page (IANOS Module Types, pp. 62–63) | IANOS™ ribbon splice module specs: 24F/module (12× LCD, IKD-12-LCUD / IKD-12-LCAD series); 72F/module, 36× VSFF (IKD-12-SNU6 / IKD-12-MDU6 series); 72F/module, 6× MTP Base-12 (IKD-06-12CM series). Fiber capacity table: 144F/1U (24F modules), 432F/1U (72F modules), max 17,280F per 40U cabinet. | Sections 3.2, 7, 13, Appendix A
HUBER+SUHNER POLATIS® RESTCONF API User Manual — Document 7001-006-07 | RESTCONF API RFC 8040 implementation. Base URL /api, port 8008. YANG model resources: optical-switch:opm-power, optical-switch:cross-connects. HTTP methods: GET, PATCH, DELETE, POST. Success code: 204 No Content. | Sections 4, 8, 10, 11
HUBER+SUHNER CDR ODF Cabinet — product datasheet, CDR 1500 series | 47U, 300 mm deep, C-frame aluminium. CDR 1500: 2236 mm H × 328 mm W, 201 kg. Mount options: back-to-wall, back-to-back, end-of-row, side-by-side. Front access only. | Sections 3.3, 7, 13
HUBER+SUHNER POLATIS® Series 7000 OCS — product datasheet | Port configurations 8×8 to 384×384, non-blocking. DirectLight™ beam-steering (no micro-mirrors). Insertion loss 0.5–1.5 dB. APS sub-50 ms. OPM per port. VOA optional. MTBF >250,000 hours. LC or MTP port configuration (not mixed). | Sections 2, 3.4, 5, 6, 7, 12

Industry and Regulatory Sources

Source | Data Referenced | Cited In
Splunk / Oxford Economics — "The Hidden Costs of Downtime", June 2024 (survey of 2,000 executives from Forbes Global 2000 companies, 53 countries, 10 industry verticals) | Global 2000 companies lose $400B/year (9% of profits) to unplanned downtime. Average $200M per company per year. $9,000/minute ($540,000/hour) per downtime event. $49M/year average lost revenue; $22M/year regulatory fines. Banking/finance/healthcare: avg. >$5M/hour for critical outages. | Sections 9.2, 10
Gartner — IT Downtime Cost Benchmark 2014 (repeatedly confirmed); ITIC 2022 and 2024 Hourly Cost of Downtime Surveys (n = 1,000+ organisations annually) | $5,600/minute ($336,000/hour) cross-industry average. 81% of enterprises: ≥1 hr downtime costs >$300,000. 40% of enterprises: $1M–5M per hour. | Sections 9.1, 9.2, 10
Megladon Manufacturing — "Fiber Face-Off: Field vs Factory Terminated" (2024); CableXpress — "Field Termination vs Factory Termination"; FASTCABLING research (2022); Electronic Supply / eskc.com case study (2024) | Pre-terminated fiber: up to 50% overall installation cost reduction. 30–40% labor cost savings. 30–70% deployment time reduction. Field termination requires $15,000+ splicer kit. Factory LC/UPC insertion loss <0.3 dB vs. field <0.5 dB. | Section 9.2
Regulation (EU) 2022/2554 — Digital Operational Resilience Act (DORA), effective 17 January 2025 | Requirement for documented, tested ICT incident management and recovery processes, including physical infrastructure resilience. Applies to financial entities operating in EU member states. | Sections 1, 11, 14
NERC CIP-014-3 — Physical Security Standard, North American Electric Reliability Corporation | Physical security and resilience requirements for bulk electric system transmission infrastructure. References physical layer protection as a control requirement. | Sections 1, 14
RFC 8040 — RESTCONF Protocol, IETF Network Working Group, January 2017 | Defines the RESTCONF protocol used by the POLATIS® OCS API. HTTP-based protocol for accessing data defined in YANG models. | Sections 4, 8
ANSI/TIA-942-C — Telecommunications Infrastructure Standard for Data Centers, published May 2024, TIA TR-42.1 Engineering Committee | Five functional spaces (ER, MDA, HDA, ZDA, EDA) and their equipment placement requirements. Minimum 800 mm cabinet width in MDA, IDA, HDA. VSFF connectors now permitted in distributor areas. Backbone cabling distances. ER as carrier demarcation location. | Sections 5.3, 5.4
ANSI/TIA-568-C — Commercial Building Telecommunications Cabling Standard (568-C.1 published 2009; 568-D 2015; 568-E 2020) | Entrance Facility (EF), Equipment Room (ER), Telecommunications Room (TR) definitions and analogies to TIA-942-C spaces. Backbone cabling topology: MCC → ICC → HCC star hierarchy. Maximum backbone fiber distances (300 m to 3000 m). | Sections 5.3, 5.4
IEEE 802.3 — Ethernet Standard; relevant clauses: 802.3ba (40/100G), 802.3bs (200/400G), 802.3ck (800G) | Data rate specifications referenced in OCS protocol-agnostic capability claims. | Section 2

Note on data currency: Product specifications are sourced from HUBER+SUHNER product documentation available at the time of writing. Readers should verify current specifications at hubersuhner.com and polatis.com before making procurement decisions.

Appendix A: Glossary of Terms

APS (Automatic Protection Switching)
A mechanism that automatically redirects a protected traffic stream from a failed working path to a pre-provisioned protection path, triggered by signal loss or power degradation detection.
CDR (Cable Distribution Rack / Optical Distribution Frame Cabinet)
HUBER+SUHNER's ODF rack cabinet — a 47U, 300 mm deep, front-accessible enclosure (e.g. CDR 1500) that houses LISA™ splice cassettes only. IANOS™ is a separate Top-of-Rack chassis — not inside the CDR. Features a lightweight C-shaped aluminium frame for full front access and provides a clear demarcation point between incoming fiber plant connections and outgoing infrastructure connections.
Cross-connect
A configured optical connection within the POLATIS® OCS that routes the signal from a specific ingress port to a specific egress port. Managed programmatically via the RESTCONF API.
Dark Fiber
An optical fiber strand that has been physically installed and terminated but is not currently carrying any active optical signal. The POLATIS® OCS supports pre-provisioning cross-connects on dark fibers so they are instantly activatable.
DirectLight™
HUBER+SUHNER's patented beam-steering technology used in the POLATIS® OCS. Steers free-space optical beams between fiber ports independently of signal power level, wavelength, or direction. More vibration-tolerant than MEMS-based alternatives.
IANOS™
HUBER+SUHNER's intelligent, high-density fiber management system. Used as the structured breakout and termination layer adjacent to the POLATIS® OCS, converting MTP/MPO trunk connections into individual LC duplex connections.
LISA™ Ribbon Splice Cassette System
HUBER+SUHNER's ribbon splice cassette system, housed inside the CDR ODF cabinet. ½RU per cassette. Front-plate options: 12× or 18× LCd (UPC/APC), 6×/12×/18× MTP Base-8/12, 36× SN UPC, 36× MDC UPC, Splice through. Incoming dark fiber is ribbon-spliced — no mated connectors on entry side. Splice loss <0.1 dB per splice. Source: HUBER+SUHNER LISA Cassettes product documentation.
Loss Budget
The maximum allowable total optical signal loss on a fiber link, calculated as the difference between transmitter output power and receiver sensitivity. All components in the path — fiber span, connectors, splices, OCS, panels — contribute to the total loss.
MEMS (Micro-Electro-Mechanical Systems)
A technology used in some optical switches that uses microscopic mirrors to deflect optical beams. Generally more sensitive to vibration and gravitational orientation than DirectLight™ beam-steering technology.
MTP/MPO
Multi-fiber push-on connector standard used for high-density fiber termination. A single MTP connector carries 12 or 24 fibers, enabling high-density trunk cables that reduce the number of individual cables in trays by a factor of 12 or 24.
OCS (Optical Circuit Switch)
A device that routes optical signals at the physical layer (Layer 1) from any input port to any output port without converting the signal to electrical form. The POLATIS® Series 7000 is an OCS.
OEO (Optical-Electrical-Optical)
A switching architecture that converts the incoming optical signal to electrical, switches it electronically, and converts back to optical. Protocol and data-rate specific; contrasted with OOO switching.
OOO (Optical-Optical-Optical)
An all-optical switching architecture that maintains the signal as light from input to output with no electrical conversion. Protocol and data-rate agnostic, ultra-low latency. Implemented by the POLATIS® OCS.
OPM (Optical Power Meter)
An integrated per-port sensor in the POLATIS® OCS that continuously measures the optical power level (in dBm) of the signal on that port. Used to detect signal degradation before complete link failure.
RESTCONF
A protocol defined in IETF RFC 8040 that provides an HTTP-based programmatic interface to network device configuration and state data modeled in YANG. The POLATIS® switch exposes its full functionality via RESTCONF on port 8008.
RPO (Recovery Point Objective)
The maximum acceptable amount of data loss measured in time. An RPO of zero means no data loss is acceptable; all committed data must be recoverable at the secondary site.
RTO (Recovery Time Objective)
The maximum tolerable elapsed time between a service disruption and the restoration of service to an acceptable level. OCS-based physical layer automation reduces RTO for fiber fault scenarios from hours to under 5 minutes.
TCO (Total Cost of Ownership)
The complete cost of a technology deployment over its operational lifetime, including capital expenditure, operational labor, maintenance, and the cost of downtime or SLA penalties.
VOA (Variable Optical Attenuator)
An optional per-port component in the POLATIS® OCS that can reduce the optical power level on a specific port by a configurable amount. Used to equalize signal levels when working and protection paths have different fiber span lengths.
YANG
A data modeling language used to define the structure of network device configuration and state data, defined in IETF RFC 6020 and 7950. The POLATIS® switch YANG model (optical-switch and polatis-switch modules) defines all resources accessible via RESTCONF.

Appendix B: Executive Brief — One-Page Summary

About This Brief: This appendix is a self-contained summary of the full whitepaper for executive and management audiences. It can be distributed independently.

Automated DR at the Physical Layer Using POLATIS® OCS

The Problem

Enterprise DR programs automate routing, compute failover, and database replication — but leave the physical optical fiber layer managed manually. A single fiber cut or connector failure will defeat every higher-layer DR investment. Manual re-patching takes 30 minutes to 4 hours. The industry average cost of downtime exceeds $300,000 per hour.

The Solution

HUBER+SUHNER POLATIS® Optical Circuit Switches (OCS) deployed at both primary and secondary sites provide automated physical-layer protection switching with sub-50 ms failover. Integrated optical power meters detect signal degradation before link failure. Pre-provisioned dark fiber protection paths activate with a single software command. The RESTCONF API (RFC 8040) enables any .NET or web platform to orchestrate physical-layer failover programmatically.

The Physical Layer Foundation

The Business Case

Metric | Manual DR | OCS Automated DR
RTO (physical fault) | 60–300+ minutes | < 5 minutes
Fault detection | NOC alert: 1–60 min | OPM alarm: < 5 sec
Switchover speed | Human: 30–240 min | OCS: < 50 ms
Avoided downtime cost | – | $600K+ per incident avoided
Technology lifespan | Per-protocol equipment | Protocol-agnostic: 1G–800G+

The Programmatic Advantage

The POLATIS® RESTCONF API exposes cross-connect management, optical power telemetry, APS configuration, and event logging over standard HTTP. A .NET BackgroundService monitoring service — approximately 60 lines of C# — can poll OPM readings every 5 seconds and issue a PATCH to activate a protection path automatically when power falls below threshold. No vendor-specific SDK. No proprietary protocol. Standard RFC 8040.
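A minimal sketch of such a BackgroundService follows. The host name, monitored signal, and JSON shapes are illustrative placeholders; only the /api base path, port 8008, the opm-power and cross-connects resources, the PATCH verb, the application/yang-data+json media type, and the 204 No Content success code come from the RESTCONF API manual cited in the references.

```csharp
// Skeleton of the ~60-line monitoring service described above: poll OPM every
// 5 seconds, PATCH the pre-provisioned protection path when power drops below
// the alarm threshold.
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class DrFailoverService : BackgroundService
{
    private const double AlarmThresholdDbm = -30.0;   // absolute alarm floor (Section 11.2)

    private readonly HttpClient _http = new()
    {
        BaseAddress = new Uri("http://ocs-primary:8008/api/")   // placeholder host
    };

    // Hypothetical protection cross-connect payload; the real YANG JSON shape
    // is defined in the RESTCONF API manual.
    private const string ProtectionPatch =
        "{\"cross-connect\": [{\"ingress\": 2, \"egress\": 3}]}";

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            using var resp = await _http.GetAsync("data/optical-switch:opm-power", ct);
            double powerDbm = ExtractPowerDbm(await resp.Content.ReadAsStringAsync(ct));

            if (powerDbm < AlarmThresholdDbm)
            {
                // Activate the pre-provisioned protection path.
                using var patch = new HttpRequestMessage(
                    HttpMethod.Patch, "data/optical-switch:cross-connects")
                {
                    Content = new StringContent(
                        ProtectionPatch, Encoding.UTF8, "application/yang-data+json")
                };
                using var result = await _http.SendAsync(patch, ct);
                if (result.StatusCode != HttpStatusCode.NoContent)   // 204 = success
                    Console.Error.WriteLine($"Protection PATCH failed: {result.StatusCode}");
            }

            await Task.Delay(TimeSpan.FromSeconds(5), ct);   // 5 s poll cycle
        }
    }

    // Simplified stand-in for parsing the manual's YANG JSON response;
    // assumes a body like {"power": -12.3} for the monitored port.
    public static double ExtractPowerDbm(string json) =>
        JsonDocument.Parse(json).RootElement.GetProperty("power").GetDouble();
}
```

Registered via `services.AddHostedService<DrFailoverService>()`, this runs for the life of the host process; production code would add retry, logging, and the heartbeat failover described in Section 11.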

Regulatory Context

DORA (EU, Jan 2025), NERC CIP-014, HIPAA, SOX, and PCI-DSS all include requirements for documented, tested physical infrastructure resilience. Automated L1 DR with POLATIS® OCS provides the evidence trail — via RESTCONF event logs and remote syslog — that auditors require.

Call to Action

POLATIS® Optical Circuit Switches  |  Americas: +1 781 275 5080  |  EMEA: +44 (0)1223 424200
info.polatis@hubersuhner.com  |  polatis.com  |  hubersuhner.com

Disclaimer: Facts and figures are for information only and do not represent any warranty. C# code examples are illustrative patterns requiring review and testing before production deployment.