
Azure status history

This page contains Post Incident Reviews (PIRs) of previous service issues, each retained for five years. From November 20, 2019, it has included PIRs for all issues about which we communicated publicly. From June 1, 2022, it also includes PIRs for broad issues, as described in our documentation.


March 2025

3/26

Test ASH (Tracking ID 8N85-PDG)

Test

3/14

test-copy from targeted comm (Tracking ID 8TL3-99G)

We would like to inform you about something...

3/14

test-update (Tracking ID ZLK7-99Z)

test

3/11

create history channel (Tracking ID 9MK2-3TG)

test - save draft test then update to publish

3/4

Mitigated - Cosmos DB - South Central US - Connectivity Issues (Tracking ID 8N_5-VSZ)

Between 12:00 UTC and 13:00 UTC on 4 March 2025, some customers experienced connectivity issues when attempting to connect to their Cosmos DB resources in the South Central US region. This event also affected other Azure services in the region, including Virtual Machines, Storage, and Application Gateway.

Current Status: This event is now mitigated. More information will be provided shortly.

February 2025

2/28

test (Tracking ID ST_1-NTZ)

test

2/28

Bug Bash 2_28 (Tracking ID SVV0-RCG)

Bug Bash Testing

2/26

Bryan - WAN issue in the Eastern United States (Tracking ID PMW5-VTG)

Starting at 12:00 UTC on 26 February 2025, customers attempting to access various Azure resources hosted primarily in the Eastern US geography, including the East US and East US 2 Azure regions, may experience connectivity issues, degraded performance, or errors.

We have confirmed that a WAN router experienced an issue due to a recent configuration change, causing packet loss for users traversing this particular device. We have completed the rollback, and network telemetry shows traffic moving through the WAN router as expected, with limited packet loss. Downstream affected services are beginning to recover, including Storage in East US. More information will be provided in 60 minutes, or as events warrant.

2/26

Daniel - WAN issue in Eastern United States (Tracking ID ZMW5-V8Z)

We have received multiple reports of connectivity issues with various Azure resources. We are actively engaged and investigating a WAN router issue occurring in the Eastern United States geography...

2/25

test (Tracking ID 9MF4-TTZ)

test

2/10

test (Tracking ID DSB5-JZG)

test

2/6

Test (Tracking ID SNY0-TDG)

test

January 2025

1/31

PIR - Virtual Machines impacting event in West US (Tracking ID TMY0-NSZ)

Post Incident Review (PIR) - Service - Brief Impact Summary

1/24

test (Tracking ID DKJ5-NTG)

test

1/23

test (Tracking ID DS64-VCZ)

test

1/22

test (Tracking ID 8SY1-TSG)

test

1/22

test (Tracking ID CKJ5-T9Z)

test

1/21

test update (Tracking ID CS64-L8Z)

test

1/20

testUpdate (Tracking ID TK35-LSG)

test

1/20

test (Tracking ID DNY0-RPG)

test

1/19

testUpdate (Tracking ID TVJ5-JCZ)

test

1/19

test (Tracking ID 9VM0-JCZ)

test

1/19

teststatusUpdate (Tracking ID SVJ1-JDZ)

test

1/18

test (Tracking ID SN25-JCZ)

test

1/15

test (Tracking ID CL74-RSG)

test test

1/14

WEIRD TEXT ISSUE (Tracking ID ZNY1-H8Z)

By 10:50 UTC on 27 December, >99.8% of the impacted VMs had recovered, with our team re-enabling Azure’s automated detection and remediation mechanisms. Some targeted remediation efforts were required for a remaining small percentage of VMs, requiring manual intervention to bring these back online. 

Azure Cosmos DB:

For Azure Cosmos DB accounts configured with availability zones, there was no impact, and the account maintained availability for reads and writes. 

Impact on other Cosmos DB accounts varied depending on each account's regional configuration and consistency settings:

  • Database accounts configured with availability zones were not impacted by the incident, and maintained availability for reads and writes. 
  • Database accounts with multiple read regions and a single write region outside South Central US maintained availability for reads and writes if configured with session or lower consistency. Accounts using strong or bounded staleness consistency may have experienced write throttling to preserve consistency guarantees until the South Central US region was either taken offline or recovered. This behavior is by design.  
  • Active-passive database accounts with multiple read regions and a single write region in South Central US maintained read availability, but write availability was impacted until the South Central US region was taken offline or recovered. 
  • Single-region database accounts in South Central US without Availability Zone configuration were impacted if any partition resided on the affected instances.
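
The regional-configuration rules above can be sketched as a small decision function. This is illustrative only; the `Account` shape and the `availability` helper are assumptions for this sketch, not part of any Azure SDK:

```python
# Illustrative sketch: expected read/write availability of a Cosmos DB account
# during the South Central US event, per the behaviors described above.
from dataclasses import dataclass

AFFECTED = "South Central US"

@dataclass
class Account:
    zone_redundant: bool
    write_region: str
    read_regions: list
    consistency: str  # e.g. "strong", "bounded_staleness", "session", "eventual"

def availability(acct: Account) -> dict:
    """Map an account configuration to expected availability during the event."""
    if acct.zone_redundant:
        # Accounts configured with availability zones were not impacted.
        return {"reads": True, "writes": True}
    in_affected = acct.write_region == AFFECTED
    multi_region = len(acct.read_regions) > 1
    if multi_region and not in_affected:
        # Strong / bounded staleness accounts may see write throttling
        # to preserve consistency guarantees.
        throttled = acct.consistency in ("strong", "bounded_staleness")
        return {"reads": True, "writes": not throttled}
    if multi_region and in_affected:
        # Active-passive: reads stay available, writes impacted until
        # the region is taken offline or recovers.
        return {"reads": True, "writes": False}
    if in_affected:
        # Single-region account in the affected region without AZs.
        return {"reads": False, "writes": False}
    # Single-region account outside the affected region: out of scope.
    return {"reads": True, "writes": True}

print(availability(Account(False, "East US", ["East US", AFFECTED], "session")))
# → {'reads': True, 'writes': True}
```

The function mirrors the bullet list one case at a time; a real assessment would of course also depend on which partitions resided on affected instances.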

Azure SQL Database: 

For Azure SQL Databases configured with zone redundancy, there was no impact.

A subset of customers in this region experienced unavailability and slow or stuck control plane operations, such as updating the service level objective, for databases that were not configured as zone redundant. Customers with an active geo-replication configuration were asked to consider failing out of the region at approximately 22:31 UTC.

Impact duration varied. Most databases recovered after Azure Storage recovered. Some databases took an extended time to recover due to the aforementioned long recovery time of some underlying VMs.
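
For reference, the two mitigations mentioned above (zone redundancy and failing out of the region via geo-replication) can be sketched with the Azure CLI. The resource names below are placeholders, not values from this incident:

```shell
# Create a database with zone redundancy, which avoided impact in this event:
az sql db create \
  --resource-group myRg --server myServer --name myDb \
  --zone-redundant true

# For geo-replicated databases, fail out of the affected region by
# promoting the secondary replica to primary (run against the secondary server):
az sql db replica set-primary \
  --resource-group myRg --server mySecondaryServer --name myDb
```
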