Resolution Summary
We sincerely apologise for the inconvenience caused. This incident resulted from a sequence of compounding events that amplified the impact and extended the resolution time beyond what we would normally expect. Our emergency response processes were initiated; however, due to external factors, they did not function as effectively as intended.
For full transparency, please see the details below.
Overview
Between 8:00am and 9:50am AEST on 2 February 2026, Core Practice experienced a performance degradation due to a failure in the VM Autoscale automation capability. As a result, newly created virtual machines were unable to successfully install startup configurations, causing them to report an unhealthy state. This led to reduced VM capacity across the platform.
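For context on how this failure surfaces, the sketch below shows one way a startup configuration can be applied to a newly created VM as an Azure Custom Script Extension whose package is downloaded from a storage account. It is an illustrative sketch only, not our production automation: it assumes the configuration is delivered via the Custom Script Extension, and the subscription, resource group, VM, region, extension and storage names are placeholders.

    # Illustrative sketch (Python, azure-identity + azure-mgmt-compute).
    # Assumes the startup configuration is delivered as a Custom Script Extension;
    # all names below are placeholders, not production values.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    VM_NAME = "<new-vm-name>"

    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # The extension package and script are fetched from a storage account (fileUris)
    # and executed on the new VM. If the download or execution fails, the extension
    # reports a failed provisioning state, which surfaces as an unhealthy VM.
    poller = compute.virtual_machine_extensions.begin_create_or_update(
        RESOURCE_GROUP,
        VM_NAME,
        "startup-configuration",  # placeholder extension name
        {
            "location": "australiaeast",  # placeholder region
            "publisher": "Microsoft.Azure.Extensions",
            "type_properties_type": "CustomScript",
            "type_handler_version": "2.1",
            "settings": {
                "fileUris": ["https://<storage-account>.blob.core.windows.net/bootstrap/configure.sh"],
                "commandToExecute": "bash configure.sh",
            },
        },
    )
    print(poller.result().provisioning_state)  # "Succeeded" once the configuration installs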
8:34am AEST, We became aware of the issue following customer reports of system slowness.
8:42am AEST, Our engineering team initiated emergency procedures to manually override VM configurations and restore individual virtual machines to a healthy state. In parallel, a separate team began provisioning a new VM pool to restore capacity.
9:15am AEST, We identified ongoing service issues reported by our service provider, Microsoft Azure. Relevant incidents published on the Azure Status page included: https://azure.status.microsoft/en-us/status
19:46 UTC, 2 February 2026 – Incorrect notifications during VM management operations
00:02 UTC, 3 February 2026 – Issues related to storage accounts and host extension packages
Issue Amplification
The incident was further compounded by Azure reporting incorrect VM health and provisioning states. While engineers were manually overriding configuration files, Azure Monitoring continued to display affected VMs as unhealthy. Under normal circumstances, these VMs should have reported a healthy status. This misleading information diverted engineering efforts toward investigating additional, non-existent issues.
9:40am AEST, We confirmed that the manually overridden VMs were operating correctly and that the primary remaining issue was inaccurate health and provisioning status reporting within Azure Monitoring. Engineers promptly updated the configurations on the remaining VMs to restore full system functionality.
9:50am AEST, All systems were fully restored and operating normally.
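For readers interested in how the reporting discrepancy described above can be confirmed in practice, the sketch below shows the kind of direct instance-view query that cross-checks a VM's actual provisioning and extension status against what a monitoring dashboard displays. It is illustrative only and does not reproduce our internal tooling; the subscription, resource group and VM names are placeholders.

    # Illustrative sketch (Python, azure-identity + azure-mgmt-compute).
    # Queries the compute plane directly for a VM's current statuses;
    # all names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    VM_NAME = "<affected-vm-name>"

    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # The instance view carries the provisioning state, power state and per-extension
    # statuses straight from the compute plane, independent of dashboard reporting.
    view = compute.virtual_machines.instance_view(RESOURCE_GROUP, VM_NAME)
    for status in view.statuses or []:
        print(status.code, "-", status.display_status)  # e.g. ProvisioningState/succeeded
    for ext in view.extensions or []:
        for status in ext.statuses or []:
            print(ext.name, ":", status.code, "-", status.display_status)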
Root Cause
The root cause of this incident was an ongoing service issue within Microsoft Azure that affected storage accounts used by VM custom extension packages. These storage accounts are required during VM provisioning to run the startup configuration routine. This issue was further exacerbated by Azure reporting incorrect VM health and provisioning states, which delayed identification of the true operational status of affected virtual machines.
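To make this dependency concrete, the sketch below shows a simple pre-flight check that the startup configuration package is reachable in its storage account before provisioning proceeds; when such a check fails, new capacity can be held back rather than brought up unhealthy. It is illustrative only: the storage account, container and script names are placeholders, and it is not a description of our current safeguards.

    # Illustrative sketch (Python, azure-identity + azure-storage-blob).
    # Checks that the startup configuration script is reachable before provisioning;
    # all names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobClient

    blob = BlobClient(
        account_url="https://<storage-account>.blob.core.windows.net",
        container_name="bootstrap",      # placeholder container
        blob_name="configure.sh",        # placeholder script package
        credential=DefaultAzureCredential(),
    )

    # When the storage account is degraded, as in this incident, the check fails or
    # returns False, and autoscaling can pause rather than create unhealthy VMs.
    try:
        reachable = blob.exists()
    except Exception as exc:             # broad catch keeps the sketch short
        reachable = False
        print(f"Storage dependency check failed: {exc}")

    print("Startup package reachable:", reachable)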
Next Steps
We sincerely apologise for the impact this incident had on affected customers. We are continuing to work closely with Microsoft Azure to:
Improve the reliability and accuracy of health and provisioning state reporting.
Improve our emergency response notifications and procedures.
Investigate architectural improvements to increase system resiliency and automation to reduce resolution time for future incidents.