Service monitoring detected a build-up of unprocessed requests at 9:00AM AEDT. Engineers initiated the secondary backup memory server and began reloading data into it by 9:15AM AEDT. The cache refresh on the memory server took 15 minutes to complete, and by 9:30AM AEDT most services were restored. The affected network path was pinpointed and traffic was reconfigured to bypass it.
Next steps: We sincerely apologize for the impact to affected customers. Our investigation has assessed the best ways to improve architectural resiliency and to mitigate the issues that contributed to this incident and its widespread impact.
Please perform the following:
1. Reconnect Hicaps and Media Bridge
2. Refresh your browser
Posted Nov 18, 2019 - 09:15 AEDT
Loss of network connectivity on our memory server was determined to be the underlying root cause; it also coincided with morning peak traffic. This combination of events caused a build-up of unprocessed requests and a longer recovery time. Engineers have rerouted traffic to minimize customer impact.
Posted Nov 18, 2019 - 09:00 AEDT
Between approximately 8:45AM and 9:15AM AEDT on 18 Nov 2019, one of our servers intermittently experienced degraded performance. During this time, users may have experienced limited functionality or been unable to access the service.
Posted Nov 18, 2019 - 08:45 AEDT
This incident affected: Core Practice Application.