Although all production systems had already recovered, our off-site database backup was still lagging behind. This issue is now fixed, and we are going back to sleep.
Ticket ordering process: System available
Our automatic monitoring confirms that the system is running correctly again. We are still watching it closely for any irregularities.
Main web service: Database node issue
While the system as a whole has recovered, one of our database nodes is having trouble re-syncing after the long downtime. We are running on the remaining nodes and are working to bring the missing node back up.
Main web service: Root cause identified
According to our provider, the root cause was a major DDoS attack against another customer of the data center that was not filtered correctly due to a configuration error.
Main web service: Problems continuing
The connectivity issues persist, especially over IPv6. We are doing everything in our power to recover quickly.
Main web service: Problem identified
Our main data center provider is experiencing major internal networking issues and is losing network packets. Connections are therefore intermittently interrupted at the moment.
Because connections worked part of the time, our monitoring only alerted us late: it is the middle of the night in Europe, and at night our monitoring is configured to raise a loud alert only if the system is down for more than a few minutes.
Our data center reports that the root cause has been resolved and that the system should recover shortly.
Main web service: Outage detected
Our automatic monitoring has detected an outage. Our team has already been informed and is working to resolve the issue as quickly as possible. We are sorry for any inconvenience this causes you.