We're looking into the cause of a service interruption. We'll update here as soon as we have details.
UPDATE 4:30pm CST: Service was restored at 4:18pm CST. We are still looking into the root cause.
UPDATE 5:33pm CST: We believe we have identified the root cause as a large number of sustained requests. We will be working on improving rate-limiting, and also on moving specific services to different clusters.
UPDATE 2018-04-05, 2:30pm CST: After further analysis, we traced the root cause to an incorrectly placed git tag. When this tag was deployed, it triggered other issues that ultimately caused excessive forking of processes on our webservers. Though we rolled back the faulty deploy within 10 minutes, the webservers were still struggling under the increased load. To restore service, we rebooted our webservers; once service was back, we continued diagnosing the underlying problem.
To prevent this from happening in the future, we have implemented new internal procedures to review tags before deployment. (We have strict code review requirements, but they didn't cover one portion of the deployment process, and that's what bit us.)
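For readers curious what a pre-deploy tag check can look like, here is a minimal sketch of the general idea: refuse to deploy a tag unless it points at a commit reachable from the reviewed branch. The branch name, tag name, and script shape here are illustrative assumptions, not our actual tooling; the sketch builds a throwaway repo so it can run standalone.

```shell
#!/bin/sh
# Illustrative pre-deploy gate (hypothetical names, not our real pipeline):
# only allow a tag to deploy if its commit is on the reviewed branch.
set -e

# Build a throwaway repo so this sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "reviewed commit"
git branch -q -M main
git tag v1.0.0   # tag placed on a commit that went through review

REVIEWED_BRANCH=main   # assumed branch name
TAG=v1.0.0             # assumed tag name

# The actual check: is the tagged commit an ancestor of the reviewed branch?
# (A commit counts as its own ancestor, so a tag on the branch tip passes.)
if git merge-base --is-ancestor "$TAG^{commit}" "$REVIEWED_BRANCH"; then
    echo "OK: $TAG points at a reviewed commit"
else
    echo "REFUSE: $TAG is not on $REVIEWED_BRANCH" >&2
    exit 1
fi
```

A check along these lines can run as a CI step or a pre-deploy hook, so an incorrectly placed tag is rejected before it ever reaches the webservers.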
We sincerely apologize for the trouble this caused.