On Tuesday evening, after the launch of the new home page, we had a second set of performance problems that impacted the entire tes.com site: around 6pm for 40 minutes, and then again during two further periods at 9pm and 12am. The root cause turned out to be a misconfigured Redis caching server that we had moved to in response to the issues on the 24th of April. During the post-mortem of the previous day's issues we had agreed that a key action was to upgrade and improve the monitoring of the part of our platform that composes the shared fragments.
We had a number of site-wide performance issues on Monday 24th April that impacted the entirety of tes.com. The fix (as always) was deceptively simple, and resulted in average response times dropping from 100ms to 10ms and CPU usage on the servers falling by a factor of almost four. As part of the rebrand we have been rebuilding the services that supply shared assets to all parts of our platform, which include the core styles, images and the fragments of HTML for the masthead, footer and left-hand navigation rail.
Debugging allows us developers to assume the role of detective, and like any good detective, we need to consult all of our sources to understand what’s going on. If your application uses MongoDB for persistence, one source you have available is the oplog. What is the oplog? The MongoDB oplog, or operations log, is a standard capped MongoDB collection. Each document in the collection records a single write operation (an insert, update or delete) that has resulted in data being changed.
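To make that concrete, here is a sketch of what oplog entries look like and how you might sift through them. The field names (`ts`, `op`, `ns`, `o`) are the real ones MongoDB uses, but the sample data and the `writesTo` helper are made up for illustration:

```javascript
// Illustrative oplog entries. Real fields: `ts` (timestamp), `op` ('i' insert,
// 'u' update, 'd' delete), `ns` (namespace, i.e. "db.collection") and `o`
// (the inserted document, update spec, or deleted _id).
const sampleOplog = [
  { ts: 1, op: 'i', ns: 'shop.orders', o: { _id: 1, total: 20 } },
  { ts: 2, op: 'u', ns: 'shop.orders', o: { $set: { total: 25 } }, o2: { _id: 1 } },
  { ts: 3, op: 'd', ns: 'shop.users',  o: { _id: 7 } },
];

// Pull out every write that touched a given collection.
function writesTo(oplog, ns) {
  return oplog.filter((entry) => entry.ns === ns);
}

console.log(writesTo(sampleOplog, 'shop.orders').map((e) => e.op)); // [ 'i', 'u' ]
```

In a live replica set the same filtering can be done server-side by querying the `oplog.rs` collection in the `local` database with a tailable cursor.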
This is the second in a series of posts about improving page performance. Part 1 discussed what we're measuring and how. A video of me talking about the performance issues discussed in this post. The problem: on the job details page, we accept banners supplied by schools which aren't compressed as well as they could be. Large images don't block the rendering of the main content, but they do hog bandwidth, especially on mobile.
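One common mitigation for this kind of problem (a sketch only, not necessarily the approach taken later in the post) is to serve resized variants of each banner so a phone never downloads the full-size file. The file names below are illustrative:

```html
<!-- Hypothetical resized variants of a school-supplied banner; the browser
     picks the smallest candidate that satisfies the displayed width. -->
<img src="banner-800.jpg"
     srcset="banner-400.jpg 400w, banner-800.jpg 800w, banner-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="School banner">
```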
A video of me talking about the performance issues discussed in this post. How are we defining 'page load speed'? How quickly the user can see and interact with core page content after they navigate. Non-core content could be adverts, their user avatar, or recommended links. It's important that these appear as quickly as possible, but they're not the main reason the user navigated to the page. Where are the biggest gains to be made?
We have recently added a new feature that allows a user to upload a file from our webpage. We implemented this using redux-plupload, ClamAV and S3 to satisfy the following requirements:
- the file should be uploaded from the client, to avoid excessive memory use on the server while streaming files;
- the upload must be secure, and the file must be stored securely (and ideally encrypted at rest);
- the file should be virus-free, so that it can be downloaded without worry.
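The server-side half of a flow like this (quarantine the uploaded object, virus-scan it, then promote it to permanent storage) can be sketched as below. This is an illustrative outline only, not the tes.com implementation: the dependency names (`scanFile`, `promoteToClean`, `rejectUpload`) are made up, and injecting them keeps the flow itself testable without S3 or ClamAV:

```javascript
// Hypothetical post-upload pipeline: scan the quarantined object, then
// either promote it to the clean (encrypted) bucket or reject it.
async function handleUploadedFile(key, deps) {
  const { scanFile, promoteToClean, rejectUpload } = deps;

  const result = await scanFile(key); // e.g. ClamAV over the quarantined object
  if (result.infected) {
    await rejectUpload(key); // delete from quarantine, notify the user
    return { status: 'rejected', virus: result.virusName };
  }

  await promoteToClean(key); // e.g. copy into an encrypted S3 bucket
  return { status: 'accepted' };
}
```

Because the dependencies are injected, the happy path and the infected path can both be exercised with in-memory stubs.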
One of the issues with using public GitHub is that, well, it’s public. Even with the layers of security, all your information is ‘out there’. Somewhere. However, it is a fact of life that we all use GitHub, and many companies, large and small, choose the hosted GitHub option over running an expensive in-house GitHub Enterprise environment. The problem is that developers and operations folks sometimes push things into GitHub without thinking.
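As a minimal sketch of the kind of check a pre-push hook or repository scanner might run, the snippet below flags strings shaped like AWS access key IDs. The `AKIA` prefix followed by 16 uppercase alphanumerics is the well-known shape of an AWS access key ID (the sample value is AWS's own documented example key); everything else here is illustrative:

```javascript
// Flag lines that contain something shaped like an AWS access key ID.
const AWS_KEY_ID = /\bAKIA[0-9A-Z]{16}\b/;

function looksLikeSecret(line) {
  return AWS_KEY_ID.test(line);
}

console.log(looksLikeSecret('aws_access_key_id = AKIAIOSFODNN7EXAMPLE')); // true
console.log(looksLikeSecret('const port = 3000;')); // false
```

A real scanner would cover many more credential shapes, but even a single pattern like this catches a surprising share of accidental pushes.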