Technology

Application-Optimized SSDs: Life in the Digital Fast Lane

28/02/2023 by Matthias Poppel

In Germany, many highways have no speed limit. It is quite common to see cars cruising at 90 mph, with the occasional driver even going faster than 150 mph – that is, if weather conditions and the traffic situation permit such high-adrenaline driving. However, on rocky backcountry roads, an off-road vehicle will get you to your destination faster than any sports car, illustrating that the “tool” needs to fit the task. In the data center, data moves at the speed of light – considerably faster than any Audi, BMW, Mercedes, or Porsche. But traffic conditions in the data center vary just as widely as those of real-world roads: some data can move uninhibited on multi-lane fiber highways, while other data has to constantly move back and forth along the alleyways of multi-tier application landscapes, or cope with the perpetual rush-hour traffic of data storage.

In the data center, too, selecting suitable equipment determines how fast and efficiently one gets from A to B. This is why solid-state drives (SSDs) have widely replaced hard-disk drives (HDDs): SSDs read data about twenty times faster than HDDs and write data up to ten times faster.

SSDs are in a league of their own compared with old-fashioned spinning storage media, but there can be substantial differences in SSD read/write speeds. Much depends on the application that accesses the data: streaming media interacts with SSDs very differently than interactive cloud applications or machine learning workloads do. This is one of the reasons why the big cloud providers, the so-called hyperscalers such as AWS or Microsoft, invest the time, effort, and resources to configure, and sometimes even design, their own data center equipment.

Smaller service providers and data center operators don’t have the luxury of having servers and storage adapted to their individual specifications – they have to rely on off-the-shelf hardware. This creates various challenges. For example, standard SSDs experience latency spikes while periodically running internal housekeeping routines that reclaim previously written flash space, colorfully and fittingly named "garbage collection". These spikes can cause irritating interruptions when streaming a movie, or slow down e-commerce transactions. So being able to adjust SSD hardware to handle such issues definitely gives the Amazons and Microsofts of this world a head start.

But for data center operators, there is another way to tune SSDs to the demands of individual applications. Today, it is possible to combine state-of-the-art SSDs with specialized software that analyzes how an application actually uses them: How frequently does the application write data, and at what rate? Do writes occur randomly or sequentially? By analyzing this behavior, the SSD firmware can be optimized to deliver exactly what the application needs – while at the same time avoiding unnecessary wear on the solid-state drives.
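
To make this more concrete, here is a minimal, purely illustrative sketch of what such a workload analysis could look like in principle: it reads a simple block-level write trace and reports the write rate, throughput, and how sequential the writes are. The trace format, field names, and the example workload are assumptions made for illustration – this is not Swissbit's actual analysis software.

# Hypothetical sketch: characterize an application's write pattern from a
# simple block-level trace. Each record is (timestamp in seconds, starting
# logical block address, number of blocks written).

from dataclasses import dataclass
from typing import List


@dataclass
class WriteEvent:
    timestamp_s: float   # time of the write
    lba: int             # starting logical block address
    blocks: int          # number of blocks written


def characterize_writes(events: List[WriteEvent], block_size: int = 4096) -> dict:
    """Summarize write rate and sequentiality of a workload trace."""
    if len(events) < 2:
        return {}

    duration = events[-1].timestamp_s - events[0].timestamp_s
    total_bytes = sum(e.blocks * block_size for e in events)

    # A write counts as "sequential" if it starts exactly where the
    # previous write ended.
    sequential = sum(
        1
        for prev, cur in zip(events, events[1:])
        if cur.lba == prev.lba + prev.blocks
    )

    return {
        "writes_per_second": len(events) / duration if duration > 0 else float("inf"),
        "throughput_mib_s": total_bytes / duration / 2**20 if duration > 0 else float("inf"),
        "sequential_ratio": sequential / (len(events) - 1),
    }


# Example: a mostly sequential, streaming-style workload (synthetic data).
trace = [WriteEvent(t * 0.01, lba=1000 + t * 8, blocks=8) for t in range(1000)]
print(characterize_writes(trace))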

The Three Benefits of SSD Optimization

Optimizing SSDs for application-specific use aims to increase performance, but at the same time, data center operators can achieve total-cost-of-ownership (TCO) benefits as well. Specifically, operators benefit in three ways from this innovation:

  1. Latency reduction: SSD firmware that is optimized for an individual application can reduce read/write response times by up to a factor of two. In the example mentioned above, a colocation provider for a video streaming service could tweak the SSD firmware to avoid garbage collection during active video streaming. This means that the colo provider could offer guaranteed response times for streaming – and turn a small technical tweak into a monetizable business benefit. (A simple way to measure such latency outliers is sketched after this list.)

  2. Endurance: SSDs need to be replaced regularly, usually after three years. Stress tests show that by optimizing SSDs for the application, their service life can be extended to five years. The reason: data writes are spread out more evenly across the drive's flash memory. Hyperscalers use custom methods to achieve this. With application-optimized SSDs, however, the drive performs these adjustments continually and automatically, with no need for further intervention.

  3. Steady performance: Initial SSD read/write performance tends to decline rapidly. Usually, it will be considerably lower after only twelve to eighteen months, in some cases dropping to just one third of the initial level. Here too, application-optimized SSDs perform much better by employing app-specific adjustments to improve data write handling: their optimized firmware will not only make the SSDs last longer, but will also keep read/write performance much closer to the initial level, with a drop of less than ten percent. Combined with the extended service life, this steadier performance makes app-optimized SSDs much more cost-efficient.
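
As a rough illustration of the latency point above, the sketch below times a series of small synchronous writes and reports tail percentiles, which is where garbage-collection pauses typically show up. The file path, write count, and block size are arbitrary assumptions chosen for demonstration – this is a minimal probe, not a prescribed benchmark.

# Hypothetical sketch: spot write-latency outliers (e.g. caused by SSD
# garbage collection) by timing many small synchronous writes and
# reporting tail percentiles.

import os
import statistics
import time

PATH = "latency_probe.bin"   # assumed scratch file on the SSD under test
WRITES = 2000
CHUNK = b"\0" * 4096         # 4 KiB per write


def percentile(sorted_samples, p):
    """Nearest-rank percentile of an already sorted sample list."""
    idx = min(len(sorted_samples) - 1, int(p / 100 * len(sorted_samples)))
    return sorted_samples[idx]


latencies_ms = []
fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    for _ in range(WRITES):
        start = time.perf_counter()
        os.write(fd, CHUNK)
        os.fsync(fd)              # force the write down to the device
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)
    os.remove(PATH)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.2f} ms")
print(f"p99:    {percentile(latencies_ms, 99):.2f} ms")
print(f"p99.9:  {percentile(latencies_ms, 99.9):.2f} ms")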

Gearing up for the Digital Race

In their highly competitive market, data center operators need to adapt their storage equipment to match the application load as closely as possible. When buying a car, it is easy to see that you need to pick the one that suits your individual driving style and the road conditions you are likely to face – is it the asphalt of highways and city streets, or the rocks and gravel of the wilderness? In data center storage, the need to adjust the equipment to the task at hand is less obvious, but even more important, as the user experience of millions of customers might depend on it. Optimizing the SSD firmware for individual applications gives data center operators the pole position within their respective market segments – while faster, more durable, and more reliable solid-state drives allow them to leave the competition in the rearview mirror.

Learn more about Swissbit Data Center Solutions