One of the constants of the storage business is that capacity per drive keeps increasing. Spinning hard-disk drives are approaching 20 TB, while solid-state drives range from 4 TB to 16 TB, or even more if you're willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that, due to managed risk, we're unlikely to see much demand for drives over 16 TB.

Speaking with a few individuals at the show about expanding capacities, it became clear that storage customers who need high density are starting to specify maximum drive sizes based on their implementation needs. One message coming through is that storage deployments are managing risk through drive size: sure, a large-capacity drive allows for high density, but the failure of a large drive means a lot of data is lost in one go.

If we consider how data is used in the datacentre, there are several tiers based on how often the data is accessed. Long-term storage, known as cold storage, is accessed very infrequently and is typically populated with mechanical hard drives built for long-term data retention. A drive failure at this level might lose substantial archival data, or require long rebuild times. More regularly accessed storage, called nearline or warm storage, is accessed frequently but is often used as a localised cache in front of the long-term storage. For this case, imagine Netflix storing a good amount of its back catalogue for users to access: a loss of a drive here requires falling back to colder storage, and rebuild times come into play. For hot storage, the storage that has constant read/write access, we're often dealing with DRAM or large database work with many operations per second. This is where a drive failure and rebuild can result in critical issues with server uptime and availability.

Ultimately, the combination of drive size and failure rate determines the exposure to risk and downtime, and aside from engineering more reliable drives, the other variable for risk management is drive size. Based on the conversations I've had today, 16 TB seems to be the inflection point: no-one wants to lose 16 TB of data in one go, regardless of how often it is accessed, or how well a storage array handles failover.

I was told that, sure, drives above 16 TB do exist in the market, but aside from niche applications (such as those where risk is an acceptable trade-off for higher density), volumes are low. This inflection point, one would imagine, is subject to change based on how the nature of data and data analytics changes over time. Samsung's PM983 NF1 drive tops out at 16 TB, and while Intel has shown samples of 8 TB units of its long-ruler E1.L form factor, it has listed future QLC-based drives up to 32 TB. Of course, 16 TB per drive puts no limits on the number of drives per system: we have seen 1U units with 36 of these drives in the past, and Intel has been promoting up to 1 PB in a 1U form factor. It is worth noting that the market for 8 TB SATA SSDs is relatively small; no-one wants to rebuild that large a drive at 500 MB/s, which would take a minimum of 4.44 hours, dragging server uptime down to 99.95% rather than the 99.999% metric (roughly 5.26 minutes of downtime per year).
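The rebuild-time arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model only: it assumes a sequential rebuild running at the interface's full sustained throughput, which is optimistic for a real RAID rebuild under live load.

```python
def rebuild_hours(capacity_tb: float, speed_mb_s: float) -> float:
    """Hours needed to read or write the whole drive once at a fixed speed."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / speed_mb_s / 3600  # seconds -> hours

def uptime_pct(downtime_hours: float, hours_per_year: float = 8760.0) -> float:
    """Annual uptime percentage if this downtime occurs once per year."""
    return 100.0 * (1.0 - downtime_hours / hours_per_year)

# The 8 TB SATA SSD case from the article, rebuilt at ~500 MB/s:
hours = rebuild_hours(8, 500)
print(f"rebuild: {hours:.2f} h")        # ~4.44 h
print(f"uptime:  {uptime_pct(hours):.3f}%")  # ~99.949%, vs the 99.999% target
```

At 16 TB the same maths doubles the rebuild window, which is the core of the "managed risk" argument the article describes.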



83 Comments


  • dgingeri - Wednesday, March 13, 2019 - link

    I would love 4TB per drive at under $100 each.
  • nagi603 - Wednesday, March 13, 2019 - link

    This. If SSDs would have a lower price point, perhaps only double, or - and I know I'm dreaming here - 50% extra over that of cheap HDDs, (which is ~$360 with tax, using local prices, as opposed to currently that being enough for 2TB quad-level cell SSD only) I'd get rid of my 8TB archive HDDs without a second thought. My NAS system already has redundancy, and with SSDs, it would be virtually inaudible.
  • coburn_c - Thursday, March 14, 2019 - link

    If capacity stagnates, flash could catch up and create actual competition on price per gigabyte.
  • TelstarTOS - Friday, March 15, 2019 - link

    Bring 3-4TB ones to consumers.
  • philehidiot - Monday, March 18, 2019 - link

    For me, I have four SSDs and two HDDs. The SSDs have been built up over the years and have got bigger and faster as time goes by, each new one becoming the root drive and the older ones moving to storage. For my use I think I've reached the peak in terms of noticeable performance people and going faster isn't going to yield tangible results. This opens me up to maintaining performance and increasing capacity so within my budget will be 1TB drives quite readily next time. I may try an M2 NVMe drive next time if the price is right and use another drive for backing up.

    On a side note, my missus is trialling a mechanical keybaord next door and I can hear she's typing faster than I can. This SHALL NOT STAND.
  • philehidiot - Monday, March 18, 2019 - link

    "performance people"????? I've clearly been drinking too much.
    And yes, I recognise the irony of my other typos. Shut up.
  • abufrejoval - Tuesday, March 19, 2019 - link

    After some reflection:
    That headline is clickbait, and you probably noticed with the discussion it launched.

    You have two distinct points, one for HDD another for SSD: That they coincide currently at around 16TB IMHO has little to do with that specific capacity.

    For HDDs the recovery time loss could be a point, but it's related to bandwidth. If multi-active head technology becomes more prevalent, that "manageable risk" capacity point will jump with the sequential bandwidth.

    With SSDs you need to keep IOPS and capacity in balance, which means you'd have to put in more channels and switching fabrics inside the SSD to keep performance in line, either on-chip (bigger chips) or with multi-chip. Since these chips are little processing--lots of I/O they won't shrink well, so there is no pricing benefit. And if you go multi-chip, there are good reasons to go modular, which proves your point.

    So I guess we'll see some effort put into making the modular design less expensive in terms of connectors and perhaps we'll see some fixed aggregates (modular design, but fixed assembly to save on interconnector cost and reliability downturn).

    So I guess you're raising a valid point, but could have said it better :-)
  • PeachNCream - Wednesday, March 13, 2019 - link

    While formatting a 1GB hard drive in a mom and pop computer shop in the mid-1990s, all of the technicians gathered around to watch it, remarking to one another that no one would ever need that much capacity. In 2017, I got hassled for buying a new smartphone with only 1GB of RAM.

    I think it's only a matter of time until there will be a demand for that sort of density and I think that NAND has enough legs to still be around when that reality arrives...relatively soon.
  • willis936 - Wednesday, March 13, 2019 - link

    No one is saying they don't want more storage. They're saying they don't want more than a certain amount of storage if its shelf life is below a certain threshold. I mean, yeah, if you take it to the logical extreme where SSDs are literally free, then you could probably come up with a cold storage system based around replacing the SSDs relatively often, but that's a ridiculous premise. Why generate all of that waste? Why not just choose a more suitable technology?
  • Eliadbu - Thursday, March 14, 2019 - link

    They don't want larger drives since they don't want the risk of long server downtime when they need to recover in case of a faulty drive. If the recovery process were faster and less risky in terms of server downtime, then yes, they would have got higher-capacity drives.
