Could Helium and Heat-Assisted Magnetic Recording Outpace SSDs as the Future of Data Storage?
According to a forecast by IDC Research, “Alarmingly, the exponential [data] growth rate easily exceeds our ability to store it, even when accounting for forecast improvements in storage technologies.”
By 2019, Cisco anticipates that 55% of internet users, roughly 2 billion people, will be using personal cloud storage. Cisco also expects that by 2019, each of these users will generate 1.6 GB of consumer cloud traffic per month, up significantly from 992 MB per month in 2014.
When you factor in the growth from big data, the Internet of Things, and more, Cisco is expecting that we will reach 507 ZB per year by 2019.
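To put the per-user figure in perspective, a quick back-of-envelope calculation shows how the monthly totals add up. The user count and per-user traffic below are the Cisco projections quoted above, not measurements:

```python
# Rough aggregate of the forecast consumer cloud traffic (decimal units).
users = 2_000_000_000        # ~55% of internet users by 2019 (Cisco forecast)
gb_per_user_month = 1.6      # forecast traffic per user per month

monthly_eb = users * gb_per_user_month / 1e9   # GB -> EB
yearly_eb = monthly_eb * 12

print(f"{monthly_eb:.1f} EB/month, {yearly_eb:.1f} EB/year")
# -> 3.2 EB/month, 38.4 EB/year
```

That 38 EB per year is consumer cloud traffic alone, still a small slice of the 507 ZB total figure once big data and IoT are included.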
A growing trend in the storage industry is the Solid State Drive (SSD). According to Jim O’Reilly with TechTarget, “Storage is evolving at a phenomenal pace to meet future data center needs. We can expect all-SSD storage products to replace bulk hard drive secondary tier boxes by 2020, while either some form of local PCIe drives or vSAN will host primary storage.”
Most companies can easily justify using SSDs in servers. With SSDs, machines handle increased workloads with faster response times and clear performance gains. SSDs have lower latency than hard drives, in many cases delivering 1,000 times the IOPS with 3 to 5 times the throughput. Because SSDs are so much faster than traditional hard disk drives, the same workload can be handled by fewer servers, which can reduce overall server cost.
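The consolidation argument comes down to simple arithmetic. The sketch below uses the 1,000x IOPS ratio mentioned above; the absolute IOPS figures are illustrative assumptions, not vendor specs:

```python
# Hedged sketch: how many drives a random-I/O workload needs on HDD vs SSD.
def drives_needed(workload_iops, iops_per_drive):
    """Smallest drive count that satisfies the workload's IOPS demand."""
    return -(-workload_iops // iops_per_drive)  # ceiling division

workload = 200_000      # hypothetical aggregate IOPS demand
hdd_iops = 200          # assumed figure for a 7,200 RPM nearline HDD
ssd_iops = 200_000      # 1,000x the HDD figure, per the article

print(drives_needed(workload, hdd_iops))  # -> 1000 HDDs
print(drives_needed(workload, ssd_iops))  # -> 1 SSD
```

When a workload is IOPS-bound rather than capacity-bound, that thousand-fold gap is what lets a handful of SSD-backed servers replace racks of spindles.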
Can SSDs be too fast? With traditional RAID, they can: SSDs are fast enough to overwhelm conventional RAID configurations. According to Jim O’Reilly with TechTarget, “First, SSD pushes the limits of RAID. With SSDs, most RAID controllers become bottlenecks in RAID 5 mode, throwing away a good part of available performance. Consider that four SSDs can handle 1.6G IOPS. That's much more than any RAID controller's XOR engine can support, and also faster than what the RAID controller's CPUs handle well from an interrupt point of view. Thus, when deploying SSDs, it's better to use a RAID 1 or 10 mirror for data protection, which can be achieved using host software.”
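The RAID 5 penalty O’Reilly describes can be illustrated with the textbook write-penalty model: a small random write costs roughly four back-end I/Os under RAID 5 (read data, read parity, write data, write parity) but only two under RAID 1/10 (one write per mirror copy). A minimal sketch, using a hypothetical 70%-read workload:

```python
# Classic RAID write-penalty model; numbers are the textbook model,
# not measured results from any particular controller.
def backend_iops(frontend_iops, read_fraction, write_penalty):
    """Back-end I/Os the array must service for a given front-end load."""
    reads = frontend_iops * read_fraction
    writes = frontend_iops - reads
    return reads + writes * write_penalty

load = 100_000  # hypothetical front-end IOPS, 70% reads
print(backend_iops(load, 0.7, 4))  # RAID 5  -> 190000.0 back-end I/Os
print(backend_iops(load, 0.7, 2))  # RAID 10 -> 130000.0 back-end I/Os
```

Every front-end write under RAID 5 multiplies into parity work that the controller's XOR engine must absorb, which is why mirroring in host software scales better once the drives themselves stop being the bottleneck.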
Another big growth factor is that SSDs have capacity beyond the largest spinning hard drives. With traditional hard disk drives (HDD), vendors are running up against physical limitations, “Helium-filled drives and shingled recording together have made 10 TB drives possible, but any larger capacities will need either heat-assisted magnetic recording, which uses a laser to soften a spot on the magnetic media, or pre-formatted disks with isolated pits for each bit. These are both several years from production. Meanwhile, we've seen the announcement of 16 TB SSDs and we can expect SSD capacity to grow to as much as 30 TB by 2020,” says O’Reilly.
Google is also intensely aware of the problem and is looking for bigger, longer-term solutions. This is why Google is working with industry leaders such as Microsoft and academic partners such as the University of Washington to create new types of storage that can handle the load.
“The current generation of disks, often called “nearline enterprise” disks, are not optimized for this new use case; they are designed around the needs of traditional servers. Google believes it is time to develop a new kind of disks designed specifically for large-scale data centers and cloud services,” says Bill Kleyman with Data Center Knowledge.
Google’s VP of infrastructure, Eric Brewer, wants to work with industry and academia to create new types of disks built to better support cloud-based storage and data centers. Google is asking industry leaders to meet and discuss new standards for a redesign, which could potentially do away with the 3.5-inch HDD geometry that dates back to the floppy disk.
When asked about SSDs, Google agrees that they deliver better IOPS than traditional drives and could possibly be “the future of storage technologies”. However, Google says the cost per gigabyte of SSDs is still too high, and that capacity growth rates for traditional disks and SSDs are close enough that the cost gap will not change significantly within a decade. According to Kleyman, “Google does make extensive use of SSDs, but it uses them primarily for high-performance workloads and caching, which helps disk storage by shifting seeks to SSDs.”
For the next few years, it will be interesting to watch SSDs continue to grow, as the technology holds great promise for data center storage. It will be equally interesting to see how Google and other technology companies rethink data storage in the years to come.
Subscribe to our blog to get the latest data center news.