Brought to you by:

  • Adorama
  • AJA Video Systems
  • ARRI
  • Backblaze
  • Bexel
  • IDX
  • LiteGear
  • LiveU
  • Marshall Electronics
  • Quantum
  • Schneider Kreuznach
  • Zeiss

Jason Coari, Director, High-Performance Storage Solutions at Quantum

Large, unstructured data sets, such as those generated by video, are often 50 times larger than the average corporate database. This unstructured data is projected to surpass 100 zettabytes worldwide by 2020. Among the several factors driving this trend, rapidly rising content resolutions are fueling the growth even further. Not only is 4K going mainstream, but 8K and high dynamic range (HDR) content are becoming more common choices for a variety of applications, such as corporate video, sports, and VR/AR.

After years of missing industry growth expectations, 2019 is expected to be the year when virtual reality (VR) and augmented reality (AR) applications finally cross the chasm into mainstream consumption. To produce this footage, media organizations are turning to 360-degree cameras and volumetric filmmaking to create amazingly immersive, three-dimensional experiences. But to deliver those experiences, media teams need to be able to ingest, work with, and store huge amounts of visual data: today's 360-degree and light-field cameras generate multiple gigabytes of data per second.

So what about sports video production, and the transformation underway to broadcast world-class sporting events in 8K and HDR formats? It may sound far off, but we are closer to that time than most people realize. In just over a year, the 2020 Olympics in Tokyo will be the first major international sporting event broadcast natively in 8K. That is not surprising, given that Japan's public broadcaster NHK was among the only companies to have built a compact broadcast camera with an 8K image sensor in 2018, and in December launched an 8K-capable broadcast channel.

These up-and-coming high-resolution formats add data to streams that must be ingested at much higher rates, and all that extra data eventually requires more storage. HDR-capable cameras typically record high-resolution RAW image files, which must be converted into a format usable by editing and effects systems before they can be worked on. HDR content therefore requires storage that is not only fast enough to keep up, but also intelligent enough to organize millions of large files so they can be played back from multiple workstations simultaneously.
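To see why ingest rates balloon at these resolutions, a quick back-of-the-envelope calculation helps. The frame rate, bit depth, and sampling below are illustrative assumptions for an uncompressed 8K HDR stream, not a broadcast specification:

```python
# Back-of-the-envelope ingest math for an uncompressed 8K HDR stream.
# The bit depth, sampling, and frame rate are illustrative assumptions.

WIDTH, HEIGHT = 7680, 4320      # 8K UHD frame
BITS_PER_SAMPLE = 10            # common HDR bit depth
SAMPLES_PER_PIXEL = 3           # RGB, no chroma subsampling
FPS = 60

bits_per_frame = WIDTH * HEIGHT * SAMPLES_PER_PIXEL * BITS_PER_SAMPLE
bytes_per_second = bits_per_frame * FPS / 8

print(f"{bytes_per_second / 1e9:.1f} GB/s per uncompressed stream")
print(f"{bytes_per_second * 3600 / 1e12:.1f} TB per hour of footage")
```

Even before RAW conversion multiplies the file count, a single stream on this order of magnitude (roughly 7.5 GB/s, or tens of terabytes per hour) makes it clear why both ingest bandwidth and capacity planning become first-order concerns.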

A New Take on All-Flash Storage — NVMe

Non-volatile memory express (NVMe) is a technology that will allow users to finally unlock the true potential of flash, dramatically reducing latencies and enabling IP-based infrastructures to achieve staggering performance numbers while also reducing expensive fiber infrastructure costs. Given this impact on performance and the ability to significantly accelerate ingest and other aspects of media workflows, G2M Research estimates the NVMe market will reach $80 billion by 2022, a fourfold increase from the estimated $20 billion market it is today. NVMe also sets the stage for truly software-defined infrastructures and can free up hardware resources to focus on advanced analytics. However, as with most new technologies, there is always a catch, and here NVMe is no different: the challenge most companies will face when deploying NVMe-based solutions is how to support the growing volumes of large files that come with higher resolutions, given the well-known capacity limitations of flash.

Simultaneous workflows with varying requirements

Most broadcast and post-production organizations run multiple workflows simultaneously at any given time, but those workflows do not all demand the same thing from the underlying storage solution. Real-time editing and color correction, for example, require fast storage, while offline work with high-resolution content does not: the media can simply be loaded onto the workstation, which does the work locally and then sends it back to a centralized storage location. Workflows with such variable storage requirements call for flexibility in the storage solution, so that performance, capacity, and cost can all be balanced.
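The routing decision described above can be sketched as a simple policy table. The tier names and workflow categories here are invented for illustration; in a real deployment the file system's policy engine would express the same mapping:

```python
# Hypothetical sketch: routing workflows to storage tiers by their
# performance needs. Tier and workflow names are invented for
# illustration, not taken from any particular product.

TIER_FOR_WORKFLOW = {
    "realtime_edit": "nvme_flash",     # needs lowest latency
    "color_correction": "nvme_flash",
    "proxy_generation": "disk_raid",   # throughput matters, latency less so
    "offline_edit": "disk_raid",       # media is copied to the workstation
    "archive": "tape_library",         # rarely read, cheapest per TB
}

def place(workflow: str) -> str:
    """Return the storage tier a new job's media should land on."""
    return TIER_FOR_WORKFLOW.get(workflow, "disk_raid")  # safe default tier
```

The point of the sketch is the shape of the decision: only a minority of workflows ever touch the expensive flash tier, which is what makes the economics work.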

Having your cake and eating it too — making NVMe solutions more cost-effective

As stated earlier, NVMe is ideal for high-performance workflows, especially tasks like real-time, high-resolution content editing. For most modern M&E post houses and broadcasters, however, many other workflows don't require that kind of storage horsepower. As long as the file system can support multiple workflows simultaneously and offers the flexibility to architect tiers of storage optimized for their specific applications, organizations have an opportunity to get all the benefits of flash far more cost-effectively. It is in this increasingly common scenario that a good file system can make all the difference, tying everything together and providing comprehensive, coordinated access to rich media content.

In practical terms, once such balance has been achieved in an organization's storage architecture, high-priority clients can access flash storage over very fast Ethernet at the same time that low-priority clients doing offline editing connect over lower-bandwidth Ethernet to NAS. Because everyone sees and accesses the same piece of content in the same centralized location, organizations can deploy workflows more efficiently: an editor can finish a real-time edit on a piece of content, and it immediately becomes available for others to edit, transcode, or overlay with text, creating a truly end-to-end optimized workflow.

The last crucial piece of the puzzle — a capacity-optimized tier

A multi-tiered storage environment that includes an archive tier is key to retaining massive volumes of valuable but infrequently accessed content cost-effectively. Policies that automatically protect and archive content give users the power to scale their workflows and, as a result, their business. When more capacity is needed, simply add more cost-effective tape storage; when more performance is needed, add more flash and a storage controller. The ability to scale capacity and performance independently is a tremendous advantage.

Flexibility is the answer

When it comes to storage, there's no one-size-fits-all solution. And given the demands being placed on today's media organizations, a storage solution that can be architected to align closely with the needs of the business is more crucial than ever before.

A storage solution and modern file system that combine the advantages of different networking types, such as SAN and NAS, and allow users to choose among various storage media, whether tape, disk, or NVMe-based flash, will ultimately provide the best blend of performance, capacity, and cost. Not only will such a balanced solution have a meaningful positive effect on the bottom line, it will also allow an organization's staff to operate at maximum efficiency.