With all the talk about newer HD and 4K video standards and their implementation, we wanted to know more about Dolby Vision technology. So Techook got in touch with Dolby to get first-hand information about it. We emailed a few questions to the company officials, and they were kind enough to reply. While it was originally just meant for our own understanding, we thought, why not share it with our readers too?
Yes, our chat does get a bit technical, but here it is. If it feels like Greek to you and you would like to understand it better, do drop us a line in the comments section below and we will try to simplify it for you.
Can you share with us the precise video standards dictated by Dolby Vision, that is, the colour gamut requirements, maximum white point, minimum black point, contrast ratios and the maximum resolution supported by Dolby Vision for mobile?
In a nutshell, Dolby Vision is a transformative technology for imaging that delivers a dramatic visual experience of higher brightness, superior contrast and captivating color, bringing entertainment to life via OTT online streaming, broadcast and gaming applications. It achieves this image quality by leveraging breakthrough HDR and wide color gamut imaging technologies, both on-screen and in specially mastered content. As a result, Dolby Vision-enabled devices deliver images with much greater brightness and provide much deeper, more nuanced and detailed darks, while rendering a fuller palette of rich colors on screen.
To deliver this experience, Dolby works with today’s end-to-end ecosystem, from capture to playback. It augments 2K/HD and 4K/UHD content for the cinema, over-the-top online streaming, broadcast, Blu-ray and gaming applications, by maintaining and accurately reproducing the high dynamic range and wide color gamut of the original signal.
At the request of the industry, we have standardized the relevant components of the Dolby Vision solutions in bodies like the ITU-R, SMPTE, BDA, ETSI and others. This will ensure visibility into the technology used and facilitate broad adoption.
How does an OEM go about getting the Dolby Vision certification? Is the OEM required to manufacture their displays in any special ways?
A TV manufacturer needs to license the technology (from Dolby), use panels and backlighting technology that best complement the technology, and implement the necessary components from our licensing kits. Dolby provides guidance and support along the way.
This process typically starts with the selection of an SoC that is already Dolby Vision enabled. There is a broad selection of chipsets available for both TV as well as source devices (such as STBs, BD-players and DMAs).
We are trying to understand what factors will tip the scales in favour of HDR10 over Dolby Vision for an OEM, since both are software driven (and the latter supports HDR10 by default).
The main difference between HDR10/PQ10 and Dolby Vision is that the Dolby Vision technology suite includes dynamic metadata (containing information about the content on a scene-by-scene, and sometimes frame-by-frame, basis), and there is an intelligent display mapping unit on the playback side. The combination of the two enables accurate color reproduction, detail retention and superior contrast. HDR10/PQ10, on the other hand, is a single-ended solution that relies on the display device to “stretch and boost” the image. The net result is that a compliant device playing back Dolby Vision content will show more detail, realistic and vivid colors, and better contrast, and deliver an overall much more realistic viewing experience to the consumer.
It also maximizes the content choice for the consumer as all Dolby Vision TVs play back content in both Dolby Vision and HDR10.
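To make the dynamic-metadata idea above concrete, here is a minimal sketch. The field names are hypothetical, chosen only for illustration (the real Dolby Vision metadata format is defined in SMPTE ST 2094-10): static HDR10-style metadata carries one set of values for the whole title, while dynamic metadata carries per-scene values the display can map against.

```python
from dataclasses import dataclass

@dataclass
class StaticMetadata:
    # HDR10-style: a single record for the entire title (illustrative)
    max_content_light_nits: float

@dataclass
class SceneMetadata:
    # Dynamic-metadata style: one record per scene (illustrative fields only)
    scene_start_frame: int
    scene_min_nits: float
    scene_avg_nits: float
    scene_max_nits: float

# A dark scene and a bright scene in the same title:
scenes = [
    SceneMetadata(0,    0.001, 2.0,   45.0),    # night interior
    SceneMetadata(1200, 0.05,  180.0, 1000.0),  # sunlit exterior
]
static = StaticMetadata(max_content_light_nits=1000.0)

# With only static metadata, the display must assume every scene may reach
# 1000 nits; with per-scene metadata it knows the night scene tops out at
# 45 nits and can devote its full tone curve to shadow detail there.
for s in scenes:
    print(s.scene_start_frame, s.scene_max_nits, static.max_content_light_nits)
```

This is why the answer above says the same content can retain more detail and contrast on a Dolby Vision device: the mapping unit is never forced to guess how bright a scene actually is.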
How does the HDR10 workflow compare to the Dolby Vision workflow?
The majority of HDR content creation uses the Dolby Vision workflow. This means that the highest-quality HDR master is created first: the Dolby Vision master in 12-bit PQ with dynamic metadata. From that, all lesser versions, such as HDR10 and standard-dynamic-range legacy versions (for conventional HD video), can be derived at the push of a button. The major difference is that HDR10 does not carry dynamic metadata (scene by scene and frame by frame) like Dolby Vision does. Therefore, the HDR10 artistic intent may not be well preserved across a wide range of consumer displays and devices.
By using the Dolby Vision metadata and the Dolby Vision CM offline process, creatives can render out their key deliverables for other HDR formats and Rec. 709 SDR. This makes Dolby Vision post-production workflows the best choice for efficiently delivering all the format options needed on the market today, while best maintaining the creative intent across those formats.
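The “12-bit PQ” master mentioned above uses the Perceptual Quantizer transfer function standardized as SMPTE ST 2084, which maps absolute luminance from 0 to 10,000 nits onto the signal range. A small sketch of the PQ inverse EOTF shows why that range fits in 12 bits:

```python
def pq_inverse_eotf(nits: float) -> float:
    """SMPTE ST 2084 inverse EOTF: absolute luminance (cd/m^2) -> signal [0, 1]."""
    m1 = 2610 / 16384       # 0.1593017578125
    m2 = 2523 / 4096 * 128  # 78.84375
    c1 = 3424 / 4096        # 0.8359375
    c2 = 2413 / 4096 * 32   # 18.8515625
    c3 = 2392 / 4096 * 32   # 18.6875
    y = nits / 10000.0      # PQ is defined up to 10,000 nits
    return ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2

# 100 nits (typical SDR reference white) lands near the middle of the signal
# range, leaving the upper half of the code values for highlights:
print(round(pq_inverse_eotf(100), 3))  # ~0.508
print(pq_inverse_eotf(10000))          # 1.0
```

A 12-bit master gives 4,096 code values along this perceptually spaced curve, which is why the Dolby Vision master can serve as the source from which 10-bit HDR10 and 8-bit SDR deliverables are derived without re-grading.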
What are the real visual differences between the HDR10 and Dolby Vision VS10 playback experiences?
The differences between the two become more noticeable when the peak brightness of the TV differs (either above or below) from that of the HDR10 master, normally 1,000 nits. The Dolby Vision image will show more detail and accuracy than the HDR10 version of the same content.
This is because the Dolby Vision signal contains dynamic metadata that enables accurate reproduction, mapping and detail preservation. In the case of HDR10, the DTV SoC (System on Chip) has to rely on “extrapolation”, since there is no guide data. Thus, Dolby Vision allows for scalability across performance tiers, delivering the best-quality image on a range of TVs.
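To illustrate the mapping problem described above, here is a toy tone mapper. It is deliberately simplistic (a linear segment plus a highlight knee) and is not Dolby's display-management algorithm; the 600-nit display and 500-nit pixel are made-up numbers. The point is only that knowing the scene's true peak changes the result:

```python
def map_to_display(pixel_nits: float, source_max_nits: float,
                   display_peak_nits: float) -> float:
    """Toy tone mapper: pass through below a knee, compress highlights above it.
    Illustration only -- not Dolby's actual display-management algorithm."""
    if source_max_nits <= display_peak_nits:
        return pixel_nits  # display can show the scene exactly as mastered
    knee = display_peak_nits * 0.75
    if pixel_nits <= knee:
        return pixel_nits
    # Compress [knee, source_max] into [knee, display_peak]
    span_in = source_max_nits - knee
    span_out = display_peak_nits - knee
    return knee + (pixel_nits - knee) * span_out / span_in

display_peak = 600.0  # a hypothetical mid-range HDR TV

# Without guide data, the TV must assume the 1000-nit master peak applies to
# every scene, so a 500-nit highlight gets compressed. With per-scene metadata
# saying this scene peaks at 600 nits, the same pixel passes through untouched.
static_result = map_to_display(500.0, 1000.0, display_peak)
dynamic_result = map_to_display(500.0, 600.0, display_peak)
print(static_result, dynamic_result)  # ~463.6 vs 500.0
```

In other words, the “extrapolation” a plain HDR10 pipeline performs costs highlight accuracy exactly where the metadata-guided path does not have to.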
Since Dolby Vision on smartphones is a software-based implementation, is it in any way different from the way it is implemented on TVs using a dedicated chip?
No matter the form factor, we deliver a consistent Dolby Vision experience. It can be enabled in hardware, software or hybrid implementations, as long as the platform is capable of running the algorithms required to produce the Dolby Vision experience.
Could TVs also eventually get Dolby Vision’s software-based implementation?
Technology is always evolving, but we have nothing to announce at this time. That said, as we mentioned earlier, as long as a platform is capable of running the algorithms, it is possible.