Tips & Tricks


Are 4K videos recorded on the iPhone really 4K?

Yes, the video has a 4K resolution. Unfortunately, too many people mistakenly believe resolution is the be-all and end-all of picture quality. Many other factors determine what the video will look like. The first mistake people make is equating resolution with sharpness. Sharpness also depends on the lens, the size and type of the sensor, the alignment of the lens and sensor, and the nuances of the image processing applied as the image is recorded, not to mention the device on which the image is viewed. Ultimately, it’s up to each individual to decide whether any recorder, regardless of its resolution, is good enough for them. Don’t be swayed by resolution specs alone, or by the marketing hype behind them.

But it’s important to understand that “4K” isn’t one thing. 4K is a general class of video resolutions, nominally 4000 x 2000 pixels in size. Thus the term “4K”.

Today’s iPhone videos are recorded in “consumer 4K,” otherwise known as UltraHD. That’s 3840 x 2160 pixels in size. The common professional 4K standard format is called DCI-4K (Digital Cinema Initiative), which is 4096 x 2160 pixels.

The other thing to know is that, being a space-limited smartphone, the iPhones are going to record very highly compressed 4K video. Professional cameras record IPB-encoded 4K at 200–400Mb/s, and many record AVC-Intra at yet-higher bitrates. The standard for consumer 4K recording in reasonably serious cameras is around 100Mb/s. This makes sense: the consumer standard for HD, at 1/4 the resolution, was more or less 25Mb/s, depending on the exact encoding used. The iPhone standard is about 55Mb/s for 2160p60 and 24Mb/s for 2160p24. iPhones can record with H.265 (HEVC) compression rather than AVC (H.264), which helps a bit with quality at a given bitrate, but it’s still fairly low-end consumer video.
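To put those bitrates in perspective, here’s a rough sketch of how many encoded bits each format can spend per pixel. The figures are the nominal ones from the text plus assumed typical frame rates; real encoders vary widely, so treat this as back-of-the-envelope arithmetic, not a quality measurement:

```python
# Rough bits-per-pixel comparison for the bitrates mentioned above.
# Bitrates are the nominal figures from the text; frame rates for the
# non-iPhone formats are assumptions for illustration.

def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average encoded bits available per pixel per frame."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

formats = {
    "iPhone 2160p24 (24 Mb/s)": (24,  3840, 2160, 24),
    "iPhone 2160p60 (55 Mb/s)": (55,  3840, 2160, 60),
    "Consumer 4K (100 Mb/s)":   (100, 3840, 2160, 30),
    "Pro IPB 4K (400 Mb/s)":    (400, 4096, 2160, 30),
}

for name, (mbps, w, h, fps) in formats.items():
    print(f"{name}: {bits_per_pixel(mbps, w, h, fps):.3f} bits/pixel")
```

Even with these crude numbers, the professional format has roughly ten times the bit budget per pixel of the iPhone’s 2160p60 mode.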

So what’s the problem with lower-bitrate encoding? All MPEG algorithms, as well as JPEG, work basically the same way. First, the color is converted from RGB to a luma/chroma format, Y′CbCr (often loosely called YUV). Then the video stream is “subsampled”: some of the chroma samples are simply tossed out. Cinema-quality cameras don’t lose anything (they record “raw” video); higher-end 4K cameras subsample at 4:2:2, tossing out half of the chroma samples; and consumer-grade video, usually with a subsampling scheme dubbed 4:2:0, tosses out a full three-quarters of the color information.
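Those fractions can be read straight off the J:a:b subsampling notation. This little sketch (the function name is my own, not any real codec API) counts how many chroma samples survive per 4×2 block of pixels:

```python
# Fraction of chroma samples kept under 4:a:b subsampling.
# In the J:a:b notation, a 4x2 pixel block nominally carries 8 chroma
# samples per channel; "a" samples are kept in the top row and "b"
# in the bottom row.

def chroma_fraction(a, b):
    """Fraction of chroma samples retained for a 4:a:b scheme."""
    return (a + b) / 8

print("4:4:4 keeps", chroma_fraction(4, 4))  # 1.0  -> nothing discarded
print("4:2:2 keeps", chroma_fraction(2, 2))  # 0.5  -> half discarded
print("4:2:0 keeps", chroma_fraction(2, 0))  # 0.25 -> three-quarters discarded
```

The 0.25 figure for 4:2:0 is where the “three-quarters of the color information” claim above comes from.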

It’s good enough, since, at least as originally imagined, highly subsampled and compressed video was playback-only, and the things tossed out are those least noticeable to us humans. If you watch any television at home, you see it every day: DVD, Blu-ray, ATSC, DVB, cable, satellite, and streaming all use the same family of video encoding algorithms. Your eye just isn’t that sensitive to color; you have only about 6 million color receptors (cones) per eye, out of a total of roughly 130 million rods and cones.

The video is still 4K from a pixel-count perspective, but you’re not going to see the same detail encoding at 25Mb/s that you would encoding at 400Mb/s. And of course, it’s all moot unless you’re viewing the output on a 4K display, and sitting close enough to that display to actually see any detail improvements in 4K video. So if you just bought that shiny new 55″ 4K (UltraHD) television, make sure you’re not sitting much beyond 6ft/2m from the screen, or you might as well just have an HDTV.
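That 6ft/2m rule of thumb falls out of a common visual-acuity estimate: 20/20 vision resolves detail down to about 1 arcminute. Here’s a back-of-the-envelope sketch under that assumption (the function is illustrative, and the 1-arcminute figure is a rule of thumb, not a hard limit):

```python
import math

def max_useful_distance_in(diagonal_in, horiz_px, aspect=16 / 9):
    """Farthest viewing distance (inches) at which one pixel still
    subtends at least 1 arcminute -- a common 20/20 acuity rule of thumb."""
    # Screen width from the diagonal and aspect ratio.
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)
    pixel_pitch = width_in / horiz_px
    return pixel_pitch / math.tan(math.radians(1 / 60))

d_4k = max_useful_distance_in(55, 3840)  # ~3.6 ft: full 4K detail visible
d_hd = max_useful_distance_in(55, 1920)  # ~7.2 ft: HD pixels blend together
print(f"55-inch UHD: full 4K detail within ~{d_4k / 12:.1f} ft")
print(f"55-inch set: 4K offers no advantage beyond ~{d_hd / 12:.1f} ft")
```

By this estimate, a 55″ UHD set stops looking any better than HD somewhere past roughly 7 feet, which is consistent with the 6ft/2m advice above.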

You can watch your recorded video on your iPhone, but not at full 4K: iPhone displays have fewer than half the pixels of a 4K frame, so the video is downscaled for playback.

Keep visiting for more interesting content.
