This is what 10Bit 4:2:2 video actually means

Oct 27, 2016

John Aldred

John Aldred is a photographer with over 20 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.

The new Panasonic Lumix GH5 will record 10-bit 4:2:2 video internally, but what exactly does that mean? How will it benefit you? Isn’t video just video? Why is this such a big deal? No, video isn’t just video, and it matters a great deal.

Fortunately for us, filmmaker Griffin Hammond is here to explain. In short, 10-bit 4:2:2 offers smoother tonal transitions in colours, with less risk of banding in gradients. It makes it easier to chroma key (green screen) your footage, and to correct and grade it with minimal loss.

[Embedded YouTube video: Griffin Hammond’s explanation]

Most current DSLRs capture 8-bit 4:2:0 footage. This means that for every group of pixels four wide by two tall, colour is sampled from just two pixels on the top row and from none on the second row. Those colour samples are then expanded to fill in the gaps.


10-bit 4:2:2 means that for each 4×2 pixel grid, two colour samples are recorded from the first row as well as two from the second. This results in much cleaner footage for things like green screening, with better-defined colour around edges. The “10-bit” part is a separate property from the subsampling: those extra two bits per channel bump the number of possible colours each pixel can take from around 16.7 million to over 1 billion.
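The colour-count jump is just powers of two: with three colour channels per pixel, 8 bits per channel gives 2^24 combinations and 10 bits gives 2^30. A quick sketch:

```python
# Possible colours per pixel for a given bit depth per channel.
def colour_count(bits_per_channel, channels=3):
    return 2 ** (bits_per_channel * channels)

print(f"{colour_count(8):,}")   # 16,777,216 (~16.7 million, 8-bit)
print(f"{colour_count(10):,}")  # 1,073,741,824 (over 1 billion, 10-bit)
print(colour_count(10) // colour_count(8))  # 64x as many possible colours
```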


The best possible option is 4:4:4 footage. This means that every single pixel in each 4×2 pixel grid records its own colour information.
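The three schemes above can be summed up by counting samples. In the J:a:b notation, every 4×2 block carries 8 luma samples; “a” chroma samples are taken on the top row and “b” on the bottom row, for each of the two colour-difference channels. A small sketch of that bookkeeping:

```python
# Samples stored per 4x2 pixel block under J:a:b chroma subsampling:
# 8 luma (Y) samples, plus (a + b) samples each for Cb and Cr.
def samples_per_block(a, b):
    return 8 + 2 * (a + b)

for name, (a, b) in {"4:2:0": (2, 0), "4:2:2": (2, 2), "4:4:4": (4, 4)}.items():
    total = samples_per_block(a, b)
    print(name, total, "samples,", total / 8, "per pixel")
```

So 4:2:0 stores 1.5 samples per pixel, 4:2:2 stores 2, and 4:4:4 stores 3 — which is why 4:2:2 carries a third more raw data than 4:2:0 before any compression.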

Do you really need it? The answer is largely “it depends”. If you’re simply shooting, editing and rendering straight back out, with no colour correction, grading or fancy special effects, then probably not. If you need to chroma key or colour grade, it will make a big difference.

It’s a little bit like the difference between JPG and RAW (although 10Bit 4:2:2 still isn’t raw video). Sure, you can get a fantastic shot from a JPG if your camera’s set just right. But, if you decide you want to make changes to it in post, the information can degrade very quickly if you’re not careful.

A RAW file, on the other hand, contains so much “hidden” information that your monitor can’t display that pushing and prodding pixels in different directions doesn’t have as much of a negative effect. 10Bit 4:2:2 just gives you a lot more latitude in post to retain detail in the extremes, and more accurate colour information.

Previously, you’d need a clean HDMI output from your camera into an external recorder like the Atomos Ninja to capture this sort of information, and even then most DSLRs would only output 8-bit. That was still better than recording 8-bit 4:2:0 internally, but nowhere near as good as 10-bit 4:2:2, at least from a post-processing perspective.

Even if you decide that the GH5 isn’t the camera for your next video project, the fact that it offers 10-bit 4:2:2 internal recording is fantastic. It means that the other big players will hopefully also start to include it in their future generations of cameras.

What do you think? Has this got you excited for the GH5? Will you be considering adding one to your arsenal now? Had you already decided this was the next camera for you? Or are you holding out hope that your preferred manufacturer will now incorporate this in future models, too? Let us know in the comments.



19 responses to “This is what 10Bit 4:2:2 video actually means”

  1. Melbar

    awaiting your new and enhanced cat videos…..

  2. rutenrudi

    Yet it also means very little if the bitrate can’t hold up. Then you
    have macroblocking everywhere instead of banding, which is still awful. I
    hope they give the GH5 at least 300mbit at 4K 10-bit 422, or better yet 400.

    1. theSUBVERSIVE

      It’s not that simple; bit rate alone can hardly tell you whether there will be compression issues or not.

      A 150mbps H.264 Long GOP file can be just as good as a much higher bit rate ProRes file, with no macro blocking whatsoever.

      Each codec has its standard compression-to-quality ratio; the bit rate will depend on the codec, the type of compression and the chroma subsampling of choice.

      The 1DX and 5D offer 4K at 500mbps, but with an old Motion JPEG codec and 8-bit 422 Intra compression. Take the much more current XAVC-I codec from Sony – H.264 10-bit 422 at 240mbps for 4K 24p: despite the Canon having double the bit rate, the XAVC-I will still be better, being 10-bit instead of 8-bit, and being an H.264 Intra codec you can edit it right away, whereas you will need to transcode the Motion JPEG from Canon.

      H.265 can keep the same quality as any other codec at about 50% of the file size – but it needs much more hardware power to encode/decode.

      So image quality and bit rate are not such a direct comparison. You can compare within the same codec and the same chroma subsampling, because there the bit rate defines the compression ratio, but beyond that the bit rate won’t tell you much about the quality of the image.

      As for the GH5, I expect an H.264 Intra codec similar to Sony’s XAVC-I. Panasonic has AVC-Ultra, which is also an H.264 Intra codec, but it has a higher bit rate and is exclusive to higher-end video cameras. If so, it might be a 240mbps to 300mbps codec depending on the framerate. Another alternative would be a 240mbps Long GOP codec – the same type of codec as the GH4, but with more data and less compression. They could go 150mbps H.265 Intra, which would be equivalent to 300mbps H.264 Intra, but I doubt it.

      1. rutenrudi

        We’re talking about the GH5, so my presupposition was that it’s some sort of H.264. I didn’t mention that, so that’s surely my bad, sorry!

        Back on topic, I strongly doubt that Panasonic will use H.265. And then again, I’ve worked with H.265 enough to conclude that it’s not the magical wonder-codec that makes smaller bitrates appealing. Maybe 1/3 of the size compared to H.264.

        I desperately hope that Panasonic uses the AVC-Ultra that is specified in their whitepapers (400mbit at 25p 4K 422 10-bit All-Intra, IIRC). And I desperately hope it’s not that 240Mbit 4K 422 10-bit crap that Sony puts in many cameras, which is a real headache for me; that’s exactly the material I targeted in my initial post – 10-bit 422 is wasted on that material IMHO. In 8-bit 422, that bitrate for that codec might have been OK.

        1. theSUBVERSIVE

          Yeah, I don’t think they would go H.265. But did you test Long GOP HEVC or Intra HEVC? I’m interested in All-I H.265, because that would be about the same size as an IPB H.264 of similar quality.

          I haven’t tested it, so I don’t know – why is the XAVC-I crappy? Which camera did you test? I haven’t read complaints about that codec, so I wasn’t aware of that.

          But in theory the compression for 10-bit and 8-bit is basically the same if both are 422 – the bits are only the color space, while the chroma subsampling is the amount of data per sample. So 8-bit 422 vs 10-bit 422 won’t make any perceptible difference under the same codec and compression.

          Either way, just because Sony possibly didn’t do a good job with that compression doesn’t mean Panasonic can’t. I mean, there were a lot of AVCHD cameras out there, and Panasonic’s video from their mirrorless cameras always looked better than Sony’s. They could both have the same H.264 codec with the same compression ratio and still produce different video quality.

          I think Panasonic will try to avoid going 400mbps or higher, because that would mean cards above the U3 rating, and when sequential writing speed goes up, so does the price. 400mbps is not much higher and 50MB/s is not that hard to achieve, but if they can keep it close to 240mbps, they probably will.

          Since Olympus is using a 240mbps IPB codec for the E-M1 II DCI 4K video, maybe Panasonic will also have a 240mbps IPB codec, but with 10-bit 422.

          1. rutenrudi

            You lose me here a bit – in what way should 10-bit vs 8-bit not matter for filesize? Bit depth is not the colourspace; that’s an entirely different thing. Of course 8-bit and 10-bit differ in size, compressed or not. The colourspace would be whether you record LOG or sRGB, which indeed doesn’t matter for filesize.

            I’m not a fan of that XAVC simply because I used it for some time. It’s not as bad as some other formats, but a simple luma key can already be problematic because the codec can’t hold up. And I doubt that Panasonic does this better, because it’s not at all about the look of the video file, but about its behaviour inside the software. I’m not a colourist, so don’t read this as if I’d push the limits just for fun and create awful, overdone looks :)
            I have a neat demo file to show what I mean, but I can’t share it publicly; is there some sort of PM here where I could hook you up with it?

          2. theSUBVERSIVE

            My bad, I meant bit depth, not color spaces and gamuts. You can PM me via FB, Twitter, Instagram or Reddit.

            This is something I’m trying to understand, because when I asked about it, nobody could tell me exactly how color depth, Intra/Inter and chroma subsampling affect file size.

            Let’s say you use an H.264 codec. In theory, color depth affects file size differently depending on whether it’s an All-I or an IPB codec; the size difference between 8-bit and 10-bit files should differ between All-I and IPB. Canon has their XF-AVC codec: they use 8-bit 422 for the XC10/15 cameras but 10-bit 422 for their higher-end cameras – the 8-bit runs at 305mbps and the 10-bit at 410mbps.

            The question is, do they have a similar compression ratio? If so, can we usually expect a 33% increase for other Intra codecs as well? I’m assuming that for IPB the increase will be significantly less than 33%, since IPB compression works more efficiently – but even so, by how much would it increase?

            Intraframe codecs seem to be 2x-3x bigger than their Interframe counterparts of similar quality. But as most 420 codecs are Interframe and most 422 codecs are Intraframe, it’s a bit hard to get exact numbers and isolate how much each factor affects file size. 422 has double the colour data per sample compared to 420, but how that affects file size should also differ between encoding processes.

            I’m trying to find out the following:
            – how much bigger is an Intraframe file compared to an Interframe one? (both using the same codec, bit depth and chroma subsampling)
            – how much bigger is a 10-bit file compared to an 8-bit one, for an Intraframe codec and for an Interframe codec? (both using the same codec and chroma subsampling)
            – how much bigger is a 422 file compared to a 420 file, for an Intraframe codec and for an Interframe codec? (both using the same codec and bit depth)
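            For what it’s worth, the uncompressed baseline behind these questions can be pinned down exactly (a quick Python sketch; real codecs compress each case differently, so treat these ratios as the pre-compression starting point, not a prediction of encoded file sizes):

```python
# Uncompressed bits per pixel for a given bit depth and chroma subsampling.
# 4:2:0 stores 1.5 samples per pixel (8 Y + 2 Cb + 2 Cr per 4x2 block),
# 4:2:2 stores 2.0, and 4:4:4 stores 3.0.
SAMPLES_PER_PIXEL = {"4:2:0": 1.5, "4:2:2": 2.0, "4:4:4": 3.0}

def bits_per_pixel(depth, scheme):
    return depth * SAMPLES_PER_PIXEL[scheme]

# 10-bit vs 8-bit at the same subsampling: +25% raw data
print(bits_per_pixel(10, "4:2:2") / bits_per_pixel(8, "4:2:2"))  # 1.25

# 4:2:2 vs 4:2:0 at the same depth: +33% raw data (chroma doubles)
print(bits_per_pixel(8, "4:2:2") / bits_per_pixel(8, "4:2:0"))
```

            How far an encoder lands from those ratios depends on the codec, so the measured Intra/Inter gaps still have to come from real files.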

            I’m also trying to understand the difference between H.264 and H.265 on supported hardware and software. Intraframe is much less taxing than Interframe – which would be more demanding to edit in real time without transcoding a year from now, a 4K H.264 Long GOP video or a 4K H.265 All-I?

          3. rutenrudi

            For subsampling and frame coding it goes a bit deeper. But for bit depth it’s incredibly easy to tell how it affects filesize. Let’s start with the most basic thing, a RAW image, taking the BMCC with its uncompressed data as the example: it produces 2.5K 12-bit RAW files.
            The res is 2400×1350 px. So let’s do the math from there:
            2400 × 1350 × 12 bit
            = 2400 × 1350 × 12/8 bytes (as 8 bits are one byte)
            = 3,240,000 × 1.5
            = 4,860,000 bytes
            Now let’s divide this by 1024, because of the 1000-vs-1024 thing:
            = 4,746.094 KiB (let’s round that to 4,746)
            That’s pretty much the filesize of one .DNG frame in raw, without the metadata (so actual frames are a bit bigger; actually smaller these days, since the update that compressed the RAW a bit). Bit depth is crucial to the math behind the filesize. Once we take common compression into consideration it’s a different matter, but a higher bit depth still has a huge impact on the bitrate needed to make proper use of it.
            Chroma subsampling goes a step further, because unlike RAW we’re no longer talking about one channel. We have a debayered image and therefore three channels, usually one for luminance and two for colour difference. With lower chroma subsampling than 444 (where we could also use RGB instead of YCbCr), two of the three channels are more or less just lower resolution.
            Of course, different encoders will handle things differently, and I’m not deep enough into codec programming to judge how this actually plays out in different codecs :)
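            The same arithmetic in code, for anyone who wants to plug in other resolutions or bit depths (a sketch using the BMCC figures above):

```python
# Uncompressed single-frame size for a raw (single-channel, pre-debayer)
# image: width x height x bit depth, converted to bytes and then KiB.
def raw_frame_kib(width, height, bit_depth):
    total_bits = width * height * bit_depth
    total_bytes = total_bits / 8      # 8 bits per byte
    return total_bytes / 1024         # the 1000-vs-1024 thing

print(round(raw_frame_kib(2400, 1350, 12), 3))  # 4746.094 KiB per frame
```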

            I actually have no idea how Canon handles these things; I haven’t had any Canon material on my table for quite some time now. And I can’t say I’m sad about that :)

            -> It’s hard to tell how much bigger an All-I has to be compared to an Interframe – how would you even measure it? They’re two different things, and Inter is always somewhat lossy, as less info gets stored. My AME is somehow broken; I wanted to look at what Sony and Panasonic are up to when using the same “class” of codec for Intra vs Long GOP. The XAVC whitepapers are no help – they only describe capabilities, not actual “presets”.
            -> I never saw a codec that does both with respect to bit depth. So it’s hard to compare: mostly a codec will enable a higher bit depth but not “adjust” for it, so all other variables stay the same. Makes sense, as bit depth is also only a variable.
            -> Same as with bit depth, I’d say. There are some codecs which let you choose, but might not adjust.

            Maybe there are ways inside ffmpeg to just change one value and leave the “quality preset” so it adjusts somehow? I’m not too deep into it; maybe worth a look.

            I’ll see tomorrow how I might be able to send you that XAVC example. I can only use a cutout of the screen, but it will do the trick for what I want to show.

          4. theSUBVERSIVE

            Yeah, for RAW it’s pretty much straightforward math, but the compression part is indeed where things start to get tricky: each codec uses a different algorithm, there are different compression ratio/quality settings within a codec, bit depth and chroma subsampling get compressed differently in Intra and Inter, and like I said, most codecs don’t offer examples from which you can actually derive parameters.

            The H.264 8-bit 422 from Canon was the only one of its kind – first because it’s not worth it, but also because Canon uses 8-bit 422 as a way to deliberately separate it from their higher-end EOS-C cameras. Just like they could have used this same H.264 codec in the 1DX and 5D, but chose Motion JPEG instead just to make life harder – or more expensive. Similarly, you won’t see 10-bit 420, because it’s not worth making.

            The 240mbps codec of the upcoming Olympus E-M1 will be IPB, and I’m curious who helped Olympus with that. Among other options the E-M1 has a 200mbps All-I 1080p mode, and the only other company using that is Panasonic, so maybe there is indeed a deal in place between Panasonic and Olympus. Maybe Olympus helped Panasonic with IBIS while Panasonic helped with video, and maybe they are sharing sensor tech as well. Rumors say Panasonic will have exclusivity over 10-bit 422, and if that’s true I suspect Olympus then has the rights to the 60fps RAW burst – which would explain why Panasonic will use Photo 6K instead, as a workaround to get fast photo bursts without RAW.

            I’m not sure if Panasonic will use the same tech, or possibly the same sensor, as the E-M1. Being capable of 60fps RAW is a pretty big achievement; it means a very fast full-sensor readout, a speed never seen before. For Panasonic that could mean pretty good rolling shutter for video, and even 240fps slow motion.

            If the press release from Panasonic is a reliable source, the GH5’s sensor will be a multi-aspect one, and then all the math would add up and make sense. A 17:9 multi-aspect Micro 4/3 sensor would accommodate both DCI 4K and UHD downsampled from a full sensor readout, plus Photo 6K, and still be a 20MP Micro 4/3 sensor. The same way Photo 4K is 3840×2160 turned into a stills burst, Photo 6K would be a 5760×3240 30fps burst from the multi-aspect 16:9 area. This would put the GH5 in line with both the 20MP-sensor and multi-aspect-sensor rumors, since the biggest resolution of such a sensor would be its 4:3 portion at 20MP. But it would be fine if it’s not multi-aspect and is just the same sensor – even more so if Panasonic finally adopts PDAF.

            In theory, with compression similar to the 100mbps IPB 8-bit 420 4K, Panasonic could make a 150mbps IPB 10-bit 422 if they wanted. But if Panasonic is really helping Olympus, and Olympus is using an 8-bit 240mbps IPB codec, then Panasonic could possibly use an IPB 10-bit 422 codec at about the same bit rate. The interesting part is that the E-M1 only offers 240mbps for DCI 4K, not for UHD.

            A 240mbps IPB file should demand less of the codec than an All-I 240mbps version of the same codec – like the XAVC-I – so I would prefer 240mbps IPB over 240mbps All-I. But I don’t know; I really feel like Panasonic will try to keep the bit rate close to U3 specs.

            I’m interested in an All-I H.265 codec, to know whether it would be worth having in the GH5. Samsung used an IPB HEVC codec, and although the files hold up pretty well (there’s even a hacked version with higher bitrate), it’s still an IPB version and much more taxing. So I wonder, if there were an All-I version, whether that would be significantly more practical, so people could use it instead of an H.264 IPB codec.

            An All-I HEVC codec will be pretty important in the future.

          5. rutenrudi

            Well, for compression it’s always hard to tell how different aspects will be treated; after all, a plain blue image will be handled differently than a complex, grainy shot :) So I don’t see why we go on about this, as it applies to every single aspect. Bit depth raises the factor – not by much, but still.

            I’d truly welcome a multi-aspect sensor again, if it’s measured like in the GH2 – bigger size for us video guys, that was neat :)
            And I truly hope they don’t give their codec such a low bitrate. It simply won’t be much fun in H.264. I’m also not welcoming IPB; I like my frames full, regardless of filesize. I just want good data, not “good enough for anyone who doesn’t need more”. I’d just prefer no variant at 240Mbit – if they want to raise their game, they should really do it :)

            I, too, believe that H.265 will be important, but only if you let the codec shine. I’m a bit afraid of companies using it as an excuse for shoddy quality at smaller sizes. For example, if they decide they want to comply with U3, I’d rather have that written in H.265, as in theory I gain the better quality. Still, for years it hasn’t been a big issue to get SD cards that write 60MB/s, so even 480Mbit should be totally doable. That’s the speed of USB 2.0, and they tell us U3 is all the cams should do? Please.

            I’m new to this Disqus thingy; I actually only signed up to post my initial comment here. So I’m still struggling to see where I can send you a file :(

          6. theSUBVERSIVE

            For my use I don’t think I would mind a 240mbps All-I H.264, but without knowing exactly what you were trying to explain to me, I can’t say whether it would be a deal breaker.

            I don’t think it would make much sense to have a 16:9 aspect ratio like the GH1 and GH2. Since Panasonic offers DCI 4K, it would make more sense to match that aspect ratio; it’s better to crop 16:9 from it than the opposite, since you would lose the wider FOV of DCI 4K.

            You mention the USB speed, but the issue is that you’re talking about maximum sequential writing speed, while video recording needs a minimum sustained writing speed: the slowest the card ever writes has to be enough for the video stream. Maximum sequential writing speeds are not necessarily constant, and can dip below that minimum.

            But I don’t really know why they don’t use the new nomenclature like V60 (60MB/s) and V90 (90MB/s). In theory a lot of the new SD cards should be capable of those minimum sequential writing speeds – the newest can even record 4K ProRes HQ – but they are VERY expensive; it would be more worthwhile to buy a recorder with an SSD than a bunch of the fastest UHS-II cards.
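            The card-speed side of this is simple arithmetic: a codec’s bitrate in Mbit/s divided by 8 gives the minimum sustained write speed in MB/s the card must guarantee, which is exactly what the V ratings specify. A sketch:

```python
# Minimum sustained write speed (MB/s) a card must guarantee
# to record a video stream of the given bitrate (Mbit/s).
def required_mb_per_s(bitrate_mbps):
    return bitrate_mbps / 8  # 8 bits per byte

for bitrate in (240, 300, 400):
    print(bitrate, "Mbit/s needs >=", required_mb_per_s(bitrate), "MB/s sustained")
```

            So 240mbps needs exactly the 30MB/s that a U3/V30 card guarantees, while 400mbps requires 50MB/s and pushes into V60 territory.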

            Do you have any other social media profile like Twitter, Facebook, Google Plus, Instagram, Reddit, etc?


          7. rutenrudi

            That is one thing I’m afraid of: lots of people wouldn’t mind. Hell, I bet they could call it 10-bit 422 and still only have real 8-bit 420 in there, and people would love it.
            I don’t say it’s a dealbreaker if it “only” delivers 240mbit, but I still hope they don’t. I hope they make the cam as great as it could be. After all, 240mbit is somewhat the same as 60mbit in FullHD. That would be OK-ish for 8-bit 420, but in 10-bit 422? Meh.
            Look at it this way: it delivers 4K DCI, which is only really interesting if you do cinema; it’s not something that’s desperately needed. But for high-end cinema use, I don’t see why it should be cool if they only give the codec the bare minimum. Why offer DCI if it doesn’t really hold up?

            What I just meant is that we’d most likely end up with a slightly larger sensor area; I don’t mind if that’s 16:9 or slightly wider.

            I’m talking about USB 2.0 as a comparison for speed. Of course, you never reached max USB 2.0 speed with anything; I just wanted to say that it’s pretty damn slow. And I don’t mind buying expensive cards if my quality shines.
            I mean, I’m not talking about 90MB/s; 350 to 400Mbit would be totally cool – simply the AVC-Ultra spec that Panasonic already has.

            I have all of those; maybe Reddit works best – hit me up at u/CameraRick


  • Avatar

    who is Griffing Hammond? Come on, guys, get the name correct.

  • shinn_33

    Our program distributor changed their format to H.264 with 8-bit 422. They send the signal via satellite for on-airing the same day, either live or a few hours ahead. I asked them why 8-bit 422, since our decoders are good for 8-bit 420, but it seems they have another target. So, is there any benefit for them to transmit at 8-bit 422 instead of 8-bit 420? Note that the satellite bandwidth is around 36 MHz if full.

  • SunnyMountains

    John – sorry for the late post but a small correction might avoid some confusion. I don’t believe the article is correct on this point: “10Bit 4:2:2 means the extra…[color] data bump the number of possible colours from 1 in 16.7 million to 1 in over 1 billion possible colours”.

    This is not correct, because 4:2:2 and 4:2:0 actually support the same range, or total number, of possible colors. They both support a billion colors; it’s just that 4:2:0 provides fewer of those color values for a given group of pixels.

    1. aenews

      You’re missing the point. The color depth has little to do with the chroma subsampling. The number of possible colors is indeed 64X higher with over 1 Billion colors.

      8-Bit: 16,777,216
      10-Bit: 1,073,741,824

      To be fair, the article doesn’t differentiate or specify those properties. It’s hardly a detailed or technical article.

      1. SunnyMountains

        No – it’s less that the point was missed and more that the point was not made clear. Look at the words quoted in my comment.

        If you start a paragraph with “10-bit means…”, then in the same paragraph state what determines the number of colors while not mentioning 8-bit at all – all of this in the same paragraph – sorry, but that’s confusing. It’s going to make a few people wonder if you’re mistaking pixels for bits.

        It’s a minor ambiguity; in fact I think it’s a good post overall, and useful. I’d also disagree that it’s “hardly technical or detailed”. It’s a blog post on DIY photography, not a textbook on image-processing algorithms and Fourier transforms. It’s appropriately targeted.

  • Ace

    Your first looping video says the two samples taken in the top row are then expanded to fill in the gaps. However, while the dark brown sample does fill up the left-hand square, the light brown sample only expands to fill one other pixel. Two light green squares appear in the end result, but no light green color was sampled.

    I don’t understand the sequence between capturing the entire image when I press the shutter and when the sampling takes place. Where are these 4×2 pixel groups relative to each other? What is the first such group that the processor encounters, and what part of the sensor does it come from? Where does the second group come from? Is there a two-step process where the processor first records all the pixels “seen” by the sensor and THEN this color sampling takes place? Or does the sampling happen as each group from the sensor is sent on its way to the SD card?

    And does this sampling occur when one shoots in RAW format?