For the last few years, Sony has been shipping cameras with a new format, XAVC. XAVC encompasses a lot of different elements, and that's created some confusion about what XAVC is, and what it isn't. We'd like to help unravel that confusion, covering both the official uses of XAVC and some of the colloquial uses we've seen.
There are a few things that are constant about XAVC. XAVC is always going to involve some form of H.264 video, along with uncompressed LPCM audio. So, if you’ve got MPEG2 video or AC3 audio or any other permutation, it’s not XAVC.
We’ll get into some of the particulars of how H.264 is used within XAVC a little later. First, let’s cover the wrappers.
A “wrapper” is the container that holds your audio and video. One wrapper most people are familiar with is the QuickTime .MOV format. An MOV file might contain many different types of video and audio, but the MOV part stays the same. Other wrappers include M2T, MXF, AVI, etc.
XAVC generally uses the MXF wrapper, with the audio and video in a single file. (Other vendors use separate MXF files for audio and video.) However, there’s one flavor of XAVC, called XAVC-S, that uses the MP4 wrapper format. This format is found in the popular Sony A7s camera. Whether it’s MXF or MP4, the data inside the file is still H.264 and LPCM audio.
One common area for misunderstanding is the overlap between XAVC and AVCHD. AVCHD is a popular HD format, which normally uses the “MTS” wrapper. AVCHD is usually H.264 with AC3 audio, though some cameras have extended it to use LPCM audio as well. Some of Sony’s XAVC cameras are also capable of recording the AVCHD format, but at that point, they’re just recording AVCHD. It’s not “XAVC inside AVCHD” or anything like that.
The XAVC family ranges from very affordable pocket-friendly cameras, to very expensive shoulder-mount cameras. It’s a very capable format, which means there are many variations in use. Most of these variations appear as different “profiles” of the H.264 codec, which have different capabilities. For a primer on these types of compression, check out our blog posts on intraframe and interframe compression.
The H.264 format within XAVC can use 4:2:0, 4:2:2, or 4:4:4 color sampling. It can have 8, 10, or 12 bits per sample (4:4:4 and 12-bit have been announced, but aren't included in any current camera models). And it can scale all the way from "proxy" sizes (480p, for example) to full 4K, and potentially beyond, at very high frame rates.
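As a quick illustration of what those bit depths mean (a worked sketch, not anything from Sony's spec): each additional bit doubles the number of tonal levels per channel.

```python
# Tonal levels per channel at each bit depth XAVC supports: 2 ** bits.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
# 8-bit gives 256 levels, 10-bit gives 1024, 12-bit gives 4096.
```

That jump from 256 to 1024 levels is why 10-bit footage holds up so much better under aggressive color grading.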
XAVC can also be either intra-frame or inter-frame. In the video community, these have come to be known as XAVC-I (intra-frame) and XAVC-L (long-GOP, i.e. inter-frame). The different modes offer different capabilities, and these vary from camera to camera.
While Sony is making use of well-known standards like H.264 and MXF, there have been some compatibility issues with XAVC in a variety of applications. In large part, this is because they’re using components of the H.264 specification that haven’t been widely adopted in the past. In particular, they’re using the “high” profile, levels 5.1 and 5.2. Many of the H.264 decoding tools on the market don’t properly handle these. In addition, a number of H.264 decoders, including the one built into Mac OS X, don’t deal well with the mix of inter-frame compression and higher bit depths.
We’ve created a chart, below, which maps out how all these factors relate. For a full rundown, check out Sony’s profiles and operating points guide.
XAVC and EditReady
We've been working to stay ahead of the curve with EditReady, adding support for formats like XAVC-S, XAVC-L, and XAVC-I as the cameras have shipped. For both compatibility and performance reasons, we strongly recommend using EditReady to transcode these formats to an intermediate format like ProRes for editing.
| | XAVC-L HD | XAVC-I HD | XAVC-L 4K | XAVC-I 4K | XAVC-S |
|---|---|---|---|---|---|
| 12-bit | No cameras currently use XAVC 12-bit | | | | |
| 4:4:4 | No cameras currently use XAVC 4:4:4 | | | | |
| Framerates | up to 60p | up to 180p¹ | up to 60p | up to 60p | up to 120p |
| Bitrates | up to 150 Mbps | up to 440 Mbps | up to 720 Mbps | up to 960 Mbps | up to 100 Mbps |
| H.264 Profiles | Main Profile, High Profile, High 4:2:2 Profile, Level 4 or more | High 10 Intra Profile, High 4:2:2 Intra Profile, Level 4 or more | High Profile, High 4:2:2 Profile, Level 4.2 or more | High Profile, High 4:2:2 Intra Profile, Level 4.2 or more | Main Profile, High Profile, Level 2.1 to 4.2 |
| Supporting Cameras (partial list) | PXW-X70 | | | | |

¹ Not listed in the "Profiles" document, but in use in the F5, F55, and FS7.
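Those bitrates translate directly into storage requirements. Here's a rough back-of-the-envelope sketch (our own arithmetic, not Sony's figures), using decimal gigabytes and ignoring audio and container overhead:

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a video bitrate in megabits per second to decimal gigabytes per hour."""
    return mbps / 8 * 3600 / 1000  # bits -> bytes, seconds -> hour, MB -> GB

# Bitrate ceilings taken from the chart above.
for name, mbps in [("XAVC-S", 100), ("XAVC-L HD", 150), ("XAVC-I 4K", 960)]:
    print(f"{name} at {mbps} Mbps is roughly {gb_per_hour(mbps):.0f} GB per hour")
```

At 960 Mbps, an hour of XAVC-I 4K comes to well over 400 GB, which is worth knowing before you plan a shoot around it.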
Codecs don’t need to be hard. No, really, they don’t. All that matters is that you choose the right codec.
By the end of this article, you will be able to pick the best codec for each project. My goal is to give you what you need to make your own informed decisions about codecs, so you can choose the right codec for yourself instead of relying on what worked for someone else.
I’m going to walk you through every step in the process of making a video. Click on a heading to jump to that section. I’ll cover:
- The codec you shoot
- The codec you edit
- The codec you color-correct
- The codec you send to VFX
- The codec you export
- The codec you archive
At each stage, I’ll explain which factors you should be considering as you choose a codec. I’ll also give you some examples of the most commonly-used codecs for that stage.
Along the way, we’ll cover why low-end codecs and high-end codecs can each slow down your editing, the reasons for a proxy/offline edit, a real-world project walkthrough, some storage-saving strategies, and an explanation for why transcoding cannot improve your image quality.
The benefits of optimizing your codecs can be huge. Choose the right codec and you'll preserve your images at the highest quality. It can also make your work faster, and lets you take the best advantage of your computer and storage. You may even be able to work faster on a laptop than many editors can on a high-end tower.
What a Codec Does
A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don’t really need. And they’re pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It’s not required viewing to understand this article, but it certainly won’t hurt.
If you’re skipping the video, here are some very basic explanations:
- Chroma subsampling: Throws away some color data (4:4:4 is no chroma subsampling, 4:2:2 is some chroma subsampling, 4:2:0 is lots of chroma subsampling). Bad if you're doing color correction. Really bad if you're doing green screen or VFX work.
- Macro-blocking: Finds blocks (of varying size) of similar colors and makes them all the same color. Bad for VFX and color correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.
- Temporal compression: Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.
- Bit depth: The number of possible values per color channel. A deeper bit depth (a larger number) is good for color correction and VFX.
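To make the chroma subsampling trade-off concrete, here's a small sketch (our own illustration) that counts the samples stored for a 2×2 block of pixels: four luma samples, plus a Cb and a Cr sample at each chroma position the scheme keeps.

```python
# Chroma sample positions retained per 2x2 pixel block for each scheme.
CHROMA_POSITIONS = {"4:4:4": 4, "4:2:2": 2, "4:2:0": 1}

def samples_per_block(scheme: str) -> int:
    # 4 luma samples, plus two chroma components (Cb and Cr) per position.
    return 4 + 2 * CHROMA_POSITIONS[scheme]

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(f"{scheme}: {samples_per_block(scheme)} samples per 2x2 block")
```

So 4:2:2 stores two-thirds the data of 4:4:4, and 4:2:0 stores half, which is exactly why heavily subsampled footage falls apart under green-screen keying.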
Codec Comparison Table
I’ve also pulled together a list of all of the most common codecs used in the postproduction world. This list can help you compare different codecs against each other and make the best decision for your project.
There are many different codecs that can be used in the editing process. The ones I’ve included are by far the most common. There is a significant advantage to using popular codecs. They are more likely to work on your system, your client’s system, your system-in-five-years, etc. And it’s easier to find help if something goes wrong.