I think my use of the phrase "native" 1080p camera capture was confusing. I need one paragraph to unravel the confusion, then another to actually answer your question.

I was trying to say that a full-sensor 4K image downscaled to 1080p with Lanczos would look better than the same 4K sensor trying to fake a 1080p image, either by skipping every other pixel of the 4K sensor or by using a low-quality in-camera 4K-to-1080p scaler. For people like me with only one 4K mirrorless camera, these hack-job in-camera downsize methods are the only "native" 1080p acquisition options we have. You asked why I needed 4K, and getting around the quality ding of 1080p in-camera fakery is my reason why. I need my image to come from the full surface of the sensor, not from every other pixel.

In this context, the logic for why 4K->1080p is better should be pretty straightforward. If the 4K sensor is skipping pixels to create a 1080p image, then it is not collecting the same light as a 4K image that uses every pixel. The 1080p version would miss the contrast changes that happened within the pixel gaps, and that contrast would never be averaged into the pixels that did get sampled. As a result, the 1080p version will have harsher edges that suffer from edge roll. Likewise, if the camera attempts its own 4K->1080p scaling but does a low-quality job of it to conserve power, the results are, well, low quality.

What I was not comparing was a physical 4K sensor to a physical 1080p sensor. In that case, the same plot of light is collected and the image quality is more comparable. However, there are still three problems here.

First, it's getting harder to find native 1080p sensors anymore, and the ones that do exist are generally not going to outshine their modern 4K counterparts. So the 1080p sensor would look worse from the start just by being a physical sensor that technology has passed by.

Second, the 1080p sensor will have a harder time with aliasing at the borders of its photon wells. If a 4K sensor can provide four times as much data to a downscaler, a smart scaler like Lanczos can anticipate and reduce the effects of aliasing, especially if any roll movements happen in the footage.

Third, all sensors suffer from some amount of sampling error, whether it comes from dark current, temperature, substrate chemistry, or anything else. When the capture resolution and the output resolution are 1:1, as with 1080p capture for 1080p delivery, that error is baked into the pixels and nothing optical can be done about it. But if the input footage has 4x the data of the output resolution, merely averaging four pixels into one mitigates the sampling error by producing a color value that sits somewhere in the middle of multiple samples. The colors are visibly more true to life when oversampling gets involved like this. This reason alone would be enough to make me use an oversampling workflow like 4K->1080p.

We also have to keep in mind that a 1080p capture and a 1080p output are 1:1 and only look optimal if the footage doesn't budge. The moment we rotate or scroll something in post-production, that 1:1 ratio gets busted and harsh aliasing and edge roll start to happen. By having 4x more data than the output format needs, the aliasing can be mitigated when these kinds of filters and effects are applied. Oversampling provides room to breathe in post-production.

I'm simply developing a workflow that makes the most of what I've got. If you quiz your Hollywood post-production colleagues, I'll bet most are using proxies for their onlines too, even if they have the latest hardware. Proxies put much less strain on the network-attached storage systems in post houses.
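The pixel-skipping point can be shown with a toy one-dimensional sketch (my illustration, not code from the post): take a row of sensor samples with detail at the pixel pitch, then downscale it 2:1 once by skipping every other sample and once by averaging neighboring pairs.

```python
# Toy 1-D illustration: skipping pixels vs. averaging them during a 2:1
# downscale. The row alternates 0/255, i.e. contrast at the sensor's
# finest pitch.
row = [0, 255, 0, 255, 0, 255, 0, 255]

skipped = row[::2]  # "line skipping": read every other photosite only
averaged = [(a + b) / 2 for a, b in zip(row[::2], row[1::2])]  # box average

print(skipped)   # [0, 0, 0, 0] -- the contrast in the gaps is simply gone
print(averaged)  # [127.5, 127.5, 127.5, 127.5] -- detail folds into mid-gray
```

The skipped version throws the unsampled contrast away entirely, while the averaged version at least folds it into the surviving pixels, which is the difference being described above.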
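The sampling-error claim can also be sanity-checked numerically. Under a simple assumed noise model (each photosite reads the true value plus independent Gaussian noise; this model is mine, not the author's), averaging blocks of four samples should cut the noise standard deviation roughly in half, since noise falls as one over the square root of the sample count.

```python
# Sketch of the oversampling claim under an assumed noise model: each
# photosite reads TRUE_VALUE plus Gaussian noise of SD 8. Averaging
# groups of four samples (a 2x2 bin) should roughly halve the noise SD.
import random
import statistics

random.seed(42)
TRUE_VALUE, NOISE_SD, N_BLOCKS = 100.0, 8.0, 10_000

raw = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(4 * N_BLOCKS)]
binned = [sum(raw[i:i + 4]) / 4 for i in range(0, len(raw), 4)]

print(round(statistics.stdev(raw), 1))     # close to 8
print(round(statistics.stdev(binned), 1))  # close to 4
```

Each binned value lands nearer the true value than a typical raw sample does, which is the "somewhere in the middle of multiple samples" effect described above.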
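For completeness, here is a minimal sketch of how a 4K->1080p Lanczos downscale might be invoked with ffmpeg. The `scale` filter and its `flags=lanczos` option are standard ffmpeg, but the file names and surrounding settings are my placeholders, not details from the original post.

```python
# Hypothetical sketch: build an ffmpeg command for a 4K -> 1080p Lanczos
# downscale. File names are placeholders; the exact workflow settings in
# the post are not specified.
import subprocess

def build_downscale_cmd(src: str, dst: str) -> list[str]:
    """Return an ffmpeg argv that Lanczos-scales src to 1920x1080."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "scale=1920:1080:flags=lanczos",  # ffmpeg's Lanczos scaler
        "-c:a", "copy",                          # leave the audio untouched
        dst,
    ]

cmd = build_downscale_cmd("clip_4k.mov", "clip_1080p.mov")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on the PATH):
# subprocess.run(cmd, check=True)
```

Building the argument list in Python rather than a shell string avoids quoting problems with file names that contain spaces.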