


I want to find out which algorithm is best for downsizing a raster picture. By "best" I mean the one that gives the nicest-looking results. I know of bicubic, but is there something better yet? For example, I've heard from some people that Adobe Lightroom has some kind of proprietary algorithm which produces better results than the standard bicubic I was using. Unfortunately, I would like to use this algorithm in my own software, so Adobe's carefully guarded trade secrets won't do.

I checked out Paint.NET and, to my surprise, it seems that Super Sampling is better than bicubic when downsizing a picture. That makes me wonder whether interpolation algorithms are the way to go at all.

It also reminded me of an algorithm I had "invented" myself but never implemented. I suppose it has a name too (as something this trivial cannot be my idea alone), but I couldn't find it among the popular ones. The idea is this: for every pixel in the target picture, calculate where it would land in the source picture. It would probably overlap one or more source pixels. It would then be possible to calculate the areas and colors of those pixels. Then, to get the color of the target pixel, one would simply calculate the average of these colors, using their areas as weights. So, if a target pixel would cover 1/3 of a yellow source pixel and 1/4 of a green source pixel, I'd get (1/3*yellow + 1/4*green)/(1/3 + 1/4). This would naturally be computationally intensive, but it should be as close to the ideal as possible, no?
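Edit: here is a quick, unoptimized NumPy sketch of the idea described above, in case it makes it clearer (the function name is mine, not from any library):

```python
import numpy as np

def area_average_downscale(src, new_h, new_w):
    """Downscale `src` (float array, H x W or H x W x C) by averaging
    source pixels weighted by the area each contributes to a target pixel."""
    h, w = src.shape[:2]
    out = np.zeros((new_h, new_w) + src.shape[2:], dtype=np.float64)
    for ty in range(new_h):
        # Footprint of this target row in source coordinates.
        y0, y1 = ty * h / new_h, (ty + 1) * h / new_h
        for tx in range(new_w):
            x0, x1 = tx * w / new_w, (tx + 1) * w / new_w
            acc = np.zeros(src.shape[2:])
            total = 0.0
            for sy in range(int(y0), int(np.ceil(y1))):
                wy = min(y1, sy + 1) - max(y0, sy)      # vertical overlap
                for sx in range(int(x0), int(np.ceil(x1))):
                    wx = min(x1, sx + 1) - max(x0, sx)  # horizontal overlap
                    acc = acc + wy * wx * src[sy, sx]
                    total += wy * wx
            out[ty, tx] = acc / total
    return out
```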
Unfortunately, I cannot find a link to the original survey, but as Hollywood cinematographers moved from film to digital images, this question came up a lot, so someone (maybe SMPTE, maybe the ASC) gathered a bunch of professional cinematographers and showed them footage that had been rescaled using a bunch of different algorithms. The results were that for these pros looking at huge motion pictures, the consensus was that Mitchell (also known as a high-quality Catmull-Rom) is best for scaling up and sinc is best for scaling down. But sinc is a theoretical filter that goes off to infinity and thus cannot be completely implemented, so I don't know what they actually meant by 'sinc'. It probably refers to a truncated version of sinc. Lanczos is one of several practical variants of sinc that tries to improve on simply truncating it, and it is probably the best default choice for scaling down still images.

But as usual, it depends on the image and what you want: shrinking a line drawing to preserve lines, for example, is a case where you might prefer an emphasis on preserving edges that would be unwelcome when shrinking a photo of flowers.

There is a good example of the results of various algorithms at Cambridge in Color. The folks at fxguide have put together a lot of information on scaling algorithms (along with a lot of other material about compositing and other image processing) which is worth a look. ImageMagick also has an extensive guide on resampling filters if you really want to get into it, and they include test images that may be useful for doing your own tests.
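In case it helps, the Lanczos kernel is just sinc windowed by a stretched copy of itself, which is one concrete way of cutting sinc off gracefully. A rough NumPy sketch (the parameter `a` is the lobe count, usually 2 or 3; the helper names are mine):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos window: sinc(x) * sinc(x/a) for |x| < a, else 0.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x)."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_resample_1d(samples, t, a=3):
    """Evaluate the Lanczos reconstruction of `samples` at position t.
    (For strong downscaling you would also widen the kernel by the
    scale factor, or aliasing comes right back.)"""
    i0 = int(np.floor(t)) - a + 1
    pos = np.arange(i0, i0 + 2 * a)
    idx = np.clip(pos, 0, len(samples) - 1)   # clamp at the edges
    w = lanczos_kernel(t - pos, a)
    return np.dot(w, np.asarray(samples, dtype=np.float64)[idx]) / w.sum()
```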

It is kind of ironic that there is more controversy about scaling an image down, which is theoretically something that can be done perfectly since you are only throwing away information, than about scaling up, where you are trying to add information that doesn't exist.

(Bi-)linear and (bi-)cubic resampling are not just ugly but horribly incorrect when downscaling by a factor smaller than 1/2. They will result in very bad aliasing, akin to what you'd get if you downscaled by a factor of 1/2 and then used nearest-neighbor downsampling.

Personally, I would recommend (area-)averaging samples for most downsampling tasks. It's very simple, fast, and near-optimal. Gaussian resampling, with a radius chosen proportional to the reciprocal of the scale factor (e.g. radius 5 for downsampling by 1/5), may give better results with a bit more computational overhead, and it's more mathematically sound. One possible reason to use Gaussian resampling is that, unlike most other algorithms, it works correctly (does not introduce artifacts/aliasing) for both upsampling and downsampling, as long as you choose a radius appropriate to the resampling factor. Otherwise, to support both directions you need two separate algorithms: area averaging for downsampling (which would degrade to nearest-neighbor when upsampling), and something like (bi-)cubic for upsampling (which would degrade to nearest-neighbor when downsampling).
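To make the "radius proportional to the reciprocal of the factor" point concrete, here is a rough 1-D NumPy sketch. The exact sigma constant is my own choice rather than anything canonical, and for images you would apply this separably along each axis:

```python
import numpy as np

def gaussian_resample_1d(samples, new_len):
    """Resample a 1-D signal with a Gaussian kernel whose width grows
    with the reciprocal of the scale factor, so the same code handles
    upsampling and downsampling without aliasing."""
    n = len(samples)
    scale = new_len / n
    # ~1 source sample wide when upsampling; ~1/scale wide when
    # downsampling (a wider kernel = a stronger low-pass, as required).
    sigma = 0.5 * max(1.0, 1.0 / scale)
    radius = int(np.ceil(3 * sigma))          # cut the tails at 3 sigma
    src = np.asarray(samples, dtype=np.float64)
    out = np.empty(new_len)
    for j in range(new_len):
        center = (j + 0.5) / scale - 0.5      # target -> source coords
        pos = np.arange(int(center) - radius, int(center) + radius + 1)
        idx = np.clip(pos, 0, n - 1)          # clamp at the edges
        w = np.exp(-0.5 * ((pos - center) / sigma) ** 2)
        out[j] = np.dot(w, src[idx]) / w.sum()
    return out
```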
