I got this question twice, so here are some comments.
cmsChangeBuffersFormat() is gone in 2.0
There is a good reason for that: optimization.
When you create a transform, you supply the profiles and the expected buffer format. The engine, depending on things like the number of channels and the bit depth, can then choose to implement the transform in different ways.
Let's take an example. If you create an AdobeRGB to sRGB transform using TYPE_RGB_8 for both input and output, the engine can guess that the maximum precision you will ever require is 8 bits, and then simplify the curve and matrix handling to, for example, 1.14 fixed point.
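As a minimal sketch of what that looks like with the lcms2 API (the AdobeRGB profile path is a placeholder, and error checking is omitted):

    #include "lcms2.h"

    /* "AdobeRGB.icc" is a placeholder path; sRGB is built in */
    cmsHPROFILE hIn  = cmsOpenProfileFromFile("AdobeRGB.icc", "r");
    cmsHPROFILE hOut = cmsCreateSRGBProfile();

    /* The buffer format is fixed at creation time. TYPE_RGB_8 on
       both sides tells the engine that 8-bit precision is all it
       must honor, so it is free to pick a faster internal path. */
    cmsHTRANSFORM xform = cmsCreateTransform(hIn, TYPE_RGB_8,
                                             hOut, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);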
This precision is enough for 8 bits but not for 16 bits, so if you could change the format to TYPE_RGB_16 after creating the transform, you would end up with either artifacts or a throughput loss.
Remember that lcms 2 allows you to close the profiles after creating the transform. This is a very convenient feature, but it prevents recalculating the transform by reading the profiles again. And there are situations, MPE for example, where different precision means different tags.
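Continuing the sketch above, closing early is just:

    /* The transform has already extracted what it needs, so the
       profiles can go away right now... */
    cmsCloseProfile(hIn);
    cmsCloseProfile(hOut);

    /* ...but that also means there is no profile left to re-read
       if the engine wanted to rebuild at another precision. */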
Overall I think the trade-off of losing "change format" in exchange for optimization and early profile closing is a good one. You can always create a new transform for each format. Since you can close the profiles after creation, the amount of allocated resources remains low.
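So if you really do need both depths, the whole pattern stays short (a fresh sketch, same placeholder profile):

    cmsHPROFILE hIn  = cmsOpenProfileFromFile("AdobeRGB.icc", "r");
    cmsHPROFILE hOut = cmsCreateSRGBProfile();

    /* One transform per buffer format, from the same profiles */
    cmsHTRANSFORM xform8  = cmsCreateTransform(hIn, TYPE_RGB_8,
                                               hOut, TYPE_RGB_8,
                                               INTENT_PERCEPTUAL, 0);
    cmsHTRANSFORM xform16 = cmsCreateTransform(hIn, TYPE_RGB_16,
                                               hOut, TYPE_RGB_16,
                                               INTENT_PERCEPTUAL, 0);

    /* Close the profiles immediately; only the two transforms
       stay allocated. */
    cmsCloseProfile(hIn);
    cmsCloseProfile(hOut);

    /* ...call cmsDoTransform() with whichever transform matches
       the buffer, then free both when done: */
    cmsDeleteTransform(xform8);
    cmsDeleteTransform(xform16);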
Saturday, June 26, 2010