Thursday, December 10, 2009

Include files

Any application using LittleCMS 2 has to include just one header.


#include "lcms2.h"

The header has been renamed to lcms2.h in order to improve the adoption of version 2. In fact, both Little CMS 1.x and 2.0 can coexist installed on the same machine. This is very important on platforms like Linux, where LittleCMS is nested deep in the dependency tree. Little CMS 2 no longer relies on icc34.h or any file coming from the ICC. All constants are now prefixed by "cms" and there is one license for the whole package.

Lcms2.h exposes the API, and only the API. Unlike in the 1.xx series, internal functions are no longer accessible to client applications.
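For instance, a minimal client application needs nothing but the public header. A sketch (the profile file names below are hypothetical):

#include "lcms2.h"

int main(void)
{
    cmsHPROFILE hIn  = cmsOpenProfileFromFile("input.icc", "r");
    cmsHPROFILE hOut = cmsOpenProfileFromFile("output.icc", "r");

    cmsHTRANSFORM xform = cmsCreateTransform(hIn, TYPE_RGB_8,
                                             hOut, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);

    /* ... call cmsDoTransform() on pixel buffers here ... */

    cmsDeleteTransform(xform);
    cmsCloseProfile(hIn);
    cmsCloseProfile(hOut);
    return 0;
}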

A special case is the LittleCMS plug-ins. Those constructs can access more functions than the API, simply because they are supposed to access Little CMS internals to add new functionality. There is a specialized include file for that:

#include "lcms2_plugin.h"

This file should only be included when defining plug-ins. It defines some additional functions and is described in the LittleCMS 2.0 Plug-in API document.

Saturday, December 5, 2009

Requirements

In order to improve portability and minimize code complexity, LittleCMS 2.0 requires a C99-compliant compiler. This requirement has been relaxed for Microsoft's Visual Studio because of its wide adoption by industry (VC is not fully C99 compliant). Borland C 5.5 (available for free) has been tested and found to work OK. gcc and the Intel compiler also work fine.

Monday, November 30, 2009

Backwards compatibility

Little CMS 2 is almost a full rewrite of the 1.x series, so there is no guarantee of backwards compatibility. Having said that, if your application doesn't make use of advanced features, probably all you need to do is change the include file from lcms.h to lcms2.h and maybe do some minor tweaks to your code. Profile opening and transform creation functions are kept the same, but there are some changes in the flags. Little CMS 2 offers more ways to access profiles, so it is certainly possible your code will get simplified. The basic parts where Little CMS 2 differs from the 1.x series are:
  • Transform flags
  • Error handling
  • Textual information retrieval
  • New non-ICC intents
  • Floating point modes
  • Pipelines


For internal advanced functions, the underlying implementation has changed significantly. You can still do everything lcms1 did, but in some cases using a different approach. There are no longer gamma curves or matrix-shaper functions. Even the LUT functions are gone. All of that has been superseded by:
  • Gamma functions -> Tone curves
  • Matrix Shaper, LUT -> Pipelines
  • LUT resampling -> Optimization engine
There is no one-to-one correspondence between old and new functions, but most old functionality can be implemented with the new functions.
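As a quick sketch of the replacements (using the names in the current lcms2 header, which may still change; this is an illustration, not a migration guide):

#include "lcms2.h"

void sketch(void)
{
    /* Tone curves replace the old gamma curves */
    cmsToneCurve* gamma22 = cmsBuildGamma(NULL, 2.2);

    /* Pipelines replace the old matrix-shaper and LUT constructs */
    cmsPipeline* lut = cmsPipelineAlloc(NULL, 3, 3);
    cmsToneCurve* curves[3] = { gamma22, gamma22, gamma22 };

    cmsPipelineInsertStage(lut, cmsAT_BEGIN,
                           cmsStageAllocToneCurves(NULL, 3, curves));

    cmsPipelineFree(lut);
    cmsFreeToneCurve(gamma22);
}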

Sunday, November 29, 2009

What is new from lcms 1.x

The first obvious question is "why should I upgrade to Little CMS 2.0?". Here are some clues:

  • Little CMS 2.0 is a full v4 CMM, which can accept v2 profiles. Little CMS 1.xx was a v2 CMM which could deal with (some) v4 profiles. The difference is important, as the 2.0 handling of the PCS is different, definitively better and far more accurate.
  • It accepts and understands floating point profiles (MPE) with DToBxx tags. (Yes, it works!) It has 32-bit precision. (lcms 1.xx was 16 bits.)
  • It handles float and double formats directly. MPE profiles are evaluated in floating point with no precision loss.
  • It has a plug-in architecture that allows you to change interpolation, add new proprietary tags, add new "smart CMM" intents, etc.
  • It is faster. In some combinations, it has a 6x throughput boost.
  • Some new algorithms: incomplete state of adaptation, Jan Morovic's segment maxima gamut boundary descriptor, better K preservation…
  • Historic issues, like the faulty icc34.h, freeing profiles after creating a transform, etc., are all solved.

Saturday, November 28, 2009

Documentation strategy

Little CMS documentation is held in three different documents. The first one is the tutorial. Its goal is to introduce the engine and to guide you in its basic usage. It does not, however, give details on all available functionality. For that purpose, you can use the API reference, which gives information on all the constants, structures and functions in the engine. The third document is the plug-in documentation. It details how to extend the engine to fit your particular purposes. You need some experience with the core API to write plug-ins; therefore, the plug-in API reference is somewhat more advanced than the other two.

Aside from the documentation, there are sample programs that you can explore. Those are located in the "utils" folder. Those programs are also handy in isolation. This is the list of utilities; each one is documented elsewhere.

  • TiffICC
  • JpegICC
  • TransICC
  • LinkICC
  • TiffDiff
  • psicc

Friday, October 23, 2009

Documentation!

Back from vacation. Lots of fun and some time devoted to LittleCMS. As a result, we now have some incipient documentation. There is a tutorial you can read to explore the differences between LittleCMS 1 and 2. The plug-in API is also documented, but that documentation lacks some explanations and is mostly unfinished. The API reference is a work in progress...

Saturday, September 19, 2009

Unbounded CMM

With the drop I'm posting right now (September 19) most of the utilities are now working. You have a tifficc applier, a transicc calculator and a linkicc devicelink generator. As you can see, some names have changed. This is because icclink was clashing with Graeme Gill's icclink from Argyll. The latter is an outstanding utility, and many users may want to have both installed, so I changed the name and got an extra consistency bonus (now all lcms utility names are "whatever-icc").
Now you have transicc (formerly icctrans), so you can check one of the new features of lcms2: what is called "unbounded CMM mode".

What is that? Well, with lcms2 you can use floating point values. That means you are no longer limited to 0..255 on 8 bits or 0..65535 on 16 bits; with some profiles you can go out of those bounds and the CMM still works.

Not all profiles accept that. You need profiles that are implemented entirely with math expressions. Some very simple profiles work that way, for example AdobeRGB. The sRGB profile does not work because it has curves implemented as tables, but the built-in sRGB does work, as it uses parametric curves instead of tables. Also, some advanced profiles for digital cameras using multi processing elements may work in unbounded mode. Right now it is hard to find any of those, as they are described in an addendum to the ICC spec.

Let's check this feature, converting from AdobeRGB to the built-in Lab in transicc:

C:\lcms-2.0\bin>transicc -i AdobeRGB1998.icc -o *Lab
LittleCMS ColorSpace conversion calculator - 4.0 [LittleCMS 2.00]
Enter values, 'q' to quit

R? 10
G? 10
B? 10
L*=0.7287 a*=0.0000 b*=-0.0000

Nothing special, but let's try this one

R? 300
G? 300
B? 300

L*=114.6770 a*=0.0006 b*=-0.0005

Do you see where I'm going? 300 is above 8 bits, and L*=114.6 is a highlight, so here you have an example of what can be accomplished with this mode.
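The same experiment can be done in code. Here is a hedged sketch, using the built-in sRGB (which runs unbounded, as explained above) and assuming the double-precision RGB encoding is normalized to 0..1, so 300/255 plays the role of the out-of-range value:

#include <stdio.h>
#include "lcms2.h"

int main(void)
{
    cmsHPROFILE hsRGB = cmsCreate_sRGBProfile();     /* parametric curves */
    cmsHPROFILE hLab  = cmsCreateLab4Profile(NULL);  /* D50 Lab */

    cmsHTRANSFORM xform = cmsCreateTransform(hsRGB, TYPE_RGB_DBL,
                                             hLab, TYPE_Lab_DBL,
                                             INTENT_RELATIVE_COLORIMETRIC,
                                             cmsFLAGS_NOOPTIMIZE);

    /* "300" on an 8-bit scale, encoded as a double */
    cmsFloat64Number rgb[3] = { 300.0/255, 300.0/255, 300.0/255 };
    cmsCIELab lab;

    cmsDoTransform(xform, rgb, &lab, 1);
    printf("L*=%.4f a*=%.4f b*=%.4f\n", lab.L, lab.a, lab.b);

    cmsDeleteTransform(xform);
    cmsCloseProfile(hsRGB);
    cmsCloseProfile(hLab);
    return 0;
}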

Wednesday, September 2, 2009

New drop

Ok, so here is the September drop. It is functionality complete; that means all the functionality is already there and what remains is debugging and stabilizing. I am still in good shape for the schedule, so hopefully the first beta will be available in mid-October.

New in this drop:

  • Error logging supersedes the error plug-in, which is no longer available (it was a bad idea, indeed); a sketch follows at the end of this post
  • Black preservation intents are working
  • TAC detection is working
  • All tags and tag types are correctly read/written
  • Gray/RGB/CMYK/Lab/XYZ
  • Info functions with localization and Unicode
  • Segment maxima gamut boundary descriptor
... And now back to the documentation; that's what is taking most of my time. The first documentation draft will be available in the next drop.
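On the first item, this is a minimal sketch of the shape the error logger takes; the names follow the current lcms2.h and may still change before release:

#include <stdio.h>
#include "lcms2.h"

/* Called by lcms2 whenever something goes wrong */
static void MyLogger(cmsContext ContextID, cmsUInt32Number ErrorCode,
                     const char* Text)
{
    (void) ContextID;
    fprintf(stderr, "lcms2 error %u: %s\n", (unsigned) ErrorCode, Text);
}

/* At startup: cmsSetLogErrorHandler(MyLogger); */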

Friday, August 14, 2009

Black is black (II)

The second black thing has nothing to do with black point compensation, but it is still black-fashioned: the black-preserving intents. No, these do not belong to the normal ICC workflow. The ICC has tried to address this need, but there is still nothing in the spec.

Let's see what the issue is. Suppose we work in press. Presses are very tied to standards: US presses use SWOP, and European folks lean more toward FOGRA. Japanese people use other standards, like TOYO, for example. Each standard is very well detailed, and presses are set up to faithfully emulate any of these standards.

Ok, let's imagine you got an image ad, a very usual flier. Now just imagine that instead of getting it in PDF, you get it as a raster file: say a CMYK TIFF, ready for a SWOP press. And you want to print it on a FOGRA27!!

Maybe you see no issue here, but take a look at this:


LittleCMS ColorSpace conversion calculator - 4.0 [LittleCMS 2.00]

Enter values, 'q' to quit
C (0..100)? 0
M (0..100)? 0
Y (0..100)? 0
K (0..100)? 50

C=48.21 M=38.71 Y=34.53 K=1.09


That means that if I convert from SWOP to FOGRA27, the ICC profiles totally mess up the K channel, so a portion of the picture that originally uses only black ink gets, after the conversion, Cyan, Magenta, Yellow and, well, a little bit of black as well. Now please realize what happens to all the text in the flier.


Ugly. I guess the vendor who pays the bill will not be very pleased, right?

So I've added two different modes to deal with that: black-ink-only preservation and black-plane preservation. The first is simple and effective: do all the colorimetric transforms but keep only K (preserving L*) where the source image uses only black. The second mode is far more complex and tries to preserve the WHOLE K plane. I'm still checking the latter. If you want to give it a try, please download the new snapshot and build the TIFFICC utility. It already implements those new intents.
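Here is a sketch of how a client would select these intents. The constant names are as they currently stand in lcms2.h and may change, and the profile file names are just the hypothetical SWOP/FOGRA pair of this example:

#include "lcms2.h"

void sketch(void)
{
    cmsHPROFILE hSWOP  = cmsOpenProfileFromFile("USWebCoatedSWOP.icc", "r");
    cmsHPROFILE hFOGRA = cmsOpenProfileFromFile("CoatedFOGRA27.icc", "r");

    /* Black-ink-only preservation on top of relative colorimetric;
       INTENT_PRESERVE_K_PLANE_RELATIVE_COLORIMETRIC keeps the whole K plane */
    cmsHTRANSFORM xform = cmsCreateTransform(hSWOP, TYPE_CMYK_8,
                              hFOGRA, TYPE_CMYK_8,
                              INTENT_PRESERVE_K_ONLY_RELATIVE_COLORIMETRIC, 0);

    /* ... transform pixels, then clean up ... */
    cmsDeleteTransform(xform);
    cmsCloseProfile(hSWOP);
    cmsCloseProfile(hFOGRA);
}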

Wednesday, August 12, 2009

Back in black

I've been working hard lately on two black thingies, namely black point compensation and black-preserving intents. The black-preserving intents are worth a post of their own, and there are still some subtle bugs, so let's go for black point compensation first. BPC has been implemented in lcms for years. It works. So if it ain't broken, don't fix it. Well, not really. It seemed to me lcms2 would be a good chance to implement Adobe's BPC... oh, that's a nice story.

BPC is a sort of "poor man's" gamut mapping. It basically adjusts the contrast of images in such a way that the darkest tone of the source device gets mapped to the darkest tone of the destination device. If you have an image that is adjusted to be displayed on a monitor, and want to print it on a large format printer, you should realize the printer can render black significantly darker than the screen. BPC can do the adjustment for you. It only makes sense for the relative colorimetric intent; perceptual and saturation have an implicit BPC.

It works like magic, but it is quite simple: just a plain linear scaling in XYZ space.
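In a sketch (my own illustration of the idea, not lcms source code): per channel, choose the linear map that leaves the white point alone and sends the source black point to the destination black point.

/* Maps in == white to white, and in == bp_in to bp_out */
static double BPCChannel(double in, double bp_in, double bp_out, double white)
{
    double scale  = (bp_out - white) / (bp_in - white);
    double offset = white * (1.0 - scale);

    return scale * in + offset;
}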

Now it turns out the real problem is elsewhere: it is easy to implement BPC as long as you can figure out what the black point is. The black point tag doesn't help. There are so many bogus profiles that the ICC has deprecated the tag in a recent addendum. It is up to the poor CMM to figure out what the black point is.

And it is a hard task, though. Ok, you may argue, just use relative colorimetric in the input direction and feed RGB=(0, 0, 0)... but wait, what about CMYK? Ok, CMYK=(255, 255, 255, 255)... Hmm. CMYK devices are ink-limited, that is, the maximum amount of ink that a given paper can accept is limited (else the ink would spread all over), so this doesn't work either. What lcms does is use the perceptual intent to convert a Lab of zero to CMYK, and then convert this CMYK back to Lab using relative colorimetric. This works fine on all but some broken profiles. Well, Adobe's paper targets those profiles.
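Sketched in code (hPrinter stands for a hypothetical CMYK printer profile; function names as they currently stand in lcms2.h):

static cmsCIELab EstimateBlackPoint(cmsHPROFILE hPrinter)
{
    cmsHPROFILE hLab = cmsCreateLab4Profile(NULL);
    cmsCIELab black = { 0, 0, 0 };
    cmsCIELab bp;
    cmsFloat64Number cmyk[4];

    /* Lab of zero -> CMYK, using perceptual */
    cmsHTRANSFORM t1 = cmsCreateTransform(hLab, TYPE_Lab_DBL,
                                          hPrinter, TYPE_CMYK_DBL,
                                          INTENT_PERCEPTUAL, 0);
    cmsDoTransform(t1, &black, cmyk, 1);

    /* ... and that CMYK back to Lab, using relative colorimetric */
    cmsHTRANSFORM t2 = cmsCreateTransform(hPrinter, TYPE_CMYK_DBL,
                                          hLab, TYPE_Lab_DBL,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsDoTransform(t2, cmyk, &bp, 1);

    cmsDeleteTransform(t1);
    cmsDeleteTransform(t2);
    cmsCloseProfile(hLab);
    return bp;   /* the black point estimate */
}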

The funny part is the complexity of Adobe's algorithm. The perceptual trick is used as a seed on certain profiles, then the black axis of a round-trip on L* is examined. If the difference is greater than 4, a least-squares fit of the curve to a quadratic should be done. The black point is the vertex.

In fact it is even more complex, as I'm omitting constants that depend on the intent, and other gory details. It really is that complex. After the 6th time you read the paper, it begins to make sense.

But unfortunately it doesn't work for me. And unfortunately, it doesn't work for the ICC reference implementation CMM either. We both get the same bad guess of L* = 625 for a given profile. And guess what? I've tried Photoshop CS4, and the reported black point exactly matches my humble perceptual-to-relative-colorimetric algorithm, so I will keep that one for a while.

Tuesday, August 4, 2009

New drop available

August drop is here. Working features are CMYK, black point compensation, all profile I/O ...
Enjoy!

Saturday, August 1, 2009

Sleep your bugs

It hurts. It's a sort of burning... when you figure out where the bug is, you experience an urgent need to fix it, NOW! Big mistake.

It is not clear whether it was Knuth or Hoare who coined the phrase:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.


It is indeed very true. I would like to add a humble corollary:

premature bugfixing is lame too.


So, here is my advice: Sleep your bugs.

I am not talking about typos. You know, mistyping 'l' and '1' for example (nobody should use 'le0' as a var name; '1e0' is still a valid number), or failing to include a range check (last time I forgot to place an 'if', I got a Pwnie nomination).

I am talking about logic bugs: something wrong with the algorithm or the true logic of the program. Adding inconsistent features is another example.

Sleep on those kinds of bugs. Take good note of them, and wait a day or two before fixing. Probably you will find a neat way to deal with the issue without changing those hundreds of lines. Maybe not, but even in that case the solution is worth the wait.

In my case the bug was black point detection. lcms2 is now an unbounded CMM that operates on floats, so the black point detection code was using floating point to do the calculations. Yep, it seems a nice feature: far more precision, etc. On the other hand, inks in floating point are represented as percentages, so the effective range is 0..100%. That is how Photoshop and others work, and I thought it made sense to keep that convention in lcms2 as well.

Put both features together and you have a nice logic bug: extra code is needed in BP detection to deal with the different ranges.

If you happen to be a professional programmer, you surely realize that my advice of postponing bug fixing goes against the schedule, the program manager and the planner. Sure, but this is lcms2, my pet project, which has unlimited time. That would be a dream for a commercial project, but this project is not commercial and therefore is not subject to schedule tyranny.

By the way, a new drop of lcms2 is on the way and will be available in a day or two.

Monday, July 27, 2009

Less is more

If you have devoted some time to reviewing the new API, maybe you have discovered an odd thing: there are some functions missing. Ok, you can blame me for removing *that* function doing exactly what you need. Of course that may be my mistake. But please consider that perhaps I had good reasons to do it.

Let's take one example:

cmsReadICCMatrixRGB2XYZ(LPMAT3 r, cmsHPROFILE hProfile);

This function no longer exists in the lcms 2.0 API.

Well, many people were using this function to retrieve the primaries of a profile. So, for example, if you want to know what the AdobeRGB primaries are, just call the function with the right profile and there you go.

It seems easy and useful, but trust me, it is not. The real raison d'être of this function is somewhat surprising: not that it is handy, but that it is precise. Please consider this piece of pseudo-code:

cmsCIEXYZTRIPLE Result;
cmsFloat64Number RGB[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };

cmsHPROFILE hXYZ = cmsCreateXYZProfile();
cmsHTRANSFORM xform = cmsCreateTransform(hProfile, TYPE_RGB_DBL,
                                         hXYZ, TYPE_XYZ_DBL,
                                         INTENT_RELATIVE_COLORIMETRIC, 0);
cmsDoTransform(xform, RGB, &Result, 3);

Do you follow it? I create a transform from the profile (in RGB) to XYZ. Then I convert the maxima of R, G and B to XYZ. I am obtaining the primaries! Although it seems more complex, this method is much better because it is guaranteed to work with *any* profile, not only matrix-shaper ones.

So, what was the point of having the old function? Easy: lcms 1.x was precision-limited to 16 bits, so you could not obtain the primaries with enough precision using the method described above. But that does not apply to lcms2, where you have an outstanding 64-bit double precision. Less is more in this particular case!

Thursday, July 23, 2009

Linking tags

Implementing the tag link feature has been easier than I originally thought, but there are some caveats.

lcms2 supports some new features for reading/writing profiles. You can use cmsReadTag/cmsWriteTag to read/write lcms objects like LUTs, tone curves and so on. You can also use cmsReadRawTag/cmsWriteRawTag to read or write whatever you want, but then the poor library does no checking and has no understanding of what's going on.

You can also link tags to items created by either of those two methods. So far so good.
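For example (using the name this feature carries in the current lcms2 header), a v4 profile can share one LUT among several intents:

/* Make AToB1 and AToB2 point to the same data as AToB0 */
cmsLinkTag(hProfile, cmsSigAToB1Tag, cmsSigAToB0Tag);
cmsLinkTag(hProfile, cmsSigAToB2Tag, cmsSigAToB0Tag);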

But wait: you can also write plug-ins to add more "understood" objects to cmsReadTag/cmsWriteTag, and you can also write plug-ins to add new types for those objects.

In addition to all that, there is a brand new structure added as an addendum to ICC spec 4.2: the MPE, or multi processing elements. This is worth several posts on this blog. Basically, it may make photographers and precision junkies very happy, as it includes *true* floating point numbers, among other things. There is a plug-in type devoted to MPEs. More to come.

So, if you consider that all those access methods should be consistent, we have a nice mess. The good news is it seems to work; the bad news is we need more testing. But overall I see progress.

Fear not! Maybe I will even keep to the schedule and release the whole thing in November.

Wednesday, July 22, 2009

ZOO Doppelgänger and the Link feature

To some extent, old'n'good lcms allowed profile editing. That was a "dangerous" feature in the sense that people may abuse it to grab copyrighted material. That is, you could open your favourite profile, remove the copyright tag, and there you go. Obviously, preventing that the hard way is just like killing the messenger, so this feature keeps working in lcms, and it is up to you to use -or abuse- it.

But reviewing the previous entry about the zoo, I had a wild idea: if the zoo test reads every single tag, what if I then try to rewrite all those tags? That would create a Doppelgänger version of every profile in the zoo, though not necessarily with the same organization and size.

The test code was written in a few minutes, and after I ran it, poor lcms2 got into hyperspace and crashed badly.

Ok, five bugs later, I got the writing/copying feature working. But this has also unveiled how profile vendors abuse the link feature. That is, since a tag is described by some block in the file, I can put two or more different entries in the tag directory pointing to the same location. Hmm... I have to add some code to deal with this case.

Friday, July 17, 2009

Profile ZOO

The serialization part is now complete. That is, lcms2 should be able to read all the tags it understands in all the profiles in the wide world. Since it understands all tags that are, or have been, part of any ICC spec, present or past, a lot of profiles should be readable by the current code.

I have lately been worried about stability and qualification. If lcms2 wants any success, it should be tolerant of ill-formed profiles. I say tolerant, not permissive, because crafted profiles may be used by the bad guys to introduce exploits. We don't want WarGames again, right?

Over the 10+ years of lcms life, I have compiled a "Zoo" of profiles. This collection currently includes about 1500 assorted profiles, going from the widely distributed v2 sRGB to rare corner cases, like devicelinks holding 12-ink separations. Not very common, those.

Some of those profiles are broken. Well, not completely broken: they have slightly malformed tags, like colorants using a bad type, descriptions with a wrong char count, bad sizes in the header...

To check how well lcms2 deals with that, I wrote a small program that runs across every single profile in the Zoo and then reads every tag in each profile. The code should reject the unusable tags and behave nicely if some information can be recovered. It cannot in any case segfault or leak resources.

So, keeping my fingers crossed, I executed the program and... almost! One segmentation fault. Ok, it was a bug; fixed. Ran it again and... success! No memory leaks, lots of tags discarded and a cool "all is ok" printf'd at the end. Phew!

Tuesday, July 14, 2009

Same profile on both sides

It is very convenient to detect whether the source and destination profiles are the same, to instruct the CMM to do nothing. Seems quite simple, but it is certainly complex.

The issue is with embedded profiles. You can't do a binary compare, because embedded profiles may have changed attributes. That is, some fields in the profile header are different, to reflect the preference on intent and the fact that the profile is being used embedded.

V4 offers the ProfileID, which is an MD5 checksum of the profile that avoids those conflicting fields. Which is a good thing: if both the source and destination profiles have the same ProfileID AND the intent is the same in both profiles, then you can get rid of the whole transform, as it is basically a no-op.

But sometimes (most of the time, currently) you get AdobeRGB or sRGB embedded, which are v2 profiles. No ProfileID, and a very common case.

So, let's try to do some optimization. If both profiles are matrix-shaper, you can detect whether the obtained matrix is an identity, and then whether the curves cancel out. We have three cases to handle:

  • All different
  • Same primaries but different gamma
  • Same primaries and equal tone curves
The last case is a no-op, but it is pretty frequent: untagged images assumed to be sRGB and an uncalibrated monitor assumed to be sRGB. Handling this case separately is a big plus if you care about speed.
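A sketch of the ProfileID check (hSrc and hDst are hypothetical open profile handles; an all-zeros ID means the field was never computed):

#include <string.h>   /* memcmp */

cmsUInt8Number id1[16], id2[16];
static const cmsUInt8Number zeros[16] = { 0 };

cmsGetHeaderProfileID(hSrc, id1);
cmsGetHeaderProfileID(hDst, id2);

if (memcmp(id1, zeros, 16) != 0 && memcmp(id1, id2, 16) == 0) {
    /* Same profile data: candidate for a no-op transform */
}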

Saturday, July 11, 2009

More on speed

As promised, I have updated the snapshot. The performance numbers on matrix-shaper to matrix-shaper transforms should be close to what lcms2 is going to deliver when released. If you want to run the testbed, you need to copy these profiles from the Photoshop distribution, as I'm not allowed to redistribute them:
  • AdobeRGB1998.icc
  • CoatedFOGRA27.icc
  • UncoatedFOGRA29.icc
  • USWebCoatedSWOP.icc
  • USWebUncoated.icc
Put them in the "testbed" folder. Ok, now just type:

./configure; make; make check


Then take a look at the numbers at the end of the testbed execution.

The tifficc utility should also work to some extent, but there is a 1-pixel cache that may give bad performance. I have to turn the cache off for such profiles, as the cache code takes longer than the transform itself.

It is funny to note that this is pure "C" code, and in some situations it outperforms hand-written SSE2 assembly. That was the case when using the Intel compiler.

64-bit hardware is pretty untested, so if you manage to make it work on such architectures, please drop me a note. Thanks!

Thursday, July 9, 2009

About speed

I'm getting outstanding results with lcms2 and matrix-shaper profiles. It is still not in the public preview, so you have to trust me, but here are some numbers, tested on my laptop, which has an old 2GHz 2-core CPU:

lcms 2.0:
8 bits on Matrix-Shaper profiles...done.
[625 tics, 0.625 sec, 25.6 Mpixels/sec.]

lcms 1.18:
lcms is transforming full spectrum in 8 bits...done.
[3984 tics, 3.984 sec, 4.01606 Mpixels/sec.]


That's a boost of about 6.3x. Please note that this applies only to 8-bit matrix-shaper to matrix-shaper transforms, so RGB only! When the primaries of both profiles are the same, the performance is even better: it reaches about 30 megapixels/second. I will make all this code available this weekend.

Monday, July 6, 2009

Tag plug-in

Related to cmsReadTag, here comes one of the easier plug-ins to write: the tag plug-in.

Imagine you are a printer vendor and want to include in your profiles a private tag for storing ink consumption. So, you register a private tag with the ICC, and you get the signature 'inkc'.

Ok, now you want to store this tag as a Lut16Type, so it will be driven by the PCS and return one channel giving the relative ink consumption per color.

Writing a plug-in in lcms2 will allow cmsReadTag and cmsWriteTag to deal with your new data exactly like any other standard tag.

To do so, you have to fill a cmsPluginTag structure to declare the plug-in. This structure contains a base, which is common to all plug-ins:

plugin.base.Magic = cmsPluginMagicNumber;
plugin.base.ExpectedVersion = 2.0;
plugin.base.Type = cmsPluginTagSig;


The latter identifies your plug-in as a tag plug-in. Now we need to define the tag signature:

plugin.signature = 'inkc';

And some additional info about the type used by your tag:
  • how many instances of the type the tag is going to hold (usually one)
  • in how many different types the tag may come (again, usually one)
  • and then the needed type(s).

plugin.descriptor.ElemCount = 1;
plugin.descriptor.nSupportedTypes = 1;
plugin.descriptor.SupportedTypes[0] = cmsSigLut16Type;

That is all. You can set up the new functionality by calling:

cmsPlugin(&plugin);

Advanced tag plug-ins may use polymorphic types, depending for example on the version of the profile. Instead of one type, you can declare several. Then the read-tag logic will search all supported types to find the suitable one: cmsSigLut16Type for v2 and cmsSigLutBtoAType for v4, for example. There is an optional callback function to decide which type to use when writing the tag:


cmsTagTypeSignature DecideType(double ICCVersion, const void *Data);

This plug-in is most useful when combined with the tag type plug-in, which will be discussed soon.
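Meanwhile, here is a sketch of reading the new tag back once the plug-in is registered (the 'inkc' signature is the hypothetical example above; the Lut16 content comes back parsed as a pipeline):

cmsPipeline* ink = (cmsPipeline*) cmsReadTag(hProfile, (cmsTagSignature) 0x696E6B63); /* 'inkc' */

if (ink != NULL) {
    cmsFloat32Number pcs[3], consumption[1];

    /* Fill pcs[] with a PCS color, then evaluate the LUT */
    cmsPipelineEvalFloat(pcs, consumption, ink);
}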

Saturday, July 4, 2009

cmsReadRawTag

Today I've finished the cmsReadRawTag/cmsWriteRawTag interface. It may seem a minor accomplishment, but in reality it means the serialization engine is now complete.
In lcms2 you can read a tag from an open profile by doing

tag = cmsReadTag(hProfile, TagSignature)

And lcms will return (if found) a pointer to a structure holding the tag. Simple, but not simplistic: the structure is not the raw contents of the tag, but the result of parsing it. For example, reading a cmsSigAToB0 tag results in a LUT structure ready to be used by all the cmsLUT functions. The memory belongs to the profile and is freed when the profile is closed. In this way there are no memory duplicates and you can safely re-use the same tag.

Writing tags is almost the same: you just specify a pointer to the structure and the tag name, and lcms2 does all the serialization for you. The process under the hood may be very complex, if you realize that v2 and v4 of the ICC spec use different representations of the same structures.

Anyway, you may decide all that is useless and you just want to read/write bytes from/to the profile; in this case the Raw variants are for you.
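A sketch of both styles side by side (assuming the raw variant's convention of returning the needed size when called with a NULL buffer):

#include <stdlib.h>   /* malloc */

/* Parsed: lcms2 understands the tag and returns a ready-to-use object */
cmsPipeline* lut = (cmsPipeline*) cmsReadTag(hProfile, cmsSigAToB0Tag);

/* Raw: just the bytes. Call with NULL first to get the size */
cmsUInt32Number size = cmsReadRawTag(hProfile, cmsSigAToB0Tag, NULL, 0);
void* buffer = malloc(size);

cmsReadRawTag(hProfile, cmsSigAToB0Tag, buffer, size);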

Friday, July 3, 2009

Plug-ins

One of the main improvements in lcms2 is the plug-in architecture. Plug-ins mean you can use the normal API to access customized functionality. Licensing is another compelling reason: you can move all your IP into a proprietary plug-in and still be able to upgrade core revisions in open source. There are 10 types of plug-ins currently supported:
  • Memory management
  • Error management
  • Interpolation
  • Tone curve types
  • Formatters
  • Tag types
  • Tags
  • Rendering intents
  • Multi processing elements
  • Optimizations
I will discuss each type in upcoming posts. Plug-ins are declared to lcms by a single function:

cmsBool cmsPlugin(void* Plugin);

And the "Plugin" parameter may represent one or several plug-ins, as defined by the plug-in developer. To write plug-ins, there is an additional include file lcms2_plugin.h, which declares functions which are not in the public API but may be useful to this task. For example I/O access, matrix manipulation, and all the types needed to populate the plug-in structures. Those functions begins with "_cms" to denote those are extended functionality and should not be called the application by rather by the plug-in.

Thursday, July 2, 2009

The probe profile

"Jimmy Volatile" a Lcms user, suggested to incorporate test plots for the regression tests. In this way external apps using lcms could check if all is working as expected. I think this is a good idea, and maybe it is also feasible (all depends on the schedule).

An interesting check would be to use the ICC probe profile. This comes from the ICC site:

"The 'probe profile' (Probev1_ICCv2.icc) is syntactically a v2 ICC output device ('prtr') profile, and can be used in a workflow wherever such a profile is required. The color space of this profile is CMYK, and its PCS is Lab.

Colors processed via this profile are deliberately distorted in a systematic way, to enable visual determination of the rendering intent used when rendering ("BToA" or PCS to device transforms) and when proofing ("AToB" or device to PCS transforms). This is useful, in cases when color-management-aware software does not document the behavior."



Tuesday, June 30, 2009

Regression tests

Right now it is pretty clear that lcms2 needs an exhaustive test bed if we want some sort of robustness in the code. I'm spending a lot of time on this program. It began as a small one, and now it is a huge file of 5600 lines. Maybe I should split it into several modules...

My intent is to have a test for every single feature. Since this is close to impossible, the actual testing is focused on usual cases. Now the question: what are the usual cases? Sure, sRGB to screen and aRGB to printer output are pretty common, but there are people out there using lcms1 to do multi-ink separations on 12 channels. Why shouldn't I check this case as well? Another big area that deserves careful testing is plug-ins. How should the code base react to a wrong plug-in? Is a segfault admissible in this particular case?

Monday, June 29, 2009

When/where to clip?

Here is an interesting effect I've found; you can reproduce it with Photoshop CS4 as well. Set the working space to sRGB, the intent to relative colorimetric, no BPC. Then, using the color picker, enter this Lab value:

0, -120, 0

As you can see, the obtained RGB values are 0, 0, 0. So far so good. Or not? Try this other Lab value:

0, 0, -120


Oops! Now we get 0, 27, 182! What's happening? The answer, as far as I can tell, is clipping. If you take the trouble (I did) of implementing Lab -> XYZ and then XYZ -> sRGB using entirely floating point formulas, the second value of 0, 0, -120 getting converted to some color with lots of L* is just a consequence of the math. Where does all this L* come from?
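A sketch with the standard CIE Lab-to-XYZ formulas (plain math, not lcms code) shows where it comes from: b* enters only through f_z, so a very negative b* pumps up Z even at L* = 0:

/* Lab (0, 0, -120) with D50 white, Zn ~ 0.8249 */
double L = 0.0, b = -120.0;
double fy = (L + 16.0) / 116.0;      /* 0.1379 */
double fz = fy - b / 200.0;          /* 0.7379: grows as b* goes negative */
double Z  = 0.8249 * fz * fz * fz;   /* ~0.33, which is far from black */

That Z then lands mostly on the blue primary when converting XYZ to sRGB.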

lcms does clip negative XYZ numbers, so in both cases you will obtain 0, 0, 0. What makes me wonder is why Photoshop clips one axis (a) but does not clip the other (b) in the same fashion.

Maybe it is just clipping negative XYZ numbers? Ok. lcms2 gives 0, 48, 0 for a Lab of 0, 0, -120. This is the result of not clipping anything at intermediate stages and allowing negative XYZ numbers as well. Clipping happens only at the last stage, or when it is absolutely required (indexing LUTs, for example). I guess more experimentation is needed in this part...

Saturday, June 27, 2009

Threads and cmsThreadID

One of the ways lcms2 differs from lcms is multithreading support. That is, indeed, a difficult feature. One may argue that just avoiding global variables should be enough to support multiple threads, but I think this is not necessarily true. Sometimes some global settings have to apply to all threads. On the other hand, it would be desirable for error handling and memory allocation to have some clues about which thread is currently active, or the environment where the current operation takes place.

So the solution I've found is the cmsThreadID type, which is just a void pointer.

Most of the high-level functions in lcms2 have two forms. The first one defaults the ThreadID parameter to zero, which means "the user doesn't care about threads". The second form allows specifying the ThreadID. This ID will be passed to memory allocation, the error handler and plug-ins, so the user code may be smart enough to react differently in different threads. One example is to use different memory pools, which is useful, among other things, when a given thread crashes and you want to recover gracefully.

Related functions:

cmsThreadID cmsGetProfileThreadID()
cmsThreadID cmsGetTransformThreadID()

And all functions whose names end in THR, for example:

cmsHPROFILE cmsCreateRGBProfileTHR();

Hello, World!

Howdy and welcome. This is an informal blog dealing with color management stuff. More precisely, this blog will discuss all the gory details I am facing while writing version two of littlecms. I think people over here already know what I am talking about, but if that's not your case, see here for an abstract.
So what is the blog about? Mainly software architecture. I would like to discuss the reasons I'm designing the API in a certain way, or why a specific limitation exists. To what extent this is a good idea, time will tell... stay tuned!