Unsupervised Color Calibration
A concept for linearizing your screen without dedicated calibration hardware.
Table of Contents
- The problem
- A potential solution
- 1. Measure camera sensitivity
- 2. Create the camera-to-linear mapping
- 3. Measure the brightness of the monitor color channels
- 4. Create an sRGB-to-screen mapping
Take two uncalibrated things, and create two calibrated ones.
DISCLAIMER: I have not implemented this yet, as I never find the time for it. But I wanted to share the idea here so people can adopt and implement it. If you do so, please send me an email!
Screens are notoriously shitty at displaying colors. Most people don’t care how well their screen does it, but graphics programmers and artists do.
There are also things called color profiles that make your OS aware of how the screen actually represents colors, which reduces the problem. Very good and expensive monitors even ship with their own profile, so you can just enable it and be happy.
The problem is cheaper monitors. We can calibrate those screens with (hardware) tools, but those are often expensive and usually not worth the price for hobbyists.
A potential solution
I thought a lot about the problem, and I think I found a way to measure at least a rough gamma curve (roughly 6 bits of precision) for brightness and for each color channel individually, using hardware we already own!
The two components are a webcam with user-controllable focus (either software or hardware), and the monitor we want to calibrate.
My idea roughly involves the following steps:
- Measure the camera sensitivity on each color channel and overall brightness by using Bayer dithering to create 256 distinct levels of brightness with our screen.
- Create a camera-to-linear color space conversion function from that data.
- Measure the brightness curves of each color channel.
- Using the measured brightness and the camera-to-linear mapping, compute the sRGB-to-screen color mapping.
But why should that work? Let’s take a closer look at each step:
1. Measure camera sensitivity
By using Bayer dithering, we can utilize physics to accurately mix brightness levels. By turning 50% of the pixels to full brightness and 50% to full darkness, we can stimulate the camera with 50% brightness.
This is further improved by heavily defocusing the camera, which physically blurs the dithered image. The result should be a perfectly mixed brightness.
Measuring all levels from 0% to 100% brightness for each color channel will then give us a correct gamma curve for the camera.
Assuming that we will never be able to utilize all 256 brightness levels of the source image, we’ll have to sacrifice some bits of precision here. But the camera exposure should be set to a fixed value that keeps both maximum brightness and full darkness within the measurable range, otherwise we’ll get clipping artifacts.
Cameras that can output RGB data instead of YUV will obviously generate better results due to less compressed image data. In contrast, JPEG artifacts shouldn’t be a problem, as we can just compute the average over a good amount of screen surface and reduce noisiness with super strong blurring.
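The dither patterns for this step are straightforward to generate. Here is a minimal sketch in Python/NumPy (the function names are my own, not from any existing implementation): a 16×16 Bayer threshold matrix contains each value 0–255 exactly once, so thresholding it against a level produces a binary pattern whose average brightness is exactly level/256.

```python
import numpy as np

def bayer_matrix(n: int) -> np.ndarray:
    """Recursively build a 2^n x 2^n Bayer threshold matrix with values 0 .. 4^n - 1."""
    m = np.array([[0, 2], [3, 1]])
    for _ in range(n - 1):
        m = np.block([[4 * m,     4 * m + 2],
                      [4 * m + 3, 4 * m + 1]])
    return m

def dither_pattern(level: int, size: int = 16) -> np.ndarray:
    """Binary pattern (0 or 255) whose mean brightness is level/256, for level in 0..256."""
    m = bayer_matrix(4)  # 16x16 matrix with thresholds 0..255
    tile = (m < level).astype(np.uint8) * 255
    return np.tile(tile, (size // 16, size // 16))
```

Tiling this pattern across the whole screen, then defocusing the camera, is what produces the physically mixed brightness described above.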
2. Create the camera-to-linear mapping
This can easily be done with a lookup table.
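A minimal sketch of how such a lookup table could be built, assuming we recorded the average camera response for each dithered brightness level from step 1. We know the true linear brightness of every pattern, so inverting the measured response gives us camera-value → linear light:

```python
import numpy as np

def camera_to_linear_lut(measured) -> np.ndarray:
    """Build a 256-entry LUT mapping a raw 8-bit camera value to linear light (0..1).

    measured[i] is the average camera response observed while the screen
    showed the dithered pattern with linear brightness i / (len(measured) - 1).
    """
    known_linear = np.arange(len(measured)) / (len(measured) - 1)
    # Enforce monotonicity so the inverse interpolation stays well defined
    # despite measurement noise.
    measured = np.maximum.accumulate(np.asarray(measured, dtype=float))
    # Invert the camera response: for every possible camera value 0..255,
    # find the linear brightness that would have produced it.
    return np.interp(np.arange(256), measured, known_linear)
```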
3. Measure the brightness of the monitor color channels
This can be achieved by slowly sweeping each color channel from 0% brightness to 100% brightness and measuring the camera response.
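A sketch of that sweep loop, assuming an OpenCV-style camera object whose `.read()` method returns `(ok, frame)` with BGR channel order, and a hypothetical `show_fullscreen(rgb)` callback (not a real API) that fills the monitor with one solid color:

```python
import numpy as np

def measure_channel_sweep(show_fullscreen, cap, channel: int, steps: int = 64):
    """Sweep one RGB channel from 0% to 100% and record the mean camera response.

    channel: 0 = red, 1 = green, 2 = blue.
    """
    responses = []
    for step in range(steps):
        color = [0, 0, 0]
        color[channel] = round(step * 255 / (steps - 1))
        show_fullscreen(tuple(color))
        for _ in range(3):
            cap.read()  # discard a few frames so display latency settles
        ok, frame = cap.read()
        # Average over a large central patch to suppress sensor noise
        # and JPEG artifacts, as discussed above.
        h, w = frame.shape[:2]
        patch = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        responses.append(float(patch.mean(axis=(0, 1))[2 - channel]))  # BGR order
    return np.array(responses)
```

With 64 steps per channel this stays close to the ~6-bit precision estimated earlier; more steps mainly add measurement time, not accuracy.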
4. Create an sRGB-to-screen mapping
This is done by combining the results of the previous measurement with the camera mapping. This way, we should be able to compute a color profile with at least a correct linear color response.
We cannot calibrate out defects in the hue/shade/wavelength of the mixed light, as we cannot create our own reference material.