When we look at an image on a computer monitor, it seems that what we see is what is actually stored in the image. This is not quite true. Most modern monitors apply a number of additional transformations to the image to "improve" its appearance: higher contrast, brighter colors, and so on. Although this looks great when we browse the pictures from our last vacation, it can be a problem for vision science experiments. In such experiments, we want to make sure that whatever is stored on the computer reaches the observer's eye as accurately as possible, with the least amount of distortion. One of the most important transformations that virtually every monitor applies to its output is a pixel-wise nonlinear transformation called the gamma function. This post is about how to undo this transformation to make sure the images shown on the screen are less distorted.

If we represent an image in the computer, we typically store an intensity \(x\) at every pixel location (this is for grayscale images; for color images, we would store three intensities, one each for the red, green, and blue channels). The graphics card and the monitor receive this intensity value and convert it to a light intensity \(L\) at a certain position on the screen. In the simplest case, this conversion results in a relationship between the nominal intensity \(x\) and the actual light intensity \(L\) of the form

$$ L = x^\gamma $$

with an exponent \(\gamma\) that is somewhere around 2. (Other descriptions of gamma correction also include a factor \(a\) that ensures that the units are correct.)

As mentioned above, this makes it difficult to do psychophysical experiments that measure sensitivity to luminance, color, or contrast. Imagine we wanted to measure how sensitive an observer is to small increments in light intensity relative to some base intensity. A straightforward experiment would measure the discrimination threshold between the base intensity \(x_b\) alone and the base intensity plus some small increment \(\Delta x\). Note that as experimenters, we control the nominal intensities of the display, not the actual light intensities. Now if we measure that at \(x_b=1\) the threshold is \(\Delta x=1\) and at \(x_b=200\) the threshold is also \(\Delta x=1\), can we conclude that the threshold is the same at these two base intensities? In fact, we can't! After applying the gamma transformation, we see that the difference at \(x_b=1\) is

$$ a((x_b+\Delta x)^\gamma - x_b^\gamma) = a(2^\gamma - 1^\gamma) = 3a, $$

(where we assumed in the last step that \(\gamma=2\)). At \(x_b=200\) it is

$$ a((x_b+\Delta x)^\gamma - x_b^\gamma) = a(201^\gamma - 200^\gamma) = 401a, $$

a much larger difference! You may say that we could just do this calculation and we would then know that the difference is larger. That's correct, but these calculations become more complicated if we do not vary the intensity directly, but a derived measure such as contrast.
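The arithmetic above is easy to check with a short sketch (assuming \(\gamma=2\) and \(a=1\), as in the example):

```python
# Check of the example above, assuming gamma = 2 and a = 1.
gamma = 2
a = 1


def physical_difference(x_base, delta_x):
    """Physical luminance difference produced by a nominal increment delta_x."""
    return a * ((x_base + delta_x)**gamma - x_base**gamma)


print(physical_difference(1, 1))    # 3
print(physical_difference(200, 1))  # 401
```

The same nominal increment of 1 thus produces a physical difference more than a hundred times larger at the higher base intensity.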

It would be much easier if we could just tell the monitor not to apply the stupid gamma transformation in the first place. Technically, this can be achieved by telling the graphics card that every time we ask it to display an intensity \(x\), it should actually send the intensity \(x^{1/\gamma}\) to the monitor. The monitor then applies the gamma transformation, so that we end up with

$$ L = (x^{1/\gamma})^\gamma = x^{\gamma/\gamma} = x. $$

For this, we proceed in three steps:

  1. We measure pairs of \(x\) and \(L\) values using a photometer.
  2. We use this data to estimate the value of \(\gamma\).
  3. We tell the graphics card to invert a gamma transformation with the measured exponent.
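The identity behind step 3 can be sketched in a few lines: pre-distorting the nominal intensity with exponent \(1/\gamma\) exactly cancels the monitor's own exponent (intensities are normalized to the range 0–1 here, and \(\gamma = 2.2\) is an assumed value):

```python
gamma = 2.2  # assumed monitor exponent


def linearize(x):
    # What the graphics card should send to the monitor instead of x.
    return x ** (1.0 / gamma)


def monitor_response(x):
    # The (unwanted) transformation the monitor applies.
    return x ** gamma


# The pre-distortion and the monitor's gamma cancel each other.
x = 0.5
L = monitor_response(linearize(x))
print(abs(L - x) < 1e-9)  # True
```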

Most psychophysics toolkits provide ways to perform a gamma calibration automatically (and in just a few seconds). Here, however, we will go through the manual process to really clarify how it works. We will only do this for grayscale measurements; for color displays, the same procedure would have to be applied to every individual color channel.

Measuring the monitor with a photometer

We want to show a number of nominal intensities on the screen and measure the corresponding light intensity with a photometer. Importantly, we have to do this without any gamma correction to actually get the correct values of the (unintended) gamma transformation. It helps to create a little script that can display fields of homogeneous color on the monitor. In psychopy this could look like this:

from psychopy import visual, event

win = visual.Window()
# A grating with spatial frequency 0 is simply a homogeneous patch.
patch = visual.GratingStim(
    win,
    size=(1, 1),
    colorSpace='rgb255',
    units='norm',
    sf=0)
text = visual.TextStim(win, pos=(0, -0.6))

# Step through the nominal intensities 0, 10, ..., 250.
for intensity in range(0, 255, 10):
    patch.color = (intensity, intensity, intensity)
    text.text = f'x={intensity}'
    patch.draw()
    text.draw()
    win.flip()
    event.waitKeys()  # wait for a key press before showing the next intensity

win.close()

This code shows a homogeneous patch at the center of the screen. At the beginning, this patch is black, and every time you press a key, it becomes a little brighter. The current value of \(x\) is shown below the patch.

In order to actually measure the light intensity coming from the screen, you need a photometer, a device that measures light intensity. Depending on the model you have, there will be different ways to read off the measured values. It is important to make sure that your photometer really measures the light coming from the patch on the screen and not from other light sources. It may help to turn off the room lights for this.
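For the next step, it is convenient to collect the readings in two arrays: one of nominal intensities, one of measured luminances. The numbers below are made up purely to illustrate the format; your values will depend on your monitor and photometer:

```python
import numpy as np

# Nominal intensities shown by the script above (every 50th value here).
x = np.array([0, 50, 100, 150, 200, 250])
# Corresponding photometer readings in cd/m^2 (hypothetical, made-up values).
L = np.array([0.4, 3.8, 14.5, 33.0, 60.2, 95.1])
```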

Use photometric measurements to determine gamma value

After you have measured pairs of \(x\) and \(L\) values, you can go ahead and try to determine the value of \(\gamma\) from them. At this point, I have to confess that the equation shown above is a little oversimplified. In reality, the gamma transformation nearly always takes the form

$$ L = b + ax^\gamma, $$

where \(b\) is the minimum luminance of the monitor and \(a\) is a proportionality constant that ensures that the units of \(L\) are correct. (There are more complicated versions of this relationship.) Although we are not directly interested in \(a\) and \(b\), we still have to estimate them, because omitting them would skew our estimate of \(\gamma\). Here is some python code that does the estimation for you:

import numpy as np
from scipy.optimize import fmin


def error_func(abgamma, x, L):
    # Unpack the candidate parameters and compute the luminances we would
    # expect if they were correct.
    a, b, gamma = abgamma
    Lexpect = b + a * x**gamma
    # Sum of squared differences between expected and measured luminances.
    return np.sum((Lexpect - L)**2)


def estimate_gamma(x, L):
    # Start the search at a=1, b=0, gamma=2 and return the fitted gamma,
    # which is the last element of the parameter vector.
    return fmin(error_func, [1, 0, 2], args=(x, L))[-1]

If you call the function estimate_gamma with an array of nominal \(x\) values and another array of measured \(L\) values, it will give you an estimate of \(\gamma\).

Let's take a brief look at what happens in this code snippet. The function error_func takes the parameters \(a\), \(b\) and \(\gamma\), as well as the measured \(x\) and \(L\) values, and calculates the \(L\) values we would expect if the parameters \(a\), \(b\) and \(\gamma\) were correct. It then returns the sum of squared differences between the expected and the measured \(L\) values. Clearly, for the correct parameters, this sum of squared differences should be very small. The function estimate_gamma applies exactly this idea: it searches for a parameter combination for which the output of error_func is as small as possible. Searching for a parameter combination that minimizes some function is a well-studied mathematical problem, and of course it is something that other people have solved before. We therefore simply import a routine that does the job from the module scipy.optimize. This function, fmin, takes our error function, an initial guess for the parameters, and any additional arguments to our error function. It then modifies the initial guess until further modifications no longer result in smaller function values. Once this happens, it returns the refined version of the initial guess.
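To convince yourself that this works, you can run the fit on synthetic data with a known exponent. The snippet below is a self-contained version of the fitting code; the nominal intensities are normalized to 0–1 here, and the true parameters \(a=1\), \(b=0.02\), \(\gamma=2.2\) are arbitrary assumptions for the test:

```python
import numpy as np
from scipy.optimize import fmin


def error_func(abgamma, x, L):
    a, b, gamma = abgamma
    return np.sum((b + a * x**gamma - L)**2)


def estimate_gamma(x, L):
    # disp=False suppresses fmin's convergence printout.
    return fmin(error_func, [1, 0, 2], args=(x, L), disp=False)[-1]


# Synthetic "measurements" from a monitor with known gamma = 2.2.
x = np.linspace(0.05, 1.0, 20)
L = 0.02 + 1.0 * x**2.2

print(estimate_gamma(x, L))  # close to 2.2
```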

Tell the graphics card to invert the gamma transformation

You can now use the \(\gamma\) value you received from the previous step to tell the graphics card that it should "undo" the gamma transformation with the corresponding exponent. In psychopy we can achieve this by creating a Monitor instance and passing it to the constructor of our experiment Window, like this:

from psychopy import visual, monitors

# A Monitor needs a name as its first argument; 'calibrated' is an
# arbitrary label here.
monitor = monitors.Monitor('calibrated', gamma=gamma)
win = visual.Window(monitor=monitor)

...

We can now use win to display stimuli for which nominal intensities correspond linearly to actual light intensities.