The Difference Between Rolling Shutter and Global Shutter Sensors

Global-Shutter Sensor

Image sensors are available in many shapes and sizes, and with different capabilities. But in this post, we will focus on one very important thing: the electronic shutter methods that are available.

Rolling Shutter

Most consumer cameras use the rolling-shutter method. With this method, the pixels on the sensor are read out sequentially: when you press the shutter button, the camera scans through the pixels row by row and stores the information digitally. This means the first pixel is read out at a different time than the last pixel, so anything that happens after the first pixel is read out is still captured by the last pixel and by the pixels in between.
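As a rough illustration, the short Python sketch below simulates this effect for a vertical edge moving horizontally; the per-row readout time and the object speed are made-up example values.

```python
# Illustrative sketch: rolling-shutter skew for a vertical edge moving horizontally.
# The per-row readout time and the object speed are assumed example values.

ROWS = 1080                      # number of sensor rows
ROW_READOUT_US = 10.0            # assumed time to read one row, in microseconds
OBJECT_SPEED_PX_PER_US = 0.01    # assumed horizontal speed of the edge, pixels/us

def edge_position(row, start_x=100.0):
    """Horizontal position of the moving edge at the moment this row is read out."""
    readout_time_us = row * ROW_READOUT_US   # later rows are read out later
    return start_x + OBJECT_SPEED_PX_PER_US * readout_time_us

top = edge_position(0)
bottom = edge_position(ROWS - 1)
print(f"Edge at top row:    x = {top:.1f} px")
print(f"Edge at bottom row: x = {bottom:.1f} px")
print(f"Rolling-shutter skew across the frame: {bottom - top:.1f} px")
# With a global shutter, every row would see the edge at the same x position.
```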

Global Shutter

Global-shutter sensors read out all pixels of the sensor simultaneously, so the entire frame represents image data that was captured at the same moment in time. This method is not subject to the same motion artifacts as the rolling-shutter method.
 

Consequences

In everyday use, you won't notice whether your camera uses the rolling shutter method. Only when you capture an image of a fast-moving object (like a fan) may you notice motion artifacts such as deformed fan blades.

In situations that require high-performance imaging, rolling shutter can severely affect your data. In such cases, it is better to use a global-shutter sensor, to ensure that your image represents the same instant in time and to prevent rolling shutter artifacts.



GigE Vision

GigE Vision is a framework for transmitting images over an Ethernet connection. It consists of protocols that define how to configure a camera and how to transfer the image data. Every computer with a sufficiently fast Ethernet card is compatible with the GigE Vision framework, so GigE Vision requires only an Ethernet card, whereas CoaXPress and Camera Link require a frame grabber.

The maximum transfer speed of a GigE Vision camera (assuming a gigabit Ethernet card in the computer) is 1000 Mb/s.
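To get a feel for what that figure means in practice, here is a back-of-the-envelope calculation in Python; it ignores protocol overhead and assumes an 8-bit monochrome pixel format, so real frame rates will be somewhat lower.

```python
# Back-of-the-envelope: upper bound on frame rate over a 1000 Mb/s GigE Vision link.
# Protocol overhead is ignored and an 8-bit monochrome pixel format is assumed.

LINK_BITS_PER_S = 1_000_000_000  # gigabit Ethernet line rate

def max_frame_rate(width, height, bits_per_pixel=8):
    """Upper bound on frames per second for the given image format."""
    bits_per_frame = width * height * bits_per_pixel
    return LINK_BITS_PER_S / bits_per_frame

print(f"1920 x 1080 @ 8 bit: {max_frame_rate(1920, 1080):.0f} fps")
print(f" 640 x  480 @ 8 bit: {max_frame_rate(640, 480):.0f} fps")
```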


GenICam

The Generic Interface for Cameras (GenICam) standard aims to provide a generic programming interface for cameras and other camera-related devices. Every step in the imaging process, from configuring the camera to getting the recorded images off the camera, can be handled through GenICam.

No matter what type of camera or data-transfer interface you are using, if all your devices are GenICam compatible, it will be much easier for them to communicate.
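As a minimal sketch of what that looks like in code, the Python snippet below configures a camera through its node map. The open_camera() helper and the node_map attribute are hypothetical placeholders (every GenICam library exposes its own equivalent), but the feature names follow the GenICam Standard Features Naming Convention (SFNC).

```python
# Minimal, hypothetical sketch: open_camera() and .node_map are placeholders,
# not a specific library's API. The feature names (Width, Height, ExposureTime,
# Gain) follow the GenICam Standard Features Naming Convention (SFNC).

def configure(camera):
    """Configure any GenICam-compliant camera through its node map."""
    nodemap = camera.node_map            # same access pattern regardless of vendor
    nodemap.Width.value = 1920           # image width in pixels
    nodemap.Height.value = 1080          # image height in pixels
    nodemap.ExposureTime.value = 5000.0  # exposure time in microseconds
    nodemap.Gain.value = 6.0             # analog gain in dB

# camera = open_camera("GigE")   # hypothetical helper; the transport layer does
# configure(camera)              # not matter once the node map is exposed
```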



Fiber-Optic Coupling

It is important that image quality is maintained as much as possible when using intensifiers. At the same time, light efficiency should be maximized. This can be achieved by using a fiber-optic window as the output of the first stage and as the input of the second stage.

A fiber-optic window is a solid piece of glass consisting of millions of parallel glass fibers sealed together. Each fiber acts as an independent light conductor. The shape of the window can be either flat (parallel input and output faces) or concave. Windows with a concave surface are used for distortion correction in electrostatic image inverters.

Often the second stage will also have a fiber-optic output to allow coupling to a third stage, or to the image sensor of the camera. In the latter case the image sensor of the camera should be equipped with a fiber optic input window. In addition, take the following into consideration when you need to make a choice for either fiber-optic coupling or lens coupling:

  • Fiber-optic coupling is a permanent connection; the connection is made during the manufacture of the integrated intensified camera.
  • A fiber-optic window transfers an image from one face to the other. If the fiber optic has a tapered form, the image is reduced or enlarged. This characteristic can be used to match it to the format of a coupled imaging component (see the sketch after this list).
  • While fiber-optic coupling between intensifiers is the standard technique, coupling to the camera can also be done with lens optics. Disadvantages of lens coupling are the greater loss in efficiency (compared to fiber optics) and the bulkier lenses.
  • Lens coupling offers the flexibility of easy decoupling, allowing you a choice to make camera recordings with or without the use of an intensifier.
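As a quick illustration of the taper point above, the Python sketch below estimates the demagnification needed to couple an image intensifier to a smaller sensor; the 18 mm intensifier diameter, the 2/3-inch sensor format, and the 50 lp/mm intensifier resolution are assumed example values.

```python
# Example numbers only: an 18 mm image intensifier output coupled to a
# 2/3" sensor (about 11 mm diagonal) through a tapered fiber-optic window.

INTENSIFIER_DIAMETER_MM = 18.0   # assumed intensifier output diameter
SENSOR_DIAGONAL_MM = 11.0        # assumed sensor diagonal (2/3" format)

taper_ratio = INTENSIFIER_DIAMETER_MM / SENSOR_DIAGONAL_MM
print(f"Required demagnification: {taper_ratio:.2f} : 1")

# To a first approximation, shrinking the image raises the spatial frequency
# of its features at the sensor plane by the same factor:
intensifier_lp_per_mm = 50.0     # assumed intensifier resolution
print(f"Resolution at the sensor plane: "
      f"{intensifier_lp_per_mm * taper_ratio:.0f} lp/mm")
```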

Spatial Resolution of Image Intensifiers

The limiting spatial resolution of an intensified imaging system depends on several factors, including (but not limited to)

  • Image intensifier type
  • Image intensifier gain
  • Pixel size

Before we can discuss each of these factors, we need to define what limiting spatial resolution means. When characterizing an imaging system, the limiting spatial resolution describes the smallest features that can be distinguished. There are several ways of characterizing the spatial resolution; most of them use a test chart like the USAF resolution test chart. Such charts have a series of lines on them; the smaller the lines an imaging system can distinguish, the better its spatial resolution.

SilverFast Resolution Target USAF 1951 (Creative Commons 3.0)

Spatial resolution is quantified as the number of line pairs that can be distinguished per millimeter (lp/mm). A line pair consists of a dark line and a bright line. So if one line is 5 microns wide, a line pair is 10 microns wide and there would be 1 mm/10 microns = 100 line pairs per millimeter.
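The same arithmetic in a few lines of Python, using the numbers from the text:

```python
def lp_per_mm(line_width_um):
    """A line pair is one dark plus one bright line, i.e. twice the line width."""
    line_pair_um = 2 * line_width_um
    return 1000.0 / line_pair_um      # 1 mm = 1000 microns

print(lp_per_mm(5))    # 5 micron lines  -> 100.0 lp/mm
print(lp_per_mm(20))   # 20 micron lines ->  25.0 lp/mm
```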

Image intensifier type

There is a wide range of image intensifiers available. We advise our customers on the type of intensifier they need for their application, based on the wavelengths that are important to them and the frame rates they require. High-speed intensifiers usually have a lower spatial resolution than image intensifiers that are optimized for lower frame rates.

Image intensifier gain

We can increase the MCP voltage of an image intensifier to increase its gain. But MCP noise and the size of the electron cloud at the exit of the MCP also depend on the MCP voltage, so the spatial resolution will be slightly reduced as the MCP voltage is increased. You can learn more about how an image intensifier works on our image intensifier page.

Pixel size

Finally, the limiting spatial resolution of an imaging system is determined by the size of the pixels that collect the light from the image intensifier. You can use our intensifier-sensor matching calculator to find the theoretical maximum sensor resolution. It is calculated using the size of the pixels.

25 and 50 lp/mm (not to scale)

For example: If the pixels are 20 microns wide, we would need two adjacent pixels to distinguish a bright line and a dark line of a test chart. Those two pixels would have a total width of 40 microns, so the theoretical spatial resolution would be 1 mm/40 microns = 25 lp/mm.

The element of the imaging system with the lowest spatial resolution determines the limiting spatial resolution of the whole system. In our example, we have a sensor that has a limiting resolution of 25 lp/mm. If we have an image intensifier with a 50 lp/mm resolution, the size of the pixels would limit the resolution of the imaging system to 25 lp/mm.

However, if the pixels are smaller, 2 microns for instance, then the theoretical resolution of the sensor would be 250 lp/mm. In that case, the resolution of the image intensifier would determine the resolution of the total system.
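The reasoning in the two paragraphs above can be summarized in a short Python sketch; the pixel sizes and the 50 lp/mm intensifier resolution are the example values used in the text.

```python
# The component with the lowest resolution limits the whole system.

def sensor_lp_per_mm(pixel_size_um):
    """Two adjacent pixels are needed to resolve one line pair."""
    return 1000.0 / (2 * pixel_size_um)

def system_lp_per_mm(pixel_size_um, intensifier_lp_per_mm):
    return min(sensor_lp_per_mm(pixel_size_um), intensifier_lp_per_mm)

print(system_lp_per_mm(20, 50))   # 25 lp/mm: limited by the 20 micron pixels
print(system_lp_per_mm(2, 50))    # 50 lp/mm: limited by the intensifier
```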

Other factors

Many factors influence the spatial resolution of an intensified imaging system, like the size of the image intensifier, the number of image intensifiers and the optics. If you would like more information about the right image intensifier for your application, please contact us.



CCD Camera Sensitivity

At low light levels standard CCD/CMOS cameras are not sensitive enough to capture useful high-contrast images. There are ways to increase the sensitivity of such cameras. The first method is to allow the CCD to integrate for much longer times. In order to prevent high background noise, CCD cooling is applied when using long exposure times. A second method is to use an image intensifier to boost the input signal.

Cooled CCD

With longer integration times, a CCD captures more light and can thus produce a better image. However, not only more input signal is collected, but also more dark current from the CCD itself. The amount of dark current depends strongly on temperature: for every 6 degrees C the CCD is cooled down, the noise (dark current) halves. When the CCD is cooled to -25 degrees C, integration times of up to minutes can be used, which enhances the sensitivity of the camera immensely.
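That rule of thumb translates into a simple scaling law; the Python sketch below assumes a starting temperature of +25 degrees C, so cooling to -25 degrees C corresponds to 50 degrees of cooling.

```python
# Rule of thumb from the text: dark current roughly halves for every 6 degrees C
# of cooling. The starting temperature of +25 C is an assumed example value.

def dark_current_factor(cooling_c, halving_step_c=6.0):
    """Relative dark current after cooling by the given number of degrees C."""
    return 0.5 ** (cooling_c / halving_step_c)

# Cooling from +25 C to -25 C, i.e. 50 degrees of cooling:
print(f"Remaining dark current: {dark_current_factor(50) * 100:.1f} %")
# -> roughly 0.3 % of the room-temperature dark current
```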

To further improve the camera's SNR, the read-out noise is reduced by using a lower read-out speed. These techniques are used in high-performance 14- and 16-bit digital cameras.

Intensified CCD with Fiber-Optic Coupling

An image intensifier increases the sensitivity of a camera by amplifying the input light signal before relaying it to the CCD/CMOS sensor. Roughly, there are two ways to relay the output image of an image intensifier to a CCD/CMOS sensor. The first is by means of a relay lens. A lens coupling is flexible, but the downside is that it has a low transmission efficiency, caused by the limited aperture of a lens. A more efficient way is to use a fiber-optic window to transfer the image from the intensifier to the CCD. A fiber-optic window contains a large number of microscopic (6-10 micron) individual fibers and acts as an image guide. A tapered fiber-optic window will magnify or demagnify the input image. Generally, demagnification is chosen to match the image intensifier to the CCD/CMOS sensor.

In summary, the advantages of a fiber-optic coupling are:

  • Low light losses
  • Intensifier/CCD combination is more compact
  • Camera design is sturdier
  • No optical adjustments are needed
