Digital image sensors are omnipresent in our everyday lives: cameras, videoconferencing, webcams and, of course, cell phones. Image sensors account for more than 40% of the sales of optoelectronic components. Between 2002 and 2007, sales of image sensors increased by 35%. These sales mainly consist of CMOS sensors for basic cell-phone cameras and of CCDs for digital cameras. During the first half of 2007, sales fell by 12% compared with 2006, a year in which CMOS sensors generated a turnover of 4.2 billion dollars, 30% better than in 2005. Over the whole year, CCD and CMOS sales decreased by 7%; this decline may have been due to the cell-phone market starting to stagnate. Sales are now expected to increase by an average of 14% a year until 2012 thanks to emerging markets (medical imaging, toys, video games, ...), reaching 13.2 billion dollars in 2012. Between 2007 and 2008, turnover already went up by 10%, from 6.9 to 7.6 billion dollars. Sales of CMOS sensors reached 4.4 billion dollars in 2008, an increase of 19% after the 12% drop in turnover in 2007. As for CCD sensors, their sales fell by 1% in 2008 (to 3.2 billion dollars), whereas they had risen in 2007. The sensor market is booming and, above all, more and more expertise is required to analyse the supply and correctly choose "one's sensor". But what constraints have to be taken into account? How should one read the technical data and, most of all, how should one evaluate one's needs and the required performance? This lesson works through these questions and tries to provide the reader with an answer or, at best, with a method for drawing one's own conclusions about the relevance of the investment to be made. First, however, the key question must be asked, WIWA: Which Image for Which Application?
We will assume here that the reader is familiar with the fundamentals of imaging and only needs to apply this knowledge to choose suitable optics and a suitable sensor. There are several ways to digitize an image (Fig. 1): scanning digitization, or point-by-point digitization distributed over a surface (these "points" can themselves be miniature images, constituents of the final image). The final resolution of the image is then determined by its size and the number of pixels constituting it, which in turn depends on the physical size of each pixel. Image definition is sometimes no longer expressed in dpi (dots per inch) but in lp/mm (line pairs per mm), bearing in mind that two pixels are needed to resolve one line pair. This module is divided into two parts: "lesson" and "case study". Some exercises supplement this presentation of image sensors.
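The dpi-to-lp/mm relation mentioned above is a simple unit conversion; as a quick sanity check (a sketch, using an arbitrary 1200 dpi scanner as the example):

```python
# dpi (dots per inch) -> lp/mm (line pairs per mm), using the rule stated
# above: two pixels are needed to resolve one line pair.
MM_PER_INCH = 25.4

def dpi_to_lp_per_mm(dpi: float) -> float:
    pixels_per_mm = dpi / MM_PER_INCH
    return pixels_per_mm / 2.0  # two pixels per line pair

# e.g. a 1200 dpi scanner (arbitrary example value):
print(round(dpi_to_lp_per_mm(1200), 1))  # 23.6 lp/mm
```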

2.Historical summary

Some key dates:
  • 1970: CCD (Bell Labs)
  • 1974: 100 x 100 pixel sensor (Fairchild); sensitivity: visible and IR
  • 1993: CMOS sensor (NASA JPL)
  • 2000: 7186 x 9216 pixel sensor (Philips); sensitivity: IR, visible, X-rays
  • 2007: 9216 x 9216 pixel CCD sensor
  • 2007: CMOS sensor SA1: 5400 fr/s at 1 Mpixel (Photron)
  • 2009: Dalsa sensor for Mamiya: 22 Mpixels on 36 x 48 mm
  • 2009: Photron sensor SA5: 5400 fr/s at 1 Mpixel... in color

3.Basic glossary

  • Pixel: picture element; a few µm in size; contiguous or not
  • Pitch: interpixel distance; varies according to the sensor format
  • Format: line, interline, field, frame, full frame
  • Spatial resolution: linked to the number and size of the pixels
  • Temporal resolution: linked to the acquisition frequency in frames (fr) per second
  • Sensitivity resolution: linked to the semiconductor and to the greyscale digitization
  • Binning: grouping of pixels
  • Transfer rate (in pix/s or bytes/s): depends on the resolution and on the transfer format of one pixel
  • Sensitivity: linked to the amount of light and its color; depends on the semiconductor, the doping and the coatings
  • Noise: any signal that does not contribute to the electrical transcription of the desired image

4.Image sensors

4.1 Introduction

The aim of an image sensor is to transcribe, as faithfully as possible, the image of an illuminated object, or of a light source, formed on its surface by an appropriate optical system. This presentation mainly focuses on the two major types of optoelectronic sensors used today: - CCD (Charge Coupled Device), which Collects, Transfers and then Converts the electrical charge generated by incident photons. - CMOS (Complementary Metal Oxide Semiconductor), which Collects and Converts the electrical charge generated by incident photons at the collection site itself. Historically, the first video cameras worked with photographic film, then with tubes as detectors, of which only the central part was used. With the advent of semiconductors, the first CCD sensors appeared in the 1970s, and advances in solid-state physics as much as in electronics (growth techniques, photolithography, nanoetching, ...) have made manufacturing processes ever simpler and more efficient. In the late 1990s, CMOS sensors supplanted ultra-fast photographic film for high-speed image acquisition. Finally, their explosive growth in the 2000s, together with the development of broadband, high-speed telecommunications, led to their use in fields as diverse as remote surveillance and video communication, or quite simply as presence sensors in their simplest form (a single photodiode). Eventually, the complete replacement of photographic emulsions is expected, including in scientific fields demanding high spatial resolution, such as holography, satellite imaging and astronomy.

4.2 CCD matrices

CCD (Charge Coupled Device) sensors refer to a semiconductor architecture in which charge is transferred through a storage area. Most sensors operating in the visible use a CCD architecture to move the charge packets; they are commonly known as CCD matrices. Their architecture provides three basic functions, in addition to the creation of the charge: - charge collection, - charge transfer, - conversion of the charge into a measurable voltage. As matrices operating in the visible are monolithic systems, charge generation is often regarded as the initial function of CCD sensors. The charge is created in a pixel (short for "picture element": the smallest piece of the image) in proportion to the level of incident light at that site. The aggregation of all the pixels produces a sampled representation of the continuous scene. The technology on which CCD sensors are based is the MOS (Metal Oxide Semiconductor) capacitor. The capacitor is called a "gate". Charge packets are sequentially transferred from gate to gate until they are measured at the detection node. In most systems, charge generation occurs by photoelectric effect (Fig. 2) at a MOS gate (also called a "photogate"). In some systems, including interline transfer systems, photodiodes generate the charge. After generation, the transfer of the charge to the conversion node (where the charge is converted into voltage) occurs through MOS capacitors in all systems. Although CCD matrices are in the public domain, their manufacture is complex: the number of steps necessary for their production can vary between ten and a hundred depending on the complexity of the architecture. The systems can be described functionally according to their architecture (frame transfer, interline transfer, ...) or according to their application.
To minimize cost, matrix complexity and electronic processing, the architecture is typically chosen for a specific application ("ASIC: Application Specific Integrated Circuit"). Astronomical video cameras typically use full-frame matrices, while video systems generally use interline transfer. Finally, the separation between professional TV, camcorder users, machine vision and scientific or military applications is becoming more and more tenuous with the advances in technology.

4.3 CCD operation

The technology on which CCD sensors are based is the MOS (Metal Oxide Semiconductor) capacitor (Fig. 3). If a photon whose energy exceeds the bandgap energy is absorbed in the depletion zone, it creates an electron-hole pair. The electron remains in the depletion zone while the hole drifts toward the ground electrode. The amount of negative charge (electrons) that can be collected is proportional to the applied voltage, the oxide thickness and the surface of the gate electrode. The total number of electrons that can be stored is called the "well capacity". As the wavelength increases, photons are absorbed at increasing depths, which notably limits the response at long wavelengths. Currently available sensors can operate from the far infrared to X-rays. The CCD register consists of a series of gates. Manipulating the gate voltages in a systematic, sequential way transfers the electrons from one gate to the next like a bucket brigade. For the charge transfer to occur, the depletion zones must overlap (Fig. 4.a): the depletion zones form potential gradients, and these gradients have to overlap. Each gate has its own control voltage that varies in time; this voltage is a square-wave signal called the "clock" or "clock signal". First, a voltage is applied at gate 1 and photoelectrons are collected in well 1 (b). When a voltage is applied at gate 2, electrons spill toward well 2 like a waterfall (c). This process is fast and the charge quickly balances between the two wells (d). As the voltage at gate 1 is reduced, its potential well shrinks and the remaining electrons cascade into well 2 (e). Finally, when the voltage at gate 1 approaches zero, all the electrons are in well 2 (f). This process is repeated until the charge has been transferred across the whole shift register. When the gate voltage is low, it acts as a barrier; when the voltage is high, charge can be stored.
A pixel can consist of several gates (from 2 to 4, or even more), sometimes also called the phases of the system. Depending on the clock signals (cf. infra), and for an equal pixel size, 50% of the pixel surface is available for the well in a four-phase system, versus 33% in a three-phase system. The CCD matrix is a series of registers arranged in columns (Fig. 5). After exposure and charge generation by photoelectric effect in each pixel, the charge is held in the lines or columns by channel stops, and the depletion zones overlap in one direction only (downward for the columns, horizontal for the reading line). Charge transfer first occurs from line to line, through intra- and then inter-pixel gate jumps (Fig. 6). At the bottom of the columns lies a horizontal pixel register. This register collects one line at a time and then transports the charge packets serially to an output amplifier. The entire horizontal serial register must be emptied into the detection node before the next line enters it (Fig. 6). That is why separate vertical and horizontal clocks are required in all matrices. The accumulation of thousands of transfers degrades the output signal; the ability to transfer charge is specified by the charge transfer efficiency (CTE). Although any number of transfer sites (gates) per pixel can be used on the sensor surface, two to four are generally used, or even a virtual-phase system that requires only one clock. In all these systems, the final step of the process is the conversion of the charge packet into a measurable voltage. This is achieved by a floating diode or floating diffusion. The diode, acting as a capacitor, generates a voltage proportional to the number of electrons, ne. The signal can then be amplified, processed and digitally encoded by electronics independent of the CCD sensor.
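As an illustration only, the line-by-line readout described above can be mimicked with a toy model in which charge packets are plain numbers and every shift is lossless (real transfers have CTE < 1, cf. section 12):

```python
from collections import deque

def read_out(matrix):
    """Toy full-frame readout: shift the bottom line into the serial
    register, then shift the serial register packet by packet through
    the output node. Charge values are arbitrary electron counts."""
    rows = deque(matrix)
    output = []
    while rows:
        serial = list(rows.pop())  # bottom line enters the serial register
        while serial:
            packet = serial.pop()  # one gate-to-gate shift toward the node
            output.append(packet)  # charge-to-voltage conversion point
    return output

image = [[10, 20],
         [30, 40]]
print(read_out(image))  # [40, 30, 20, 10]
```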
In many matrices, it is possible to move more than one line of charge into the serial register. Similarly, it is possible to move more than one element of the serial register into a summing gate just before the output node. This is called "binning", "super pixeling" or "charge grouping". Binning increases the output signal and the dynamic range, but at the expense of spatial resolution. As it increases the signal-to-noise ratio, binning is attractive for low-light-level applications in which resolution is not critical. Serial registers and output nodes require wells of higher capacity for binning. If the output capacitor is not reset after each pixel, it can accumulate the charge of several pixels.
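A small Monte-Carlo sketch can illustrate why binning helps at low light: summing pixels in charge before a single read collects the read noise only once. The signal and noise figures below are arbitrary assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
SIGNAL = 25.0       # mean photoelectrons per pixel (assumed low-light level)
READ_NOISE = 10.0   # electrons RMS, added once per read (assumed)

def snr(n_binned, trials=200_000):
    """SNR when n_binned pixels are summed in charge before one read."""
    photons = rng.poisson(SIGNAL * n_binned, trials).astype(float)
    samples = photons + rng.normal(0.0, READ_NOISE, trials)
    return samples.mean() / samples.std()

print(snr(1))  # single pixel
print(snr(4))  # 2x2 binning: roughly 3x better SNR, 1/4 the resolution
```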

4.4 Matrix architecture

The matrix structure is directly related to its application. Full or partial frame transfer ("fr" for frame) tends to be used for scientific applications. Interline transfer systems are used in mass-produced camcorders and professional television systems. Linear sensors, progressive scan sensors and time delay and integration (TDI) sensors are used for industrial applications. Progressive scan simply means that the image is scanned sequentially, line by line (not interlaced). This is important for machine vision because it provides precise timing and a simple format. Any application requiring digitization and an interface with a computer will probably work better with progressively scanned imaging. However, few monitors can directly display this type of imaging and an interface is required; frame-capture cards can provide it. Scientific-grade matrices can be as large as 5120 x 5120 elements or even more (up to 9000 x 9000 ...). While large-format matrices provide the highest resolutions, their use is restricted by limitations in reading speed. A mass-produced camcorder with a 30 frame/s (fr/s) rate has a pixel data rate of about 10 Mpixels/s; a matrix of 5120 x 5120 elements operating at 30 fr/s has a pixel data rate of about 786 Mpixels/s. Large matrices can keep reading speeds manageable by being divided into sub-matrices, each read through its own parallel port. A compromise exists between frame transfer rate, the number of parallel ports (complexity of the CCD) and the interfacing with the downstream electronics. As each sub-matrix is processed by a different amplifier, on or off the chip, the image can present local variations in contrast and level, caused by differences in the level adjustment and gain of each amplifier (the extreme case being CMOS sensors, in which each pixel has its own amplifier, cf. infra).
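The data-rate figures above are straightforward to verify; the sketch below also shows why parallel output ports make large matrices tractable (the 16-port split is an arbitrary illustration, not a figure from the text):

```python
def pixel_rate(width, height, frames_per_s):
    """Total pixel data rate implied by a sensor format and frame rate."""
    return width * height * frames_per_s

total = pixel_rate(5120, 5120, 30)
print(total / 1e6)       # ~786 Mpixels/s for the whole matrix
print(total / 16 / 1e6)  # ~49 Mpixels/s per port if split over 16 ports
```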
Currently, large matrices can be associated to obtain what are called focal plane array mosaics. These are mainly used for very large images, especially in astronomy. The spatial resolution is often quoted as the number of pixels in the matrix. The common perception is that "bigger is better", both in size and in dynamic range. Matrices can reach 9216 x 9216 pixels with 16-bit dynamics; such a matrix requires 9216 x 9216 x 16, i.e. about 1.36 Gbits of storage per image. Compression of such images can be necessary if disk space is limited, and to lose as little resolution as possible, great progress has been and is still being made on compression algorithms. To increase the temporal resolution of an image sequence, the trend is toward faster acquisition frequencies and image transfer rates (acquisition at 5400 fr/s for 1 Mpixel can be reached with a CMOS sensor!). However, the user of such systems will have to determine which images are significant and save only those that have value, via data-reduction algorithms; otherwise, he risks being overwhelmed by tons of data to process.
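The storage figure quoted above follows directly from the format; a minimal check:

```python
def storage_bits(width, height, bits_per_pixel):
    """Raw (uncompressed) storage needed for one image."""
    return width * height * bits_per_pixel

bits = storage_bits(9216, 9216, 16)
print(bits / 1e9)      # ~1.36 Gbit per image, as quoted
print(bits / 8 / 1e6)  # ~170 MB per image before any compression
```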

4.5 Linear matrices

The simplest arrangement is the linear matrix: a single row of detectors (photodiodes or photogates). Linear matrices are used for applications in which either the camera or the object moves in a direction perpendicular to the line of sensors. They are used when strict control is imposed on the movement, as for example in a document scanner. The CCD shift register lies right next to each detector. Since the register is also sensitive to light, it is covered, or shielded, by an opaque metallic screen. The total size of the pixel is limited by the size of the gate. The resolution is inversely proportional to the distance between detectors (the pixel pitch).

4.6 Full frame transfer matrices

English speakers use "Full Frame Transfer", shortened to FFT, not to be confused with the Fast Fourier Transform. After integration of the received light, the pixels of the image are read line by line via a serial register, which then clocks its contents out through the output detection node (Fig. 8 and 9). All charge has to be transferred out of the serial register before the next line is transferred in. In full-frame matrices, the number of pixels is often a power of two (512 x 512, 1024 x 1024, ...) in order to simplify memory mapping. Matrices dedicated to scientific applications generally have square pixels in order to simplify image-processing algorithms. During readout, the photosites remain continuously irradiated, which can result in a smeared image. The smear runs in the direction of charge transport in the image area of the matrix. A mechanical or electronic external shutter can be used to mask the matrix during readout and thus avoid smear. When a strobe light is used to generate the image, a shutter is not necessary if the transfer is made between flashes. If the integration time is much longer than the readout time, the smear becomes relatively insignificant; this situation occurs very often in astronomical observations. The data transfer speed is limited by the bandwidth of the amplifier and the conversion capability of the analog-to-digital converter. To increase the actual reading speed, the matrix can be divided into sub-arrays that are read simultaneously. In figure 9, the matrix is divided into four sub-matrices; since they are all read simultaneously, the effective clock speed is multiplied by four. Software then rebuilds the image; this is done in a video processor, external to the CCD sensor, where the serial data is decoded and reformatted. Large-surface systems often allow the user to select a sub-matrix for reading.
The user can thus manage the compromise between transfer rate and image size, making it possible to obtain higher rates on a region of interest (sub-frame). However, depending on the architecture of the matrix, some sensors only allow selecting entire lines, not line fragments. In addition, some cameras have on-board memory whereas others do not. In all cases, the shutter time is the parameter limiting the maximum speed. In the limit, for a camera with no on-board memory, the limiting factor is the information transfer rate (in MB/s) to the remote memory (PC RAM, SATA, ...). A sufficiently short shutter time freezes objects in motion; if the object's velocity is too high, its image will be blurred. Recently, full-frame sensors of 8 cm x 8 cm have become available, with "contiguous" pixels measuring 8.75 x 8.75 µm, which corresponds to a resolution of 57 lp/mm. This detector allows acquisition at 2 fr/s (source: Fairchild, CCD595 sensor), with a resolution equivalent to that of photographic film.
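The 57 lp/mm figure follows from the pixel pitch and the two-pixels-per-line-pair rule; a one-function check:

```python
def nyquist_lp_per_mm(pitch_um: float) -> float:
    """Limiting resolution for contiguous pixels of the given pitch:
    one line pair needs two pixels."""
    return 1.0 / (2.0 * pitch_um * 1e-3)

print(round(nyquist_lp_per_mm(8.75)))  # 57 lp/mm, matching the CCD595 figure
```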

4.7 Frame transfer

A frame transfer system consists of two almost identical matrices (X columns x Y lines), one dedicated to the image pixels and the other to storage. The storage cells are structurally identical to the sensitive cells but are covered by an optical metallic shield to avoid any exposure to light. After the integration cycle, the charge is rapidly transferred from the sensitive cells to the storage cells. The transfer time to the shielded part depends on the size of the matrix but is typically under 500 µs. The smear is limited to the time taken to transfer the image to the storage area, so the whole process requires much less time than in a Full Frame Transfer (FFT) system. As the CCD area is doubled, the device is more complex than a simple full-frame system. The matrix must have some dummy charge wells between the sensitive part and the storage part; these wells collect any anticipated light leakage. Some matrices can operate either in full-frame (X x 2Y) mode or in frame transfer (X x Y) mode; in this case, the user must provide the shield for the frame transfer mode.
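The residual smear of a frame-transfer device scales as the ratio of transfer time to integration time; a rough estimate under an assumed exposure (the 20 ms integration is illustrative; only the 500 µs transfer time comes from the text):

```python
def smear_fraction(transfer_time_s, integration_time_s):
    """Approximate smear level: the image keeps integrating while it is
    shifted into the shielded storage area."""
    return transfer_time_s / integration_time_s

print(smear_fraction(500e-6, 20e-3))  # 0.025 -> ~2.5% smear at video rates
```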

5.Interline transfer

The "Interline Transfer" matrix consists of a series of photodiodes separated by vertical transfer registers that are covered by an optical metallic shield (Fig. 11). After integration, the charge generated by the photodiodes is transferred to the vertical CCD registers in about 1 µs, so the smear is minimized. The main advantage of interline transfer is that the transfer from sensitive pixels to storage pixels is fast, so there is no need to shutter the incident light; this is commonly known as "electronic shuttering". The disadvantage is that it leaves less space for the active sensors. The shield acts like a Venetian blind that masks half of the information present in the scene: the area fill factor can be as low as 20%. However, this can be remedied by placing microlenses (Fig. 12) just in front of the sensor surface to redirect the light toward the sensitive cells (HAD sensor: "Hole Accumulated Diode"). For very bright scenes, a portion of the incident light can still reach the vertical registers. Professional applications therefore use a frame interline transfer architecture, with a shielded storage block below the active part, as in frame transfer. As interline systems are mostly used in mass-produced camcorders, the design of the transfer registers is based on standard video synchronization. With a 2:1 interlacing system, the two fields are collected simultaneously but read alternately; this is called "frame integration". In EIA 170 (formerly RS170), each field is read every 1/50 s (1/60 s in the US), which allows a maximum integration time of 1/25 s for each field. The pseudo-interlacing system, or field integration, is obtained by changing the gate voltage; the centroid of the image is shifted by half a pixel in the vertical direction, which generates a 50% overlap between the fields. The pixels have twice the standard interline-transfer size and hence double the sensitivity.
However, it reduces the Modulation Transfer Function (MTF).

Microlenses

The optical fill factor may be less than 100% because of the manufacturing constraints of full-frame systems. In interline systems, the shielded storage area reduces the fill factor to less than 20%. Microlens assemblies (also called microlens arrays or mini-lens arrays) increase the effective optical fill factor (Fig. 12). However, it will not reach 100%, because of slight misalignments of the microlens array, mini-lens imperfections, non-symmetrical shielded areas and transmission losses. As the output voltage of the camera depends on the effective size of the sensor, increasing the fill factor with microlenses increases the effective size of the detector and hence the output voltage.

6.Progressive scan

Progressive scan simply means a non-interlaced, sequential, line-by-line scan of the image. CCD matrices do not actually scan the image, but it is easier to visualize the output as if they did. In scientific or industrial applications, it is sometimes called "slow scan". The main advantage of slow scan is that the whole image is captured at a given instant, unlike interlaced systems, which collect the two fields sequentially. Any vertical image motion from field to field disturbs the interlaced image in a very complex manner, and horizontal movements give a sawtooth appearance to vertical lines. If the movement is too large, the two fields show images that are too offset from one another. If stroboscopic light is used to freeze the movement, the image only appears in the field active during the light pulse. Because of these movement effects, only one field of data can be used for image processing, which reduces the vertical resolution by 50% and increases the vertical aliasing of the image. As a full frame is collected, slow-scan systems do not suffer from the movement effects seen in interlaced systems, and are therefore said to have enhanced resolution compared with them. For scientific applications, the progressive output is captured by an acquisition card (on-board or external to a computer). A post-processor then reformats the data for the desired display.

7.Time delay and integration

Time Delay and Integration (TDI) amounts to registering multiple exposures of the same moving object and adding them. The addition takes place automatically in the charge wells, and it is the temporal synchronization of the sensor that produces the multiple images. As a typical TDI application, consider a simple camera lens, which inverts the image of the object so that it moves in the direction opposite to the object's movement. As the image sweeps along the matrix, the charge packets are clocked at the same speed. In a flatbed scanner, the document is fixed and the camera or a mirror moves. Figure 13 illustrates the operation of four detectors in TDI mode. At time T1 the image is on the first detector and creates a charge packet. At T2, the image has moved to the second detector; simultaneously, the pixel clock transfers the charge packet to the well of the second detector, where the image creates additional charge that adds to the charge created at the first detector. The signal (charge) increases linearly with the number of detectors in TDI, whereas the noise increases only as the square root of the number of TDI elements, N_TDI. This results in a signal-to-noise ratio increasing as N_TDI^1/2. The well capacity limits the maximum number of TDI elements that can be used. For the TDI concept to work, the charge packets must always remain synchronized with the moving image; the precision with which the image speed is known therefore drastically limits the use of TDI systems.
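The N_TDI^1/2 scaling of the SNR can be checked with a small shot-noise simulation (the per-stage signal level is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
SIGNAL = 50.0  # mean photoelectrons collected per TDI stage (assumed)

def tdi_snr(n_stages, trials=200_000):
    """Shot-noise-limited SNR after summing n_stages synchronized exposures."""
    total = rng.poisson(SIGNAL * n_stages, trials).astype(float)
    return total.mean() / total.std()

for n in (1, 4, 16):
    print(n, round(tdi_snr(n), 1))  # SNR grows roughly as sqrt(n_stages)
```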

8.CCD sensor noise

For CCD sensors, and image sensors in general, everything that does not correspond to the image is considered noise. Noise can have diverse origins, from the collection of photons as charge to the conversion of the charge into voltage, including the various interline, column or frame transfer processes. It includes in particular phenomena due to thermal effects, light irradiation during transfer, and the filling of wells and their overflow onto neighbours, etc. (Fig. 14)

9.Dark current

The CCD output is proportional to the exposure ER x t, in which ER is the illuminance on the surface of the CCD sensor and t the integration time. The output signal can be increased by raising the integration time, and long integration times are generally used for low-light-level applications. However, this approach is quickly limited by the generation of dark current, which is integrated just like the photocurrent. The dark current is expressed as a current density [A/m2] or in electrons/pixel/second [e-/pix/s] (Fig. 15). For a large pixel (24 x 24 µm2), a dark current density of 1000 pA/cm2 produces about 36,000 electrons/pixel/second. If the system has a full well capacity (FWC) of 360,000 electrons, the well is filled in 10 seconds. The dark current matters only when t is large, as can be the case in low-light-level scientific applications (studies of plasmons, photoemission, low-reflectivity materials, astronomy, ...). A critical parameter in the design of the chip is therefore the significant reduction of the dark current. There are three potential sources of dark current:
  • Thermal generation and diffusion in the bulk,

  • Thermal generation in the depletion zone,

  • Thermal generation due to surface states.

The dark current can be measured by capturing images at various exposure times with the sensor closed by its cap. Some sensors include dark-current measurement via extra pixels, shielded and adjacent to the image surface, called "dark pixels". The dark current density varies significantly between manufacturers, ranging between 0.1 nA/cm2 and 10 nA/cm2 for silicon-based CCDs. The dark current due to the thermal generation of electrons can be addressed by cooling the system; in principle, the dark current density can be made negligible by adequate cooling. The dark current density decreases approximately by a factor of two for each decrease of 7 to 8°C in the temperature of the matrix, and vice versa. Cooling is particularly important in low-light-level scientific applications where high precision on the charge level of the wells (greyscale) is required. TEC (ThermoElectric Cooling) systems are Peltier devices, driven by an electric current, that pump heat from the CCD to a radiator. The radiator is cooled by air (forced or not) or by a circulating liquid (water, liquid nitrogen, ...). Although liquid nitrogen is at about -196°C, the optimal working temperature is between -60°C and -120°C, because the charge transfer efficiency (CTE: the reliability of transferring charge from site to site) and the quantum efficiency decrease at lower temperatures. Condensation is a problem, so matrices should be placed in a low-pressure chamber or a chamber filled with a dry atmosphere. The dark current can be about 3.5 electrons/pixel/second at -60°C and 0.02 electrons/pixel/hour at -120°C. The output amplifier continuously dissipates heat, which results in local heating of the silicon chip. To minimize this effect, the output amplifier is often separated from the other active pixels by several isolation pixels. In the case of cooled cameras on board spacecraft, the camera often ends its life for lack of coolant.
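The figures above can be tied together in a few lines; the 7.5°C halving step below is the midpoint of the 7-8°C rule of thumb, and real dark current falls off even faster at very low temperatures:

```python
ELECTRON_CHARGE = 1.602e-19  # coulombs

def dark_electrons_per_s(density_pA_per_cm2, pixel_um):
    """Convert a dark-current density into electrons/pixel/second
    for a square pixel of the given side length."""
    area_cm2 = (pixel_um * 1e-4) ** 2
    current_A = density_pA_per_cm2 * 1e-12 * area_cm2
    return current_A / ELECTRON_CHARGE

def cooled(dark_e_per_s, delta_T_degC, halving_step=7.5):
    """Rule of thumb: dark current halves for every ~7-8 degC of cooling."""
    return dark_e_per_s * 0.5 ** (delta_T_degC / halving_step)

d = dark_electrons_per_s(1000, 24)
print(d)              # ~36,000 e-/pix/s for a 24 um pixel at 1000 pA/cm2
print(360_000 / d)    # a 360,000 e- full well fills in ~10 s
print(cooled(d, 60))  # 60 degC of cooling divides it by 2**8 = 256
```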


10.Blooming

A pixel is saturated when its Full Well Capacity (FWC) is reached. When a potential well is full, the excess charge can overflow into the wells of adjacent pixels. Two types of blooming can be distinguished according to the direction of the overflow. These overflows can of course be avoided by never reaching the saturation of the FWC, hence by working with short exposure times... which is not always practical for all or part of the image. This is particularly true for scenes with highly heterogeneous illumination (high-contrast objects, flames, explosions, galaxies, night lighting, ...).
10.1 Horizontal blooming
In this case, the charge overflows into the adjacent columns. Only drains, placed along each pixel or each column, can prevent the excess charge from being collected by the neighbours. In interline transfer systems, this structure is called a Vertical Overflow Drain (VOD), shown in orange in figure 17. Note again that the presence of drains makes the architecture more complex and also reduces the fill factor: the effective sensitive surface decreases, because charge generated by light falling on the drain is directly eliminated. The operation of the drain can be coupled to the electronic shutter; in this case, it is used to purge the image rapidly, for example for an asynchronous acquisition.
10.2 Vertical blooming – Smear
In some cases, the sensors remain exposed during the transfer along the column. If the full well capacity is then exceeded, the charges overflow during their transfer, generating vertical streaks called "smear" (Fig. 18). The only active ways to fight this overflow are to reduce the exposure time or to choose another architecture, at the expense of the fill factor.

11.Quantum efficiency

The Quantum Efficiency (QE) gives the quality of the light-to-charge transformation of a CCD sensor. It is about 40% for front-illuminated CCDs, i.e. 40 electrons generated for 100 incident photons. Quantum efficiencies above 90% can be reached at some wavelengths with thinned, back-illuminated CCDs (photographic films and the human eye have a maximum quantum efficiency of ...10%). The polysilicon transmittance drops drastically for wavelengths below 600 nm, and polysilicon is opaque for wavelengths below 400 nm. The range of spectral sensitivity can also be extended by using phosphors deposited on the surface of the CCD (Fig. 19). These phosphors absorb, for example, the ultraviolet (120 to 450 nm) and re-emit in the window of maximum sensitivity of the component (540-580 nm). All CCD sensors are front-illuminated unless they are thinned and back-illuminated, a process that reduces their thickness from a few hundred micrometers to a few tens of micrometers.
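The definition above reduces to a simple proportionality; this minimal sketch reproduces the two QE figures quoted in the text (40% front-illuminated, 90% thinned back-illuminated).

```python
# Sketch: expected number of photoelectrons for a given quantum efficiency.
# The QE values used below are the illustrative figures from the text.

def photoelectrons(n_photons, qe):
    """Mean number of electrons generated by n_photons at efficiency qe."""
    return n_photons * qe

print(photoelectrons(100, 0.40))   # front-illuminated CCD
print(photoelectrons(100, 0.90))   # thinned, back-illuminated CCD
```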

12.Transport and conversion of the charge

The charge is transferred successively from gate to gate, and finally converted at the output node. The quality of the output signal directly depends on two vital parameters: the Charge Transfer Efficiency and the efficiency of the output conversion.
12.1 Charge transfer efficiency
The charge transfer efficiency quantifies the quality of the passage of the charge from one gate to another, from one pixel or line/column to another, and through the gates of the readout shift register. The charge transfer efficiency of a gate is often very close to 1; however, the slightest deviation can have a huge impact on the final signal depending on the size of the matrix and the number of transfers. It can be compared to the amount of water that would remain in a bucket after transferring all its content into another bucket nearby: the percentage of the initial amount of water transferred into the second bucket is the CTE. Good-quality CCDs have a CTE close to 99.999%. If we take again the example of the Fairchild CCD595 sensor, the number of pixels is 9216 by 9216, with 3 gates per pixel. Transferring information from the top pixel to the bottom one and through the output register requires 9216 x 3 = 27648 transfers. The vertical charge transfer efficiency is about 99.9999%, so only 0.0001% of the signal is lost at each transfer. Over the 27648 transfers, however, 2.7% of the charge will be lost. For this particular sensor, the horizontal charge transfer efficiency is slightly different, because the information only travels through ¼ of the pixels to reach the output node, and a system with only two gates is used for each of the four output registers. The horizontal charge transfer efficiency is about 99.995%, which leaves 2.2% of the charge lost for the last pixel. If the CTE of a CCD is very bad, streaks can even appear on the image.
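The bucket analogy compounds geometrically: the fraction retained after N transfers is CTE^N. The sketch below reproduces the vertical-transfer arithmetic of the CCD595 example.

```python
# Sketch of the CTE bucket analogy: fraction of the charge remaining after
# n_transfers, each with efficiency cte. The numbers reproduce the vertical
# transfer example of the Fairchild CCD595 given in the text.

def charge_retained(cte, n_transfers):
    """Fraction of the original charge left after n_transfers."""
    return cte ** n_transfers

n_vertical = 9216 * 3                            # 27648 gate-to-gate transfers
lost = 1.0 - charge_retained(0.999999, n_vertical)
print(f"vertical transfers: {n_vertical}, charge lost: {lost:.1%}")
```

With a per-transfer loss of 10⁻⁶, 27648 transfers lose 1 − 0.999999²⁷⁶⁴⁸ ≈ 2.7% of the charge, matching the figure in the text.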
12.2 Output structure
At the end of its journey through the different gates, the charge is finally converted into voltage by a floating diode or a floating diffusion. The voltage difference between the final state of the diode and its pre-stored value is linearly proportional to the number of electrons \(n_e\). The signal voltage after the source follower is \(V_{signal} = G\,\dfrac{q\,n_e}{C}\), where the gain \(G\) is approximately equal to 1, \(q\) is the charge of the electron and \(C\) is the capacitance of the output node; the charge conversion rate \(Gq/C\) typically varies between 0.1 µV/e⁻ and 10 µV/e⁻. The signal is then amplified, processed and digitized by electronic systems external to the CCD sensor.
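The conversion relation can be checked numerically. The 50 fF sense-node capacitance below is an assumption chosen so that the resulting conversion gain falls inside the 0.1–10 µV/e⁻ range quoted above.

```python
# Sketch of the floating-diffusion conversion V = G * q * n_e / C.
# The capacitance value is an assumption, picked to land inside the
# 0.1-10 uV/e- conversion-gain range quoted in the text.

Q_E = 1.602e-19          # electron charge (C)

def output_voltage(n_electrons, capacitance_f, gain=1.0):
    """Signal voltage at the output node for n_electrons collected."""
    return gain * Q_E * n_electrons / capacitance_f

c = 50e-15               # assumed 50 fF sense-node capacitance
print(f"conversion gain: {output_voltage(1, c) * 1e6:.2f} uV/e-")
print(f"10000 e-      : {output_voltage(10000, c) * 1e3:.2f} mV")
```

A 50 fF node gives about 3.2 µV/e⁻, so a well of 10000 electrons produces a signal of roughly 32 mV before external amplification.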

13.CCD chip size

Historically, the “vidicon” vacuum tubes used for professional television applications were characterized by the diameter of the tube. In order to minimize the distortion and the non-uniformities of the tube, the recommended size of the image was considerably smaller than the diameter of the complete tube. When CCDs replaced the tubes, the CCD industry maintained the image size but also continued to use the nomenclature of tubes (Table 1). Although each manufacturer provides slightly different sensor and pixel sizes, the nominal sizes of 640 x 480 matrices are given in table 2. These pixels tend to be square. However, and in particular for interline transfer systems, approximately half of the pixel is dedicated to the vertical shielded transfer register. This means that the active width of the detector is half the width of the pixel; thus, the active surface of a pixel is rectangular in interline transfer systems. For video professionals, this asymmetry is not a source of significant disruption of the image quality. However, it is good to be aware of it in scientific applications such as image correlation or stereophotogrammetry. The decrease in optical format is linked to cost. The price of CCDs is principally determined by the manufacturing cost of the semiconductor wafers. When the size of the chip decreases, more matrices can be placed on a same wafer, which makes the price of each individual chip decrease. The tendency to reduce the size of the chips will probably continue as long as the optical and electrical performances of imaging systems do not change. However, smaller pixels reduce the size of the charge well: for a given luminous flux and aperture, the smallest sensors will have a lower sensitivity. Smaller chips make smaller cameras possible; however, in order to maintain the resolution, the pixels must be made smaller as well.
Here, the compromise is relative to the size of the pixel, to the focal distance and to the total size of the chip. The table 3 shows the most used sensors in general audience photography.
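The compromise between chip size and pixel size can be made concrete. The 6.4 × 4.8 mm active area used below is the conventional value for the 1/2-inch format (an assumption consistent with the tube-derived nomenclature of table 1, not a value taken from the tables themselves).

```python
# Sketch: nominal pixel pitch of a 640 x 480 matrix on a given optical
# format. The 1/2" active-area dimensions (6.4 x 4.8 mm) are the usual
# convention for that format and are assumed here for illustration.

def pixel_pitch_um(active_w_mm, active_h_mm, cols=640, rows=480):
    """Return the (horizontal, vertical) pixel pitch in micrometers."""
    return (active_w_mm / cols * 1000.0, active_h_mm / rows * 1000.0)

w, h = pixel_pitch_um(6.4, 4.8)          # assumed 1/2-inch format
print(f'1/2" format: {w:.1f} x {h:.1f} um pixels')
```

Halving the format to the same 640 × 480 resolution halves the pitch, which quarters the well surface and hence the sensitivity, as discussed above.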

14. Defects

Large matrices sometimes present defects. They are categorized in table 4. Matrices with few defects are more expensive than those with many defects. Some manufacturers classify their chips in grades: grade 1 being of better quality than grade 2, etc. These grades are given according to the nature, the number and the location of the defects for a standard lighting of the sensor. Depending on the application, the location of the defects can be important: the sweet spot has to be completely operational, whereas more substantial defects can be tolerated closer to the periphery of the matrix. Do not confuse the dead pixel with the dark pixel: the dark pixel is a reference shielded pixel that establishes a benchmark for the dark current. Some manufacturers publish their grade selection criteria; figure 21 takes the example of the Kodak KAF0401E.

15.Size and architecture

Choice criteria can entirely depend on the laboratory's application, budget or history. Although there is no general rule, a specific type of device can often be linked with a specific type of laboratory:
  • Scientific applications: frame transfer,
  • Television and mass market: interline transfer,
  • Industrial applications: linear sensors, progressive scan, TDI, ...
  • Military applications: anything useful ...
Scientific applications often deal with high spatial or temporal resolution, or high greyscale resolution. Mass-market applications mainly have to be compatible with high-contrast images. Noise is not a great concern for the general public, but for professional applications involving numerous relays, noise has to be reduced to a minimum. For industrial applications, the context and the application itself dictate the choice of lighting and sensors; the cost is also a concern for many laboratories. For military applications, everything has to contribute to the final goal: visualization of the desired phenomenon, no matter at what cost. During the last decade, from the geometrical point of view, manufacturers focused on maximizing the fill factor, and thus on covering the image space with as many sensitive pixels as possible. However, an increase in the number of pixels for a better resolution leads to a decrease in the signal-to-noise ratio, in sensitivity and in exposure latitude, due to the pixels being smaller. Super-CCDs address these problems and improve the quality of the final image. In the year 2000, the first Super-CCDs were created to improve performance; further generations improved resolution, and the latest tend to increase sensitivity. The SR Super-CCDs created by Fuji radically extend exposure latitude and improve sensitivity thanks to the two different types of sensors contained in their photosites. The octagonal photodiode (Fig. 22) and the honeycomb layout allow a greater light-collecting surface for each photosite. The surface of one photodiode in a ½-inch, 3-million-pixel Super-CCD is 2.3 times that of a same-size conventional CCD. Thus sensitivity, signal-to-noise ratio and exposure latitude could be improved. The honeycomb layout of the pixels also brings the result closer to the image quality of human vision. Super-CCDs HR offer far better resolution: a pixel is interpolated between each pair of sensors.
Another useful property is the fineness of their electrodes, which reduces the depth of the photosite pits; photosites thus receive more light. Furthermore, Super-CCDs are still being developed. The Super-CCD SR and Super-CCD EXR models, for instance, combine the properties of the two previous models and create an image with a quality almost comparable to that of the human eye. Traditionally, there is only one photodiode in each photosite to capture the whole light range. Super-CCD SR have two different types of photodiodes in each photosite (Fig. 25); photosites are thus doubled. S photodiodes are highly sensitive but have a narrow exposure latitude: they capture dark and mid-bright tones. On the other hand, R photodiodes are less sensitive (i.e. they record a darker image) but can resolve details in brighter areas, contrary to conventional photodiodes. The combination of these two photodiodes allows an exposure latitude four times as wide as that of a conventional photodiode. This creates a more detailed image, especially in the darker and lighter zones. Furthermore, the increase of the sensor's exposure latitude (or dynamic range) tolerates exposure errors up to a certain point (i.e. more tolerance for poor lighting conditions, over- or underexposure). There are two generations of Super-CCD SR exploiting the same principle but with a different layout of the photodiodes. In the first version, each photosite is divided in two to contain both photodiodes. In the second version, the S photodiode occupies the whole surface of the photosite and the R photodiode is placed between the octagonal photosites. This technique allows greater sensitivity, because the photodiode surface is greater in SR II than in SR. The Super-CCD EXR sensor was revealed in 2008 and combines the properties of the Super-CCD HR and the Super-CCD SR.
It is a three-in-one universal sensor with very high resolution, high sensitivity and a wide exposure latitude, thus radically improving image quality. The Super-CCD EXR can be adjusted to the object being photographed to obtain the best possible image. This sensor features three main modifications compared with the previous generations. (1) First, the color filters are laid out differently on the matrix. This new layout answers the need for noise reduction: to increase sensitivity, the sensor's output electronic gain has to be amplified, but this procedure also increases noise. Noise can also be reduced by grouping pixels (pixel binning). Binning is often done by grouping pixels horizontally or vertically, generating flaws by increasing the space between same-colour grouped pixels; correcting these flaws greatly reduces resolution. For the Super-CCD EXR, the layout of the color filters allows a diagonal binning (a technology called “close incline pixel coupling”) and avoids the flaws created by inter-pixel space. This binning technique doubles the sensitive element's surface and thus increases sensitivity while keeping noise to a minimum. (2) Another benefit of the Super-CCD EXR compared with previous-generation sensors is its high exposure latitude. This sensor can perceive two different images of a same object: one in high sensitivity, the other in low sensitivity. With the same technology as the Super-CCD SR, the EXR uses Dual Exposure Control: two same-color adjacent pixels have different sensitivities (high for one half of the sensor's photodiodes and low for the other half). This can be done by controlling the exposure time of the photodiodes: A pixels have an exposure time of 1/100th of a second whereas B pixels have 1/400th of a second. Thus A pixels record details in the dark tones whereas B pixels record the bright shades. The combination of both renderings helps maintain a good exposure over the whole image.
Contrary to SR sensors, the photodiodes in EXR sensors are all the same size, for a wider exposure latitude and greater sensitivity. (3) The third property of the Super-CCD EXR is high resolution. Its structure allows the use of all pixels, and its processor optimizes the signal processing to obtain the highest possible resolution. Even though it was primarily designed for greater sensitivity (and pixel grouping), the image quality is the same as with other 12-megapixel sensors. CMOS sensor technology (cf. infra) also evolves in terms of geometry. In one of the latest generations, the photodiodes have been rotated by 45° to obtain a resolution 1.4 times as high as that of a conventional CMOS sensor with the same properties. Thanks to this new structure, the IMX021 generation of CMOS sensors generates high-quality images with low noise. As for the Sony Exmor R sensors, their photodiodes are placed differently from conventional sensors: they are found just below the color filter, for a greater sensitivity and much lower noise.
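The Dual Exposure Control idea described above can be sketched as a merge of the two readings. This is not Fuji's actual algorithm; the 12-bit saturation level, the 4:1 exposure ratio (1/100 s vs. 1/400 s) and the blending rule are illustrative assumptions.

```python
# Sketch (NOT Fuji's actual algorithm) of merging Dual Exposure Control
# readings: "A" pixels integrate 1/100 s (shadow detail), "B" pixels 1/400 s
# (highlight detail). The short exposure is rescaled to the long one.

FULL_WELL = 4095                      # assumed 12-bit saturation level

def merge_dual_exposure(a_value, b_value, ratio=4.0):
    """Combine a long-exposure (A) and a short-exposure (B) reading."""
    if a_value < FULL_WELL:           # A not saturated: trust the long exposure
        return float(a_value)
    return b_value * ratio            # A clipped: rescale the short exposure

print(merge_dual_exposure(1200, 310))    # shadow region: A used directly
print(merge_dual_exposure(4095, 2000))   # highlight: rescaled B reading
```

The rescaled B reading can exceed the single-exposure full well, which is precisely how the combined response extends the exposure latitude.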

16.CMOS sensors

CMOS sensors were created at the beginning of the 1990s. Among other things, they can directly perform the charge conversion on the generation photosite thanks to their per-pixel amplifier. This characteristic gives them the ability to dispense with several transfers and to increase the processing speed. Their main advantages come from the way they are manufactured.
  • Manufacturing identical (90%) to computer chips (particularly to the DRAM Dynamical Random Access Memory),

  • Cheap mass production,

  • Direct charge conversion without any transfer: neither blooming nor smearing,

  • Each pixel has its own amplifier, no shift register: Active Pixel Sensor,

  • Each pixel can be individually addressed,

  • No complex time clocks,

  • Low energy consumption (100 times less than CCD),

  • High reading rate.

In the last few years, they have really come to the fore compared with CCD sensors, which is directly linked to their use in cellular telephony as video devices or on-board cameras. This is a direct consequence of their low-cost manufacturing and low consumption. For scientific applications, it is their operating speed (image rate), linked to the charge conversion on the creation site, which makes them more appropriate than CCDs. One of the latest-generation sensors can digitize 1024 x 1024 pixel images at a rate of 5400 images/second... Moreover, each pixel can be independently driven (manually or automatically), which explains why they can be used in vision for highly contrasted scenes. However, they do induce a slight bias, which can generate major differences between the images sensed by the human eye and the raw image coming from the CMOS sensor. The size of the sensors is similar to the CCDs', and the pixels can be close to a micrometer in some cases. One of the problems raised by the CMOS technology is the loss in spatial resolution due to the presence of the amplifier A on the photosite (comparable to the optic nerve on the human retina). This is shown on figure 32 (sensitive surface in grey and amplifier in yellow). At first, only one transistor was used to amplify. Nowadays, these amplifiers can include 3 to 5 transistors (3T and 5T CMOS) and the fill factor can be significantly diminished. However, this loss can be compensated by using microlenses, as for interline-transfer CCDs. Moreover, the fill factor is always higher for full-frame CCDs than for 3T CMOS sensors. For some CMOS models, we can also notice that even if the amount of information transferred is high, the performances in terms of signal-to-noise ratio can be inferior to the CCDs'. However, they are more and more used in digital photography, and many manufacturers include them in their camera bodies.
Since 2007, CMOS sensors of more than 12.4 effective megapixels have allowed users to come close to, and even exceed, the 24 x 36 mm size usually associated with photography. More recently, CMOS sensors are usually coupled to one analog-to-digital converter per column, which improves the data acquisition speed up to almost 10 images/second. Finally, there is a tendency to develop smart cameras, in which, thanks to the CMOS high-speed transfer associated with on-board processing, an interpreted image can be directly downloaded at the output. Some of the smart camera functions are then directly integrated into the chip. This operating mode has advantages in terms of size and simplicity. Figure 33 shows the comparative schematic drawing of all the ways that can result in data acquisition. Nowadays, CCD and CMOS cost roughly the same for similar quantities (high-quality CMOS are manufactured on assembly lines separate from the DRAM...). For demanding applications, the final decision is not a matter of “CCD or CMOS” but takes into account the adequacy of the product with the task that needs to be carried out. We need to keep in mind that CMOS allow high speed, but that CCD remain the best for their high dynamics and their better fill factor.

17.Noise quantification

In this paragraph, we offer a description of the main sources of noise and of their respective influence on the signal's quality.
Read noise
The read noise is due to the fluctuations of photocharges at the terminals of the MOS capacitor constituted by the pixel (figure 34).
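A common model for the fluctuations on a MOS capacitance is kTC (reset) noise; the sketch below uses that model as an assumption (the text itself only refers to figure 34), with an assumed 50 fF capacitance at room temperature.

```python
# Sketch: kTC (reset) noise on the sense-node capacitance, a standard model
# for the read-noise floor of a MOS capacitor. The capacitance and the
# temperature are illustrative assumptions, not values from the text.

import math

K_B = 1.381e-23          # Boltzmann constant (J/K)
Q_E = 1.602e-19          # electron charge (C)

def ktc_noise_electrons(capacitance_f, temp_k=300.0):
    """RMS reset-noise charge sqrt(kTC)/q, expressed in electrons."""
    return math.sqrt(K_B * temp_k * capacitance_f) / Q_E

print(f"{ktc_noise_electrons(50e-15):.1f} e- rms at 300 K, 50 fF")
```

The value is large (tens of electrons), which is why real readout chains cancel it with correlated double sampling rather than tolerating it.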
Dark noise
It depends on the charge accumulation duration, i.e. on the sensor integration time, and on the number of dark electrons generated by the sensor per second. We then get the relation \(\sigma_{dark} = \sqrt{N_{dark}\, t_{int}}\), where \(N_{dark}\) is the dark-electron rate (electrons/second) and \(t_{int}\) the integration time.
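The relation is a square-root law in the accumulated dark charge; a minimal sketch with an assumed dark-electron rate:

```python
# Sketch of the relation above: the dark shot noise grows as the square
# root of the accumulated dark charge (dark-electron rate x integration
# time). The 25 e-/s rate below is an illustrative assumption.

import math

def dark_noise_electrons(dark_rate_e_per_s, t_int_s):
    """RMS dark shot noise sigma = sqrt(N_dark * t_int), in electrons."""
    return math.sqrt(dark_rate_e_per_s * t_int_s)

print(dark_noise_electrons(25.0, 4.0))   # 100 accumulated e- -> 10 e- rms
```

Quadrupling the integration time only doubles the noise, but cooling (see section on dark current) lowers the rate itself exponentially.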
Photon noise
Digitization noise
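The sources listed in this paragraph are statistically independent and therefore add in quadrature. This sketch assembles a total noise budget; all numeric values are illustrative assumptions, and the quantization term uses the standard q/√12 model for a uniform ADC step.

```python
# Sketch: total noise budget for the sources of this section, added in
# quadrature. Photon noise is modeled as sqrt(signal) (shot noise) and
# digitization noise as step/sqrt(12); all numbers are assumptions.

import math

def total_noise(read_e, dark_e, signal_e, adc_step_e):
    """RMS total noise (electrons) from independent sources."""
    photon = math.sqrt(signal_e)              # photon (shot) noise
    quant = adc_step_e / math.sqrt(12.0)      # digitization noise
    return math.sqrt(read_e**2 + dark_e**2 + photon**2 + quant**2)

print(f"{total_noise(read_e=10, dark_e=5, signal_e=10000, adc_step_e=3):.1f} e- rms")
```

At high signal levels the photon term dominates, which is why the SNR of a well-exposed pixel approaches √signal regardless of the readout chain.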

18.Comparison of CCDs and CMOSs

Uses and precautions

1. Depletion zones are graded, and the gradients should overlap so that the charge transfer occurs