On Automatization

Recording animal behavior usually requires no more than the trained eye of an experienced observer. However, "manually" recording animal behavior also has disadvantages that must be taken into account: the performance of a single human observer may vary to some degree, constrained by fatigue or mood, and each observer may introduce an observer-specific bias into the recording process. These obstacles have been known for almost as long as animal behavior has been recorded (1, 2). Variability can be minimized by introducing restrictive protocols for the recording process and by validating inter-observer reliability (3). An automated recording system, however, minimizes this variability to the highest degree. Furthermore, some information (e.g. accurate metric measures) cannot be obtained by merely observing an animal, and human observation is also limited in duration, making it difficult to gain longitudinal data (e.g. on circadian rhythm). On the other hand, the recording of complex behavioral patterns and interactions between animals is very difficult to automate. Given these limitations, automatization is favorable whenever the technical solution provides significant assistance and at least does not deteriorate the results. Contingent upon the respective scientific question, automated recording of animal behavior can thus be superior to manual scoring.

Digital Image Processing

Digital image processing has evolved rapidly over the last decades along with technological progress in computer science (4). Generally speaking, digital image processing comprises any kind of computation applied to an image: either manipulating an image (e.g. applying image filters) or extracting information from it (e.g. image size). As computers rely on numerical rather than pictorial information, an image that is to be processed by a computer first has to be transferred into a computer-readable format. A digitized image comprises a matrix of picture elements (pixels). Each pixel has a defined X- and Y-position and a certain color information. To simplify matters, let us assume for a moment that we are colorblind and limited to perceiving only 256 gray-scales ranging from black (0) to white (255). A digitized image (pixel matrix) can then be regarded as a grid lying over the picture, with each grid cell holding a value between 0 and 255 representing the gray-value of the respective pixel (Fig. 1). Storing this information on a computer requires 8 bit (2⁸ = 256; values from 0 to 255) of memory per pixel. Color images need more memory to store the additional information. A range of different "color models" is available that use different approaches to store color information. One widely used model is the RGB color model, which defines the color of each pixel by a triplet of color values for Red, Green, and Blue. Apart from the threefold memory usage, basically the same matrix rules apply to color images as to grayscale images. As all pictorial information is stored numerically, mathematical operations can be performed on digital images. For example, the brightness of a grayscale image can be increased simply by adding a constant value to each pixel. Instead of adding constant values, the gray-value of a pixel can also be manipulated depending on the values of its neighboring pixels - this is how image filters work. With respect to behavioral analysis it is of special interest to apply rules for object recognition.
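Because a grayscale image is just a matrix of numbers, these operations can be sketched in a few lines of Python (using NumPy as a stand-in for the imaging software's internals; the pixel values are invented for illustration):

```python
import numpy as np

# A tiny 3x3 grayscale "image": one 8-bit gray-value (0-255) per pixel.
img = np.array([[ 10,  50, 200],
                [ 40, 120, 255],
                [  0,  90, 180]], dtype=np.uint8)

# Increase brightness by adding a constant to every pixel,
# clipping at 255 so the result stays within the 8-bit range.
brighter = np.clip(img.astype(np.int16) + 40, 0, 255).astype(np.uint8)

# A simple neighborhood operation (the principle behind image filters):
# replace each inner pixel by the mean of its 3x3 neighborhood.
blurred = img.copy()
for y in range(1, img.shape[0] - 1):
    for x in range(1, img.shape[1] - 1):
        blurred[y, x] = img[y-1:y+2, x-1:x+2].mean()
```

Note the cast to a wider integer type before adding: in pure 8-bit arithmetic, 250 + 40 would silently wrap around instead of clipping at white.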
As a simple example, let us imagine a picture of a black mouse on a white surface. Adjacent pixels within a certain color (or gray-scale) range can be detected by simple computational procedures: the procedure works through each pixel of the picture and checks (a) whether its gray-value lies within the required range and (b) whether at least one of its neighboring pixels also lies within that range. Applying these rules detects areas within the image of at least two pixels in size. In more sophisticated object-detection procedures, the minimum number of pixels can be defined in order to prevent the tracking of small objects like mouse droppings. Subsequently, the detected object can be subjected to further mathematical operations, e.g. calculating its circumference, detecting its center of mass, or calculating its mean gray-value.
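The two rules above amount to a connected-component search. A minimal sketch in plain Python (the threshold range and the minimum size are parameters one would tune to the actual footage):

```python
# Minimal connected-component detection on a grayscale image, following
# the two rules above: a pixel belongs to an object if (a) its gray-value
# lies within the threshold range and (b) it connects to another
# in-range pixel.

def find_objects(img, lo, hi, min_size=2):
    """Return one list of (x, y) pixels per detected object consisting
    of at least min_size connected in-range pixels."""
    h, w = len(img), len(img[0])
    in_range = lambda x, y: lo <= img[y][x] <= hi
    seen = set()
    objects = []
    for y in range(h):
        for x in range(w):
            if (x, y) in seen or not in_range(x, y):
                continue
            # flood fill from this seed pixel (4-connectivity)
            stack, blob = [(x, y)], []
            seen.add((x, y))
            while stack:
                cx, cy = stack.pop()
                blob.append((cx, cy))
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and (nx, ny) not in seen and in_range(nx, ny):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            if len(blob) >= min_size:  # drop droppings-sized specks
                objects.append(blob)
    return objects
```

For a dark mouse on a white floor one would call, say, `find_objects(img, 0, 80, min_size=50)` and expect a single large blob back.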

[Fig. 1 images, panels A)-C): "digitizing a mouse"]

Fig. 1: Digitizing a mouse. A digitized image comprises a matrix of pixels. Each pixel has a defined X- and Y-position and a certain gray-value. The pixel matrix can be regarded as a grid lying over the picture (A). Each grid cell can hold only one gray-value (B). Gray-values range from 0 (black) to 255 (white) (C).

 


Animal Tracking


In order to collect spatial information about an animal's movement by means of digital image processing, the information has to be collected sequentially. This can be achieved by analyzing subsequent image frames of a digitized video. By extracting the X- and Y-coordinates representing the position of the mouse for each individual image frame, the path can be measured. Ideally, time should be included as an additional piece of information. If the images are captured and analyzed at a constant frame rate or, alternatively, the exact time for each coordinate pair is recorded simultaneously, valuable information like speed, stops, etc. can be calculated from the data. Speaking of timelines, it has to be noted that the total path length may vary to a great extent depending on the frame rate (time resolution). As Benoit Mandelbrot put it: "Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line." (5). Likewise, the path length of a moving mouse depends on how many tiny bits of covered distance are finally summed up to the total path length: taking only 10 sample points within 10 minutes of movement will result in a considerably shorter path than tracking the same movement sampled with 10 000 X- and Y-coordinate pairs. To process digital images for animal tracking, basically a camera, a computer, and appropriate software are needed. Images recorded by the video camera are digitized by means of a frame-grabber board or, if the video source already delivers digitized data, the camera can simply be connected to the computer via a USB (universal serial bus) or FireWire port. A schematic setup for an open-field test is shown in Figure 2.
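The dependence of path length on sampling density is easy to demonstrate numerically. The sketch below (plain Python, with a hypothetical circular path standing in for a mouse's trajectory) measures the same path at two sampling rates:

```python
import math

def path_length(points):
    """Sum of straight-line distances between successive sample points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def circular_path(n_samples, radius=1.0):
    """n_samples coordinate pairs along one lap of a circular path."""
    return [(radius * math.cos(2 * math.pi * i / n_samples),
             radius * math.sin(2 * math.pi * i / n_samples))
            for i in range(n_samples + 1)]  # +1 closes the lap

coarse = path_length(circular_path(10))     # few samples: corners are cut
fine   = path_length(circular_path(10000))  # dense sampling: near 2*pi
```

With only 10 sample points the measured length is about 6.18, while 10 000 points yield nearly the true circumference of 2π ≈ 6.28: the coarser sampling cuts the corners, just as a low frame rate shortens a mouse's measured path.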


Fig. 2:
Setup for an automated animal tracking system for an open-field test. The open field is an 80 x 80 cm square arena with walls 40 cm high. In the open-field test, mice have the opportunity to explore the arena during a fixed period of time; the dependent variables are usually total locomotor activity and qualitative measures of exploration (such as thigmotaxis, the proneness to stay near the walls, which indicates anxiety-related behavior and is measured as the time spent near the walls versus in the center of the arena).

Software

Image analysis is accomplished by image processing software. Today, various commercial animal tracking systems are available. These systems are well suited for animal tracking and subsequent analysis and, what is more, they usually include a wide range of personalized customer support that enables you to start tracking within a few days. To illustrate the functionality of such an image-based tracking system, I will present my home-made tracking program "Animal tracking". It was written in the "Analytical Language for Images" (ALI), an internal macro language included in the imaging software "Optimas". In biological research, (customizable) imaging software like Optimas is usually used for analyzing digital images obtained from microscopic data. This kind of software is therefore widely distributed and can be found in almost any university. From an image analyst's point of view it does not matter whether the image to be analyzed is of micro- or macroscopic origin. Hence, these customizable imaging software packages are a great tool for getting hold of a reliable animal tracking solution.

Preprocessing

When capturing an image from a video camera mounted above the test apparatus, usually not all pictorial information is of value. In an open-field test, for example, only the inner square of the open field is of interest. Depending on the imaging software, a region of interest (ROI) can be defined; in Optimas, the ROI is marked by green lines. If the camera is not positioned directly above the center of the open field, the image will be compromised by spatial distortion. In Optimas (and most likely in all state-of-the-art imaging software) the spatial distortion can be calibrated by marking each corner and entering the exact X- and Y-coordinates of these points. Consequently, each exported X-/Y-coordinate pair will be calibrated according to the defined spatial distortion.
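The corner-based calibration corresponds to fitting a projective (homography) transform: the four marked corners and their real-world coordinates determine eight parameters, after which any tracked pixel can be mapped into arena coordinates. A sketch with NumPy (the corner pixel values are invented; Optimas' internal method may differ):

```python
import numpy as np

def fit_homography(src, dst):
    """Solve the 8 linear equations that determine the 3x3 projective
    transform mapping four src (pixel) points onto four dst (arena) points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def calibrate(H, x, y):
    """Map a raw pixel coordinate into calibrated arena coordinates (cm)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Invented example: where the arena corners appear in the camera image...
corners_px = [(102, 75), (598, 90), (560, 430), (130, 415)]
# ...and their true positions in an 80 x 80 cm open field.
corners_cm = [(0, 0), (80, 0), (80, 80), (0, 80)]
H = fit_homography(corners_px, corners_cm)
```

Once H is fitted, every exported coordinate pair is passed through `calibrate`, which is what "calibrated according to the defined spatial distortion" amounts to mathematically.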

Fig. 3: Spatial distortion and region of interest (ROI). The inner square of the arena is marked as the ROI; therefore only objects detected within the ROI are considered for analysis. The camera does not have to be suspended directly above the open field: spatial distortion can be calibrated by marking the corners and defining the exact X- and Y-coordinates of each corner.

 
         
In order to detect a mouse with dark fur on a white background, the gray-value range (threshold) representing the mouse has to be defined. Usually the imaging software provides appropriate tools to set the threshold level.

Fig. 4: Setting the threshold. The threshold is set by means of a threshold-dialog-box (right) representing the gray-value histogram (number of pixels for each gray-value). The threshold ranging from 0 (black) to 166 (medium gray) is highlighted within the original image (left) in yellow color. Within the ROI only dark objects (especially a mouse) are to be tracked. In order to get an estimate of the desired gray-value range, a black object (a remote control, if anyone cares) was placed in the open field.

 

Once tracking has started, every pixel within the threshold is most likely part of the mouse. However, as mice tend to defecate in the open-field test, object detection should incorporate a lower size limit for the objects to be detected. Additionally, the tail of the mouse can be excluded from the detected area by defining a minimum width of the area; this eliminates flickering movements caused by the tail rather than by movements of the mouse itself. The centroid (in brief, the point on which a lamina of the same shape would balance when placed on a needle; see 6) of the detected pixel area representing the mouse is calculated by the imaging software and subsequently exported as X- and Y-coordinates calibrated to the dimensions of the open field.
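For a region of pixels of uniform "density", the centroid is simply the mean X- and Y-coordinate of all pixels in the detected area. A sketch in plain Python (the blob coordinates are hypothetical):

```python
def centroid(pixels):
    """Center of mass of a detected pixel area: the mean X- and
    Y-coordinate of all pixels belonging to the object. For a lamina of
    uniform density this is the balance point described in the text."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return cx, cy

# hypothetical detected area: a 3x2 block of mouse pixels
blob = [(4, 10), (5, 10), (6, 10), (4, 11), (5, 11), (6, 11)]
```

For this blob the centroid lands at (5.0, 10.5), the geometric middle of the block.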

Fig. 5:
Tracking a mouse. After completing the setup, the software tracks the animal within the ROI. The area around the body of the mouse is marked red. The centroid of this area is exported as calibrated X- and Y-coordinates at a framerate of 5 frames per second (fps).


To analyze a digitized video, the image processing commands can be bundled into a macro that reads an image frame, detects the object, extracts the coordinates of the centroid, and then proceeds with the next image frame. In simplified form, an animal tracking macro looks like this:
/****************************Animal Tracking****************************************
** Finds and exports X- and Y-coordinates of a tracked object. This               **
** is a simplified but working version of the tracking macro used in              **
** our lab.                                                                       **
** (c) 2004, Lars Lewejohann                                                      **
**                                                                                **
** www.phenotyping.com                                                            **
************************************************************************************/


BOOLEAN Cont=TRUE, Fnameok=FALSE;
INTEGER delay=200, framescaptured=0, starttime, duration, keypressed, Centerx, Centery, fh;
REAL fps;
CHAR Fname,text;

SetExport(mArPoints, 1, TRUE); // set bounds for extract

/**************************Get filename to save data in****************************/
while(!Fnameok)
{
 Fname=Prompt("Enter a filename:", "CHAR");
 if (OpenFile("c:/Daten/OF_" : Fname: ".OPS", 0x4000)) //check if file exists
 {
  If(Prompt("This file (" :Fname: ") already exists! Overwrite?",  0x1002))//Shall we overwrite it?
   {
    Fnameok=true; //Well, we'll overwrite it then
   }
 }
 else //file is not there
 {
  Fnameok=true;
 }
}
fh = OpenFile("c:/Daten/OF_" : Fname: ".OPS", 0x1002); //Create the file (fh is the file handler)


StatusBar = "Hit space bar to start tracking"; 
while ( keyhit() != 32 )//Wait and show until space bar (=char(32)) is pressed
{
 grab(3);
}   
starttime = DosTime(); //save starttime

/***********************************************************************************
******************************Start tracking****************************************
***********************************************************************************/

Do
{
 keypressed=Keyhit();//check for keys pressed during tracking session
 If (keypressed ==119) Cont=FALSE; //"F8" (=char(119)) will end the session
 grab(3); //capture a new image frame
 framescaptured++; //count the frames captured
 CreateArea(,,TRUE); // autocreate areas
 MultipleExtract (); // extract image data

/*****************************Sort areas found by size*****************************/

 if (mArArea)//if no area object is found the last known position is saved
 {
  NewOrder=Sort( mArArea, True); //Sort area objects by size
  NewOrder=NewOrder*2; //Indices are doubled because mArCenterOfMass holds X- and Y-values in pairs
  If (GetShape(NewOrder)!=0)
  {
   Centerx=mArCenterOfMass[NewOrder[0]];   //reads X-value of the largest area
   Centery=mArCenterOfMass[NewOrder[0]+1]; //reads Y-value of the largest area
  }
 }
/*******************************Export data to file*******************************/
 GetDateTime();
//create a textline ("\n"=new line) holding the timestamp and coordinates with tabs ("\t") as item delimiter
 text=DateTime : "\t" : ToText(Centerx): "\t" :  ToText(Centery): "\r\n";
 WriteFile (fh, text);
 DelayMS( Delay ); //wait for a number of milliseconds defined in Delay (=200 ->5 fps)
}
WHILE(Cont); //Continue until F8 is pressed
/*******************************Post processing*******************************/
duration = DosTime ()-Starttime;
CloseFile (fh); //Close the file
fps= (real)framescaptured/duration;
MacroMessage ("Duration: ", duration, "s. Frames per second: ", fps, "fps"); //Duration and fps
Listing 1:
This is a simplified version of the macro we use in our lab to perform animal tracking. The macro is written in ALI (Analytical Language for Images), whose "grammar" is similar to that of the programming language C. ALI is an integral part of the imaging software Optimas and comprises a large number of additional image processing functions. User-defined variables are marked blue, ALI-inherent variables purple, ALI functions red, and comments green.
 

The positional data, along with a timestamp, is exported to a text file that can be subjected to further analysis. To visualize the path traveled, the data can be copied into a spreadsheet program like MS Excel. The path length can be calculated by means of the Pythagorean theorem (a² + b² = c²) with a = Xₙ₊₁ − Xₙ and b = Yₙ₊₁ − Yₙ; thus the distance covered, c, equals the square root of (Xₙ₊₁ − Xₙ)² + (Yₙ₊₁ − Yₙ)². As we also know the frame rate, respectively the timestamp for each coordinate pair, we can easily calculate velocity data and the number of stops (zero velocity). The number of coordinate pairs lying within defined areas of the open field (e.g. center, corners, close to the walls) allows further analysis of spatial distribution (e.g. time spent in the center vs. time spent in the periphery can give an estimate of anxiety-related behavior).
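These calculations can also be scripted directly. A sketch in Python, assuming the exported data has been read in as (time, X, Y) triplets (the sample values below are invented):

```python
import math

def analyze(track):
    """track: list of (t_seconds, x_cm, y_cm) samples.
    Returns total path length, mean velocity, and the number of stops."""
    length, stops = 0.0, 0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)  # Pythagorean theorem
        length += step
        if step == 0:                         # zero velocity -> a stop
            stops += 1
    duration = track[-1][0] - track[0][0]
    return length, length / duration, stops

# invented track at 5 fps: two 5-cm moves with one stop in between
track = [(0.0, 0, 0), (0.2, 3, 4), (0.4, 3, 4), (0.6, 6, 8)]
```

For this track the macro-style analysis yields a path length of 10 cm over 0.6 s with one stop.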


Fig. 6:
Datasheet of an open-field test. The path can be visualized as a standard X-Y graph. By means of vector calculation, path length, velocity, and qualitative parameters such as duration of stays in corners, stops, etc. can be derived.

The same technique can be used for various behavioral tests such as the elevated plus-maze test (a test of anxiety-related behavior, Fig. 7) or the Barnes maze test (a test of spatial memory, Fig. 8).


Fig. 7: Datasheet of an elevated plus-maze test.

 


Fig. 8: Datasheet of a Barnes maze test.

 

Activity Rhythm

Digital imaging techniques applied in animal behavior science are not limited to tracking the path of an animal. The movement itself can serve as a measure of activity. Especially in longitudinal observations, as required for activity rhythm analysis, the automatization of observation techniques is favourable. In the following, I present an example of how the activity of a mouse can easily be recorded using imaging techniques. Activity is detected by comparing areas (here: mouse cages) within individual image frames over time.
If the mouse does not move, all image frames of the captured sequence will be identical. A sequence of images captured from a mouse that actively moves in its home cage, however, will show differences between the individual image frames. Therefore, activity detection can be automated by comparing subsequently acquired image frames. As stated above, digital images can be regarded as pixel matrices, allowing image comparison to be done mathematically: by subtracting the gray-value of each pixel from the gray-value of the corresponding pixel of the previously acquired image.
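Such frame differencing can be sketched as follows (NumPy; the difference threshold and minimum pixel count are hypothetical tuning parameters):

```python
import numpy as np

def active(frame_prev, frame_curr, diff_threshold=30, min_pixels=5):
    """Compare two grayscale frames: the animal counts as active if
    enough pixels changed their gray-value by more than diff_threshold.
    The cast to a signed type and the absolute value guard against
    negative subtraction results wrapping around in 8-bit arithmetic."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return int((diff > diff_threshold).sum()) >= min_pixels
```

Calling `active` on each consecutive pair of frames yields one boolean per comparison, i.e. a raw activity record over time.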
 
 


 
Fig. 9: Image subtraction. Subtracting the gray-value of each pixel of the second image from the gray-value of the corresponding pixel of the first image results in black for pixels with identical gray-values and in lighter pixels where the subtracted pixel is darker than the initial pixel.

In our lab this method is used to record the activity of individually housed mice over five consecutive days. By means of this technique, the intensity (number of active seconds per minute) and the duration (number of minutes with at least one active second) of activity can be measured for four individuals at a time (Fig. 10).
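Assuming the activity record has been collected as one boolean per second, the two measures can be derived as follows (plain Python; the data layout is an assumption):

```python
def rhythm_measures(active_seconds):
    """active_seconds: sequence of booleans, one per second of recording.
    Intensity = number of active seconds per minute;
    duration  = number of minutes with at least one active second."""
    minutes = [active_seconds[i:i + 60]
               for i in range(0, len(active_seconds), 60)]
    intensity = [sum(m) for m in minutes]
    duration = sum(1 for m in minutes if any(m))
    return intensity, duration
```

Plotting the per-minute intensity over five days gives the kind of actogram shown in Fig. 10.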
 

Fig. 10:  Rhythm detection for 5 consecutive days. 


Closing Remarks

Digital image processing has become a viable tool for the automatization of tests in animal behavior. Especially for frequently used tests of rodent behavior like the open-field test, the elevated plus-maze test, etc., automatization by means of digital imaging has become indispensable. Every experimenter who finds himself or herself in the situation of testing hundreds of mice, as is sometimes required for example in the behavioral phenotyping of gene-targeted mice (e.g. 7), highly appreciates the conveniences that come along with automatization. Apart from the fact that "manual" scoring of large numbers of animals is very time-consuming, this technique allows information of higher quality to be gathered due to better "fine tuning". Moreover, automated data collection reduces experimenter variability to a minimum (8). However, one should bear in mind that establishing an automated system has to include a thorough evaluation of the system by comparing the automatically gathered data to manually recorded scores. Although automatization facilitates data collection in large numbers of tests, it is not suitable for all kinds of behavior recording, especially when complex behavioral patterns or interactions between animals are the object of research. Hence, automatization does not suffice to completely replace the trained eye of an experienced observer.

 

Acknowledgement

This work was supported by a grant from the Deutsche Forschungsgemeinschaft (Sa 389/5) to Norbert Sachser.


References

1: Altmann J. (1974): Observational Study of Behavior: Sampling Methods. Behaviour 49, 227-267.
2: Martin P. & P. Bateson (1993): Measuring Behaviour: An introductory guide. 2nd edition. Cambridge: Cambridge University Press.
3: Caro T.M., Roper R., Young M. & G.R. Dank (1979): Interobserver reliability. Behaviour 69, 303-315.
4: Schoenherr S.E.: The Evolution of the Computer. From: http://history.sandiego.edu/gen/recording/computer1.html.
5: Mandelbrot B. (1977): The Fractal Geometry of Nature. New York: W.H. Freeman.
6: Weisstein E.W.: Geometric Centroid. From: http://mathworld.wolfram.com/GeometricCentroid.html.
7: Lewejohann L., Skryabin B.V., Sachser N., Prehn C., Heiduschka P., Thanos S., Jordan U., Dell'Omo G., Vyssotski A.L., Pleskacheva M.G.,  Lipp H.-P., Tiedge H., Brosius J. & H. Prior (2004): Role of a neuronal small non-messenger RNA: behavioural alterations in BC1 RNA-deleted mice. Behavioural Brain Research.
8: Lewejohann L., Reinhard C., Schrewe A., Brandewiede J., Haemisch A., Görtz N., Schachner M. & N. Sachser (2006): Environmental Bias? Effects of Housing Conditions, Laboratory Environment, and Experimenter on Behavioral Tests. Genes, Brain and Behavior 5, 64-72.

Scholarly reference to this article should be like: Lewejohann, L (2004): Digital Image Processing in Behavioral Sciences. http://www.phenotyping.com/digital-image-processing.html.