EEG, Facial Expression Analysis and Eye Tracking Analysis: A Survey

Abstract

The last few decades have seen a resurgence of research driven by technological advancement. Automation has been adopted in areas where it was never expected, setting new standards of effectiveness and efficiency. The fields of electroencephalography, facial expression analysis, and eye-tracking analysis have moved several steps ahead with the aid of technology. Their applications extend across many areas of human life, with education, research, and healthcare benefitting the most. Beyond bringing relief to patients suffering from brain disorders such as epilepsy, and to individuals with autism, these technologies are also enriching social life by enabling richer forms of interactivity.

Introduction

An electroencephalogram (EEG) is an important and critical step towards determining the activity of the human brain. The test detects electrical activity within the brain through specialized sensors called electrodes, which are attached to the scalp. The functioning of the EEG depends on the activity of brain cells, which communicate via electrical impulses that remain active throughout an individual's lifetime. The progress of this communication is revealed as wavy lines on the EEG recording. EEG is the most commonly used test in the diagnosis of epilepsy: inconsistencies within the resulting patterns can signify the presence of seizures or other disorders of the brain (Lan et al., 2016). The main reason for conducting an EEG is to detect changes in brain activity, which can be applied in the diagnosis of various brain disorders. The test is also helpful for other conditions, including stroke, brain tumors, sleep disorders, and inflammation of the brain.
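As a rough illustration of how the EEG's "wavy lines" are treated computationally, the sketch below (a toy example, not clinical code) generates a synthetic one-channel trace containing a 10 Hz alpha rhythm, band-pass filters it, and confirms the dominant frequency. The sampling rate, filter order, and band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250                       # assumed sampling rate (Hz), typical for EEG
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "EEG": a 10 Hz alpha rhythm buried in broadband noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass filter to the alpha band (8-13 Hz).
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, signal)

# The dominant frequency of the filtered trace should sit near 10 Hz.
freqs, psd = welch(alpha, fs=fs, nperseg=fs * 2)
peak = freqs[np.argmax(psd)]
print(round(float(peak), 1))  # ~10.0
```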

Facial expression analysis, in turn, revolves around extracting meaning from both the major and finer details of facial expressions. The analysis examines an individual's face to derive meaning from the various details present in it. The face is the most expressive channel of nonverbal communication, which is reason enough for deeper research in its regard. From facial expression alone, a list of important aspects can be revealed, aiding earlier intervention through countermeasures. For instance, expressions can portray the biomedical and psychiatric status of an individual, information that medical professionals can use to provide the best diagnosis (Tian, Kanade & Cohn, 2005). Other factors that can be read from facial expressions include personality, alertness, and emotion. Developments in technology have brought significant changes to facial expression analysis, with automation taking the forefront. Technologies like machine learning and computer vision continue to transform the analysis process, allowing behavioral scientists to obtain accurate facial measurements and other results.

Importantly, eye tracking is a simple-looking but technical method built around monitoring the eye. It encompasses observing and recording the behavior of either the whole eye or specific parts of it, such as movement or dilation of the pupil. Eye tracking measures exactly where an individual looks through an eye tracker that records the various eye positions and movements. The working of an eye tracker largely depends on the cornea and the pupil, because it is the reflections from these two parts that are tracked by an infrared camera (Majaranta & Bulling, 2014). These reflections are produced by directing near-infrared light at the eye, a technique called pupil center corneal reflection (PCCR). Eye tracking is utilized across various research areas and fields, where the most commonly used types of eye trackers are glasses-based and screen-based. Thus, although EEG, facial expression analysis, and eye tracking differ in several ways, each is driving change in working environments across various industries.
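The PCCR idea can be sketched numerically: once the vector between the pupil center and the corneal glint is known, a calibrated mapping converts it to a screen position. The snippet below is a minimal illustration with made-up calibration points and a first-order (affine) mapping; real trackers use richer models.

```python
import numpy as np

# Hypothetical calibration data: pupil-minus-glint vectors (camera pixels)
# recorded while a subject fixated known on-screen targets (screen pixels).
pg_vectors = np.array([[-20, -15], [0, -15], [20, -15],
                       [-20,  10], [0,  10], [20,  10]], dtype=float)
targets = np.array([[100, 100], [500, 100], [900, 100],
                    [100, 600], [500, 600], [900, 600]], dtype=float)

# Fit an affine map from vector to screen position by least squares --
# a common first-order approximation of the PCCR gaze mapping.
X = np.hstack([pg_vectors, np.ones((len(pg_vectors), 1))])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

def gaze_point(vx, vy):
    """Map a pupil-glint vector to an estimated on-screen gaze point."""
    return np.array([vx, vy, 1.0]) @ coef

print(gaze_point(0.0, -15.0))  # ~[500, 100], the top-centre target
```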

Medical Use of EEG

The medical value of the test is most evident among patients with epilepsy, although it extends to other conditions linked to the brain. Medical procedures for such disorders often begin with routine clinical checks; where these initial checks are inconclusive, seizure recording, a process called ictal recording, is initiated. One medical use of EEG is in predicting short-term neurological outcomes among full-term neonates with hypoxic-ischaemic encephalopathy (HIE). This can be achieved through a combination of neuroimaging and electroencephalography. Often, neurophysiological, neuroimaging, and clinical parameters are used together to predict the prognosis of children with HIE. To do so, several aspects must be assessed, including the contributions of EEG, neuroimaging, and ultrasound to outcome prediction, the compatibility of ultrasound and neuroimaging results, and any association of brain damage patterns with the EEG results (Leijser et al., 2007). After the assessment, EEG predicts the presence of the disorder by showing an abnormal background pattern, as illustrated in Figure 1 below. The presence of the disorder in infants is likewise revealed by neuroimaging through abnormal findings, as shown in Figure 2. Notably, an abnormal EEG background is often associated with severely abnormal neuroimaging findings, meaning that the two findings jointly predict HIE outcomes in infants.

Figure 1: An illustration of a normal and abnormal background pattern as predicted by EEG

(Leijser et al., 2007)

 

Figure 2: An illustration of clear and white-matter-abnormal neuroimaging scans

(Leijser et al., 2007)

 

Advantages and Disadvantages of EEG

Carrying out an electroencephalogram has both positive and negative aspects. The test paves the way for exploring brain functioning and neural activity. Its main advantage is the accuracy it achieves in capturing the timing of brain activity: it offers high temporal precision because it can capture rapid measurements. Because changes in the brain's electrical activity occur quickly, pinpointing precise moments requires high temporal resolution, and current EEG technology delivers resolution of up to 1 millisecond (Zion-Golumbic, n.d.). Figure 3 below shows the readings of an EEG at different levels of alertness. A further advantage is that EEG electrodes are mounted on the scalp rather than inserted into the brain, allowing researchers to study a healthy human brain. As shown in Figure 4 below, it tends to be more comfortable than other electrical recording devices. Besides, the inexpensiveness of EEG equipment makes it a practical option for testing. However, the main disadvantage of EEG is the low spatial resolution of its recordings. Taking measurements from the scalp means that the signal is a mixture of all electric fields directed perpendicularly to the scalp, which introduces the possibility of inaccuracies.

Figure 3: An illustration of the readings of an EEG at various levels of alertness

(Zion-Golumbic, n.d.)

 

Figure 4: An illustration of a subject wearing an EEG cap fitted with 128 electrodes

(Zion-Golumbic, n.d.)

 

Limitations of Practical Uses of EEG

Electroencephalography tests come in different types, depending on the equipment used. Each technology has drawbacks that limit the desired outcome. One such technology is the single-channel EEG, mounted on the forehead to help detect neonatal seizures (NS). A large difference emerges when detection with a single forehead channel is compared with conventional EEG. The single channel detects at least one NS in 66 percent of records, while the bicentral channel achieves the same in 90 percent of records. Moreover, out of around 330 seizures, a single channel detects only 46 percent while the bicentral montage detects 73 percent (Wusthoff, Shellhaas & Clancy, 2009). Thus, neonatal seizures can easily be missed by a single-channel EEG, a constraint that continues to limit the technology.
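The reported detection rates translate into seizure-level sensitivities, as the small calculation below shows; the absolute counts are reconstructed from the percentages in the text and are therefore approximate.

```python
# Seizure-level sensitivity implied by the reported rates. The counts
# below are reconstructed from the percentages in the text (approximate).
total_seizures = 330
single_channel_detected = 152   # ~46% of 330
bicentral_detected = 241        # ~73% of 330

def sensitivity(detected, total):
    """Fraction of true seizures the montage detected."""
    return detected / total

print(f"single-channel: {sensitivity(single_channel_detected, total_seizures):.0%}")
print(f"bicentral:      {sensitivity(bicentral_detected, total_seizures):.0%}")
```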

Another limitation arises when monitoring EEG-based depth of anaesthesia within intensive care units and operating theatres. Processed EEG monitoring often relies on indices such as the bispectral index (BIS), which reflects the depth of anaesthetic sedation in selected patients. However, it is important to identify the constraints it carries, in both paralyzed and non-paralyzed patients, for its safe application (Hajat, Ahmad & Andrzejowski, 2017). One constraint is that the end-tidal agent concentration does not necessarily reflect the requirements of the patient or the effects of large-dose opioids. Hence, the operation of the bispectral index, on which EEG-based depth monitoring builds, is limited to specific situations. These include cases where a patient falls into a high-risk category, where consciousness is impaired by brain surgery, and where higher agent levels are indicated by blood pressure and heart rate measurements.

Clinical Trials of EEG in Autism Patients

EEG tests are gradually but surely gaining ground among patients with brain disorders, extending support to those with autism, especially children. Both testing and analysis of children with autism are becoming reliable, with the results helping to implement important control measures. The application of EEG in this case centers on the development of biomarkers, which are prioritized in research on various neurodevelopmental disorders. The power spectral densities of resting EEG are studied so that they can serve as biomarkers for autism (Levin et al., 2019). The obtained data are decomposed into pre-specified frequency bands, including gamma, beta, alpha, and theta. Features emerging from the power spectral densities are further characterized through recent advances such as FOOOF (Fitting Oscillations and One-Over-F). This allows detailed classification of each density's spectral shape, a step towards validating the biomarkers analytically.
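The band decomposition described above can be sketched as follows: a Welch power spectral density is computed for a synthetic resting trace and summed over the conventional theta, alpha, beta, and gamma ranges. The sampling rate, band edges, and signal are illustrative assumptions, not the protocol of the cited study.

```python
import numpy as np
from scipy.signal import welch

fs = 250                       # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic resting trace dominated by a 10 Hz alpha oscillation.
x = 2 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

freqs, psd = welch(x, fs=fs, nperseg=fs * 4)
df = freqs[1] - freqs[0]

# Conventional band edges in Hz (exact definitions vary across labs).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(lo, hi):
    """Approximate the PSD integral over [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * df

powers = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
dominant = max(powers, key=powers.get)
print(dominant)  # alpha
```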

The next clinical effort involves acquiring biomarkers and controlling their quality so that they can be applied in multi-site studies. In this case, the trials employ the Autism Biomarkers Consortium for Clinical Trials, which improves on existing biomarkers by advancing their validation. While such trials may not seem significant to the general public, they are critical for evaluating laboratory behavioral, eye-tracking (ET), EEG, and video-tracking (VT) measures that can be applied in clinical trials among children with autism. Within the structure of the consortium, several components collaborate to complete the key tasks. The Data Acquisition and Analytic Core (DAAC) oversees processes including data analysis, data processing, ET data acquisition, and standardization of EEG and VT (Webb et al., 2019). The DAAC designs and documents the acquisition and analytic protocols and facilitates site training in acquisition, among other duties. Across most of the children, the results reveal high acquisition success for ET, EEG, and VT measures.

Facial Expression Analysis Techniques

The face is a common feature among individuals, with expressions like happiness, sadness, and anger distinguished easily. However, closer examination of facial expressions calls for specialized techniques, which include tracking facial electromyographic activity (fEMG), manual coding and live observation of facial activity, and automated analysis through computer-vision algorithms. In the first case, fEMG is mostly applied in tracking the activity of facial muscles with the aid of electrodes attached to the skin surface. fEMG detects and amplifies the fine electrical impulses produced by muscle fibers whenever a contraction occurs (Krosschell, 2020). Notably, the standard fEMG sites are the left and right corrugator supercilii, which draws the eyebrows together and wrinkles the forehead vertically, and the left and right zygomaticus, which runs from the cheekbone to the angle of the mouth and raises the mouth corners. Figure 5 below illustrates the positioning of these sites.

Figure 5: An illustration of a facial electromyography

(Krosschell, 2020)
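A common first processing step for such fEMG recordings is envelope extraction: rectify the raw signal and smooth it, so that bursts of muscle activity stand out. The sketch below applies this to a synthetic trace; the sampling rate, smoothing cut-off, and burst timing are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                  # assumed fEMG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic zygomaticus trace: low-level baseline noise with a higher-
# amplitude "smile" burst of muscle activity between 1 s and 2 s.
burst = (t >= 1) & (t < 2)
emg = rng.standard_normal(t.size) * np.where(burst, 1.0, 0.1)

# Standard envelope extraction: full-wave rectification, then low-pass
# smoothing of the rectified signal.
rectified = np.abs(emg)
b, a = butter(2, 5, btype="lowpass", fs=fs)   # 5 Hz smoothing cut-off
envelope = filtfilt(b, a, rectified)

# The envelope during the burst sits well above the baseline level.
print(envelope[burst].mean() / envelope[~burst].mean())
```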

 

The next technique is the Facial Action Coding System (FACS), which classifies facial expressions based on anatomical features. The approach involves careful examination of face videos followed by precise descriptions of facial expression occurrences, which are decomposed into elementary components referred to as Action Units. In the analytical phase, emotions are constructed in a modular manner from combinations of action units (Krosschell, 2020). Figure 6 below shows the facial anatomy associated with different emotions. The final technique is automated facial coding, which employs computer vision and machine learning algorithms (Lewinski, 2015). This technique involves several automated steps: face detection, initiated by framing the face box; feature detection, where landmarks like the nose tip, mouth corners, and eye corners are identified; and face classification, where the extracted features are passed into the respective classification algorithms, as revealed in Figure 7 below.

Figure 6: An illustration of the facial action coding system

(Krosschell, 2020)

 

Figure 7: An illustration of an automated feature classification

(Krosschell, 2020)
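To make the action-unit idea concrete, the toy sketch below scores a crude stand-in for AU12 (lip corner puller) from hypothetical 2D landmarks; the coordinates, threshold, and rule are invented for illustration and are not part of FACS itself.

```python
# Hypothetical 2D landmark positions (x, y in image pixels, y grows
# downward), reduced to the points a lip-corner rule needs.
neutral = {"mouth_left": (120, 200), "mouth_right": (180, 200)}
current = {"mouth_left": (116, 192), "mouth_right": (184, 192)}

def lip_corner_raise(neutral, current):
    """Mean upward displacement (pixels) of the mouth corners vs. neutral.

    A crude stand-in for scoring FACS Action Unit 12 (lip corner puller):
    positive values mean the corners moved up (smaller y).
    """
    keys = ("mouth_left", "mouth_right")
    return sum(neutral[k][1] - current[k][1] for k in keys) / len(keys)

raise_px = lip_corner_raise(neutral, current)
label = "AU12 active (smile)" if raise_px > 3 else "neutral"
print(raise_px, label)  # 8.0 AU12 active (smile)
```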

 

Application Fields of Facial Expression Analysis

The analysis of facial expressions is a wide area encompassing fields like medicine, education, and, more importantly, human behavior. The uniqueness and effectiveness of facial expressions explain why they are adopted in so many areas. First, the analysis is used to automatically differentiate feigned from real pain expressions: applying temporal event analysis within the facial action coding system draws a clear distinction between the two. The next application area is the automatic detection of fatigue among drivers. The analysis achieves this by measuring signs of drowsiness in facial motion using the Computer Expression Recognition Toolbox (CERT), focusing on aspects such as nose wrinkles, chin raises, and blinks (Bartlett & Whitehill, 2010). Another field of application is the automated analysis of psychiatric disorders. Through a video-based automated approach, risorius and zygomatic movements are differentiated, allowing FACS to relay the patient results.

 

Facial Expression Recognition Software

Facial expression analysis, just like its counterparts, requires specialized software capable of extracting and interpreting data in order to derive meaning from it. The types of software vary from one field to another, with each presenting the best results in its preferred study area. One widely applied tool is the Computer Expression Recognition Toolbox (CERT), which is mainly used in academic settings. It operates automatically in real time and can code up to 19 different facial actions (Littlewort et al., 2011). The next type of software is the Kinect, which employs 3-dimensional sensors to recognize facial expressions. Its face-tracking SDK is used to determine the head pose and the 3D locations of fiducial points (Malawski, Kwolek & Sako, 2014). Another is an automated bi-modal emotion recognition system, which works by fusing emotions obtained from speech signals with facial expressions (Datcu & Rothkrantz, 2008). The Viola-Jones face detector integrated into the software supports face shape extraction, with support vector machines used for classification.

Stages of Facial Expression Analysis

Developments in the technology applied to facial expression analysis have been continuous, improving at every step. The field is gaining momentum towards automation, reducing the human effort required. Facial expression recognition operates in three distinct stages. The first is face acquisition, a preprocessing stage that locates face regions within the input images or sequences; detection methods such as real-time face detectors and component-based frameworks have been proposed for it. The second stage is feature extraction and representation, which extracts facial features so that the changes caused by various expressions can be represented (Zhang, Zhao & Lei, 2012). It involves two types of features: appearance features, which capture changes in skin texture such as wrinkles, and geometric features, which capture the shape and locations of components like the nose, eyes, and mouth. The final stage is facial expression recognition, which applies the extracted features in order to recognize the various expressions.
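The three stages can be sketched as a pipeline of pluggable functions; in the toy example below each stage is a stand-in (a pre-cropped face, one geometric feature, and a threshold rule) rather than a real detector.

```python
# A schematic three-stage recognition pipeline. Each stage is a pluggable
# function; the bodies here are toy stand-ins, not real detectors.

def acquire_face(frame):
    """Stage 1 (face acquisition): locate the face region in the frame."""
    return frame["face_region"]          # toy: the face is pre-cropped

def extract_features(face):
    """Stage 2 (feature extraction): derive geometric features."""
    left, right = face["mouth_left"], face["mouth_right"]
    return {"mouth_width": right[0] - left[0]}

def recognize(features):
    """Stage 3 (recognition): map features to a label (toy threshold)."""
    return "happy" if features["mouth_width"] > 70 else "neutral"

def pipeline(frame):
    return recognize(extract_features(acquire_face(frame)))

frame = {"face_region": {"mouth_left": (110, 200), "mouth_right": (190, 198)}}
print(pipeline(frame))  # happy
```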

 

Types of Eye Trackers

The debate is always heated as to which types of eye trackers guarantee the best results. Parties to such arguments tend to prefer and defend the trackers that are effective in their respective fields of study. There are many types of eye trackers on the market today, an outcome of the technological developments realized across the globe. One of the most notable is WebGazer, which infers the real-time gaze locations of web visitors using the webcams present in laptops and mobile devices (Papoutsaki et al., 2016). The tracker self-calibrates as a visitor interacts with web pages, training a mapping between eye features and screen position. The next types are fixation pickers and saccade pickers (Karn, 2000). The former tend to be slower and employ proximity-based routines in their analysis, while the latter are faster and more efficient, deploying velocity-based routines.
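The velocity-based routines used by saccade pickers can be sketched with an I-VT-style rule: samples whose angular velocity exceeds a threshold are labelled saccadic. The trace, sampling rate, and 30 deg/s threshold below are illustrative assumptions.

```python
import numpy as np

fs = 500                                  # assumed tracker sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic horizontal gaze trace (degrees): fixation at 0 deg, a rapid
# jump to 10 deg at t = 0.5 s, with a little measurement noise.
gaze = np.where(t < 0.5, 0.0, 10.0) + 0.01 * rng.standard_normal(t.size)

# I-VT-style rule: angular velocity above a threshold (~30 deg/s is a
# common choice) marks a sample as saccadic; the rest are fixational.
velocity = np.abs(np.gradient(gaze, 1 / fs))
saccade_mask = velocity > 30

print(int(saccade_mask.sum()), "saccadic samples around the jump")
```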

 

 

Eye Tracking Techniques

Eye tracking chiefly revolves around obtaining the exact position and direction of the eyes, and several techniques are applied to conduct this motion tracking. The first is electro-oculography, which records fine electrical differences in the skin surrounding the eye. Sensors are attached to the skin around the eyes so that changes in the electric field can be measured whenever the eyes rotate; the need for close electrode contact with the user limits the extent of its application. The next technique is the scleral search coil, which depends on the voltage induced in a coil as it moves in a magnetic field (Chennamma & Yuan, 2013). Normally, a mirror is integrated into the contact lens to allow the amount of reflected light to be measured. Another technique is infrared oculography, which measures the intensity of reflected infrared light: the eye is illuminated with infrared light and the reflection from the sclera is recorded. The final technique is video oculography, which can be either invasive or non-invasive; the approach uses infrared or visible light depending on whether a single-camera or multi-camera eye tracker is in use.

Applications of Eye Tracking

As an inclusive and efficient method of gathering important information about individuals' wellbeing, eye tracking is applied in a range of fields, and current advances in technology have brought major developments to it. Some of the areas where eye tracking is most utilized are computer science, advertising, and human factors and industrial engineering; other important ones include psychology and neuroscience (Duchowski, 2002). In market research, for example, eye tracking is used to pre-test questionnaire designs. The technique involves analysing respondents' reading patterns and observing their eye movements as they tackle the various response formats, as shown in Figure 8 below. This leads to improved survey design and, in turn, increased response rates (Koller et al., 2012). Another application is as a control tool when designing experiments, as depicted in Figure 9 below; the technique is suitable in instances where the human eye is exposed to inflationary informational cues.

Figure 8: An illustration of the application of eye tracking in the pretesting of a questionnaire

(Source: Koller et al., 2012)

 

Figure 9: An illustration of eye-tracking applied as a control tool for experimental manipulations

(Source: Koller et al., 2012)
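Questionnaire pretesting of this kind typically reduces to counting fixations inside areas of interest (AOIs). The sketch below hit-tests hypothetical fixation points against made-up AOI rectangles; the coordinates and region names are invented for illustration.

```python
# Hypothetical areas of interest (AOIs) on a questionnaire page, as
# (left, top, right, bottom) rectangles in screen pixels, plus some
# made-up fixation points from one respondent.
aois = {
    "question_text": (50, 50, 750, 150),
    "response_scale": (50, 200, 750, 300),
}
fixations = [(120, 90), (400, 100), (300, 250), (500, 260), (600, 90)]

def aoi_hits(fixations, aois):
    """Count fixations landing inside each area of interest."""
    counts = {name: 0 for name in aois}
    for x, y in fixations:
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                counts[name] += 1
    return counts

print(aoi_hits(fixations, aois))  # {'question_text': 3, 'response_scale': 2}
```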

 

Eye Tracking Technologies in Detecting Autism

The relationship between eye tracking and autism research has been flourishing and productive since researchers and medical practitioners turned their attention to infants. The ability of eye tracking to characterize autism at an intermediate level helps in identifying lower-level neurocognitive functions and dysfunctions. The advantage of applying eye tracking in the study of infants is that it relies on simple motor behaviour and is non-invasive (Falck-Ytter, Bölte & Gredebäck, 2013). The main early symptoms of autism include problems with attention disengagement and a reduction in the time spent looking at people. In such cases, eye tracking is of great importance, as it captures the necessary features of the complex autism picture and transforms it into an easily understandable one. Importantly, eye tracking can also serve as an interactive method for modifying the gaze of patients with autism. The uniqueness of eye tracking can be utilized in developing automated tools to help infants with visual attention problems learn better looking patterns (Wang et al., 2015). The techniques that can be applied here include video training sessions, which encourage and direct individuals on what to do.

Similarly, there is a connection between autism, eye tracking, and the entropy of scanning patterns, which is applicable in the quest for early detection of the disorder. Eye tracking can be used to record the face-scanning patterns of toddlers with autism spectrum disorder as they look at static images. Entropy is then applied to quantify the differences in attention within those patterns (Shic et al., 2008). Both gross and fine attention are revealed, allowing medical personnel to note the severity of the condition and the possible interventions. Besides, a simple linear interpolation model (SLIM) can be used in place of current fixation algorithms for several reasons (Shic, 2008). A major reason for adopting the SLIM model is the compactness and completeness of the picture obtained from its results. The model processes scanning patterns among children with autism and reveals coincidences with the predetermined deficits observed in face processing.
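The entropy measure can be illustrated directly: treating the share of looking time over face regions as a probability distribution, Shannon entropy is low when gaze concentrates on one region and maximal when it is spread evenly. The region shares below are hypothetical.

```python
import math

# Hypothetical shares of looking time over face regions for two viewing
# styles: gaze concentrated on the eyes vs. spread evenly.
focused = {"eyes": 0.7, "mouth": 0.2, "elsewhere": 0.1}
diffuse = {"eyes": 1 / 3, "mouth": 1 / 3, "elsewhere": 1 / 3}

def gaze_entropy(dist):
    """Shannon entropy (bits) of a gaze distribution over regions."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

print(round(gaze_entropy(focused), 3))  # 1.157
print(round(gaze_entropy(diffuse), 3))  # 1.585 = log2(3), the maximum
```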

Conclusion

An electroencephalogram is an important test that helps determine brain activity in individuals. The main reason for conducting an EEG is to detect changes in brain activity, which can be applied in the diagnosis of various brain disorders. Facial expression analysis, in turn, examines an individual's face to derive meaning from the various details present in it; developments in technology have brought significant changes to the field, with automation taking the forefront. Eye tracking, finally, is a technology that encompasses observing and recording the behavior of either the whole eye or specific parts of it, such as movement or dilation of the pupil. The analyses of these three techniques have revealed their significance, especially within the health care sector. It is therefore high time that attention and support turned to these options, as they offer hope for the earlier detection and management of conditions such as epilepsy and autism.

References

Bartlett, M., & Whitehill, J. (2010). Automated facial expression measurement: Recent applications to basic research in human behavior, learning, and education. Handbook of Face Perception.

Chennamma, H. R., & Yuan, X. (2013). A survey on eye-gaze tracking techniques. arXiv preprint arXiv:1312.6410.

Datcu, D., & Rothkrantz, L. J. (2008, September). Automatic bi-modal emotion recognition system based on fusion of facial expressions and emotion extraction from speech. In 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition (pp. 1-2). IEEE.

Duchowski, A. T. (2002). A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers, 34(4), 455-470.

Falck-Ytter, T., Bölte, S., & Gredebäck, G. (2013). Eye tracking in early autism research. Journal of Neurodevelopmental Disorders, 5(1), 28.

Hajat, Z., Ahmad, N., & Andrzejowski, J. (2017). The role and limitations of EEG-based depth of anaesthesia monitoring in theatres and intensive care. Anaesthesia, 72, 38-47.

Karn, K. S. (2000, November). "Saccade pickers" vs. "fixation pickers": The effect of eye tracking instrumentation on research. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 87-88).

Koller, M., Salzberger, T., Brenner, G., & Walla, P. (2012). Broadening the range of applications of eye-tracking in business research. Analise, Porto Alegre, 23(1), 71-77.

Krosschell, K. (2020, March 10). Facial expression analysis: The complete pocket guide. iMotions. https://imotions.com/blog/facial-expression-analysis/

Lan, Z., Sourina, O., Wang, L., & Liu, Y. (2016). Real-time EEG-based emotion monitoring using stable features. The Visual Computer, 32(3), 347-358.

Leijser, L. M., Vein, A. A., Liauw, L., Strauss, T., Veen, S., & van Wezel-Meijler, G. (2007). Prediction of short-term neurological outcome in full-term neonates with hypoxic-ischaemic encephalopathy based on combined use of electroencephalogram and neuro-imaging. Neuropediatrics, 38(5), 219-227.

Levin, A. R., Naples, A., Scheffler, A. W., Webb, S. J., Shic, F., Sugar, C. A., … & Faja, S. (2019). Within visit test-retest reliability of EEG profiles in children with autism spectrum disorder and typical development. bioRxiv, 834697.

Lewinski, P. (2015). Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets. Frontiers in Psychology, 6, 1386.

Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J., & Bartlett, M. (2011, March). The computer expression recognition toolbox (CERT). In Face and Gesture 2011 (pp. 298-305). IEEE.

Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. In Advances in Physiological Computing (pp. 39-65). Springer, London.

Malawski, F., Kwolek, B., & Sako, S. (2014, August). Using Kinect for facial expression recognition under varying poses and illumination. In International Conference on Active Media Technology (pp. 395-406). Springer, Cham.

Papoutsaki, A., Sangkloy, P., Laskey, J., Daskalova, N., Huang, J., & Hays, J. (2016, January). WebGazer: Scalable webcam eye tracking using user interactions. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 2016).

Shic, F. (2008). Computational methods for eye-tracking analysis: Applications to autism. Yale University.

Shic, F., Chawarska, K., Bradshaw, J., & Scassellati, B. (2008, August). Autism, eye-tracking, entropy. In 2008 7th IEEE International Conference on Development and Learning (pp. 73-78). IEEE.

Tian, Y. L., Kanade, T., & Cohn, J. F. (2005). Facial expression analysis. In Handbook of Face Recognition (pp. 247-275). Springer, New York, NY.

Wang, Q., Celebi, F. M., Flink, L., Greco, G., Wall, C., Prince, E., … & DiNicola, L. (2015, June). Interactive eye tracking for gaze strategy modification. In Proceedings of the 14th International Conference on Interaction Design and Children (pp. 247-250).

Webb, S. J., Shic, F., Murias, M., Sugar, C. A., Naples, A. J., Barney, E., … & Levin, A. R. (2019). Biomarker acquisition and quality control for multi-site studies: The Autism Biomarkers Consortium for Clinical Trials. Frontiers in Integrative Neuroscience, 13, 71.

Wusthoff, C. J., Shellhaas, R. A., & Clancy, R. R. (2009). Limitations of single-channel EEG on the forehead for neonatal seizure detection. Journal of Perinatology, 29(3), 237-242.

Zhang, S., Zhao, X., & Lei, B. (2012). Robust facial expression recognition via compressive sensing. Sensors, 12(3), 3747-3761.

Zion-Golumbic, E. (n.d.). What is EEG? The Department of Psychology and the Department of Cognitive Science.

 
