### Face Recognition Method of OpenCV

Andrew Stodghill

#### Overview

- Introduction
- Background
- Theory
- Experiment
- Results
- Conclusion
#### Introduction

##### Idea

- Facial recognition for a security robot
- Gain a better understanding of the subject
- Derived from work last semester
##### Focus

- Basic understanding of OpenCV's face recognition software and algorithms
- Methods and theory behind the Eigenface method for facial recognition
- Implementation using Python in a Linux-based environment
  - Runs on a Raspberry Pi
##### Goal


- One half research:
  - General facial recognition methods
  - Eigenfaces
  - OpenCV's facial recognition
- One half implementation:
  - Create a system capable of facial recognition
  - Real-time
  - Able to run on a Raspberry Pi
#### Background

##### Different Facial Recognition Methods

- Geometric
- Eigenfaces
- Fisherfaces
- Local Binary Patterns
- Active Appearance
- 3D Shape Models
##### Geometric

- First method of facial recognition
- Done by hand at first; automation came later
- Finds the locations of key parts of the face, and the distances between them
- Good initial method, but had flaws:
  - Unable to handle multiple views
  - Required a good initial guess
##### Eigenfaces

- Information theory approach
- Encodes and then decodes face images to perform recognition
- Uses principal component analysis (PCA) to find the most important components
##### Fisherfaces

- Same approach as Eigenfaces, but uses linear discriminant analysis (LDA)
- Better handles intrapersonal variability within images, such as lighting
##### Local Binary Patterns

- Describes local features of an object
- Compares each pixel to its neighbors
- The histogram of the image contains information about the distribution of the local micro-patterns
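The pixel-to-neighbor comparison can be sketched as the basic 3×3 LBP operator in NumPy (a minimal illustration, not OpenCV's LBPH recognizer; the clockwise neighbor ordering here is one common convention):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
    and pack the results into one byte (clockwise from the top-left)."""
    center = patch[1, 1]
    # Neighbor offsets, clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # 241: bits set where a neighbor >= center (6)
```

Sliding this over every pixel and histogramming the resulting codes gives the micro-pattern distribution described above.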
#### Theory

##### Basic Idea

- Let a face image I(x, y) be a two-dimensional N by N array of (8-bit) intensity values
- The image can then be considered a vector of dimension N²
- An image of 256 by 256 becomes a vector of dimension 65,536
- Or, equivalently, a point in 65,536-dimensional space
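The image-to-vector view is a one-liner in NumPy (a toy illustration; the zero array stands in for real pixel data):

```python
import numpy as np

# A 256x256 8-bit image becomes a single point in 65,536-dimensional space.
image = np.zeros((256, 256), dtype=np.uint8)  # stand-in for a face image
vector = image.flatten()                      # row-major N*N vector

print(vector.shape)  # (65536,)
```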
##### Basic Idea

- Images of faces will not differ too much from one another
- This allows a much smaller dimensional subspace to be used to classify them
- PCA finds the vectors that best define the distribution of face images
- These vectors:
  - are N² long
  - describe an N by N image
  - are linear combinations of the original face images
##### Basic Idea

- These vectors are called eigenfaces
- They are the eigenvectors of the covariance matrix
- They resemble faces
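A minimal NumPy sketch of this idea, assuming a tiny synthetic training set (10 "images" flattened to 16 pixels each): an SVD of the mean-centered data yields the eigenvectors of the covariance matrix without forming it explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 10 "images", each flattened to a 16-pixel vector.
faces = rng.random((10, 16))

# Subtract the average face, then take the SVD of the centered data:
# the rows of Vt are the eigenvectors of the covariance matrix
# (the eigenfaces), ordered by decreasing eigenvalue.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, s, Vt = np.linalg.svd(centered, full_matrices=False)

M = 4                      # keep only the M strongest eigenfaces
eigenfaces = Vt[:M]        # each row has length N^2, like a flattened image

print(eigenfaces.shape)    # (4, 16)
```

Reshaped back to N by N, each of these rows is the ghostly face-like image the slide alludes to.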
##### Method

- Acquire an initial training set of face images
- Calculate the eigenfaces
- Keep only the M eigenfaces that correspond to the highest eigenvalues
  - These M images now define the face space
- Calculate the corresponding distribution in M-dimensional weight space for each known individual
##### Method

- Calculate weights for a new image by projecting the input image onto each of the eigenfaces
- Determine whether the image is a face: within some tolerance, it must be close to face space
- If within face space, classify the weight pattern as either a known or an unknown person
- (Optional) Update the eigenfaces and weights
- (Optional) If the same unknown face appears repeatedly, add it to the known faces
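The projection and matching steps can be sketched under the same toy assumptions (random stand-ins for the mean face and four orthonormal eigenfaces; "alice" and "bob" are hypothetical individuals):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a computed model: a mean face and 4 orthonormal
# eigenfaces over 16-pixel images (random here, for illustration).
mean_face = rng.random(16)
q, _ = np.linalg.qr(rng.standard_normal((16, 4)))
eigenfaces = q.T                       # 4 rows, each of length 16

def project(image):
    """Weights of an image in face space: its dot product with each eigenface."""
    return eigenfaces @ (image - mean_face)

# One stored weight pattern per known individual (hypothetical names).
known_classes = {"alice": project(rng.random(16)),
                 "bob": project(rng.random(16))}

new_image = rng.random(16)
w = project(new_image)                 # M-dimensional weight vector

# Nearest face class by Euclidean distance in weight space.
label = min(known_classes, key=lambda k: np.linalg.norm(w - known_classes[k]))
```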
##### Classifying

- Four possibilities for an input image:
  - Near face space, near a face class → known face
  - Near face space, not near a face class → unknown face
  - Not near face space, near a face class → not a face, but may look like one (false positive)
  - Not near face space, not near a face class → not a face
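The four cases reduce to two threshold tests on the two distances; a sketch with illustrative threshold values (not values from the experiment):

```python
def classify(dist_to_face_space, dist_to_nearest_class,
             space_threshold=100.0, class_threshold=50.0):
    """Four outcomes from two Euclidean distances.
    Thresholds are illustrative placeholders, not measured values."""
    near_space = dist_to_face_space < space_threshold
    near_class = dist_to_nearest_class < class_threshold
    if near_space and near_class:
        return "known face"
    if near_space:
        return "unknown face"
    if near_class:
        return "false positive (not a face, but looks like one)"
    return "not a face"

print(classify(20.0, 10.0))  # known face
```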
##### OpenCV and Theory

- The beauty of OpenCV is that much of this process is completely automated
- All that is needed:
  - Training images
  - The type of training
  - The number of eigenfaces
  - A threshold
  - An input image
#### Experiment

##### Set-up

- OpenCV running on a Raspberry Pi
- Linux-based environment
- Raspberry Pi Camera
##### Training

- Database of negatives
  - The AT&T Laboratories Database of Faces, developed in the 1990s
##### Training

- Captured positives
  - Used the camera to capture images
  - Images were then cropped and resized
##### Training

- The model was trained using the positive and negative images
- Training creates a file that holds the M-dimensional face space
- This gives a base to recognize from

```python
# OpenCV 2.4 API; OpenCV 3+ moved this to cv2.face.EigenFaceRecognizer_create()
model = cv2.createEigenFaceRecognizer()
model.train(np.asarray(faces), np.asarray(labels))
```
##### Recognition

- Steps to recognize a face:
  1. Capture an image
  2. Detect the face
  3. Crop and resize around the face
  4. Project onto each of the eigenvectors
  5. Find the face class that minimizes the Euclidean distance
  6. Return the label from that face class, along with the Euclidean distance
- The Euclidean distance is also called the confidence level; lower values mean a closer match

```python
# OpenCV 2.4 API; OpenCV 3+ moved this to cv2.face.EigenFaceRecognizer_create()
model = cv2.createEigenFaceRecognizer()
label, confidence = model.predict(image)
```
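The prediction steps (after detection and cropping) can be mirrored in plain NumPy, making explicit that the returned "confidence" is just the Euclidean distance (a toy sketch with random stand-ins for a trained model; labels 0 and 1 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for a trained model: mean face, 4 orthonormal eigenfaces
# over 16-pixel images, and one stored weight pattern per class label.
mean_face = rng.random(16)
q, _ = np.linalg.qr(rng.standard_normal((16, 4)))
eigenfaces = q.T
class_weights = {0: rng.standard_normal(4), 1: rng.standard_normal(4)}

def predict(image):
    """Project the (already cropped and resized) image into face space,
    then return the nearest class label and its Euclidean distance --
    the 'confidence', where lower means a closer match."""
    w = eigenfaces @ (image - mean_face)
    distances = {lbl: np.linalg.norm(w - cw)
                 for lbl, cw in class_weights.items()}
    label = min(distances, key=distances.get)
    return label, distances[label]

label, confidence = predict(rng.random(16))
```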
##### Test

- Created four different tests:
  - First data set: 24 positive training images, with almost no pose and lighting variation
  - Second data set: 12 positive training images, with good pose variation but little lighting variation
  - Third data set: 25 positive training images, with good pose and lighting variation
  - Fourth data set: the second and third data sets, but trained with the Fisherface method
#### Results

##### Results

- Results from data sets 1–3, each from 20 input images
- Confidence represents the distance from the known face class

| Data Set | Mean Confidence | Max Confidence | Min Confidence |
|----------|-----------------|----------------|----------------|
| 1        | 3462            | 3948           | 3040           |
| 2        | 2127            | 2568           | 1835           |
| 3        | 1709            | 2196           | 1217           |
##### Results

- Results from the eigenface vs. fisherface comparison

| Algorithm | Data Set | # Training Images | Mean Confidence | Max Confidence | Min Confidence |
|-----------|----------|-------------------|-----------------|----------------|----------------|
| Eigen     | 2        | 12                | 2127            | 2568           | 1835           |
| Fisher    | 2        | 12                | 2029            | 2538           | 1468           |
| Eigen     | 3        | 25                | 1709            | 2196           | 1217           |
| Fisher    | 3        | 25                | 2017            | 2748           | 1530           |
#### Conclusion

##### Conclusion

- Covered the theory behind eigenfaces:
  - Face space
  - Training
- Built a simple implementation with OpenCV's eigenface recognizer
- Compared different training models:
  - Number of training images
  - Pose and lighting variations
- Compared eigenfaces and fisherfaces
##### Conclusion

- Future work:
  - Further testing of different training models
  - Implement updating facial recognition
##### Questions?