While face recognition algorithms have shown promising results on gray-level face images, their accuracy deteriorates when the face images are not frontal. Because the head can move freely, pose variation is a key challenge in face recognition: how to automatically recognize, without manual intervention, non-frontal face images against a gallery of frontal face images. Rotation is a linear transformation in 3D space and can be compensated easily using 3D face data. However, recognition algorithms based on 3D face data achieve lower recognition rates than methods based on 2D gray-level images. In this chapter, a sequential algorithm is proposed that combines the benefits of 2D and 3D face data to obtain a pose-invariant face recognition system. In the first phase, facial features are detected and the face pose is estimated. Then, the 3D data (face depth data) and, correspondingly, the 2D image (gray-level face data) are rotated to obtain a frontal face image. Finally, features are extracted from the frontal gray-level images and used for classification. Experimental results on the FRAV3D face database show that the proposed method drastically improves the recognition accuracy of non-frontal face images.
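As a minimal sketch of the frontalization step, assuming the estimated pose is expressed as yaw/pitch/roll angles (the axis conventions and angle names here are an assumption, not the chapter's exact formulation), the linearity of 3D rotation means the pose can be undone by applying the inverse rotation matrix to the depth point cloud:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    # Compose rotations about the y (yaw), x (pitch), and z (roll) axes.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cx, sx = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Rx @ Ry

def frontalize(points, yaw, pitch, roll):
    # points: (N, 3) array of 3D face points in head pose (yaw, pitch, roll).
    # A rotation matrix is orthogonal, so its inverse is its transpose;
    # multiplying row vectors by R applies R.T, i.e. the inverse rotation.
    R = rotation_matrix(yaw, pitch, roll)
    return points @ R
```

Because the inverse of a rotation is just its transpose, frontalization is a single matrix product per point, which is why the chapter describes rotation as an easily solved linear problem once 3D data are available.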
Keywords: 3D rotation, Biometric, Depth data, Dimensionality reduction, Ellipse
fitting, Eigenproblem, Eigenface, Face recognition, Facial features, Fisherface,
Feature extraction, Gray level image, IRAD contours, Linear Discriminant
Analysis, Least mean squares, Manifold learning, Mean filter, Mean curvature,
Nearest Neighbor classifier, Pose estimation.