Abstract: In principle, the recovery and reconstruction of a 3D object from its 2D view projections require the parameterisation of its shape structure and surface reflectance properties. Explicit representation and recovery of such 3D information is notoriously difficult to achieve. Alternatively, a linear combination of 2D views can be used, which requires the establishment of dense correspondence between views. This is, in general, difficult to compute and computationally expensive. In this paper we examine the use of affine and local feature-based transformations for establishing correspondences across very large pose variations. In doing so, we utilise a generic-view template, a generic 3D surface model and Kernel PCA for modelling shape and texture nonlinearities across views. The abilities of both approaches to reconstruct and recover faces from any 2D image are evaluated and compared.
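The abstract mentions Kernel PCA as the tool for modelling shape and texture nonlinearities across views. The paper's own formulation is not given here, so the following is only a minimal from-scratch sketch of standard RBF Kernel PCA on toy data; the function name `kernel_pca`, the `gamma` value, and the circle-shaped test data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances mapped through a Gaussian (RBF) kernel
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto its leading nonlinear principal components (RBF KPCA)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Centre the kernel matrix in the implicit feature space
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecompose the centred kernel; eigh returns ascending eigenvalues
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalise eigenvectors so the feature-space components have unit norm
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    # Projections of the training samples onto the nonlinear components
    return Kc @ alphas

# Toy data: noisy points on a circle, a nonlinear 1D manifold in 2D,
# standing in for the nonlinear variation of face appearance across pose
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))
Z = kernel_pca(X, n_components=2, gamma=2.0)
```

In the linear-combination-of-views setting, such nonlinear components would be extracted from shape and texture vectors sampled across pose, rather than from synthetic 2D points as above.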