October 22, 2019
Are you one of those people who can immediately recognise a face in a crowd and place it? People with this ability are known as “super recognisers”: they have an above-average ability to recognise faces, typically defined as scoring in the top 1-2% on a selection of face recognition tests. For the average Joe, though, a vaguely familiar face may plague you for hours as you try to piece together who the person is, where you know them from, or whether you even know them at all!
If we as humans struggle so much to recognise faces - how then do machines do it?
Typically, facial recognition software reads the geometry of your face. Cameras measure the distances between key facial features, or landmarks: the distance between your eyes, the distance from forehead to chin, the size of your nose, the distance from eyes to ears, and so on. The software identifies facial landmarks - one widely used system identifies 68 of them - that are key to distinguishing your face. The result is your facial signature, also referred to as your face print.
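To make the idea concrete, here is a minimal sketch of how distances between landmarks could be turned into a face print. The landmark coordinates below are made up for illustration; a real system (such as one using a 68-point landmark detector) would extract them automatically from an image.

```python
import math

# Hypothetical 2D landmark coordinates in pixels - a real detector
# would locate these (and dozens more) automatically in a photo.
landmarks = {
    "left_eye":  (120, 160),
    "right_eye": (200, 158),
    "nose_tip":  (160, 210),
    "chin":      (162, 300),
}

def distance(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def face_print(points):
    """Build a simple 'facial signature': all pairwise distances
    between landmarks, normalised by the inter-eye distance so the
    print does not change when the photo is scaled."""
    names = sorted(points)
    eye_dist = distance(points["left_eye"], points["right_eye"])
    prints = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            prints.append(distance(points[names[i]], points[names[j]]) / eye_dist)
    return prints

print(face_print(landmarks))
```

With 4 landmarks this yields 6 pairwise distances; with all 68 landmarks the same idea produces a much longer, more distinctive vector.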
The interesting thing is that the technology has evolved beyond 2D data points like those above. Systems can now take a 3D impression by projecting light onto the face and using sensors to measure how far each key point sits from the camera, thereby accounting for depth. Apple’s Face ID on current iPhones projects approximately 30,000 infrared dots to map the contours of your face.
A facial recognition system then compares your face print against a database of known faces, scoring how closely it matches the image(s) stored there. Once the system finds a match, you have been recognised. If the system can also put a name to your face, you have been identified.
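The matching step above can be sketched as a nearest-neighbour search with a cut-off: find the stored face print closest to the probe, and accept it only if the distance falls within a threshold. The names, vectors, and threshold below are invented for illustration.

```python
import math

def euclidean(a, b):
    """Distance between two face-print feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical database of known face prints (tiny toy vectors).
database = {
    "alice": [0.9, 1.7, 1.1],
    "bob":   [2.4, 0.3, 1.9],
}

def identify(probe, db, threshold=0.5):
    """Return the name of the closest known face print, or None if
    nothing in the database is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, stored in db.items():
        d = euclidean(probe, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

print(identify([0.92, 1.68, 1.05], database))  # close to alice's print
print(identify([9.0, 9.0, 9.0], database))     # nothing nearby: no match
```

The threshold is the key design choice: set it too loose and strangers get recognised (false matches); too strict and the enrolled person gets rejected.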
The technology can still be fooled by facial expressions and/or facial hair, but it is constantly evolving to improve on this. One such advancement is skin texture analysis, which in essence measures the same kinds of distances but uses much smaller points - almost pore to pore - and may even be able to reliably tell identical twins apart before too long.
As users of technology we’re also helping these systems in our own small ways, through Facebook photo tagging, Google photo galleries, Snapchat’s filters and the like. By applying distortions, and by telling the system how well it recognised us - as Facebook’s tagging suggestions do - we are teaching it to better recognise our changing features every day.