
Facebook's DeepFace Software Can Match Faces With 97.25% Accuracy

Have you noticed that Facebook is getting better at suggesting people to tag in the photos you upload? Facebook will only get better at identifying faces thanks to advances in artificial intelligence and “deep learning.” Facebook researchers are developing an algorithm called “DeepFace” that can determine whether two faces in unfamiliar photos belong to the same person with 97.25% accuracy, regardless of lighting conditions or camera angles. For comparison, humans average about 97.53% accuracy on the same task. In other words, Facebook’s facial-verification software is nearly as accurate as humans.
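
To put those figures side by side, here is a quick arithmetic check (a minimal Python sketch; the percentages are the ones quoted above):

    deepface_accuracy = 0.9725
    human_accuracy = 0.9753
    # Convert the quoted accuracies into error rates for comparison.
    print(f"DeepFace error rate: {(1 - deepface_accuracy):.2%}")  # 2.75%
    print(f"Human error rate:    {(1 - human_accuracy):.2%}")     # 2.47%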

Yaniv Taigman, one of Facebook’s artificial intelligence scientists, said that the error rate has been reduced by more than 25% relative to earlier software that handles the same task. Taigman co-founded Face.com in 2007; Facebook acquired the company in 2012. Prior to the acquisition, Face.com built apps and APIs that could scan billions of photos every month and tag the faces in them. Taigman developed DeepFace with fellow Facebook research scientists Ming Yang and Marc’Aurelio Ranzato, along with Tel Aviv University faculty member Lior Wolf.

According to MIT’s TechnologyReview.com, DeepFace first uses a 3D model to rotate faces virtually so that the person in the photo appears to be looking at the camera; the angle of the face is corrected against a 3D model of an “average” forward-looking face. A deep neural network then works out a numerical description of the reoriented face, and if the descriptions from two images are similar enough, DeepFace concludes that they show the same person. The network involves more than 120 million parameters, largely in locally connected layers. The team trained it on a dataset of 4 million facial images belonging to around 4,000 people, meaning each identity was represented by roughly a thousand training samples on average. This facial verification technique can complement facial recognition, which connects a name to a face. Eventually, it could improve Facebook’s ability to suggest users for tagging in an uploaded photo, among other potential uses. The DeepFace algorithms have also been tested successfully for facial verification in YouTube videos, although this was more challenging because the imagery is not as sharp as in photos.
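
As an illustration of the verification step described above, the sketch below compares two aligned face crops by mapping each to a numerical descriptor and thresholding their similarity. This is only a minimal, assumed pipeline: the embed() function is a stand-in (a fixed random projection), not Facebook’s trained network, and the crop size, descriptor length, and threshold are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for the learned deep network: a fixed random projection from
    # a 152x152 face crop (an assumed input size) to a 128-number descriptor.
    PROJECTION = rng.standard_normal((152 * 152, 128))

    def embed(aligned_face):
        """Map an aligned face crop to a unit-length descriptor (illustrative only)."""
        descriptor = aligned_face.reshape(-1) @ PROJECTION
        return descriptor / np.linalg.norm(descriptor)

    def same_person(face_a, face_b, threshold=0.8):
        """Declare a match when the two descriptors are similar enough (cosine similarity)."""
        return float(embed(face_a) @ embed(face_b)) >= threshold

    # Usage with placeholder images; the real input would be 3D-aligned face crops.
    face_a = rng.random((152, 152))
    face_b = rng.random((152, 152))
    print(same_person(face_a, face_b))

In the real system, the descriptor would come from the trained network rather than a random projection, and the decision threshold would be tuned on labeled pairs of matching and non-matching faces.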

DeepFace rotating a celebrity’s face to use for facial verification. Image credit: Facebook

Facebook currently uses different facial recognition algorithms to suggest friends when tagging photos. One factor in the current algorithm is the distance between a user’s eyes and nose across multiple photos. Facebook sometimes suggests the wrong people to tag because of similarities in facial structure between friends, a problem DeepFace could potentially fix. Facebook’s tag suggestion feature launched in June 2011, but many privacy advocates did not approve. Following a probe by the European Union, Facebook suspended the feature to make “technical improvements.” Tag suggestions were reintroduced to U.S. users in January of last year, with an option for users to remove themselves from the list of tag suggestions.
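
For a sense of what such a geometric cue looks like in practice, here is a small, purely illustrative sketch that turns eye and nose landmark coordinates into a scale-invariant distance feature. The function name and coordinates are hypothetical; a real system would obtain the landmark positions from a landmark detector.

    import math

    def eye_nose_ratio(left_eye, right_eye, nose_tip):
        """Ratio of the eyes-midpoint-to-nose distance to the eye-to-eye distance."""
        eye_distance = math.dist(left_eye, right_eye)
        eye_midpoint = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
        nose_distance = math.dist(eye_midpoint, nose_tip)
        # Dividing by the inter-eye distance makes the feature roughly scale-invariant.
        return nose_distance / eye_distance

    # Example with made-up pixel coordinates (x, y).
    print(eye_nose_ratio((120, 140), (200, 142), (160, 200)))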

Additional details of DeepFace are published in a paper titled “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” The project and the paper will be presented at the Computer Vision and Pattern Recognition conference in Ohio this June. The DeepFace algorithms are purely for research right now and do not currently affect Facebook’s 1.2 billion users.
