Is the iPhone facial recognition feature racist?
KEY POINTS
- Apple accused of failing minorities after a Chinese woman realised her colleague could unlock her phone.
- Twitter user argued the technology reflects the bias of its creators.
Apple's Face ID was one of the most widely anticipated features of the iPhone X, which came out in November. Using facial authentication - a sensor that builds a 3D depth map of the user's face - the phone no longer needs a fingerprint to unlock; a single glance from its owner is enough.
The system even learns how a face changes over time, meaning it can recognise someone wearing glasses, a hat, facial hair or different makeup.
Apple was so confident in the system that it issued a statement saying there was a "1 in a million probability of a random person unlocking your iPhone with Face ID."
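Face-recognition systems of this kind generally make an accept-or-reject decision by comparing a stored face template with a fresh scan and unlocking only if the two are similar enough; the "1 in a million" figure is, in effect, a claim about how rarely a stranger's face crosses that similarity cut-off. The sketch below illustrates that general idea in simplified Python. It is a generic illustration, not Apple's actual Face ID code: the vectors, function names and the 0.90 threshold are invented for the example.

```python
# Illustrative sketch only: a generic embedding-and-threshold approach used by many
# face-recognition systems. It is NOT Apple's Face ID implementation; the vectors,
# threshold value and function names are invented for this example.
import numpy as np

MATCH_THRESHOLD = 0.90  # hypothetical cut-off; raising it lowers false accepts
                        # but increases false rejects, and vice versa

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face 'embeddings' (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unlocks(enrolled_template: np.ndarray, new_scan: np.ndarray) -> bool:
    """The phone unlocks only if the new scan is close enough to the stored template."""
    return cosine_similarity(enrolled_template, new_scan) >= MATCH_THRESHOLD

# Toy data standing in for features extracted from a 3D depth map of a face.
owner = np.array([0.8, 0.1, 0.6, 0.3])
owner_again = owner + np.random.normal(0, 0.02, size=4)   # same person, slight variation
someone_else = np.array([0.2, 0.9, 0.1, 0.7])

print(unlocks(owner, owner_again))    # expected: True  (genuine match)
print(unlocks(owner, someone_else))   # expected: False (should be rejected)
```

Reports like the one below suggest that, for some pairs of faces, the system's similarity score was crossing that cut-off when it should not have.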
One woman from Nanjing, China, recently found she could not rely on Apple's facial recognition technology, however. Despite the feature being designed as an advanced security measure, she claimed her colleague could easily unlock her phone.
Yan had to demonstrate the glitch alongside her co-worker in front of Apple staff before she was believed and allowed to exchange the phone, Shanghaiist.com reported on Medium. After the exchange, the same thing happened again - appearing to highlight an issue with the technology itself, not the individual handset.
A Twitter user reacted to the story by accusing Apple of failing ethnic minorities, claiming the company had allowed racial bias to determine how its products work.
Apple did not immediately respond to IBTimes UK's request for comment.
In October, Apple's Vice President for Public Policy and Government Affairs, Cynthia Hogan, responded to questions about the face-scanning technology:
"The accessibility of the product to people of diverse races and ethnicities was very important to us. Face ID uses facial matching neural networks that we developed using over a billion images, including IR and depth images collected in studies conducted with the participants' informed consent. We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity, and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users. In addition, a neural network that is trained to spot and resist spoofing defends against attempts to unlock your phone with photos or masks."
Twitter user BienSur_JeTaime argued that "devices can't be biased, but if the creators don't account for their own biases it shows up".
She added a video of an automatic soap dispenser failing to work for a black person because of their skin colour.
When someone asked how soap dispensers were related to black people she responded with: "Not so much race as skin colour. A lot of 'recognition' tech has issues with recognising and responding to darker tones [because] it isn't calibrated for them. So if the hand isn't light the machine doesn't always know it's there to perform its function."
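The explanation in the tweet can be made concrete with a simplified example. The sketch below imagines a dispenser whose firmware fires only when the light reflected back at its sensor exceeds a fixed threshold calibrated on lighter skin; the numbers and names are invented and do not describe any real product, but they show why darker skin, which reflects less of the emitted light, can fall below such a threshold.

```python
# Illustrative sketch of the calibration problem described in the tweet, not the
# firmware of any real soap dispenser. Assumes a sensor that triggers when reflected
# light exceeds a fixed threshold; all values here are invented.
DETECTION_THRESHOLD = 0.40  # hypothetically tuned using only light-skinned test hands

def hand_detected(reflected_light: float) -> bool:
    """Fixed threshold: anything reflecting less light than this is treated as 'no hand'."""
    return reflected_light >= DETECTION_THRESHOLD

# Darker skin reflects less of the emitted light, so the same hand at the same
# distance can fall below a threshold calibrated only on lighter skin.
readings = {"lighter skin": 0.55, "darker skin": 0.30, "no hand at all": 0.05}

for label, value in readings.items():
    print(f"{label}: dispense soap -> {hand_detected(value)}")

# A more robust design calibrates across the full range of skin tones, or looks for
# a relative change from a per-use baseline rather than a single absolute threshold.
```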
The initial tweet then grew into a wider debate, with others pointing to similar problems in industries such as cinema. They noted that film lighting and makeup were historically designed for white skin, and that as a result darker skin tones remain a challenge for lighting crews.
This is not the first time the technology industry has been criticised for bias and a lack of diversity. In August this year, The Guardian published a piece entitled Segregated Valley, which pointed out that despite Google's vocal commitments to diversity, its workforce was 69% male and just 2% African American. In addition, only 20% of its technical jobs were held by women.