Deepfake detection tools need to work with dark skin tones, experts warn

Detection tools being developed to combat the growing threat of realistic-looking deepfakes must use training datasets that include darker skin tones to avoid bias, experts have warned.

Most deepfake detectors rely on a learning strategy that depends heavily on the dataset used to train them; the trained model then uses artificial intelligence to spot signs of manipulation that may not be visible to the human eye.

This may include blood flow and heart rate monitoring. However, these detection methods don’t always work on people with darker skin tones, and if the training sets don’t contain all ethnicities, accents, genders, ages and skin tones, they’re open to bias, experts warn.
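The heart-rate cue works roughly as follows: the pulse causes tiny periodic colour changes in facial skin, and a detector looks for that rhythm across video frames. The sketch below is a simplified illustration of that idea only, not any vendor's actual detector; the frame rate, pulse rate, signal amplitudes and noise level are all invented for the example, and the weaker signal stands in for the harder extraction the article describes for darker skin tones.

```python
# A minimal, illustrative sketch (not any vendor's actual detector) of the
# idea the article describes: recovering a heart-rate signal from tiny
# frame-to-frame colour changes in facial skin. All numbers below are
# made-up assumptions for illustration only.
import numpy as np

FPS = 30                      # assumed camera frame rate
N_FRAMES = 30 * FPS           # 30 seconds of video
HEART_RATE_HZ = 1.2           # assumed true pulse: 72 beats per minute

def mean_green_signal(pulse_amplitude, noise_std=0.5, seed=0):
    """Simulate the per-frame mean green-channel value of a face region.

    pulse_amplitude stands in for how strongly the pulse modulates the
    pixels; in practice it is lower for darker skin tones, which is the
    bias the article's experts warn about.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(N_FRAMES) / FPS
    pulse = pulse_amplitude * np.sin(2 * np.pi * HEART_RATE_HZ * t)
    noise = rng.normal(0.0, noise_std, N_FRAMES)   # sensor/compression noise
    return pulse + noise

def estimate_heart_rate(signal):
    """Pick the dominant frequency in a plausible heart-rate band (0.7-4 Hz)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])] * 60  # beats per minute

for label, amplitude in [("strong pulse signal", 1.0), ("weak pulse signal", 0.05)]:
    bpm = estimate_heart_rate(mean_green_signal(amplitude))
    print(f"{label}: estimated {bpm:.0f} bpm (true 72 bpm)")
```

When the pulse signal is buried in noise, the strongest frequency in the band is often just noise, so the estimate becomes unreliable; this is the failure mode experts say disproportionately affects people with darker skin.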

Over the past couple of years, concerns have been raised by AI and deepfake detection experts who say that bias is being created in these systems.

Rijul Gupta, synthetic media expert and co-founder and CEO of DeepMedia, which uses artificial intelligence and machine learning to evaluate visual and audio cues for underlying signs of synthetic manipulation, said: “The datasets are always heavily biased toward middle-aged white men, and this type of technology always negatively impacts marginalized communities.”

“At DeepMedia, instead of being blind to race, our detectors and technology actually look for a person’s age, race and gender. So when our detectors are looking to see whether or not a video has been manipulated, they have already seen a large number of samples of various ages and races.”

Gupta added that deepfake detection tools that rely on visual cues, such as blood flow and heart rate tracking, may have an underlying bias toward people with lighter skin tones, because it is much harder to extract a heart rate from darker skin in a video stream.

The inherent bias of these tools means they will perform worse on minorities.

“The end result we will see is an increase in AI-driven scams, fraud and deepfake disinformation that is highly targeted and focused on marginalized communities,” Gupta said.

Mutale Nkonde, AI policy adviser and CEO and founder of AI for the People, said the concerns stem from the broader exclusions that minorities already face.

“If we had technology that would keep some people safe, it really should keep everyone safe, and unfortunately, the technology isn’t quite there yet,” Nkonde said.

“We’re well educated on the problems facial recognition has in recognizing dark skin, but the general public doesn’t realize that just because the technology has a new name, function, or use doesn’t mean the engineering is advanced.”


“It also doesn’t mean there is new thinking in the field. And since there is no regulation anywhere in the world that says you can’t sell a technology that doesn’t work, the underlying bias continues and reproduces itself in new technologies.”

Ellis Monk, a sociology professor at Harvard University and a visiting faculty researcher at Google, developed the Monk Skin Tone Scale. It is an alternative scale that is more inclusive than the tech industry standard and provides a broader spectrum of skin tones that can be used for datasets and machine learning models.

Monk said: “Darker-skinned people have been excluded from the way these different forms of technology have been developed from the beginning.”

“We need new datasets built that have more coverage, more representativeness in terms of skin tone, and this means that we need some kind of standardized measure that is consistent and more representative than the previous scales.”
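One practical use of a standardized scale is auditing how evenly a training set covers it. The sketch below is a hypothetical illustration of such an audit: the per-sample annotations and the 5% under-representation threshold are invented for the example, and only the fact that the Monk Skin Tone scale defines 10 tones comes from the scale itself.

```python
# A hedged sketch of auditing a training set's skin-tone coverage against a
# 10-point scale like the Monk Skin Tone (MST) scale the article mentions.
# The sample annotations here are hypothetical; real annotations would come
# from human labelling of each face in the dataset.
from collections import Counter

MST_TONES = range(1, 11)   # the Monk Skin Tone scale defines 10 tones

# Hypothetical per-sample annotations (1 = lightest ... 10 = darkest).
sample_annotations = [1, 2, 2, 3, 3, 3, 4, 2, 1, 5, 3, 2]

counts = Counter(sample_annotations)
total = len(sample_annotations)

print("MST tone | share of dataset")
for tone in MST_TONES:
    share = counts.get(tone, 0) / total
    flag = "  <-- under-represented" if share < 0.05 else ""
    print(f"{tone:8d} | {share:6.1%}{flag}")
```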

