As the saying goes, hearing may deceive, but seeing is believing. With the development of artificial intelligence, however, seeing is no longer necessarily believing. The reason is deepfake technology: the word "deepfake" is a blend of "deep learning" and "fake," and refers to AI-based techniques for synthesizing human images. The spread of this technology has raised widespread concern.
In 2018, NVIDIA used artificial intelligence to synthesize photos of faces that do not exist, relying on an algorithm called a Generative Adversarial Network (GAN). A GAN pits two neural networks against each other: a generator produces candidate images, while a discriminator tries to tell them apart from real photos. Each network's improvement forces the other to improve in turn, so, given enough time, a GAN can generate a fake face that looks more convincing than a real one.
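The adversarial loop described above can be sketched in miniature. The code below is an illustrative toy, not NVIDIA's face model: the generator and discriminator are single-parameter linear models on one-dimensional data, and the target distribution, learning rate, and step count are all invented for demonstration. It shows the alternating updates that drive both networks to improve.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps 1-D noise z to a sample, fake = w_g * z + b_g.
w_g, b_g = 0.1, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.1, 0.0

lr = 0.02
target_mu, target_sigma = 4.0, 1.0  # the "real" data distribution

for step in range(3000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    real = rng.normal(target_mu, target_sigma, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = w_g * z + b_g

    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # Gradients of the binary cross-entropy loss w.r.t. discriminator params.
    grad_w_d = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b_d = np.mean(d_real - 1) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(0.0, 1.0, 32)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    # Non-saturating generator loss -log d(fake); chain rule through the logit.
    grad_fake = (d_fake - 1) * w_d
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

# After training, generated samples should have drifted toward the real mean.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ~= {samples.mean():.2f} (target {target_mu})")
```

The same back-and-forth structure, with convolutional networks in place of these linear toys and images in place of scalars, is what produces photorealistic synthetic faces.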
Since then, the ability of artificial intelligence to generate images of human faces and bodies has improved dramatically. But the technology has also caused real harm. Scammers use generated fakes to deceive people, faces are spliced into pornographic videos without the subjects' consent, and fabricated imagery undermines trust in online media. Worse, generated photos can be more likely to win people's trust than real ones. While artificial intelligence itself can be used to detect deepfakes, tech companies' failure to effectively moderate such content suggests that this path alone will not solve the problem.
A more important question is whether humans can tell synthetic photos from real ones, and how well. The results of a study published in PNAS are not promising: people identified fake photos less accurately than random guessing would, and found fabricated faces more trustworthy than real ones. As the study's authors write, "Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces."
To test responses to fake faces, the researchers used an updated version of NVIDIA's GAN to generate 400 fake faces with equal gender representation and 100 faces from each of four groups: Black, Caucasian, East Asian, and South Asian. They matched these with real faces drawn from the database originally used to train the GAN; the pairs were judged to be similar by a separate neural network.
They then recruited 315 participants through Amazon's Mechanical Turk crowdsourcing platform. Each was asked to judge 128 faces drawn from the combined dataset and decide whether each was real or fake. In the end their average accuracy was just 48%, lower than the 50% expected from random guessing.
Deepfake photos often contain characteristic flaws and artifacts that can help people identify them. So the researchers ran a second experiment with another 219 participants, giving them some basic training in what to look for before asking them to judge the same number of faces. But performance improved only slightly, to 59 percent, a gain of just 11 percentage points.
In the last experiment, the team tested whether intuitive responses to faces could improve accuracy. People often make hard judgments in a split second, on first instinct, and with faces, trustworthiness is among the first impressions used to size a person up. Yet when another 223 participants rated the trustworthiness of 128 faces, the researchers found that people actually rated fake faces 8% more trustworthy than real ones, a small but statistically significant difference.
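The comparison in this experiment comes down to the difference between two groups' mean ratings and whether that difference is statistically meaningful. As a sketch of how such a comparison works: the ratings below are hypothetical numbers invented for illustration, not the study's data, and Welch's t-statistic is one standard way to compare two sample means (the study's own analysis may differ).

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for the difference in means of two samples
    with possibly unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 7-point trustworthiness ratings (NOT the study's data).
fake_ratings = [5, 5, 4, 6, 5, 5, 5, 6, 4, 5]
real_ratings = [5, 5, 4, 5, 5, 4, 4, 5, 5, 4]

diff_pct = (mean(fake_ratings) - mean(real_ratings)) / mean(real_ratings) * 100
print(f"fake faces rated {diff_pct:.1f}% higher on average, "
      f"t = {welch_t(fake_ratings, real_ratings):.2f}")
```

With real data one would convert the t-statistic to a p-value to decide whether a difference of this size could plausibly arise by chance, which is what "statistically significant" means in the study's finding.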
The researchers suggest that fake faces look more trustworthy because they tend to resemble an average face, and people tend to trust ordinary, harmless-looking faces more. This was borne out by inspecting the four least trustworthy faces (all real) and the three most trustworthy faces (all fake).
The study argues that those developing the technology behind deepfakes need to think hard about what they are doing, and ask themselves honestly whether its benefits outweigh the risks it poses. The industry should also seriously consider adopting safeguards, such as embedding watermarks in the images these tools output. The authors write that, because this powerful technology poses a serious threat to people's lives, we should consider whether there ought to be limits on publicly releasing unrestricted deepfake code that anyone can incorporate into any program at will. Unfortunately, it may already be too late: publicly available models can already produce highly realistic deepfake photos, and it is unlikely that they can be taken back.
Link to original article: https://blog.csdn.net/qq_43529978/article/details/123109543