Once again we return to deepfake technology, which has settled into our reality for good and, unfortunately, is causing more and more problems, not only for the largest corporations but also for celebrities and governments.
We recently wrote about how it has become a dangerous tool in the hands of criminals who have already stolen millions of dollars and remain elusive. Their targets have so far been large companies, but security specialists warn it is only a matter of time before private individuals become victims as well. All because nearly anyone can now create an image or recording imitating a person's voice or appearance using fairly easy-to-use software.
Technology giants, including Facebook and Microsoft, have decided to address this growing problem by announcing the Deepfake Detection Challenge (DFDC), which aims to encourage the industry to develop new methods of detecting such manipulation and preventing the spread of false information. The challenge involves hiring paid actors to create a database of deepfakes, which will then be made available to participants to use in building deepfake detection software.
The list of participants already includes the Partnership on AI and researchers from Cornell Tech, the Massachusetts Institute of Technology (MIT), the University at Albany-SUNY, the University of California, Berkeley, the University of Maryland, College Park, and the University of Oxford. And no wonder, because the DFDC offers grants and prizes for which Facebook alone has allocated $10 million. The program will start at the end of this year and run until the end of March 2020. Why such a short timeline? The problem is urgent, especially in light of next year's presidential election in the United States, which will certainly be a major challenge for many technology giants.
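To give a sense of how such a detection challenge is typically judged: participants submit, for each video clip, a probability that it is fake, and submissions are scored against the ground-truth labels with a metric such as binary log loss (lower is better). The sketch below is only an illustration of that kind of scoring; the labels and detector outputs are hypothetical, not taken from the actual DFDC rules.

```python
import math

def log_loss(labels, probs, eps=1e-15):
    """Binary cross-entropy over a set of clips: lower is better.
    labels: 1 = fake, 0 = real; probs: detector's predicted
    probability that each clip is fake."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(labels)

# Hypothetical detector output over four clips
labels = [1, 1, 0, 0]          # ground truth: fake, fake, real, real
probs  = [0.9, 0.8, 0.2, 0.1]  # detector's confidence each clip is fake
print(round(log_loss(labels, probs), 4))  # → 0.1643
```

A metric like this rewards well-calibrated confidence rather than just correct yes/no answers: a detector that is confidently wrong on even one clip is penalized heavily, which matters when false accusations of manipulation are themselves harmful.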
If the DFDC does produce effective solutions, corporations will still have plenty of time to implement them and prepare to fight the wave of deepfakes that will surely flood the web in the fall of 2020 (all the more so since legislators are now pushing large companies to present a strategy for combating them by then). What's more, the DFDC raises key questions about how this technology should be regulated and what constitutes its proper use.
However, not everyone likes this approach, especially since Facebook and Microsoft define a deepfake as any material modified by artificial intelligence with the intent to mislead. Although this refers to deliberate deception, some internet users fear the definition could also cover, for example, memes and other satirical works, which in turn could lead to further restriction of artistic freedom on the web. The Electronic Frontier Foundation (EFF) draws similar conclusions, pointing out that any top-down attempt to regulate deepfake technology can lead to unwanted censorship, and that none of the automated methods presented so far can reliably distinguish satire or parody from genuine deepfake material. What is your opinion? Do we need initiatives like the one from Facebook and Microsoft, or do they carry too much risk themselves?