Deepfakes are a distinctly late-2010s tool of disinformation. An audiovisual cousin of the memes that pair false statements with political figures, they are more dangerous simply because people tend to trust video more than other media.
Microsoft, Facebook, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY are teaming up to build the Deepfake Detection Challenge (DFDC) to combat this scourge.
“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online. Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes,” Facebook’s Mike Schroepfer explained on Thursday. “The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer. The Deepfake Detection Challenge will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others.”
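To make the detection task concrete, here is a minimal sketch of the kind of entry such a leaderboard would score: a binary real-vs-fake classifier applied per video frame, with frame scores averaged into a per-video probability. The model choice (ResNet-18), the frame-averaging aggregation, and every name below are illustrative assumptions, not the DFDC's actual baseline or API.

    # Minimal sketch of a frame-level deepfake classifier (assumed design,
    # not the DFDC baseline). Requires torch, torchvision, and Pillow.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Assumed preprocessing: resize frames to the input size a pretrained
    # ResNet expects and normalize with ImageNet statistics.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Pretrained backbone with a fresh 1-logit head for real-vs-fake output.
    # The head is untrained here; a real entrant would fine-tune it on
    # labeled real/fake videos before its scores mean anything.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    model.eval()

    @torch.no_grad()
    def score_video(frames: list[Image.Image]) -> float:
        """Return the mean 'fake' probability across a video's sampled frames."""
        batch = torch.stack([preprocess(f) for f in frames])
        logits = model(batch).squeeze(1)
        return torch.sigmoid(logits).mean().item()

A competitive entry would go further, for example by cropping detected faces and modeling temporal artifacts across frames, but the shape above is roughly what a data-set-plus-leaderboard challenge evaluates: a function from a video to a manipulation probability.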
As the 2020 elections draw closer, the work of these organisations will only grow more vital. Deepfakes may be used for laughs now, but sophisticated actors could use them to deepen the mistrust that already divides people along political lines.
Source: Facebook