Deepfake-Detection System Is Now Live

Free platform to help journalists determine validity of digital artifacts

Northwestern University researchers have launched a new, easy-to-use platform for detecting deepfakes, which is now available to a limited number of verified journalists.

Called the Global Online Deepfake Detection System (GODDS), the system is free for verified journalists who want to verify the authenticity of audio, images, and/or videos. To apply to use the free service, journalists must register using an official work-provided email address from a verified news organization.

After the verification process, journalists simply upload the digital artifacts in question. They then receive an assessment via email within 24 hours.

“Northwestern’s analysts will work with automated algorithms within the GODDS system to augment the best ideas underlying existing technological approaches to deepfake detection with the power of context and background knowledge and, with human analysis, render an expert opinion on whether a digital artifact is likely to be a deepfake or not,” said Northwestern Engineering’s V.S. Subrahmanian, the lead investigator on the GODDS effort. “Not only will our analysts provide an opinion on whether a digital artifact is likely to be a deepfake or not, this opinion also will provide an understandable explanation for that opinion.”

An internationally renowned expert on deepfakes, Subrahmanian is the Walter P. Murphy Professor of Computer Science at the McCormick School of Engineering and a faculty fellow at the Buffett Institute for Global Affairs. Subrahmanian also is the founding director of the Northwestern Security and AI Laboratory (NSAIL).

According to Subrahmanian, the development of GODDS arose out of a need to improve publicly available systems for deepfake detection. After performing rigorous tests on various systems, NSAIL members found that even the best publicly available algorithms are not strong enough to meet the standard of proof required by journalists and US courts. 

As deepfake technology continually improves, experts are finding it increasingly difficult to distinguish fact from fiction. Misrepresentations of “real” digital artifacts are particularly difficult to detect.

“Imagine a true video of a riot in a town square in 2022 being represented as a video of an ongoing riot at the same location today,” Subrahmanian said. “Separating fact from fiction requires a lot more than merely subjecting the digital artifact to inspection as many of today’s deepfake detectors do. And, as deepfake-generation technology improves, the ‘tells’ used by humans today will likely disappear, making it impossible for humans to determine whether a digital artifact is a deepfake or not through visual or auditory inspection. Background information, context, and history will instead play a more important role in separating fakes from the real thing.” 

“Context and history are an integral part of the story being told using any digital artifact,” said Rand Waltzman, former program manager of the Social Media in Strategic Communications program at DARPA. “Without the whole story, one cannot accurately judge its veracity.” 

Because the number of human analysts trained to work with artificial intelligence is limited, GODDS is currently available only to a limited number of verified journalists. But Subrahmanian and his team hope to eventually offer the service to more users.