V.S. Subrahmanian Discusses “Cat and Mouse” of Cybersecurity

Subrahmanian and TikTok’s Roland Cloutier discussed new and emerging security challenges for social media companies during a Buffett Institute Fireside Chat on April 12


From national security issues such as election integrity and anti-terrorism to malicious attacks like identity and financial fraud or intellectual property violations, cybersecurity is a moving-target grand challenge that profoundly impacts our lives.

On April 12, the Northwestern Roberta Buffett Institute for Global Affairs hosted a fireside chat to discuss new and emerging security challenges for social media companies. The event featured remarks from V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science in Northwestern Engineering and a faculty fellow at the Buffett Institute, and Roland Cloutier, global chief security officer at TikTok. Northwestern Associate Provost for Global Affairs and Executive Director of the Buffett Institute Annelise Riles moderated the event, which drew more than 250 attendees.

Moving at the speed of culture

Riles began by asking Cloutier about the cybersecurity priorities at TikTok, one of the world's largest media, social, and online technology companies, where he has functional responsibility for information protection, data defense, operational risk, workforce protection, crisis management, and investigative security operations worldwide.

“We don’t even call it cybersecurity anymore. We call it business operations protection,” Cloutier said.

“TikTok is a fun place made for self-expression. We have to protect that business model to ensure that our community feels comfortable and protected because that is TikTok.” 

Cloutier said his day-to-day activity depends largely on what’s happening around the world — his team manages ubiquitous platform security situations ranging in impact and scope from planned events like an Ed Sheeran concert and live broadcasting of National Football League games to evolving geopolitical crises such as the war in Ukraine.

“The world moves at an incredible speed, and TikTok has to move at the speed of culture,” Cloutier said.

Malicious behavior on social media

Subrahmanian shared insights in the sphere of malicious behavior on social media — such as bots and deep fakes — from a national security perspective.

“What we’re going to see over the next few years, and possibly longer, are far more sophisticated, coordinated campaigns,” said Subrahmanian, a leading authority in artificial intelligence and security issues.

He described the feedback loop at the heart of deep fake programs: a generator trains on authentic images and produces fakes, which are fed to a discriminator that judges whether each image is real or forged. The two sides train against each other until the discriminator can no longer reliably tell the generator's output from authentic images.
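The adversarial loop Subrahmanian describes can be sketched in miniature. The toy below is a hedged illustration of the generator-versus-discriminator idea, not any real deep fake system: a one-parameter generator tries to mimic one-dimensional "authentic" data (numbers near 4), while a logistic discriminator learns to tell real samples from generated ones. All names, parameters, and data are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    # Clamp to avoid math.exp overflow on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(min(u, 30.0), -30.0)))

# "Authentic" data: samples around mean 4 (a stand-in for real images).
def real_batch(n):
    return [random.gauss(4.0, 1.0) for _ in range(n)]

# Generator: maps noise z to a sample, g(z) = w*z + b.
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), the probability that x is real.
a, c = 0.0, 0.0

lr, batch = 0.05, 64
before = sum(w * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000

for step in range(2000):
    reals = real_batch(batch)
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [w * z + b for z in zs]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    da = dc = 0.0
    for x in reals:
        g = 1.0 - sigmoid(a * x + c)      # gradient of log D(x)
        da += g * x; dc += g
    for x in fakes:
        g = -sigmoid(a * x + c)           # gradient of log(1 - D(x))
        da += g * x; dc += g
    a += lr * da / (2 * batch); c += lr * dc / (2 * batch)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    dw = db = 0.0
    for z in zs:
        x = w * z + b
        g = (1.0 - sigmoid(a * x + c)) * a
        dw += g * z; db += g
    w += lr * dw / batch; b += lr * db / batch

after = sum(w * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(before, 2), round(after, 2))
```

After training, the generator's samples sit much closer to the authentic data than when it started, mirroring the loop Subrahmanian describes: the generator improves precisely because the discriminator keeps catching its fakes.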

“When the Russians create a deep fake video of President Volodymyr Zelensky telling his troops to lay down their arms, that will be examined and scrutinized by millions of people around the world and quickly uncovered,” Subrahmanian said. “Smart bot developers try to do things differently; they want to move the needle as much as they can without being discovered.”

He pointed out that, while deep fake technology has potentially beneficial applications such as computer code that mathematically creates impressionist-style art, malicious actors use deep fake technology to do harm — for example, putting words in the mouths of politicians or generating nude images of celebrities from widely available, fully-clothed photography.

“Deep fakes are getting very good in the case of images,” Subrahmanian said. “Not as good in the case of video or audio just yet, but they are getting better, and text fakes are making a lot of progress.”

Applying AI to automate defense capabilities

Cloutier reported that approximately 96 percent of violative videos are found and removed automatically by artificial intelligence applications before they reach the TikTok platform. Through deep machine learning and automated defense capabilities, the platform uses data such as account sign-ins, authentications, user location, likes, and follows to model what a normal user can or cannot do and to flag potentially malicious activity.

“If V.S. is normally in Chicago but I see his identity used in Belarus from 15 different device types, maybe there’s a problem,” Cloutier said.
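Cloutier's example amounts to a simple anomaly rule over sign-in telemetry. The sketch below is a hypothetical illustration, not TikTok's system: it flags an account whose identity appears outside its usual country on an unusually large number of distinct device types. The event format, threshold, and function name are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical sign-in events: (user, country, device_type).
events = [
    ("vs", "US", "iPhone 13"),
    ("vs", "US", "iPhone 13"),
] + [("vs", "BY", f"device-{i}") for i in range(15)]

def flag_account_takeover(events, home, device_threshold=10):
    """Flag users whose identity shows up outside their usual country
    on at least `device_threshold` distinct device types."""
    devices_abroad = defaultdict(set)
    for user, country, device in events:
        if country != home.get(user):
            devices_abroad[user].add(device)
    return {u for u, devs in devices_abroad.items()
            if len(devs) >= device_threshold}

print(flag_account_takeover(events, {"vs": "US"}))  # → {'vs'}
```

Real platforms combine many such signals with learned models, but the core idea is the same: a baseline of normal behavior makes deviations cheap to spot.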

To counter disinformation and misinformation, Cloutier explained that TikTok partners with third parties, including education programs, local social organizations, and globally recognized fact-checking organizations, to validate content against local culture, context, and language. TikTok also gives users the ability to help police the community by reporting content that violates its guidelines.

“It’s an everyday lifecycle of defense,” Cloutier said. “We make sure illicit and violating behavior doesn’t get on the platform and, if it subversively does, we remove it and turn that back into technology to detect it again.”

Subrahmanian explained that one problematic area is the middle ground between the computational artifacts that are clearly bad and should be blocked and those that are clearly acceptable and should be allowed to go through to platforms.

“Companies are trying to keep their false positive rate low and their false negative rate low and they want to make this middle as small as possible,” Subrahmanian said.
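The trade-off Subrahmanian describes can be illustrated with a two-threshold triage rule, a hypothetical sketch rather than any platform's actual pipeline: content scoring below one cutoff is allowed, above another is blocked, and the band in between goes to human review. Narrowing the band shrinks the review queue, but at the cost of more automated false positives and false negatives. The scores and thresholds below are invented for illustration.

```python
def triage(score, allow_below=0.2, block_above=0.8):
    """Route a classifier's badness score into allow / review / block."""
    if score >= block_above:
        return "block"
    if score <= allow_below:
        return "allow"
    return "review"

scores = [0.05, 0.15, 0.45, 0.6, 0.85, 0.99]
print([triage(s) for s in scores])

# Wide middle band: more items need human review.
print(sum(triage(s) == "review" for s in scores))           # → 2
# Narrower band: fewer reviews, but more risk of wrong automated calls.
print(sum(triage(s, 0.4, 0.6) == "review" for s in scores)) # → 1
```

Making "this middle as small as possible," in Subrahmanian's phrase, corresponds to moving the two thresholds toward each other without letting the error rates on either side climb.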

He also noted the challenges and limitations AI security technology faces in detecting such analogous cases with an automated program.

McCormick News Article