CASMI Celebrates Launch with Ribbon Cutting, Panel Discussion
Ceremony featured remarks from Northwestern, Underwriters Laboratories leaders
From facial recognition to self-driving cars to smart homes to medical diagnostic systems, machine learning (ML) is the basis for transformational technologies we encounter every day. Yet despite the ubiquity of ML-driven applications, we still don’t fully understand the ramifications of this digital reality for our lives and communities.
Faculty, students, and industry professionals came together on April 8 to celebrate the launch of Northwestern’s Center for Advancing Safety of Machine Intelligence (CASMI). In partnership with the Digital Intelligence Safety Research Institute (DISRI) at Underwriters Laboratories Inc., CASMI will lead a wide-ranging research network to evaluate the human impacts of intelligent technologies and develop best practices for the design, development, and evaluation of systems to help ensure they are safe, equitable, and beneficial to all.
“We are now in a world where the technologies of intelligence are having impact absolutely everywhere,” said Kristian Hammond, CASMI director and Bill and Cathy Osborn Professor of Computer Science at Northwestern Engineering. “If we don’t understand the science behind that impact, we will not be able to guide it, and make sure that the technologies will be safe and beneficial for individuals and society.”
A ceremonial ribbon cutting on the third floor of Mudd Hall kicked off the event, which featured remarks by Hammond and representatives from Northwestern and Underwriters Laboratories.
Milan Mrksich, vice president for research at Northwestern, said CASMI’s goals align with the University’s overall research mission and focus on impact.
“Impact means bringing our basic and fundamental studies to the point where they benefit society and make the world a better place,” said Mrksich, the Henry Wade Rogers Professor of Biomedical Engineering and professor of chemistry and of cell and developmental biology. “That’s done through translation and through affiliations with our corporate partners, which provide us with knowledge, understanding, and guidance about the most important problems that need to be solved with the best new ideas. This center and having Underwriters Laboratories as a partner to develop those ideas is going to be a tremendous societal benefit and will drive a lot of science and recruit a great community to Northwestern.”
Terrence R. Brady, chief executive officer, president, and trustee at Underwriters Laboratories, explained that digital intelligence is a relatively new area of focus for the 128-year-old company.
“Digital security and AI are critically important new areas where we need to be involved, because the internet and decision-making technologies are now so deeply embedded in our everyday lives,” Brady said. “We want to drive impact — improvement in digital security and safety for all of us, for our families — and I believe this partnership is one of the most important ways we can make a difference. We all have an opportunity together to chart a safer, smarter, and more ethical course forward.”
Northwestern Engineering Dean Julio M. Ottino connected the intersection of ethics and the unplanned, unexplored impacts of artificial intelligence (AI) to his research in complex systems and nonlinear dynamics and chaos.
“I can recognize a good, creative idea when I see one,” Ottino said. “And this is really a new idea in the spectrum of how universities interact with the corporate world.”
Christopher J. Cramer, Underwriters Laboratories senior vice president and chief research officer and acting DISRI executive director, discussed the trailblazing cooperative agreement that established CASMI.
“Our approach is not typical for industry working in universities. I’m grateful for the way we have set up both the governance and the leadership so that we can recruit the best ideas from around the globe,” Cramer said. “We have an advisory board that includes people from industry as well as important contributors and developers of commercial products. I think the future looks incredibly bright, and I’m looking forward to cutting that ribbon and seeing the trains take off.”
Samir Khuller, Peter and Adrienne Barris Chair of Computer Science at Northwestern Engineering, noted the great opportunities for collaboration across the department that CASMI will yield, particularly for students.
“These types of partnerships give students the experience of seeing the real-life problems that they will encounter after they leave Northwestern,” Khuller said.
Khuller also underscored the complexity of machine learning and artificial intelligence.
“Machines are thinking and evolving. And safety in digital systems is complicated,” Khuller said. “If you open the hood of a car, you might not fully understand what every component does, but you can see pipes and tubes and their connections and it's easy to get a grasp of the complexity. But with digital systems, it’s more like a black box. You click a button, software gets downloaded, interesting things begin to happen. And being able to delve deeper into what's going on is critical.”
Virtual panel explores ethics, safety of AI
The day also included a virtual panel discussion, attended by 145 guests, featuring experts in machine learning and artificial intelligence.
Moderated by David Danks, professor of data science and philosophy at the University of California, San Diego, “Ethics, Safety, and AI: Can We Have It All?” challenged panelists to share their thoughts about how to achieve AI technologies that are truly ethical and safe for everyone.
“The language models trained on larger-scale data really lack the notion of ethics or safety and reflect the biases and toxicity of humanity,” said Yejin Choi, the Brett Helsel Professor at the University of Washington Paul G. Allen School of Computer Science and Engineering and senior research manager at the Allen Institute for Artificial Intelligence. “The challenge requires more dedicated AI research but also more external efforts around AI to resolve complicated issues. We need to draw from deep insights and expertise from the humanities, philosophy, social sciences, law, and other disciplines and sync new and truly cross-disciplinary research.”
Brent Hecht, associate professor of computer science at Northwestern Engineering and of communication studies at Northwestern’s School of Communication and director of applied science at Microsoft, is optimistic about researchers making progress toward leading AI to the best outcomes for the most people and navigating the complex middle.
“It’s critical to embrace the complexity of both the technical and social challenges, but also to make sure the perfect doesn’t become the enemy of the good,” Hecht said. “One way to square that circle is to be focused on solutions and less on problem identification. Find solutions that you can build that help people as quickly as possible and work from there. There is nothing wrong with a highly iterative, low-hanging-fruit-based strategy.”
Cara LaPointe, co-director of the Institute for Assured Autonomy (IAA) at Johns Hopkins University, believes we can have it all if we are deliberate in our approach.
“When you talk about safety and ethics and assurance, all of these issues live at the intersection between technology and humanity,” LaPointe said. “At IAA, we take a holistic approach. It’s not just about technology. It’s about the human ecosystems you’re going to put that technology into. You must look at how you develop ethics, governance, and technology in concert with one another and understand the strategic feedback loops between the different elements for us to end up at the result where we can drive safety, ethics, and trustworthiness into technology.”
Panelists also discussed how to create effective interdisciplinary teams that function across academia, government, and industry.
LaPointe introduced the element of stakeholder engagement and stressed the importance of quality underlying data to understand what is needed from AI technologies, what we value, and the impact on various stakeholders.
“We need more research in terms of how you take the idea of human-centered design to community scale and to society scale,” LaPointe said. “We have to be deliberate about finding models to bring together researchers and practitioners in different sectors and also bring in the stakeholder voice. Often the marginalized and vulnerable populations don’t have a voice in the process.”
“Think about where your incentive structures overlap with someone who you need to collaborate with and just dive in,” Hecht said.
Panelists shared insights about learning through AI deployments and strategies for measuring, testing, and tracking safety over time.
“In the olden days, we used to have laboratory settings in which we tested our AI models and that was sufficient. These days, that’s just not enough,” Choi said. “Real-life scenarios are much more adversarial and diverse than what’s assumed in the laboratory setting.”
Choi suggested that treating evaluation frameworks as an evolving, dynamic practice, along with building more public-facing demo systems that can be stress-tested before formal deployment, may help reduce the gap between lab results and real-world outcomes.
The panel also considered open questions and challenges related to operationalizing values and ethics in AI.
“Digitizing an analog problem does not solve the underlying analog problem. I fear that, if we are not deliberate, we will codify and amplify negative impacts of these technologies,” LaPointe said. “The solution is understanding the underlying data that is going into these systems and how we can use data, software, and hardware in concert to end up at the impact we need.”
Projects
CASMI identifies and funds research initiatives led by teams at Northwestern and at partner institutions that advance the state of the art and answer key questions outlined in the Research Roadmap and Evaluation Framework.
"We are thrilled to be supporting these projects in CASMI’s first year, each of which investigates a key open research questions within one of our core focus areas of data, algorithm, interaction and evaluation," Hammond said. "We’re excited to be working with these five PIs from NYU, MIT, University of Wisconsin, University of California Santa Cruz and Cal Poly, who are now on our governance advisory committee; their work individually and as a group will help us to aggressively pursue CASMI’s research mission to operationalize safety in machine intelligence systems."