Department Events & Announcements
Events
-
Nov 13
EVENT DETAILS
Wednesday / CS Seminar
November 13th / 3:00 PM
Hybrid / Mudd 3514

Speaker
Adrian Perrig, ETH Zurich

Talk Title
Reaching Escape Velocity for Layer-3 Innovation: Deployability of a Next-generation Internet Architecture

Abstract
It appears nearly impossible to deploy a new Internet architecture
that innovates at Layer 3 of the networking stack, as the obstacles
seem insurmountable: billions of deployed devices, legacy network
infrastructure with hardware-based packet processing with long
replacement cycles, operating systems of a sprawling complexity, and a
diverse application landscape with millions of developers. As a new
Internet architecture seemingly needs support from all of these
stakeholders, fundamental innovation at the network layer appears
hopeless.

We identify dependency loops as a core barrier to the deployment of a
next-generation Internet architecture. We propose to break these
dependency loops with a virtuous cycle: the availability of
applications using the NGN will result in an increasing amount of
traffic, encouraging more NSPs to deploy the NGN, fueling user demand,
and inviting more applications to deploy. We postulate that 1 million
users with access to NGN connectivity suffice to set the virtuous
cycle in motion.

The aim of this talk is to instill hope for the deployment of
next-generation Internet architectures. With the expanding real-world
deployment of the SCION secure network architecture, we show how a
next-generation education network can be established and connected to
the commercial network. Applications running on hosts in these
networks can immediately make use of the next-generation
infrastructure thanks to a bootstrapping service, even without OS
support. To provide sufficient incentives to applications to build in
SCION support, we present a path towards reaching 1 million hosts in
SCION-enabled networks. On the path toward this vision, 12 R&D
institutions on 5 continents are now connected with native SCION
connectivity ("BGP-free"), reaching an estimated 250,000 users and
hosts. We present several applications and use cases available
across these institutions.

Biography
Adrian Perrig is a Professor at the Department of Computer Science at
ETH Zürich, Switzerland, where he leads the network security group. He
is also a Distinguished Fellow at CyLab, and an Adjunct Professor of
Electrical and Computer Engineering at Carnegie Mellon University.
From 2002 to 2012, he was a Professor of Electrical and Computer
Engineering, Engineering and Public Policy, and Computer Science
(courtesy) at Carnegie Mellon University. From 2007 to 2012, he served
as the technical director for Carnegie Mellon's Cybersecurity
Laboratory (CyLab). He earned his MS and PhD degrees in Computer
Science from Carnegie Mellon University, and spent three years during
his PhD at the University of California at Berkeley. He received his
BSc degree in Computer Engineering from EPFL. He is a recipient of the
ACM SIGSAC Outstanding Innovation Award. Adrian is an ACM and IEEE
Fellow. Adrian's research revolves around building secure systems --
in particular, his group is working on the SCION secure Internet
architecture.

---
Zoom Link
Panopto Link
DEI Minute:
TIME Wednesday, November 13, 2024 at 3:00 PM - 4:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 18
EVENT DETAILS
Monday / CS Distinguished Lecture
November 18th / 12:00 PM
Hybrid / Mudd 3514

Speaker
Kyros Kutulakos, University of Toronto

Talk Title
The Ultimate Video Camera

Abstract
Over the past decade, advances in image sensor technologies have transformed the 2D and 3D imaging capabilities of our smartphones, cars, robots, drones, and scientific instruments. As these technologies continue to evolve, what new capabilities might they unlock? I will discuss one possible point of convergence---the ultimate video camera---which is enabled by emerging single-photon image sensors and photon-processing algorithms. We will explore the extreme imaging capabilities of this camera within the broader historical context of high-speed and low-light imaging systems, highlighting its potential to capture the physical world in entirely new ways.

Biography
Kyros is a Professor of Computer Science at the University of Toronto and an expert in computational imaging and computer vision. His research over the past decade has focused on combining programmable sensors, light sources, optics and algorithms to create cameras with unique capabilities---from seeing through scatter and looking around corners to capturing surfaces with complex material properties robustly in 3D. He is currently leading efforts to harness the full potential of technologies such as single-photon cameras and programmable-pixel image sensors for applications in extreme computer vision and scientific imaging. Kyros is the recipient of an Alfred P. Sloan Fellowship, an NSF CAREER Award, and eight paper prizes at the computer vision field's top conferences, including the best paper award at ICCV 2023 and CVPR 2019.

Research/Interest Areas: Computer vision, computational imaging
---
Zoom: https://northwestern.zoom.us/j/93072495148
Panopto: https://northwestern.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=7ac0e95d-a3ee-430e-8216-b22501572893
DEI Minute: Land Acknowledgements https://tinyurl.com/cspac-dei-minute
TIME Monday, November 18, 2024 at 12:00 PM - 1:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 20
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establish trust and regulate them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Wednesday, November 20, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 21
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establish trust and regulate them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Thursday, November 21, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 21
EVENT DETAILS
Mix and mingle with fellow CS students and faculty every last Thursday of the month.
TIME Thursday, November 21, 2024 at 9:00 AM - 11:00 AM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 22
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establish trust and regulate them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Friday, November 22, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec 4
EVENT DETAILS
Wednesday / CS Distinguished Lecture
December 4th / 12:00 PM
Hybrid / Mudd 3514

Speaker
Michael Franklin, University of Chicago

Talk Title
Enabling Diverse and Interconnected Data Ecosystems with Lakehouses, Open Formats and AI

Abstract
Data and AI are driving innovation across industry, science and society. At the same time, data management technology has changed dramatically, due to a combination of factors such as cloud-based data lakes, the development of open standards for data formats and catalogs, and the opportunities presented by generative AI and other AI technologies. The result of all of these changes is an increasingly vibrant data ecosystem that has the potential to span traditional information silos and data incompatibilities, enabling people and organizations to leverage all of their data in ways that, until now, simply have not been possible. In this talk I'll survey this new landscape and give an overview of some of the (to my mind) most promising directions for research and innovation in this rapidly advancing area.

Biography
Michael J. Franklin is the Morton D. Hull Distinguished Service Professor of Computer Science and Founding Faculty Co-Director of the Data Science Institute at the University of Chicago. He is also a Founding Advisor and Visiting Researcher at Databricks, Inc. At Chicago he served as Liew Family Chair of Computer Science, overseeing the department's rapid expansion in scope and stature. Previously he was the Thomas M. Siebel Professor of Computer Science at UC Berkeley, where he directed the Algorithms, Machines and People Laboratory (AMPLab) and was part of the original team building Apache Spark, a popular open source data processing framework initiated at the lab. He is a Member of the American Academy of Arts and Sciences and is a Fellow of the ACM and the American Association for the Advancement of Science. He has served on the Board of the Computing Research Association and the NSF CISE Advisory Committee, as well as the ACM Fellows Selection Committee. He received the 2022 ACM SIGMOD Systems Award, and is a two-time recipient of the ACM SIGMOD “Test of Time” award. He holds a Ph.D. in Computer Sciences from the Univ. of Wisconsin (1993).

Research/Interest Areas: Data systems, Data Science, Systems and AI/ML, Data Markets
---
Zoom: TBA
Panopto: TBA
DEI Minute: TBA
TIME Wednesday, December 4, 2024 at 12:00 PM - 1:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec 7
EVENT DETAILS
Fall classes end
TIME Saturday, December 7, 2024
CONTACT Office of the Registrar nu-registrar@northwestern.edu
CALENDAR University Academic Calendar
-
Dec 14
EVENT DETAILS
The ceremony will take place on Saturday, December 14 in Pick-Staiger Concert Hall, 50 Arts Circle Drive.
*No tickets required
TIME Saturday, December 14, 2024 at 4:00 PM - 6:00 PM
LOCATION Pick-Staiger Concert Hall
CONTACT Andi Joppie andi.joppie@northwestern.edu
CALENDAR McCormick School of Engineering and Applied Science