Department Events & Announcements
Events
-
Nov 11
EVENT DETAILS
Seeing Beyond Pixels: Holography's Mission to Craft the Ultimate Visual Experience
As VR and AR continue to emerge as next-generation computing platforms, current display technologies face significant limitations that restrict viewer comfort and hinder broader adoption. A major challenge is the accommodation-vergence conflict, which makes it difficult to create virtual images that convincingly mimic real-world visuals—a benchmark often referred to as the 'Visual Turing Test.' This thesis explores how holographic displays can overcome these fundamental limitations, addressing critical obstacles including the lack of algorithms for accurate 3D scene reconstruction with proper depth cues and motion parallax, trade-offs between field-of-view and viewing angle due to limited space-bandwidth product, and coherent speckle noise that significantly degrades perceptual quality.
During my defense, we will first review the state of the art in holographic display technology, establishing the foundation for our contributions. We then introduce a novel algorithm that enforces accurate reconstruction of 3D content with correct motion parallax and depth cues. Building on this, we propose methods to overcome spatial and angular resolution limitations by introducing novel optical elements, designed using machine learning principles, into the system. We then present two complementary approaches to noise reduction in holographic displays. First, we introduce a novel hardware-software co-design that uses angular diversity in illumination to significantly reduce noise without sacrificing temporal or spatial bandwidth. Second, we demonstrate that similar noise reduction can be achieved through polychromatic illumination, offering an alternative path to high-quality display. Each proposed solution is validated through both rigorous simulations and experimental prototypes, demonstrating practical viability in real-world conditions.
The methods presented in this thesis not only address current limitations in holographic display technology but also provide a foundation for the next wave of research in computational displays and AI-assisted optical system design, where machine learning techniques can further optimize both the optical and computational components of holographic displays. These advances bring holographic displays closer to meeting the demanding requirements of consumer AR/VR applications.
TIME Monday, November 11, 2024 at 11:00 AM - 1:00 PM
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 13
EVENT DETAILS
Wednesday / CS Seminar
November 13th / 3:00 PM
Hybrid / Mudd 3514
Speaker: Adrian Perrig, ETH Zurich
Talk Title: Reaching Escape Velocity for Layer-3 Innovation: Deployability of a Next-generation Internet Architecture
Abstract:
It appears nearly impossible to deploy a new Internet architecture that innovates at Layer 3 of the networking stack, as the obstacles seem insurmountable: billions of deployed devices, legacy network infrastructure with hardware-based packet processing and long replacement cycles, operating systems of sprawling complexity, and a diverse application landscape with millions of developers. As a new Internet architecture seemingly needs support from all of these stakeholders, fundamental innovation at the network layer appears hopeless.
We identify dependency loops as a core barrier to the deployment of a next-generation Internet architecture. We propose to break the dependency loops with a virtuous cycle: the availability of applications using the NGN will result in an increasing amount of traffic, encouraging more NSPs to deploy the NGN, fueling user demand, and inviting more applications to deploy. We postulate that 1 million users with access to NGN connectivity suffice to set the virtuous cycle in motion.
The aim of this talk is to imbue hope for the deployment of a next-generation Internet architecture. With the expanding real-world deployment of the SCION secure network architecture, we show how a next-generation education network can be established and connected to the commercial network. Applications running on hosts in these networks can immediately make use of the next-generation infrastructure thanks to a bootstrapping service, even without OS support. To provide sufficient incentives for applications to build in SCION support, we present a path towards reaching 1 million hosts in SCION-enabled networks. On the path toward this vision, 12 R&D institutions on 5 continents are now connected with native SCION connectivity ("BGP free"), reaching an estimated 250,000 users/hosts. We present several applications and use cases that can be used across these institutions.
Biography:
Adrian Perrig is a Professor at the Department of Computer Science at ETH Zürich, Switzerland, where he leads the network security group. He is also a Distinguished Fellow at CyLab, and an Adjunct Professor of Electrical and Computer Engineering at Carnegie Mellon University. From 2002 to 2012, he was a Professor of Electrical and Computer Engineering, Engineering and Public Policy, and Computer Science (courtesy) at Carnegie Mellon University. From 2007 to 2012, he served as the technical director for Carnegie Mellon's Cybersecurity Laboratory (CyLab). He earned his MS and PhD degrees in Computer Science from Carnegie Mellon University, and spent three years during his PhD at the University of California at Berkeley. He received his BSc degree in Computer Engineering from EPFL. He is a recipient of the ACM SIGSAC Outstanding Innovation Award. Adrian is an ACM and IEEE Fellow. Adrian's research revolves around building secure systems; in particular, his group is working on the SCION secure Internet architecture.
---
Zoom Link
Panopto Link
DEI Minute:
TIME Wednesday, November 13, 2024 at 3:00 PM - 4:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 18
EVENT DETAILS
EnCORE and IDEAL TRIPODS Institutes Collaboration
Monday, November 18 and Tuesday, November 19, 2024
The NSF TRIPODS Institutes—The Institute for Emerging CORE Methods for Data Science (EnCORE) at UCSD and The Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) at Northwestern University—are co-hosting a workshop titled "Foundations of Fairness and Accountability." The event will take place November 18-19, 2024, in a hybrid format at the Department of Computer Science at Northwestern University in Evanston, Illinois. We plan to follow up with a second workshop in early spring at the EnCORE institute at the University of California San Diego.
The workshop will feature a blend of talks and interactive discussions, focusing on key topics related to fairness and accountability. The main areas of exploration include:
1. Fairness in Resource Allocation
2. Fairness in Clustering & Ranking
3. Fairness in Prediction
4. Socio-Technical Aspects of Fairness & Accountability
5. Accountability in Experiment Design, with an emphasis on Replicability
6. Applications of these concepts across fields like Law, AI, and Biology
TIME Monday, November 18, 2024 at 9:00 AM - 4:30 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 18
EVENT DETAILS
Monday / CS Distinguished Lecture
November 18th / 12:00 PM
Hybrid / Mudd 3514
Speaker: Kyros Kutulakos, University of Toronto
Talk Title: The Ultimate Video Camera
Abstract:
Over the past decade, advances in image sensor technologies have transformed the 2D and 3D imaging capabilities of our smartphones, cars, robots, drones, and scientific instruments. As these technologies continue to evolve, what new capabilities might they unlock? I will discuss one possible point of convergence---the ultimate video camera---which is enabled by emerging single-photon image sensors and photon-processing algorithms. We will explore the extreme imaging capabilities of this camera within the broader historical context of high-speed and low-light imaging systems, highlighting its potential to capture the physical world in entirely new ways.
Biography:
Kyros is a Professor of Computer Science at the University of Toronto and an expert in computational imaging and computer vision. His research over the past decade has focused on combining programmable sensors, light sources, optics and algorithms to create cameras with unique capabilities---from seeing through scatter and looking around corners to capturing surfaces with complex material properties robustly in 3D. He is currently leading efforts to harness the full potential of technologies such as single-photon cameras and programmable-pixel image sensors for applications in extreme computer vision and scientific imaging. Kyros is the recipient of an Alfred P. Sloan Fellowship, an NSF CAREER Award, and eight paper prizes at the computer vision field's top conferences, including the best paper award at ICCV 2023 and CVPR 2019.
Research/Interest Areas: Computer vision, computational imaging
---
Zoom: TBA
Panopto: TBA
DEI Minute: TBA
TIME Monday, November 18, 2024 at 12:00 PM - 1:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 20
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Wednesday, November 20, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 21
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Thursday, November 21, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 21
EVENT DETAILS
Mix and mingle with fellow CS students and faculty every last Thursday of the month.
TIME Thursday, November 21, 2024 at 9:00 AM - 11:00 AM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov 22
EVENT DETAILS
Description
To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is through augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop as part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, which will span 3 days across 3 IDEAL campuses.
TIME Friday, November 22, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec 4
EVENT DETAILS
Wednesday / CS Distinguished Lecture
December 4th / 12:00 PM
Hybrid / Mudd 3514
Speaker: Michael Franklin, University of Chicago
Talk Title: Enabling Diverse and Interconnected Data Ecosystems with Lakehouses, Open Formats and AI
Abstract:
Data and AI are driving innovation across industry, science and society. At the same time, data management technology has changed dramatically, due to a combination of factors such as cloud-based data lakes, the development of open standards for data formats and catalogs, and the opportunities presented by generative AI and other AI technologies. The result of all of these changes is an increasingly vibrant data ecosystem that has the potential to span traditional information silos and data incompatibilities, enabling people and organizations to leverage all of their data in ways that until now simply have not been possible. In this talk I'll survey this new landscape and give an overview of some of the (to my mind) most promising directions for research and innovation in this rapidly advancing area.
Biography:
Michael J. Franklin is the Morton D. Hull Distinguished Service Professor of Computer Science and Founding Faculty Co-Director of the Data Science Institute at the University of Chicago. He is also a Founding Advisor and Visiting Researcher at Databricks, Inc. At Chicago he served as Liew Family Chair of Computer Science, overseeing the department's rapid expansion in scope and stature. Previously he was the Thomas M. Siebel Professor of Computer Science at UC Berkeley, where he directed the Algorithms, Machines and People Laboratory (AMPLab) and was part of the original team building Apache Spark, a popular open source data processing framework initiated at the lab. He is a Member of the American Academy of Arts and Sciences and is a Fellow of the ACM and the American Association for the Advancement of Science. He has served on the Board of the Computing Research Association and the NSF CISE Advisory Committee, as well as the ACM Fellows Selection Committee. He received the 2022 ACM SIGMOD Systems Award and is a two-time recipient of the ACM SIGMOD "Test of Time" award. He holds a Ph.D. in Computer Sciences from the Univ. of Wisconsin (1993).
Research/Interest Areas: Data systems, Data Science, Systems and AI/ML, Data Markets
---
Zoom: TBA
Panopto: TBA
DEI Minute: TBA
TIME Wednesday, December 4, 2024 at 12:00 PM - 1:00 PM
LOCATION 3514, Mudd Hall (formerly Seeley G. Mudd Library)
CONTACT Wynante R Charles wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec 7
EVENT DETAILS
Fall classes end
TIME Saturday, December 7, 2024
CONTACT Office of the Registrar nu-registrar@northwestern.edu
CALENDAR University Academic Calendar
-
Dec 14
EVENT DETAILS
The ceremony will take place on Saturday, December 14 in Pick-Staiger Concert Hall, 50 Arts Circle Drive.
*No tickets required
TIME Saturday, December 14, 2024 at 4:00 PM - 6:00 PM
LOCATION Pick-Staiger Concert Hall
CONTACT Andi Joppie andi.joppie@northwestern.edu
CALENDAR McCormick School of Engineering and Applied Science