News & Events
Department Events & Announcements
Events
-
Nov18
EVENT DETAILS
EnCORE and IDEAL TRIPODS Institutes Collaboration
Monday, November 18 and Tuesday, November 19, 2024
The NSF TRIPODS Institutes—The Institute for Emerging CORE Methods for Data Science (EnCORE) at UC San Diego and The Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) at Northwestern University—are co-hosting a workshop titled "Foundations of Fairness and Accountability." The event will take place November 18-19, 2024, in a hybrid format at the Department of Computer Science at Northwestern University in Evanston, Illinois. A second workshop is planned for early spring at the EnCORE institute at the University of California San Diego.
The workshop will feature a blend of talks and interactive discussions, focusing on key topics related to fairness and accountability. The main areas of exploration include:
1. Fairness in Resource Allocation
2. Fairness in Clustering & Ranking
3. Fairness in Prediction
4. Socio-Technical Aspects of Fairness & Accountability
5. Accountability in Experiment Design, with an emphasis on Replicability
6. Applications of these concepts across fields such as Law, AI, and Biology
TIME Monday, November 18, 2024 at 9:00 AM - 4:30 PM
LOCATION Mudd Hall, Room 3514 (formerly Seeley G. Mudd Library)
CONTACT Indira Munoz, indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov18
EVENT DETAILS
Monday / CS Distinguished Lecture
November 18th / 12:00 PM
Hybrid / Mudd 3514
Speaker
Kyros Kutulakos, University of Toronto
Talk Title
The Ultimate Video Camera
Abstract
Over the past decade, advances in image sensor technologies have transformed the 2D and 3D imaging capabilities of our smartphones, cars, robots, drones, and scientific instruments. As these technologies continue to evolve, what new capabilities might they unlock? I will discuss one possible point of convergence, the ultimate video camera, which is enabled by emerging single-photon image sensors and photon-processing algorithms. We will explore the extreme imaging capabilities of this camera within the broader historical context of high-speed and low-light imaging systems, highlighting its potential to capture the physical world in entirely new ways.
Biography
Kyros is a Professor of Computer Science at the University of Toronto and an expert in computational imaging and computer vision. His research over the past decade has focused on combining programmable sensors, light sources, optics, and algorithms to create cameras with unique capabilities, from seeing through scatter and looking around corners to robustly capturing surfaces with complex material properties in 3D. He is currently leading efforts to harness the full potential of technologies such as single-photon cameras and programmable-pixel image sensors for applications in extreme computer vision and scientific imaging. Kyros is the recipient of an Alfred P. Sloan Fellowship, an NSF CAREER Award, and eight paper prizes at the computer vision field's top conferences, including best paper awards at ICCV 2023 and CVPR 2019.
Research/Interest Areas: Computer vision, computational imaging
---
Zoom: https://northwestern.zoom.us/j/93072495148
Panopto: https://northwestern.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=7ac0e95d-a3ee-430e-8216-b22501572893
DEI Minute: Land Acknowledgements https://tinyurl.com/cspac-dei-minute
TIME Monday, November 18, 2024 at 12:00 PM - 1:00 PM
LOCATION Mudd Hall, Room 3514 (formerly Seeley G. Mudd Library)
CONTACT Wynante R. Charles, wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov20
EVENT DETAILS
Description
To forge healthy and productive human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage, both to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is by augmenting human capabilities rather than replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into people's lives and livelihoods, insight into how they operate and what they know becomes crucial for establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a three-part workshop, part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, spanning three days across three IDEAL campuses.
TIME Wednesday, November 20, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz, indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov21
EVENT DETAILS
Description
To forge healthy and productive human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage, both to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is by augmenting human capabilities rather than replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into people's lives and livelihoods, insight into how they operate and what they know becomes crucial for establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a three-part workshop, part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, spanning three days across three IDEAL campuses.
TIME Thursday, November 21, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz, indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov21
EVENT DETAILS
Mix and mingle with fellow CS students and faculty every last Thursday of the month.
TIME Thursday, November 21, 2024 at 9:00 AM - 11:00 AM
LOCATION Mudd Hall, Room 3514 (formerly Seeley G. Mudd Library)
CONTACT Wynante R. Charles, wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Nov22
EVENT DETAILS
Description
To forge healthy and productive human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage, both to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is by augmenting human capabilities rather than replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into people's lives and livelihoods, insight into how they operate and what they know becomes crucial for establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a three-part workshop, part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, spanning three days across three IDEAL campuses.
TIME Friday, November 22, 2024 at 8:30 AM - 5:00 PM
CONTACT Indira Munoz, indira.munoz@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec4
EVENT DETAILS
Wednesday / CS Distinguished Lecture
December 4th / 12:00 PM
Hybrid / Mudd 3514
Speaker
Michael Franklin, University of Chicago
Talk Title
Enabling Diverse and Interconnected Data Ecosystems with Lakehouses, Open Formats and AI
Abstract
Data and AI are driving innovation across industry, science, and society. At the same time, data management technology has changed dramatically, due to a combination of factors such as cloud-based data lakes, the development of open standards for data formats and catalogs, and the opportunities presented by generative AI and other AI technologies. The result of all of these changes is an increasingly vibrant data ecosystem that has the potential to span traditional information silos and data incompatibilities, enabling people and organizations to leverage all of their data in ways that until now simply have not been possible. In this talk I'll survey this new landscape and give an overview of some of the (to my mind) most promising directions for research and innovation in this rapidly advancing area.
Biography
Michael J. Franklin is the Morton D. Hull Distinguished Service Professor of Computer Science and Founding Faculty Co-Director of the Data Science Institute at the University of Chicago. He is also a Founding Advisor and Visiting Researcher at Databricks, Inc. At Chicago he served as Liew Family Chair of Computer Science, overseeing the department's rapid expansion in scope and stature. Previously he was the Thomas M. Siebel Professor of Computer Science at UC Berkeley, where he directed the Algorithms, Machines and People Laboratory (AMPLab) and was part of the original team building Apache Spark, a popular open-source data processing framework initiated at the lab. He is a Member of the American Academy of Arts and Sciences and a Fellow of the ACM and the American Association for the Advancement of Science. He has served on the Board of the Computing Research Association and the NSF CISE Advisory Committee, as well as the ACM Fellows Selection Committee. He received the 2022 ACM SIGMOD Systems Award and is a two-time recipient of the ACM SIGMOD "Test of Time" award. He holds a Ph.D. in Computer Sciences from the University of Wisconsin (1993).
Research/Interest Areas: Data systems, Data Science, Systems and AI/ML, Data Markets
---
Zoom: TBA
Panopto: TBA
DEI Minute: TBA
TIME Wednesday, December 4, 2024 at 12:00 PM - 1:00 PM
LOCATION Mudd Hall, Room 3514 (formerly Seeley G. Mudd Library)
CONTACT Wynante R. Charles, wynante.charles@northwestern.edu
CALENDAR Department of Computer Science (CS)
-
Dec7
EVENT DETAILS
Fall classes end
TIME Saturday, December 7, 2024
CONTACT Office of the Registrar, nu-registrar@northwestern.edu
CALENDAR University Academic Calendar
-
Dec14
EVENT DETAILS
The ceremony will take place on Saturday, December 14 in Pick-Staiger Concert Hall, 50 Arts Circle Drive.
No tickets are required.
TIME Saturday, December 14, 2024 at 4:00 PM - 6:00 PM
LOCATION Pick-Staiger Concert Hall
CONTACT Andi Joppie, andi.joppie@northwestern.edu
CALENDAR McCormick School of Engineering and Applied Science