A Conversation on Ethics and AI
MSAI Assistant Professor Sarah Van Wart reflects on teaching about ethics in AI and the role ethics should play in the development of digital applications.
By Sarah Van Wart, MSAI assistant professor
When people talk about artificial intelligence (AI), there is a tendency to think of it as something autonomous and separate from the people who make it. However, AI systems are human-made: designed by people, powered by particular datasets, and directed toward particular goals. As such, value judgments — meaning potential sources of bias — exist throughout the AI pipeline, from the training data, to the selection of features, to the handling of false negatives and positives, to the applications of automated prediction and classification.
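To make this concrete, here is a minimal sketch in Python, with entirely made-up numbers, of how a single, seemingly technical decision (the classification threshold) encodes a judgment about which kind of error is more acceptable:

```python
# A minimal sketch with hypothetical numbers: the classification threshold is a
# design decision that trades one kind of error against another. Nothing here
# refers to a real system; the scores and labels are purely illustrative.

# Hypothetical risk scores from a model, and what actually happened (1 = repaid).
scores = [0.91, 0.72, 0.65, 0.40, 0.35, 0.20]
labels = [1,    1,    0,    1,    0,    0]

def error_rates(threshold):
    """Count false positives and false negatives at a given cutoff."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Raising the threshold approves fewer bad risks (false positives) but turns
# away more people who would have repaid (false negatives). Deciding which
# error matters more is a value judgment, not a purely technical question.
for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```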
By thinking of AI as a combination of data-intensive techno-mathematical theories, techniques, heuristics, and design decisions, we can ground an examination of ethics in the specific practices and techniques that AI practitioners, data scientists, and engineers actually use. This also includes a consideration of the limits and vulnerabilities of AI systems in particular contexts, as well as the possibility that AI systems are inappropriate for some contexts altogether, at least in their current form.
For Northwestern’s Master of Science in Artificial Intelligence (MSAI) program, I teach a course called Human-Computer Interaction (HCI). In HCI, I teach students how to design accessible, intuitive interfaces. To do this, they conduct user research, which involves going out, talking to people, and trying to understand their problems and challenges. Students then spend the remainder of the quarter building and refining web applications that address the needs of their users while adhering to some core design principles.
These design methods play an important role in producing useful, functional application interfaces. That said, apps like the ones prototyped in the class are ultimately released into the “real world,” where they mediate our lives, from the information we see to the choices we are able to make and the opportunities we have. Given this, it is also important for students to think at a higher level beyond what can be empirically tested in the lab, and ask questions about how their app might interact with broader social, cultural, and political realities. As Langdon Winner points out in his influential 1980 piece, “Do Artifacts Have Politics?”, artifacts and infrastructures reflect human values in ways that produce winners and losers, and are difficult to change once they’ve become embedded in our everyday lives. Therefore, helping students think about the values they’re embedding in their designs, the potential impacts of these choices, and the potential durability of their creations is a critical part of educating future designers and application developers.
I support these lessons with case studies, reflective writing assignments, and design exercises.
I like case studies because they offer a concrete way to examine how problems emerge through complex interactions between technology and society. Some case studies offer more clear-cut ethical lessons that aren’t particularly controversial, such as a medical support system that encourages doctors to prescribe “sponsored” medications like opioids.
However, most cases are more complicated, and involve competing rights and values that have always been in tension with one another, like weighing the right of free speech against the public health concerns that come from fake news or bullying. In these cases, the task becomes digging into how computer- and data-mediated ecosystems re-ignite age-old debates. For instance, how does the design of recommendation systems powered by AI influence how fake news and hate speech are propagated online? And how does this relate to the underlying business model of many platforms, which rely heavily on advertising revenue? Does this shift the conversation around balancing rights and public health concerns?
Other case studies reveal how problems emerge when the assumptions of designers do not reflect the lived experiences of those being designed for, including computer vision systems that do not recognize Black faces or Snapchat filters that caricature people of East Asian descent.
These cases raise questions around representation in tech:
- Who gets to become a designer or engineer of the infrastructures that we will all be subject to?
- How does this relate to broader societal disparities relating to race, gender, income, and educational access?
I also ask students to complete one reflective writing assignment, in which they analyze the politics of a technology of their choosing. By politics, I mean:
- Who is affected by the technology?
- Who wins and who loses as a result of its creation and use?
- How, specifically, does the technology shape who wins and who loses?
While design is often characterized as an exercise in imagining a better world, Ruha Benjamin argues, in her 2019 book, Race After Technology, that “better” for some often means worse for others. And given that this idea of “better” is always contingent on where you are located within the system, the assignment asks students to think beyond their own immediate interaction with a socio-technical system the next time they hop into their Uber or order something on Amazon. Who are the workers, advertisers, vendors, and customers? Are they all benefiting equally within the new socio-technical arrangement? Have older businesses been displaced, and if so, is this displacement an inevitable part of the “progress” narrative?
Finally, because the HCI course is very focused on prototyping an app, I try to create opportunities for students to make their assumptions — which are often value-laden — explicit, so that they can examine them before building their prototypes. We all make assumptions as we interact with the world, and the less we know about the context we’re designing for, the more likely those assumptions are to be wrong. Given this, I ask students to write their assumptions down and consider how to test them by talking to people, doing research, and trying to gain insight from diverse perspectives. This isn’t a replacement for creating more diverse design ecosystems or doing the work to understand the problem space; however, encouraging some reflexivity is a step in the right direction.
Student reactions to these lessons on the social implications of computing infrastructures have been mixed. On the course evaluations I've received, some students have complained about reading and writing about ethics, which they saw as “busywork” that took time away from learning web programming or UX/UI methods. On the other hand, many students have said they found the exercises valuable and went on to take my Computing, Ethics, and Society course.
My takeaway from these mixed reactions is that I need to keep improving how I organize learning around ethics, so that I can better engage people coming from a wide variety of backgrounds and vantage points. My goal is not to tell people what they should think, but to get them to ask more questions about how design choices shape our computing infrastructures and the world around us – for better and for worse.