Cathy O’Neil, author of Weapons of Math Destruction and one of the leading public intellectuals on bias in, and discrimination by, algorithms, recently published an op-ed in The New York Times. The controversial piece lamented the lack of academic attention to educating lawmakers and holding tech companies responsible for biased systems. She writes, “academics have been asleep at the wheel, leaving the responsibility for this education to well-paid lobbyists and employees who’ve abandoned the academy.” She bemoaned the lack of a “distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.”

O’Neil is right about the need to educate lawmakers on bias in algorithms, and the need to work with tech companies to mitigate and prevent bias. But, as a storm of tweets and Facebook posts from academics revealed, she misrepresented the problem and discounted whole fields of research in the process.

As co-investigators on a large grant from the US National Science Foundation to investigate the ethics of pervasive data and algorithms, we would know. We have four years of federal funding, shared across six universities, to do precisely the kind of research O’Neil calls for. What’s more, a number of us describe our fields as information studies, science and technology studies (STS), human-computer interaction (HCI), and sociotechnical research, all of which draw on decades of research in understanding, critiquing, and improving the role of technology in society.

There are challenges that O’Neil accurately identifies. It can be tricky for academics to work closely with industry partners for reasons ranging from lack of access to lack of funding to conflict of interest. And it can be hard to distill the nuanced findings of technology studies — which often read as “it’s complicated” — into actionable design advice. Hard work is needed by both academics and technologists to bridge this divide.

In follow-up posts on Twitter (for example, here and here), O’Neil conceded that there is a lot of work being done in this space, and encouraged academics to unite to have a louder voice and to communicate in ways that policymakers can understand. However, she misses some important reasons that academic voices just aren’t that loud in today’s tech debates. Many researchers doing this work come from hybrid disciplines, none of which have the name recognition of sociology or computer science (try telling people at a party that you’re a professor of information). In addition, research — particularly social research — is slow. It may take several years to move from developing a research question to publishing results that answer it. We promise that answers to many of O’Neil’s research challenges are underway at universities across (and beyond!) the U.S. A third issue (and yes, this is on us) is that the rewards for “public academics,” especially before tenure, are mixed. As a result, we often publish findings in places where lawmakers and tech companies may not find them — through no fault of their own.

But a bigger, deeper issue is that exactly the kind of research that O’Neil calls for — work that is technical and social in nature, and inquiry that is critical of particular moves in technology development — is under attack. Research in queer theory, race and privilege, and gender studies is exactly what is needed to advance fairness in algorithms. But this work, and the many scholars from underrepresented groups who have brought attention to these problems, have a long history of marginalization both within the academy and without. Findings recommending data protection, fairness, and restraint are often accused of holding up progress, innovation, or even new knowledge. Social research broadly is already a tiny portion of US research funding, and proposed budgets would shrink that even more.

But none of this means academics aren’t trying. Indeed, some of the very solutions O’Neil advocates, including comprehensive ethical education for future engineers and data scientists, are well underway in Information Schools, computer science programs, and statistics departments across the country. Undergraduate and graduate programs in each of our home institutions absolutely worry about “how the big data pie gets made,” to use O’Neil’s words.

And today’s students care, too. Partly because of the public awareness raised by O’Neil’s book (and books like it, and RadioLab podcasts, and Medium posts, and threads on Reddit, and…), students in our classrooms are eagerly discussing biased algorithms, big data surveillance, and tech ethics. We may not be steering the ship of technical progress, but our students will be. And they’re getting exactly the education they need to make the next generation of tech progress an ethics-driven one.

The PERVADE team:

Katie Shilton, University of Maryland — College Park: College of Information Studies

Michael Zimmer, University of Wisconsin — Milwaukee: School of Information Studies

Casey Fiesler, University of Colorado — Boulder: Department of Information Science

Arvind Narayanan, Princeton University: Computer Science Department

Jake Metcalf, Data & Society

Matthew Bietz, University of California — Irvine: Department of Informatics

Jessica Vitak, University of Maryland — College Park: College of Information Studies
