The history of surveillance in the United States is a long one. Our guest for the podcast on March 31, 2021, Dr. Chris Gilliard, studies this very history; Dr. Gilliard’s scholarship focuses on digital privacy, institutional tech policy, surveillance capitalism, and digital redlining—a term he defined on the podcast as “the creation and maintenance of tech practices, policies, pedagogies, and investment decisions that enforce class boundaries and discriminate against marginalized groups.” As many of our Seminar guests have attested, access to and relationships with contemporary digital technologies fall along racial, gendered, and classed lines, and the Internet—and the tools we use to access it—is made overwhelmingly by and for wealthy, straight white men in urban environments. And as Dr. Gilliard points out, access to the Internet is not the only thing historically minoritized groups are robbed of; these groups are also overwhelmingly stripped of their autonomy and privacy online.
Although worries about CCTV and post-Patriot Act wiretapping seem distinctly twenty-first century, eminent scholars have recently illustrated how the very foundation of our nation, including its formation of racial and class differences, depended on the institution of surveillance. In her groundbreaking Dark Matters: On the Surveillance of Blackness, Simone Browne makes clear the connections between “the Panopticon, captivity, the slave ship, plantation slavery, racism, and the contemporary carceral practices of the U.S. prison system,” illustrating how contemporary surveillance technologies of all kinds have been formed and informed by the U.S.’s methods of policing and categorizing Black life under slavery (Browne 43). This is evident throughout contemporary American life: police cameras disproportionately target and surveil predominantly nonwhite areas of our major cities, facial recognition software used to identify criminal suspects has been shown to be significantly biased against Black women, and universities increasingly rely on predictive algorithms proven to be racially biased when making decisions about student admissions and funding. And while the racist motivations behind the entire enterprise of surveillance in the U.S. should never be understated, we should be aware that it impacts everyone. For example: in 2019, a Detroit Free Press report confirmed that the Michigan State Police’s photo database contains photos of nearly every Michigan resident, regardless of whether they have ever faced criminal charges; although only about 8 million adults lived in Michigan at the time, the MSP’s database holds a staggering 50 million images.
And Dr. Gilliard wants us to remember that none of this is natural or inevitable. He has written about the long-running debates over Facebook’s well-known “mistakes,” as Facebook itself characterizes them: promoting divisiveness, amplifying hate groups, enabling discriminatory hiring practices based on age, race, and ethnicity, and aiding far-right political campaigns. These “mistakes,” in which Facebook preys on its users’ personal data, make up much of the contemporary social media landscape—and if Facebook had its way, this would likely go unchanged. But it is not social media’s destiny to function this way. Today, we are making our way into yet another wave of the Covid-19 pandemic, awaiting a verdict in the trial of George Floyd’s murderer, and living in a nation where hate crimes against Asians and Asian Americans are surging. This is a profoundly difficult and, indeed, revolutionary moment in our nation’s long history—and a moment during which we must reassess so much of what might seem completely normal or otherwise inevitable about living in America in 2021, including the place and power of technology in our daily lives. On the podcast, we talked about how classrooms and pedagogical interventions are essential tools for rethinking how we want technology to work for us.
Dr. Gilliard shared that in the broad cultural consciousness, students and young people are often imagined as indifferent to their own digital privacy, carelessly offering up intimate details of their personal lives to strangers online, or, alternately, as perfectly content with how much time they spend in the mediated worlds of social media and educational learning management systems (LMS). This attitude has real-life consequences: as colleges and universities increasingly lean on Ed Tech software to monitor student “performance” and surveil student “experiences,” students’ rights to privacy and transparency are frequently violated, with the same old refrain that “they don’t really care” tacked on as a rationale. But students themselves tell a very different story. In study after study, year after year, college-aged young people report that they care a great deal about their privacy and that they are not all that thrilled about spending so much time online; a 2020 report confirmed that a year of online-only, remote learning has taken a heavy toll on student mental health. Talking to students is only the start of untangling this web of issues, but it is a necessary start. Even as students are aware of and protective of their privacy online, they are often shocked to learn just how significantly education and non-education software alike are capable of violating that privacy, and for how many hours of the day they are being surveilled.
So when we talk about race, class, gender, and inequity in this country, we should also be talking about privacy and surveillance—and about what Dr. Gilliard has defined as digital redlining. Imagining a world where these deeply flawed infrastructures are no longer so flawed is difficult work—but it is work made easier and more meaningful by scholars like Dr. Gilliard. Our Seminar group has a lot more to say about Ed Tech, privacy, and information access—so stay tuned.