My current book project argues that the internet is straight – more specifically, that the digital content filters mediating an increasingly large portion of our everyday lives operate heteronormatively. The book combines the methodologies I have referred to as ‘media genealogy’ and ‘speculative code studies’ to pry open the black boxes that determine what we see and what we don’t on the internet.
This book examines the inputs to, the outputs of, and the algorithms themselves that determine what content is ‘safe for work’. I analyze the manosphere and show the correlations between its misogynistic dispositions and the worldviews that predominate in Silicon Valley. I also examine the datasets on which computer vision algorithms are trained and show that they embed heteronormative biases. Together, these influences produce demonstrably biased algorithms.
Another key component of the book traces the impact that the outputs of these algorithms have on contemporary culture. In particular, I show how adolescents are increasingly denied access to sex education materials, especially those useful for LGBTIQ+ identity formation. I also show that this heteronormativity shapes porn itself, which, contrary to popular arguments, is often rote and banal, thus stunting our collective sexual imagination.