Lecturer, University of California, Berkeley
Contributor, Platypus: The CASTAC Blog
I am interested in understanding the role of computing, data, software and algorithms in institutions and workplaces using historical and ethnographic methods. More broadly, I am interested in the relationship between institutions, technology and knowledge production.
Contributions to Platypus: The CASTAC Blog
Editor’s Note: Today, Shreeharsh Kelkar brings us the inaugural post in a new series on Fake News and the Politics of Knowledge. The goal is to tackle the knowledge politics of both so-called “fake news” itself and the discourse that has cropped up around it, from a wide range of theoretical perspectives on media, science, technology, and communication. If you are interested in contributing, please write to firstname.lastname@example.org with a brief proposal.
Donald Trump’s shocking upset of Hillary Clinton in the 2016 US Presidential Election brought into wide prominence issues that heretofore had been debated mostly in intellectual and business circles: the question of “filter bubbles,” of people who refuse to accept facts (scientific or otherwise), and what these mean for liberal democracies and the public sphere. All these concerns have now coalesced around an odd little signifier: “fake news.”
Most CASTAC readers familiar with science and technology studies (STS) have probably had conversations with friends—especially friends who are scientists or engineers—that go something like this: Your friend says that artificial intelligence (AI) is on its way, whether we want it or not. Programs (or robots, take your pick) will be able to do a lot of tasks that, until now, have always needed humans. You argue that it’s not so simple; that what we’re seeing is as much a triumph of re-arranging the world as it is of technological innovation. From your point of view, a world of ubiquitous software is being created, one that draws on contingent, flexible, just-in-time human labor, with pervasive interfaces between humans and programs that make one available to the other immediately. Your comments almost always get misinterpreted as a statement that the programs themselves are not really intelligent. Is that what you believe, your friend asks? How do you explain all those amazing robot videos then? “No, no,” you admit, “I am not saying there’s no technological innovation, but it’s complicated, you know.” Sometimes, at this point, it’s best to end the conversation and move on to other matters.
How can we account for the radical uncertainty of change when we think about the future, but its seeming inevitability when it comes to the past? This is, arguably, the hardest part of doing the history and anthropology of technology. It is also, not surprisingly, the hardest to teach our students. In what follows, I suggest that the experience of watching (and playing) sports might be of help here.