Member Profile

Shreeharsh Kelkar

Lecturer, University of California, Berkeley

Contributing Editor, Platypus: The CASTAC Blog

About Shreeharsh

I am interested in understanding the role of computing, data, software and algorithms in institutions and workplaces using historical and ethnographic methods. More broadly, I am interested in the relationship between institutions, technology and knowledge production.

Publications

Articles

Anthropology in and of MOOCs

Rachel Flamenbaum, Manduhai Buyandelger, Greg Downey, Orin Starn, Catalina Laserna, Shreeharsh Kelkar, Carolyn Rouse, Tom Looser (2014) | American Anthropologist 116(4): 829-838 | http://dx.doi.org/10.1111/aman.12143

Contributions to Platypus: The CASTAC Blog

View all of Shreeharsh's posts on Platypus: The CASTAC Blog.

Three Perspectives on “Fake News”

Editor’s Note: Today, Shreeharsh Kelkar brings us the inaugural post in a new series on Fake News and the Politics of Knowledge. The goal is to tackle the knowledge politics of both so-called “fake news” itself and the discourse that has cropped up around it, from a wide range of theoretical perspectives on media, science, technology, and communication. If you are interested in contributing, please write to editor@castac.org with a brief proposal. 

Donald Trump’s shocking upset of Hillary Clinton in the 2016 US Presidential Election brought into wide prominence issues that had heretofore been debated mostly in intellectual and business circles: the question of “filter bubbles,” of people who refuse to accept facts (scientific or otherwise), and what these mean for liberal democracies and the public sphere.  All these concerns have now coalesced around an odd little signifier, “fake news” [1].

How (Not) to Talk about AI

Most CASTAC readers familiar with science and technology studies (STS) have probably had conversations with friends—especially friends who are scientists or engineers—that go something like this:  Your friend says that artificial intelligence (AI) is on its way, whether we want it or not.  Programs (or robots, take your pick) will be able to do many tasks that, until now, have always needed humans.  You argue that it’s not so simple; that what we’re seeing is as much a triumph of re-arranging the world as it is of technological innovation.  From your point of view, a world of ubiquitous software is being created, one that draws on contingent, flexible, just-in-time human labor, with pervasive interfaces between humans and programs that make each immediately available to the other.  Your comments almost always get misinterpreted as a claim that the programs themselves are not really intelligent.  Is that what you believe, your friend asks?  How do you explain all those amazing robot videos, then?  “No, no,” you admit, “I am not saying there’s no technological innovation, but it’s complicated, you know.”  Sometimes, at this point, it’s best to end the conversation and move on to other matters.

Teaching (Non)Technological Determinism: A Theory of Key Points

How can we account for the radical uncertainty of change when we think about the future, but its seeming inevitability when it comes to the past?  This is, arguably, the hardest part of doing the history and anthropology of technology.  It is also, not surprisingly, the hardest to teach our students.  In what follows, I suggest that the experience of watching (and playing) sports might be of help here.