




Vauhini Vara was not asked the following question, nor did the author provide the accompanying answer:
“Q: Vauhini, your new book ‘Searches: Selfhood in the Digital Age,’ takes an intriguing look at how our sense of identity is shaped by digital technologies. What inspired you to write this book?
“Vauhini Vara: I’ve been fascinated by the intersection of technology and identity for a long time. As we use the internet, we leave behind these digital traces — search history, social media posts, data footprints — that create a version of ourselves that might not entirely match our lived experience. These technologies influence the way we view ourselves and how others perceive us. Writing ‘Searches’ was a way for me to explore how these tools affect our sense of self, both in subtle and profound ways. I wanted to understand how the algorithms and platforms that dominate our digital lives could be changing who we are.”
Welcome to our brave new AI world.
In 2021, Vara published an essay about her sister’s death that went viral; in the piece, she fed prompts to AI and published its made-up (and occasionally unhinged) contributions. In her new book, Vara explores her identity even further in the digital age: There’s a decade’s worth of her online searches, a Google translation of an essay she tried writing in Spanish, and a look at whether we’re morally complicit when we use social media or buy from Amazon. And it also contains chapters “written” by AI commenting on and analyzing her writing.
So in the spirit of her work, this reporter went to ChatGPT and typed in: “Write an intro and a Q&A with author Vauhini Vara about her new book, ‘Searches: Selfhood in the Digital Age.’” Over two attempts, it concocted a short intro full of big words and big ideas, but with little of this reporter or of Vara, a pattern it replicated in the Q&A. On the second attempt it tried one personal touch, adding that it “had the opportunity to sit down with Vara,” which was untrue. For all its attempts at feeling real, AI seems to lack the ability to come up with questions or answers that bear the genuine imprint of human experience.
The interview that follows, which was done over video between two humans, has been edited for length and clarity.
Q: Four years on from your essay, would AI do a better job of writing about your sister’s death?
A: If I’m sitting down to express something that happened to me, it’s simply impossible for an AI model to express that on my behalf. There are no advancements that would allow that AI model to be me.
But some research has also suggested that the models have gotten less creative, so ChatGPT is not as capable of creating texts that sound creative or original. These models are increasingly being built to be predictable, essentially to serve as customer service chatbots, and predictability doesn’t have much in common with the features of great literature.
But one can imagine a future where these technologies become so cheap that there are viable noncorporate AI models that people might use to generate creative text.
Q: Will the biases you write about — the casual misogyny and racism built into AI by what’s largely a cadre of White male workers — get better or worse as AI feeds on technology?
A: I don’t think technology companies have cracked the code yet on those questions of accuracy and bias. It’s something that they say they’re working on. But it’s concerning. We can imagine a trajectory in which these big technologies become more powerful and more intertwined with powerful government forces — that is a formula for technology to be used against us, for further diminishing of our own agency, our own individualism, our own homegrown community relationships, and for increased dependence on these platforms, whose creators clearly have their own agendas.
Q: You explain that if you Google “best treatment for a specific cancer,” you’ll get a different answer in Colorado than in Barcelona. Most of us just unthinkingly rely on Google to filter our searches, so this seems like a serious concern.
A: These products have a veneer of authority that makes it seem as though there’s an objective truth in what they’re telling us, and that’s really problematic. When we’re scrolling through social media, we believe we’re being shown things that are popular and of interest in some objective sense. When we shop on Amazon, we don’t recognize that these companies are calibrating what their home screens look like specifically to sell things to us. And so our understanding of the world is being shaped by what these companies are giving us in a way that isn’t an accurate representation of the world.
Q: Are your Google searches more intimate and interesting than mine because I’m Gen X and grew up without all this technology?
A: I was in middle school when I started using the internet, and that’s when we start developing our identities. So I wonder if that leads people of my generation to see the boundary between how we engage with the non-online world and the kinds of things we ask Google or review on Amazon or talk about on social media as more porous than people of previous generations do. The generations that came after mine are so online that they might even contest the notion that there is a boundary. It’s a very old-fashioned notion to say that there’s a difference between the real world, as we sometimes say, and the online world.
One of the challenges of being human, and also of writing this book, is that it’s probably impossible to disentangle our sense of our own identities, our own communities and the world around us from the understanding of those things that we get through using the internet, which is filtered to us through the products of big technology companies.
Q: You write about how modern communication is owned and manipulated by and for the rich and powerful. Can we change that, or does what’s easier for us always win out over what’s better for us?
A: One reason I write about my own complicity in using this technology, and by extension our complicity as communities, is that it’s a prerequisite to having space to ask what other choices we could be making. If we’re using these companies’ products because they’re essentially forcing us to and we have no other choice, then a future dominated by technological capitalism is inevitable. The implicit argument in my acknowledgement of my agency here, in saying I’m a villain, too, is that we can make other choices.
There’s narrative tension in wondering whether I’ll continue to use these products or divest myself from them all. I’m still using them at the end, but that’s not the end of the story. I’m trying to open up the question more broadly because I haven’t figured out the answer. I’m putting the question to all of us to say, “Now what can I do?”
Q: You end with a survey of women that asks numerous questions, concluding by asking what technology they would invent to create a better way of being human on Earth, and what our world would be like after that creation.
A: I was really gratified to see how original and beautiful the responses were. They were a testament to people’s humanity, and they expressed that more clearly and evocatively than I could have in exposition on the page.
Q: Some of the answers are hopeful, others are skeptical, but you end with one that’s just two words: “That depends.”
A: I don’t think of that as a wishy-washy stance. I think of it as encapsulating the book’s central idea, which has to do with our agency and choice. If something depends, what does it depend on? It depends on us, on what we choose to do.