A lot has been said about artificial intelligence and the possible benefits and dangers it presents to society. Some of these discussions have shown the nuance needed to truly dissect what is at stake, particularly the discussions around humanity and what it means to be human. I have spent the better part of the last year forming my opinions on the topic, so I figured I would add my own two cents, centered on Silicon Valley's view of what it means to be human.
Silicon Valley's quest to build computers that are as intelligent as, if not more intelligent than, humans is an open secret, and to achieve this they need our help. These next-generation intelligent systems must be trained on billions of pieces of human-generated data in order to reach the holy grail: autonomous agents capable of reasoning. Our help, in this context, takes the form of the pictures, text, audio, and video we generate in our day-to-day digital lives, both what we post on their closed platforms and what we put on the open web. Today, the main mode of interfacing with these intelligent systems is the chatbot, and the common slogan we are asked to internalize when using these chatbots is to "talk to it like a human."
I must admit that the first few times I heard this phrase, it didn't really register; it seemed to me a mixture of product and marketing speak. But the more I was exposed to it, the more a certain uneasiness grew in the back of my mind, and not long after I realized I was experiencing cognitive dissonance: trying to reconcile the human Silicon Valley wants me to be while interacting with this chatbot with the human they view me as. This uneasiness only grew as the limitations these chatbots suffer from started to become apparent.
Human-to-human conversations, while complex, carry simple expectations of agency. When we speak to others, we expect them to respond. Regardless of the type of conversation, there is a minimum level of agency we expect from them: that they can take in what we say, freely form their own thoughts, and respond. In order to talk to a chatbot like a human, this expectation must also hold. We must believe that the chatbot has the minimum level of agency we require when we speak to other humans. And just as we change the way we speak when talking to a child versus a full-grown adult, we can expect to do the same when speaking to a more powerful chatbot versus a weaker one. I think it is safe to say that Silicon Valley does not want me to speak to their chatbot like a child; they want me to speak to it like I would a full-grown adult with agency. To trust it and to be open with it. So the expectation on my end is clear, but then I must ask: am I afforded the same agency by Silicon Valley in my day-to-day use of their products? The answer is no.
AI chatbots are the natural successors to the algorithmic, attention-powered feeds that have dominated the internet landscape for the past decade, and if we are to understand them, it is only right to take into account the technologies that allowed them to come about. We can inspect these feeds through multiple lenses, but the aspect I think is crucial given the topic at hand is their relationship with user agency. Fundamentally, these feeds exist to take away the need for most, if not all, work on the user's end. In an ideal scenario, an algorithmic feed would only show you things that you enjoy, but we live in a reality far from this ideal. Even if we ignore the fact that most users were forcibly migrated from curating their own feeds to having an algorithm decide for them, these feeds consistently fall short of their expressed goals. Not only do they tend to blatantly disregard user feedback and controls, they also do a poor job of showing you things you enjoy, choosing instead to optimize for whatever grabs and keeps your attention for as long as possible. So given what we know about algorithmic, attention-based feeds, can we reasonably expect their successor not to suffer from the same types of issues, at least with respect to user agency, especially when it wasn't designed to address these shortcomings? Personally, I don't think so.
Moving on, if we examine the major internet platforms Silicon Valley has to offer, searching for bastions of user agency is akin to trying to find a needle in a haystack. It is common knowledge that the more a profit-focused internet platform grows, the more it needs to extract value from its users. And one surefire way to achieve this is to take away as much agency from them as you can get away with. You'll seldom find a sizable internet platform that doesn't default to a "we own every single thing you post on our platform and we reserve the right to do whatever we want with it" policy, and this same type of policy now extends to these AI chatbots. Even after laying claim to the data you provide, not only is that data monetized endlessly without any financial remuneration to you, but the means to remove or transfer it is made purposefully opaque, if it exists at all.
On the topic of monetization, the way most of these platforms make their profit is through advertising, and it is through this medium that the lack of agency shines brightest. Online advertising in its current form has been the longest-running invasion of privacy in modern history. We can have a discussion about the practices and technologies that allowed the industry to develop this way, from third-party cookies to fine-grained targeting, but what isn't lost on me is the utter lack of agency given to users like myself when these systems were built and codified. That your browsing history was fair game for anyone willing to pay a nominal fee tells you what the true motivations are. The last few years have seen a much-needed clawback by regulators and the broader public, leading to consequential rules on how online advertising should work, but the underlying motivations have not changed, and one cannot be optimistic that this state of affairs will not carry over, in some form or another, into the near-term world of AI chatbots.
So what does it mean to talk to an AI chatbot like the human Silicon Valley views me as? A few things come to mind: be exploitative, disregard it, be occasionally difficult, and distrust it. But then you have to ask whether the experience is worth it if this is how you interact with the technology. You are better off avoiding it altogether. If, however, you can view it as a tool that can be effective under the right conditions, then interact with it as such. Just a tool and only a tool; anything more and you leave yourself open to an unequally yoked relationship, with little known recourse for remedy as of today.