Someone wrote to me responding to my view that the capitalistic intent behind AI companies will send them down the same path of monopolisation as some previous information technologies, like social media.
Something about the response was off, so I pasted it into GPT Zero to find if it was AI-generated. Turns out, it probably was, partially.
I know that such scans are somewhat iffy, but I was going by instinct. Before the scan told me it was probably AI-generated, I felt it myself.
Not that the argument itself was without merit, but the fact that it was written using AI forms an important part of the context of this letter. I’ll quote my correspondent and respond to the points they were making below.
The first point they made was:
I respectfully disagree with the view that AI will follow the same path of monopolization as past technologies. While historical technological revolutions often consolidated power through capital requirements, AI presents a fundamentally different dynamic.
I think it is pretty evident at this point that AI is following the exact same path as far as consolidation of power is concerned. OpenAI is no longer open source and is in fact very much profit-oriented. ChatGPT is being indiscriminately integrated into every toaster, towel, and toilet. Massive amounts of money are being thrown into PR for AI as creative and non-creative industries alike oppose human workers being replaced with machines.
This has all happened before. Nothing new to see here. This is not a fundamentally new dynamic. In fact, what is being referred to as AI is the result of a massive theft of intellectual property carried out all over the internet. That’s the only dynamic here that is “fundamentally different”.
Their next point was:
The revolutionary aspect of AI lies in its democratization of intelligence. By drastically reducing the cost of accessing high-level cognitive capabilities, AI creates an unprecedented leveling effect. An individual of average technical skill but extraordinary vision can now leverage AI to build sophisticated applications, launch companies, and make complex strategic decisions at minimal cost. I speak from direct experience having implemented this approach successfully in my own work, despite being someone of modest technical background with ambitious goals.
I thought intelligence was already democratised. The idea that some people do not have access to intelligence is problematic at best. People lack experience, not intelligence. Besides, what AI is providing is not real intelligence. What it is going to do is make us too lazy to do the things we may otherwise have easily done. For example, my correspondent’s comment is quite possibly written using AI. They aren’t expressing themselves. They are letting AI give form to their views.
It is not my case that AI tools are useless. Nor that they are not going to make life easier in many respects. But the fact of the matter is that there is no shortcut to building sophisticated applications, launching companies, and making complex strategic decisions at minimal cost. There is value in learning how to do all of these things. People who only take shortcuts never grow the muscle in their legs that might enable them to travel long distances. Getting to a place quickly and easily has its value, but not if the goal is to keep travelling beyond that point.
A cab can take you from one place to another. But you won’t become a driver as a result of it. If all you care about is getting somewhere, by all means, use AI tools to make life easier. But if you have greater ambitions than that, know that AI can’t and shouldn’t be the future for you. It may very well do the exact opposite of what you are expecting.
Their last point was:
For a compelling perspective on AI's transformative potential, I'd recommend reading "Machines of Loving Grace" by Anthropic CEO Dario Amodei. Try Claude by Anthropic if possible.
I am not a mindless technophobe. I am, in fact, often an early adopter. My opposition to AI hype is not instinctive. I do recognise the potential here and it is for that exact reason that I advocate for responsible and realistic applications of it.
As a fiction writer, I have tried to use AI tools like Claude to write as well as to perform secondary tasks such as worldbuilding. Their utility is at best middling. And when they do make something that may be described as original and compelling, I feel icky as a writer who likes to say he wrote what he wrote.
This ick of mine may seem a subjective reaction, but I mention it for a reason. I come from a creative background. People in my community have been stolen from, insulted, and violated by the companies that built AI even as they try to sell us the tools created using our work. This violation is hard to communicate to someone who doesn’t understand creative engagement. Suffice it to say that if accomplishment gives you some emotional reward, you will probably find it diluted when you use machine learning tools to accomplish something. This is a human need that makes little market sense, but it is something that needs to be part of the AI conversation, especially when it comes to the utility of said tools for creative people.
As for the recommended essay ‘Machines of Loving Grace’ by Anthropic CEO Dario Amodei, my first reaction would be that when it comes to reading about the pros and cons of AI, maybe we should study the views of people who are not CEOs of AI companies. But here is what I think of his essay anyway.
He is optimistic about AI making great advancements possible in medicine and biology. I am as well. AI should definitely be put to use in the pursuit of welfare goals that we are incapable of or really slow at. My problem is with using AI to do things we are quite capable of doing.
Also worth pointing out that not all the health concerns we have right now exist because we don’t have the scientific solutions. They exist because parasitic pharma companies refuse to reduce the price of medicine. They exist because political forces prevent healthcare from reaching everyone who needs it. They exist because massive misinformation campaigns (often aided by AI) are spreading an unhealthy suspicion of medical science in the minds of otherwise reasonable human beings.
The problem isn’t processing power. It’s a lack of will and motivation caused by social power structures. A lot of utopian thought fails to take this into account.
Don’t agree with me? Tell me how AI will ‘solve’ caste discrimination, Islamophobia, and racism. In this, Amodei seems to agree with me even though he sees these things through the lens of economic disparity. I am reminded of dominant-caste Indians reacting to caste-based reservations by saying they should be on an economic basis:
I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity.
Most of the greatest problems that face humankind in the 21st century are social and cultural. Even when the scientific solution exists, it doesn’t get implemented because powerful quarters benefit from the problem.
The only changes we see are changes that filter through the ironically titled ‘free market’. The reason we don’t have universal basic income is not that there isn’t enough money for everyone. It’s because we don’t have a culture of equality and compassion. Instead, we have fandoms devoted to the praise of CEOs who swallow ever larger slices of the pie that, in an ideal world, was meant to feed all of us.
Amodei is no doubt a smart person, but the essay betrays what I often find described as White futurism. He says AI will solve some of our biggest problems, but what he describes as our biggest problems are, in the larger scheme, somewhat low-hanging fruit. They are problems that look huge to the privileged, but they don’t form the bedrock of human discontent in the 21st century. Amodei, for example, talks about the genetic aspect of mental health, but fails to mention that no small part of it is the capitalist pressures that cause people to work three jobs just to survive.
I won’t go too deep into his essay. Mostly because I am not qualified enough to comment on all aspects of it. But I do think that overall, the question before us is not about whether AI is useful or not. It clearly is, and can be in even more situations.
I think the thrust of the AI skepticism I am presenting concerns the cost of AI and whether it is a price worth paying, given what we understand about three things — how this thing came into being, what it is being used for, and what it seems poised to become some day soon.
The answers to the first two questions are with us and they don’t make me feel good. The last answer is yet to come.