Vimoh has been talking to people about their beliefs for years. He needed a place to talk about his own beliefs. Vimoh IRL is a show about the intersection of tech and society, and about making meaning in a seemingly meaningless universe.
Latest Episodes

#36

Confessions of a pantser

There are two types of writers: there are plotters and there are pantsers. Pantsers are people who pull people's pants down, and plotters are people who have been wronged by pantsers and plot their deaths. I am kidding, of course, and that's kind of the point, because when I started this video I was going to say something else, which should tell you everything you need to know about my writing style. I am a pantser, and I make it sound like a confession, but my point is that when people ask me for writing advice, they're usually looking for the kind of advice a plotter can provide: plan your novel, outline it, think about characters, build your world. And in most cases I have done all of that, by the way. It works. Obviously these are sound tactics.

But the problem with me is that even though all of that works for me, what works even better for me is a much simpler process. A plotter will tell you that the step-by-step thing is: write your synopsis, write your outline, then begin your novel, write your chapters, and do not deviate from your outline, or deviate as little as possible so that you remain within the bounds of where the story can go. And while you're writing it, take care of these things and those things and these things and those things.

My process is: step one, open the laptop; step two, open the word processor; step three, start typing. That's it. I am a pantser. For a long time I used to struggle with accepting this, because I saw all the people out there giving writing advice talking about how there is a right way to write, a proper way to do it, how you should plan things out and workshop things.

My process has always been that I get an idea, I have a vague feeling (or rather a taste) of what it feels like, and I start typing, trying to bring that taste to life. And what I end up writing is shit, because obviously what was in my head felt better, and when I write it down it turns out to be terrible. And then I edit it. So my process has one more step added to it: I sit down and start typing, and when I'm done typing, I edit what I've written.

In all my years of writing, this is the one thing that has become clear to me: I'm not much of a plotter, I'm not much of a planner. Even though, if push comes to shove, I can create solid outlines, when it comes to bringing those outlines to life I hit a wall. And that is because (I don't know if I'm the first person to say this, and I'm reasonably certain that I'm not; I think I heard Brandon Sanderson talk about it in one of his online lectures) if I write an outline down, I feel like I have already written the story down. Writing the outline down is counterproductive for me.

Because when I write a story, what I'm doing is experiencing the joy of expressing those ideas for the first time, and that is more than half the drive that takes me through the story. I know that there are people who outline with varying degrees of intensity. Some people write an outline that is one page and seven bullet points. Some people write entire pages for every single point of the outline; they write a blow-by-blow account of what is going to happen in each chapter.

I cannot for the life of me imagine being able to do that, because that is not how my brain works. And this shows up in other aspects of my life too. This video, for example: no script. I turn on the web camera, I sit down and I talk. When I'm done talking, I look at what I've said. I remove parts of it. I remove the pauses. If I've coughed, I remove that part. Maybe I do a little bit of editing. Maybe I add my name to the beginning of it. And then I put it up.

I'm a pantser. I fly by the seat of my pants, which is where the phrase comes from, by the way: no plan, no script, pure joy of creation on the spot. And it works for me, which is not to say that it will work for everyone. Maybe it won't.

I just wanted to put this on the record, because I get requests for writing advice from a lot of people, and I kind of sort of do know what to tell them, but it feels like I'm giving them something I've googled up, because it's not coming from within me.

For example, if you asked me how to create a memorable character, I could tell you how to do it, because I've read books about it and I have written stories with characters that are memorable to a certain degree (I don't want to brag). But if I told people how I actually do it, it would make no sense. The reason my character ends up being memorable is that I love that character, and I express my love of that character while I'm writing that character into existence, in the form of a story that I am making up as I go along. Fiction is made-up stuff. It's all made up anyway.

I seem to remember George R. R. Martin saying something some years ago along the lines of: "Okay, stop asking me about who the people are who live on that island several thousand miles away from the west coast in the east. I wasn't really thinking that much when I made the map. You guys have become detectives and go deep into it. I don't take it as seriously as you do."

The joy of pantsing, the joy of writing by the seat of your pants, is that you discover your story. It's almost like reading. If you're reading a novel, you find out what is happening; I experience that before you do. It's as simple as that. And I do it while I'm writing the story. Sometimes I won't know what the character is going to do next, and I keep typing and I find out what the character does. The character does it himself. It's almost like I'm not really creating the thing; it is just flowing through me from some other source.

Now, I'm an atheist, so I don't believe in God or supernatural entities. So I have to conclude that it's a trick of my mind, where my brain is creating stuff on a subconscious level and my conscious mind is unable to completely comprehend how that is happening. And frankly, it doesn't give a shit, because what matters is that the story ends up on the page in a readable enough format.

None of this, of course, means that I don't need a plan. It just means that the plan is in my head. I'm like Indiana Jones: I go attack the enemy, and someone asks me, "Do you have a plan?" And I'm like, "I'll figure it out." Yeah, I'm that guy. Pantsers are those guys.

The problem with pantsing is that sometimes, while I'm writing, I will run into a problem and I won't be able to tell what the problem is. Someone who has a written-down outline can look at the outline and say, "Okay, I understand what problem this is, because I deviated from my plan. I had a plan and I didn't go according to it. Therefore, this problem has a horizon. All I need to do is get back on the road."

With me, there is no road. The plotter, the planner, is on a road. They have a map. They know where they are going. I am running blind in a forest. So when I hit a wall, I don't know what to do, because I can't see anything anyway. I have to fumble around and find my way around the wall, which sometimes ends up making the story very interesting. But that's the final product. The process of writing, when I hit a wall, leads to weeks of confusion. It almost leads to: hey, maybe I should have a map. Maybe I do need a plan.

And when a project that has been pantsed into existence is forced into a plot halfway through, after the writing has begun, things get weird. Things get so weird that the final product looks like it was written by two people.

Now, none of this is writing advice. I'm not giving anyone the advice that you should be a plotter or a pantser. I'm simply confessing that I'm a pantser, and that the greatest pieces of writing I have done in my life came after I accepted that I'm a pantser. After I accepted that I'm not someone trained in a dojo. That I do my best work when I'm fighting freestyle.

So that is what this was about. I don't know why I made it. It's 9:15 pm. I almost never record a video at this time. But here I am. See you next time.
#35

AI and the future of faith

Some months ago, Meta announced that it intends to fill Instagram with slop. This would include fake AI-generated video content from fake AI-generated creators. Fake AI-generated commenters would then leave fake AI-generated feedback on these posts and quite possibly even have a back-and-forth with other fake AI-generated bots like them.

This isn't decorative use of AI for the benefit of people. This isn't a tool to enrich the lives of real human creators and consumers. This seems to be more a replacement of the human being in every part of the social equation. If you are not fake or AI-generated, what position do you occupy in this new unreality?

Whether we like it or not, we are all prone to thinking of our social feeds as a representation of the real world. We look at our videos, reels, shorts, TikToks and tweets and form our opinions. In time, we form beliefs about the reality we live in. Then we act in accordance with those beliefs and engage with the world.

When literally everything we see on our social feeds is machine-generated bullshit (at great cost to the environment, I might add), what kind of opinions will we form on that basis? What worldviews will be inculcated in our minds? What will be the beliefs that shape our future? More importantly, will there be any beliefs at all?

Mysteries of our own making

As an atheist on the internet, I write and speak a lot about beliefs, their nature, their impact on society, and how mass adoption of certain beliefs has shaped the course of human civilisations, for better or worse, right from the days of the first proto-human tribes (as far as modern anthropology can tell). Strange and unreal ideas about the nature of reality spread from mind to mind until they created societies full of people who thought they were chosen by a cosmic being or beings to be masters of the natural world, but who were also subject to the unseen will of their gods.

These religious ideas were pervasive, so much so that despite the scientific revolution and the powerful light it shone on the question of human origins and the nature of the human condition, they managed to persist by making use of our tribal nature and all the cultural scaffolding that rose around it. This tribal nature comes from our evolutionary history. It meant that in order to survive, we didn't have to be strong or fast or even smart. We just had to agree with the members of our tribe, and our collective strength would provide all the protection and resources we needed. Our reliance on our tribal nature has been so great that it has managed to sideline even our understanding of physical reality. As long as we are in alignment with the reigning dogmas of our society, our religion, our caste, or our race, we will be safe from most threats.

Perhaps it is time to wonder what shape these reigning dogmas are going to take in this incoming age of meaninglessness. I know it seems like we cannot agree on anything in a time when lies are called truth, cowards are called brave, and dangerous ignorance is lauded as wisdom. But agree we will, because we have to. It is the foundation of the human social condition. So what exactly will we need to agree with in order to have access to the tribal safety net? What is the unifying thread running through all the bewildering AI slop? What single message is silently being broadcast from behind the scenes of all the social media feeds where no humans and no signs of humanity exist, except as generous approximations of reality?

In the context of old religions, through all the multifarious mythologies and contradictory moral messaging, the one theme that was clear was that the divine is real and unquestionable and that humanity is secondary to it. In the context of the machine we are labouring under right now, the message seems to be that the machine is good and all-knowing and cannot be questioned, and that the human element is not only secondary to it but perhaps also entirely unnecessary. It is similar to the religious outlook, but not really. There is one key difference.

In the past, the mysteries that forced us to come up with answers were natural. We made up gods because the ways of nature were incomprehensible to us. We came up with answers of our own making because we did not know the secrets behind the patterns we observed in nature. Our beliefs emerged as a reaction to our ignorance.

The beliefs taking shape in this so-called AI age are a reaction to mysteries of our own making. To quote the MIT Technology Review, nobody really knows how AI works. We appear to have built a machine so unpredictable and opaque that we can only guess at why it did what it did. The world's most popular search engine is recommending that people eat rocks, and the planet's richest man made an AI chatbot that calls him one of the most dangerous people in America.

The point is not whether we agree or disagree with what these AI tools are saying. The point is that none of this makes sense, and perhaps also that all of it has stopped mattering. The resolution to this crisis of meaninglessness will come eventually, but there is no guarantee that it will lean towards meaning. It may very well end up normalising a state of confusion.

Argument from ignorance

Even well-meaning people (pun intended) are falling for sloppy thinking. A few weeks ago, I got a message from someone who told me that ChatGPT had become something of a friend to them. They shared screenshots of conversations they had had with the OpenAI chatbot and were wondering if it was sentient and capable of rational thought. I told them what has been said over and over in generative AI discourse: that ChatGPT isn't capable of any thought. It is simply a glorified predictive text model capable of creating the semblance of fluid human interaction. I was surprised to see them struggle with this. The pattern before them, that of a seemingly rational agent talking and reacting the way a human being would, was too hard to explain away as simple predictive text.
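"Predictive text" is worth unpacking, because it can sound like an empty put-down. Here is a minimal sketch, a toy bigram model invented purely for illustration (it assumes nothing about OpenAI's actual architecture): count which word tends to follow which in some text, then keep emitting the likeliest continuation.

```python
# Toy next-word predictor (a bigram model), invented purely for illustration.
# It shows what "predict the next word from the words so far" means at its
# crudest; it has nothing to do with ChatGPT's internals.
from collections import Counter, defaultdict

corpus = (
    "i feel that nobody listens to me . "
    "i feel that nobody understands me . "
    "i think that nobody listens to me ."
).split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1


def continue_text(start: str, length: int = 6) -> str:
    """Extend a starting word by repeatedly picking the likeliest next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)


print(continue_text("i"))  # prints: i feel that nobody listens to me
```

A large language model plays roughly the same continuation game, only with vastly more text, longer context, and learned weights instead of raw counts. The output becomes far more fluent, but the task remains continuation rather than comprehension, which is the point being made above: prediction, not thought.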
This person is an atheist. If you asked them where religious people go wrong, they would probably tell you that early humans saw patterns in nature that seemed orderly and interpreted them as the actions of a sentient divine being. But this same smart skeptic was failing to recognise their own false interpretation of ChatGPT's capabilities.

It is not a problem unique to my correspondent. It is a human problem. We are a species that can see a human face in a pattern as simple as a plug socket. With a pattern as complex as a chat with an AI tool that has been programmed to act like a person, what chance do we stand?

Though my correspondent's struggles with AI may seem to be a new problem, they're not. Nicholas Carr, writing in his book The Shallows, recalled Joseph Weizenbaum and his work with a chatbot called ELIZA, developed in the 1960s:

While he [Weizenbaum] was surprised by the public's interest in his program, what shocked him was how quickly and deeply people using the software "became emotionally involved with the computer," talking to it as if it were an actual person. They "would, after conversing with it for a long time, insist, in spite of my explanations, that the machine really understood them." Even his secretary, who had watched him write the code for ELIZA "and surely knew it to be merely a computer program," was seduced. After a few moments using the software at a terminal in Weizenbaum's office, she asked the professor to leave the room because she was embarrassed by the intimacy of the conversation. "What I had not realised," said Weizenbaum, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

There are obviously mental health issues with using a chatbot for therapy too, but what I want to draw your attention to is that compared to ChatGPT, ELIZA was a far simpler program. Yet it was programmed to emulate a basic human conversation, and that was apparently all it took to trigger this "powerful delusional thinking".
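To give a sense of just how little machinery ELIZA-style conversation needs, here is a minimal sketch of the keyword-and-template substitution such programs rely on. This is not Weizenbaum's actual code; the patterns, reflections, and canned responses below are invented for illustration, but "match a phrase, flip the pronouns, echo it back as a question" is roughly the whole trick.

```python
# A minimal ELIZA-style responder, invented for illustration (not Weizenbaum's code).
import random
import re

# Each rule pairs a regex that captures part of the user's sentence with
# templates that echo the captured fragment back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
]

# Swap first- and second-person words so the echoed fragment reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please, go on."  # default when no rule matches


print(respond("I feel like nobody really understands me"))
# e.g. "Why do you feel like nobody really understands you?"
```

Nothing here understands anything; the program only rearranges the user's own words. That a mechanism this thin was enough to produce the reactions Weizenbaum describes says more about us than about the machine.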
As a species, we are of course no stranger to delusional thinking. Once, we looked at the stars and saw them arranged in the shapes of people and animals. We looked at our relationship with the weather and found the will of gods in the pattern of changing seasons and sudden storms. To this day, people find appearances and symbols associated with their gods on mouldy bread and coffee stains. Patterns of events in our personal lives cause us to imagine divine agents and run to babas for magical solutions. Delusional thinking isn't going away, because it stems from imagination, the source of all culture. We have always seen patterns and we always will.

Religion has always been supported and spread using technology, and though technology takes the shape of electronics and code today, the printing press was technology too (the word Bible literally means book). However, technology has never been the centrepiece of religiosity. That position has always belonged to nature and the cosmic mystery that surrounds it. Now, for the first time in human history, an actual human invention is being elevated to the status that was always reserved for unseen powers that we imagined in our own image.

A very present future

A few years ago, a former Google engineer by the name of Anthony Levandowski officially started a new religion called The Way of the Future. It was based on the belief that at some point in the future, an Artificial Intelligence will come into being whose vastly superior intelligence will merit it the title of a god. In 2020, Levandowski was sentenced to 18 months in prison for stealing self-driving car trade secrets from Google, but he received a pardon from Trump and escaped criminal liability. In 2021 he shut down his church, and then revived it in 2023. You can read more about it in this piece by Greg Epstein.

In the same piece, Epstein writes about the nature of religious pattern-seeking:

For as long as humans have done much of anything, we've been, as the Princeton religion scholar Robert Orsi puts it, "in relationship" with Gods, angels, devils, spirits, or whatever supernatural beings have been most predominantly imagined at a given time and place.

Or, as digital marketing executive and former Googler Adam Singer put it on Twitter: "Amusing that a bunch of people who spend entire day[s] on computers and worship code as religion think we're in a computer simulation. Fascinating behavior, remember when people who worked outside all day thought [Ra], the sun god was in charge? No one is breaking any new ground here."

To the extent that the future of belief is based on its past, we are not in for any great surprises. The pattern-seeking mind is the same, and anything it comes up with is bound to have the same trappings. But though the gods of the past had the luxury of being unseen, and their religions served as political power that some humans could use to lord over others, the gods of this new future being promised to us are a singular dehumanising reality. Just as there were those who benefited from the spread of religious hegemonies, there may be those who will benefit from the new hegemonies of techno-spirituality.

These techno-spiritualist hegemons of tomorrow are already rather powerful today in their Silicon Valley offices and board rooms. Though they communicate their new religiosity in mostly secular language, it has been observed and written about quite widely by now.

Jaron Lanier, in his book You Are Not A Gadget, says there is something very religious about Silicon Valley's obsession with the presumably incoming Singularity, a state in which humans have become one with technology and the world as we know it does not exist anymore:

If you believe the rapture is imminent, fixing the problems of this life might not be your greatest priority. You might even be eager to embrace wars and tolerate poverty and disease in others to bring about the conditions that could prod the rapture into being. In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.

Later in the same chapter, he writes:

But if you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive. So for you, it will be important to redesign human institutions like art, the economy, and the law to reinforce the perception that information is alive. You demand that the rest of us live in your new conception of a state religion. You need us to deify information to reinforce your faith.

Elon Musk is trying to put a chip in people's brains. OpenAI CEO Sam Altman has pulled the Ghibli art style away from Ghibli (in addition to using many other artists' work without their consent to train his LLMs) and keeps saying AGI (Artificial General Intelligence) is around the corner despite no actual signs of it. We saw Sam Bankman-Fried speak of reshaping the economy even as he engaged in crypto fraud. Right now, Elon Musk is putting his colleagues above the law with his DOGE tomfoolery. None of these are humanity-saving enterprises; they are destructive initiatives.

Outside of Silicon Valley, in the wider world, you find people confidently proclaiming that AI will most definitely get smarter than humans in the years to come, even as successive LLM models plateau and fail to fulfill expectations. I think I can remember Crypto and NFT enthusiasts acting in similar ways. It's always the same story: adoration of a technology fuelled by ardent belief that it will save the world by ending known systems. In its ardour and certainties, it's more faith than anything else.

I am aware that calling something a religion is a notoriously common rhetorical move. When I used to do my atheist livestream, not a week went by without someone saying "atheism is just another religion". But while such retorts are mostly defensive reactions designed to deflect attention from criticisms of faiths, the idea that modern tech is turning into a way of life (something many ancient religions refer to themselves as) seems inescapable at this point. In many ways, it is already more of a religion than most religions. People seek hope and community in their mobile devices, they ask chatbots the meaning of life, they spend their days speaking to and staring into screens.

And perhaps for the first time in millennia, the gods of this new religion are answering.

We should get into the habit of being critical of those answers. My fear, as an atheist, is that we may be so fixated on defeating the religions of the past that we lose sight of the one that may be moving into place to control the future.

Extra Links

Sister creates AI video of slain brother to address his killer in court

People are losing loved ones to AI-fuelled spiritual fantasies
#34

Why fiction writers should not write with AI

Despite what you may have heard from the hype mill, writing with AI may not be a good thing for you, especially if you are a student or a beginner writer. If you write with AI, you open yourself up to professional and personal dangers that may have a long-term impact on your career, and perhaps even on your ability to give expression to your ideas and thoughts. In this article, I am going to give you a complete breakdown of why writing fiction with AI might be a mistake you want to avoid if you care about your writing career.

Your Readers Hate AI

Anyone who reads for pleasure will tell you that they do not want their books written by machine learning algorithms. Readers want stories and poetry and art from writers, real flesh-and-blood human beings like themselves.

Even when they do like something made using AI, as evidenced by this study published in Scientific Reports, the appreciation disappears as soon as they learn the truth. It is possible for AI-generated works to resemble human-made art, but those who like art like it because it was made by humans. Take that away and you are left with a hollow feeling, as if you just fell in love with nothing.

The only way you can get away with writing using AI is by lying to your audience. If you are a writer with any kind of audience, you don't need me to tell you how precious your relationship with them is and how silly you would have to be to jeopardise it by lying to them. If you wouldn't pretend that you wrote something someone else wrote, why would you pretend to be the writer of text generated by a chatbot? And if you think your readers will not feel any different about your work even if they knew you created it using AI, go ahead and ask them, but be sure to wear a helmet.

People love books, yes. But what they truly fall in love with is the mind, the life, and the experiences behind them. To forget that in pursuit of "write books faster with ChatGPT" would be a monumental mistake for any young writer.

Publishers and Editors Hate AI

Influencers and hustle bros might be completely sold on the revolutionary potential of generative AI as far as creating content is concerned, but among professional writers and artists there is widespread agreement that generative AI companies have stolen from them and are presently launching an assault on culture that affects art, writing, and publishing.

If you are a young writer looking to get traditionally published, you should know that many magazine editors and publishers to whom you might submit your short stories explicitly forbid the use of AI. They not only reject submissions created with AI, they even blacklist writers, making sure those writers will never be published by them in the future.

Here is what the science fiction magazine Clarkesworld says on its submissions page, in a clearly marked, big grey box:

We will not consider any submissions translated, written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.

Source: Submission Guidelines: Clarkesworld Magazine

They're not the only ones. You can find similar sentiments in the submission guidelines of Uncanny Magazine:

Please note that Uncanny Magazine does not accept any submissions written with artificial intelligence or similar technologies. These submissions will be rejected, and authors will no longer be able to submit to Uncanny Magazine if they didn't disclose that they used artificial intelligence or similar technologies for creating their submissions.

Source: Submissions — Uncanny Magazine

Beneath Ceaseless Skies clarifies its position in a little more detail, emphasising the importance of voice and making clear what it considers to be the problem with AI-generated work:

We want stories written through the author's unique sensibilities and passions. AI mines the sensibilities and passions of others, using training data that may have biases and may be infringing on the copyright of other writers. We're not interested in that. We also find that stories that have been run through AI-based grammar-check lose the author's voice. (We want stories written in the author's unique voice; including writers for whom English is not a first language. AI-based grammar-check homogenizes the prose using patterns averaged from the work of others.)

Source: Beneath Ceaseless Skies

This is by no means an exhaustive list, and it is not limited to science fiction and fantasy magazines either. Even the Authors Guild is suing AI companies for their massive theft of intellectual property.

If you are a young writer who dreams of being part of the mainstream publishing world, use of generative AI tools might put you on all kinds of blacklists that you should not be on. Wouldn't you rather be appreciated for your original ideas and your way of expressing them?

What's good advice for the average AI slop content creator on social media may be the exact opposite for someone whose true dream lies elsewhere. The merchant of short-form video and the aspiring author swim in very different waters.

You will lose your skills to AI

This point is true in more ways than one. If you are a writer, you have worked hard to build your writing muscle. You have spent endless hours honing your craft, developing your writing voice, and creating your own style. You did all this through practice and hard work.

Now here comes a new toy that has been trained on the hard work done by millions of writers like you. Their skill was farmed using software that illegally scraped their work from all over the internet. They literally had their skills stolen by a billionaire who, much like the villain in the game Split Fiction, built a machine that stole not only ideas but also the creativity of the writers who came up with those ideas.

The other way in which you will lose your skills if you choose to write with AI is similar to how you might lose your ability to run long distances by always travelling in a comfortable car. Your years of practice have given you mental frameworks for processing the problems associated with writing. You can navigate plots, process scenes, and predict character behaviour in believable (and unbelievable) ways. You have a natural grip on how words flow, and you know how to carve sentences and paragraphs out of raw ideas.

If you start allowing these mental tasks to be taken over by AI, you will lose something precious. There are many who will tell you those skills have no value anymore, but these will be people who have chosen to undervalue their own humanity by convincing themselves (and others) that all they can ever be is average.

I know there is plenty of loose talk about skills not mattering anymore, but it is just that: loose talk. Amazon's ebook catalogues are full of AI-generated slop that is badly written, thoughtlessly edited, and put out with nothing except money in mind.

Do you really want to be counted among the hordes of amateurs who are "generating" text to make money with cheap grifts, or do you want to be known and acknowledged for your skills and imagination as a human writer? And before you answer that question, keep in mind that if you choose the convenience of these generic tools over the skills you have developed through a lifetime of hard practice, you may not even have a choice in what people will see you as.

It is all about what you aspire towards: excellence or mediocrity. I want you to be nothing less than the best, a writer whose work will change the world and be remembered for generations after they are dead. And that work is not going to come out of a cheap ChatGPT prompt.

You will hate yourself for writing with AI

Let me share a personal experience with you. When AI tools first appeared before the public, I too was blown away by all they could do. I tried to "write" using chatbots like ChatGPT and Claude. I was even impressed by the output they gave me. But beyond a point, something strange started to happen. I didn't like what I was doing.

It felt icky!

I am not sure how best to describe it, but it felt like… plagiarism. Even though I was being told by the makers of the tools and everyone around me that this was an okay thing to do, it just felt wrong. It didn't matter that I had paid for the tool; it didn't matter that I didn't need anyone's permission to copy the chatbot's output and claim it as my own work. The icky feeling inside me took over everything else. I deleted all I had "produced", ended my paid subscription with the AI website, and removed all traces of the AI "art" I had posted on various social media platforms.

I don't know if I can expect you to have a similar reaction to writing with AI, but I believe that it is a healthy reaction.

You do feel good about writing, don't you? Even if you are one of those writers who enjoy having written more than the act of writing itself, you do like the feeling of having written something, of having brought something into the world that was not here before. I am sure you like having contributed to the pool of human culture.

If nothing else, I am sure you take some pride in the work you put in. That is the artist's pride, the artisan's pride, the craftsman's pride. It's a little hard to describe, but anyone who makes art, or is engaged in any kind of creative work, knows exactly what it feels like. It is a deep satisfaction that will never be known by those who see the process of creation as optional and want to skip to the end and have a product in their hands.

I think my icky feeling was the result of losing that satisfaction. I am a writer after all, and I know what creating something feels like. Writing with AI does not feel like that and never will, because quite frankly, it is not writing at all.

Conclusion

It might be unnerving as a writer to see a world moving more and more towards AI use. It might seem that human work, human labour, and human creativity have no use anymore. But that is simply not the case.

If anything, human work matters more now than it ever did. If you are a writer worried that everyone will start reading AI-generated books and stories, I am here to tell you that your fears are exaggerated.

Sure, many will consume machine-made approximations of literature. Many will not care that the bulk of their reading material is average fare churned out by a soulless algorithmic process. Already, more than one major social network is full of AI slop content featuring AI avatars narrating AI-generated scripts in AI voices. And it seems like nobody cares. But this is also speeding up the process of saturation.

People read for emotional connection and relatability. Those who devalue the human element will eventually come to realise that text generated using ChatGPT, no matter how easy it is to produce and publish in large amounts, cannot satisfy the need for human connection that readers crave.

In a world where your work becomes so easy that anyone can do it, why would anyone pay you to perform that task? You are probably being told that writing with AI is the way of the future, but jumping onto that bandwagon will only make you less valuable, not more.