OPINION | This Article is SO Cool! You Should Put it on a T-Shirt!
DISCLAIMER: This article contains content generated by Artificial Intelligence to be used as an example. It is provided for informational and educational purposes only.
As artificial intelligence continues to evolve, one of the most intriguing questions arises: Can humans truly distinguish between AI-generated text and that produced by human hands? With advancements in natural language processing and machine learning, AI systems are now capable of crafting narratives, essays, and even poetry that can mimic human writing styles with remarkable precision. This blurring of lines poses not only a challenge for readers but also raises profound implications for creativity, authenticity, and the very nature of authorship. Can the signature of humanity ever be replicated by algorithms, or will certain subtleties always remain uniquely human?
For instance, could you tell that the opening paragraph of this article was generated entirely by artificial intelligence? Can we count on anything that exists in an online space being genuine anymore? Does it keep you up at night thinking that your online friend ‘_xXcharliesangelXx_’ from Denver, Colorado might really just be an automated bot who doesn’t actually care about the fact that there’s a baby pygmy hippo in Thailand who bites people? To explore these questions further, we can turn to the Dead Internet Theory, which says that, yes, everything you interact with online is only an illusion carefully crafted to appear real! Hooray! Cookies for everyone!
As with most of the greatest online conspiracies, the earliest traces of the Dead Internet Theory spawned from the depths of 4chan, a discussion board site where users reply to one another in a chain, similar to Threads or Reddit. Although references to the theory can be traced back to as early as 2016, it was heavily popularized in 2021 because of a post from an online user named IlluminatiPirate titled “Dead Internet Theory: Most Of The Internet Is Fake.”
While some of the content in the original post bordered on unnecessarily crude and was definitely not appropriate for a high school newspaper, it amplified the conversation surrounding how much of what we consume online is really human-to-human interaction. The degree of seriousness with which the theory is used is quite split, however, with some believers claiming that as much as 80 per cent of online conversation is bot-fueled. Still, many of the people who use the term are simply referring to a noticeable spike in AI-powered accounts, not necessarily claiming that they make up a majority of consumable content.
After its recent resurgence in popularity, the main supporting argument for the theory lies in the fact that the reply sections of many of the most used social media sites are now infested with meaningless replies that only work to either 1) boost engagement for the original creator through comment numbers or by expressing nonsensical political views meant to fish for more responses, or 2) advertise products.
You can test this yourself by opening any social media app and scrolling to the recent replies, where there are just fifteen people commenting the heart eyes emoji under a video about snakes eating mice alive, or insisting that the 2020 US election was a scam under a video about childcare policy. Their only purpose is to feed social media algorithms, which tend to push videos with more comments, which in turn means more bots will target them, and the cycle continues.
Focusing on the ‘advertising’ function: despite the number of bot accounts and automated replies that exist, readers will find it comforting to know that, truthfully, AI on its own is not at a level where it can participate in social media without being quickly detected. Humans will always find a way to protect their digital spaces when things cross the line from being a nuisance to genuinely harmful.
One of my favourite examples of how little power AI has without human monitoring came in June 2023, when users online noticed that bots on various platforms were tagging replies with key phrases like ‘this would be cool on a shirt!’ or ‘you need to sell this’ under posts of art or witty text. Those bots would then, to put it simply, steal the artwork and immediately set up a webpage that copied and pasted the original post onto a dropshipping-esque site.
Warnings were given that responses of that nature would only work to harm the original artists, and in retaliation, people started commenting phrases that matched what dropshipping bots were looking for under as many posts as possible, overwhelming their sites and drowning out access to the stolen art. Some users took it a step further, such as Reddit user u/IStoleYourToastLol, who posted a hand-drawn image of Mickey Mouse claiming he “smelled like rotten eggs,” accompanied by more text that stated, “This is NOT a parody! We committed copyright infringement and want to be sued by Disney. We pay ALL court and tribunal fees.” They then encouraged commenters to reply to the image with the “I want this on a shirt” phrase, which led to hundreds of bot sites putting the image up.
Photo Credit: nemoshirt.com
Disney is notorious for being, for lack of a better word, ruthless in taking down any use of their characters or branding for profit. As these AI bots mindlessly began pasting the satirical image on everything from shirts to sweaters and mugs, it opened their sites up to exactly that level of punishment, because while smaller artists can’t do much about stolen art, Disney absolutely can.
And sure, admittedly Mr. Walt Disney’s (allegedly) frozen head might not be out there suing every Redbubble copycat, since searching “mickey mouse rotten egg shirt” still gives me about fifty different options to purchase from. But the message has more to do with human resistance, and with finding humour in a problem that started out as detrimental to artists’ livelihoods.
This is all to say that fear of a completely digitized and artificial online existence shouldn’t pose a threat as of now, since with even the smallest degree of digital literacy, AI that isn’t constantly being censored or monitored by humans becomes extremely easy to identify. These accounts capitalize on identifying single words or phrases, oftentimes completely ignoring the rest of the original post’s context when they generate their replies. If you click on their accounts, all you see is a low follower count, a high following count, and the same reply posted over and over again.
That being said, with no guidelines on AI usage in a world where technology develops far faster than laws and regulations can even be considered, there’s always room for humanity to reach a point where the majority can’t keep up. It’s true that the ‘ease’ with which I claimed AI can be detected is mostly conditional on a younger generation used to being online constantly; older generations, for the most part, aren’t able to adapt as easily.
So when AI-powered robots do eventually take over and stop us from being able to distinguish ourselves from them, like in the 2018 film Extinction starring Michael Peña, I would like to remain conscious of the fact that The Griffins’ Nest publishes all of its articles online. If I am now cementing this 1300-word rant as an unerasable part of human history that can be traced back to me when the androids come to wipe us all out, here is my damage control: AI is sooooo awesome and there is nothing creepy or environmentally wasteful about it! I love the entire industry so much!