
Is AI Going to Trap You in a Permanent Underclass?

As the capabilities of artificial intelligence (AI) continue to grow at an exponential rate, there has been a whirlwind of fear around where humans will fit into this AI-driven world. From the Silicon Valley tech bros to Wall Street investment bankers and Pentagon officials, the corporate world and bureaucratic class have convinced themselves that a future without AI integration is impossible. They now want to convince you that if you cannot adapt, you will be left behind. But how true is this really?

The idea of AI creating a “permanent underclass” is one I’ve seen circulating on social media over the past few months, with some people claiming that AI will severely upend the labor market and that most people will be out of jobs as soon as 2030. The idea has only gained traction since Anthropic, the creator of Claude (ChatGPT’s most significant competitor), released a report saying that people working in every major job except the trades—jobs that AI literally cannot do—will be replaced. Even that obstacle to automating the trades may soon disappear, as you can already “rent your body” to an AI to do manual labor.

So, as far as we can tell, there is basically no way to get around this AI boom. The only advice anyone seems able to offer at this point is to get incredibly proficient at using ChatGPT, Claude, Gemini, Cursor, Copilot, Perplexity, Lovable, Wispr Flow, Granola…ugh, these names suck!

In all honesty, the likelihood of AI replacing the majority of jobs is probably not that high. While white-collar workers are likely to see AI-driven unemployment sooner, and at higher rates, than blue-collar workers, a line will be drawn somewhere. There is no world in which the majority of service workers, professors, doctors, artists, actors, musicians, etc., will be replaced by AI. What AI truly lacks is not intelligence but accountability. A doctor who makes a fatal mistake during a surgery will carry the weight of that. A singer who releases music about heartbreak is choosing to be vulnerable. These sentimental distinctions are the entire reason we seek those people out in the first place, and no model trained on human data can fully replicate being human.

On a fundamental level, as humans, we have a purpose in everything we do, and things so closely tied to the human experience will never become automated. Yes, ChatGPT can answer some of your questions, but answering a question via predictive text and actually understanding the person asking it are two very different things. The value of a great teacher, doctor, or artist isn’t solely in the output they produce, but in the lived experience they bring to it. It would not make sense to render any of these jobs obsolete and try to replace them with AI.

It is also worth considering that, on some scale, this has happened before. If you told a farmer from the 1700s that farmers would make up less than 2% of the U.S. population today, they’d probably freak. But alas, the tractor was invented, and we adjusted. Moreover, extensive research on this particular parallel has found that the automation of farming did not eliminate farmers entirely but instead created new jobs. I mean, who in the 1700s was saying they wanted to become an accountant when they grew up?

That said, we should still be terrified of this happening sooner than we’d like. Never in human history have we experienced such rapid technological development, in terms of both capability and adoption. This runs an incredible risk of fostering the greatest wealth inequality the world has ever seen. According to recent estimates, 84% of people in the world have never used AI before. It would be nice to believe this simply means AI usage is not as mainstream as it might appear to someone on, say, a college campus, and that such fear is irrational in the first place. But the sheer amount of resources owned and operated by the same companies creating or funding these AI tools suggests otherwise. Wide-scale AI adoption is the goal, and whether the rest of us are ready or not is not exactly part of the plan.

If you’re even a little bit tapped into all of this, you may have seen that OpenAI recently shut down its video generation model Sora, which was expected to be so big that even Disney licensed its creations to OpenAI for use in Sora. But did you also see that OpenAI raised $122 billion from investors who valued the company at as much as $852 billion?

Whatever fights we may believe we’re winning against these systems, we’re not. We’re in completely unparalleled territory, and it is proving to be incredibly dangerous. Not even two months ago, Anthropic’s Head of Safety resigned, with a note saying “the world is in peril” and that he was leaving to study poetry. If that isn’t a wake-up call, I don’t know what is.

The truth is, it is difficult to tell whether AI will create new socioeconomic boundaries, although much of the data seems to point towards yes. What is not difficult to see, however, is that the people building these systems have a vested interest in making sure you need them, and the gap between those who can afford to adopt and those who can’t is already starting to widen. The question of whether AI will create a permanent underclass may ultimately rely less on the technology itself and more on who gets to decide how it’s used.

AI is an incredibly powerful tool when used for good, and to its credit, there is plenty of good already happening. From faster scientific discovery to solving decades-old mathematical problems, the upside is real and should not be ignored. The danger, then, lies not in the technology itself but in the possibility that those benefits stay locked behind the same doors as everything else.

All that said, if you were to ask me whether AI will create a permanent underclass, my honest answer would be that I don’t know. What I can say is that the window to shape this development—whether through policy, public pressure, or the basic act of paying attention—will not stay open forever. We are at a point where each of us must make a conscious decision about what we are personally willing to do to avoid getting left behind.

Shloka Bhattacharyya is a member of the class of 2028 and can be reached at sbhattachary@wesleyan.edu.
