The Future of AI: Where Will We Be In Ten Years? 


Opinions about the future of artificial intelligence (AI) are so varied and so numerous that the single best question for discovering someone’s camp on AI now seems to be: “Which science-fiction universe do you think we will be closest to in ten years?”

Some voices in the AI community would probably settle on Terminator, where Skynet (a fictional military defense network AI) has become self-aware and has wiped out humanity. A popular article that shares this outlook is AI 2027, a scenario-planning project intended to predict AI’s trajectory over the next few years. It was released by the AI Futures Project, a team that includes Daniel Kokotajlo, a former researcher at OpenAI (the company behind ChatGPT). At its core, the piece predicts that in 2027, AI will become advanced enough to be classified as artificial general intelligence (AGI): a system with all of the intellectual capabilities of a human. This AGI will then build the next generation of AI, and each successive model will become exponentially better until, in 2028, superintelligence (an AI with capabilities far beyond those of the best human in every field) is achieved.

This is the “takeoff” theory, and AI 2027 predicts a very fast takeoff. Its authors believe that an AI arms race with China will cause regulations and safety research to be ignored. They argue that this will result in a superintelligence whose goals aren’t aligned with what is societally ideal (honesty, helpfulness, and a humanity-first perspective) and that is instead primarily self-interested. This, predictably, ends with the extinction of humanity. The way to thwart this outcome, AI 2027 argues, is to slow down the takeoff, monitor development and alignment closely, and avoid entering an international race over AI that causes corners to be cut on safety. Following these steps, a “friendly” superintelligence might be built, one that has humanity’s best interests at heart and can take us to a new era of peace and prosperity.

Not all doomers are so optimistic. Eliezer Yudkowsky, AI researcher and author of “If Anyone Builds It, Everyone Dies,” advocates for a simpler path: abandoning AI entirely. In a 2023 Time article, he writes: “Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future.”

Yudkowsky does not want to gamble on building a friendly superintelligence on our first go; he argues that the risk is so great and so existential that everyone who is sane should stop right away, throw away all the toys, and lock the future of AI behind a door marked with every warning sign that can fit on it.

On the other end of the spectrum, some groups argue that we aren’t moving fast enough. Among the more radical factions are the Effective Accelerationists, adherents of an ideology coined on the social media platform X by former Google engineer Guillaume Verdon that is popular in the Silicon Valley and San Francisco tech scenes. The theory argues that unrestricted technological advancement under capitalism is the best path for humanity. Verdon argues that AGI will accelerate technological progress, and that doomers, by advocating for slowing down or stopping AI research, are contributing to the decline of civilization.

Billionaire venture capitalist Marc Andreessen is a vocal proponent of this movement; his piece “The Techno-Optimist Manifesto” exalts AI as the “universal problem solver” that will bring about a world of abundance and expedite the “ultimate mission” of spreading humanity to new planets. To return to our starting question, it’s clear that if life doesn’t resemble a utopian version of the science fiction film Blade Runner in the near future, the Effective Accelerationists and the “techno-optimists” will probably be quite disappointed.

My personal outlook on the future of AI sits at a happy medium between the doomers and the Effective Accelerationists. Demis Hassabis, co-founder and CEO of the AI research lab DeepMind (now owned by Google), said in an interview with The Guardian that he was a “cautious optimist” about the future of AI. That term resonates with me the most, for a few reasons.

It doesn’t exclude risk or downsides, and it doesn’t advocate for a headlong dash of AI advancement. The “cautious” part emphasizes that we should not rush safety research in order to expedite development or the launch of features. The recent instances where minors developed unhealthy obsessions with chatbots like Character.AI or ChatGPT and later died by suicide are indicative of a serious void in important guardrails. While OpenAI has now issued statements about adding more restrictions for users under the age of 18, it’s clear that there should have been more systematic research into the mental health impact of these products, especially on vulnerable groups like teenagers, before they were rolled out. AI might be the “universal problem solver,” but if we develop and diffuse it recklessly, it will create many more problems than it solves.

However, to return to the doomers, simple caution hardly seems like an answer to the problem. Ought we, even if the chance of doom is quite low, to do as Yudkowsky says and “shut it all down”? I would agree, if not for the fact that I am more optimistic than Yudkowsky (or Kokotajlo) when it comes to the future of AI. I think the “takeoff” theory rests on assumptions that are not necessarily backed by research, and that there will probably not be a singular moment when AI becomes able to recursively self-improve. Instead, there will likely be a gradual increase in efficiency as AI becomes more involved in its own development.

“AI development already relies heavily on AI,” Arvind Narayanan writes in his paper “AI as Normal Technology.” With this in mind, the timeline for AGI expands from a two-year sprint to what could be decades. This slower forecast leaves more time for AI integration in specific industries, and as Narayanan goes on to argue, the market success of these integrations correlates directly with their safety. He uses the success of Waymo, the self-driving car company, over its competitors as an example; targeted regulation resulted in the less-safe companies Tesla, Uber, and Cruise all being out-competed. If this dynamic holds, I think Yudkowsky’s fears are unfounded.

What concerns me most about the future of AI isn’t our impending doom, which by now you (hopefully) don’t believe in, but rather AI’s potential to exacerbate existing economic inequality. Part of this will come in the form of large-scale job loss. Leaked documents suggest that Amazon already aspires to automate 75% of its operations. Ideally, this loss will be offset by the emergence of new kinds of jobs, which would make the shift comparable to the Industrial Revolution: just as work moved from the land to the assembly line, it will probably undergo a similarly radical shift in the coming years.

This is not necessarily bad; as we know from history, the standard of living did eventually rise after the Industrial Revolution, but it took a long time, and the transition was brutal for many workers. Along with horrific conditions in some factories, the increased capital that came with the new technology wasn’t shared equally with workers: it took unionizing for the average person to see positive returns, while the people who owned the factories profited disproportionately. Similarly, I think workforce automation of the kind Amazon plans will only further increase economic inequality, and it is vital that considerable effort and resources be put toward mitigating the impact of this transition on workers, through measures such as robust job retraining and support. If this is prioritized, then I am optimistic about AI’s potential to positively impact human life.

Dean Johnson is a member of the class of 2028 and can be reached at dbjohnson@wesleyan.edu.
