In the past few years alone, rapid advances in artificial intelligence have captivated the public’s imagination, ushering in an era of remarkable technology dubbed The Intelligence Age.

What was once the stuff of science fiction—self-driving cars, human-like language models, and automated decision-making systems—has become part of everyday life. But as we look toward the next revolution, Artificial General Intelligence (AGI), the question of who controls its development, and for what purpose, may dictate the future of humanity itself.

Unlike today's narrow AI systems, which are designed for specific tasks, AGI would possess the ability to understand, learn, and apply knowledge across a wide range of domains—matching, or even exceeding, human intelligence. This is not a far leap from today’s most powerful LLMs, such as GPT and Claude.

AGI promises to revolutionize industries, accelerate scientific discovery, and address some of humanity’s most pressing challenges, from climate change to healthcare. Yet, as this technology rapidly advances, its development is increasingly concentrated in the hands of a few powerful entities—corporations and governments with their own interests at stake. This growing concentration of power raises critical questions about access, transparency, and control.

If AGI is shaped by profit-driven motives or strategic priorities, the transformative potential of this technology could be steered toward the privileged few, leaving the broader public with little influence over how it reshapes the world we all share.

OpenAI and the Profit Motive

Facing a projected loss of $5 billion in 2024, OpenAI has announced plans to shift its financial model toward a for-profit structure that offers equity to shareholders. For an organization originally designed as a “capped-profit” hybrid, this transition raises serious concerns about the future of AGI and the alignment of its goals.

Under this new structure, OpenAI—like many other tech companies—faces pressures to deliver rapid financial returns to investors. This shift in focus introduces the possibility that AGI's development could prioritize maximizing corporate interests, whether through targeted advertising, market manipulation, or political influence, over broader societal well-being.

Sam Altman, Midjourney

For instance, OpenAI reportedly plans to raise the ChatGPT Plus subscription cost to $44 within the next five years, making access to its models increasingly exclusive. As compute costs rise and AI access grows more expensive, these tools risk becoming unaffordable for the general public. This for-profit shift undercuts the democratization of AI and significantly reduces the chances of OpenAI releasing major open-source models.

The Alignment Problem

At the heart of AGI's development lies one of its most formidable challenges: the "alignment problem." This refers to the difficulty of ensuring that AGI systems pursue goals aligned with human interests and ethical principles. As AGI grows more powerful and autonomous, the risk of misalignment becomes increasingly dangerous.

For-profit corporations, especially those driven by intense competition to monetize AGI, may not expend the resources necessary to solve this problem, potentially resulting in intelligent systems that act in ways fundamentally at odds with human values.

One of the greatest dangers of misaligned AGI is its unpredictable nature. AGI, unlike current AI systems, has the potential to evolve beyond its initial programming, learning in ways that could make its behavior difficult to forecast or control.

A system optimized for a specific goal, like maximizing corporate profits or increasing efficiency, might pursue that objective at the expense of ethical considerations or even human safety. For instance, an AGI tasked with optimizing a global supply chain could, in pursuit of cost savings, disregard labor rights, environmental protections, or even human life.
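The gap between the objective a system is given and the objective its designers intended can be sketched in a few lines. This toy example (all option names, numbers, and penalty weights are hypothetical) shows an optimizer choosing a supply-chain plan by cost savings alone, while the harms humans care about sit outside its objective entirely:

```python
# Toy illustration of misaligned optimization: the system maximizes a proxy
# objective (cost savings) that omits the constraints humans actually care
# about. All names and numbers below are made up for illustration.

# Each option: (name, cost_savings, labor_violations, emissions)
options = [
    ("status_quo",        0, 0,  0),
    ("cut_safety_audits", 40, 3,  1),
    ("offshore_cheap",    70, 8,  5),
    ("green_supplier",    25, 0, -2),
]

def proxy_objective(opt):
    # What the system was actually told to maximize: savings, nothing else.
    return opt[1]

def intended_objective(opt, labor_penalty=20, emissions_penalty=10):
    # What the designers meant: savings minus penalties for unmodeled harms.
    return opt[1] - labor_penalty * opt[2] - emissions_penalty * opt[3]

best_proxy = max(options, key=proxy_objective)
best_intended = max(options, key=intended_objective)

print(best_proxy[0])     # the optimizer picks the most harmful option
print(best_intended[0])  # the option humans would actually have wanted
```

Running this, the proxy optimizer selects "offshore_cheap" while the intended objective prefers "green_supplier": the system is behaving exactly as specified, and that is precisely the problem. Real alignment is harder still, because the penalty terms themselves are what we do not know how to write down.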

The system’s ability to operate at a speed and scale far beyond human intervention means that, once it begins to act in a misaligned way, humans may have limited ability to stop it.

The alignment problem is compounded by the inherent difficulty of defining human values in a way that AGI can understand and consistently follow. Aligning AGI with human intentions is far from simple, given the complexity and diversity of ethical principles across cultures.

Regulations Fall Short

Despite these risks, regulatory efforts to rein in AGI development remain limited. Just this weekend, California Governor Gavin Newsom vetoed a bill that would have imposed the nation’s strictest artificial intelligence regulations. Senate Bill 1047, authored by state Senator Scott Wiener, aimed to hold companies liable if their AI systems caused harm—whether used to plan a terrorist attack or to spread misinformation at scale. The bill would have required companies to test the most powerful AI systems before release, formalizing commitments that many tech firms, including OpenAI, have made only voluntarily.

The tech industry, however, fiercely opposed the bill. Executives, venture capitalists, and even politicians like Rep. Nancy Pelosi argued that stringent regulations would stifle innovation and impose undue legal risks on developers. They claimed that penalizing AI creators for potential misuse of their technology was impractical, suggesting that blame should rest solely on the individuals who use AI for harm.

The Political Lobby

The story of Senate Bill 1047 may be only the first of many legislative oversight failures to come.

As AGI becomes proprietary and increasingly valuable, the companies that control it could leverage this advantage to reshape political landscapes. The immense power of AGI to optimize decision-making and predict outcomes grants these corporations not only economic dominance but also the ability to influence policy.

The billions of dollars generated by AGI will likely be funneled into political lobbying, driving deregulation efforts that minimize governmental oversight and safeguard company interests.

Political Lobbying, Midjourney

AGI’s unparalleled analytical capabilities would allow companies to refine their lobbying strategies with precision, predicting shifts in public opinion, political dynamics, and legislative opportunities. Armed with these insights, they could craft more persuasive campaigns, back influential candidates, and push for policies that limit AI regulation.

In addition to better-targeted lobbying, AGI could help craft highly sophisticated arguments that exploit legislators' limited understanding of complex AI systems. Lawmakers, often behind the curve on technology, may struggle to resist the influence of AGI-backed proposals that appear well-reasoned but primarily serve corporate goals.

This dynamic risks creating a self-reinforcing cycle, where the entities controlling AGI consolidate both economic and political power. By shaping the very regulations intended to govern them, these corporations could further entrench their dominance, making it harder for governments or the public to challenge their control.

A Shrinking Public Role in AI

As AGI development accelerates, the role of the public in shaping and accessing these powerful technologies is diminishing. While open-source models are being released at a rapid pace, the most advanced systems remain proprietary, locked behind corporate doors.

Benchmarks consistently show proprietary models outperforming open alternatives across the vast majority of tasks. Companies like OpenAI and Anthropic keep their top models either out of public reach or behind paywalls, limiting access to the most powerful AI tools.

Even if these models were made openly available, another divide remains: the high computational cost of running them. Large corporations and wealthy individuals possess the resources needed to run AGI, while the average person or small developer simply cannot afford the required compute. The majority of the world’s population does not even own a GPU capable of running the smallest edge models.

Beyond economic divides, the public is also losing influence due to the increasing complexity of AI models. As systems become more sophisticated, they require deep expertise not just to develop but to understand. This growing knowledge gap between AI experts and the general public makes it harder for everyday citizens to have a say in how AGI is designed and deployed.

At the same time, corporate and government secrecy around AGI development further diminishes public input, particularly when AGI is first developed for military or financial purposes. Historically, some of the most transformative technologies—like rockets or satellites—were initially designed for strategic or defensive uses. Rockets, for instance, were originally developed to carry intercontinental ballistic missiles (ICBMs) long before they were adapted for space exploration, and spy satellites were designed to observe activities on Earth before being repurposed for astronomical research. The same trajectory could easily apply to AGI, with its first applications tailored toward securing economic or military advantages rather than benefiting broader society.

Another key factor is the concentration of data. Much of the data needed to build cutting-edge models is owned and controlled by large corporations. These companies, with their vast data reserves, can train increasingly sophisticated AI systems, while the public and smaller entities have limited access to such data.

The broader picture reveals a weakening public voice in the development of humanity’s most significant potential invention. By democratizing knowledge, data, and computing resources, we aim to counter this trend and regain our place in the conversation.

A Deal With AGI

The dawn of AGI challenges us to reconsider what it means to be human in a world where intelligence is no longer our exclusive domain. As we hand over the reins of decision-making to systems more powerful than ourselves, we are forced to ask: what role do we play in shaping the future of our own creation?

Will AGI elevate human potential, or render it obsolete?

Can we teach machines to honor the complex web of human values, or are we destined to lose control of what we’ve built?

In the pursuit of limitless intelligence, are we prepared for the limits it may impose on us?

The answers are not yet clear. But in the pursuit of superintelligence, the future of humanity may depend not just on what AGI becomes—but on what we choose to become alongside it.

Midjourney

Mirror Article Information

Author address: 0x0c778e66efa266b5011c552C4A7BDA63Ad24C37B

Content type: application/json

App name: MirrorXYZ

Content digest: wbXBlmmGtO9I-cQOlpgXlnQjVYq3aOD-shWcVI0SXKA

Original content digest: hleMYy7UTjckdogQP3EpDAac534qseS1z5jjg-ixEiU

Block height: 1517653

Published: 2024-10-01 01:50:30