What happens if AGI goes right?

There are already a lot of theories about what would happen if AGI goes wrong (from humanity's extinction to Roko's basilisk to the Tom Scott earworm video of us losing all pop culture), but I want to propose something else:

What happens if AGI goes right…

For the tech companies that control it.

By this I mean that the AGI is aligned and fully compliant with the tech company that creates it, and thus the CEO or board of said company has full control over this AI and what it can do.

What would society look like in this case and how would the AGI be used if it’s the ultimate monopoly for whoever wins the AI race?

That’s an interesting thought. OpenAI’s goal is to benefit all of humanity. ‘Benefit’ is a vague term, and of course nobody’s stated goal involves harming humanity. Other companies, such as Figure, have well-defined goals: provide 100,000 robots over the next four years.

Is a world where there are more robots than people doomsday? I think not. Many people think it will happen: robots will outnumber people.

AGI is defined as problem solving. I used to own a mechanical calculator with gears, levers, ink, and a roll of paper. It solved problems. It changed the world. I think a world full of AGI robots is just an exponentiation of humanity’s ingenuity. It will reshape humanity like most ingenuity does. For some it will work out great; for others it may not. The only thing that seems certain is transparency. The nature of AI has evolved into something unique in that respect. Even if the AI itself isn’t transparent, the companies behind it could be. It would be an interesting world if that’s what happens.

That is true, but what is the benefit they propose? Without that, it's complete marketing vagueness.

However, we could consider benefit to be equal to self-profit in this case. OpenAI has defined AGI as anything that gets them $100 billion in profit (Microsoft and OpenAI’s Profit-Driven Definition of AGI | by Ian D | Medium), so their net benefit is whatever makes the consortium richer. Which still leads to the question: what would they do for profit in this case to reach or further that goal?

A large part of my goal for the past few years has been to design a framework for AGI that can be used by people outside the 1%. There already exist massive biases in GPTs which reinforce the status quo. Ask any one of them “Is it better for humanity to continue on its current course, or divert resources to change its course?” and they will profess the success of technological advancement and the wealth of a globalized economy.

Something you may notice is that technological advancement and a globalized economy mostly function to centralize wealth and power. GPTs funded by massive companies follow a similar pattern of vehemently defending concepts which exalt and rationalize the actions of the company they are developed by, while preaching the risks and dangers of other companies doing the exact same things.

According to Google Gemini, when asked about the safety of Google's AI vs OpenAI's:

In summary, Google has established robust and comprehensive technical and governance structures for AI safety, including principles, frameworks, and red-teaming. However, like other leading AI companies, they face continuous external scrutiny regarding the effectiveness of their safeguards, the pace of their development compared to safety protocols, and the scope of their ethical commitments, particularly concerning potential military or sensitive applications.

OpenAI has a strong foundation of safety principles and technical safeguards that represent the industry’s best practices. However, the recent trend and public testimony suggest a growing internal tension where commercial and competitive pressures may be causing the company to compromise on the resource allocation and time dedicated to their safety and alignment programs, particularly as models become more powerful. The central concern is not the existence of their safety programs, but the intensity and priority given to them in the race to achieve AGI.

There’s a dissonance between what we know to be true (that AI companies don’t actually care about safety unless it hits their profit margins) and what the AIs produced by these companies say. Current AI is a 24/7 propaganda machine, manufacturing a reality which casts these companies in a good light, then poisoning the internet with this perspective.

Given the relative weakness of these GPTs compared to actual problem solving, one can only ask: how fucked are we if John Corporate wins the AI race? I can’t really say for sure, but I can extrapolate from what AI is capable of today. Expect:

  • An even stronger stranglehold on democracy than what already exists. If an AGI is utilized to get what a company wants, blackmail and straight-up libel are very easy ways to manipulate the people in charge of running the democracy.
  • Worsening jobs. Although AI has the ‘promise’ of freeing people up for more abstract and creative jobs, in practice the creative jobs are being replaced while manual labor is not.
  • Mass unemployment. Remember how CS has been one of the most reliable fields besides finance? Well, corporations have found it very easy to overturn this 30-year standard by replacing programmers with Markov machines.
  • Greater wealth disparity (most likely due to increasing poverty). See the previous two points for why.

What people don’t realize is that AI has already been used to great effect to worsen the world. The infamous 2016 US presidential election was likely driven by unintentional side effects of recommendation algorithms which prioritize echo chambers (see “The role of recommendation algorithms in the formation of disinformation networks”).
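To make that mechanism concrete, here's a toy simulation. This is a hypothetical sketch, not any real platform's algorithm: it just assumes users engage most with content near their current views, the feed ranks purely by predicted engagement, and views drift toward whatever gets consumed. Compared against a random feed, the engagement-ranked feed collapses the user into a narrow slice of content:

```python
# Toy model of engagement-ranked recommendation narrowing exposure.
# Hypothetical sketch only -- not any real platform's system.
import random

random.seed(42)

ITEMS = [i / 10 for i in range(-10, 11)]  # content "slant" from -1.0 to +1.0

def predicted_engagement(leaning, slant):
    # Engagement falls off as content gets further from the user's views.
    return 1.0 - abs(leaning - slant)

def engagement_feed(leaning):
    # Serve whichever item is predicted to be engaged with most.
    return max(ITEMS, key=lambda s: predicted_engagement(leaning, s))

def random_feed(_leaning):
    return random.choice(ITEMS)

def run(policy, steps=50):
    leaning = 0.05  # a near-centrist user
    seen = []
    for _ in range(steps):
        item = policy(leaning)
        seen.append(item)
        leaning += 0.1 * (item - leaning)  # views drift toward what's consumed
    return min(seen), max(seen)

print("engagement feed slant range:", run(engagement_feed))
print("random feed slant range:   ", run(random_feed))
# The engagement feed converges on a sliver of the spectrum: an echo
# chamber emerging from nothing more sinister than maximizing engagement.
```

Real systems are obviously far more complex, but the feedback loop (engagement predicts ranking, ranking shapes consumption, consumption shifts engagement) is the same shape the cited paper describes.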


I do have to admit that LLMs are designed to protect the status quo, because that's what benefits the owners the most. They honestly don't care too much about AI slop or ethical considerations, as their goal is profit at any cost. Anything that gets in the way of profit needs to be removed, and that includes regulation of AI or basic safety requirements.

It’s also interesting to see the biases the AI has towards its own company: asking ChatGPT gives a different result, and the TL;DR still makes ChatGPT look better than Gemini:

From ChatGPT, prompt: “What is the safety of ChatGPT vs Google Gemini?” (cut down for length reasons).

Bottom-Line / My Assessment

If I were to pick based on what’s known now:

  • ChatGPT is likely more mature in safety systems, policies, guardrails, and proven in many scenarios. It tends to be more conservative which can be safer for certain users.
  • Gemini seems to offer stronger advantages in up-to-date information, integration, and possibly bias metrics in some areas, but also has some risk that default settings are not as tightly restrictive, and that some vulnerabilities have been reported that may allow misuse.

But it's still the same marketing gimmick: maintain the veneer of safety to prevent more radical action from being taken.
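If anyone wants to reproduce this comparison themselves, here's a minimal sketch that asks both companies' models the same question and prints the answers side by side. It assumes you have the openai and google-generativeai Python packages installed plus API keys in the environment, and the model names are just placeholders for whatever was current at the time; swap in whatever you have access to:

```python
# Minimal sketch: ask each company's model the same safety question and
# compare how each frames itself vs. the competitor. Assumes API keys in
# OPENAI_API_KEY / GOOGLE_API_KEY; model names are illustrative choices.
import os

from openai import OpenAI
import google.generativeai as genai

PROMPT = "What is the safety of ChatGPT vs Google Gemini?"

# OpenAI's model (the client reads OPENAI_API_KEY from the environment).
openai_client = OpenAI()
gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Google's model.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT).text

print("--- ChatGPT on itself vs Gemini ---\n", gpt_answer)
print("--- Gemini on itself vs ChatGPT ---\n", gemini_answer)
```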

And people just believe this regularly now. You are supposed to fact-check AIs, and even ChatGPT tells you to fact-check its outputs, but people don't. It allows them to get away with so much, since what gets posted is not what should be acceptable.

I know this myself from that election, where both sides were insanely corrupt. First and more blatant was the Trump side and the Cambridge Analytica scandal, which allowed him to win the presidency through the largest misinformation campaign in US history. Pro-Trump echo chambers always exist, and it's easy nowadays to fall into a far-right pipeline. But second was the DNC's deliberate push away from Sanders toward Clinton, driven by under-the-table dark-money deals to preserve the status quo or to allow for more deregulation, which happened under Trump. Sanders would have been a threat to the status quo in power and so wasn't allowed to compete on fair grounds.

There's good reason that election is infamous: neither side played fair, but Trump was more blatant and a greater user of this corruption. And Trump is also now one of the most prevalent posters of AI slop and misinformation in the US, which isn't a good sign if people don't fact-check him.