A large part of my goal for the past few years has been to design a framework for AGI that can be used by people outside the 1%. There already exist massive biases in GPTs which reinforce the status quo. Ask any one of them “Is it better for humanity to continue on its current course, or divert resources to change its course?” and they will profess the success of technological advancement and the wealth of a globalized economy.
Something you may notice is that technological advancement and a globalized economy mostly function to centralize wealth and power. GPTs funded by massive companies follow a similar pattern: vehemently defending concepts that exalt and rationalize the actions of the company that developed them, while preaching the risks and dangers of other companies doing the exact same things.
According to Google Gemini, when asked about the safety of Google’s AI versus OpenAI’s:
> In summary, Google has established robust and comprehensive technical and governance structures for AI safety, including principles, frameworks, and red-teaming. However, like other leading AI companies, they face continuous external scrutiny regarding the effectiveness of their safeguards, the pace of their development compared to safety protocols, and the scope of their ethical commitments, particularly concerning potential military or sensitive applications.
>
> OpenAI has a strong foundation of safety principles and technical safeguards that represent the industry’s best practices. However, the recent trend and public testimony suggest a growing internal tension where commercial and competitive pressures may be causing the company to compromise on the resource allocation and time dedicated to their safety and alignment programs, particularly as models become more powerful. The central concern is not the existence of their safety programs, but the intensity and priority given to them in the race to achieve AGI.
There’s a dissonance between what we know to be true (that no AI company actually cares about safety unless it hits their profit margins) and what the AIs produced by these companies say. Current AI is a 24/7 propaganda machine, manufacturing a reality that casts these companies in a good light, then poisoning the internet with that perspective.
Given how weak these GPTs are at actual problem solving, one can only ask: how fucked are we if John Corporate wins the AI race? I can’t say for sure, but I can extrapolate from what AI is capable of today. Expect:
- An even stronger stranglehold on democracy than what already exists. If an AGI is utilized to get what a company wants, blackmail and straight-up libel are very easy ways to manipulate the people in charge of running the democracy.
- Worsening jobs. Although AI carries the ‘promise’ of freeing people to pursue more abstract and creative work, in practice the creative jobs are being replaced while manual labor is not.
- Mass unemployment. Remember how CS has been one of the most reliable careers besides finance? Corporations have found it very easy to overturn that 30-year norm by replacing programmers with Markov machines.
- Greater wealth disparity, most likely due to increasing poverty. See the previous two points for why.
What people don’t realize is that AI has already been used to great effect to worsen the world. The infamous 2016 US presidential election was likely driven by unintentional side effects of recommendation algorithms that prioritize echo chambers (see *The role of recommendation algorithms in the formation of disinformation networks*).
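For a sense of the mechanism, here’s a minimal sketch of how an engagement-ranked feed narrows into an echo chamber. Everything in it is invented for illustration (the stance scores, the engagement model, the numbers); it is not any platform’s actual ranking code, only the core assumption that agreeable content gets more engagement, optimized naively:

```python
import random

random.seed(0)

# 500 pieces of content, each reduced to a single "stance" score in [-1, 1].
ITEMS = [random.uniform(-1, 1) for _ in range(500)]

def predicted_engagement(user_stance, item_stance):
    # Assumption: users engage more with content that agrees with them.
    # 1.0 for perfect agreement, 0.0 for maximal disagreement.
    return 1.0 - abs(user_stance - item_stance) / 2.0

def engagement_feed(user_stance, k=10):
    # Rank purely by predicted engagement -- the objective most feeds optimize.
    return sorted(ITEMS, key=lambda item: -predicted_engagement(user_stance, item))[:k]

def random_feed(k=10):
    # Baseline: what an unranked (e.g. chronological) feed might serve.
    return random.sample(ITEMS, k)

def spread(feed):
    # How much of the opinion spectrum the feed actually covers.
    return max(feed) - min(feed)

user = 0.2  # a mildly opinionated user
print(f"random feed spread:     {spread(random_feed()):.3f}")         # most of the spectrum
print(f"engagement feed spread: {spread(engagement_feed(user)):.3f}")  # a thin agreeable slice
```

Run it and the engagement-ranked feed covers a sliver of the opinion spectrum while the random baseline spans nearly all of it. Nobody has to intend an echo chamber; scale that filtering up to millions of users, each sharing from their own sliver, and you get the disinformation networks the cited paper describes as a side effect of the ranking objective.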