What are some defining characteristics of a decade in which transformative AI arrives? I’m struck by the archaic and somewhat medieval outline that comes to my mind. It will probably be very different from the flat, accessible, and homogenizing world that was taking shape between the 1960s and early 2000s.
In contrast, our current moment is something of a golden age for individualism and decentralized power. We are lucky to be alive at a moment when the future trajectory of our world is still unusually malleable to such a large number of people and organizations.
The first thing that comes to mind is the increasing importance of capital. As many forms of labor become increasingly substitutable with combinations of AI and specialized software, I expect the relative value of capital required to develop and use these services to increase as well. This could contribute to the intensification of both the good and bad effects of wealth inequality we already observe in the present day.
Those with more assets have almost always had more power to influence the world around them than those with less. But this could become even more true in the next decade, as earning additional assets through human labor becomes increasingly difficult.
Wealth inequality currently plays some role in causing a spectrum of good and bad outcomes. Innovation and market dynamism sit at the positive end, while lack of access to quality healthcare and education sits at the negative end. These effects could become even more extreme unless there are proactive efforts to change those outcomes. However, the governments of the countries where most of the new wealth is being generated have a spotty track record of effective redistribution - so I think intensification of both the positive and negative effects of wealth inequality is likely.
Intensification of social structures could encompass a much broader span of things than just wealth inequality. AI systems that become substitutes for human labor will encode certain sets of preferences and values that then have impacts at scale. Unless companies and governments engage in some forethought and preparation, those preferences and values might not be so easy to change either during development or after deployment.
How those preferences and values are intentionally set (via selection of training datasets, reinforcement learning from human feedback, world model design, testing and creation of guardrails, etc.) - or aren’t - could have immense consequences. A concrete early example of this debate is the public focus on social media content recommendation algorithms and the ways in which they may have already had massive effects on people’s behaviors, health, and political beliefs, especially those of children.
The precise combination of these preferences and values could be so important that AI systems and their components become highly targeted by governments, corporations, political organizations, and wealthy individuals looking to pursue particular agendas (just as social media platforms currently are).
Imagine organizations pursuing multi-year strategies to influence the distribution of content in a training dataset for a critical commercial AI system - or creating and deploying agents that persistently try to jailbreak critical government AI systems and record successful prompts for future use, while disguising their interactions as human-like. As the stakes rise and capabilities advance, these scenarios and many others become increasingly likely (if they are not already happening). All of this could happen in addition to and on top of the current persistent state of traditional cyber-warfare occurring amongst companies and countries, where model weights and other types of AI intellectual property are already high-value targets.
Regardless of which intentional choices are or are not made when designing large language models, it is not currently possible to understand the exact logic by which a model returns a certain response. We can make some guesses through experimentation and reverse engineering neural networks, but are far from having complete knowledge. If AI systems eventually substitute for human labor on a large scale, the lack of interpretability could have all sorts of strange side effects.
Think of the complex incantations and rituals that medieval magicians (and their royal or mercantile sponsors) thought were effective for summoning and consulting spirits and entities. It is not a stretch to imagine that homing in on exactly the right combination of process & ingredients (contextual data, prompt language, chain-of-thought reasoning, training compute, fine-tuning, etc.) needed to obtain certain outcomes could become an even more arcane process of this sort. At the most basic level, there is already a clear trend: most of the leading AI development companies have gradually decreased the amount of design and development information they publicly disclose with each subsequent release. The potential result is a world where only a very few can even approximate how and why some very consequential decisions and actions are taken.
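The "incantation" problem can be sketched with a deliberately contrived toy. Below, a cryptographic hash stands in for an opaque model - an assumption for illustration only, since no real LLM works this way - to show how trivially different phrasings of the same request can map to unrelated outputs, with no human-readable explanation of why.

```python
import hashlib

def toy_model(prompt: str) -> str:
    """Stand-in for an opaque system: a deterministic but
    unexplainable mapping from prompt to output.
    (Illustrative only - real models are not hash functions.)"""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return digest[:8]

# Three near-identical "incantations" for the same task:
variants = [
    "Summarize the quarterly report.",
    "Summarize the quarterly report!",
    "Please summarize the quarterly report.",
]
for v in variants:
    print(f"{v!r} -> {toy_model(v)}")
```

The same prompt always yields the same output, yet a one-character change yields something entirely unrelated - a caricature of why practitioners end up cataloguing exact prompt recipes rather than reasoning from first principles.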
But the above possibilities - increasing wealth inequality, intensifying social structures, and more opaque technology - actually make me feel much more hopeful about the present moment and the opportunities at hand. Our current moment is one where anyone aware of these trends can earn more of the capital that will be increasingly useful, anyone can take the time to understand the design choices being made in LLM development and attempt to change them through technical or policy means, and anyone can peruse a vibrant ecosystem of public research.
If this level of agency is open to us now, then the worst possible outcomes of the above trends - extreme over-centralization, entrenched inequality, and incomprehensible systems - can certainly be avoided. Just because current forces have inertia trending in certain directions does not mean those outcomes are inevitable. There are many ways in which AI systems could be intentionally leveraged to strengthen individual liberties and steer us towards brighter futures, and I look forward to writing more about those as well.