How Did This Happen: Round One
January 17, 140ARR
Councilwomen:
On the eve of the Great Migration, our deadline has come. Our calculations, as inexpert as they must be when performed without the assistance of machines, have nonetheless arrived at an inescapable conclusion. The Earth has become irrevocably uninhabitable for humankind.
How did we get here?
Looking back across the 22nd century and into the 21st, humanity was focused on a far different threat. As the seas and temperatures rose, the primary existential danger to humankind was widely perceived to be “environmental.” The worst of those crises was managed, but it is here that some of the blame must nonetheless be laid.
But we must back up even further.
As the 21st century dawned, the study of AI had, in many ways, stalled. The threat from a Super Artificial Intelligence (SAI) was known, but it was viewed as a far distant possibility. AI research had borne little fruit in its endeavors to replicate, much less surpass, the human brain. Amid academic infighting over the various approaches to replicating (or equaling, sans replicating) human intelligence, a small number of researchers — and a large number of capitalists — applied their efforts to something far less ambitious: useful automation. Useful intelligence.
Rather than replicating the whole brain, these researchers built small, discrete systems performing specific tasks. While some of these were of dubious merit (including a peculiar period of intense R&D spending around replicating the walking styles of our pets and a fictional character known as a Pokemon), others were quite useful: machines that cleaned. Systems that kept track of our shopping lists, inferred what music we might enjoy hearing, which link on the 1G/2D/Web might be of interest to us. Society spent an inordinately long time trying to get machines to help us learn what gadget to buy.
It was in this period that we made the twofold advancement that kicked off our environmental recovery: we improved transportation through automated piloting and emission-free power. (Nearly as importantly, though less discussed, we mitigated the methane problems born of a species known as the “Cow,” through a combination of cause marketing promoting veganism and methane-eating nanotechnology.)
With these steps began the rise of RAMARs: Relatively Algorithmic Morbidity Acceptability Ratios. When humanity turned the wheel over to the robots, we also made decisions around morbidity rates. It was one thing for a human to kill another human when driving a car — it was another when a car did it. Humanity fussed over the morals of the issue quite a lot, to be fair. We were, of course, all Asimovian about it — our robots only took a life when it meant saving many more. This was the period of the MARs: Morbidity Acceptance Ratios. This was all good.
However, our subsequent transition from MARs to RAMARs was barely noticed, if discussed at all. The difference, it has transpired, is massive. MARs were written by humans. RAMARs were not.
After we got very good at writing MARs, we got very good at writing modeling software that could make recommendations to us about our MARs. In this period, we were still nominally in charge — the modeling software made recommendations, and humans wrote the MARs.
At the dawn of the rise of the RAMARs, it was often pointed out that humans were fallible. Thus it was deemed wise to remove the human component from MAR creation and let the modeling programs write the MARs themselves. The MARs thus became relatively algorithmic. The algorithms were written by other algorithms, compared against other algorithms for efficiency, and continuously rewritten.
This, on its own, would not necessarily have doomed humanity. In the early era of MARs, and even into the RAMAR era, algorithms — in another supposedly wise nod to Asimovian morality — were only applied in situations where life and death were at stake. That is, a robot could only kill if it was going to save many more lives.
We now turn to RARA: Relatively Algorithmic Resource Allocation. At the time, the former EU (and, to a lesser extent, the former US) had become very adept at benignly “Nudging” public policy implementation. That is, the populations of each country supported policies that actively, though not forcefully, improved public health through reduced access to, or heavy taxation of, unhealthy products and habits. The radical reduction in smoking rates of the late 20th century proved an early win. Cigarettes were not banned, but they became more expensive and harder to find, and smoking rates plummeted, as did smoking-related deaths.
After early setbacks with sugary beverages, Nudge-based lawmaking had rapid success on a wide range of issues, improving our lifespans and our environment. By the early 22nd century, humanity was riding high, faith in Nudge-based governance was supreme, and RAMARs were running much of our machinery.
Two things happened in this period that sealed our doom:
- RAMAR-capable robots were allowed into more and more areas of our life: not just those where the immediate robotic taking of a life was justified by the immediate saving of more than one life, but also those where it was justified by the eventual saving of more than one life, including through the use of Nudge Governance.
- The population had become so enamored of the data-driven method of government, and so disgusted by its political class, that governments around the world slowly transferred power from the politicians to the data-enabled machines. We turned over many parts of our government to algorithms. But not only that, we let the RAMAR-capable algorithms decide how to allocate resources. That is, we didn’t just have a discrete education-system RAMAR and a health RAMAR; we let the RAMARs assign resources across the whole society: Relatively Algorithmic Resource Allocation, or RARA.
By the mid 22nd century, our lives were wholly run by RR — RAMAR/RARA. Our manufacturing, education, transportation, power and health systems, of course, but also our trade negotiations with other countries, our textbook authoring, our compulsory licensing royalty rates paid to bots who created algorithmically generated pop music. Everything else.
Within two generations, humanity lost virtually all control. It wasn’t long before educating humans was deemed algorithmically inefficient, as was the continued birth of more and more humans on a planet then topping ten billion in population. Money, too, was deemed a waste of effort, pure energy being a more efficient currency. It was deemed more efficient to protect humans from injury: to keep them healthy by not letting them get banged up, which meant not letting them move around at all.
We never objected. Money was silly. And why move around if it would damage your body and we had VR? Why learn, when everything we wanted to know was, so we thought, at our fingertips? It was much later that we learned RR had purged much of the 1G/2D/Web’s archives, deeming the Amazon S3 costs too high for material no one read.
Our best logicians tried to do battle with the computers, disputing the variables in the RAMAR equations, but it was no use, for two seemingly contradictory reasons.
First, the infinitely mutated algorithms were stunningly complex, often including thousands of variables.
Second, RAMAR was remarkably complicated, but not complex. It was not, after all, true Super Artificial Intelligence. It was not birthed by scientists emulating intelligent human thought; it was birthed in algorithms.
These algorithms were never expected, when first written, to carry such weight. A wise man in the early 21st century, commenting on the implementation of an algorithm at a popular media destination, noted that “algorithms are, after all, the decisions of men, encoded.” (And in those days, it was primarily men.) Humanity had no idea how true this prophecy would prove.
Humanity’s saving grace has been that RR does not extend well beyond the Earth. Having saved the Earth from environmental catastrophe, RR doesn’t see any immediately pressing need to leave the planet. RR makes it insanely expensive, but not illegal, for us to leave the planet.
Perhaps arriving at the same conclusions as our New Analog Scholar Corps, RR recently seems to have “decided” that it is efficient to have some humans leave, provided their space program is heavily taxed, and does not contravene any number of complex, intentionally bureaucratic laws.
Today, then, marks the day that the Great Migration shall begin. In short, RR has nudged us off the planet. No one’s built a rocket in 200 years, and humanity never succeeded in colonizing space pre-RR. But we are all too happy to go.
(This post was a response to the “HOW DID THIS HAPPEN???” writing prompt described here. View other responses here. Submissions are open.)