Robin's Rules of Order for AI
Discussions about AI have erupted into the public conversation, reaching from the tech community to living rooms. To cut through the clamor, AI Institute Director Robin Feldman proposes Robin’s Rules of Order as we bring modern AI into tech and our daily lives responsibly and intentionally. She offers these thoughts as both a techno-optimist and a techno-realist.
Rule #1: Distinguish Real-time Dangers from Distant Dangers
So, what keeps me up at night . . . and what doesn’t?
“I’m less worried about rogue robots taking over the world in the future than about rogue humans (or rogue nations) wreaking havoc with AI tools today. A new type of Cold War is under way, and the nation with the most advanced AI will dominate. In the spirit of the football playoffs, it’s all about offense and defense.
On the military front, our conventional weaponry, as well as our ability to protect civil infrastructure, must keep pace with AI’s quantum speed. As Eric Schmidt pointed out, our military procurement system is not well suited for AI innovation.
Here’s how I would describe the mismatch between military procurement and AI innovation . . . imagine pulling a plow with a Lamborghini. It’s a waste of the Lamborghini, the plow won’t work very well, and the whole thing will get stuck in the mud.
On the non-military front, we face attacks on the basic foundations of democratic society. These aren’t limited to attacks on freedom of speech; they are an assault on the deliberative process itself, as well as on democratic principles and values.”
How Social Media + AI → Experimentation on Humans by Our Adversaries
“Would we ever allow a foreign nation to use our citizens as unwitting guinea pigs in scientific experimentation? That’s what we face today. In combination, AI and social media can allow our global adversaries to engage in real-time experimentation on humans, specifically on you and me. Our young people, in particular, are endless guinea pigs, being fed slightly varying diets as observers determine the exact amount of potion that will drive and sustain our reactions.
Press reports explain, for example, that social media magnifies hopelessness in teens and young adults by feeding them an endless torrent of information precisely tailored to mirror their deepest feelings. But why should we imagine the only aim is to magnify our own feelings? The technique also can be used to drive and direct our feelings.
Imagine endless deep-fake material created by state-sponsored bots designed to rivet the attention of young people, shift their views, and assure them that others think the same way. And then imagine the effects amplified through a campaign of poisoning training data by artificially elevating information in the data stream. This form of sustained, sophisticated campaign could have a widespread impact on public views, sowing disorder and discontent.
All of this can be done through messages that shape societal views on the democratic process and attempt to destabilize the nation. The challenge, at the end of the day, is to sustain a deliberative process when the “deliberative” messages we receive can be carefully curated and fed to us by global adversaries.
None of this is to suggest that we should halt or ban AI. Innovation moves forward, and we cannot behave like the original “saboteurs” throwing our sabots (shoes) into the machinery to stop industrialization. Instead, we should focus on maintaining our lead in the international race for AI technology. Like any other form of cold war, we have to guard against societal harms and destabilization beyond bullets or balustrades.”
What the Y2K Panic Demonstrates About Distant Dangers
“Although it is ancient history for anyone under 25, the year 2000 was anticipated with quiet dread. That was not because of a doomsday end-of-the-world prediction, but because of a well-founded computer-science concern that technology products would instantly stop working.
Why did we fear Y2K? Since the start of the digital revolution, programmers had been saving precious storage space by writing years with only two digits. It’s like talking about “the ’60s” instead of saying 1960.
Unfortunately, programmers didn’t focus on what would happen when the 1900s became the 2000s.
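The two-digit shortcut is easy to sketch. Here is a toy illustration, not code from any real system, assuming a hypothetical legacy program that stores only the last two digits of the year:

```python
# Toy sketch of the Y2K bug: a legacy program saves storage by keeping
# only the last two digits of the year, then does interval arithmetic
# directly on those two digits.

def years_between(start_yy: int, end_yy: int) -> int:
    """Naive interval arithmetic on two-digit years."""
    return end_yy - start_yy

# An account opened in 1997 ("97") and checked in 1999 ("99"): correct.
print(years_between(97, 99))   # 2

# The same account checked in 2000 ("00"): the interval goes negative,
# because "00" looks earlier than "97".
print(years_between(97, 0))    # -97
```

The remediation scramble of the late 1990s was, in essence, finding every place code did arithmetic like this and widening the years to four digits (or adding a pivot rule to guess the century).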
As the year 2000 loomed, experts predicted that when the clock struck 12:00 AM on New Year’s Day, computers would go haywire, befuddled by a date that literally did not compute.
Elevators would stop running. Flights would be grounded. Data would disappear. ATMs wouldn’t work, potentially leading to a run on banks. Security and communications would fail, and general mayhem would ensue. But after an extraordinary scramble of reprogramming, involving partnerships between the public and private sectors, the dawn of the millennium passed with barely a hiccup.
Today, we are again faced with a transformative technical advancement. And we can find much wisdom in the lessons of rounding the millennial corner.
First, today is the time to think about where technology will lead tomorrow. As we are seeing from the downsides of social media, tomorrow is a little late.
Second, envisioning a disaster doesn’t necessarily mean it will happen. To the extent there are concerns about future AI disasters, we can face them now and determine how to avoid them.
Finally, avoiding disasters generally requires enormous cooperation between the public and private sectors.
In the end, there is much we can learn from Y2K about distant dangers.”
Rule #2: It’s All About Verification
The name of the game right now is “verification.” Without that, society cannot develop the trust needed to benefit from AI’s potential, not to mention keep us from tearing each other apart.
Verification at Home
Verifying Source and Nutrition Content of the Virtual Material We Consume
The Problem
How do we trust any virtual content we see or hear in a world where what we see and hear can be created or altered by AI without any indication? The avalanche of information coming at us can be what I call “blindingly blurry.”
Now, just to be clear, I would never suggest there should be an arbiter of absolute “truth.” I believe truth exists, but I don’t trust anyone else to tell me what it is. I want to make my own decisions, but I want them to be well-informed.
Industry is developing various forms of trust tech, but pin-prick approaches are unlikely to bring the widespread assurance that society needs. What we need is something that will give us confidence that we know the source of the material we consume daily and can judge its nutrition content.
That is, we want to know its source and to what extent it is AI-generated or altered. We have models for this, from the private Good Housekeeping Seal of Approval to public quality control by the FDA.
Rule #3: Human Communication is for Humans
We tend to think that making AI more human-like makes it more useful and easier to use. But what if the opposite is sometimes true? What if the very features that make AI feel human, its natural flow and conversational tone, also make it less efficient and even dangerous?
Why Does AI Sound Like Us?
The LLMs we have today were actually taught to approximate human communication. The version of ChatGPT that burst onto the scene in late 2022 was first trained by feeding it vast amounts of text and having it predict the next chunk of characters; it was then refined by giving the system examples of interactions between humans and simple chatbots, so that the AI could learn the chatbot style of interaction. That transformation was an essential part of what created the magic of modern LLMs: the eerie sense that we are speaking to something sentient.
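The underlying objective, predict the next chunk of text, can be shown with a toy counting model. This is only an illustration of the prediction task; real LLMs use neural networks over tokens, not the simple frequency table sketched here:

```python
from collections import Counter, defaultdict

# Toy next-character predictor: count which character follows each
# character in the training text, then predict the most frequent one.
# The objective ("predict what comes next") is the same one LLMs are
# trained on, even though their machinery is vastly more sophisticated.

def train(text: str) -> dict:
    """Build a table of character -> counts of following characters."""
    follows = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, ch: str) -> str:
    """Return the most frequent character seen after ch."""
    return model[ch].most_common(1)[0][0]

model = train("the theory of the thing")
print(predict_next(model, "t"))   # 'h', since "h" always follows "t" here
```

The second training stage described above, refining the model on examples of human-chatbot exchanges, is what steered this raw prediction machinery toward the conversational style we now recognize.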
The Inefficiency of Human Language in AI
At the moment, the most common approach is to speak into a machine and then read the output, but from an efficiency standpoint, that is remarkably slow and cumbersome. As simple examples, think of the iPhone feature that allows you to nod if you want to answer an incoming call. Or think of pushing buttons on a keyboard, or colored lights that signal on-air status. Even American Sign Language uses shorter forms of communication. Why should we imagine that human language is the proper vehicle for humans to interact with machines, or for machines to interact with each other?
The Dangers of AI’s Human-like Communication
In using human language, AI can lull us into believing it is sentient and trusting its responses, as well as seduce us away from time spent with other human beings. The latter carries serious psychological consequences, from dampened conversational skills and social isolation to, at worst, what is improperly termed “AI psychosis.”
Human Communication is for Humans
There is something undeniably attractive about a machine that can speak like we do (it lowers barriers and invites interaction), but it is inefficient and risky. As we shift beyond models trained on documents to models trained on physics, best practices and policies should encourage different modes of human-machine and machine-machine communication, while preserving pure language for humanity. In short, human communication should be for humans.