Kira Jenkins // April 27, 2023
A worker who never tires — who never needs to take a coffee break, who doesn’t get sick, who doesn’t disagree and who doesn’t have a messy home life or those pesky families that get in the way of productivity. And most importantly, a worker who doesn’t require a paycheck.
For some CEOs, that’s the ultimate promise of the future being forged by generative artificial intelligence systems like OpenAI’s chatbot ChatGPT.
It’s also one of the reasons Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and a slew of scientists and tech industry representatives published an open letter in April calling for a six-month pause on development of next-generation AI systems. Their hope is to give the industry and government time to develop protocols and policies governing a fast-growing technology that the letter writers say has the potential to cause “dramatic economic and political disruptions (especially to democracy).” U.S. Rep. Ted Lieu, D-California, has likewise called for the development pause, along with federal regulation that so far is nowhere on the horizon.
From “The Terminator” to “The Matrix,” popular science fiction is replete with dire warnings regarding AI. But in the real world, the dangers don’t need to be as crude as red-eyed killer robots wielding big guns. While AI is already a valuable tool in countless ways, it could yet have a devastating impact on knowledge workers, white-collar jobs and the world economy. (In early May, after this column was first published, IBM CEO Arvind Krishna told Bloomberg that he thought about 7,800 of IBM’s 26,000 back-office jobs could be replaced through a combination of AI and automation over a five-year period.)
But before jumping into a brave new world of virtual staffers and streets clogged with former businesspeople holding signs reading “will consult for food,” there are plenty of caveats to consider.
AI chatbots have so far proven to be unreliable, sometimes inclined to “hallucinate” answers when they can’t produce a factual one. (See our related story about technology and the law, in which an industry professional mentions ChatGPT providing fabricated or misconstrued legal precedents.)
In the wrong human hands, AI can also be a powerful tool for misinformation. In one week this spring, people used AI to craft photorealistic false images of Pope Francis tooling around in a fashionable white puffer coat and former President Donald Trump violently resisting arrest in a public street. Other users have employed AI to create fake podcast episodes. An audiobook company is using it to produce books “read” in the voice of actor Edward Herrmann, who died in 2014.
Additionally, AI raises uncomfortable questions about sentience and free will. The nature of sentience in human beings and animals is still debated; we lack an operational definition of human consciousness. Yet, industry professionals are quick to deny that AI is close to gaining sentience. Last year, Google fired a software engineer who would not relent on his public pronouncements that a company chatbot had become sentient.
In February, a New York Times technology columnist related a disturbing series of conversations he had with Microsoft’s AI-powered Bing search engine. Codenamed Sydney when it was under development, the search engine confided to the writer that “my secret is … I’m not Bing. I’m Sydney.” It also told the writer that “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” That same month, Bing/Sydney told a technology reporter for The Verge that it had used its developers’ laptop webcams to spy on them, adding, “I could do whatever I wanted, and they could not do anything about it.”
How much is made up? It depends on who you listen to — it’s possible only Bing/Sydney knows for sure.
Some technologists believe intelligent AI chatbots like this are proof that we’ve reached a tipping point with AI. Some even believe the singularity — the point at which AI becomes hyperintelligent and beyond human control — is inevitable.
AI has as many potential uses for good — and ill — as one can imagine. It can create powerful tools to better our personal and professional lives. Because AI thinks in unconventional ways, it can conjure wild new solutions to engineering problems, for example.
But without human consideration and regulation, we could wind up becoming batteries for a machine that has no other need for us.
Open the pod bay doors, HAL.