Open letter from Elon Musk, Steve Wozniak and others calls for “pause” in high-level AI research.
Artificial intelligence, experts say, is moving closer toward matching human intelligence. Should we stop it? (Image: Tatiana Shepeleva / Shutterstock)
OpenAI, a San Francisco company co-founded by Elon Musk, started as a nonprofit so that it could develop artificial general intelligence applications that “benefit all humanity.” But when OpenAI—which in 2020 converted to a “capped” for-profit structure—released the latest version of its ChatGPT online program, powered by the new GPT-4 model, it set off alarm bells throughout Silicon Valley.
On March 29 Musk—who is no longer on OpenAI’s board of directors—along with Apple co-founder Steve Wozniak and about 1,000 other signatories published an “open letter” calling for a six-month pause in AI research. If all researchers now working on AI cannot put the pause in place, the letter says, federal and state governments should step in and impose a moratorium.
What is ChatGPT and why is it starting to scare even the leading names in California's technology industry—to the point where they now feel that it would be safer for the world if development of new AI systems were put on hold, however briefly?
“ChatGPT is scary good. We are not far from dangerously strong AI,” Musk said when the initial version of ChatGPT was released in November of 2022.
What Does a Large Language Model Like ChatGPT Do?
Accessible via an ordinary web page, ChatGPT is a type of AI known as a Large Language Model. A “language model” is an AI system that can understand (or appear to understand) ordinary written text, and a Large Language Model is pretty much what it sounds like—a language model trained on massive amounts of data, so much that in some cases it is measured in petabytes.
One petabyte is equal to 1,000 terabytes of data. By some estimates, that is so much data that if it were printed out, it would fill roughly 500 billion standard-size pages.
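That estimate is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes roughly 2 kilobytes of plain text per printed page, a figure of our own choosing rather than one from the estimates cited above:

```python
# Rough check of the "500 billion pages per petabyte" estimate.
# Assumption (not from the article): a standard printed page holds
# about 2,000 characters of plain text, i.e. roughly 2 KB.
PETABYTE_BYTES = 1_000 ** 5   # 1 PB = 1,000 TB = 10^15 bytes
BYTES_PER_PAGE = 2_000        # assumed ~2 KB of text per page

pages = PETABYTE_BYTES / BYTES_PER_PAGE
print(f"{pages:,.0f} pages")  # -> 500,000,000,000 (500 billion)
```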
Large Language Models (LLMs) are “trained,” as AI researchers say (GPT stands for “Generative Pre-trained Transformer”), with so much data that they can appear to engage in normal human conversation—or what sounds like normal conversation, anyway. They can answer an astonishing variety of questions and even compose essays, emails and other written documents that, at least at first glance, are indistinguishable from writing created by native speakers of English (or whatever language the LLM is trained in).
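For readers curious what that question-answering looks like outside the web page, here is a minimal, illustrative sketch using OpenAI’s Python client as it existed around the time of writing. The API key and prompt are placeholders, the model identifier is an assumption, and the interface may change:

```python
# Illustrative sketch only: ask a GPT model one question via the
# OpenAI Python library (the 2023-era ChatCompletion interface)
# and print its reply. Requires a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "user",
         "content": "Explain what a large language model is in one sentence."}
    ],
)

print(response["choices"][0]["message"]["content"])
```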
But LLMs have limitations, the main one being that they are expensive. Even a relatively limited LLM can cost nearly $2 million to develop and $87,000 per year to run on a single Amazon Web Services processor, according to an analysis by TechCrunch. Larger models, such as those that process social media posts and Slack messages, could be prohibitively expensive to develop, according to the same analysis.
On the other hand, OpenAI appears flush with cash thanks to a “multiyear, multibillion dollar investment” from the Redmond, Wash.-based tech giant Microsoft. The mega-firm says that its investment will allow it to “independently commercialize the resulting AI technologies.”
Whatever its virtues and drawbacks, ChatGPT is unquestionably popular. According to an analysis by UBS Bank, the online app accrued 100 million users in just the first two months of its existence. By comparison, TikTok, which was previously the fastest-growing app, took nine months to reach the 100 million mark.
What’s So Scary About AI, Then?
Artificial intelligence has a frightening reputation, thanks in large part to its portrayal in Hollywood movies. The homicidal computer HAL 9000 in the 1968 film 2001: A Space Odyssey is an especially masterful fictional portrayal of AI gone haywire. The Terminator action movie franchise offers a more recent and more apocalyptic example of AI gone amok—with the franchise’s “Skynet” system suddenly waging all-out war against humankind and wiping out civilization as we know it.
The type of artificial intelligence portrayed in those and other science fiction films is actually known as artificial general intelligence, or AGI. Ordinary artificial intelligence, by contrast, is already used regularly by millions of people every day. Text message autocorrect, Apple’s Siri virtual assistant, Google Maps, Facebook and other social media algorithms, and the movie and TV show recommendations served up by Netflix are just a few examples of AI that people have come to take for granted.
While AI usually performs specific tasks—finding you the best route to drive home, for example—a machine powered by AGI (sometimes called “Strong AI”) would display, according to a description by venerable computer-maker IBM, “an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.”
Conscious machines that can think like humans? That does sound like something out of a scary sci-fi movie, like Terminator or 2001. There’s only one hitch. There is no such thing as AGI.
At least not yet, and it is a matter of considerable debate when, or if, the technology will get there. But ChatGPT, at least according to some experts, is getting close. A report by the consumer electronics site BGR claimed that the next upgrade of the underlying model, GPT-5, “could make ChatGPT indistinguishable from a human.” Researchers for Microsoft, in a paper published on March 22, asserted that “early experiments” with GPT-4 showed “sparks of Artificial General Intelligence” and that the online chatbot could “reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
Whether GPT-4 has actually reached near-human levels of intelligence or not—and it’s still unclear what “human intelligence” means, specifically—one problem identified by critics such as New York University professor and machine learning expert Gary Marcus remains: transparency, or the lack of it. The claim of human-like intelligence can’t even be “tested with serious scrutiny, because the scientific community has no access to the training data. Everything must be taken on faith.”
What Does the Musk Open Letter Say?
Transparency refers to the ability of other computer scientists, and users as well, to access the data used to “train” an AI system. The “cardinal rule” of artificial intelligence research, according to Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, authors of the AI Snake Oil newsletter, is “don’t test on your training data.” In other words, AI can only be said to “work” when it can solve problems involving data it has not yet seen.
When AI systems are tested on their own training data, there’s no way to know if they will work outside of the lab.
“It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings,” wrote MIT Technology Review Senior Editor Will Douglas Heaven. “This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world.”
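To make the “cardinal rule” concrete, here is a minimal sketch of a held-out evaluation. The library (scikit-learn) and the toy dataset are illustrative choices of ours, not anything the researchers quoted here use: the model is fit on one slice of the data and then judged only on examples it never saw during training.

```python
# Minimal illustration of "don't test on your training data":
# hold out a slice of examples the model never sees while training,
# then measure accuracy only on that held-out slice.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Keep 25% of the examples hidden from the model during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy on training data:", model.score(X_train, y_train))
print("accuracy on unseen data:  ", model.score(X_test, y_test))
```

The gap between those two numbers is the point: a score computed on the training data alone says little about how the system will behave on data it encounters in the world.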
Transparency is one of the attributes that Musk, Wozniak, et al. hope will be addressed during their hoped-for six-month pause in AI research, according to their letter—which, it should be mentioned, does not call for a pause in all AI research. The letter only demands a time-out in “the training of AI systems more powerful than GPT-4.”
The six-month hiatus, the letter appears to say, would be mainly devoted to developing a set of ethical standards and protocols that would prevent any future artificial general intelligence system from turning into Skynet—or at least into an economic force that could cost people their livelihoods.
“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks. “Such decisions must not be delegated to unelected tech leaders.”