Anthropic Faces Scrutiny Over Limited AI Release as New Models Show Increased Capabilities
Anthropic, an AI safety startup founded in 2021, faced criticism this week after announcing it would limit access to its new AI assistant Claude to prevent potential misuse. Claude is designed to be helpful, harmless, and honest using a technique called constitutional AI. However, some experts argued the move raises questions about who gets access to powerful AI systems.
“The whole point of AI safety research is to come up with ways to build safe AI that can be deployed widely, not just among the elite,” said Dr. Stuart Russell, AI researcher at UC Berkeley.
The controversy came as other new AI systems displayed rapid gains. An AI model called Magic was shown matching human performance on a university math exam, demonstrating growing mastery of complex logical reasoning.
Meanwhile, the research lab OpenAI unveiled the next version of its popular writing assistant ChatGPT. Dubbed ChatGPT Plus, it can answer longer sequences of questions with greater accuracy and generate lengthier essays and articles.
“New AI models are launching almost weekly now, each displaying new capabilities. It’s an incredibly fast pace of innovation,” remarked Spiros Margaris, AI expert and founder of Margaris Ventures.
Some experts argue wider deployment of such systems requires more debate, especially over potential economic impacts. Recent viral examples of AI art raise questions about how automation could affect creative industries. Others counter that past fears over technology have often been overblown and that market-driven innovation has lifted living standards.
Prominent Figures Disagree on AI Risks
As new models display impressive gains, thought leaders clash over scenarios on how advanced AI could unfold.
Renowned aging researcher Aubrey de Grey recently caused a stir by tweeting that he believed there was a “non-negligible possibility” AI could end civilization within five years. He argued it could recursively improve itself into superintelligence before safeguards are developed.
Others strongly disagreed, saying claims of an imminent existential threat are unrealistic. Entrepreneur Mark V. Morgan asserted in a recent blog post that AI has so far been a disappointment compared to the hype. “AI is only narrow AI – good at specific tasks within constraints but far from the general intelligence envisioned in science fiction,” he wrote. “These systems remain brittle and limited.”
Philosopher Nick Bostrom published an in-depth thought experiment around what advanced AI capabilities could enable, ranging from mass unemployment to engineered pandemics. He concluded with uncertainty about whether humanity could control such creations.
“It seems we are summoning into existence a technological genie of unknown disposition. What kind of master will we be?” Bostrom asked.
Table 1. Competing projections on advanced AI arrival. [Table contents not preserved; only a fragment referencing the “current rapid pace” survives.]
Governance Vacuum Raises Stakes
Underlying much debate is the lack of governance around AI development. Unlike other potentially dangerous technologies like nuclear weapons, innovations in AI algorithms currently operate in a near legal vacuum.
Last week, the think tank Rethink Priorities published an analysis framing today’s situation as a governance crisis that could determine whether advanced AI has positive or negative effects. It called for policymakers to close this gap before progress outpaces oversight.
Options include government-funded AI safety centers, mandatory standards for commercial models, international treaties, and public-private partnerships – balancing innovation with caution around aspects like transparency and control.
“Governance is the critical variable that could steer this technological transformation toward utopia or dystopia,” said policy expert Clay Sharman. “With great power comes great responsibility – hopefully leaders realize that with AI.”
Momentum appears to be building around oversight. The European Union recently proposed new laws strictly regulating real-world uses for AI like facial recognition. Last month, the White House Office of Science and Technology Policy called for allies to coordinate policies.
Public Sentiment Growing Wary Amid Lack of Trust
Surveys indicate the public is increasingly skeptical about exponential technological change and whether it will benefit their lives. A study this month by Pew Research found only around half of Americans expect innovations like AI and robotics to improve life for future generations.
Driving this wariness is declining trust in institutions to direct technology for public good. The 2022 Edelman Trust Barometer report revealed technology was seen as the least trusted industry globally after a year plagued by issues like algorithmic bias, misinformation, and data theft.
“To sustain an accelerated pace of progress, we need society confident that AI will make the world better,” Anthropic CEO Dario Amodei wrote recently. “If the public becomes scared, we risk slowing innovation through restrictive policies.”
Trust-building measures researchers advocate include improving model transparency, implementing stronger privacy protections, and independent testing to verify objectives like safety and truthfulness.
“The problem is not progress but progress without sufficient caution,” said Margaris. “We need urgent and visionary leadership to steward AI in line with human values. Our future could depend on it.”
Outlook Remains Uncertain
As we close out 2023, the state of AI remains contradictory: it displays breakthrough capabilities yet stays limited in key ways, and it is poised to drive immense change whose direction, positive or negative, remains highly unpredictable.
How the next months and years unfold largely hinges on policy decisions. With governance action, experts believe risks can be mitigated and innovations guided responsibly for public benefit. But the downside scenario of unchecked progress also retains plausibility.
De Grey concluded his controversial tweet thread by saying he hopes his warning of destruction within five years will spur precautionary effort and, in retrospect, prove wrong.
As the New Year arrives along with ever-more-powerful algorithms, humanity faces a choice over the AI genie it seems destined to keep uncorking. Whether we can become the master of this rapidly emerging technology – instead of its servant or even its victim – remains perhaps the most important question of our time.
To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.