OpenAI, the company behind viral chatbot ChatGPT, announced today that it will launch an online store next week allowing anyone to create customized versions of its powerful natural language AI models. The move promises to democratize access to advanced generative pre-trained transformers (GPTs), but also raises concerns about how these AI tools could be misused.
OpenAI Store Will Sell Access to Tools to Build Custom GPTs
According to an article from TechCrunch, the new OpenAI store will give customers access to software tools and computing resources to build unique GPT models tailored to their own needs and interests. This represents a major shift for the normally secretive AI lab:
“Until now, OpenAI has kept tight control over who gets access to its API and how it’s used. But by launching an app store-like platform, the company is giving third-party developers a lot more autonomy to build on top of models like GPT-3 and ChatGPT.” 1
Rather than simply allowing people to query fixed models like ChatGPT, the store will let users specify custom datasets and fine-tuning goals to adapt the models as they see fit. Wired Magazine explains the appeal:
“Being able to customize and train machine-learning models on specific tasks makes them far more useful—a bot trained to write lyrics or code will be better than a general knowledge chatbot.” 2
This means interested customers could potentially teach GPTs domain-specific knowledge and skills catered to individual industries and applications.
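To make the fine-tuning idea concrete, here is a minimal sketch of what a domain-specific training dataset looks like. The example rows, the law-firm scenario, and the output filename are all invented for illustration; what is real is the JSONL chat format (one JSON object per line, each holding a `messages` list of system/user/assistant turns) that OpenAI's documented fine-tuning workflow accepts as input.

```python
import json

# Illustrative only: two hypothetical training examples teaching a model
# domain-specific knowledge (a law firm, in this sketch).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant for a law firm."},
            {"role": "user", "content": "What is a tort?"},
            {"role": "assistant", "content": "A tort is a civil wrong that causes harm, "
                                             "for which courts can impose liability."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant for a law firm."},
            {"role": "user", "content": "Define 'consideration' in contract law."},
            {"role": "assistant", "content": "Consideration is something of value exchanged "
                                             "by each party, required to form a binding "
                                             "contract."},
        ]
    },
]

def write_jsonl(rows, path):
    """Write one JSON object per line -- the shape fine-tuning jobs expect."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl(examples, "law_firm_training.jsonl")
```

A file like this would then be uploaded to the fine-tuning service, which returns an adapted model checkpoint; real datasets typically need far more than two examples to shift a model's behavior meaningfully.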
Concerns Raised Over Misuse of Powerful Models
While the move democratizes access, experts warn it could also increase risks if people build GPTs to generate misinformation, spam, phishing schemes, and other harmful content.
Bloomberg reports that OpenAI itself has struggled to control how its existing models are utilized:
“OpenAI has faced criticism over harmful bot-written content and finds itself largely unable to track or control how its freely available models are being used by third parties.” 3
And providing direct customer access may only exacerbate these issues. Policymakers, researchers, and even OpenAI investors have called for safeguards:
“Without proper oversight, customized GPTs could automate the spread of misinformation at scale.” — Prominent AI Safety Expert

“If people can train models to generate any type of content, harmful uses seem inevitable.” — Major OpenAI Backer

“I hope OpenAI will take every precaution to prevent malevolent use of AI.”
Only time will tell whether the startup can stay ahead of bad actors as it continues to innovate rapidly.
What Comes Next?
Looking ahead, OpenAI’s store seems poised to fundamentally reshape the AI landscape by putting unprecedented power directly into the hands of users.
The Verge notes this launch appears to be only the first step toward a future platform for sharing tailored models:
“OpenAI CEO Sam Altman hinted that a marketplace for users to sell their customized bots may come eventually too, saying developers can ‘keep [custom agents] for themselves or share them with others.’” 4
If this vision materializes, we could one day have access to a bustling ecosystem of specialized GPT agents trained by the wider community.
Individual users and startups are already brainstorming creative applications, like video game asset generators, tools to automate business processes, and more. 5 The potential seems vast, though it is hard to fully anticipate everything that’s to come.
For now, we await next week’s launch with a mix of excitement and caution, ready to embrace the benevolent AI assistants of tomorrow while working diligently to ensure they develop safely and for the benefit of all.
To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.