Meta CEO Mark Zuckerberg laid out ambitious plans this week to cement the company’s position as an AI leader, including deploying hundreds of thousands of advanced AI training chips, consolidating research efforts, and developing open-source artificial general intelligence (AGI).
Massive Chip Investment for AI Supremacy
In an interview at Meta’s headquarters, Zuckerberg said the company will have 350,000 Nvidia H100 GPUs – the most advanced AI accelerator on the market – by the end of 2024. This represents a multi-billion dollar investment to scale up self-supervised training of conversational AI models like Llama.
“We are committed to developing both self-supervised and supervised AI to achieve our goal” – Mark Zuckerberg
The H100 delivers up to 60% better performance than its predecessor for recommendation systems, conversational AI, and computer vision workloads that are key to Meta’s products.
This massive chip investment will allow Meta to continue rapid iterations and scaling of models like Llama, staying competitive with the torrent of AI advancements over the past year from startups like Anthropic and large tech rivals.
Consolidating AI Research and Product Teams
In conjunction with the expanded compute infrastructure, Zuckerberg discussed consolidating previously disparate AI research and product development teams.
“We have a big opportunity to ramp up all this work even more … I’m quite excited about this.”
Combining teams is expected to accelerate the integration of new AI capabilities directly into Meta’s family of apps at global scale. Zuckerberg had previously been criticized for lacking deep involvement in AI development at Meta, but he described much closer collaboration going forward.
Pushing Open Source AGI with Industry Partners
Perhaps most ambitiously, Zuckerberg repeatedly emphasized Meta’s goal of developing and open sourcing artificial general intelligence (AGI) that can perform a broad variety of tasks as well as humans. He also directly addressed expert alarm about rushing to develop transformative AGI without sufficient safety measures.
“I think it would be too risky for any one company to build general AI on its own”
This open, collaborative approach is meant to distribute both the benefits and accountability for safely guiding cutting-edge general intelligence research. However, many experts remain concerned about this “AI race” between big tech firms.
With Meta now on track to deploy nearly 600,000 advanced AI training chips by the end of 2024, the company is positioned to rapidly match industry breakthroughs in areas like human-level conversational AI. Zuckerberg plans to provide further updates on internal progress developing and scaling models like Llama, as well as on partnerships with academic researchers and other tech giants toward responsibly building AGI.
How models like Llama and any future AGIs are applied across Meta’s family of apps will also be closely watched. Text generation for advertisements and content recommendations could become widespread, while still respecting user privacy commitments. Additionally, further unifying infrastructure and teams could enable new smart features across Facebook, Instagram, WhatsApp and Meta’s VR platforms.
While experts continue debating the risks of the torrid pace of AI development, Meta is clearly pressing further ahead in its pursuit of industry supremacy. The massive investment in next-generation hardware and the focus on groundbreaking models underscore the company’s long-term commitment. What remains to be seen is whether Zuckerberg’s more open, collaborative approach to AGI research will bear fruit faster than the proprietary work underway at other major players like DeepMind, Anthropic and OpenAI.
To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.