Intel has unveiled its much-anticipated Core Ultra processors, a major step forward in AI-capable computing for both consumer devices and data centers. The chips integrate specialized AI cores to accelerate workloads such as image processing, speech recognition, and recommendation systems.
Key Details on the New Core Ultra Processors
The flagship consumer chip is the Core Ultra 9 185H, the top part in the new mobile lineup (the Core i9-13980HX, sometimes conflated with this launch, is a 13th Gen chip from the prior generation). Key specs:
- 16 cores (6 performance cores, 8 efficient cores, 2 low-power efficient cores)
- 5.1 GHz max turbo frequency
- Integrated Intel Arc graphics with 8 Xe-cores
- Dedicated neural processing unit (NPU) for AI acceleration, branded Intel AI Boost
For servers, Intel announced 4th Gen Xeon Scalable “Sapphire Rapids” processors with up to 60 cores per chip and built-in accelerators, including Advanced Matrix Extensions (AMX), for AI inference and training.
The Core Ultra represents Intel’s biggest client architectural shift in a decade. The chips use a chiplet-based (“tile”) design, with compute, graphics, SoC, and I/O tiles connected through Intel’s Foveros 3D packaging technology for fast data transfer between them. (RibbonFET, sometimes mentioned in this context, is a transistor architecture for future process nodes, not an interconnect.)
Integrated AI Acceleration: A Game-Changer
The key innovation in Core Ultra is a dedicated AI accelerator: the neural processing unit (NPU), which Intel brands Intel AI Boost. This specialized engine lets tasks like background noise cancellation and image upscaling run instantly on the device rather than relying on the cloud.
Intel Deep Learning Boost Capabilities
- Up to 2x higher inferencing throughput
- Up to 9x higher TOPS (tera operations per second, a peak measure of AI compute)
- Supports popular frameworks like PyTorch and TensorFlow
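To make the TOPS figure above concrete: it is a peak-throughput number, and a common back-of-the-envelope estimate (not Intel’s published methodology) multiplies the number of multiply-accumulate (MAC) units by two operations per MAC and by the clock frequency. The accelerator configuration below is purely hypothetical, for illustration:

```python
def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Rough peak-throughput estimate in TOPS (tera operations per second).

    Each multiply-accumulate counts as two ops (one multiply, one add).
    This is an illustrative formula, not Intel's published NPU spec.
    """
    return mac_units * ops_per_mac * clock_hz / 1e12

# Hypothetical accelerator: 4096 MACs clocked at 1.4 GHz
print(round(peak_tops(4096, 1.4e9), 1))  # → 11.5
```

Real-world throughput is typically well below this peak, since it assumes every MAC unit is busy every cycle, which memory bandwidth and model structure rarely allow.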
For users, this means a more responsive, personalized, and secure experience. The on-device AI will enable things like real-time background blurring on video calls, smarter recommendations, and enhanced search. For enterprises, it allows more data processing to happen at the edge while improving latency, efficiency, and privacy.
Battle for AI Chip Supremacy Heating Up
The Core Ultra launch positions Intel to compete with Nvidia and AMD to power the next generation of AI applications. Intel CEO Pat Gelsinger took direct aim at market leader Nvidia in the launch presentation, saying “The entire industry is motivated to eliminate the CUDA monopoly.”
Nvidia pioneered GPU-powered AI acceleration and Intel aims to offer CPU and integrated acceleration solutions as an alternative. With multiple vendors providing high-performance AI silicon, we may see faster innovation and more affordable access to cutting-edge AI capabilities.
Real-World Performance Looks Promising
Early benchmarks of Intel’s integrated graphics show tremendous Gen-on-Gen performance uplifts, suggesting the company may have achieved the efficiency leap needed to close the gap with discrete GPUs. AI TOPS performance is also significantly higher.
There are still questions around single-threaded CPU performance compared to AMD’s Ryzen chips. But with cores optimized for parallel workloads like AI, Core Ultra shapes up strongly for emerging use cases even if traditional CPU metrics are less emphasized.
What This Means for Consumers and Enterprise Users
For consumers, integrated AI acceleration paves the way for more responsive and personalized experiences across devices. Background processes like indexing data to enable faster searching or filtering visual noise on video calls can now happen instantly rather than introducing lag.
Enterprises will benefit from edge-based data processing that improves efficiency, privacy, and latency while reducing dependence on the cloud. Workloads like automated quality assurance on manufacturing lines, real-time supply chain optimization, and predictive maintenance analytics become possible at the point the data is collected, using the on-chip NPU.
Ultimately it means smarter, more intuitive interactions between humans and computers, whether you’re on a video call or a factory floor. This represents a major step toward the ambient computing long envisioned by the industry.
What’s Next for Intel and Core Ultra
Dozens of laptops featuring Core Ultra processors are expected through early 2024 from major manufacturers like Lenovo, HP, and Dell. For those eager to upgrade, Core Ultra 9 systems with Arc graphics will best showcase Intel’s AI capabilities.
On the datacenter side, some of the largest cloud providers and enterprises will begin adopting 4th Gen Xeon Scalable (“Sapphire Rapids”) processors.
This is the beginning of a multi-year roll-out of products featuring integrated AI acceleration at Intel. The modular chiplet design used for Core Ultra will likely underpin next-generation data center, networking, and infrastructure processing chips in 2024 and beyond.
As software ecosystems build up around AI-centric chips from multiple vendors, developers are likely to enhance existing applications and create new ones that were not possible before.
The Bottom Line: Faster, more efficient, responsive, and intuitive AI is coming to devices everywhere thanks to the new specialized hardware unveiled this week. For Intel, it’s the culmination of a multi-year technology journey toward its vision of ubiquitous AI, and the launch plants its flag as a viable third option for powering the AI future alongside established players like Nvidia and AMD. Exciting innovations lie ahead in this intensely competitive domain as Intel sharpens its aim at AI workloads with ground-up specialization.
To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.