GIGABYTE, a leading PC component and laptop manufacturer, unveiled a range of new AI and high-performance computing (HPC) servers at CES 2024 this week. The new systems combine the latest hardware and software to deliver unprecedented performance for AI, deep learning, and other data-intensive workloads.
GIGABYTE Debuts Complete Range of New AI and HPC Servers
At its press conference at CES, GIGABYTE showcased its end-to-end server solutions optimized for AI and HPC applications (Source: BNN Breaking News). The new lineup includes:
- HPC servers packed with the latest GPUs for scientific computing
- High-density AI inferencing platforms powered by NVIDIA A100 and H100 GPUs
- Liquid-cooled systems for energy efficient AI training
- Workstation servers for developers and data scientists
- Building block servers that can scale up to thousands of nodes
This comprehensive portfolio aims to meet the surging demand for AI and HPC infrastructure across industries.
Key highlights of GIGABYTE's new AI and HPC servers:
* Up to 160 CPU cores (320 threads)
* Up to 16 latest-gen GPUs
* Support for PCIe Gen 5 and DDR5 memory
* OCP 3.0 compliant configurations
* Advanced liquid cooling technologies
* Workflow-optimized designs
* Built-in security features
“Data is the new oil driving digital transformation. Our new systems unleash the power of data through artificial intelligence and high performance computing,” said Jackson Hsu, Director of the GIGABYTE Channel Solutions Product Development Division.
Cutting-Edge Technology Powers Next-Gen AI and Scientific Discovery
GIGABYTE’s heavy focus on AI and HPC comes at a time when these technologies are fundamentally transforming business and research. From personalized recommendations to self-driving cars to drug discovery, AI promises to revolutionize every industry. However, training and running complex AI models requires computing infrastructure an order of magnitude more powerful than traditional data centers.
Similarly, everything from climate science to astrophysics increasingly relies on supersized HPC systems crunching vast troves of data. Top supercomputers today pack over a million cores to enable breakthroughs not possible with mainstream servers.
To make AI and HPC accessible for enterprises and labs, GIGABYTE has packed its new servers with bleeding-edge hardware:
- NVIDIA H100 GPUs – The latest data center GPUs deliver up to 2x the AI training performance of the previous generation, and the H100’s Transformer Engine accelerates transformer-based models such as the recommendation and language models used by online retailers.
- 4th Gen AMD EPYC CPUs – With up to 96 Zen 4 cores per socket, these server processors deliver substantially higher performance for highly parallel workloads than prior-generation systems.
- Support for PCIe Gen 5 – Doubling the interface bandwidth of PCIe Gen 4 lets more data flow in and out of GPUs and network cards in real time.
- High-speed memory – DDR5 modules running at 4800 MT/s keep data streaming to the processors and help future-proof the systems.
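To put the PCIe Gen 5 claim in perspective, a minimal back-of-the-envelope calculation (assuming the published per-lane transfer rates and 128b/130b encoding; real-world throughput is somewhat lower due to protocol overhead):

```python
# Approximate one-direction PCIe bandwidth for an x16 slot.
# Illustrative figures only; actual throughput varies by implementation.

def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    # Per-lane raw rates: Gen 4 = 16 GT/s, Gen 5 = 32 GT/s,
    # both using 128b/130b encoding (128 payload bits per 130 line bits).
    raw_gt_per_s = {4: 16e9, 5: 32e9}[gen]
    bytes_per_s_per_lane = raw_gt_per_s * (128 / 130) / 8
    return bytes_per_s_per_lane * lanes / 1e9

gen4 = pcie_bandwidth_gbps(4)  # ~31.5 GB/s
gen5 = pcie_bandwidth_gbps(5)  # ~63.0 GB/s
print(f"Gen 4 x16: {gen4:.1f} GB/s, Gen 5 x16: {gen5:.1f} GB/s "
      f"({gen5 / gen4:.1f}x)")
```

The doubling comes entirely from the raw signaling rate, since both generations share the same 128b/130b encoding.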
“Our servers set new standards for AI and simulation workloads in terms of raw horsepower,” said Yung-Hsin Lu, Server Product Manager at GIGABYTE.
Purpose-Built Platforms Optimize AI and HPC Workflows
In addition to maxing out hardware capabilities, GIGABYTE has tailored the physical design of its new systems for different AI and HPC use cases:
AI Inferencing Servers
GIGABYTE’s multi-GPU servers packed with NVIDIA A100 or H100 accelerators are optimized for large-scale AI inferencing. Trained models are deployed on these servers to make real-time predictions and decisions on incoming data; for example, inferencing servers power fraud detection systems and generate online product recommendations.
To cost-effectively scale out AI, GIGABYTE offers high-density 2U platforms that fit up to 10 GPUs. Dual-socket server blocks with 8 GPUs each can also link multiple chassis together via direct liquid cooling manifolds.
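A rough density estimate shows what the 2U form factor implies at rack scale (the rack size is a standard assumption, not a GIGABYTE figure; real deployments are usually capped by rack power and cooling budgets before they hit the space limit):

```python
# Back-of-the-envelope GPU density for a high-density 2U inference platform.
GPUS_PER_2U_SERVER = 10   # 2U platform cited above
RACK_UNITS = 42           # standard full-height rack (assumption)

servers_per_rack = RACK_UNITS // 2
gpus_per_rack = servers_per_rack * GPUS_PER_2U_SERVER
print(f"{servers_per_rack} servers x {GPUS_PER_2U_SERVER} GPUs = "
      f"{gpus_per_rack} GPUs per rack")
```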
AI Training Systems
Training complex machine learning models like large language models requires cutting-edge hardware configured for optimal data flow. GIGABYTE’s liquid-cooled workstation servers with 8 of the latest H100 or A100 GPUs deliver the raw performance needed for AI exploration.
Modular server building blocks simplify scaling up distributed training infrastructure. GIGABYTE’s 4-GPU server nodes can interconnect over fast InfiniBand or Ethernet fabrics to train models with thousands of GPUs working in tandem.
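The scale-out arithmetic behind such building blocks is simple data parallelism; the sketch below uses a hypothetical cluster size and batch size, and assumes near-linear data-parallel scaling:

```python
# Data-parallel scale-out math for 4-GPU server nodes.
# Cluster size and batch figures are illustrative assumptions.
gpus_per_node = 4         # 4-GPU building block described above
nodes = 512               # hypothetical cluster linked over InfiniBand/Ethernet
per_gpu_batch = 8         # samples each GPU processes per training step

total_gpus = gpus_per_node * nodes
global_batch = total_gpus * per_gpu_batch
print(f"{total_gpus} GPUs -> global batch of {global_batch} samples per step")
```

Each added node multiplies the effective batch processed per step, which is why fast interconnects matter: gradient synchronization across thousands of GPUs must keep pace with compute.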
HPC Servers for Research & Simulation
For traditional HPC workloads like weather forecasting, computational chemistry, or physics simulations, GIGABYTE offers rack servers packed with high-core count AMD EPYC CPUs and NVIDIA A40 GPUs. Storage-optimized configurations speed up time-to-results by reducing I/O bottlenecks.
Purpose-built HPC servers maximize data throughput between processors, accelerators, networks, and storage. By removing system bottlenecks, scientists and researchers can realize performance gains sooner.
Sustainable Technology Aligns with Corporate Responsibility Values
In addition to raw technology muscle, GIGABYTE emphasized efficiency and sustainability in its new servers. Many configurations use advanced liquid cooling, which can reduce power consumption by up to 40% compared to air cooling. A lower power usage effectiveness (PUE) translates directly into electricity savings that can reach millions of dollars a year for data center operators.
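A minimal sketch of how a PUE improvement becomes an electricity bill: all figures below (IT load, PUE values, power price) are illustrative assumptions, not GIGABYTE data.

```python
# Estimate annual facility electricity cost from IT load and PUE.
# PUE = total facility power / IT equipment power.

def annual_energy_cost(it_load_kw: float, pue: float,
                       price_per_kwh: float = 0.10) -> float:
    """Total facility electricity cost per year in dollars."""
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year * price_per_kwh

# Hypothetical 2 MW IT load: air cooling at PUE 1.6 vs. liquid at PUE 1.15.
air = annual_energy_cost(2000, 1.6)
liquid = annual_energy_cost(2000, 1.15)
print(f"Air-cooled:    ${air:,.0f}/yr")
print(f"Liquid-cooled: ${liquid:,.0f}/yr")
print(f"Savings:       ${air - liquid:,.0f}/yr")
```

Even this modest scenario yields savings in the high six figures per year, which is how the "millions" figure arises for larger facilities.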
“Liquid cooling was once exotic technology deployed only by hyperscalers. But our aim is to make it accessible for enterprises through pre-engineered designs that are reliable and maintenance-free,” said Jackson Hsu.
GIGABYTE also announced it is adopting more eco-friendly manufacturing processes, such as recycling waste heat from its factories to generate power for its operations.
“Aligning our technology vision with sustainability values benefits both our customers and the planet,” concluded Jackson Hsu.
Outlook: Mainstream Adoption of AI and HPC Expected to Surge
Industry analysts forecast that the rapid growth of both AI and high-performance computing shows no signs of slowing down.
AI chip revenue alone is projected to reach $140 billion by 2030, more than 10x the 2021 level, driven mostly by data center deployments (Source: McKinsey). Nearly every industry plans to invest heavily in AI technology.
On the HPC front, demand for supercomputing resources continues rising across academia, government labs, and companies working on computationally intensive research problems.
To drive mainstream adoption, solution providers like GIGABYTE are making AI and HPC infrastructure affordable and easy to implement for customers lacking specialized technical resources. Packaged solutions reduce time-to-deployment from months to just weeks.
Expect AI and HPC to dominate CES headlines over the next several years as these potentially world-changing technologies continue proliferating from mega data centers into corporate server rooms.
To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.