At CES 2026, Razer, the globally renowned gaming and high-performance hardware brand, officially entered the AI developer hardware market. The new product line focuses on local large language models (LLMs) and aims to provide developers, researchers, and small and medium-sized enterprises with high-performance, flexible AI infrastructure.
Forge AI Workstation: tower and rack form factors, tailored for professional AI workloads
The new Razer Forge AI developer workstation is available in two form factors, tower and rack, both optimized for AI training and inference. Core configurations include:
- 8 DDR5 RDIMM memory slots, supporting high-bandwidth, large-capacity memory expansion;
- Multi-GPU design, supporting NVIDIA RTX Ada-generation and professional RTX GPUs as well as third-party AI accelerator cards;
- Dual 10GbE wired network ports, ensuring multi-node communication and high-speed data transfer;
- Optimized cooling and power systems for professional workloads such as AI/ML, 3D rendering, and scientific computing.
Razer stated that Forge AI suits not only enterprise R&D teams but also university laboratories and independent developers, filling a gap in the consumer market for high-performance AI workstations.
External AI Accelerator: Connect up to four units via Thunderbolt 5 to turn a laptop into a local AI machine
To meet the needs of mobile development and lightweight deployment, Razer has partnered with AI chip company Tenstorrent to launch an external AI accelerator:
- Equipped with Tenstorrent Wormhole AI chip, optimized for Transformer architecture;
- Connects over Thunderbolt 5 (backward compatible with Thunderbolt 4) for plug-and-play use;
- Supports daisy-chaining up to four devices, significantly increasing compute density;
- Upgrades an ordinary high-performance laptop into a local AI/ML development platform without replacing the entire system.
This device is especially suitable for developers who need to run open-source models such as Llama, Qwen, and Phi locally, balancing portability with compute scalability.
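As an illustration of what running such a model locally looks like in practice, here is a minimal Python sketch that queries a model served by Ollama through its documented local REST API. The endpoint URL and model name are assumptions about a typical local setup, not anything Razer ships:

```python
import json
import urllib.request

# Default address of a locally running Ollama server (an assumption
# about a typical setup, not a Razer-specific endpoint).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's full reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a model pulled in advance (e.g. `ollama pull llama3`), `generate("llama3", "Hello")` returns the completion as a string; every token is produced on the local machine, which is the scenario these accelerators target.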
Razer AIKit Open Source Tool: One-click optimization for local LLM inference
To lower the barrier of AI development, Razer also launched Razer AIKit—a set of open-source software tools, featuring:
- Automatic detection and configuration of GPUs/accelerators, with no manual tuning required;
- Intelligent optimization of LLM inference parameters (such as quantization, batching, and VRAM scheduling);
- Deep integration with Razer Forge AI and the external accelerators for an "out-of-the-box" experience;
- Support for mainstream open-source frameworks (such as vLLM, Ollama, and LM Studio).
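Quantization, the first of those optimizations, is easy to quantify: storing weights in fewer bits shrinks their VRAM footprint roughly in proportion. A back-of-the-envelope Python sketch (weights only, ignoring activations and the KV cache; the 7B parameter count is an illustrative assumption, not an AIKit detail):

```python
def weight_footprint_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate VRAM needed to hold the model weights alone, in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# A 7-billion-parameter model at common precisions (weights only).
for bits, name in [(16, "FP16"), (8, "INT8"), (4, "INT4")]:
    print(f"{name}: ~{weight_footprint_gib(7e9, bits):.1f} GiB")
# FP16: ~13.0 GiB, INT8: ~6.5 GiB, INT4: ~3.3 GiB
```

This is why 4-bit quantization lets a 7B model fit comfortably in a single consumer GPU's VRAM, and it is exactly the kind of knob a tool like AIKit can turn automatically.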
AIKit is now available on GitHub, and Razer encourages the community to contribute plugins and optimization solutions to build a local AI development ecosystem together.
AIbase Insight: Razer moves from "Gaming Hardware" to "AI Productivity"
This CES release marks Razer's strategic shift from an e-sports peripherals brand to a provider of AI-era productivity tools. Leveraging its expertise in high-performance hardware design, cooling, and power management, as well as its deep understanding of developer user experience, Razer is expected to open up new opportunities in the local AI hardware market.
