Sunday, December 7, 2025

Smarter, Faster, More Personal AI Delivered on Consumer Devices with Arm’s New Lumex CSS Platform, Driving Double-Digit Performance Gains

by Rachel Kim – Technology Editor

Arm Lumex Ushers in a New Era of On-Device AI, Delivering Significant Performance Boosts

Breaking News: Arm today announced Lumex, its most advanced consumer system-on-chip (CSS) platform, designed to fundamentally shift AI processing to the edge – directly onto consumer devices. Lumex isn’t just an upgrade; it’s positioned as the foundation for a new generation of personal, private, and high-performance AI experiences, promising double-digit performance gains for key AI workloads.

The platform’s core innovation centers on deep integration with Scalable Matrix Extensions 2 (SME2), a technology enabling low-latency, quantized inference for large language models (LLMs). This allows complex AI models, previously relegated to the cloud, to run efficiently on smartphones and other consumer electronics.
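The announcement does not detail how SME2 is programmed, but the quantized-inference technique it accelerates can be sketched in plain NumPy: weights are stored as int8 with a scale factor, the matrix multiply accumulates in integers, and a single float rescale recovers the result. A minimal illustration (shapes and values here are made up, not from Arm’s platform):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """Integer multiply-accumulate (the kind of tiled work SME2 speeds up),
    then one floating-point rescale at the end."""
    acc = x.astype(np.int32) @ q.astype(np.int32)
    return acc * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)      # hypothetical weight matrix
q, s = quantize_int8(w)

x = rng.integers(-127, 128, size=(4, 64), dtype=np.int8)  # pretend pre-quantized activations
exact = x.astype(np.float32) @ w
approx = int8_matmul(x, q, s)
rel_err = np.max(np.abs(exact - approx)) / np.max(np.abs(exact))
print(rel_err)  # small: int8 weights closely approximate the float result
```

The point of the sketch is the memory and compute shape of the workload: billion-parameter LLM weights shrink 4x versus float32, and the inner loop becomes integer multiply-accumulates, which matrix extensions like SME2 are built to execute in tiles.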

“Arm Lumex is more than our most advanced CSS platform for the consumer computing market, it’s the foundation for the next era of bright AI-enabled experiences,” Arm stated in a press release. “Whether you’re an OEM or developer, Lumex gives you the tools to deliver personal, private and high-performance AI at the edge, where it matters most. Built for the AI era, Lumex is where the future of mobile innovation begins.”

Early validation results demonstrate the power of this approach. Collaboration between Arm, Alipay, and vivo has shown that LLM inference using SME2 on vivo’s next-generation flagship smartphone improves prefill performance by over 40% and decode performance by over 25%. “The validation of LLM inference using SME2 has been completed…We observe that prefill and decode performance can be improved by over 40% and 25% respectively,” confirmed Xindan Weng, Head of Client Engineering, Alipay.
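To see what gains of that size mean for a full request, a back-of-the-envelope calculation helps: prefill processes the prompt, decode generates the reply, and each stage speeds up independently. Only the 40% and 25% figures come from the Alipay/vivo validation; the token counts and baseline throughputs below are purely hypothetical:

```python
# Hypothetical workload: everything except the 40% / 25% gains is an assumption.
prompt_tokens = 512         # tokens processed during prefill
output_tokens = 128         # tokens generated during decode
prefill_tok_per_s = 1000.0  # assumed baseline prefill throughput
decode_tok_per_s = 20.0     # assumed baseline decode throughput

def latency(prefill_tps: float, decode_tps: float) -> float:
    """End-to-end time: prompt ingestion plus token-by-token generation."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

base = latency(prefill_tok_per_s, decode_tok_per_s)
sme2 = latency(prefill_tok_per_s * 1.40, decode_tok_per_s * 1.25)  # reported gains
print(f"baseline {base:.2f}s -> SME2 {sme2:.2f}s ({base / sme2:.2f}x faster)")
# → baseline 6.91s -> SME2 5.49s (1.26x faster)
```

Under these assumed numbers decode dominates the total, so the overall speedup lands closer to the 25% decode gain than the 40% prefill gain; for long prompts with short replies, the prefill improvement would matter more.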

The impact extends beyond performance metrics. Industry leaders are already embracing Lumex and SME2 to unlock new AI capabilities:

Alibaba: “Through deep integration with SME2, MNN enables low-latency, quantized inference for billion-parameter models like Qwen on smartphones – showcasing Arm and Alibaba’s joint innovation in scalable, next-gen mobile AI,” said Xiaotang Jiang, Head of MNN, Taobao and Tmall Group.
Google: Iliyan Malchev, Distinguished Software Engineer, Android at Google, highlighted the broad applicability, stating, “SME2-enhanced hardware enables more advanced AI models, like Gemma 3, to run directly on a wide range of devices…it will enable mobile developers to seamlessly deploy the next generation of AI features across ecosystems.”
Honor: The company plans to leverage Lumex to deliver “smooth performance, intelligent AI features, and outstanding power efficiency” in its upper mid-range smartphones.
Meta: Sy Choudhury, Director, AI Partnerships, Meta, expressed excitement about the integration of Arm Kleidi and PyTorch’s ExecuTorch, enabling seamless application performance on next-generation technology.
Samsung Electronics: Nak Hee Seong, Vice President and Head of SoC IP Advancement Team, affirmed Samsung’s continued collaboration with Arm, aiming to “push the boundaries of on-device AI, delivering smarter, faster, and more efficient experiences.”
Tencent: Felix Yang, Distinguished Expert, Machine Learning Platform, Tencent, noted that SME2 accelerates on-device LLMs like Tencent’s Hunyuan, addressing performance bottlenecks and enhancing user experiences.

Arm Lumex represents a significant step towards a future where powerful AI is accessible and responsive, directly within the devices consumers use every day. The platform’s focus on edge computing promises not only faster performance but also enhanced privacy and reduced reliance on cloud connectivity.
