TetraMem Announces 22nm Multi-Level RRAM Analog In-Memory Computing SoC Milestone

via Business Wire

TetraMem Inc., a Silicon Valley–based semiconductor company developing analog in-memory computing (IMC) solutions, today announced the successful tape-out, manufacturing, and initial silicon validation of its MLX200 platform, a 22nm multi-level RRAM-based analog IMC system-on-chip (SoC).

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260516556464/en/

Photograph of the MLX200 chip with a five-cent coin for size reference

The achievement marks a significant step toward the commercialization of analog computing architectures based on emerging non-volatile memory technologies, addressing the growing challenges of data movement, power consumption, and thermal constraints in modern AI systems.

As AI workloads continue to scale, system performance is increasingly constrained by the cost of moving data between memory and compute units. Analog in-memory computing offers a fundamentally different approach by performing computation directly within memory arrays, significantly reducing data movement and improving system-level efficiency. TetraMem’s MLX200 platform integrates multi-level RRAM arrays with mixed-signal compute engines to enable high-throughput vector-matrix operations within memory, while maintaining compatibility with advanced CMOS processes.
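The principle behind in-memory vector-matrix computation can be illustrated with a simple numerical sketch. This is a conceptual model only, not TetraMem's actual circuit design: it assumes weights are encoded as cell conductances (using a common differential-pair scheme for signed values), inputs are applied as read voltages, and Ohm's and Kirchhoff's laws sum the resulting currents on each output line in a single analog step. All values and names here are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of analog in-memory vector-matrix multiplication:
# weights are stored as cell conductances G (siemens), inputs are
# applied as voltages V (volts), and each output line accumulates
# current I = G @ V in one analog step.

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(4, 8))  # logical weight matrix
x = rng.uniform(0.0, 1.0, size=8)              # input activations

# Map signed weights onto two non-negative conductance arrays
# (differential-pair encoding), scaled to an assumed device range.
g_max = 100e-6                                 # 100 uS full scale (assumed)
g_pos = np.clip(weights, 0, None) * g_max
g_neg = np.clip(-weights, 0, None) * g_max

v = x * 0.2                                    # encode inputs as read voltages (<= 0.2 V)
i_out = g_pos @ v - g_neg @ v                  # currents summed on output lines

# The analog result equals the digital matmul up to a fixed scale factor.
y_digital = weights @ x
assert np.allclose(i_out / (g_max * 0.2), y_digital)
```

Because every multiply-accumulate happens in place where the weight is stored, no weight data moves between memory and a separate compute unit, which is the source of the efficiency gain the release describes.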

The multi-level RRAM technology demonstrated on the TSMC 22nm process provides key attributes required for practical deployment, including CMOS compatibility with minimal additional process complexity, low-voltage and low-current operation, strong retention and endurance characteristics, and high multi-level capability that supports improved memory and compute density. Early silicon results indicate consistent functionality across arrays, supporting the viability of this approach for both embedded non-volatile memory and compute-in-memory applications.

This milestone builds on TetraMem’s earlier work on the MX100 platform, fabricated on the TSMC 65nm CMOS process, where the company demonstrated multi-level RRAM devices with thousands of conductance levels (“Thousands of conductance levels in memristors integrated on CMOS,” Nature, March 2023), as well as high-precision analog computing capabilities (“Programming memristor arrays with arbitrarily high precision for analog computing,” Science, February 2024). These prior results established a strong scientific and engineering foundation for scaling the technology to more advanced nodes.

Since 2019, TetraMem has worked closely with the world's leading semiconductor foundry to advance RRAM technology from early-stage research into manufacturable silicon. The progress achieved at 22nm reflects continued development in process integration, device uniformity, and system-level co-design.

The MLX200 and MLX201 platforms are designed to support power- and latency-sensitive edge AI applications, including voice and audio processing, wearable devices, IoT systems, and always-on sensing. Evaluation sampling is expected to begin in the second half of 2026, and multi-level RRAM memory IP is available for evaluation and potential licensing.

Dr. Glenn Ge, Co-founder and CEO of TetraMem, commented, “This milestone reflects years of close collaboration with our foundry partner TSMC and demonstrates the feasibility of taking multi-level RRAM and analog in-memory computing from a computing-architecture breakthrough to advanced-node commercial silicon. We believe this approach provides a practical path to improving energy efficiency and scalability for next-generation AI systems.”

The successful realization of the MLX200 platform highlights the viability of multi-level RRAM-based analog computing on advanced semiconductor processes. TetraMem will continue to advance this technology to support emerging AI workloads with improved energy efficiency and system scalability.

About TetraMem

TetraMem is a Silicon Valley–based semiconductor company pioneering analog in-memory computing using multi-level RRAM technology. Its architecture integrates memory and compute to significantly reduce data movement and improve energy efficiency for AI workloads. With a strong foundation in device, circuit, and system co-design, TetraMem is advancing scalable solutions for edge AI and future high-performance computing, working closely with leading foundries and ecosystem partners to bring breakthrough technologies rooted in fundamental science into commercial volume production.

TetraMem achieves an MLX200 multi-level RRAM–based in-memory computing SoC milestone on a commercial TSMC 22nm process, with evaluation kits (EVKs) targeted for shipment in 2H 2026.

