NXAI Releases xLSTM 7B for Faster, Recurrent Inference
Austrian startup NXAI ships a 7B recurrent model optimized for linear-time inference and constant memory usage.
Linz, January 22, 2026 - NXAI has released xLSTM 7B, a 7-billion-parameter recurrent model designed for linear-time inference and constant memory usage. The company says the architecture is well suited for edge deployments and scenarios where large attention windows are costly.
The model is based on xLSTM research from the group led by Sepp Hochreiter. Where the cost of standard attention grows quadratically with sequence length, xLSTM's recurrent processing grows linearly, and its memory footprint stays constant because the model carries a fixed-size state rather than a cache that grows with the input.
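For intuition, the toy sketch below (not NXAI's actual mLSTM cell, just a generic recurrent update) shows why this gives constant memory: a fixed-size state is updated once per token, so total work is linear in sequence length and memory never grows with it.

```python
import numpy as np

d = 16          # hidden size (illustrative)
seq_len = 1024  # tokens processed one at a time

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)) * 0.05  # toy recurrent weights
state = np.zeros(d)                     # fixed-size state: memory is O(d), not O(seq_len)

for t in range(seq_len):
    x_t = rng.standard_normal(d)        # stand-in for a token embedding
    state = np.tanh(W @ state + x_t)    # one O(d^2) update per token -> O(seq_len) total

print(state.shape)  # (16,) -- the state never grows, unlike a transformer's KV cache
```

An attention layer, by contrast, must retain keys and values for every previous token, so its per-step cost and memory both grow with the sequence.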
NXAI has published weights and code on Hugging Face and shared documentation for inference and fine-tuning. Early users report strong results on long-context tasks, where a transformer's growing key-value cache typically exhausts memory.
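As a rough illustration, loading the weights should follow the standard Hugging Face pattern; the repo id below is an assumption, and the exact id and any trust_remote_code requirement should be checked against NXAI's model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NX-AI/xLSTM-7b"  # assumed repo id; verify against NXAI's announcement
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires the accelerate package for multi-device placement
)

inputs = tokenizer("Recurrent models process tokens", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```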
The release reflects renewed interest in alternative sequence architectures as developers seek cheaper and faster inference pathways for production systems.
Credit: NXAI research team. Primary sources: NXAI announcement, Hugging Face model.