SINR: Sparsity Driven Compressed Implicit Neural Representations

Dhananjaya Jayasundara, Sudarshan Rajagopalan, Yasiru Ranasinghe, Trac D. Tran, Vishal M. Patel

Johns Hopkins University

Abstract

Implicit Neural Representations (INRs) are increasingly recognized as a versatile data modality for representing discretized signals, offering benefits such as infinite query resolution and reduced storage requirements. Existing signal compression approaches for INRs typically follow one of two strategies: (1) directly quantizing and entropy coding the trained INR, or (2) deriving a latent code through a learnable transformation on top of the INR.

In this paper, we introduce SINR, an innovative compression framework that leverages patterns in the weight vector spaces of INRs. These spaces are compressed using high-dimensional sparse codes within a shared dictionary. Remarkably, the atoms of this dictionary do not need to be learned or transmitted to successfully reconstruct the INR weights.

Our method can be integrated into any INR-based compression pipeline. Results show that SINR significantly reduces INR storage across a range of configurations, outperforming conventional baselines, while maintaining high-quality decoding across diverse modalities, including images, occupancy fields, and Neural Radiance Fields.

How SINR Works

SINR Compression Diagram
The SINR Compression Framework: Standard INR compression pipelines rely on directly quantizing and entropy coding the trained weights. SINR instead exploits the approximately Gaussian distribution of INR weight spaces, which makes them naturally compressible: L1 minimization recovers a sparse representation of each weight vector in a higher-dimensional space defined by a fixed random sensing matrix. Because that matrix is generated from a shared seed, encoding and decoding are simplified and the matrix itself never needs to be transmitted. Only the non-zero coefficients and their indices are quantized and entropy coded.
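
The sketch below illustrates this encode/decode loop in Python. It is an illustrative reconstruction, not the released implementation: the dictionary width m, the Lasso penalty alpha, the quantization step, and the helper names (seeded_dictionary, encode, decode) are assumptions made for the example, and the entropy coding stage is omitted.

import numpy as np
from sklearn.linear_model import Lasso

def seeded_dictionary(n, m, seed):
    # Overcomplete random dictionary (n x m, with m >> n) regenerated from a
    # seed, so its atoms never have to be learned or transmitted.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, m))
    return D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms

def encode(w, seed=0, m=4096, alpha=1e-3, q_step=1e-2):
    # Sparse-code a flattened INR weight vector w via L1-regularized least
    # squares, then keep only the quantized non-zero coefficients and indices.
    D = seeded_dictionary(w.size, m, seed)
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, w).coef_
    idx = np.flatnonzero(code)
    vals = np.round(code[idx] / q_step).astype(np.int32)  # uniform quantization
    return idx, vals  # in the full pipeline these streams are entropy coded

def decode(idx, vals, n, seed=0, m=4096, q_step=1e-2):
    # Regenerate the same dictionary from the shared seed and reconstruct.
    D = seeded_dictionary(n, m, seed)
    code = np.zeros(m)
    code[idx] = vals * q_step
    return D @ code

# Example: round-trip one layer's flattened weight vector.
w = np.random.default_rng(1).standard_normal(256)
idx, vals = encode(w, seed=42)
w_hat = decode(idx, vals, n=w.size, seed=42)
print(idx.size, np.linalg.norm(w - w_hat))

Storing only the non-zero values, their indices, and the seed is what drives the rate savings; the specific L1 solver, bit allocation, and entropy coder used in SINR may differ from this sketch.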

BibTeX

Coming soon...