LOREN introduces low-rank adapters to enable code-rate adaptation in neural receivers without storing a separate set of weights for each rate. It freezes a shared base network and trains a lightweight adapter per code rate, achieving comparable performance with major hardware savings.
Key Points
- Low-rank adapters integrated into convolutional layers (a minimal sketch follows this list)
- 65% silicon area savings and 15% power reduction
- End-to-end training on 3GPP channels
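To make the first point concrete, here is a minimal sketch of a low-rank adapter wrapped around a frozen convolutional layer. PyTorch, the 1x1 low-rank factorization, the rank value, and the layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LowRankConvAdapter(nn.Module):
    """Frozen shared base conv plus a small trainable low-rank residual path."""

    def __init__(self, base_conv: nn.Conv2d, rank: int = 4):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():
            p.requires_grad = False  # shared base network stays frozen

        # Low-rank residual path: channels -> rank -> channels via 1x1 convs.
        # Assumes the base conv preserves spatial size (stride 1, 'same' padding).
        self.down = nn.Conv2d(base_conv.in_channels, rank, kernel_size=1, bias=False)
        self.up = nn.Conv2d(rank, base_conv.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter contributes nothing at initialization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the low-rank, rate-specific correction.
        return self.base(x) + self.up(self.down(x))
```

During training only the down/up projections are updated, so each additional code rate adds a small adapter on top of the single shared base.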
Impact Analysis
LOREN makes neural receivers more practical for wireless systems by reducing memory and power requirements, supporting multiple code rates efficiently in a 22 nm technology implementation.
Technical Details
The adapters are integrated into the convolutional layers and remain robust across realistic channel conditions. The hardware implementation shows an overhead of under 35%.
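A hedged sketch of how per-rate adaptation could look at inference time, reusing the LowRankConvAdapter class from the sketch above; the code-rate labels, channel counts, and rank are placeholders rather than values from the paper.

```python
import torch.nn as nn

# One shared, frozen base conv; only the small adapters differ per code rate.
base = nn.Conv2d(64, 64, kernel_size=3, padding=1)
adapters = {rate: LowRankConvAdapter(base, rank=4) for rate in ("1/3", "1/2", "3/4")}

def receiver_layer(x, code_rate):
    # Switching code rate swaps only the adapter weights; the base is stored once.
    return adapters[code_rate](x)

# Rough illustration of the memory argument: adapter parameters vs. frozen base.
trainable = sum(p.numel() for p in adapters["1/2"].parameters() if p.requires_grad)
total = sum(p.numel() for p in adapters["1/2"].parameters())
print(f"trainable adapter params: {trainable}, total incl. frozen base: {total}")
```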