MobCache is a mobility-aware cache framework that enables scalable LLM-based human mobility simulations by reusing reconstructible reasoning caches. It encodes reasoning steps as latent embeddings for recombination and uses a lightweight decoder trained via mobility-constrained distillation. Experiments demonstrate major efficiency gains while matching state-of-the-art performance.
Key Points
- Designs reconstructible caches that let LLM reasoning steps be reused
- Latent-space evaluator enables recombination of cached reasoning steps
- Lightweight decoder trained via mobility-law-constrained distillation
- Boosts efficiency for urban planning and epidemic simulations
- Matches state-of-the-art LLM performance in mobility fidelity
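The caching idea above can be sketched as a store of latent reasoning steps with a similarity evaluator that decides when a cached step is reusable. This is a minimal illustrative sketch, not the paper's implementation; the class, threshold, and cosine-similarity evaluator are all assumptions.

```python
# Hypothetical sketch of a reconstructible reasoning cache: reasoning steps are
# stored as latent vectors, and a query reuses a cached step when a
# similarity evaluator scores it above a threshold. Names are illustrative.
import math


class ReasoningCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, reasoning_step)

    @staticmethod
    def _cosine(a, b):
        # Simple latent-space evaluator: cosine similarity between embeddings.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def add(self, embedding, step):
        self.entries.append((embedding, step))

    def lookup(self, embedding):
        # Reuse the closest cached step if it is similar enough.
        best = max(self.entries,
                   key=lambda e: self._cosine(e[0], embedding),
                   default=None)
        if best and self._cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None  # cache miss: fall back to full LLM reasoning


cache = ReasoningCache(threshold=0.9)
cache.add([1.0, 0.0], "commute: home -> office at 8am")
print(cache.lookup([0.98, 0.05]))  # near-duplicate query reuses the cached step
print(cache.lookup([0.0, 1.0]))    # dissimilar query misses
```

In a full system the embeddings would come from the LLM's hidden states and the lookup would use an approximate-nearest-neighbor index rather than a linear scan.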
Impact Analysis
MobCache lowers the computational barrier to large-scale mobility simulation, enabling broader adoption in urban planning, epidemiology, and transportation. AI practitioners can simulate millions of agents realistically without prohibitive compute costs.
Technical Details
The reasoning component encodes intermediate steps as latent embeddings, and a latent-space evaluator determines when a cached step can be reused. The decoder is trained by distillation from LLM outputs under mobility-law constraints such as the gravity law, and cached steps can be recombined to generate diverse trajectories.
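The mobility-law-constrained distillation can be illustrated as a loss with two terms: one matching the teacher LLM's outputs, one penalizing deviation of decoded origin-destination flows from a gravity-law prediction. This is a sketch under assumptions, not the paper's actual objective; the function names, the MSE stand-in for a distillation divergence, and the weight `lam` are all hypothetical.

```python
# Illustrative distillation loss with a gravity-law penalty. The student
# decoder is trained to match teacher LLM outputs while its predicted
# origin-destination flows stay close to gravity-model flows,
# flow(i, j) ~ pop_i * pop_j / dist(i, j)**beta. All names are assumed.

def gravity_flow(pop_i, pop_j, dist, beta=2.0):
    # Classic gravity law of human mobility: flow grows with the product of
    # the two populations and decays with distance.
    return pop_i * pop_j / dist ** beta


def mse(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)


def distillation_loss(student_logits, teacher_logits,
                      predicted_flows, populations, distances,
                      lam=0.1):
    # Term 1: match the teacher LLM's outputs (MSE stands in for KL here).
    distill = mse(student_logits, teacher_logits)
    # Term 2: penalize decoded flows that violate the gravity law.
    target_flows = [gravity_flow(p_i, p_j, d)
                    for (p_i, p_j), d in zip(populations, distances)]
    constraint = mse(predicted_flows, target_flows)
    return distill + lam * constraint


# One origin-destination pair: populations 1000 and 2000, 200 units apart.
loss = distillation_loss(
    student_logits=[0.2, 0.8],
    teacher_logits=[0.25, 0.75],
    predicted_flows=[50.0],          # exactly the gravity-law flow
    populations=[(1000, 2000)],
    distances=[200.0],
)
print(loss)
```

In practice both terms would be differentiable tensor operations (e.g. KL divergence on token distributions), with the physical constraint acting as a regularizer that keeps the lightweight decoder's trajectories consistent with aggregate mobility statistics.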