Evaluates LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of LLMs on organic reaction datasets such as USPTO and C-H functionalisation. LoRA matches full fine-tuning accuracy while preserving multi-task performance and mitigating catastrophic forgetting, and the analysis reveals distinct reactivity patterns that inform better adaptation.
Key Points
- Compares LoRA with full fine-tuning on reaction prediction, retrosynthesis, and reagent prediction (see the illustrative task framing after this list)
- Generalizes to alternative solvent prediction
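A hypothetical text-to-text framing of the three benchmark tasks (prompt wording and SMILES below are illustrative assumptions, not the paper's exact templates):

```python
# Each task is cast as sequence-to-sequence over SMILES strings; the prompts
# below are hypothetical examples, not the dataset's actual format.
tasks = {
    # ethanol + acetyl chloride (with triethylamine) -> ethyl acetate
    "forward_prediction": "Reactants: CCO.CC(=O)Cl Reagents: CCN(CC)CC Product: ?",
    "retrosynthesis": "Product: CC(=O)OCC Reactants: ?",
    "reagent_prediction": "Reactants: CCO.CC(=O)Cl Product: CC(=O)OCC Reagents: ?",
}
```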
Impact Analysis
Supports scalable LLM deployment in chemical R&D. Highlights modular fine-tuning for domain-specific chemistry tasks.
Technical Details
Low-Rank Adaptation freezes the pretrained weights and trains small low-rank update matrices, specializing broad chemistry LLMs with only a fraction of the trainable parameters. Benchmarks cover forward reaction prediction, retrosynthesis, and reagent prediction.
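A minimal sketch of the LoRA mechanism (rank, scaling, and layer size are assumed values, not the paper's implementation): a frozen linear layer is augmented with a trainable low-rank update, y = W x + (alpha / r) * B A x, so only A and B receive gradients during fine-tuning.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical usage: wrap the attention projections of a chemistry LLM,
# then train only the A/B matrices on reaction data.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```

Because the base weights never change, removing the adapter recovers the original model, which is the property behind the reported mitigation of forgetting and the modular, per-task fine-tuning setup.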