All Updates
February 12, 2026
MetaphorStar Masters Image Metaphor Reasoning
MetaphorStar uses end-to-end visual RL for image metaphor understanding, contributing the TFQ-Data dataset, the TFQ-GRPO method, and TFQ-Bench. MetaphorStar-32B sets SOTA on implication benchmarks, outperforming 20+ MLLMs including Gemini-3.0-pro. Scaling analyses show it also improves general visual reasoning.
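TFQ-GRPO itself is not detailed in the blurb; for orientation, here is a minimal sketch of the group-normalized advantage that standard GRPO builds on (any TFQ-specific reward shaping is an assumption left out):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages as in standard GRPO.

    rewards: (num_prompts, group_size) scalar rewards for a group of
    sampled responses per prompt. Each response's advantage is its reward
    normalized by the group's mean and std, so no value network is needed.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled answers each, binary correctness rewards.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))
```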
MERIT Boosts LLM Negotiation Skills
AgoraBench tests LLMs in nine bargaining scenarios, including deception, with utility metrics measuring alignment with human behavior. MERIT feedback, applied via prompting or finetuning, elicits deeper strategy and opponent awareness. It outperforms baselines in negotiation power and acquisition.
MEL Boosts LLM Reasoning via Meta-Experience
Meta-Experience Learning (MEL) enhances RLVR by internalizing error-derived meta-experience into LLM memory. Uses self-verification for contrastive analysis of trajectories. Achieves 3.92%-4.73% Pass@1 gains across model sizes.
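As a rough illustration of the idea, here is a hypothetical loop in which self-verification splits sampled trajectories, contrastive analysis distills a lesson, and the lesson is written back to memory; every name below is invented, not MEL's actual interface:

```python
# `llm` is a stand-in callable (prompt -> text).
def solve_with_meta_experience(llm, problem, memory, n_samples=4):
    context = "\n".join(memory)  # distilled lessons from past errors
    trajectories = [llm(f"{context}\nProblem: {problem}\nSolve step by step.")
                    for _ in range(n_samples)]
    # Self-verification: the model grades its own trajectories.
    verdicts = [llm(f"Problem: {problem}\nSolution: {t}\nIs this correct? yes/no")
                for t in trajectories]
    good = [t for t, v in zip(trajectories, verdicts)
            if v.strip().lower().startswith("yes")]
    bad = [t for t, v in zip(trajectories, verdicts)
           if not v.strip().lower().startswith("yes")]
    # Contrasting a correct vs. incorrect trajectory yields a reusable
    # "meta-experience" that is appended to memory for future problems.
    if good and bad:
        lesson = llm("Compare the correct and incorrect solutions and state "
                     f"one general lesson.\nCorrect: {good[0]}\nIncorrect: {bad[0]}")
        memory.append(lesson)
    return good[0] if good else trajectories[0]
```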
MeCSAFNet Boosts Multispectral Segmentation
MeCSAFNet uses dual ConvNeXt encoders for visible and non-visible channels in multispectral land cover segmentation. It employs smooth attentional feature fusion with CBAM and ASAU activation. Outperforms baselines like U-Net and SegFormer by up to 19% mIoU on FBP and Potsdam datasets.
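A minimal PyTorch sketch of dual-stream fusion with a CBAM block follows; MeCSAFNet's exact "smooth" fusion and its ASAU activation are not specified in the summary, so this uses plain CBAM plus a 1x1 projection:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention

class DualStreamFusion(nn.Module):
    """Fuse features from visible and non-visible encoder streams."""
    def __init__(self, channels):
        super().__init__()
        self.cbam = CBAM(2 * channels)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_vis, f_nonvis):
        return self.proj(self.cbam(torch.cat([f_vis, f_nonvis], dim=1)))

fuse = DualStreamFusion(256)
out = fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
```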
LRMs Fail to Transfer Reasoning to ToM
The study compares reasoning and non-reasoning LLMs on theory-of-mind (ToM) benchmarks, finding no consistent gains and sometimes worse performance. Analysis reveals slow-thinking collapse, a need for adaptive reasoning, and option-matching shortcuts. Interventions such as S2F and T2M mitigate these issues.
LOREN: Low-Rank Adaptation for Neural Receivers
LOREN introduces low-rank adapters to enable code-rate adaptation in neural receivers without storing separate weights per rate. It freezes a shared base network and trains a lightweight adapter for each code rate. Performance stays comparable to rate-specific models while hardware cost drops substantially.
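The core mechanism, a frozen shared layer with swappable per-rate low-rank factors, can be sketched directly; a minimal PyTorch illustration assuming a standard LoRA-style parameterization (layer sizes and rate labels are invented):

```python
import torch
import torch.nn as nn

class RateAdaptedLinear(nn.Module):
    """Frozen shared weight plus a small low-rank adapter per code rate.

    y = W x + B_r A_r x, where W is shared and frozen and only the
    rank-r factors for the active code rate are stored and trained.
    """
    def __init__(self, d_in, d_out, code_rates, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # shared base stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.ParameterDict({r: nn.Parameter(torch.randn(rank, d_in) * 0.01)
                                   for r in code_rates})
        self.B = nn.ParameterDict({r: nn.Parameter(torch.zeros(d_out, rank))
                                   for r in code_rates})  # zero init: no delta at start

    def forward(self, x, rate: str):
        return self.base(x) + x @ self.A[rate].T @ self.B[rate].T

layer = RateAdaptedLinear(64, 64, code_rates=["r1_2", "r3_4"])
y = layer(torch.randn(8, 64), rate="r3_4")
```

Only the rank-4 factors differ between rates, which is where the hardware savings over storing full per-rate networks come from.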
LoRA Enables Modular Chemistry Prediction
Evaluates LoRA for parameter-efficient fine-tuning of LLMs on organic reaction datasets like USPTO and C-H functionalisation. LoRA matches full fine-tuning accuracy while preserving multi-task performance and mitigating catastrophic forgetting. The analysis also reveals distinct reactivity patterns that can guide adaptation.
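For reference, a typical LoRA setup with the Hugging Face `peft` library looks like the following; the base model, rank, and target modules here are illustrative choices, not values reported by the paper:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
config = LoraConfig(
    r=16,                                  # low-rank dimension
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
```

Because each adapter is tiny relative to the base model, separate adapters could be kept per reaction family and swapped at inference, which is what makes the approach modular.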
Locomo-Plus Tests LLM Cognitive Memory
Locomo-Plus benchmarks cognitive memory in LLM agents under cue-trigger disconnects, focusing on latent conversational constraints. It proposes constraint-consistency evaluation in place of string matching. Results reveal gaps in existing memory systems.
LLMs Tackle Agent-Based Model Replication
The study evaluates 17 LLMs on ODD-to-Python code generation for a predator-prey model, assessing executability, fidelity, and efficiency against a NetLogo baseline. GPT-4.1 performs best, but reliability varies across models.
LLMs Predict Stroke Outcomes from Notes
Fine-tuned LLMs such as Llama predict mRS scores from admission notes alone, achieving 33.9% exact accuracy and 76.3% binary accuracy at 90 days, on par with structured-data baselines. This enables seamless clinical integration without manual data extraction.
LLMs Outstrategize Humans in Games
The study uses AlphaEvolve to discover interpretable models of human and LLM strategic behavior from data. Analysis of iterated rock-paper-scissors shows frontier LLMs are capable of deeper strategic play than humans. The models provide a foundation for understanding behavioral differences in interactions.
LLMs Generate Planning Abstractions
The method prompts pretrained LLMs to create qualitative numerical planning (QNP) abstractions for generalized planning from domain and task descriptions. Automated debugging detects and fixes errors iteratively. With this guidance, LLMs produce useful planning abstractions.
LLMs Fail Cultural Recipes
Unlike humans, LLMs generate culturally unrepresentative recipe adaptations. Their outputs ignore the cultural-distance correlations observed in the GlobalFusion dataset. The issues stem from weak cultural representations and novelty inflation.
LLMs Accelerate Systematic Mapping
Experience report on using LLMs for systematic mapping studies. Highlights time savings in screening and extraction but notes challenges like hallucinations and prompt engineering. Offers lessons and recommendations for adoption.
LLM Evolutionary Sampling Speeds Databases
DBPlanBench exposes physical query plans to LLMs, which propose localized edits that are refined via evolutionary search. The LLMs leverage semantic knowledge for optimizations such as join reordering. The approach achieves up to 4.78x speedups, with optimizations transferring from small to large databases.
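The plan-edit loop can be sketched generically; `propose_edit`, `apply_edit`, and `run_plan` below are hypothetical stand-ins for the benchmark's actual interfaces, which the summary does not specify:

```python
def evolve_plan(plan, propose_edit, apply_edit, run_plan,
                generations=10, population=8, survivors=2):
    pool = [(run_plan(plan), plan)]          # (latency, plan) pairs
    for _ in range(generations):
        children = []
        for _, parent in pool:
            for _ in range(population // len(pool)):
                # The LLM suggests a localized edit (e.g., swap a join
                # order, change a join algorithm) given the current plan.
                child = apply_edit(parent, propose_edit(parent))
                children.append((run_plan(child), child))
        # Keep the fastest plans; measured latency is the fitness signal.
        pool = sorted(pool + children, key=lambda t: t[0])[:survivors]
    return pool[0][1]
```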
LLM Agents Auto-Optimize RecSys Models
A self-evolving system uses Google's Gemini LLMs to autonomously generate, train, and deploy recommendation model improvements. It features an Offline Agent for hypothesis generation and an Online Agent for production validation. Deployed successfully at YouTube, surpassing manual workflows.
LITT: Timing Transformer for EHR Events
LITT introduces a Timing-Transformer architecture that aligns sequential events on a virtual relative timeline for event-timing-focused attention. It enables personalized clinical trajectory interpretations. Validated on EHR data from 3,276 breast cancer patients to predict cardiotoxicity onset.
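A generic sketch of timing-focused attention follows: events sit on a relative timeline and attention logits are shifted by an embedding of the (bucketed) time gap between events. LITT's exact formulation is not given in the summary; the bucketing scheme and day units here are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareAttention(nn.Module):
    """Single-head attention with a learned bias over inter-event time gaps."""
    def __init__(self, d_model, num_buckets=32):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.time_bias = nn.Embedding(num_buckets, 1)
        self.num_buckets = num_buckets

    def forward(self, x, times):
        # x: (batch, seq, d_model); times: (batch, seq) event times in days
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5
        gaps = (times[:, :, None] - times[:, None, :]).abs()
        buckets = gaps.clamp(max=self.num_buckets - 1).long()  # crude bucketing
        logits = logits + self.time_bias(buckets).squeeze(-1)  # shift by time gap
        return F.softmax(logits, dim=-1) @ v
```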
Latent Flows Model Reaction Trajectories
LatentRxnFlow models reactions as continuous latent trajectories via Conditional Flow Matching from reactant-product pairs. It achieves SOTA accuracy on USPTO with trajectory diagnostics and uncertainty estimation. This enables error mitigation and more reliable predictions.
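Conditional Flow Matching has a standard training objective; below is a minimal sketch with linear interpolation paths, assuming x0 and x1 are latent encodings of reactants and products (the paper's exact parameterization may differ):

```python
import torch
import torch.nn.functional as F

def cfm_loss(v_theta, x0, x1, cond):
    """Conditional Flow Matching loss for a reactant -> product trajectory.

    Sample t ~ U(0,1), set x_t = (1 - t) * x0 + t * x1, and regress the
    model's velocity v_theta(x_t, t, cond) onto the target x1 - x0.
    """
    t = torch.rand(x0.shape[0], 1, device=x0.device)
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0                      # constant velocity along the path
    return F.mse_loss(v_theta(x_t, t, cond), target)
```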
Large-Scale AI Social Simulation Launched
AIvilization v0 deploys a resource-constrained artificial society of unified LLM agents. It features hierarchical planning, adaptive profiles, and human steering for long-horizon autonomy. The simulation reproduces stylized facts of real markets, such as wealth stratification.
LAP Achieves Zero-Shot Robot Embodiment Transfer
Language-Action Pre-training (LAP) represents robot actions in natural language, enabling zero-shot transfer across embodiments without fine-tuning. LAP-3B, a 3B-parameter vision-language-action (VLA) model, delivers over 50% success on novel robots and tasks. The approach enables efficient adaptation and unifies action prediction with VQA.
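As a toy illustration of actions-as-language, here is a hypothetical round-trip between a numeric end-effector command and a text string; the template and units are invented, not LAP's actual encoding:

```python
import re

def action_to_text(dx, dy, dz, gripper):
    """Render a low-level action as a natural-language token sequence."""
    state = "close" if gripper else "open"
    return f"move gripper by ({dx:+.1f}, {dy:+.1f}, {dz:+.1f}) cm and {state} it"

def text_to_action(text):
    """Parse the generated text back into numbers for the controller."""
    nums = [float(n) for n in re.findall(r"[-+]?\d+\.\d+", text)]
    return nums, "close" in text

print(action_to_text(1.5, -0.2, 0.0, gripper=True))
# -> "move gripper by (+1.5, -0.2, +0.0) cm and close it"
```

Because the action space is plain text, the same model can emit commands for any embodiment whose actions fit the vocabulary, which is what makes zero-shot embodiment transfer and joint training with VQA possible.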