All Updates
February 12, 2026
OmniSapiens: HARPO for Social Behaviors
OmniSapiens-7B 2.0 uses HARPO, a reinforcement learning method, to train a unified model across heterogeneous social tasks. HARPO balances learning across tasks via modulated advantages. The model outperforms baselines by up to 16.85% while retaining robust reasoning.
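The summary does not spell out HARPO's modulation rule; as a minimal sketch of the general idea, per-task advantage normalization keeps heterogeneous reward scales from letting one social task dominate a shared policy update (all names below are illustrative, not the paper's API):

```python
import numpy as np

def modulated_advantages(rewards_by_task):
    """Normalize advantages within each task so heterogeneous reward
    scales contribute comparably to a shared policy update.
    (Illustrative stand-in for HARPO's modulation; not the paper's code.)"""
    advantages = {}
    for task, rewards in rewards_by_task.items():
        r = np.asarray(rewards, dtype=float)
        baseline = r.mean()         # per-task baseline
        scale = r.std() + 1e-8      # per-task scale
        advantages[task] = (r - baseline) / scale
    return advantages

# Example: two social tasks with very different reward magnitudes.
batch = {"empathy": [0.1, 0.3, 0.2], "negotiation": [10.0, 40.0, 25.0]}
print(modulated_advantages(batch))
```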
NSAM: Neuro-Symbolic Action Masking in DRL
NSAM learns symbolic models and action masks automatically during DRL to avoid infeasible actions, integrating symbolic reasoning and deep policy optimization so that each informs the other. Evaluations show improved sample efficiency and fewer constraint violations.
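For intuition, a generic action-masking sketch (the feasibility mask is hand-supplied here; in NSAM it would come from the learned symbolic model):

```python
import numpy as np

def masked_policy(logits, feasible):
    """Remove infeasible actions from the policy distribution by setting
    their logits to -inf before the softmax."""
    masked = np.where(feasible, logits, -np.inf)
    exp = np.exp(masked - masked[feasible].max())  # numerically stable softmax
    return exp / exp.sum()

logits = np.array([2.0, 0.5, 1.0, -1.0])
feasible = np.array([True, False, True, True])  # action 1 violates a precondition
print(masked_policy(logits, feasible))          # probability 0 for action 1
```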
Neurosymbolic AI Conquers Schauder Theory
White paper integrates nonuniform ellipticity breakthrough with topos-theoretic LRMs. Formalizes sharp Schauder estimates using ghost equations. Enables autonomous proofs in calculus of variations.
NAEs Balance Interpretability and Accuracy
Neural Additive Experts use a mixture of experts per feature with context-gated integration for flexible additivity. Targeted regularization ensures smooth transitions from additive to interactive models. They outperform purely additive baselines on accuracy while preserving per-feature explanations.
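A minimal sketch of the additive-experts idea under stated assumptions: one small expert network per feature, combined by a context-dependent gate. The class and layer choices here are ours, not the paper's:

```python
import torch
import torch.nn as nn

class NeuralAdditiveExperts(nn.Module):
    """Toy additive model: one small expert MLP per input feature,
    combined by a context-dependent gate. Illustrative only."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.gate = nn.Linear(n_features, n_features)  # context-gated weights

    def forward(self, x):                        # x: (batch, n_features)
        contribs = torch.cat(
            [self.experts[j](x[:, j:j+1]) for j in range(x.shape[1])], dim=1
        )                                        # per-feature contributions
        weights = torch.softmax(self.gate(x), dim=1)
        return (weights * contribs).sum(dim=1)   # gated additive prediction

model = NeuralAdditiveExperts(n_features=4)
print(model(torch.randn(8, 4)).shape)  # torch.Size([8])
```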
Multi-Layer AI Malware Detector
SecureScan uses logistic regression, heuristics, and VirusTotal for URL/file/binary triage. Achieves 93.1% accuracy with balanced precision/recall. Employs gray-zone logic to cut false positives.
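A sketch of gray-zone logic in general: scores near the decision boundary are escalated to heavier checks (e.g., a VirusTotal lookup) instead of being classified outright. The thresholds below are invented for illustration:

```python
def triage(score, low=0.35, high=0.65):
    """Three-way verdict from a classifier probability.
    Gray-zone samples get escalated rather than risking a false positive."""
    if score >= high:
        return "malicious"
    if score <= low:
        return "benign"
    return "escalate"  # gray zone: route to heuristics / VirusTotal

for s in (0.12, 0.5, 0.91):
    print(s, "->", triage(s))
```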
MoE for Drift-Aware Malicious Traffic Detection
MalMoE detects encrypted malicious traffic using a graph-based Mixture-of-Experts built to handle graph drift. It selects optimal 1-hop-GNN experts via a redesigned gate model, and is trained with a two-stage strategy plus augmentation for real-time precision.
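The redesigned gate itself is paper-specific; a generic top-1 gating sketch over stand-in experts (plain linear layers in place of 1-hop GNNs) illustrates the selection step:

```python
import torch
import torch.nn as nn

class Top1Gate(nn.Module):
    """Pick one expert per sample from its graph embedding.
    Stand-in for MalMoE's gate; experts are plain linear layers for brevity."""
    def __init__(self, dim, n_experts):
        super().__init__()
        self.scorer = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, 2) for _ in range(n_experts))

    def forward(self, h):                        # h: (batch, dim) graph embeddings
        choice = self.scorer(h).argmax(dim=1)    # top-1 expert per flow
        out = torch.stack(
            [self.experts[c](h[i]) for i, c in enumerate(choice.tolist())]
        )
        return out, choice

gate = Top1Gate(dim=32, n_experts=4)
logits, chosen = gate(torch.randn(5, 32))
print(logits.shape, chosen.tolist())
```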
Survey: MLLMs for Chart Understanding via Multimodal Fusion
Survey organizes MLLM evolution for chart understanding via multimodal fusion. Introduces taxonomy of tasks and datasets. Highlights limitations in perception and reasoning, suggesting alignment and RL enhancements.
MIPLIB-NL for Industrial Optimization Benchmarks
MIPLIB-NL creates natural-language optimization benchmarks from real MIPLIB 2017 instances via structure-aware reverse engineering. Includes 223 validated reconstructions tying NL specs to solver code. Reveals LLM failures on large-scale problems.
MetaphorStar Masters Image Metaphor Reasoning
MetaphorStar uses end-to-end visual RL for image metaphor understanding, featuring the TFQ-Data dataset, the TFQ-GRPO method, and TFQ-Bench. MetaphorStar-32B sets SOTA on implication benchmarks, outperforming 20+ MLLMs including Gemini-3.0-pro. Scaling analyses show it also improves general visual reasoning.
MERIT Boosts LLM Negotiation Skills
AgoraBench tests LLMs in nine bargaining scenarios like deception; utility metrics measure human alignment. MERIT feedback via prompting/finetuning elicits deeper strategy and opponent awareness. Outperforms baselines in negotiation power and acquisition.
MEL Boosts LLM Reasoning via Meta-Experience
Meta-Experience Learning (MEL) enhances RLVR by internalizing error-derived meta-experience into LLM memory. Uses self-verification for contrastive analysis of trajectories. Achieves 3.92%-4.73% Pass@1 gains across model sizes.
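A rough sketch of the loop as described, with stub functions standing in for the model's self-verification and contrastive distillation (all names are hypothetical):

```python
def verify(question, traj):
    # Stub self-verifier: in MEL the model would check its own answer.
    return traj["answer"] == question["gold"]

def distill(good, bad):
    # Stub contrastive analysis: in MEL the model writes the lesson itself.
    return f"Prefer '{good['strategy']}' over '{bad['strategy']}'."

def mel_step(question, trajectories, memory):
    """One illustrative Meta-Experience Learning step: self-verify rollouts,
    contrast a success against a failure, store the distilled meta-experience."""
    passed = [t for t in trajectories if verify(question, t)]
    failed = [t for t in trajectories if not verify(question, t)]
    if passed and failed:
        memory.append(distill(passed[0], failed[0]))
    return memory

q = {"gold": "42"}
rollouts = [{"answer": "42", "strategy": "decompose"},
            {"answer": "41", "strategy": "guess"}]
print(mel_step(q, rollouts, memory=[]))
```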
MeCSAFNet Boosts Multispectral Segmentation
MeCSAFNet uses dual ConvNeXt encoders for visible and non-visible channels in multispectral land cover segmentation. It employs smooth attentional feature fusion with CBAM and ASAU activation. Outperforms baselines like U-Net and SegFormer by up to 19% mIoU on FBP and Potsdam datasets.
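As a simplified illustration of attentional feature fusion, a CBAM-style channel-attention block over two encoder streams (a sketch, not MeCSAFNet's exact module):

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Simplified CBAM-style channel attention fusing two encoder streams
    (visible and non-visible). Illustrative only."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, 2 * channels), nn.Sigmoid()
        )

    def forward(self, vis, nir):                 # each: (B, C, H, W)
        x = torch.cat([vis, nir], dim=1)         # stack streams channel-wise
        attn = self.mlp(x.mean(dim=(2, 3)))      # global pool -> channel weights
        return x * attn[:, :, None, None]        # reweighted fused features

fuse = ChannelAttentionFusion(channels=64)
print(fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)).shape)
```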
LRMs Fail to Transfer Reasoning to ToM
Study compares reasoning and non-reasoning LLMs on ToM benchmarks, finding no consistent gains and sometimes worse performance. Analyses reveal slow-thinking collapse, a need for adaptive reasoning, and option-matching shortcuts. Interventions like S2F and T2M mitigate these issues.
LOREN: Low-Rank Adaptation for Neural Receivers
LOREN introduces low-rank adapters to enable code-rate adaptation in neural receivers without storing separate weights. It freezes a shared base network and trains lightweight adapters per code rate. Achieves comparable performance with major hardware savings.
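A minimal sketch of the mechanism, assuming a frozen shared linear layer plus one low-rank adapter per code rate; dimensions and names are illustrative:

```python
import torch
import torch.nn as nn

class RateAdaptiveLayer(nn.Module):
    """Frozen base layer plus one low-rank adapter per code rate,
    in the spirit of LOREN (illustrative, not the paper's implementation)."""
    def __init__(self, dim, rates, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)          # shared weights stay fixed
        self.adapters = nn.ModuleDict({
            r: nn.Sequential(nn.Linear(dim, rank, bias=False),
                             nn.Linear(rank, dim, bias=False))
            for r in rates
        })

    def forward(self, x, rate):
        return self.base(x) + self.adapters[rate](x)  # base + low-rank delta

layer = RateAdaptiveLayer(dim=64, rates=["r12", "r34"])
print(layer(torch.randn(4, 64), rate="r34").shape)
```

Only the tiny adapter weights need to be stored and trained per rate, which is where the hardware savings come from.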
LoRA Enables Modular Chemistry Prediction
Evaluates LoRA for parameter-efficient fine-tuning of LLMs on organic reaction datasets like USPTO and C-H functionalisation. Matches full fine-tuning accuracy while preserving multi-task performance and mitigating forgetting. Reveals distinct reactivity patterns for better adaptation.
Locomo-Plus Tests LLM Cognitive Memory
Locomo-Plus benchmarks cognitive memory in LLM agents under cue-trigger disconnects, focusing on latent conversational constraints. It proposes constraint consistency evaluation over string-matching. Reveals gaps in existing memory systems.
LLMs Tackle Agent-Based Model Replication
Study evaluates 17 LLMs on ODD-to-Python code generation for a predator-prey model, assessing executability, fidelity, and efficiency against a NetLogo baseline. GPT-4.1 excels, but reliability varies across models.
LLMs Predict Stroke Outcomes from Notes
Fine-tuned LLMs like Llama predict mRS scores from admission notes alone, achieving 33.9% exact-match and 76.3% binary accuracy on 90-day outcomes and matching structured-data baselines. This enables seamless clinical integration without manual data extraction.
LLMs Outstrategize Humans in Games
Uses AlphaEvolve to discover interpretable models of human and LLM strategic behavior from data. Analysis of iterated rock-paper-scissors shows frontier LLMs are capable of deeper strategy than humans, providing a foundation for understanding behavioral differences in strategic interactions.
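For intuition about strategy depth in iterated rock-paper-scissors, a toy sketch: a level-1 player best-responds to the opponent's last move, and a level-2 player best-responds to that policy (neither is AlphaEvolve's discovered model):

```python
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {v: k for k, v in BEATS.items()}       # the move that beats each move

def level1(my_hist, opp_hist):
    """Best-respond to the opponent's last move."""
    return COUNTER[opp_hist[-1]] if opp_hist else random.choice(list(BEATS))

def level2(my_hist, opp_hist):
    """Best-respond to a level-1 opponent: counter their predicted counter."""
    if not my_hist:
        return random.choice(list(BEATS))
    return COUNTER[COUNTER[my_hist[-1]]]

def play(p1, p2, rounds=1000):
    h1, h2, score = [], [], 0
    for _ in range(rounds):
        a, b = p1(h1, h2), p2(h2, h1)
        score += (BEATS[a] == b) - (BEATS[b] == a)
        h1.append(a); h2.append(b)
    return score                                 # >0 means p1 wins on net

random.seed(0)
print(play(level2, level1))                      # level-2 exploits level-1
```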
LLMs Generate Planning Abstractions
Prompts pretrained LLMs to create QNP abstractions for generalized planning from domains and tasks. Automated debugging detects/fixes errors iteratively. Guided LLMs produce useful abstractions for qualitative numerical planning.