All Updates

Page 741 of 752

February 12, 2026

🇨🇳
cnBeta (Full RSS) • 66d ago

Google Gemini 3.1 Pro Launch Imminent

Google appears set to release Gemini 3.1 Pro soon, with references to the model already spotted in public evaluation arenas. This follows recent launches such as Zhipu's open-source GLM-5 and DeepSeek's upgraded model with a larger context window.

#launch #google #gemini-31-pro
🇨🇳
cnBeta (Full RSS) • 66d ago

Gemini Blocks Disney Content Post-IP Claim

Google Gemini and related tools now refuse Disney character generation requests after Disney's IP infringement notice. The update rolled out about two months after Disney's December cease-and-desist letter.

#update #google-gemini #content-policy
📄
ArXiv AI • 66d ago

Wavelet Flows Speed Universe Reconstruction

Cosmo3DFlow uses a 3D wavelet transform and flow matching for efficient cosmological inference from N-body simulations. It addresses sparsity via spectral compression, enabling 50x faster sampling than diffusion models, and samples initial conditions in seconds at 128³ resolution.
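Wavelet-domain sparsity can be illustrated with a one-level 3-D Haar transform (a generic NumPy sketch; Cosmo3DFlow's actual wavelet and flow-matching pipeline is more involved):

```python
import numpy as np

def haar3d(x):
    """One level of an orthonormal 3-D Haar transform:
    split into average/detail coefficients along each axis."""
    for ax in range(3):
        a = np.moveaxis(x, ax, 0)
        avg = (a[0::2] + a[1::2]) / np.sqrt(2)
        det = (a[0::2] - a[1::2]) / np.sqrt(2)
        x = np.moveaxis(np.concatenate([avg, det]), 0, ax)
    return x

# A smooth density field: most Haar energy lands in the low-pass block,
# the kind of sparsity a spectral-compression step can exploit.
grid = np.linspace(-1, 1, 16)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
field = np.exp(-(X**2 + Y**2 + Z**2) / 0.2)
coeffs = haar3d(field)
```

Because the transform is orthonormal, total energy is preserved while concentrating into far fewer coefficients than the raw grid.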

#research #cosmo3dflow #v1
📄
ArXiv AI • 66d ago

VulReaD: KG-Guided Vulnerability Reasoning

VulReaD uses a security knowledge graph and teacher LLM for CWE-consistent vulnerability detection beyond binary classification. Student models are fine-tuned with ORPO for taxonomy-aligned reasoning. Boosts F1 scores significantly on real datasets.

#research #vulread #v1
📄
ArXiv AI • 66d ago

VLM-Enhanced RL for Autonomous Driving

Found-RL integrates foundation models into RL for end-to-end driving, using asynchronous batch inference to cut latency. It distills VLM guidance via VMR and AWAG, and shapes CLIP-based rewards through conditional alignment. The resulting lightweight policy matches VLM performance while running at 500 FPS.

#research #found-rl #v1
📄
ArXiv AI • 66d ago

Visual Jailbreaks Hit Image Editors

Vision-Centric Jailbreak Attack (VJA) uses visual inputs to bypass safety in image editing models. IESBench benchmark tests vulnerabilities with up to 80.9% success rates. A training-free defense via multimodal reasoning mitigates risks effectively.

#security #vja #iesbench
📄
ArXiv AI • 66d ago

VESPO Stabilizes Off-Policy LLM Training

VESPO introduces variational sequence-level soft policy optimization to tackle training instability in RL for LLMs caused by policy staleness and async execution. It derives a closed-form reshaping kernel for importance weights without length normalization. Experiments demonstrate stable training up to 64x staleness on math benchmarks.
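The role of a bounded sequence-level importance weight can be sketched as follows (a generic clipped log-ratio stand-in, not VESPO's actual variational kernel):

```python
import numpy as np

def sequence_weight(logp_new, logp_old, clip=4.0):
    """Sequence-level importance weight, no per-token length normalization.
    Bounding the total log-ratio keeps very stale trajectories from
    dominating or destabilizing the policy gradient."""
    log_ratio = np.sum(logp_new) - np.sum(logp_old)  # whole-sequence ratio
    return float(np.exp(np.clip(log_ratio, -clip, clip)))

# Fresh rollout: policies nearly agree, so the weight stays near 1.
w_fresh = sequence_weight(np.array([-1.0, -2.0]), np.array([-1.1, -2.1]))
# Stale rollout: the raw ratio would be e^15, but the weight stays bounded.
w_stale = sequence_weight(np.array([-1.0, -2.0]), np.array([-9.0, -9.0]))
```

The hard clip here is only illustrative; the paper derives a smooth closed-form reshaping of these weights.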

#research #vespo #v1
📄
ArXiv AI • 66d ago

Versor Revolutionizes Geometric Sequences

Versor uses Conformal Geometric Algebra (CGA) for sequence modeling with SE(3)-equivariance. Outperforms Transformers on N-body dynamics, topology, and benchmarks with fewer parameters. Offers linear complexity and interpretability via rotors.

#research #versor #v1
📄
ArXiv AI • 66d ago

V-STAR: Value-Guided RecSys Sampling

V-STAR addresses the probability-reward mismatch in generative recommender systems via value-guided decoding and sibling-relative RL. VED efficiently explores high-potential prefixes, while Sibling-GRPO focuses on decisive branches. Outperforms baselines in accuracy and diversity.

#research #v-star #v1
📄
ArXiv AI • 66d ago

Universal Multimodal Immune System Model

EVA is a cross-species, multimodal foundation model harmonizing transcriptomics and histology for immunology. It shows scaling laws and SOTA on 39 tasks from discovery to clinical trials. Open version released for transcriptomics research.

#research #eva #v1
📄
ArXiv AI • 66d ago

Unified Theory for Sketching Influence Functions

Develops a theory for random projections in computing influence functions, covering the unregularized, regularized, and factorized cases. Shows exact preservation conditions and handles out-of-range gradients via a leakage term. Guides sketch-size selection for scalable computation.
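The core trick can be sketched by forming influence scores in a randomly projected subspace (a hypothetical toy with a damped Gauss-Newton Hessian proxy, not the paper's exact estimators):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 200, 50, 32  # parameter dim, training examples, sketch size

G = rng.normal(size=(n, d))           # per-example training gradients
H = G.T @ G / n + 0.1 * np.eye(d)     # damped Gauss-Newton Hessian proxy
g_test = rng.normal(size=d)           # gradient at the test point

# Exact influence of example i: -g_i^T H^{-1} g_test (O(d^3) solve).
exact = -G @ np.linalg.solve(H, g_test)

# Sketched version: project to R^k with a random JL matrix S, then
# solve the much smaller k x k system.
S = rng.normal(size=(k, d)) / np.sqrt(k)
Gs, gs = G @ S.T, S @ g_test
Hs = Gs.T @ Gs / n + 0.1 * np.eye(k)
sketched = -Gs @ np.linalg.solve(Hs, gs)
```

The sketched solve costs O(k³) instead of O(d³); the paper's theory characterizes when (and how well) scores like `sketched` preserve `exact`.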

#research #influence-functions #random-projection
📄
ArXiv AI • 66d ago

TwiFF Enables Dynamic Visual CoT

TwiFF-2.7M dataset and model advance VCoT for videos via future frame generation. TwiFF-Bench evaluates reasoning trajectories. Outperforms baselines on dynamic VQA.

#research #twiff #v1
📄
ArXiv AI • 66d ago

Transformers Collapse to Low-Dim Manifolds

Transformer training on modular arithmetic tasks collapses high-dimensional parameters to 3-4D execution manifolds. This structure explains attention concentration, SGD integrability, and sparse autoencoder limits. Core computation occurs in reduced subspaces amid overparameterization.
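The claim can be probed by PCA on parameter snapshots: if training collapses onto a low-dimensional manifold, a handful of components explains nearly all trajectory variance (a synthetic illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
steps, d = 300, 1000

# Synthetic training trajectory: motion confined to a random 3-D subspace
# plus small noise, mimicking collapse onto a low-dim execution manifold.
basis = np.linalg.qr(rng.normal(size=(d, 3)))[0]          # orthonormal 3-D basis
coords = np.cumsum(rng.normal(size=(steps, 3)), axis=0)   # random walk within it
snapshots = coords @ basis.T + 0.01 * rng.normal(size=(steps, d))

# Effective dimension: PCA components needed for 95% of the variance.
X = snapshots - snapshots.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var = s**2 / np.sum(s**2)
eff_dim = int(np.searchsorted(np.cumsum(var), 0.95) + 1)
```

Despite 1000 ambient parameters, `eff_dim` comes out at 3, matching the planted subspace; the paper reports the analogous 3-4D structure for real modular-arithmetic training runs.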

#research #arxiv-ai #v1
📄
ArXiv AI • 66d ago

Transformer for Experimental NMR Structure Elucidation

NMRTrans uses set transformers on experimental NMR spectra for molecular structure elucidation, trained on NMRSpec corpus from literature. It models spectra as unordered peak sets aligning with NMR physics. Achieves SOTA Top-10 accuracy of 61.15% on benchmarks.
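The "unordered peak set" idea boils down to permutation-invariant pooling, sketched here with a toy embed-and-pool encoder (hypothetical shapes and names, not NMRTrans's architecture):

```python
import numpy as np

def set_encode(peaks, W):
    """Permutation-invariant set encoding: embed each peak independently,
    then pool with a symmetric function, so peak order cannot matter."""
    h = np.tanh(peaks @ W)   # per-peak embedding, shape (n_peaks, d)
    return h.mean(axis=0)    # symmetric pooling over the set

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))          # embeds (chemical shift, intensity) pairs
peaks = rng.normal(size=(5, 2))      # five peaks from one spectrum
shuffled = peaks[rng.permutation(5)]
z1, z2 = set_encode(peaks, W), set_encode(shuffled, W)
# z1 == z2: the representation ignores peak ordering, matching NMR physics.
```

A set transformer replaces the mean-pool with attention, but the invariance argument is the same.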

#research #nmrtrans #v1
📄
ArXiv AI • 66d ago

Topology Meets NNs Under Uncertainty

Integrates neural networks, topological data analysis, and Bayesian methods for AI in military domains. Covers image, time-series, graph applications like fraud detection. Emphasizes robustness and interpretability.

#research #topological-nn #v1
📄
ArXiv AI • 66d ago

Tokens Enable Emergent Resource Rationality

Inference-time scaling in language models leads to adaptive resource rationality without explicit cost rewards. Models shift from brute-force to analytic strategies as task complexity rises. Large reasoning models (LRMs) remain robust on challenging functions like XOR/XNOR, unlike instruction-tuned (IT) models.

#research #language-models #v1
📄
ArXiv AI • 66d ago

TokaMark Launches Fusion Plasma Benchmark

TokaMark standardizes AI evaluation on MAST tokamak data with unified multi-modal access and 14 tasks. Harmonizes formats, metadata, and protocols for reproducible comparisons. Includes baseline model; fully open-sourced for community use.

#launch #tokamark #v1
📄
ArXiv AI • 66d ago

Text Boosts Multimodal Anomaly Detection

Text-guided framework enhances weakly supervised multimodal video anomaly detection. Employs in-context learning for anomaly text augmentation and multi-scale bottleneck Transformer for fusion. Achieves state-of-the-art on UCF-Crime and XD-Violence benchmarks.

#research #text-guided #v1
📄
ArXiv AI • 66d ago

δ_TCB Measures LLM Prediction Stability

Introduces the δ_TCB metric to quantify the robustness of LLM internal states against perturbations, going beyond traditional accuracy. Linked to output-embedding geometry, it reveals prediction instabilities missed by perplexity and correlates with prompt-engineering effects in in-context learning.

#research #delta-tcb #v1
📄
ArXiv AI • 66d ago

Synthetic Underspecification for Agents

LHAW generates controllable, underspecified long-horizon tasks by removing information across goals, constraints, inputs, and context. Validates via agent trials, classifying the impact of each ambiguity type. Releases 285 task variants derived from existing benchmarks.

#research #arxiv-ai #v1