All Updates

Page 344 of 894

March 31, 2026

βš›οΈ
量子位‒31d ago

Meta Intern Builds Self-Evolving Agent

A Chinese intern at Meta built an agent that writes its own code to achieve self-evolution, iteratively improving its own methods for better performance.

#agent#self-improvement#autonomous-evolution
πŸ”₯
36ζ°ͺβ€’31d ago

Yuejiang Revenue Jumps 31.7% on Embodied AI Boom

Yuejiang's 2025 report shows revenue of Β₯492 million, up 31.7% year over year. Cumulative collaborative-robot shipments topped 100,000 units, ranking first globally. Embodied-AI revenue grew severalfold, and R&D spending exceeded Β₯100 million, up 60%, focused on embodied intelligence.

#embodied-ai#robotics#cobots
🐯
θ™Žε—…β€’31d ago

Apple Intelligence Accidentally Launches in China

Apple accidentally pushed Apple Intelligence to some Chinese iOS 26.4 users for three hours on March 31, 2026, integrating Baidu Wenxin, Alibaba Qwen, and Google image technology. It was quickly withdrawn because required regulatory approvals, such as algorithm filing and safety assessments, were missing. Penalties are considered unlikely given the quick fix and geopolitical factors.

#china-ai-regs#apple#generative-ai
βš›οΈ
量子位‒31d ago

Nvidia-Backed Chinese Robot Sells Out

An expressive-face robot endorsed by Nvidia CEO Jensen Huang has sold out rapidly. The Chinese company behind it leverages data-driven embodied intelligence, a paradigm that is redefining robot hardware development.

#robotics#embodied-ai#hardware
πŸ’°
ι’›εͺ’体‒31d ago

AI Cloud to Global Intelligent Cornerstone

In an AI-defined future, global competitiveness rests on technical strength, ecosystem power, and deep customer ties. AI cloud is evolving from an overseas partner into a global intelligent foundation, and this deep customer binding fosters co-growth and innovation with clients worldwide.

#globalization#ecosystem-building#customer-binding
⚑
雷峰网‒31d ago

Google TurboQuant Sparks RaBitQ Plagiarism Row

Google Research's TurboQuant paper, submitted to ICLR 2026 and touted as a breakthrough in reducing LLM inference costs, faces accusations from the RaBitQ authors of downplaying prior methods, dismissing their theory as suboptimal, and running biased experiments. The controversy highlights how big tech can leverage branding and distribution channels to dominate academic discourse before peer validation, echoing Google's history of suppressing critical research.

#academic-controversy#big-tech-hegemony
πŸ’°
ι’›εͺ’体‒31d ago

SenseTime Defines AI Profit Path

SenseTime's 2025 performance report signals a clear strategy for crossing AI cycles: define a profitability path by turning technology into stable, replicable, and scalable product systems that deliver long-term value.

#ai-profitability#product-scaling#strategy-shift
πŸ“„
ArXiv AIβ€’31d ago

Why Semantic AI Memory Forgets

Every major AI memory system uses semantic organization for generalization but incurs inevitable interference, forgetting, and false recall. The paper proves this tradeoff for semantically continuous kernel-threshold memories, deriving four key results on rank, competitor mass, decay, and lures. Tests across five architectures confirm the vulnerability.
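
The item summarizes the result without the formal setup, so the following is a minimal sketch only (the vectors, item names, and threshold are invented, not the paper's kernel definitions): a similarity-threshold memory illustrates how a semantically close competitor produces false recall.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class KernelThresholdMemory:
    """Retrieve every stored item whose similarity to the cue exceeds tau."""
    def __init__(self, tau):
        self.tau = tau
        self.items = {}  # name -> embedding

    def store(self, name, vec):
        self.items[name] = vec

    def recall(self, cue):
        return [n for n, v in self.items.items() if cosine(cue, v) >= self.tau]

mem = KernelThresholdMemory(tau=0.9)
mem.store("cat", [1.0, 0.1, 0.0])
mem.store("dog", [0.9, 0.3, 0.1])   # semantically close competitor

# A cue aimed at "cat" also crosses the threshold for "dog": false recall.
hits = mem.recall([1.0, 0.15, 0.02])
print(hits)
```

Because retrieval is driven purely by semantic similarity, tightening tau trades false recall for forgetting, which is the tradeoff the paper formalizes.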

#forgetting#interference#semantic-retrieval
πŸ“„
ArXiv AIβ€’31d ago

Verification Hurts LLM Logic Tutoring

Researchers introduce a benchmark of 516 proof states for evaluating LLM feedback in propositional logic tutoring. They find verification boosts error-prone feedback by 85% but all pipelines fail beyond complexity 4-5. This challenges assumptions about verifiers improving tutoring universally.

#logic-proofs#multi-agent#benchmark
πŸ“„
ArXiv AIβ€’31d ago

Two-Stage LTNs Boost Predictive Monitoring

Neuro-symbolic approach integrates domain knowledge into predictive process monitoring using Logic Tensor Networks (LTNs) for fraud detection and healthcare. Two-stage optimization employs weighted axiom loss pretraining followed by rule pruning to balance data accuracy and logical constraints. It outperforms data-driven baselines, especially in compliance-limited scenarios.

#neuro-symbolic#process-mining#rule-pruning
πŸ“„
ArXiv AIβ€’31d ago

Survey of Uncertainty-Aware XAI

This arXiv paper surveys uncertainty-aware explainable AI (UAXAI), detailing how uncertainty is integrated into explanations via Bayesian, Monte Carlo, and Conformal methods. It outlines strategies like trustworthiness assessment, model constraining, and uncertainty communication. The work critiques fragmented evaluations and advocates unified principles linking uncertainty to robustness and human decisions.

#xai-evaluation#explainable-ai
πŸ“„
ArXiv AIβ€’31d ago

Neuro-Symbolic AI for Compliant Process Predictions

Presents a neuro-symbolic approach using Logic Tensor Networks (LTNs) to incorporate domain-specific process constraints into predictive monitoring, overcoming sub-symbolic methods' compliance issues. Follows a four-stage pipeline: feature extraction, rule extraction, knowledge base creation, and knowledge injection. Achieves higher compliance and accuracy than baselines in experiments.

#neuro-symbolic#process-mining#compliance-ai
πŸ“„
ArXiv AIβ€’31d ago

Multiverse: Text-Guided Cross-Game Level Blending

Multiverse is a language-conditioned generator for blending game levels across multiple games using textual prompts. It learns a shared latent space to align text instructions with level structures and employs multi-positive contrastive supervision for cross-domain links. This enables controllable blending via latent interpolation and zero-shot compositional generation.
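
The controllable-blending step reduces to interpolating in the shared latent space. A minimal sketch, with invented latent vectors standing in for Multiverse's real encoder outputs:

```python
def lerp(z_a, z_b, alpha):
    """Blend two level latents: alpha=0 yields game A, alpha=1 yields game B."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

z_game_a = [0.8, 0.1, 0.3]   # hypothetical latent for one game's level
z_game_b = [0.2, 0.9, 0.5]   # hypothetical latent for another game's level

# Midpoint latent; a decoder would turn this into a blended level.
z_blend = lerp(z_game_a, z_game_b, alpha=0.5)
print(z_blend)
```

Text conditioning would pick the endpoints and alpha; the interpolation itself is this simple once the latent space is shared across games.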

#text-to-level#contrastive-learning#game-ai
πŸ“„
ArXiv AIβ€’31d ago

MediHive: Decentralized Agents for Medical Reasoning

MediHive introduces a decentralized multi-agent framework for medical QA using LLMs, featuring self-assigning agents, evidence-based debates, and iterative fusion for consensus. It addresses limitations of centralized systems like scalability issues and single points of failure. The system achieves 84.3% accuracy on MedQA and 78.4% on PubMedQA, outperforming baselines.
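
The fusion step is not specified in detail here; one hypothetical reading of consensus formation is a majority vote over the agents' post-debate answers (the answers below are invented):

```python
from collections import Counter

def fuse(answers):
    """One fusion round: majority vote over the agents' current answers."""
    counts = Counter(answers)
    top, n = counts.most_common(1)[0]
    return top, n / len(answers)  # winning answer and its vote share

# Hypothetical debate round: most agents converge after sharing evidence.
answer, confidence = fuse(["B", "B", "A", "B", "C"])
print(answer, confidence)
```

In an iterative scheme, low-confidence rounds would trigger another debate instead of terminating, with no central coordinator required.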

#decentralized-agents#multi-agent-systems#medical-reasoning
πŸ“„
ArXiv AIβ€’31d ago

Gaps in EU AI Act Transparency Rules

EU AI Act Article 50(2) requires both human- and machine-readable labels for AI-generated content starting August 2026, a mandate that clashes with how generative AI systems actually operate. Analysis of fact-checking and synthetic-data-generation workflows shows post-hoc labeling is insufficient because outputs and workflows are non-deterministic. Three structural gaps demand treating transparency as core architecture rather than an afterthought.

#transparency#compliance#generative-ai
πŸ“„
ArXiv AIβ€’31d ago

FormalProofBench Tests AI Graduate Math Proofs

FormalProofBench is a new private benchmark evaluating AI models' ability to produce formally verified graduate-level math proofs in Lean 4. Problems are sourced from qualifying exams and textbooks in analysis, algebra, probability, and logic. Frontier models achieve up to 33.5% accuracy, with analysis of tool-use, failures, cost, and latency.
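
For readers unfamiliar with the target format: a Lean 4 proof is machine-checked, so grading reduces to whether the proof compiles. The toy theorem below is illustrative only and is not a FormalProofBench problem:

```lean
-- Illustrative only: a trivially simple Lean 4 theorem and proof,
-- far below the benchmark's graduate level, showing the verified format.
theorem add_comm_example (a b : β„•) : a + b = b + a :=
  Nat.add_comm a b
```

A benchmark problem would state a qualifying-exam-level theorem as the goal, and the model must supply a proof term the Lean checker accepts.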

#theorem-proving#formal-verification#math-benchmark
πŸ“„
ArXiv AIβ€’31d ago

daVinci-LLM Advances Open Pretraining Science

daVinci-LLM combines industrial-scale resources with full openness to explore the science of LLM pretraining. It releases a 3B-parameter model trained from scratch on 8T tokens using a "Data Darwinism" data-curation framework and a two-stage adaptive curriculum. Over 200 ablations identify data-processing depth as a key scaling factor and reveal domain-specific curation strategies.

#pretraining#open-source#data-darwinism
πŸ’Ό
VentureBeatβ€’31d ago

Claude Code Source Code Leaked

Anthropic accidentally leaked 59.8 MB of Claude Code's TypeScript source code via a source map file in an npm package. The 512,000-line codebase reveals an innovative memory architecture designed to combat context entropy in AI agents. Exposing the IP of this $2.5B-ARR product aids competitors such as Cursor.

#source-leak#agentic-ai#memory-architecture
🐯
θ™Žε—…β€’31d ago

Apple MacBook Gets Phone Chip at Β₯4599

Apple has integrated a phone-class chip into the MacBook, launching a Β₯4,599 model alongside the Air. Moving away from traditional laptop silicon makes this an affordable entry point into the Apple ecosystem. A hands-on review compares the Air and Neo experiences.

#apple-silicon#laptop-chips#affordable-hardware
πŸ“„
ArXiv AIβ€’31d ago

AlignOPT: LLM-GNN for COPs

AlignOPT aligns LLMs with graph neural solvers to overcome limitations of language-only approaches in combinatorial optimization. LLMs encode textual COP descriptions, while GNNs model graph structures for integrated representations. It achieves state-of-the-art results and strong generalization to unseen instances.
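
The integration step can be pictured as fusing the two encoders' outputs into one representation. A hypothetical sketch (the embeddings and the concatenation choice are invented for illustration, not AlignOPT's actual architecture):

```python
def fuse_representations(text_emb, graph_emb):
    # Joint representation a downstream solver head would consume:
    # textual problem semantics alongside instance-graph structure.
    return text_emb + graph_emb  # list concatenation

text_emb = [0.2, 0.7]         # invented LLM embedding of the COP description
graph_emb = [0.1, 0.4, 0.9]   # invented GNN embedding of the instance graph
joint = fuse_representations(text_emb, graph_emb)
print(len(joint))
```

The point of the alignment is that neither view alone suffices: text captures the problem statement, while the graph encodes the combinatorial structure the solver must exploit.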

#cop-solvers