All Updates
February 13, 2026
Faster Convergence for Federated VIs
An Apple ML paper advances federated optimization for stochastic variational inequalities, providing improved convergence rates that close gaps with known convex-optimization bounds. A refined analysis yields tighter guarantees for Local Extra SGD on smooth monotone VIs.
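For intuition, the extragradient step underlying methods like Local Extra SGD evaluates the operator at a lookahead point before committing to an update. A minimal single-machine sketch in Python (the federated algorithm adds local steps and client averaging; the toy operator and step size below are illustrative, not the paper's setup):

    import numpy as np

    def extragradient_step(z, F, lr):
        # Lookahead (extrapolation) point, then update using the operator
        # evaluated there. Local Extra SGD runs several such steps per
        # client before averaging; this shows the core step only.
        z_half = z - lr * F(z)
        return z - lr * F(z_half)

    # Toy monotone operator: the saddle gradient of f(x, y) = x * y,
    # i.e. F(x, y) = (y, -x), whose solution is the origin.
    F = lambda z: np.array([z[1], -z[0]])

    z = np.array([1.0, 1.0])
    for _ in range(200):
        z = extragradient_step(z, F, lr=0.3)
    print(z)  # approaches [0, 0]; plain gradient steps would spiral outward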
Custom Kernels for All from Codex & Claude
Hugging Face launches custom kernels powered by OpenAI's Codex and Anthropic's Claude, now available to all users on the platform, expanding access to AI-driven kernel customization.
Complete Hyperparameter Transfer Across Scales
Apple ML extends hyperparameter transfer from small to large models across modules, width, depth, batch size, and training duration. Building on μP, the paper introduces the Complete(d) Parameterisation, which unifies width and depth scaling so that hyperparameters tuned on small models carry over to large-scale ones.
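The practical payoff of such parameterisations is that a learning rate tuned at a small width can be reused at a large one. A toy Python sketch of the commonly cited μP rule of thumb for Adam, scaling per-layer rates by fan-in (this is not the Complete(d) scheme itself, and base_width is illustrative):

    import torch

    def mup_lr_groups(model, base_lr, base_width):
        # Rough muP-style rule of thumb for Adam: weight matrices get
        # base_lr * base_width / fan_in, so a rate tuned at base_width
        # transfers to wider models; vectors (biases, norms) are unscaled.
        groups = []
        for p in model.parameters():
            if p.dim() >= 2:
                groups.append({"params": [p],
                               "lr": base_lr * base_width / p.shape[1]})
            else:
                groups.append({"params": [p], "lr": base_lr})
        return groups

    # Tune base_lr on a width-256 proxy model, then reuse it at width 4096.
    wide = torch.nn.Sequential(torch.nn.Linear(4096, 4096), torch.nn.ReLU(),
                               torch.nn.Linear(4096, 4096))
    opt = torch.optim.Adam(mup_lr_groups(wide, base_lr=1e-3, base_width=256))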
Cadmus: Low-Cost Program Synthesis System
Apple ML introduces Cadmus, a small-scale system for autoregressive program synthesis. It pairs an integer virtual machine with a dataset of diverse true programs and a transformer model trained for under $200 of compute. The setup enables controlled experiments that sidestep the difficulties of large LLMs, such as out-of-distribution inputs, tokenization quirks, and high resource demands.
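To make the setup concrete, a toy integer VM in this spirit executes short instruction sequences over a register file and emits (program, input, output) triples for training. A hedged Python sketch; the three-register machine and opcodes below are hypothetical, not Cadmus's actual instruction set:

    import random

    # Hypothetical three-register integer VM; opcodes are illustrative.
    OPS = {
        "ADD": lambda a, b: a + b,
        "SUB": lambda a, b: a - b,
        "MUL": lambda a, b: a * b,
    }

    def run(program, r0):
        # Execute (op, dst, src1, src2) instructions on registers [r0, 0, 0].
        regs = [r0, 0, 0]
        for op, dst, s1, s2 in program:
            regs[dst] = OPS[op](regs[s1], regs[s2])
        return regs[0]

    def sample_pair(length=4):
        # Sample a random program plus one input/output example for it.
        prog = [(random.choice(list(OPS)), random.randrange(3),
                 random.randrange(3), random.randrange(3))
                for _ in range(length)]
        x = random.randrange(-10, 11)
        return prog, x, run(prog, x)

    prog, x, y = sample_pair()
    print(prog, x, "->", y)  # one (program, input, output) training triple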
February 12, 2026
Pinterest Claims More Searches Than ChatGPT Despite Earnings Miss
Pinterest's stock fell after the company missed earnings expectations, but it pointed to higher-than-expected usage as a bright spot, claiming it now handles more searches than ChatGPT.
IBM Triples US Entry-Level Hires for AI Era
IBM plans to triple its entry-level hiring in the U.S. by 2026, with the new roles featuring tasks adapted to the AI era. The shift aims to build an AI-ready talent pool and integrate new hires into AI-driven workflows.
OpenAI Launches Ultra-Fast Coding Model on Cerebras
OpenAI unveiled GPT-5.3-Codex-Spark, a coding model 15 times faster than its predecessor and its first to run on Cerebras Systems' CS3 AI accelerators. The plate-sized chips, which feature the world's fastest on-chip memory, let the model deliver 1,000 tokens per second while bypassing Nvidia GPUs.
Musk's Moonbase Alpha Vision for AI Satellites
Elon Musk has outlined a vision for SpaceX and xAI centered on Moonbase Alpha, including a lunar mass driver that would launch AI satellites into deep space. The concept pairs SpaceX's spaceflight expertise with xAI's technology.