All Updates

Page 736 of 752

February 13, 2026

๐ŸŽ
Apple Machine Learningโ€ข65d ago

Complete Hyperparameter Transfer for Scaling

Apple ML extends μP parameterisations with Complete(d) Parameterisation for hyperparameter transfer, covering scaling across modules, width, depth, batch size, and training duration. This enables hyperparameter search on small models whose optima transfer to large-scale ones.

#research#apple-ml#mu-p
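For context on what "hyperparameter transfer" buys you, here is a minimal illustrative sketch of the standard μP-style width rule (hidden-layer Adam learning rates scale roughly as 1/width), not Apple's Complete(d) Parameterisation itself; the widths and base rate are hypothetical:

```python
def transfer_lr(base_lr: float, base_width: int, target_width: int) -> float:
    """Rescale a hidden-layer learning rate tuned at base_width to target_width.

    Under a muP-style parameterisation with Adam, hidden-layer learning
    rates scale roughly as 1/width, so the optimum found on a small proxy
    model transfers to the wide model after this rescaling.
    """
    return base_lr * base_width / target_width

# Tune on a width-256 proxy, then deploy at width 4096:
lr_small = 1e-3
lr_large = transfer_lr(lr_small, 256, 4096)
print(lr_large)  # 6.25e-05
```

The point of such schemes is that the expensive sweep happens once, on the cheap proxy; the large run inherits the tuned values by formula.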
๐ŸŽ
Apple Machine Learningโ€ข65d ago

Cadmus: Low-Cost Program Synthesis System

Apple ML introduces Cadmus, a small-scale system for autoregressive program synthesis. It features an integer virtual machine, a dataset of diverse true programs, and a transformer model trained for under $200 of compute. This setup enables controlled experiments that sidestep the out-of-distribution (OOD) challenges and high resource demands of large LLMs.

#research#apple-ml#cadmus
๐ŸŽ
Apple Machine Learningโ€ข65d ago

Cadmus Enables Cheap Program Synthesis Experiments

Apple Machine Learning introduces Cadmus, a small-scale system for autoregressive program synthesis. It features an integer virtual machine, a dataset of diverse true programs, and a transformer model trained for under $200 compute. This setup allows controlled experimentation without the complexities of large LLMs.

#research#apple#cadmus
๐ŸŽ
Apple Machine Learningโ€ข65d ago

Cadmus: Cheap Program Synthesis System

Apple unveils Cadmus, a small-scale system for autoregressive program synthesis. It features an integer VM, diverse program dataset, and transformer model trained under $200 compute. Enables controlled experiments bypassing LLM challenges like OOD and tokenization.

#research#apple-ml#cadmus
๐ŸŽ
Apple Machine Learningโ€ข65d ago

Cadmus: Affordable Autoregressive Program Synthesis

Apple ML introduces Cadmus, a small-scale system for autoregressive program synthesis. It features an integer virtual machine, a dataset of diverse true programs, and a transformer model trained for under $200 compute. This setup enables controlled experiments avoiding LLM pitfalls like OOD issues and high compute demands.

#research#apple-ml#cadmus

February 12, 2026

💰
TechCrunch AI • 65d ago

Pinterest Tops ChatGPT Searches Despite Earnings Miss

Pinterest's stock fell after it missed earnings expectations, but the company highlighted higher-than-expected usage as a positive, claiming it now handles more searches than ChatGPT.

#other#pinterest#na
💰
TechCrunch AI • 65d ago

IBM Triples US Entry-Level Hires for AI

IBM plans to triple its entry-level hiring in the U.S. by 2026. The roles will involve tasks reshaped for the AI era, with the aim of integrating new talent into AI-driven workflows.

#ibm#hiring#ai-talent
⚛️
Ars Technica AI • 65d ago

OpenAI's Ultra-Fast Coding Model Launches

OpenAI unveiled GPT-5.3-Codex-Spark, a coding model 15 times faster than its predecessor. It runs on plate-sized chips, bypassing Nvidia hardware. This enables unprecedented coding speed.

#launch#openai#gpt-53-codex-spark
🇬🇧
The Register - AI/ML • 65d ago

OpenAI Launches GPT-5.3 on Cerebras

OpenAI unveiled GPT-5.3-Codex-Spark, its first model running on Cerebras Systems' CS3 AI accelerators. The model delivers 1,000 tokens per second, rivaling Nvidia hardware. It features the world's fastest on-chip memory.

#launch#openai#gpt-53
💰
TechCrunch AI • 65d ago

Musk's Moonbase for AI Satellites

Elon Musk envisions Moonbase Alpha as a joint project for SpaceX and xAI. The concept includes a lunar mass driver for launching AI satellites, with the aim of deep-space expansion.

#spacex-xai#space#ai-satellites
🟢
NVIDIA Blog • 65d ago

Inaugural NVIDIA AI Day Hits São Paulo

NVIDIA's worldwide AI Days tour stopped in São Paulo, Brazil. The event unites AI enthusiasts, developers, researchers, and startups, focusing on code, compute, and community connections.

#launch#nvidia#ai-days
💼
VentureBeat • 65d ago

Nvidia's DMS Slashes LLM Costs 8x

Nvidia's DMS compresses the LLM KV cache by up to 8x, reducing memory costs without accuracy loss. This enables longer chain-of-thought reasoning and more parallel reasoning paths, and it outperforms heuristic eviction and paging methods.

#research#nvidia#dms
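To see why an 8x KV-cache reduction matters, here is a back-of-the-envelope sketch of KV-cache memory using the standard formula; the model dimensions are hypothetical, not from the article:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Memory for a transformer KV cache: K and V tensors per layer,
    each of shape (kv_heads, seq_len, head_dim), at the given precision."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 32-layer model, 8 KV heads of dim 128, 32k-token context, fp16:
full = kv_cache_bytes(32, 8, 128, 32_768)
print(full / 2**30)      # 4.0 GiB uncompressed, per sequence
print(full / 8 / 2**30)  # 0.5 GiB at the reported 8x compression
```

Because the cache grows linearly with sequence length and with the number of parallel decoding paths, that 8x headroom translates directly into longer chains of thought or more concurrent reasoning branches in the same memory budget.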