PIL: Linear Proxies for Unlearnable Samples



💡 100x faster unlearnable examples via linear proxies – an essential data-privacy tool (ICLR 2026)

⚡ 30-Second TL;DR

What changed

Replaces DNN proxies with linear models for PGD optimization

Why it matters

Democratizes unlearnable examples for photographers/users, enabling practical data protection against model scraping at low cost.

What to do next

Run PIL (https://github.com/jinlinll/pil) on your own images and verify unlearnability against a ResNet trained on the perturbed data.

Who should care: Researchers & Academics

PIL generates unlearnable examples using lightweight linear proxies instead of DNNs, slashing compute costs. It exploits the insight that perturbations induce linear behavior in deep models. Offers comparable protection with far superior efficiency on CIFAR/ImageNet.

Key Points

  1. Replaces DNN proxies with lightweight linear models for PGD optimization
  2. Key insight: unlearnable samples increase model linearity, measured with an FGSM-based metric
  3. Generation time drops from 15+ GPU hours (REM) to minutes, and scales to high-resolution images
  4. Open-source code lets users protect data privacy without heavy compute
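The core recipe (PGD over a linear proxy instead of a DNN) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the error-minimizing objective, the hypothetical `pgd_unlearnable` helper, and all hyperparameter values are assumptions for demonstration.

```python
import numpy as np

def pgd_unlearnable(x, y, W, b, eps=8/255, alpha=2/255, steps=20):
    """Craft an error-minimizing perturbation for one sample using a
    linear proxy f(x) = W @ x + b with softmax cross-entropy loss.
    Driving the proxy's loss DOWN makes the sample 'too easy to learn',
    which is the error-minimizing idea behind unlearnable examples."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        logits = W @ (x + delta) + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # cross-entropy gradient w.r.t. the input: W^T (p - one_hot(y))
        g = W.T @ (p - np.eye(len(p))[y])
        delta -= alpha * np.sign(g)        # descend the loss (error-minimizing)
        delta = np.clip(delta, -eps, eps)  # keep the L_inf budget
    return np.clip(x + delta, 0.0, 1.0)   # stay in valid pixel range
```

Because the proxy is a single matrix-vector product, each PGD step is trivially cheap, which is where the claimed speedup over DNN proxies comes from.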

Impact Analysis

By replacing expensive DNN proxies with linear models, PIL cuts generation time from 15+ GPU hours to minutes, making unlearnable examples practical for photographers and everyday users who want to protect their images from model scraping at low cost.

Technical Details

PIL optimizes perturbations with PGD on a linearized proxy model. The perturbations transfer to deep networks because unlearnable samples induce linear behavior in the models trained on them (quantified with an FGSM-based linearity metric), and this induced linearity holds across perturbation methods such as EM, REM, and TAP.
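One simple way to operationalize "the model behaves linearly under perturbation" is to compare the actual loss change from an FGSM step against its first-order Taylor prediction. This is an illustrative sketch under that assumption; the paper's exact FGSM metric is not spelled out in this summary, and `fgsm_linearity_gap` is a hypothetical name.

```python
import numpy as np

def fgsm_linearity_gap(loss_fn, grad_fn, x, eps=8/255):
    """Illustrative linearity score (assumption: not necessarily the
    paper's exact metric). Take an FGSM step x + eps*sign(grad) and
    compare the actual loss change with the first-order prediction
    eps * ||grad||_1. A gap near 0 means the loss responds linearly
    to the perturbation."""
    g = grad_fn(x)
    x_adv = x + eps * np.sign(g)
    actual = loss_fn(x_adv) - loss_fn(x)
    predicted = eps * np.abs(g).sum()  # first-order term of an FGSM step
    return abs(actual - predicted)
```

For a genuinely linear loss the gap is exactly zero, while any curvature makes it positive; under this reading, training on unlearnable data should shrink the gap.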



AI-curated news aggregator. All content rights belong to original publishers.
Original source: 机器之心