💰 钛媒体 • Fresh • collected 16m ago
iQiyi Hot Searches Expose AI Face-Buying Biz

💡 AI face-buying is booming in iQiyi short dramas: a key trend for video AI creators!
⚡ 30-Second TL;DR
What Changed
AI short dramas are using 'buy face' services to swap purchased actor likenesses into productions.
Why It Matters
Enables low-cost content production but raises ethical concerns about deepfakes on streaming platforms.
What To Do Next
Experiment with face-swap tools like Roop on short drama clips for content testing.
Who should care: Creators & Designers
🧠 Deep Insight
AI-generated analysis for this event.
🔑 Enhanced Key Takeaways
- The 'buy face' phenomenon involves the unauthorized use of celebrity or influencer likenesses, often facilitated by 'face-swapping' services sold on e-commerce platforms like Taobao and Xianyu, which bypass platform content moderation.
- iQiyi and other major Chinese streaming platforms are facing increased pressure from the Cyberspace Administration of China (CAC) to implement stricter digital watermarking and provenance verification for AI-generated content (AIGC) to combat deepfake fraud.
- The economic model behind these AI short dramas relies on low-cost, high-volume production: 'face-buying' lets producers swap popular actors' faces into multiple low-budget scripts, significantly reducing talent acquisition costs.
🛠️ Technical Deep Dive
- Implementation typically utilizes open-source deepfake frameworks such as DeepFaceLab or Roop, often modified with custom LoRA (Low-Rank Adaptation) models to improve facial consistency across different lighting conditions.
- The workflow involves a multi-stage pipeline: 1) face detection and alignment using MTCNN or RetinaFace, 2) feature extraction via pre-trained models like ArcFace, and 3) generative blending using GANs (Generative Adversarial Networks) or diffusion-based inpainting to integrate the target face into the source video frames.
- To evade detection, malicious actors employ 'adversarial noise' injection, which introduces subtle pixel-level perturbations that disrupt automated deepfake detection algorithms while remaining imperceptible to human viewers.
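The alignment step in the pipeline above (warping detected landmarks onto a canonical template before feature extraction) can be sketched in plain NumPy. This is an illustrative implementation of the standard Umeyama similarity estimate, not code taken from any of the frameworks named above; the 5-point template values are the widely used ArcFace defaults for a 112×112 crop.

```python
import numpy as np

def umeyama_similarity(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst landmarks (Umeyama, 1991).
    src, dst: (N, 2) arrays of corresponding points.
    Returns a 2x3 affine matrix usable with e.g. cv2.warpAffine."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    # Cross-covariance of the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Sign correction so the rotation has det = +1 (no reflection).
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    scale = (S[0] + d * S[1]) / var_src
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])

# Canonical 5-point layout (eyes, nose tip, mouth corners) for a
# 112x112 ArcFace-style crop -- the commonly cited default values.
ARCFACE_TEMPLATE = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
])
```

In a real pipeline the detector (MTCNN or RetinaFace) supplies the source landmarks, the matrix is passed to an image-warping routine, and the aligned crop is fed to the recognition model.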
🔮 Future Implications
AI analysis grounded in cited sources
- Mandatory AI-generated content labeling will become a legal requirement for all short-form video platforms in China by late 2026.
- Regulators are shifting from reactive monitoring to proactive enforcement, requiring platforms to embed cryptographic signatures in all AI-manipulated media.
- The market for 'face-buying' services will face a significant contraction due to the implementation of blockchain-based identity verification for digital actors.
- As platforms integrate decentralized identity (DID) systems, the ability to anonymously swap faces will be technically restricted by the lack of verified digital assets.
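The "cryptographic signatures" idea above can be illustrated with a minimal tamper-evidence sketch. Real provenance schemes embed public-key signatures and manifests in media metadata (e.g., C2PA-style approaches); the HMAC below is only a toy stand-in, with hypothetical function names, to show how a platform could detect post-signing manipulation of a media payload.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> bytes:
    """Produce a provenance tag: HMAC-SHA256 over the media payload.
    (Illustrative only; real schemes use asymmetric signatures.)"""
    return hmac.new(key, media_bytes, hashlib.sha256).digest()

def verify_media(media_bytes: bytes, tag: bytes, key: bytes) -> bool:
    """Constant-time check that the payload still matches its tag."""
    return hmac.compare_digest(sign_media(media_bytes, key), tag)
```

Any pixel-level edit after signing, including an adversarial face swap, changes the payload bytes and invalidates the tag, which is the property regulators are pushing platforms to enforce.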
⏳ Timeline
2023-07: CAC releases interim measures for the management of generative AI services, setting the stage for stricter AIGC oversight.
2024-01: iQiyi begins integrating AI-driven content moderation tools to detect unauthorized deepfake usage in user-uploaded content.
2025-09: iQiyi updates its platform terms of service to explicitly ban the use of unauthorized AI-generated likenesses in short drama productions.
AI-curated news aggregator. All content rights belong to original publishers.
Original source: 钛媒体



