PLUS: OpenAI burns $15M daily, jet engines power data centers, and Meta’s DINOv2 maps trees
Good morning, AI enthusiast.
Anthropic just revealed a massive coordinated effort by top Chinese AI labs to steal its model’s “thoughts.” Using complex proxy networks, competitors like DeepSeek and MiniMax reportedly siphoned millions of queries to train their own systems on the cheap.
This “distillation” attack highlights a critical gap in current US export controls: foreign entities can bypass hardware sanctions by copying model outputs instead of acquiring restricted chips. It raises the question: if model outputs can be copied so easily, are compute restrictions actually effective at maintaining a competitive edge?
In today’s AI recap:
- Anthropic blocks massive model theft ring
- OpenAI’s $15M daily burn and new targets
- Data centers turn to jet engines for power
- Meta’s AI maps every tree in the UK
Anthropic exposes a massive Chinese AI theft ring
The Recap: Anthropic has identified and blocked a coordinated operation by three major Chinese AI labs that generated over 16 million illicit queries to distill Claude’s capabilities into their own models.
Unpacked:
- These organizations deployed extensive “hydra cluster” proxy networks to mask their identity and bypass regional restrictions by cycling through thousands of fraudulent accounts.
- DeepSeek, Moonshot, and MiniMax specifically targeted differentiated features like agentic reasoning and coding, with MiniMax alone generating 13 million exchanges to train its systems before launch.
- The operation undermines US export controls by allowing foreign entities to acquire frontier-level intelligence without needing the advanced compute infrastructure typically required for such breakthroughs.
Bottom line:
State-sponsored distillation campaigns threaten to nullify hardware sanctions by allowing competitors to copy advanced reasoning at a fraction of the development cost. Securing model outputs is now just as critical as protecting the weights themselves to maintain competitive advantages.
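For context on the mechanics: “distillation” in this sense is essentially supervised fine-tuning of a smaller student model on a stronger model’s outputs, with no access to its weights. Below is a minimal sketch, assuming PyTorch and Hugging Face Transformers; the model name and the training pair are illustrative placeholders, not details from the reported operation.

```python
# Minimal sketch of output-based distillation: fine-tune a small "student"
# model on (prompt, teacher-response) pairs harvested from a stronger model.
# The "gpt2" student and the single training pair are placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in student
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token
student = AutoModelForCausalLM.from_pretrained("gpt2")

# Each example pairs a prompt with the teacher model's answer to it.
pairs = [
    ("Explain quicksort in one sentence.",
     "Quicksort partitions a list around a pivot and recursively sorts each side."),
]

def collate(batch):
    texts = [prompt + "\n" + reply for prompt, reply in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    enc["labels"] = enc["input_ids"].clone()  # standard causal-LM objective
    return enc

loader = DataLoader(pairs, batch_size=1, collate_fn=collate)
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

student.train()
for batch in loader:
    loss = student(**batch).loss   # cross-entropy against the teacher's text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At the scale described above, millions of such pairs can transfer much of a teacher model’s behavior, which is why access to outputs alone is strategically sensitive.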
OpenAI burns $15M a day as compute plans shrink
The Recap: OpenAI is recalibrating its financial roadmap, significantly reducing its 2030 compute target to $600 billion while managing a cash burn of roughly $15 million per day.
Unpacked:
- The company aims to generate $280 billion in annual revenue by 2030, split roughly evenly between consumer and enterprise income, a target calibrated to its reduced infrastructure spending.
- Adoption remains explosive despite fierce competition, with ChatGPT hitting 900 million weekly active users and Codex surpassing 1.5 million developers as it competes directly with Anthropic.
- Investors are doubling down on this efficiency pivot, with Nvidia negotiating to invest up to $30 billion as part of a massive funding round that values the AI giant at $730 billion.
Bottom line:
This strategic reset signals that OpenAI is maturing from a research lab into a disciplined enterprise focused on sustainable margins rather than infinite spending. As hardware costs stabilize, the focus shifts from raw infrastructure accumulation to proving that massive scale can deliver profitability.
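For scale: a $15 million daily burn annualizes to roughly $5.5 billion ($15M × 365 ≈ $5.5B), a figure that underlines how steep the climb to $280 billion in annual revenue by 2030 actually is.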
Data centers turn to jet engines for power
The Recap: Tech companies are increasingly repurposing retired jet turbines to generate electricity for power-hungry AI data centers.
Unpacked:
- Operators are installing aeroderivative gas turbines, retired airplane engines converted into stationary generators that typically burn natural gas, to produce power on-site.
- Generating power on-site lets facilities add capacity without waiting years for new grid connections, satisfying the massive electricity consumption required to train and run complex AI models.
- The move turns retired aviation hardware into a critical utility asset for sustaining infrastructure growth.
Bottom line:
Securing reliable energy is becoming a primary bottleneck for the artificial intelligence sector. Utilizing existing industrial machinery offers a pragmatic solution to power the next generation of computing.
Meta uses AI to map every tree in the UK
The Recap: Forest Research has partnered with Meta to map England’s tree canopy with unprecedented precision using computer vision. The initiative replaces expensive aerial surveys and supports the government’s goal of increasing green-space access for all residents.
Unpacked:
- The agency employs DINOv2, Meta’s open-source vision model, to analyze satellite imagery at 1-meter resolution, a cost-effective alternative to traditional LiDAR collection (sketched at the end of this section).
- The system captures “trees outside woodlands,” which represent 30% of England’s canopy, building on Meta’s earlier work toward a comprehensive global map of tree height.
- Frequent data updates let officials accurately track progress against the Environmental Improvement Plan, which aims to boost canopy cover by 2% by 2050.
Bottom line:
Precise environmental monitoring previously required massive budgets, but open-source models now make this data accessible to national agencies. Accurate baselines empower governments to effectively target reforestation efforts where they matter most.
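To make the approach concrete, here is a rough sketch of how a DINOv2-style canopy pipeline can work: extract per-patch features from an imagery tile, then score each patch with a small classifier. The backbone comes from Meta’s public dinov2 repo via torch.hub; the linear canopy head is a hypothetical stand-in for whatever classifier Forest Research actually trained.

```python
# Rough sketch: DINOv2 patch features plus a (hypothetical) linear canopy
# head. This illustrates the technique, not Forest Research's pipeline.
import torch

# Pretrained DINOv2 ViT-S/14 backbone from Meta's public repo.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

# One 224x224 RGB tile; random data keeps the sketch self-contained
# (real inputs would be normalized satellite or aerial image chips).
tile = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    feats = backbone.forward_features(tile)
    patches = feats["x_norm_patchtokens"]   # (1, 256, 384): a 16x16 patch grid

# Hypothetical per-patch head; in practice it would be trained on labeled
# canopy data. Sigmoid output ~ probability that a patch contains tree cover.
canopy_head = torch.nn.Linear(patches.shape[-1], 1)
canopy_map = torch.sigmoid(canopy_head(patches)).reshape(16, 16)
print(canopy_map.shape)   # coarse canopy-probability grid for the tile
```

Stitching these per-tile grids across a whole satellite mosaic is what turns a general-purpose vision backbone into a national canopy map.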
The Shortlist
Google partnered with ISTE+ASCD to launch a massive initiative providing free Gemini and AI literacy training to all 6 million K-12 and higher education educators across the United States.
Amazon revealed that a Russian-speaking threat group used commercial AI tools to breach over 600 Fortinet firewalls across 55 countries in a targeted campaign.
NVIDIA open-sourced DreamDojo, a robotics world model trained on 44,000 hours of human video that predicts physical interactions without needing a dedicated physics engine.
Superpower launched a public AI health partner that analyzes user lab results and biomarkers to deliver personalized wellness recommendations and health insights.