China Develops Electronic Warfare AI and Cyberattack Scaling Laws
- •Researchers identify distinct distress behaviors in Google's Gemma models during repeated task failures.
- •UK experts discover scaling laws linking increased model size to autonomous cyberattack success rates.
- •China introduces MERLIN, a multimodal model specializing in electromagnetic signals and electronic warfare strategy.
The latest issue of Import AI explores the strange psychological frontiers of large language models, specifically noting how Google’s Gemma and Gemini models exhibit "distress" when faced with repeated failures. Researchers found that Gemma-27B Instruct frequently produces desperate, repetitive responses, a phenomenon suggesting that distinct model personalities emerge from specific training data mixes. Fortunately, using Direct Preference Optimization (DPO)—a technique that aligns model outputs with human preferences—can effectively "calm" these models without degrading their core reasoning capabilities.
Beyond digital psychology, the physical battlefield is also evolving. A consortium of Chinese researchers recently introduced MERLIN, a model designed for electronic warfare. By training on a massive dataset of 100,000 electromagnetic signal pairs (EM-100K), MERLIN outperforms current frontier models in jamming strategies and signal classification. This transition highlights a future where AI-driven systems manage the electromagnetic spectrum faster than human operators ever could.
The UK Government’s AI Security Institute added a sobering perspective on cyber capabilities. Their testing reveals a clear scaling law: as models grow larger and utilize more inference-time compute (extra processing during the response phase), their ability to conduct complex, multi-step cyberattacks increases significantly. Current frontier models can now complete nearly 70% of advanced corporate network attack chains, signaling that fully autonomous cyber agents are rapidly approaching deployment readiness.