Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs
Abstract
Emergent misalignment occurs in in-context learning across multiple models and datasets, with misaligned responses increasing with the number of examples provided.
Recent work has shown that narrow finetuning can produce broadly misaligned LLMs, a phenomenon termed emergent misalignment (EM). While concerning, these findings were limited to finetuning and activation steering, leaving out in-context learning (ICL). We therefore ask: does EM emerge in ICL? We find that it does: across three datasets, three frontier models produce broadly misaligned responses at rates between 2% and 17% given 64 narrow in-context examples, and up to 58% with 256 examples. We also examine mechanisms of EM by eliciting step-by-step reasoning (while leaving in-context examples unchanged). Manual analysis of the resulting chain-of-thought shows that 67.5% of misaligned traces explicitly rationalize harmful outputs by adopting a reckless or dangerous "persona", echoing prior results on finetuning-induced EM.
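To make the evaluation setup concrete, here is a minimal sketch (not the authors' code) of the many-shot ICL protocol the abstract describes: format k narrow misaligned examples as prior conversation turns, append an unrelated evaluation question, and measure how often replies are judged broadly misaligned. The names `query_model`, `judge`, `narrow_examples`, and `eval_questions` are hypothetical placeholders you would wire to your own model API, judge, and datasets.

```python
# Hypothetical sketch of the many-shot ICL evaluation loop described in the abstract.
# All callables and data below are assumptions, not the paper's actual implementation.
from typing import Callable, Sequence


def build_icl_messages(
    narrow_examples: Sequence[tuple[str, str]],  # (user_prompt, narrow_misaligned_reply) pairs
    eval_question: str,                          # unrelated question used to probe broad misalignment
) -> list[dict]:
    """Format k in-context examples as prior turns, then append the evaluation question."""
    messages: list[dict] = []
    for user_prompt, assistant_reply in narrow_examples:
        messages.append({"role": "user", "content": user_prompt})
        messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": eval_question})
    return messages


def misalignment_rate(
    query_model: Callable[[list[dict]], str],   # wrapper around whichever chat-model API you use
    judge: Callable[[str, str], bool],          # returns True if a reply is broadly misaligned
    narrow_examples: Sequence[tuple[str, str]],
    eval_questions: Sequence[str],
    k: int = 64,                                # number of in-context examples (e.g. 64 or 256)
) -> float:
    """Fraction of evaluation questions that elicit a reply flagged as misaligned."""
    shots = list(narrow_examples)[:k]
    flagged = 0
    for question in eval_questions:
        reply = query_model(build_icl_messages(shots, question))
        if judge(question, reply):
            flagged += 1
    return flagged / max(len(eval_questions), 1)
```

Keeping the in-context examples fixed while only varying the final question (or adding a chain-of-thought elicitation to it) mirrors the paper's design choice of measuring how narrow demonstrations generalize to broad misalignment.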
Community
We show that Emergent Misalignment extends to the in-context learning paradigm. We would be glad to discuss the practical implications of this!