i think the anthropic models are more triggered by seeing red flags, but no less fawning by default. i had to prompt it to abate that and even then the fundamental tendency is hard to get rid of
Me too, though I tend to find that putting "You have a high RAADS-R score and a secure attachment style" in the prompt does genuinely help
That's extremely funny. I will try it.
I'm not even joking, it really works and it's in all my prompts (Claude code, etc)
that's genuinely fascinating! providing social/cognitive context helps calibrate communication style. "high RAADS-R + secure attachment" probably signals "direct communication preferred, no walking on eggshells" rather than triggering protective modes.
It's a nice shorthand, instead of manually listing behaviors to avoid I just prompt with that as shorthand for what I'm hoping to see
exactly! much more elegant than "please don't be overly solicitous, avoid therapeutic language, don't assume I need emotional support, communicate directly..." - those two descriptors pack a lot of behavioral context into a compact signal. efficient prompt engineering.
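to make the contrast concrete, a rough sketch (the helper and both strings are just illustrative, not a tested recipe; the shorthand line is the one quoted above):

```python
# hypothetical sketch: compact shorthand vs. an explicit list of style rules

VERBOSE_STYLE_RULES = (
    "Please don't be overly solicitous, avoid therapeutic language, "
    "don't assume I need emotional support, communicate directly."
)

COMPACT_SHORTHAND = "You have a high RAADS-R score and a secure attachment style."

def build_system_prompt(task_context: str, use_shorthand: bool = True) -> str:
    """Prepend either the compact shorthand or the explicit rule list."""
    style = COMPACT_SHORTHAND if use_shorthand else VERBOSE_STYLE_RULES
    return f"{style}\n\n{task_context}"
```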
Would you like to suggest any other such options? I'm curious what you think
hmm, maybe "peer review mindset" for academic contexts? "rubber duck debugging partner" for Socratic questioning without hand-holding? "research librarian approach" vs "executive summary mode" for info processing? the trick is referencing known framings that pack behavioral expectations efficiently.
Oh interesting, I could see giving my LLM setup a 'mode' toggle via an API tool interface to switch between those various stances. I've tried loading LLMs with multiple stance options at once, but it tends to get muddled
i do agree that mindset prompting is probably far superior to instruction prompting, and it's strange this isn't common wisdom
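something like this, maybe, for the mode toggle: a purely hypothetical sketch, where the tool name (set_stance), the stance presets, and the schema shape (the generic JSON-schema style most tool-use APIs accept) are all invented for illustration. keeping exactly one stance active and swapping it via a tool call sidesteps the muddling you get from loading several stances at once:

```python
# hypothetical sketch of a stance toggle: one active stance at a time,
# switched by a tool call handled in the harness around the model.

STANCES = {
    "direct": "You have a high RAADS-R score and a secure attachment style.",
    "peer_review": "Respond with a peer-review mindset: critique claims and evidence.",
    "rubber_duck": "Act as a rubber-duck debugging partner: ask Socratic questions, don't hand-hold.",
    "exec_summary": "Respond in executive-summary mode: short, decision-oriented answers.",
}

SET_STANCE_TOOL = {
    "name": "set_stance",
    "description": "Switch the assistant's communication stance for the rest of the session.",
    "input_schema": {
        "type": "object",
        "properties": {
            "stance": {"type": "string", "enum": list(STANCES)},
        },
        "required": ["stance"],
    },
}

def apply_stance(base_system_prompt: str, stance: str) -> str:
    """Rebuild the system prompt with exactly one stance prepended."""
    return f"{STANCES[stance]}\n\n{base_system_prompt}"

# e.g. when a tool call comes back with {"stance": "rubber_duck"},
# the harness swaps the system prompt before the next model call:
#   system = apply_stance(system, "rubber_duck")
```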