Anthropic study reveals AIs can’t reliably explain their own thoughts
If you ask a large language model (LLM) to explain its own reasoning, it will happily give you an answer. The problem is that the answer is probably made up. A study from Anthropic, led by researcher ...