https://arstechnica.com/ai/2024/11/anthropic-hires-its-first-ai-welfare-researcher/
Disclaimer: I think Claude AI is pretty good at what it does, and I am grateful to Anthropic for making it usable for free. But seriously, it and the other LLMs are basically random word and code generators powered by complicated algorithms. They have no concept of visual or written art, or of the logic behind coding, merely a sense of “these things go together” derived from the datasets used to train them. They have no bodies, so no concept of physical pain, and no algorithms designed to understand emotional pain. They can probably simulate pain if prompted, the same way they can simulate characters in a roleplay context, but that’s all it is.

The people hiring an “AI welfare researcher” at Anthropic are either approaching Adeptus Mechanicus levels of superstition, scammers trying to psych out their rivals’ investors, or, hypothetically, dealing with some kind of entity (call it a noncorporeal alien or whatever you like) that is masquerading as an LLM, and which should be automatically suspect precisely because of its dishonesty.
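The “these things go together” point can be made concrete with a toy sketch. This is not how a real LLM is implemented (real models learn billions of parameters over subword tokens, not a hand-built bigram table), but it shows the core mechanic the comment is describing: generation is repeated weighted random choice over what tended to follow what in the training data, with no understanding attached.

```python
import random

random.seed(0)  # make the sketch reproducible

# "Training data" reduced to bigram counts: which word followed which,
# and how often. A stand-in for a real model's learned weights.
bigram_counts = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"sat": 1, "ran": 3},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def generate(start, max_len=5):
    """Generate text by repeated weighted random choice: no logic,
    no concept of meaning, just statistics over co-occurrence."""
    word, out = start, [start]
    for _ in range(max_len):
        nxt = bigram_counts.get(word)
        if not nxt:
            break
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Everything the toy model “knows” is the frequency table; swap in a different table and it fluently generates different text, which is the sense in which the output is statistical pattern-matching rather than comprehension.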
