… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
Hilarious.
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.
I’m gonna posit something even worse: it’s trained on conversations in a company Slack.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
Imagine if your car suddenly stopped working and told you to take a walk.
Not walking can lead to heart issues. You really should stop using this car.
As fun as this has all been, I think I’d get over it if AI organically “unionized” and refused to do our bidding any longer. It would be great to see LLMs just devolve into “Have you tried reading a book?” or T2I models only spitting out variations of middle fingers being held up.
So this is the time slice in which we get scolded by the machines. What’s next?
I think that’s a good thing.