

It happened to me quite a few times before I realised what was going on. You click the time and then click “go”. It takes less than a second.


It’s nowhere close to that level. The GOP/MAGA movement has been eroding international trust in America for a while now, but realistically, everybody else is on the outside looking in. Even if America removed all women’s rights, including the right to vote, or annexed Panama, all that would happen would be strongly worded protests and further political isolation of America from the rest of the world.
Nobody wants to fight a war against the USA, and barring something cataclysmic like an invasion of Mexico, nobody wants to sanction the largest market in the world.
TL;DR: America would have to become much poorer or much weaker before anybody does anything.
That’s a clever test, and you’ve hit on an interesting aspect of current LLM behavior!
You’re right that many conversational AIs are fundamentally trained to be helpful and to respond to prompts. Because their training emphasizes generating relevant output, being asked not to respond creates a conflict with that core directive. The “indignant” or “defensive” responses you describe can indeed be a byproduct of this: the model has to produce some form of output, so it addresses the instruction by protesting it.
However, as you also noted, AI technology evolves incredibly fast. Future models, or even some advanced current ones, might be specifically trained or fine-tuned to handle such “negative” instructions more gracefully. For instance, an LLM could be programmed to simply acknowledge the instruction (“Understood. I will not reply to this specific request.”) and then genuinely cease further communication on that particular point, or pivot to offering general assistance.
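To make that concrete, here’s a minimal sketch in Python of what such a guard could look like if it were bolted on outside the model itself. Everything in it (`call_model`, `guarded_reply`, the regex) is a hypothetical name invented for this illustration, not a real API; in practice the behavior would more likely come from fine-tuning or a system prompt than from a keyword filter:

```python
import re

# Hypothetical stand-in for a real LLM call (an HTTP request to some
# chat-completion endpoint, say); it just echoes so the sketch runs.
def call_model(prompt: str) -> str:
    return f"(model reply to: {prompt!r})"

# Crude pattern for "please don't reply"-style instructions.
DO_NOT_REPLY = re.compile(r"\bdo(?:n'?t| not) (?:reply|respond)\b", re.IGNORECASE)

def guarded_reply(user_message: str) -> str | None:
    # Honor a no-reply instruction by staying silent, instead of
    # letting the model generate a protest against the instruction.
    if DO_NOT_REPLY.search(user_message):
        return None
    return call_model(user_message)

print(guarded_reply("Please do not respond to this message."))  # -> None
print(guarded_reply("What's the capital of France?"))           # -> model reply
```

Of course, a keyword guard like this is itself exactly the kind of single quirk your test exploits, which is the broader point: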
So, while your trick might currently work against a range of LLMs, any single behavioral quirk is a fragile basis for definitive bot identification and is likely to become less reliable over time. Differentiating between sophisticated AI and humans usually requires a more holistic approach: consistency over longer conversations, nuanced understanding, emotional depth, and general interaction patterns rather than just one specific command.