@robryk@qoto.org fair points, and thank you for motte-and-bailey fallacy — not exactly what I was talking about, but it's definitely relevant.
What I object to is AI hypers using undefined terms and then using this lack of definition against those who disagree with them.
Let's call my argument "Russell's Thinking Teapot" — the fact that one cannot prove that GPT (or a china teapot orbiting the Sun between Earth and Mars) does not think doesn't mean it actually does.