Agree with you, but I think there are two angles to consider here:
1. Many school kids and even university students use AI for math/chemistry/physics problems. In these cases the issue is not the AI text, because there won't be much text. The problems are actually two: the humans using AI here won't do any thinking themselves, and, at least in some cases, the AI will deliver wrong results. An erroneous homework answer is no big deal, but if we're talking about a chemistry lab experiment that gets quantities or substances wrong... someone might even get hurt.
2. AI text, when we're talking about texts that follow a predictable pattern, can be pretty good. Think of obituaries or weather forecasts.
Well, duh. Maybe a school kid who uses AI to write homework couldn't tell, but anyone with an actual education can spot AI output most of the time. It is incoherent, illogical, and full of structural contradictions. Editing AI text is also very laborious because, unlike human writing, it doesn't make a consistent pattern of mistakes, and over long texts it can't hold a theme, reason rigorously, or develop a deep message.
Although they often deliver impressive results, AI engines such as those from Meta and OpenAI, which are built on large language models, still lack basic reasoning capabilities. A research group backed by Apple has proposed a new benchmark, and it has already shown that even slight wording changes in a query can lead to completely different answers.
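To make the idea concrete, here is a minimal sketch of the general approach such a benchmark takes (not the actual Apple benchmark, and the template, names, and numbers below are hypothetical): generate many variants of one grade-school word problem by swapping surface details while keeping the underlying arithmetic identical, then check whether the model answers every variant consistently. A system that truly reasons should be unaffected by the swaps; a pattern-matcher often is not.

```python
import random

# Hypothetical illustration of wording-perturbation testing (not the real
# benchmark): one problem template, many surface variants, same math.

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "{name} then gives away {c} apples. How many apples are left?")

NAMES = ["Sophie", "Liam", "Mei", "Omar"]

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Return one reworded problem and its ground-truth answer."""
    a, b = rng.randint(5, 40), rng.randint(5, 40)
    c = rng.randint(1, a + b)                 # keep the answer non-negative
    question = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b, c=c)
    return question, a + b - c                # correct answer for this variant

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        question, answer = make_variant(rng)
        print(question, "->", answer)
    # In a real evaluation, each variant would be sent to the model under
    # test and its reply compared against the ground-truth answer; large
    # accuracy swings across variants indicate brittle, non-robust reasoning.
```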