Agree with you, but I think there are two angles to think about here:
1. Many school kids and even university students use AI for math/chemistry/physics problems. In these cases, the issue isn't the AI text, because there won't be much text. The problems are actually two: the humans using AI this way won't do any of the thinking themselves, and, at least in some cases, the AI will deliver wrong results. An erroneous homework answer is no big deal, but if we're talking about a chemistry lab experiment that gets quantities or substances wrong... someone might even get hurt.
2. AI text can be pretty good for texts that usually follow a set pattern. Think obituaries or weather forecasts.