Exactly. At their core, LLMs are just fancy autocomplete. Extremely fancy, to be sure, and the output they predict can be very useful, but people who anthropomorphize them or ascribe deeper significance to the generated text seem to be missing this.
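The "fancy autocomplete" framing can be made concrete with a toy sketch: a bigram model that predicts the next word as the most frequent follower seen in training. This is obviously not how real LLMs work internally (they use learned neural weights over huge contexts, not raw counts), but the core task is the same: given what came before, predict what comes next.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then greedily predict the most frequent follower at each step.
corpus = "the cat sat on the mat and the cat ran".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, steps=3):
    out = [word]
    for _ in range(steps):
        if not followers[out[-1]]:
            break  # no continuation seen in training
        out.append(followers[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

Everything an LLM emits is, in this sense, a continuation chosen because it was statistically likely given the context, just with a vastly more sophisticated notion of "likely" than these counts.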