I think this takes too top-down an approach. Many broad statements are made about what LLMs are and aren't - some positive, some negative, some well-substantiated, some less so - when there should be a greater focus on fundamental knowledge of their inner workings and their practical (and impractical) uses.
The title being a question implies it will teach you to answer the question yourself. But it feels more like you're expected to enter with the belief that they're oracles, and this is here to convince you they're bullshit machines.
I don't care which one it is! It doesn't matter whether we call what they do logical reasoning or not, because neither label gives a full understanding of their actual capabilities.
Much of the language surrounding this course suggests the intention is a grounded view. If that is the case, though, it misses the mark. Rather than educating people on the reality of the situation, I fear its use would only exacerbate the principles-first approach seen all too often in discussion around AI. More accurate principles than most, perhaps, but principles nonetheless.