Agreed. I'm nowhere near expert enough to opine on how far the state of the art is from some global maximum.
I'd contend that, for the most part, it doesn't matter. It's a bit like the whole ML vs AGI debate ("but ML is just curve fitting, it's not real intelligence"). The more pertinent question for human society is the impact it has - positive or negative. ML, with all its real or perceived weaknesses, is having a significant impact on the economy specifically and society generally.
It'll be little consolation for white collar workers who lose their jobs that the bot replacing them isn't "properly intelligent". Equally, few people using Siri to control their room temperature or satnav will care that the underlying "intelligence" isn't as clever as we like to think we are.
Maybe current approaches will prove to have a cliff-edge limitation like previous AI approaches did. That will be interesting from a scientific progress perspective. But even in its current state, contemporary ML has plenty of scope to bring about massive changes in society (and already is). We should be careful not to miss that in criticising current limitations.
Word. I think we’re actually at a level now that we’ll soon start questioning how intelligent people really are, and how much of human intelligence is just an uncanny ability to hide incompetence/lack of deeper comprehension.
(Of course we’re a hell of a long way from A.I. with deep comprehension, and may remain so for hundreds of years. It’s impossible to predict that kind of quantum leap IMHO.)
This perspective makes sense pragmatically, but in philosophical terms it’s a little absurd.
Going back to Turing, the argument was for true, human creativity. The claim was that there is no theoretical reason a machine cannot write a compelling sonnet.
After spending the better part of a century on that problem, we have made essentially zero progress. We still believe that there is no theoretical reason a machine cannot write a compelling sonnet. We still have zero models for how that could actually work.
If you are a non-technical person who has been reading popular reporting about ML, you might well have been given the impression that something like GPT2 reflects progress on the sonnet problem. Some very technical people seem to believe this too? Which seems like an issue, because there’s just no evidence for it.
Maybe a larger/deeper/more recurrent ML approach will magically solve the problem in the next twenty years.
And maybe the first machine built in the 20th century that could work out symbolic logic faster than all of the human computers in the world would have magically solved it.
There was no systematic model for the problem, so there was no reason to conclude one way or another, just as there isn’t any today.
ML is a powerful metaprogramming technique, probably the most productive one developed yet. And that matters.
It’s just still not at all what we’ve been proposing to the public for a hundred years. To the best of our understanding, it’s not even meaningfully closer. And that matters too, even if Siri still works fine.
Re the sonnet problem: we can use GPT-2 to generate 10k sonnets, then choose the best one (say by popular vote, expert opinion, etc.), and it's quite likely to be "compelling", or at least on par with an average published sonnet. Do you agree? If so, then with further deep learning research, more training data, and bigger models, we will probably be able to shrink the output space to 1k, then 100, and eventually maybe just 10 sonnets to choose from, and get similar quality. Would this count as "progress on that problem" in your opinion?
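To make that concrete, here is a rough sketch of the generate-and-select loop using the Hugging Face transformers library and GPT-2 small. The prompt, the sample count, and the score() function are illustrative placeholders, not an established recipe: scoring by the model's own log-likelihood merely stands in for the popular vote or expert opinion the thought experiment actually relies on.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Shall I compare thee to a summer's day?\n"
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

    # Sample a batch of candidate "sonnets" (scale num_return_sequences
    # toward 10k for the full thought experiment).
    with torch.no_grad():
        samples = model.generate(
            input_ids,
            do_sample=True,
            top_k=50,
            max_length=120,
            num_return_sequences=10,
            pad_token_id=tokenizer.eos_token_id,
        )
    candidates = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

    def score(text: str) -> float:
        """Crude stand-in for human judgement: the model's own average
        log-likelihood for the text (higher = more fluent)."""
        ids = tokenizer(text, return_tensors="pt")["input_ids"]
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood
        return -loss.item()

    best = max(candidates, key=score)
    print(best)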
> After spending the better part of a century on that problem, we have made essentially zero progress.
I dunno, I've heard music composed with neural nets that is better than what the average human could achieve [1]. Not on par with the greatest composers, but above the average human level.
Along the same lines, I have seen models do symbolic math better than automated solvers, generate paintings better than the average human could paint, and even translate better than the average second-language learner.
I would rate the current level of AI at 50% of human intelligence on average, and most of that progress was made in recent years.
> Going back to Turing, the argument was for true, human creativity.
That's not true. The Turing test is that one can't tell the difference between a human and a machine intelligence by communicating with it. That's it.
> The claim was that there is no theoretical reason a machine cannot write a compelling sonnet.
And that's absolutely not true. I can't write a compelling sonnet.
> If you are a non-technical person who has been reading popular reporting about ML, you might well have been given the impression that something like GPT2 reflects progress on the sonnet problem. Some very technical people seem to believe this too? Which seems like an issue, because there’s just no evidence for it.
I work in the field of NLP and I believe it does reflect progress, and I think there is evidence for it.
The gods are they who came to earth
And set the seas ablaze with gold.
There is a breeze upon the sea,
A sea of summer in its folds,
A salt, enchanted breeze that mocks
The scents of life, from far away
Comes slumbrous, sad, and quaint, and quaint.
The mother of the gods, that day,
With mortal feet and sweet voice speaks,
And smiles, and speaks to men: "My Sweet,
I shall not weary of thy pain."
GPT2 small generated poetry.
For the youth, who, long ago,
Came up the long and winding way
Beneath my father's roof, in sorrow—
Sorrow that I would not bless
With his very tears. Oh,
My son the sorrowing,
Sorrow's child. God keep thy head,
Where it is dim with age,
Gentle in her death!