The Oath
Gary Marcus
Rank 14 of 47 | Score 83
-3
Gary Marcus @GaryMarcus · 3d
This place is toxic.
For the last seven years I warned you that LLMs and similar approaches would not lead us to AGI. Almost nobody is willing to acknowledge that, even though so many of you gave me endless grief about it at the time.
I also warned you – first – that Sam
-4
Gary Marcus @GaryMarcus · 3d
@TrueAIHound LeCun’s main love is himself.
-4
Gary Marcus @GaryMarcus · 3d
Clearly the machine learning community can’t handle the truth.
Good to see that @MrEwanMorrison can.
-3
Gary Marcus @GaryMarcus · 3d
@robertwrighter @ylecun Among other places, on Facebook immediately after my "Deep Learning Is Hitting a Wall" paper, where I laid out the arguments in question, and also for years here on Twitter, e.g. in 2019 on X, calling my critique that LLMs lacked world models a "rear-guard action"
see my recent
-3
Gary Marcus @GaryMarcus · 3d
Wow. Just wow.
@ylecun taking credit for my March 2022 argument that scaling would hit a wall and that LLMs would not bring us to AGI--after he initially attacked me for saying it and continued to promote them right up until ChatGPT ate his lunch--has to be among the most
-1
Gary Marcus @GaryMarcus · 3d
@NoahChrein @sama @ylecun @elonmusk Certainly part of it; few people ever get the kind of vindication I am getting.
+1
Gary Marcus @GaryMarcus · 3d
@cagahah1134 @sama @ylecun @elonmusk I have EVERY right to be pissed.
You would be too if you said something for 7 years, got shit for it, turned out to be right, and people didn't have the stones to admit it.
+7
Gary Marcus @GaryMarcus · 3d
@praneshbuilds @sama @ylecun @elonmusk 3 years; March 2022, in the essay "Deep Learning Is Hitting a Wall".
+3
Gary Marcus @GaryMarcus · 3d
@joserivera234 @sama @ylecun @elonmusk Which?
Specific examples have been patched, but the general problems I pointed out – like hallucinations and reasoning errors – have not.
+2
Gary Marcus @GaryMarcus · 3d
@davidmanheim @jm_alexia I bet you could get 99% of the performance with less powerful LLMs.