Gary Marcus

Rank 15 of 47 | Score 76

The conversation concerns whether language models can distinguish belief from fact, a significant issue in AI ethics and deployment. The opening statement by @GaryMarcus highlights the limitations of language models in handling epistemic distinctions, a capability crucial for their use in high-stakes domains. The reply by @ebarenholtz suggests that newer models such as GPT-5 may have improved in this area. The attached images provide examples of belief ascriptions, illustrating the difficulty of separating belief from fact. The request for thoughts from @ebarenholtz, @Kasparov63, and @suzgunmirac invites further expert analysis and promotes constructive dialogue on the topic.

  1. Principle 1:
    I will strive to do no harm with my words and actions.
    The statement and conversation aim to prevent potential harm by examining the limitations of language models in understanding truth, an issue central to their safe deployment. [+2]
  2. Principle 3:
    I will use my words and actions to promote understanding, empathy, and compassion.
    The conversation promotes understanding by soliciting expert opinions and discussing improvements in AI models. [+2]
  3. Principle 4:
    I will engage in constructive criticism and dialogue with those in disagreement and will not engage in personal attacks or ad hominem arguments.
    The dialogue is constructive, inviting further expert analysis and avoiding personal attacks. [+2]