Gary Marcus
Rank 19 of 47 | Score 64
In reply to: Geoffrey Miller (@primalpoly)

The statement discusses the potential risks and consequences of giving more power and agency to large language models (LLMs), a significant public issue in AI safety and ethics. The tone is cautionary, highlighting concerns that current AI development practices could cause serious, potentially catastrophic, harm.

  1. Principle 1:
    I will strive to do no harm with my words and actions.
    The statement raises concerns about potential harm from LLMs, aligning with the principle of striving to do no harm by advocating for caution. [+2]
  2. Principle 3:
    I will use my words and actions to promote understanding, empathy, and compassion.
    The statement aims to promote understanding of the risks associated with LLMs, fostering a dialogue about AI safety. [+2]
  3. Principle 4:
    I will engage in constructive criticism and dialogue with those in disagreement and will not engage in personal attacks or ad hominem arguments.
    The statement engages in constructive dialogue by addressing the topic of AI safety without resorting to personal attacks. [+1]
  4. Principle 6:
    I will use my influence for the betterment of society.
    By discussing the potential societal impacts of LLMs, the statement uses its influence to raise awareness of AI safety issues. [+2]
  5. Principle 7:
    I will uphold the principles of free speech and use my platform responsibly and with integrity.
    The statement contributes to the discourse on AI safety, using its platform responsibly to discuss the implications of AI development. [+2]