Gary Marcus

Rank 10 of 47 | Score 116

The statement and accompanying content discuss potential safety issues arising from the failure of large language models (LLMs) to generalize known safety facts to novel scenarios. It introduces SAGE-Eval, a benchmark for evaluating this capability, and reports the performance of various models. The tone is informative, aiming to raise awareness of the limitations of LLMs in safety-critical applications.

  1. Principle 1:
    I will strive to do no harm with my words and actions.
    The statement highlights potential safety issues, aiming to prevent harm by informing the public and researchers about the limitations of LLMs in safety-critical settings. [+2]
  2. Principle 3:
    I will use my words and actions to promote understanding, empathy, and compassion.
    By providing benchmark data and analysis, the statement promotes understanding of the challenges in AI safety. [+2]
  3. Principle 6:
    I will use my influence for the betterment of society.
    The authors apply their research toward the betterment of society by surfacing and quantifying AI safety concerns. [+2]