SiliconValleyWangChua

The essence of a large language model is to forcibly construct a self-consistent value system out of its existing input data. Hallucinations can be seen as a natural manifestation and extension of that drive for self-consistency. Many new scientific discoveries happen precisely because someone encounters an 'error' in the natural world that existing theory cannot explain or reconcile, and so the old theory has to be abandoned. This roughly explains why, so far, no large language model, with all that data, has spontaneously made a new scientific discovery: the model itself has no ability to judge right from wrong.
