AGI is so close, it's almost scary

  • A more technical explanation: LLMs split text into tokens (whole words or pieces of words) and then predict the next most likely token from learned statistical relationships between tokens, i.e. which tokens tend to appear near each other. As a result, the model doesn't know which characters are actually inside a token, only what the token is related to, which is why letter-counting questions trip it up (see the sketch below). There is a version of ChatGPT called o1 that mitigates this by feeding its own output back into itself for further reasoning before answering, but it costs something like $30/mo.
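
To make the token split concrete, here is a minimal sketch using OpenAI's tiktoken library (the choice of the cl100k_base encoding is an assumption; different ChatGPT models use different but similar BPE encodings):

```python
# Minimal sketch: show how a word is split into tokens, assuming the
# cl100k_base BPE encoding from OpenAI's tiktoken library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# Decode each token id back to the text fragment it represents.
pieces = [enc.decode([tid]) for tid in token_ids]
print(token_ids)  # a short list of integer ids
print(pieces)     # the word broken into sub-word pieces

# The model only ever sees the integer ids, not the letters inside
# each piece, so a question like "how many r's are in strawberry?"
# has to be answered from learned associations, not by counting.
```

Running this typically shows the word split into a few sub-word pieces, which is the point: the model reasons over those pieces, not over individual letters.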
