Cake day: June 23rd, 2023

  • I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.

    On top of that, Google/Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, the OpenAI API is being shoved everywhere under the sun for all kinds of tasks you can imagine, and nobody is attempting to clarify its weaknesses in their marketing.

    As it becomes more and more common, more and more users will crop up who don’t understand that it’s fundamentally incapable of reliably doing these things.


  • Yeah, this is the problem with frankensteining two systems together. Giving an LLM a prompt, and giving it a module that can interpret images for it, leads to this.

    The image parser goes “a crossword, with the following hints”, when what the AI needs to do the job is an actual understanding of the grid. If a single system understood both images and text, it could hypothetically understand the task well enough to pull the information it needed from the image. But LLMs aren’t really an approach to any true “intelligence”, so they’ll forever be unable to do that as one piece.
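
    That hand-off problem can be sketched in a few lines of Python. This is a toy illustration, not any real API: `image_module` and `llm` are made-up stand-ins, and the only point is that the structured grid is discarded at the caption stage, so the text-only model downstream can never recover it.

```python
# Toy sketch of a "caption then prompt" pipeline (all names hypothetical).

def image_module(crossword_image: dict) -> str:
    """Stands in for a vision module: emits a flat text caption,
    silently dropping the grid layout."""
    size = crossword_image["size"]
    clues = ", ".join(crossword_image["clues"])
    return f"A {size}x{size} crossword with clues: {clues}"

def llm(prompt: str) -> str:
    """Stands in for the language model: it only ever sees text."""
    return f"Answering based on: {prompt!r}"

# The "image" really contains structured data a solver would need...
image = {
    "size": 5,
    "clues": ["1 Across: feline pet", "2 Down: opposite of off"],
    "grid": [["#", ".", ".", ".", "#"]],  # which squares exist/intersect
}

caption = image_module(image)
print("grid" in caption)   # False: the layout never reaches the LLM
print(llm(caption))
```

    A single end-to-end system could in principle attend to the grid pixels directly; the bolted-together version is limited to whatever the caption happens to mention.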