
That we're getting either sycophantic or intransigent hallucinations points to two fundamental limitations: there's no getting rid of hallucinations, and there's a trade-off in observed agreement "behavior" — tune against sycophancy and you get intransigence, and vice versa.

Also, the recurring theme of "just wipe the context and restart" places a hard ceiling on how complex a problem the LLM can be useful for.


