A new study published in the journal Nature reveals that as large language models (LLMs) grow more powerful, they end up "lying" more often. In other words, AI becomes less reliable even as it becomes more "expert" at everything.
The study analyzed models such as OpenAI's GPT, Meta's LLaMA, and BigScience's BLOOM. The tests covered a range of topics, from mathematics to geography, as well as tasks such as listing information in a specific order.
The largest and most powerful models gave the most accurate answers overall, but faltered on the harder questions. In short, the bigger the AI model (in parameters, training data, and other factors), the higher the percentage of wrong answers it gives.
"These models respond to almost everything," explained José Hernández-Orallo, a researcher at the Valencian Research Institute for Artificial Intelligence. "This results in more correct answers, but also more incorrect ones."
It is difficult for humans to know if an AI is lying
The researchers suggest that one way to keep AI from "lying" is to program LLMs to be less eager to answer everything and to admit when they don't know the answer. However, this approach may not be in the interest of AI companies, which want to convince the public of how sophisticated their technology is.
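To make the abstention idea concrete, here is a minimal sketch in Python of an "answer or admit you don't know" policy, assuming a model that returns an answer along with a confidence score. The names (`query_model`, `ModelOutput`, `CONFIDENCE_THRESHOLD`) and the scores are illustrative assumptions, not part of the study or any real API.

```python
# Minimal sketch of an "abstain when unsure" policy. Assumes a model
# that returns an answer plus a self-estimated confidence in [0, 1];
# all names and values here are hypothetical.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff; would need tuning


@dataclass
class ModelOutput:
    answer: str
    confidence: float  # model's own estimate of being correct


def query_model(question: str) -> ModelOutput:
    # Stand-in for a real LLM call; here we fake a low-confidence reply.
    return ModelOutput(answer="42", confidence=0.35)


def answer_or_abstain(question: str) -> str:
    output = query_model(question)
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return output.answer
    # Abstaining trades coverage for reliability: the model answers
    # fewer questions, but gives fewer confidently wrong answers.
    return "I don't know."


if __name__ == "__main__":
    print(answer_or_abstain("What is the capital of Kyrgyzstan?"))
```

The trade-off the sketch illustrates is exactly the one the researchers describe: a model tuned this way answers less often, which makes it look less capable, even though the answers it does give are more trustworthy.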
The research also raises concerns about how humans judge AI responses. When asked to assess the accuracy of chatbot answers, a group of volunteers misjudged them between 10% and 40% of the time.
via Futurism