3 Comments
John Quiggin

To some extent, AI is just exposing bad practices that have been around for a long time. For example, if someone produces a paper containing AI-hallucinated citations, it shows that they were never in the habit of checking that their citations actually say what they are claimed to say. The typical pre-AI way this happened was to copy someone else's list of citations.

In a similar fashion, factoids like "you must drink eight glasses of water a day" can circulate for years before anyone bothers to trace them to their original source, which turns out to say something like "this is the average fluid intake from all sources, and people will get thirsty if they ingest less."

Given that the output of an LLM is an answer to the question "what would the average Internet user type next after this?", none of this is surprising.
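The "type next" framing is literal: the model's raw output at each step is a probability distribution over candidate next tokens. A minimal sketch of this, using the Hugging Face transformers library (gpt2 is purely an illustrative model choice, not anything specific to the point above), prints the model's five most likely continuations of a prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "You must drink eight glasses of water a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The last position holds the distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Nothing in that loop knows or cares whether the highest-probability continuation is true; it only reflects what the training text made likely.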

Dr. Steven Quest

A key point made here is that AI is very useful for automating low-skill and/or repetitive tasks, but risky when applied to more complex tasks where human analysis is essential. AI is also effective at analyzing visual data, such as X-ray and retinal images, but human oversight is still necessary.

DrBDH

That METR study has a result that reminds me of the Dunning-Kruger paper, which found that both high and low scorers tended to have incorrect ideas of how well they had done. The METR subjects predicted AI would make them 24% faster; even after AI actually made them 19% slower, they believed it had made them 20% faster. That kind of self-reinforcing misconception may explain a lot of our failure to accept verifiable reality.
