Discussion about this post

John Quiggin:

To some extent, AI is just exposing bad practices that have been around for a long time. For example, if someone produces a paper containing AI-hallucinated citations, it shows that they have never been in the habit of checking that their citations actually say what they are claimed to say. The typical way this happened pre-AI was to copy someone else's list of citations.

In a similar fashion, factoids like "you must drink eight glasses of water a day" can be recirculated for years before anyone bothers to trace them to their original source, which just says something like "this is the average fluid intake from all sources, and people will get thirsty if they ingest less."

Given that the output of an LLM is an answer to the question "what would the average Internet user type next after this?", none of this is surprising.

Dr. Steven Quest:

A key point made here is that AI is very useful for automating low-skill and/or repetitive tasks, but risky when applied to more complex tasks where human analysis is essential. AI is also effective at analyzing visual data, such as X-ray images, retinal images, etc., but human oversight is still necessary.

1 more comment...