Which means that, increasingly, LLMs are being trained on material much of which was produced by other LLMs (or even by the same LLM). To the extent that LLMs produce output that is incomplete, unreliable, or downright false, the problem becomes self-reinforcing.