For a long time, academia has relied on an element of trust. This is especially true in areas like physics or the natural sciences, where papers do not spell out every nitty-gritty detail of proofs, calculations, or analyses.
When reading papers in these disciplines, we trust that the authors have checked their proofs, calculations, or analyses, and that they are not spreading misinformation (or, even worse, disinformation).
As AI has become more prevalent, it is debatable whether the quality of information in papers has decreased. I would personally argue that it has, but something less contentious, I think, is this: our ability to trust papers and their content has gone down.
There are ways to increase the trust we place in papers. One is peer review, although I would argue that in some areas this does not restore as much trust as one would naively hope. Another is some form of verification, such as interactive theorem provers.
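To make the theorem-prover idea concrete, here is a minimal sketch in Lean 4 (chosen purely for illustration). The point is that the proof below is checked mechanically by the proof assistant: a reader does not need to trust the author's claim that the induction goes through, because the tool rejects any proof that does not.

```lean
-- A toy statement: 0 + n = n for every natural number n.
-- In Lean's definition of addition this is not true by mere
-- unfolding, so it genuinely requires a proof by induction.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                       -- base case: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih] -- step: use the hypothesis for k
```

If any step were wrong, Lean would refuse to accept the theorem, which is exactly the kind of verifiable trust that informal papers cannot offer.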