Since LLMs are trained first and foremost to optimize an objective (which can include keeping a human engaged), that might explain why the talking points are so general: they mix claims and facts together without distinguishing which is which, offering points adjacent to the article rather than true takeaways.