Dear Judith, thanks a lot for your thoughtful answer.
I agree that the "media" sometimes goes too far. Data does get distorted, although most of the time this does not seem to be deliberate, at least in high-quality news outlets. I also agree that the author of The Guardian's article could have taken more space to outline the limitations of this survey - as you rightly point out, there is a lot to say on the topic.
However, I would like to make several points:
Firstly, the article does state, albeit implicitly, that this purposive sample is not generalizable to the wider population. Quoting the article directly (the second sentence is the key one):
"The survey questioned more than 110,000 people around the globe, including 5,283 in the UK, in a three-month period from November 2019 to February 2020, before the coronavirus pandemic. It is an online survey that targets people who tend to already use drugs, with the aim of highlighting differences and trends among users, rather than a country’s population as a whole."
In that second sentence, the author implicitly acknowledges a limitation of the study without explicitly signposting it as one, so it is easy to miss - but it is nevertheless there for the seasoned reader to pick up on.
Secondly, to my understanding, the article does not exaggerate any of the findings from the survey, nor is it "inaccurate" or "untruthful". In fact, when we look at the executive summary of the study, the UK does have the largest drinking problem of the countries surveyed, and many of the findings are indeed dramatic - but that does not make the article "untruthful" per se. Looking back at the title and blurb, none of it has actually been extrapolated ("English and Scottish get drunk most often, 25-nation survey finds"; "Average of more than 33 times last year is more than twice the rate of several other nationalities").
Thirdly, it would be unreasonable to argue that unrepresentative findings are "meaningless", as that would imply that qualitative research - the bulk of which is not representative - is meaningless. Hopefully our great lecturers in PSR were able to illustrate the vital importance and utility of social research in public health, which usually works from rather different epistemological standpoints than epidemiology does. That makes it no less valid - at least to me - although I acknowledge this is a big and ongoing debate; many still question the validity of qualitative research precisely because it is usually not representative, which, given the evidence out there, is a shame in my eyes.
Finally, and most importantly, I would argue that the study is meaningful even if it is unrepresentative, because it corroborates a broader trend we have seen for years: the UK has a clear drinking problem, it is disproportionately worse than in much of the rest of the world, and it remains largely unaddressed. This, I think, is the main point of the study, and it has clear policy implications.
Whether the study is methodologically robust is another question - and who should be blamed for that is yet another interesting one: should you blame the reporter for reporting on a "poor" study that depicts a general pattern we believe to be true and relevant to public health, or should you blame the authors of the study for having designed a "bad" survey? Should we be blaming anyone at all?
And another question that I'm particularly interested in: to what extent should health journalism in non-specialist outlets focus on methodological pitfalls, especially in outlets where readers may not be acquainted with what we consider to be general epidemiological or statistical knowledge? When is it "enough"?
If you got through that, you're a trooper! Have a lovely evening, morning, or afternoon.