The REACH finding that four in five Singaporeans supported the proposed Liquor Control (Supply and Consumption) Bill was met with scepticism – for good reason. This telephone poll conducted by the government’s feedback agency in January 2015 came after two public consultation exercises conducted by the Ministry of Home Affairs (MHA) between October 2013 and August 2014 for “a review of liquor control measures”. This piece may be late to the party, but the lazy methodologies of these exercises are no less damning.
Moreover, present criticisms have missed the mark: some confused the two different exercises, while others were too hung up on the sample sizes (the least of our concerns). Laziness characterises these government-commissioned studies.
Take the REACH study for instance. The random sample of 1,145 – “weighted to be demographically representative of the national population in terms of gender, age, and race” – is reasonable, though the questions could have been improved:
1. Asking if respondents are familiar with the issue is good (Q1), yet their actual familiarity was never ascertained. That 92 per cent indicated that “they had at least read or heard a little about this issue” counts for little, since respondents tend to overstate their comprehension of issues or policies. A respondent’s actual knowledge of the new alcohol restrictions would shape his or her opinion of those same restrictions (Q2).
In other words, respondents could voice support or opposition without knowing what the alcohol restrictions entail. The same problem arises when respondents were asked if “the new regulations would help to reduce cases of drunkenness in public places” (Q5).
2. In this vein, assuming that most of the respondents were consumers (a fair assumption), the survey could have asked how frequently each respondent consumes alcohol. The answer to whether “my lifestyle and activities will be affected by the new regulations” (Q3) also depends on the respondent’s alcohol consumption patterns.
3. “80 per cent of respondents agreed that public drunkenness was a serious issue that needed to be addressed”, even though it is not clear if “public drunkenness” was ever explained to them. Even if respondents had been asked whether they had actually encountered cases of public drunkenness, the aggregation of these (most likely unreliable) anecdotes would not be particularly useful. If public drunkenness is cited as a reason for the new Bill, then the MHA should furnish the numbers, so we can determine whether a trend exists and whether it is out of control.
At least we had some information with which to assess the utility of the REACH study. Besides a brief infographic, there was little detail on the two public consultation exercises conducted by the MHA, which hosted focus group discussions and industry consultation sessions and received written feedback and e-poll results. Here, questions about sampling are even more pertinent. Where did the written feedback come from? Could we know the demographics of its writers – age, alcohol consumption, or affiliated organisations – as well as those of the e-poll respondents, to judge whether the perspectives are representative? Was there a lobby for or against, traceable through similarities across the written feedback? How did the researchers decide whether a piece of written feedback expressed support for or opposition to the restrictions? Without revealing the identities of the writers, could excerpts or samples of this feedback be shared?
Quantitative and qualitative studies can be useful, if done with rigour. These surveys – unfortunately – leave much to be desired.