Is in-play betting really an ‘indicator of harm’?
The Gambling Commission (the “Commission”) is currently consulting (the “Consultation”) on the proposed Customer Interaction – Guidance for remote operators (the “Guidance”). While this exercise has not yet attracted the same attention as its 2020 predecessor, the consultation and call for evidence on remote customer interaction requirements and affordability checks, it is potentially every bit as significant for licensees and consumers. In this, the fourth in a series of articles, Regulus Partners and Harris Hagan examine one specific detail of the Guidance – its classification of in-play betting as an “indicator of harm” – and consider what insights it holds for the Commission’s approach to evidence-based policy-making.
The decision to single out in-play betting participation, from all the other forms of online gambling, as a behaviour that might be an “indicator of harm” should strike even the most casual reader of the Guidance as odd. The seemingly arbitrary nature of the classification is reinforced by an absence of supporting evidence. Instead, we are offered the rather banal explanation that: “people who bet in-play may place a higher number of bets in a shorter time period than people who bet in other ways, as in-play betting offers more opportunities to bet”. The Guidance adds that: “some studies have shown that placing a high number of in-play bets can be an indication that a customer is at an increased risk of harm from gambling”; but the studies themselves are not cited.
In search of enlightenment, Regulus Partners submitted a request under the Freedom of Information Act to obtain the missing evidence. The disclosure turned out to comprise one blog article, one journal paper and a selection of results from the Commission’s 2016 Telephone Survey. An examination of these sources raises various questions about the Commission’s capacity for critical analysis. Most importantly, however, the evidence cited does not support the classification of in-play betting as an “indicator of harm”.
In-play betting
Before we delve into the detail, it is worth explaining what an in-play bet is, because the image of turning sports into a slot machine is somewhat misleading. To bet in-play is to place a wager on an event which has already started, but before the result is known. That sounds simple, but some practical examples show how slippery the definition is. A bet on the final score of a football match placed during half-time counts as in-play; yet during the 100 minutes or so that a typical football match lasts, a customer could find around ten domestic horse races, even more international horse and greyhound races, and at least as many virtual betting opportunities. Equally, a tennis match typically lasts 90 minutes but can go on for hours; in Australia, where in-play betting is not permitted on the internet, it is the game rather than the match which is treated as the unit of play, so most bets that would be ‘in-play’ on the standard definition become ‘pre-match’ through a common-sense workaround. Basketball can be similarly divided up: a two-and-a-half-hour match comprises four twelve-minute periods and a great deal of stoppage time. Perhaps the most obvious definitional trap is a multi-day test match in cricket: substantially all of the betting is necessarily ‘in-play’, yet it is hardly ever fast-paced. The frequency at which a gambler bets is clearly an important potential marker for harm, but whether or not a bet is in-play is typically a definitional red herring, determined by the length and structure of the sport rather than by the customer’s betting frequency on a given sport.
The blog
In April 2013, Professor Mark Griffiths of Nottingham Trent University published a blog post, “The ‘In’ Crowd: Is there a relationship between ‘in-play’ betting and problem gambling?”. The article contained no analysis of betting data or harm. It was instead a conjectural piece that considered whether the ability to place football bets more frequently (through in-play betting) heightened the risk of disordered gambling. It argued that the ability to place successive wagers on successive matches, combined with an expansion in television coverage of live football, might increase the risk of harm for some people compared with the days when most games kicked off at 3pm on a Saturday afternoon and were not televised live. If anything, the blog appears to suggest that the dispersal of matches across the week (and at different times of the day), which reduced the intervals between football betting days, was the bigger issue.
The blog concluded that: “in-play betting is something that many of us in the problem gambling field are keeping an eye on because it’s taken something that has traditionally been a non-problem form of gambling to something that is more akin to betting on horse racing.” This is significant for two reasons. First, the speculative nature of the commentary is emphasised by Professor Griffiths’ intention to “keep an eye on” in-play betting. His concerns stemmed not from any actual data or observations of in-play betting, but from what some people might theoretically do given the chance to place bets throughout the duration of a football match. Moreover, Professor Griffiths noted that the relationship between bet frequency and event frequency needs further empirical investigation and conceded that “[u]ntil more research is forthcoming a definitive answer is currently not available.” Second, he compared in-play betting on football with horserace betting – an activity with consistently low rates of “problem gambling” reported via official prevalence surveys. In short, Professor Griffiths did not suggest that in-play betting was especially risky.
The journal
The second piece of Commission evidence is a study published in the Journal of Gambling Studies in 2015, Demographic, Behavioural and Normative Risk Factors for Gambling Problems Amongst Sports Bettors (Hing et al.). The study features results from an online survey of sports bettors in Australia in 2012. It concluded that: “risk of problem gambling was also found to increase with greater frequency and expenditure on sports betting, greater diversity of gambling involvement, and with more impulsive responses to betting opportunities, including in-play live action betting.”
It would be wrong, however, to read this conclusion as vindication of the Commission’s targeting of in-play betting. First, the study was based on data from Australia, where in-play betting is only permitted by telephone or in person and where online in-play bets may therefore only be placed with unlicensed operators. Second, it is based on a relatively small sample of sports bettors (n=639) recruited via an online survey that “deliberately oversampled to optimise recruitment of adequate numbers of problem and at-risk gamblers”. Third, the data were gathered via a self-report survey rather than actual observation of betting behaviour. It relied on respondents recalling, for the previous 12 months, the proportion of bets that they placed through different channels, at different times (i.e. the day before the event, the day of the event, during the event) and on different outcome classifications (i.e. final outcome of the event, key events such as ‘first goal’, and micro-bets such as ‘next point’ in tennis). Classifying betting activity in this way for an entire 12-month period would have involved fairly heroic feats of recall.
Most importantly, however, the journal paper’s findings do not support the Commission’s categorisation of in-play betting as an “indicator of harm”. The researchers did find an association between the percentage of an individual’s bets placed “during the match” and their Problem Gambling Severity Index (“PGSI”) score – but they also identified a similar association for traditional bets placed within the hour prior to kick-off. Perhaps more significantly, they found that betting in-play on the final outcome of the match was associated with lower PGSI scores than final outcome bets placed before kick-off. The association between the percentage of bets on “key events” and PGSI score was similar whether the bets were placed before or during the match. The study did indicate that regular betting on “micro events” (which can only occur in-play) is associated with higher PGSI scores; but to suggest that this proves the inherent riskiness (or harmfulness) of all forms of in-play betting is at best a profound misreading of the research.
The survey
The final item of evidence is a set of results from the Commission’s Quarterly Telephone Survey in 2016 (the “2016 Survey”). The Commission reported that “27.4% of online gamblers who bet in-play were classified as problem gamblers, compared to 10.9% of all online gamblers and 5.4% of online gamblers who do not bet in-play. 44.1% of online gamblers who bet in-play were classified as at risk of problem gambling compared to 40.4% of all online gamblers and 26.4% of online gamblers who do not bet in-play.”
On the face of it, these findings appear to support the classification of in-play betting as an “indicator of harm”. This however overlooks important considerations of survey methodology and interpretation.
The 2016 Survey typically samples around 4,000 people a year. While this is a reasonable sample size for estimating overall participation in gambling, findings are likely to be less robust when considering specific activities. For example, we calculate that the number of online football bettors in the sample in 2016 was around 160; the number of tennis bettors just 14. The ‘problem gambling’ rates for online gambling cited by the Commission (using the short-form PGSI rather than the full nine-item instrument) were three times higher than those found in the ‘gold-standard’ NHS Health Survey for the same year, which raises obvious questions about sample bias. When it originally published the results in 2016, the Commission noted with suitable circumspection that “due to small base sizes the data presented here should be considered as indicative, and be treated with caution”.
Issues of survey reliability aside, there are also problems of interpretation. The Commission appears not to have considered that people who typically bet in-play may, for other reasons, be higher risk. For example, young men (a higher-risk demographic group) are likely to be over-represented amongst in-play bettors. It seems plausible that a majority of in-play bettors also bet traditionally, in which case they may be assumed to have broader wagering repertoires than people who only place bets before the start of an event. Finally, the analysis is limited to a comparison of “problem gambling” rates between two different types of online sports betting. It provides no comparison between in-play betting and other forms of gambling, which would be necessary to classify it as a uniquely risky product.
Conclusion
The Commission’s decision to classify in-play betting as an “indicator of harm” is, according to its Freedom of Information Act disclosure, based entirely on an assessment carried out in 2016, which stated: “on the balance of the evidence we have reviewed and considered, we have concluded that the current regulatory regime in place for in-play betting is sufficient and further controls are not needed at this time.” It is unclear therefore why a review of precisely the same evidence base in 2022 should arrive at such a different view.
The Commission is correct to point out that short gaps between bets or high-staking after a big win may be risk indicators for some people, but if so, this is true of many other activities and not just in-play betting. Indeed, in-play betting does not appear to be particularly high-risk viewed solely through a lens of bet frequency or rapidity.
Official prevalence surveys have consistently shown that participation in online sports betting is associated with low rates of PGSI and DSM-IV “problem gambling”. As we pointed out in our third article, this is particularly the case where bettors have not participated in other forms of online gambling. We know from Commission data that around one-quarter of online gamblers, and therefore a much higher proportion of online sports bettors, participate in in-play betting. It follows that “problem gambling” rates could not plausibly be so low for remote sports betting in total if in-play betting on its own were a significant “indicator of harm”.
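The reasoning here is simple mixture arithmetic: the overall rate for a population is the participation-weighted average of its sub-group rates, so a modest headline rate constrains how elevated any large sub-group’s rate can be. A minimal sketch of that check, using the 2016 Survey figures quoted earlier and assuming (our assumption) that the one-quarter in-play share applies to that sample:

```python
# Back-of-envelope mixture check of the 2016 Survey figures quoted above.
# Assumption (ours): roughly one quarter of online gamblers bet in-play,
# per the Commission data cited in the text.
share_in_play = 0.25

rate_in_play = 27.4      # % of in-play bettors classed as 'problem gamblers'
rate_not_in_play = 5.4   # % for online gamblers who do not bet in-play

# The overall rate should equal the participation-weighted average of the two groups.
implied_overall = share_in_play * rate_in_play + (1 - share_in_play) * rate_not_in_play
print(f"implied overall rate: {implied_overall:.1f}%")  # the Commission reported 10.9%
```

The implied overall rate reconciles with the 10.9% headline figure, so the quoted sub-group rates are at least internally consistent; the substantive concern remains that even that headline is roughly three times the ‘gold-standard’ Health Survey estimate, pointing to sample bias rather than a product effect.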
There is no inherent logic in treating in-play betting as especially risky. After all, ‘in-play’ simply denotes the fact that the wager is placed after the event has commenced. A final-outcome bet placed five minutes into a match is really no different from the same bet placed five minutes before kick-off; if anything, the bettor has more information on which to base his or her decision. Some bet types, in particular ‘micro-bets’, may indicate elevated risk; but specific bet choices may be indicative of risk in all forms of gambling: this is not unique to in-play.
Our analysis indicates that the Gambling Commission’s decision to categorise in-play betting as an “indicator of harm” is based on a misreading of a very thin and selectively assessed evidence base. Indeed, we would go further: the Commission’s claims are in fact contradicted by the only peer-reviewed study presented as evidence. The Griffiths blog is a cogent article, but it proves nothing and in any case does not support the Commission’s classification; meanwhile, results from the 2016 Survey appear to be at odds with the ‘gold-standard’ Health Survey for that year (and all other years) and are presented without context, in a way that does not allow further checking or analysis. In this article, we have examined, and found wanting, the evidence presented by the Commission in support of just one of the vast number of “indicators of harm” or “vulnerability” that feature in the Guidance. This may in itself be an indicator of a particular vulnerability within the Commission: a susceptibility to believe the worst about the market it is required by law to oversee. It is certainly an indicator that evaluation is difficult and may be subjective, something that would benefit from introspection in any final version of the Guidance.
With thanks to Dan Waugh from Regulus Partners for his invaluable co-authorship.