Comments on "Unlocking Potential: Evidence and Sentencing" by Rob Allen

Anonymous (9 January 2013, 12:14)

"There is a current fashion to use propensity score matching for this, but I am concerned that this often gives an unrealistic impression of the quality of the comparison, given the inevitable presence of unobserved, unmatched confounders."

PSM is only as good as the data used for the match, but if good data are used (e.g. data relevant to the chances of receiving a given 'treatment') then good estimates of realised effects can be obtained. There are precedents in the CJS of using PSM for interventions that were not randomised, e.g. the Enhanced Thinking Skills (ETS) programme: http://www.justice.gov.uk/downloads/publications/research-and-analysis/moj-research/eval-enhanced-thinking-skills-prog.pdf

@Toby - matching can be done either on common support or not; that is, one can choose whether to match everyone, or only those with similar predicted chances of receiving the intervention. Common support is regarded as the better approach. (See the Sadlier report for the MoJ.)

Anonymous (7 January 2013, 16:06)

I don't rule out RCTs in criminal justice either - I just think they're not terribly fit for purpose in that field and unlikely to be that significant in terms of improving the evidence base.

I can't say either way whether PSM is a useful technique or not, but my (limited) understanding of it is that actually no cases are unmatched.
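For readers unfamiliar with the common-support point raised in the exchange above, here is a minimal sketch of nearest-neighbour propensity-score matching with and without a common-support restriction. Everything here is hypothetical: the scores are invented, and in practice they would come from a model (e.g. logistic regression of treatment on observed covariates) rather than being supplied directly.

```python
# Illustrative sketch only - not the method of any study discussed above.
# Nearest-neighbour matching on (hypothetical) propensity scores, with an
# optional common-support restriction that drops treated units whose score
# falls outside the overlap of the two score distributions.

def common_support_bounds(treated, control):
    """Overlap region of the treated and control score distributions."""
    lo = max(min(treated), min(control))
    hi = min(max(treated), max(control))
    return lo, hi

def match(treated, control, restrict_common_support=True):
    """Match each treated score to its nearest control score.

    Returns a list of (treated_score, matched_control_score) pairs.
    With restrict_common_support=True, treated units outside the overlap
    region are dropped rather than force-matched to a dissimilar control.
    """
    if restrict_common_support:
        lo, hi = common_support_bounds(treated, control)
        usable = [t for t in treated if lo <= t <= hi]
    else:
        usable = list(treated)
    return [(t, min(control, key=lambda c: abs(c - t))) for t in usable]

# Hypothetical propensity scores (chance of receiving the intervention)
treated = [0.15, 0.40, 0.55, 0.92]   # 0.92 has no comparable control
control = [0.10, 0.35, 0.50, 0.60]

pairs_all = match(treated, control, restrict_common_support=False)
pairs_cs = match(treated, control, restrict_common_support=True)
```

Without the restriction, the treated unit at 0.92 is force-matched to the control at 0.60, giving a poor-quality comparison; with it, that unit is simply excluded. This is the sense in which matching on common support is "regarded as a better approach", and it also illustrates the first commenter's caveat: even a well-supported match says nothing about unobserved confounders.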
Others may be able to clarify this.

Alex Stevens (7 January 2013, 15:29)

CJ interventions may well take place in messy contexts, but I don't think the Pawson/Tilley critique rules out the use of RCTs in this field. We do need to make sure we gather and publish information on the context, process and 'generative mechanisms' of the evaluated interventions. But without some form of rigorous comparison group, questions about the causal effect of an intervention - even in the context in which it was delivered during the evaluation - will remain unanswered. There is a current fashion to use propensity score matching for this, but I am concerned that it often gives an unrealistic impression of the quality of the comparison, given the inevitable presence of unobserved, unmatched confounders.

Anonymous (7 January 2013, 12:12)

I agree with the broad point that there is a problem concerning the weak evidence base within criminal justice. But I don't think RCTs are a solution to that problem. First, I think the Pawson-Tilley critique remains unanswered: in evaluating complex social interventions, we need to grasp the (messy) ways in which different contexts shape implementation and patterns of outcomes. This is simply not how RCTs conceptualise 'interventions'. Second, as Rob pointed out in relation to sentencing, decisions are not made solely on instrumental grounds. Sentencing also has non-instrumental, expressive purposes.
So even if randomised studies could deliver on their promise to provide gold-standard evidence of effectiveness (which they can't in this area), they would still only be addressing half of the matter for sentencers.

Alex Stevens (4 January 2013, 11:14)

For our research on the DTTO (in the QCT Europe project), we would have loved to do a randomised study. But this was never a serious possibility. The policy discussions in advance of its introduction presented it as THE answer to drug-related crime, drawing on (non-randomised) studies of US drug courts. Its effectiveness was not treated as an open question.

There are several examples of criminal justice interventions that could and should have been piloted through cluster randomised trials, including the Drug Intervention Programme, the prison Integrated Drug Treatment System and, most recently, Drug Recovery Wings. In each case, money has been or is being spent on evaluation designs that are doomed not to answer the principal research question by their lack of randomisation.

The DTTO was a slightly different case, as it was presented as an alternative to imprisonment (the examples above concern the implementation, not the imposition, of sentences). Randomisation is ethically preferable in situations of equipoise: unless you know the effect of what you're doing, how do you know that you're not doing harm? But studies randomising people to imprisonment are not in equipoise. We know that imprisonment is harmful to the recipient; it is meant to be. The alternative, experimental condition (e.g. the DTTO) may have unknown harms and benefits, but we know that it has the benefit of not depriving recipients of their liberty and private life.
So we could only ethically justify randomisation to the experimental condition if we were sure that the participants would definitely have otherwise gone to prison. The problem with the DTTO, and many other purported alternatives to prison, was that it acted as an addition, not an alternative: the number of people sent to prison did not decrease as DTTOs increased. So a pilot (even a randomised one) would be likely to increase the harm of imprisonment, unless very firm guarantees were in place.

Even if randomisation could be done on CJ initiatives, there are well-known problems of politics and ideology that would hamper the implementation of the resulting knowledge. One that should be better known is the control that the Home Office keeps over the intellectual property that results from the evaluations it funds. When results have not been in the desired direction, some researchers have found it very difficult to get permission to publish their work.

So I agree with Ben and Steve that there should be more randomised studies in criminal justice, but also with Rob that there are a host of complexities to consider.

Ben Goldacre (3 January 2013, 12:50)

Thanks for taking up my suggestion of writing a blog post; I don't think the views of you and your colleagues came across well on Twitter at all.

You seem concerned that RCTs would lead to sentences that fail to reflect the seriousness of the crime, or to deliver adequate punishment ("In our legal system, the sentence imposed on an offender must reflect the crime committed and be proportionate to the seriousness of the offence", etc.).

But there is already variation in sentencing.
Drug Testing and Treatment Orders were introduced as an alternative to custodial sentences. The decision to introduce such a change, such variation, had nothing to do with anyone running a randomised controlled trial on DTTOs (indeed, no such trial has ever been run). It is happening anyway. Nobody is suggesting that designing an RCT should involve inventing some wild new form of sentencing. A randomised trial simply introduces a proposed change in sentencing - a change that has already been agreed by society in principle - in a structured fashion, to see if it achieves its stated objectives.

As Steve Rolles says above, there is no problem with measuring several outcomes (as long as the statistics account for the fact that you're measuring multiple outcomes, which is simple). It's perfectly normal and healthy to measure multiple outcomes, and to discuss both before and after a trial which outcomes are more important. This happens all the time in medicine, as Steve says.

You say that you doubt judges would agree to participate in an RCT. I don't doubt that. This is one thing that needs to change: judges need to reflect on the lack of evidence on whether these sentences really do achieve the stated objectives of judges, politicians, and society. In my view it is profoundly unethical of judges (and of doctors, when faced with similar situations) to fail to reduce this uncertainty when the opportunity to do so is present. When we practise in ignorance, we can end up doing harm to individuals and society.

I agree with everything Steve has said, above.

Ben Goldacre
www.badscience.net

Steve Rolles (3 January 2013, 10:41)

Good observations Rob.
It's worth reading the BMJ piece Ben wrote on this with Sheila Bird, and the Cabinet Office paper on the same issue.

On multiple outcomes: I don't think this is necessarily that different from medical RCTs, where there will generally also be multiple outcomes, nor is it a limiting obstacle. There are obvious complexities to be considered in devising useful and methodologically sound trials, and there will be challenges in interpreting the data they produce. But having data on multiple outcomes from two (or more) randomly allocated sentencing options doesn't mean the data can't be very useful, even if the options show conflicting levels of effectiveness on different outcomes. Increasing levels of complexity can also be built into RCTs by subdividing sentencing options.

I agree with you that the ethics of having prison as one of the sentencing options needs to be addressed (I think a similar issue exists with getting a criminal record or not). But I'm not sure that alternatives to prison trialled alongside conventional prison sentencing policy are necessarily unethical (i.e. trialling less harsh sentencing options could be OK even if trialling harsher ones isn't). I'm sure Alex Stevens would have a view on this.

One important issue Ben has raised is that the context of such ethical questions is that judges are (often) making sentencing decisions in the absence of evidence in the first place. The implication is that experimenting to find which of a range of available options delivers the best outcomes (whatever our priority KPIs may be) is an essentially ethical undertaking in broad terms.

On the issue of deterrence: this is very hard to measure anyway, especially given the wide variation in sentencing that already exists, and I would suggest almost impossible for a given sentence to an individual.
Searching the literature on deterrence in drug sentencing in particular produces almost nothing useful anyway, so almost any data would be better than what we have now. As an aside, I think highlighting the paucity of support for any deterrent effect in drug sentencing would be a useful way of challenging the 'tough on drugs' political narrative, given the centrality of the deterrence myth within it.

I think Ben is essentially pushing the point you make in your opening sentence, and highlighting RCTs as one potentially useful and currently under-utilised tool. The debate about specifically where and how RCTs could be used is the next step after the acknowledgement that most sentencing is not evidence-based. It's a debate Ben is keen to engage in.

Steve Rolles
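Ben's remark above that accounting for multiple outcomes "is simple" can be illustrated with a Holm-Bonferroni adjustment, a standard family-wise correction. The sketch below is illustrative only: the outcome names and p-values are invented, and a real sentencing trial would pre-specify its outcomes and analysis plan.

```python
# Illustrative sketch: Holm-Bonferroni correction over several trial
# outcomes, as one standard way to "account for the fact that you're
# measuring multiple outcomes". Outcome names and p-values are invented.

def holm_bonferroni(pvalues, alpha=0.05):
    """Return the set of outcome names rejected at family-wise level alpha.

    Sort p-values ascending; compare the k-th smallest (1-indexed) to
    alpha / (m - k + 1); stop at the first non-rejection.
    """
    m = len(pvalues)
    rejected = set()
    for k, (name, p) in enumerate(
        sorted(pvalues.items(), key=lambda kv: kv[1]), start=1
    ):
        if p <= alpha / (m - k + 1):
            rejected.add(name)
        else:
            break  # Holm's procedure stops at the first failure
    return rejected

# Hypothetical outcomes from a sentencing trial
pvals = {
    "reconviction_within_2y": 0.004,
    "drug_use_self_report": 0.020,
    "employment_at_1y": 0.030,
    "housing_stability": 0.200,
}
significant = holm_bonferroni(pvals)
```

Unadjusted, three of the four invented p-values would pass a naive 0.05 threshold; after the correction only the smallest survives. This is the kind of routine adjustment Ben and Steve have in mind when they say multiple outcomes are a manageable complexity rather than an obstacle to sentencing RCTs.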