OK, so that headline isn't really an attention grabber. At least, not as much as "Youth more likely to be bullied at schools with anti-bullying programs".
So, let's say you're interested in whether anti-bullying programs are effective or not. You find yourself a dataset that includes individual-level data on students (including data on whether they have ever been physically or emotionally bullied at school), and school-level data on security climate (whether they have uniformed police, metal detectors, random bag/locker checks, etc.) and on whether the school runs an anti-bullying program. Importantly, the dataset is collected at a single point in time (a cross-sectional dataset). You run a regression analysis (using whatever fancy analysis method is your flavour of the month; in this case, because the data sit at two levels (individual and school) and the dependent variable is binary (bullied: yes or no), you run a multi-level logit model). You find that students' experience of bullying is positively associated with anti-bullying programs. In other words, students at schools that have anti-bullying programs are more likely to have experienced bullying.
You could reasonably conclude that anti-bullying programs somehow create more problem bullying, right? Wrong. What you've done is confuse correlation with causation, as the study cited in the article above (recently published in the open-access Journal of Criminology) has done (Quote: "Surprisingly, bullying prevention had a negative effect on peer victimization"; by "negative", they mean undesirable).
A positive association between anti-bullying programs and bullying could be because "students who are victimizing their peers have learned the language from these anti-bullying campaigns and programs" (quote from the UTA news release). Alternatively, and possibly more likely, it could be because schools that have more problem bullying are more likely to implement anti-bullying programs in the first place. In the latter case, the observed positive association between anti-bullying programs and problem bullying is in spite of the anti-bullying program, not because of it. A third possibility (and equally plausible) is that students who are at a school that has anti-bullying programs are more aware of what bullying is, are less afraid and more secure in themselves, and consequently are more likely to report bullying.
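That second explanation, selection, is easy to demonstrate with a quick simulation (a toy sketch with invented numbers, not the study's actual model): give schools with worse bullying climates a higher chance of adopting a program, make the program genuinely reduce each student's risk, and the raw cross-sectional comparison still runs the "wrong" way.

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools, students = 200, 100

# Latent "bullying climate" per school (higher = worse). Invented scale.
climate = rng.normal(0.0, 1.0, n_schools)

# Selection: schools with worse climates are more likely to adopt a program.
adopt = rng.random(n_schools) < 1 / (1 + np.exp(-2.0 * climate))

# True effect: the program REDUCES each student's log-odds of being
# bullied by 0.5 -- it helps, but it doesn't erase a bad climate.
log_odds = -1.0 + 1.5 * climate - 0.5 * adopt
p_bullied = 1 / (1 + np.exp(-log_odds))

# Simulate student reports and compare raw bullying rates by program status.
bullied = rng.binomial(students, p_bullied)
rate_program = bullied[adopt].sum() / (adopt.sum() * students)
rate_none = bullied[~adopt].sum() / ((~adopt).sum() * students)

# Program schools report MORE bullying, even though the program works.
print(rate_program > rate_none)
```

The point of the sketch is that the sign of the raw association tells you nothing about the sign of the program's effect when adoption itself depends on the outcome.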
So, while technically correct, the headline is a little misleading. Youth are more likely to be bullied at schools with anti-bullying programs, but not necessarily as a result of those programs. To be fair, the authors note in the limitations to their study that "the cross-sectional nature of the study limits one from making a causal inference about the relationship between individual and school-level factors and likelihood of peer victimization". However, their loose language in the conclusions and in the media release does just that.
The main problem here is that we have no idea of how much worse (or indeed, how much better) bullying would have been if those programs had not been in place. In order to tease that out, you would ideally need to have longitudinal data (data for schools both before, and after, their anti-bullying programs were implemented, and comparable data for schools that never implemented a program). Then you could see whether the anti-bullying programs have an effect or not (you would have a quasi-experimental design, and there are problems with this but I won't go into them here).
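To illustrate what that longitudinal comparison buys you (all rates invented for the example): suppose program schools started out worse and improved after adopting, while never-program schools drifted slightly upward. A post-only snapshot still makes programs look harmful, but a simple difference-in-differences calculation recovers the benefit.

```python
# Hypothetical fractions of students reporting bullying (invented numbers).
program_pre, program_post = 0.30, 0.24   # schools that later adopted a program
control_pre, control_post = 0.18, 0.20   # schools that never did

# Cross-sectional snapshot (post only): programs look harmful.
naive_gap = program_post - control_post                             # +0.04

# Difference-in-differences: each group's change over time, netted out.
did = (program_post - program_pre) - (control_post - control_pre)   # -0.08

print(round(naive_gap, 2), round(did, 2))  # 0.04 -0.08
```

With only the "post" column (the cross-sectional study), the +0.04 gap is all you can see; the before-and-after data is what exposes the −0.08 improvement.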
You could possibly argue that, because the results show a positive association, the anti-bullying programs are not effective (because they don't eliminate problem bullying in schools that have them). That's the angle taken by Time magazine columnist Christopher Ferguson ("Anti-Bullying Programs Could Be a Waste of Time"). Again the headline is technically correct but partly misses the point.
Why might schools with effective anti-bullying programs still show greater levels of bullying than schools without such programs? This could be because, even though a program reduces bullying, it doesn't eliminate it entirely, so schools that started with worse problems can still report more bullying than schools that never needed a program. In that case, schools with anti-bullying programs could still have higher-than-expected levels of bullying, even though their programs are effective.
So, because the study was poorly designed to determine the effectiveness of anti-bullying programs, it essentially tells us nothing about how effective (or ineffective) they are. I am much happier to believe a meta-analysis of results from many experimental and quasi-experimental studies, such as the one by Ferguson and others reported here. They found that anti-bullying programs show a small but significant effect. However, they also noted that this estimated effect was likely largely due to publication bias, and as a result they concluded that these programs "produce little discernible effect on youth participants". So, the question of whether these programs are effective or not remains somewhat open.
I guess the overall point here is that, as researchers, we need to be careful to interpret and not over-state our results, and where possible we also need to be careful about how the media interpret them. It is far too easy for the general public to misinterpret our results if they are not clearly stated.