Claudine Tinsman
DPhil candidate in Cyber Security

The Unstable Foundation of the UK Online Safety Bill

In March 2022, the Online Safety Bill (OSB) was introduced in the British Parliament. The Bill puts forth a regulatory framework that imposes a duty of care on large social media platforms to protect their adult users from two categories of content [2, Section 54]:

  1. Priority content that is harmful to adults (as defined by the Secretary of State).
  2. Content that poses a significant risk of psychological or physical harm to an appreciable number of adults or children.

A failure to adhere to this duty could result in fines of up to 10% of a platform’s global annual turnover and, in certain cases, criminal sanctions for executives [2]. While the OSB could significantly change the way social media companies manage harmful content, it also raises points of concern regarding its stance on legal but harmful content. Specifically, the lack of specificity in the Bill’s definitions creates the real possibility that it will not be implemented in a manner consistent with empirical evidence and media effects theory. At best, the Bill’s approach to the treatment of ‘legal but harmful’ content is not founded on robust evidence and could lead to unwanted outcomes. In this paper, I discuss the fundamental problems with the Bill’s language around content that is legal but harmful to adults and explore some ways in which the regulation may be implemented.

Background: Media Effects Research

Media effects are the short- and long-term within-person changes that result from media use [17]. They vary both within the same person and between people, depending on a combination of individual differences [17]. Therefore, people with different experiences and contexts may respond differently to similar types of content. The connection between individual characteristics and the effects of media use is further complicated by the fact that these characteristics are not equally stable: Some, such as personality or gender, are relatively stable over time, while others, such as mood, change rapidly [17] [13] [6]. In essence, one type of social media content might negatively impact someone at certain times but not others, and may be harmless, or even helpful, to someone else at certain points in time but not others [17]. Such within-person and between-person differences make even the coarsest predictions of emotional and psychological responses to media content difficult in a heterogeneous population.

Large-scale studies investigating associations between social media use and emotional or psychological harm have found that, on average, those who use social media more are not worse off than those who use it less [8] [7] [11] [16] [9]. However, most research on the link between social media use and well-being concerns the amount or frequency of use, not the kind of content users engage with. The lack of sizeable average effects has led to calls for work that identifies groups and individual users for whom there exists a significant relationship between social media use (in terms of both quantity and content) and well-being [17] [10]. Focusing on subgroups that share characteristics relevant to experiences of harm is likely to yield more accurate and practical insights into individual experiences of harm associated with social media content.

The Context of Harm

In the common law, a party to whom a duty of care attaches must exercise reasonable care to avoid a risk of injury to certain other people [18]. For example, an employer may have a duty of care to hammer down a nail protruding from a floorboard in their office space because it poses a risk of physical harm to employees: Regardless of age, gender, or ability, stepping on a nail will cause injury (though its severity will vary). However, this standard is harder to apply to social media content. A tweet is not the same as a protruding nail that can be hammered back into place to protect anyone who might step on it: Removing a perceived risk of harm for some also risks depriving others of the potential benefits of such content [5]. For example, it might seem reasonable to categorise self-harm content as ‘legal but harmful’. However, it can be challenging to distinguish content that promotes such behaviours from content that supports those who self-harm [1]. Therefore, if harm is not clearly scoped and defined in secondary legislation, there is a risk that the Bill could miss groups that are negatively affected by certain content while simultaneously removing content that is helpful to others.

Significant Risk

The threshold for significant risk of harm is not defined in the OSB. Even if one is set in the future, establishing a cutoff point fails to account for cases in which the negative effect of a single piece of content on an individual may be insignificant but repeated exposure may lead to significant harm over time. For example, there is evidence that, among young adult women, more frequent exposure to ‘fitspiration’¹ content is associated with increased severity of eating disorder symptoms (among those both with and without eating disorders) and greater body dissatisfaction [14] [15] [12].

The current iteration of the Bill does not stipulate whether exposure over time is a criterion for determining whether a harm is significant, nor how such cumulative effects would be measured. Without a clearer definition of significant risk, the Bill risks leaving out content that causes cumulative harm.

Priority Harm

The majority of the measures that large social media companies will have to take in treating² harmful content pertain to priority content harmful to adults [2, Section 54]. Yet, the Bill does not identify a list of ‘priority harms’. It will be up to the Secretary of State to define the scope of priority harms in secondary legislation, subject to parliamentary approval [3]. OFCOM, the regulatory authority tasked with implementing and enforcing the Online Safety Bill, will be required to regularly carry out reviews of legal but harmful content and publish reports at least every three years³ that advise the Secretary of State on potential changes to the categories of priority content [2, Section 56]. While this periodic review helps future-proof the Bill, the lack of specificity is troubling: There is no requirement that the Secretary of State follow OFCOM’s guidance, only that they receive it. This vagueness leaves room for priority content to reflect the political issues of the moment rather than the actual state of harmful experiences on social media.

An Appreciable Number of Adults

It is difficult to predict how interacting with social media content will affect a single person, and predicting how it will impact a group, let alone an entire population, is even more difficult. The OSB’s requirement that the harm affect an ‘appreciable’ number of adults is therefore worrisome: ‘Appreciable’ implies that the number of users affected must be noticeable, which poses a problem if, on average, the harmful effects are not apparent at a large scale. This issue could be addressed if there were substantial evidence about the impact of specific types of content on individuals or groups sharing particular predictive characteristics. However, that research is lacking: Categorising content and determining which relevant groups are affected by specific content are difficult tasks without clear guidance.

The Future

If the OSB’s lack of precision and nuance is not addressed in secondary legislation and through OFCOM’s codes of practice, it runs the risk of being a confusing and potentially harmful piece of legislation. However, there are some new inclusions in the Bill that provide cause for optimism. Section 14(2) sets out that large social media providers have “a duty to include...to the extent that it is proportionate to do so, features which adult users may use or apply if they wish to increase their control over harmful content” [2]. The government has stated that this empowerment clause gives adult users control over the content and users they interact with, but it has provided little detail about the extent of that control. Given the difficulty of predicting what content will be harmful to a particular person, a strong emphasis on user-centred features that empower individuals to manage their social media preferences could be highly effective in protecting users from harm.

References

[1] Legislation loophole failing to protect adults from online suicide dangers, says Samaritans.

[2] Online Safety Bill.

[3] Online Safety Bill: factsheet.

[4] Christensen et al. Evaluating associations between fitspiration and thinspiration content on Instagram and disordered-eating behaviors using ecological momentary assessment: A registered report. International Journal of Eating Disorders 54, 7 (2021), 1307–1315.

[5] Graham Smith. Speech is not a tripping hazard - response to the Online Harms White Paper.

[6] Gray, E., and Watson, D. Emotions, moods, and temperament: Similarities, differences, and a synthesis. In Emotions at Work: Theory, Research and Applications for Management. John Wiley & Sons, Inc., New York, NY.

[7] Ivie, E. J., Pettitt, A., Moses, L. J., and Allen, N. B. A meta-analysis of the association between adolescent social media use and depressive symptoms. Journal of Affective Disorders 275 (Oct. 2020), 165–174.

[8] Kross, E., Verduyn, P., Sheppes, G., Costello, C. K., Jonides, J., and Ybarra, O. Social Media and Well-Being: Pitfalls, Progress, and Next Steps. Trends in Cognitive Sciences 25, 1 (Jan. 2021), 55–66.

[9] Meier, A., and Reinecke, L. Computer-Mediated Communication, Social Media, and Mental Health: A Conceptual and Empirical Meta-Review. Communication Research 48, 8 (Dec. 2021), 1182–1209.

[10] Orben, A. The Sisyphean Cycle of Technology Panics. Perspectives on Psychological Science 15, 5 (Sept. 2020), 1143–1157.

[11] Orben, A. Teenagers, screens and social media: a narrative review of reviews and key studies. Social Psychiatry and Psychiatric Epidemiology 55, 4 (Apr. 2020), 407–414.

[12] Robinson, L., Prichard, I., Nikolaidis, A., Drummond, C., Drummond, M., and Tiggemann, M. Idealised media images: The effect of fitspiration imagery on body satisfaction and exercise behaviour. Body Image 22 (Sept. 2017), 65–71.

[13] Rothbart, M. K., and Sheese, B. E. Temperament and Emotion Regulation. In Handbook of emotion regulation. The Guilford Press, New York, NY, US, 2007, pp. 331–350.

[14] Tiggemann, M., and Zaccardo, M. “Exercise to be fit, not skinny”: The effect of fitspiration imagery on women’s body image. Body Image 15 (Sept. 2015), 61–67.

[15] Tiggemann, M., and Zaccardo, M. ‘Strong is the new skinny’: A content analysis of #fitspiration images on Instagram. Journal of Health Psychology 23, 8 (July 2018), 1003–1011.

[16] Valkenburg, P. M., Meier, A., and Beyens, I. Social Media Use and its Impact on Adolescent Mental Health: An Umbrella Review of the Evidence. preprint, PsyArXiv, July 2021.

[17] Valkenburg, P. M., and Peter, J. The Differential Susceptibility to Media Effects Model. Journal of Communication 63, 2 (Apr. 2013), 221–243.

[18] Woods, L. The duty of care in the Online Harms White Paper. Journal of Media Law 11, 1 (Jan. 2019), 6–17.

Footnotes

  1. Fitspiration images promote healthy living but may also reinforce the thin ideal by glorifying lean bodies, inducing feelings of guilt through stigmatising messages about body size, and promoting unhealthy attitudes towards exercise [4].

  2. See Section 13(4) of the Online Safety Bill for the treatment requirements [2].

  3. Except for the first report, which must be provided sooner [2, Section 56(6)].