Photosensitivity & Social Media

Millions of people experience photosensitivity, where flashing lights and patterns can trigger adverse health reactions like seizures. On social media, flashing GIFs and autoplaying videos create significant accessibility challenges. Our research paper, “Not Only Annoying but Dangerous”, showcases user-centered design solutions for a more inclusive online experience. As a key member of the research team, I facilitated co-design workshops, documented observations, conducted thematic analysis of user needs, performed the literature review, and contributed to the paper’s writing. Collaborating with photosensitive individuals and UX designers, we developed practical interventions to create safer social media experiences.

Client
CoLiberation Lab
Type
UX Research
Role
Researched, co-designed, and published paper at ACM ASSETS '24
Timeline
September 2023-April 2024
Team Members
Rua Mae Williams, Chorong Park, Monaami Pal, Atharva Dnyanmote, Luchcha Lam and Sean Joo

Secondary Research

Citation tree for “Not Only Annoying but Dangerous”

  • Prevalence and Impact of Photosensitivity: Photosensitivity encompasses a range of reactions to flickering or intense light, from photophobia (causing migraines, nausea, etc.) to photosensitive epilepsy (potentially leading to seizures and even death). It's a significant issue affecting many individuals.
  • Triggers in Digital Content: Photosensitive reactions can be triggered by various forms of digital content, including video games, social media, and even selfies. The Pokémon incident highlighted the potential for widespread harm. Flashing content isn't limited to videos or GIFs; it can also be created through basic web technologies (HTML, CSS, JavaScript).
  • Limitations of Existing Standards and Guidelines: While organizations like the ITU and W3C have developed guidelines (e.g., WCAG) to address photosensitivity, these are often insufficient. They may not cover the full spectrum of photosensitivity (e.g., photophobia), are difficult to enforce, and are often not followed. They also struggle to keep pace with the rapid creation and sharing of user-generated content. Cross-platform content sharing exacerbates the problem. Even legal actions and legislation (like Zach's Law in the UK) are recent and limited in scope.
  • Inadequacy of Current Protections: Existing system-level features like dark mode and reduced motion are helpful but don't fully address the issue. Dark mode, while now more common at the system level, still relies on app developers to implement it effectively. Reduced motion settings often only affect interface elements, not all motion on the screen. Even extra dim settings may be insufficient. Platform-driven solutions (like TikTok's warnings) are rare and inconsistent. User-level protections (browser plugins) are limited in scope and functionality.
  • The Role of Content Creators and Platforms: Content creators and platforms often prioritize user attention and engagement (for revenue purposes) over user safety, leading to the use of flashing content in ads and other media. There's a lack of clear standards and enforcement regarding flashing content in advertising and movie trailers. Platforms often lack robust reporting mechanisms for photosensitive content.
  • Community-Based Solutions: Photosensitive users often rely on community-based solutions, such as warnings from friends and family, and self-organized media curation. These practices highlight the need for more formal community-driven mechanisms for identifying and tagging dangerous content.
  • Need for a Multi-Layered Approach: The literature suggests that a comprehensive approach is needed, involving system-level changes, platform-level features, and community-driven initiatives, to effectively protect photosensitive users. Current solutions are fragmented and insufficient, highlighting the need for a more holistic ecology of protections.

Primary Research Methodology

Overview of primary research methodology

To comprehensively understand the challenges faced by photosensitive users and design effective protections, this research employed a mixed-methods approach. 

  1. User Survey (38 responses via Qualtrics): Provided a broad quantitative understanding of the problem space, including photosensitive social media user demographics, exposure frequency, and coping strategies.
  2. Co-design Workshops (18 UX students): To explore user needs and generate design solutions from the perspective of future designers of sociotechnical systems. This qualitative approach allowed for in-depth exploration and creative ideation. The workshops were structured around several key activities:
    1. Feature Exploration: Participants analyzed existing social media platforms, examining current features related to accessibility, reporting, and content management.
    2. Community Tech Solution Development: Participants collaborated to develop concepts for community-based tagging systems, crowdsourced reporting mechanisms, and other technological tools that could empower users to identify and manage potentially triggering content.
    3. World Mapping: Participants individually and collaboratively created visual representations of the current online content ecosystem, highlighting the challenges faced by photosensitive users and potential points of intervention.
  3. Data Analysis: Thematic analysis of survey and workshop data (sketches, diagrams, notes), with iterative review and synthesis by the research team, incorporating researcher observations.

Findings & Analysis

This section presents the key findings from our user survey and co-design workshops, highlighting the needs of photosensitive users and opportunities for design intervention.

Survey Findings:

Our survey, distributed via social media and disability advocacy networks, explored the experiences of 38 photosensitive respondents. While our sample is small, this reflects the difficulty of reaching this population, many of whom avoid social media platforms due to legitimate safety concerns.

Perceptions of exposure to photosensitive content

Key findings reveal significant challenges and coping strategies:

  • Pervasive Worry and Frequent Exposure: Participants reported a consistent feeling of worry about encountering triggering content and frequent experiences with potentially harmful flashing material.
  • Range of Symptoms: Reported symptoms ranged from mild discomfort (headaches, nausea) to more severe reactions (migraines, seizures), emphasizing the diverse impact of photosensitivity.
  • Limited Feature Awareness and Effectiveness: While some participants (22/38) knew about autoplay disable options, they noted that these settings often reset, demonstrating the need for more reliable controls.
  • Malicious Targeting: Worryingly, 8 participants reported receiving intentionally harmful GIFs, with 5 others unsure if their exposure was malicious, highlighting vulnerability to harassment.
  • Ineffective Reporting: Half of the participants had tried reporting dangerous content, but noted the lack of specific reporting categories for photosensitivity and frequent dismissal of their reports.
  • Mitigation Strategies: Participants used various coping strategies, including screen dimmers, color filters, well-lit rooms, and Dark Mode, indicating they must take on the burden of mitigation.
  • Reliance on Community: Participants heavily relied on friends and family to pre-screen content, emphasizing the critical role of social support. For example, one participant shared: “Some friends add content warnings for flashing graphics” (P40).

Workshop Findings:

Our co-design workshops with UX design students, some of whom self-identified as photosensitive, provided valuable qualitative insights. These firsthand perspectives enriched the discussions and ensured the proposed solutions were grounded in lived experience. 

Annotated participant sketches from the Rapid 6 activity

 Key themes and ideas emerging from the workshops include:

  • System-Level Protections: Strong consensus emerged for system-wide graphic filters and customizable sensitivity settings, similar to Dark Mode but specifically for photosensitivity, applying across all apps.
  • Enhanced Reporting and Tagging: Participants called for robust reporting mechanisms with specific categories for photosensitive content, plus community-driven tagging and warning systems to flag potentially harmful material.
  • Content Filtering and Warnings: Beyond basic filtering, participants explored intelligent content filters that automatically detect and mitigate triggering content, alongside clear, informative warning systems and “gates” (extra steps) for content interaction.
  • Creator Responsibility: Discussions highlighted the role of content creators, suggesting creator-driven warnings and educational resources to promote responsible content creation, even envisioning consequences for intentional targeting.
  • Community Empowerment: The workshops emphasized community-driven solutions, such as crowdsourced warnings and user-managed blacklists, to empower users to protect themselves and others.
  • Granular User Control: Across discussions, a central theme was the need for greater user control, including granular customization of sensitivity settings and transparent information about content risks.
  • Gaps in Existing Features: Participants noted the inadequacy of existing platform features, especially the lack of specific reporting options for photosensitivity, the inconsistent application of settings like "reduced motion", and the frequent dismissal of reports.

Proposed Solutions

Based on our research findings, we propose a multi-layered ecology of protections to address the complex challenges faced by photosensitive users. This approach addresses the issue at multiple levels, from policy and system design to user empowerment and community action. While we identified 21 features in the workshop activities, for the sake of brevity we will focus on some of the most frequently noted features. The UI design is based on low-fidelity prototypes and discussions from workshops, along with thematic analysis of primary research.

Detailed Reporting Features

The survey revealed that current reporting mechanisms are inadequate, with no specific category for dangerous flashing content. Existing options are often irrelevant, ineffective, and fail to differentiate between malicious intent and accidental sharing, highlighting the need for a new approach to reporting photosensitivity triggers. Improved reporting could provide platforms with valuable data about the prevalence and types of triggering content. 

This solution enhances the reporting mechanism by adding distinct options for strobes and flashing lights to better capture specific photosensitivity triggers. Users can also identify content as malicious or unintentional and provide additional context. After reporting, users receive confirmation, options to block or unfollow the user, and instructions for immediate relief, such as looking away from the screen.

Potential reporting mechanism for photosensitive content
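To make concrete the kind of data such a report could carry, below is a minimal sketch of a hypothetical report payload in Python. The field and category names (trigger_type, intent, follow_up) are illustrative assumptions based on the flow described above, not an existing platform API.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical categories; names are illustrative, not an existing platform API.
class TriggerType(Enum):
    STROBE = "strobe"
    FLASHING_LIGHTS = "flashing_lights"
    HIGH_CONTRAST_PATTERN = "high_contrast_pattern"

class Intent(Enum):
    MALICIOUS = "malicious"
    UNINTENTIONAL = "unintentional"
    UNSURE = "unsure"

@dataclass
class PhotosensitivityReport:
    content_id: str
    trigger_type: TriggerType
    intent: Intent
    context: str = ""  # optional free-text context from the reporter
    follow_up: list[str] = field(default_factory=list)  # e.g. ["block_user", "unfollow"]

# Example: a user reports a flashing GIF sent with apparent malicious intent.
report = PhotosensitivityReport(
    content_id="gif_12345",
    trigger_type=TriggerType.FLASHING_LIGHTS,
    intent=Intent.MALICIOUS,
    context="Sent repeatedly in DMs after I mentioned my epilepsy.",
    follow_up=["block_user"],
)
```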

While effective enforcement and verification of photosensitivity reports can be challenging, and handling a large volume of reports may require significant resources, existing content moderation systems for issues like nudity and violence demonstrate that large-scale reporting is already being managed, though potentially inefficiently. This suggests that similar infrastructure could be adapted and improved to address photosensitive content.

Filter Overlay

Participants wanted system-wide autoplay controls that override app settings, but ad-driven platforms may resist such controls. As an alternative, they also proposed a full-screen graphic filter to prevent users from being exposed to triggering content. Existing tools for filtering flashing graphics tend to be browser-based, so they do not cover social media, which is largely accessed through apps.

Real-time System-Level Graphic Filter: A filter that operates at the system level (meaning it works regardless of the specific app being used) and directly alters the pixels being displayed on the screen. The filter can use a frame-by-frame detection algorithm (like South et al.'s) to identify potentially dangerous changes in luminance (brightness) and color (red-shift) between frames.

  1. Detection: The algorithm analyzes consecutive frames to find rapid changes.
  2. Masking: It creates a mask highlighting the areas with these changes.
  3. Filtering: The filter uses the mask to modify the pixels in those areas, smoothing out the transitions and reducing the severity of the flash. Replacing flashing pixels with solid gray might be a more reliable (though potentially information-lossy) approach.
OS-level full-screen filter
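As a rough illustration of the three steps above, here is a minimal sketch in Python with NumPy. It compares per-pixel luminance between consecutive frames against a simple threshold and replaces flagged pixels with solid gray; the threshold value and luminance weights are illustrative assumptions, and a production filter (such as South et al.'s approach) would also track flash frequency and red-shift over time.

```python
import numpy as np

# Illustrative threshold: fraction of full-scale luminance change between
# consecutive frames treated as a potential flash. A real filter would also
# count flashes over a sliding window (e.g. more than 3 per second).
LUMA_DELTA_THRESHOLD = 0.10

def relative_luminance(frame: np.ndarray) -> np.ndarray:
    """Approximate relative luminance of an RGB frame (H x W x 3, values 0-255)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255.0

def flash_mask(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Steps 1-2: detect rapid luminance changes and build a boolean mask."""
    delta = np.abs(relative_luminance(curr_frame) - relative_luminance(prev_frame))
    return delta > LUMA_DELTA_THRESHOLD

def filter_frame(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Step 3: replace flagged pixels with solid gray (information-lossy but safe)."""
    mask = flash_mask(prev_frame, curr_frame)
    filtered = curr_frame.copy()
    filtered[mask] = 128  # mid-gray
    return filtered

# Example: a dark frame followed by a bright frame is fully masked.
prev = np.zeros((4, 4, 3), dtype=np.uint8)
curr = np.full((4, 4, 3), 255, dtype=np.uint8)
print(filter_frame(prev, curr)[0, 0])  # -> [128 128 128]
```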

The workshops repeatedly touched upon the need for more granular content controls and user autonomy. Customizable app-level filters that can easily be toggled were also discussed; examples included the ability to blur, dim, and pixelate content. Triggering content could be identified by existing, though sometimes inconsistent, community tags, supplemented by user reports. Furthermore, providing sufficient context about photosensitivity is essential, as users may not recognize their own symptoms, often associating it exclusively with epilepsy.

Examples of different toggle filters used on a social media feed to mask flashing
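As a sketch of how user-selected filter modes might be applied to a flagged region of the screen, the example below implements illustrative dim and pixelate transforms with NumPy; the mode names and parameters are assumptions, and a blur mode would typically rely on an imaging library such as Pillow or OpenCV.

```python
import numpy as np

def dim(region: np.ndarray, factor: float = 0.3) -> np.ndarray:
    """Darken a flagged region by scaling pixel values down."""
    return (region.astype(np.float32) * factor).astype(np.uint8)

def pixelate(region: np.ndarray, block: int = 8) -> np.ndarray:
    """Pixelate a flagged region by averaging over block x block tiles."""
    h, w = region.shape[:2]
    out = region.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1)).astype(np.uint8)
    return out

# A user-facing toggle maps the selected mode to the corresponding transform.
FILTER_MODES = {"dim": dim, "pixelate": pixelate}

def apply_filter(region: np.ndarray, mode: str) -> np.ndarray:
    return FILTER_MODES[mode](region)
```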

Adding Friction to Sending Content

To prevent accidental sharing of harmful GIFs, especially within close-knit online communities, we propose adding extra steps and poster-end warnings to the sending process. This encourages users to be more mindful of the content they share and to consider its potential impact on others. It addresses the workshop discussions about creator responsibility and the need for greater awareness of photosensitivity, and it acknowledges the reliance on community support highlighted in the survey findings, as it empowers users to protect their friends and family.

The example below demonstrates a photosensitivity warning modal that appears when a user attempts to send a flashing GIF in a direct message. The sender retains the option to send the GIF after being warned. A participant noted that their solution draws inspiration from Slack's “Notify Anyway” prompt.

Photosensitivity warning before a flashing GIF is sent
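Below is a minimal sketch of this sender-side gate, assuming the GIF has already been flagged as flashing (for example by frame analysis or community tags); the function name and flag are hypothetical.

```python
def confirm_send(gif_id: str, flagged_as_flashing: bool, prompt=input) -> bool:
    """Ask the sender to confirm before a flagged GIF is sent.

    Mirrors the 'Notify Anyway'-style friction discussed in the workshops:
    the sender is warned but retains the final choice.
    """
    if not flagged_as_flashing:
        return True  # no friction for unflagged content
    answer = prompt(
        f"GIF {gif_id} contains flashing imagery that may harm photosensitive "
        "viewers. Send anyway? [y/N] "
    )
    return answer.strip().lower() == "y"
```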

Viewer-End Warning Level

The movie Spider-Man: Into the Spider-Verse and its trailers, known for their fast cuts and white strobes, exemplify the potential for photosensitivity triggers in media. This is particularly relevant because such content often proliferates across platforms, including short-form video features like TikTok, Instagram Reels, and YouTube Shorts, which encourage endless scrolling and passive consumption.

To provide users with greater control over potentially triggering video content, we propose a warning overlay. As shown in the accompanying image, before a flashing video begins playing, a dark semi-transparent modal appears on the video player interface. This modal includes:

  • A visual indicator of the risk level based on the number of reports on the video (e.g., a bar graph or a count), providing a crowdsourced signal of risk.
  • “View Anyway” and “Skip” options, allowing users to make informed decisions about playback.
  • Sufficient context about photosensitivity, as users may not recognize their own symptoms and often associate it exclusively with epilepsy.
Viewer-end warning in short-form content

This solution addresses the need for granular content control, crucial given varying photosensitivities. Different warning levels empower users to customize their experience and make informed choices. Leveraging community feedback, particularly via report counts, supplements automated detection and builds upon existing reliance on community support.
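As one way the crowdsourced indicator could be derived, the sketch below maps the number of photosensitivity reports on a video to a warning level shown in the modal; the thresholds and level names are illustrative assumptions, not values from the study.

```python
def warning_level(report_count: int) -> str:
    """Map crowdsourced report counts to an illustrative three-level risk label."""
    if report_count == 0:
        return "none"     # no modal shown
    if report_count < 10:
        return "caution"  # modal with report count and View Anyway / Skip options
    return "high"         # emphasized modal; playback paused until confirmed

assert warning_level(0) == "none"
assert warning_level(3) == "caution"
assert warning_level(25) == "high"
```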

Conclusion

Our research reveals a concerning dissonance: social media, intended for connection, can inadvertently isolate photosensitive users. Inadequate platform features and the prevalence of harmful flashing content, particularly in short-form videos, leave users feeling worried and vulnerable.

To address this, we propose a range of solutions visualized across diverse digital contexts—social media feeds, direct messages, GIF databases, and short-form content platforms. I created these visualizations to demonstrate the various ways users encounter flashing content and illustrate how our proposed designs offer feasible improvements. 

While further evaluation with photosensitive users is crucial, this research is grounded in the lived experiences of both photosensitive and non-photosensitive users, gathered through co-design workshops that fostered a community-driven approach. We believe that implementing the proposed solutions and prioritizing user well-being can lead to a more accessible online environment.

The full research paper, “Not Only Annoying but Dangerous”: Devising an Ecology of Protections for Photosensitive Social Media Users, can be read here.
