After the Tumbler Ridge shooting, which shocked Canada, OpenAI quickly rolled out a series of remedial measures. Legal and technical experts, however, remain unconvinced. Scholar Jean-Christophe Bélisle-Pipon argues that the safety commitments OpenAI CEO Sam Altman made to the Canadian government essentially replace democratic regulation with corporate surveillance.

The 18-year-old shooter, Jesse van Ruttenaar, had his ChatGPT account flagged months before the attack for content related to gun violence. Although the account was banned, OpenAI did not report the threat to law enforcement. The tragedy ultimately left eight people dead.

New Measures by OpenAI: Establishing a "Direct Connection" Between Police and Companies

As a response, OpenAI has promised to take the following actions:

  • Direct Reporting: Report threats immediately to the Royal Canadian Mounted Police (RCMP).

  • Retrospective Review: Re-evaluate previously flagged suspicious accounts.

  • Expert Involvement: Give Canadian experts access to its safety operations and have them assist the government in developing regulatory recommendations.

Experts' Concerns: The "Surveillance Substitution" Trap

Professor Bélisle-Pipon makes a pointed observation in his article: OpenAI is deflecting scrutiny from model design and training methods, and intensifying monitoring of user speech instead.

  1. Accountability Vacuum: The reporting criteria are still privately set by OpenAI, lacking transparency and external audits.

  2. Chilling Effect: Research shows that users confide in chatbots precisely because the conversations feel private. If those conversations become direct channels to police surveillance, vulnerable users in psychological crisis who need help may choose silence instead, and opportunities for intervention will be missed.

  3. Regulatory Capture: This "voluntary concession" may be an attempt to preempt and weaken stricter legal regulation.

The True Direction of Governance: Shifting from Users to Systems

Critics argue that true AI regulation should not just monitor who is speaking, but examine the system itself. For example:

  • Independent Review Bodies: Third-party organizations composed of mental health and legal experts would assess risks, rather than the company deciding for itself.

  • Model-Level Accountability: Examine how the model responds when users disclose violent intentions, and what stress testing was conducted during the development phase.

As OpenAI actively seeks partnerships with governments around the world, the Tumbler Ridge incident is becoming a test case: are we getting safer technology, or a digital surveillance network run by private companies?