
From Reddit to Real Violence: How Online Radicalization Turned Two Teenagers Into School Shooters

  • Writer: CrisisWire
  • Nov 7

By Warren Pulley, BTAM Certified | CrisisWire Threat Management Solutions


The 16-year-old logged into his favorite online forum. He posted photos of his newly acquired tactical gear. He shared links to manifestos from previous school shooters. He used code words and symbols that extremist communities would recognize instantly.

The Anti-Defamation League saw it. They flagged the account to the FBI.

The FBI opened an investigation in July 2025. They tracked the posts for two months, watching as the content grew more concerning—references to past attacks, tactical preparations, escalating rhetoric.


But they couldn't identify who the anonymous user actually was.


On September 10, 2025, that anonymous user walked into Evergreen High School in Colorado with a loaded revolver and dozens of rounds of ammunition. Desmond Holly fired repeatedly, critically wounding two classmates before taking his own life.


Only after the shooting did the FBI connect their two-month investigation to the 16-year-old student.


The school never knew. Teachers never knew. Parents never knew. Until it was too late.

This isn't an isolated case. In Minneapolis just two weeks earlier, another shooter's weapons bore the names of previous mass attackers—evidence of obsessive research into past violence. Digital breadcrumbs existed. Warning signs accumulated online. But no systematic process connected those digital behaviors to real-world intervention.


After 40 years preventing violence across nuclear weapons facilities, LAPD violent crime investigations, and protecting diplomats under daily attack in Baghdad, I've learned that modern violence prevention requires understanding a domain most security professionals never trained for: the digital radicalization spaces where tomorrow's attackers are formed today.


The New Pathway to Violence Starts Online

Traditional threat assessment focused on observable real-world behaviors: verbal threats, weapons acquisition, surveillance of targets, conflicts with authority figures. Those indicators still matter. But they're no longer sufficient.


Today's pathway to violence increasingly begins and accelerates in digital spaces most adults never see.


The Evergreen Case: A Digital Trail Everyone Missed

Let's examine exactly what Desmond Holly did online—and why it matters for every school, workplace, and organization concerned about violence prevention.


What Holly Posted (According to ADL and FBI):


Extremist Symbols and Code Words: Holly used specific imagery and language that extremist communities recognize as signals of shared ideology. These aren't mainstream political symbols—they're deliberately obscure markers that allow radicalized individuals to identify each other while remaining somewhat hidden from general audiences.


References to Previous School Shooters: Multiple posts mentioned perpetrators of past school attacks by name. This identification with previous attackers represents one of the strongest warning signs in threat assessment. It suggests the individual sees these perpetrators as role models rather than cautionary examples.


Tactical Gear Documentation: Holly posted photos showing acquisition of tactical equipment—body armor, magazine pouches, gear typically associated with military or law enforcement operations. For a teenager with no legitimate need for such equipment, this represents preparation for violence.


Research Into Attack Methods: His online activity included searches and discussions about weapons, attack planning, and security vulnerabilities. This moves beyond passive interest into active preparation.


The Critical Timeline:

  • July 2025: ADL flags Holly's account to FBI

  • July-September 2025: FBI investigates but cannot identify user

  • August 2025: School year begins at Evergreen High School

  • September 10, 2025: Holly commits shooting

  • September 10, 2025 (after attack): FBI connects investigated account to Holly's identity


Two months of investigation. Two months of school operation. Zero communication between the investigators who knew something was wrong and the school where the threat actually existed.


That information gap isn't unique to Evergreen. It's systemic across American schools and workplaces.




Why Online Radicalization Is Accelerating School Violence

The internet hasn't just changed how potential attackers communicate—it's fundamentally altered the pathway to violence in ways that make attacks more likely and more lethal.


Mechanism 1: Echo Chambers Remove Reality Checks

In the pre-internet era, someone developing violent ideation faced natural barriers. Family members noticed concerning changes. Teachers observed troubling statements. Peers pushed back against extreme views. These real-world social interactions created friction that often interrupted escalation toward violence.


Online radicalization spaces eliminate those reality checks. Algorithms show users increasingly extreme content. Communities celebrate and encourage violent ideation. Every concerning thought finds validation rather than contradiction. The teenager who posts about attacking his school receives encouragement, tactical advice, and social status within these communities.


Mechanism 2: Study Guides for Mass Violence

Previous school shooters left manifestos, videos, and documentation of their planning. Rather than serving as cautionary warnings, this material functions as instructional content for potential future attackers.


Individuals researching school attacks can find:

  • Detailed accounts of previous incidents

  • Analysis of what worked and what didn't

  • Weapons selection recommendations

  • Target selection strategies

  • Security vulnerability assessments

  • Timing and logistics planning


As I detail in The Prepared Leader: Threat Assessment, Emergency Planning, and Safety, this creates a feedback loop where each attack becomes a case study for the next—and the internet ensures that case study material remains permanently accessible to anyone researching violence.


Mechanism 3: Gamification and Status-Seeking

Some online communities treat mass violence like a competition. They rank attackers by body count. They assign status based on attack "innovation" or "efficiency." They create leaderboards and infamy hierarchies.


For isolated, marginalized individuals seeking significance, these communities offer a path to fame (or infamy) that feels achievable. The Minneapolis shooter writing previous attackers' names on his weapons wasn't just showing obsessive interest—he was signaling his intention to join their ranks.


Mechanism 4: Anonymity Enables Escalation

Online anonymity removes normal social consequences for expressing violent intent. In face-to-face conversation, stating "I'm going to shoot up the school" triggers immediate intervention—alarm from listeners, involvement of authorities, social ostracism.


Online, the same statement in the right (or wrong) community receives encouragement: "Do it." "Target selection advice?" "Live stream it." The absence of immediate negative consequences allows ideation to progress to planning without the normal circuit breakers human interaction provides.


Mechanism 5: Technical Sophistication Exceeds Detection Capability

Radicalized individuals learn operational security (OpSec) techniques from these communities:

  • Using VPNs to mask location

  • Creating anonymous accounts with no ties to real identity

  • Employing encrypted communications

  • Leveraging platforms with minimal content moderation

  • Using coded language that evades automated detection


The FBI couldn't identify Desmond Holly despite two months of investigation because he employed OpSec techniques learned from online radicalization communities. A 16-year-old with basic technical knowledge can operate beyond the reach of federal law enforcement—and completely beyond the awareness of his school.


The Warning Signs Schools and Organizations Miss

Most administrators focus on physical warning signs while missing the digital indicators that precede nearly every modern school shooting and workplace violence incident.


Digital Red Flags That Preceded Recent Attacks

Based on the Evergreen and Minneapolis cases, plus analysis from my research on school threat assessments and prevention, here are the digital warning signs organizations must learn to recognize:


Tier 1: Identification With Previous Attackers

The single strongest digital predictor of future violence:

  • References to previous school shooters by name

  • Collections of manifestos, videos, or writings from attackers

  • Admiration or praise for perpetrators of mass violence

  • Adoption of imagery, symbols, or rhetoric associated with previous attacks

  • Attempting to contact or communicate with incarcerated perpetrators


This pattern appeared in both 2025 shootings: Holly's online posts about previous attackers, and the Minneapolis shooter's weapon inscriptions.


Tier 2: Research Into Attack Methods

Active preparation rather than passive interest:

  • Searches about weapons, explosives, or attack tactics

  • Questions about security vulnerabilities at specific locations

  • Requests for tactical advice from online communities

  • Study of previous attack timelines and methodologies

  • Analysis of what made previous attacks "successful" or "failures"


The line between concerning interest and dangerous preparation: Asking "how" questions rather than just "what" questions.

"What happened at Columbine?" = Historical interest
"How did the Columbine attackers bypass security?" = Attack planning


Tier 3: Acquisition and Display of Tactical Equipment

Physical preparations documented digitally:

  • Posting photos of weapons, ammunition, or tactical gear

  • Discussing recent purchases of equipment associated with violence

  • Seeking recommendations for optimal equipment selection

  • Displaying equipment in contexts suggesting violent intent

  • Tracking arrival of ordered materials or components


Holly's tactical gear posts, flagged by the ADL, fall squarely into this category. For a teenager with no legitimate need for body armor or tactical pouches, such acquisitions represent preparation for a specific purpose.


Tier 4: Extremist Ideology Adoption

Belief systems that justify or encourage violence:

  • Engagement with white supremacist, accelerationist, or other violent extremist content

  • Adoption of dehumanizing language toward potential target groups

  • Expressions of apocalyptic or nihilistic worldviews

  • Belief in conspiracy theories requiring violent response

  • Rhetoric positioning violence as necessary, justified, or inevitable


Holly's use of extremist symbols and code words placed him firmly in this category—indicators that radicalization extended beyond school violence specifically into broader violent ideology.


Tier 5: Grievance Amplification

Personal motivations intensified through online validation:

  • Public expressions of grievances against specific institutions or individuals

  • Escalating rhetoric about perceived injustices requiring action

  • Seeking validation for violent solutions to personal problems

  • Rejection of non-violent alternatives suggested by others

  • Timeline references suggesting planning for specific dates or events


Every threat assessment examines grievance development. Online spaces accelerate this process by validating even irrational grievances and encouraging violent response.


Tier 6: Leakage of Intent

Direct or indirect communications about planned violence:

  • Statements like "wait until they see what I have planned"

  • Cryptic references to future events: "Monday will be interesting"

  • Countdown posts suggesting timeline toward action

  • Requests for others to "watch the news" on specific dates

  • Farewell messages or distribution of personal effects


These communications—called "leakage" in threat assessment terminology—represent the most direct warning signs. The challenge: they often occur in semi-private online spaces where most adults never look.


Why Schools, Parents, and Employers Remain Blind

If warning signs exist online, why don't responsible adults see them? The gap isn't primarily technical—it's structural, legal, and cultural.


Barrier 1: Adults Aren't Where Young People Communicate

Schools monitor hallways. Parents check bedrooms. Employers watch workplace behavior.


But today's threatening communications happen on:

  • Discord servers with invitation-only access

  • Telegram channels using encryption

  • Reddit subforums with deliberately obscure names

  • 4chan and 8kun boards adults never visit

  • Gaming platform chats embedded in multiplayer games

  • TikTok comments and private messages

  • Snapchat messages that disappear after viewing


Most administrators and parents couldn't access these spaces even if they wanted to. The platforms change constantly. The language evolves to evade detection. The communities migrate when attention arrives.


Barrier 2: Privacy Laws and Ethical Concerns

Even when schools become aware of concerning online behavior, legal and ethical considerations constrain response:


Student Privacy Rights: FERPA and state laws restrict schools' ability to monitor student communications. Schools generally can't demand social media passwords, can't require students to reveal private accounts, and face legal risk for overly broad monitoring.


Employee Privacy Rights: Workplace monitoring of employee social media raises parallel concerns. Employers can't generally monitor personal accounts, can't discipline for off-duty speech, and must navigate complex privacy laws varying by state.


Fourth Amendment Considerations: Public schools face constitutional restrictions on searches—including digital searches—that private employers don't face.


Practical Reality: Even if legally permissible, most organizations lack resources for comprehensive social media monitoring of all students or employees.


Barrier 3: Technical Sophistication Gap

The Evergreen case demonstrates this perfectly: The FBI—with extensive technical resources and legal authority—investigated Holly's account for two months and couldn't identify him.


If federal law enforcement with subpoena power struggles to connect anonymous accounts to real identities, what chance does an average school administrator have?


Radicalized individuals employ:

  • Virtual private networks (VPNs) masking location and IP addresses

  • Anonymous browsers like Tor hiding identity

  • Cryptocurrency for untraceable purchases

  • Burner email addresses with no identifying information

  • Phone numbers from temporary services

  • Accounts created on public wifi networks


A 16-year-old with moderate technical knowledge can operate beyond the detection capability of most school IT departments.


Barrier 4: No Systematic Information Sharing

The most frustrating gap: different entities that possess pieces of the puzzle don't communicate.


In the Evergreen case:

  • ADL identified concerning behavior (July 2025)

  • FBI investigated the account (July-September 2025)

  • School operated normally with no awareness of threat (August-September 2025)

  • Holly's parents presumably interacted with their son daily

  • Holly's peers likely noticed behavioral changes


No mechanism existed to connect these information sources until after the shooting.

This isn't a flaw unique to Evergreen. It's systemic across American threat assessment. As I explain in my Threat Assessment Handbook, effective threat assessment requires breaking down silos between law enforcement, schools, social services, mental health providers, and families. Digital indicators make this integration even more critical—and even more difficult.


What Schools and Organizations Can Actually Do

Digital radicalization creates new challenges. But it also creates new opportunities for early intervention—if organizations develop appropriate capabilities.


Strategy 1: Establish Digital Threat Assessment Protocols

Traditional behavioral threat assessment teams must expand their scope to include digital warning signs.


Enhanced Team Composition:

Your threat assessment team needs members who understand digital spaces:

  • IT security professional who monitors school/organizational networks for concerning traffic patterns

  • Law enforcement liaison with connections to FBI, state fusion centers, and cybercrime units

  • Digital native (younger staff member or consultant who actually uses platforms where radicalization occurs)

  • Legal counsel ensuring monitoring complies with privacy laws


Digital Monitoring Within Legal Bounds:

What organizations CAN monitor legally:

  • Content on school/organization-issued devices

  • Activity on school/organization wifi networks

  • Public social media posts students/employees make without privacy settings

  • Information reported by peers, parents, or other community members

  • Information shared by law enforcement pursuant to appropriate protocols

What organizations generally CANNOT monitor:

  • Private social media accounts on personal devices

  • Personal email or messaging apps

  • Home internet activity

  • Communications with password protection

  • Content requiring authentication to view


The key: Focus on establishing reporting mechanisms and partnerships rather than attempting comprehensive surveillance.


Strategy 2: Create FBI-School Information Sharing Partnerships

The Evergreen shooting might well have been prevented if the FBI's July investigation had triggered notification to the school. But no protocol existed for that communication.


What This Partnership Looks Like:

Your threat assessment team establishes formal relationship with:

  • Local FBI field office

  • State fusion center

  • Local law enforcement cybercrime units

  • Social media platforms' safety teams (some have education/business portals)

Formalized Protocols:

Document agreements specifying:

  • What information law enforcement can share with schools/organizations

  • What information schools/organizations share with law enforcement

  • Who receives notifications (not principals or HR directors directly—your trained threat assessment team)

  • What actions trigger information sharing (investigations of current students/employees)

  • Privacy protections and information handling requirements

  • Regular communication schedule even when no active threats exist


Real-World Example:

Some school districts have established "online monitoring" memoranda of understanding with local law enforcement. When police encounter concerning online behavior potentially connected to a student, they notify the school's threat assessment team (not publicly, and without automatically triggering discipline).


The team investigates using school-available information, interviews the student if appropriate, assesses actual threat level, and coordinates intervention if needed.


Strategy 3: Train Staff in Digital Warning Sign Recognition

Your teachers, counselors, and managers won't monitor Reddit forums. But they can recognize when students or employees exhibit digital behavior suggesting radicalization.


Observable Indicators Staff Can Recognize:


At School/Work:

  • Viewing extremist content on school/work devices

  • Discussing radical ideologies during class/meetings

  • Showing peers manifestos or attack-related materials

  • Wearing symbols associated with extremist movements

  • Attempting to recruit others into extremist viewpoints

Reported by Peers:

  • "Check out what [student/employee] is posting online"

  • "He's in this weird group chat where they talk about violence"

  • "She keeps sending me videos of previous school shooters"

  • "His Instagram has really concerning posts"

Reported by Parents/Family:

  • Changes in online behavior at home

  • Spending excessive time in specific online communities

  • Making concerning purchases (tactical gear, weapons materials)

  • Refusing to let parents see devices/accounts

  • Personality changes correlated with increased online activity


As I detail in Campus Under Siege: School Safety Strategies, effective threat assessment depends on front-line staff knowing what to report and having clear, non-punitive reporting mechanisms.


Training Curriculum:

Annual training covering:

  • Digital warning signs (the tiers outlined earlier in this article)

  • How radicalization spaces operate

  • Differences between concerning and merely edgy online behavior

  • Reporting procedures (who to tell, how to document)

  • What happens after they report (demystify the process)

  • Legal and ethical boundaries of monitoring


Strategy 4: Implement Anonymous Reporting Systems

Students and employees see the concerning online behavior their peers share. But most won't report it through formal channels.


Why Traditional Reporting Fails:

Peers don't report because they:

  • Fear being wrong and getting someone in trouble

  • Worry about retaliation from the reported individual

  • Don't want to be labeled a "snitch"

  • Aren't sure if what they saw actually matters

  • Don't know where to report

  • Assume adults already know

Anonymous Reporting Breaks These Barriers:

Multiple reporting channels:

  • Anonymous tip line (phone/text)

  • Mobile app for reporting (widely used in schools now)

  • Web form with no identifying information required

  • QR codes in hallways/bathrooms leading to reporting portal

Critical Success Factors:

  1. Absolute anonymity protection - No attempt to identify reporters

  2. Rapid response - Every report investigated within 24 hours

  3. Feedback loop - General announcements that "tips are being acted on" without details

  4. No automatic punishment - Reporting triggers assessment, not discipline

  5. Heavy promotion - Students/employees must know system exists
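For illustration, the intake side of such a system might be sketched as below. Everything here is a hypothetical placeholder: real deployments use dedicated vendors, and must also strip network metadata (IP logs, device identifiers) that this toy example never sees. The point is structural: no reporter identity is captured, and every tip carries a 24-hour review deadline matching the rapid-response success factor above.

```python
# Toy sketch of anonymous tip intake; all names and fields are
# illustrative, not a real product's API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional
import uuid

@dataclass
class Tip:
    tip_id: str          # random reference ID; never traces to the reporter
    received: datetime
    review_by: datetime  # rapid-response SLA: investigate within 24 hours
    text: str

def intake(text: str, now: Optional[datetime] = None) -> Tip:
    """Record a tip with no identifying metadata (no name, account, or IP)."""
    now = now or datetime.now()
    return Tip(tip_id=uuid.uuid4().hex[:8],
               received=now,
               review_by=now + timedelta(hours=24),
               text=text)

tip = intake("A student keeps sharing videos of previous school shooters")
print(tip.review_by - tip.received)  # 24-hour review window
```

The random `tip_id` lets the team reference and track a tip internally without any path back to whoever submitted it.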


If Evergreen had robust anonymous reporting and students felt safe flagging Holly's concerning online behavior, assessment could have occurred months before September 10th.


Strategy 5: Educate About Digital Footprints and Radicalization

Prevention includes inoculating students and employees against radicalization in the first place.


Digital Literacy Programs Should Include:

For Students (Age-Appropriate):

  • How algorithms create echo chambers

  • Recognizing manipulation and radicalization tactics

  • Understanding that online anonymity isn't absolute

  • Digital footprints' impact on future opportunities

  • Peer pressure to engage with extreme content

  • Where to get help if they or a friend are being radicalized

For Employees:

  • Company social media policies

  • Consequences of extremist content association

  • Cybersecurity basics (phishing, account security)

  • Reporting concerning co-worker behavior

For Parents:

  • Platforms where radicalization occurs

  • Warning signs of online radicalization

  • Monitoring strategies that respect privacy while maintaining safety

  • Resources for concerned parents

  • When and how to involve professionals


Strategy 6: Coordinate With Platform Providers

Major social media companies have teams focused on violent extremism. Smaller platforms often don't. But partnerships are possible.


What Organizations Can Do:

  • Establish contacts at major platforms (Facebook, Instagram, TikTok, Snapchat, Discord)

  • Report concerning accounts through official channels

  • Request platform monitoring of specific threat actors

  • Participate in platform safety initiatives for schools/businesses

What Platforms Can Provide:

  • Removal of violating content

  • Account suspension for terms of service violations

  • Information to law enforcement (when legally required)

  • Educational resources about online safety

  • Technical tools for reporting concerning content


Realistic Expectations:

Platforms receive millions of reports. Response isn't immediate. Not all concerning content violates terms of service. Privacy and free speech create legitimate constraints on platform action.


But reporting still matters—it creates records, establishes patterns, and in egregious cases, prompts platform intervention before violence occurs.


Strategy 7: Develop Intervention Protocols for Digital Threats

When your threat assessment team identifies someone showing digital warning signs, what happens next?


Assessment Phase:

  1. Gather Information: Review all available digital indicators, interview reporters, examine school/work records, consult with law enforcement if appropriate

  2. Determine Threat Level: Using structured professional judgment frameworks (like those I teach in BTAM certification), assess whether behavior represents:

    • No threat: Edgy content but no genuine violence risk

    • Low concern: Warning signs present but multiple protective factors

    • Moderate concern: Clear warning signs, limited protective factors, requires intervention

    • High concern: Multiple warning signs, planning indicators, immediate action required

  3. Coordinate Response: Based on threat level, implement appropriate interventions
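For illustration only, here is how a team might organize the six tiers of digital warning signs and the four concern levels into a repeatable triage structure. Real structured-professional-judgment frameworks rely on trained human assessors, not automated scoring; every threshold in this toy model is a hypothetical placeholder.

```python
# Toy triage heuristic; thresholds are illustrative, not a validated tool.
from dataclasses import dataclass

@dataclass
class DigitalIndicators:
    attacker_identification: bool = False  # Tier 1: names/praises past attackers
    attack_method_research: bool = False   # Tier 2: "how" questions, tactics
    tactical_acquisition: bool = False     # Tier 3: gear/weapons documentation
    extremist_ideology: bool = False       # Tier 4: violent ideology adoption
    grievance_escalation: bool = False     # Tier 5: amplified grievances
    leakage: bool = False                  # Tier 6: stated or implied intent
    protective_factors: int = 0            # supportive adults, future plans, care

def triage(ind: DigitalIndicators) -> str:
    """Map indicators onto the four concern levels above (toy heuristic)."""
    warnings = sum([
        ind.attacker_identification, ind.attack_method_research,
        ind.tactical_acquisition, ind.extremist_ideology,
        ind.grievance_escalation, ind.leakage,
    ])
    if ind.leakage or warnings >= 4:
        return "high"       # planning indicators: immediate action required
    if warnings >= 2 and ind.protective_factors == 0:
        return "moderate"   # clear warning signs, no buffers: intervene
    if warnings >= 1:
        return "low"        # monitor, support, reassess
    return "none"

# A case resembling the Evergreen indicators (Tiers 1-4, no buffers)
print(triage(DigitalIndicators(attacker_identification=True,
                               attack_method_research=True,
                               tactical_acquisition=True,
                               extremist_ideology=True)))  # -> high
```

Note how protective factors lower the output: the same warning signs with strong buffers in place warrant monitoring rather than emergency response, mirroring the assessment logic described above.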


Intervention Options:

For Low/Moderate Concerns:

  • Interview with trained threat assessment team members

  • Increased informal monitoring by staff

  • Connection to mental health resources

  • Parent/family engagement and support

  • Positive behavioral supports and mentoring

  • Regular check-ins with counselor or trusted adult

For High Concerns:

  • Law enforcement notification and involvement

  • Suspension or removal pending investigation

  • Emergency mental health evaluation

  • Family meetings with clear expectations and safety planning

  • Restriction from campus/workplace until threat resolved

  • Ongoing case management and monitoring

Documentation Standards:

Every assessment must be documented thoroughly:

  • Information sources and dates

  • Assessment methodology and framework used

  • Risk factors and protective factors identified

  • Threat level determination and rationale

  • Interventions implemented and by whom

  • Monitoring plan and timeline

  • Regular case reviews and updates

This documentation provides legal defensibility if challenged and enables pattern recognition across multiple incidents.
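The documentation standards above can be sketched as a simple case-record structure. The field names and types here are my own illustration, not a mandated schema; the point is that every required element, from information sources to case reviews, gets a dated, append-only home.

```python
# Hypothetical case-record schema; adapt to your own documentation policy.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

@dataclass
class CaseRecord:
    subject_id: str                # internal reference, never a public label
    opened: date
    framework: str                 # assessment methodology used
    sources: List[Tuple[date, str, str]] = field(default_factory=list)
    risk_factors: List[str] = field(default_factory=list)
    protective_factors: List[str] = field(default_factory=list)
    threat_level: str = "undetermined"
    rationale: str = ""
    interventions: List[Tuple[date, str, str]] = field(default_factory=list)
    reviews: List[Tuple[date, str]] = field(default_factory=list)

    def log_review(self, when: date, note: str) -> None:
        """Append a dated review; earlier entries are never overwritten."""
        self.reviews.append((when, note))

case = CaseRecord(subject_id="TA-2025-014", opened=date(2025, 8, 12),
                  framework="structured professional judgment")
case.log_review(date(2025, 8, 26), "Two-week review: counseling engaged")
```

Keeping reviews append-only preserves the timeline that makes documentation legally defensible: what the team knew, when it knew it, and what it did in response.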


The Evergreen Prevention Scenario: How This Would Actually Work

Let's walk through exactly how comprehensive digital threat assessment could have prevented the Evergreen shooting.


July 2025 - Initial Detection:

Anti-Defamation League identifies concerning online account. They flag it to FBI as standard practice.


Under the protocols outlined above, the ADL also notifies its educational partners in Colorado about a concerning account apparently associated with someone in the state.


Colorado school safety fusion center receives notification: "Anonymous account showing indicators of potential school violence threat in Colorado. Posts include extremist symbols, references to previous school shooters, tactical gear acquisition."


Fusion center issues alert to threat assessment teams at Colorado school districts: "Be aware of potential concerning student behavior. Indicators to watch for: [list provided]. Report any students showing these patterns."


Early August 2025 - School-Based Observation:

Evergreen High School's threat assessment team receives the alert. They distribute warning signs to all staff.


A teacher notices a student viewing content on his phone between classes that matches the described indicators—images of tactical gear, references to previous attacks.


Teacher reports to threat assessment team through internal system. No punishment, just assessment.


Mid-August 2025 - Investigation:

Threat assessment team opens case on Desmond Holly. Team includes:

  • School counselor (mental health perspective)

  • School resource officer (law enforcement connection)

  • Assistant principal (administrative authority)

  • IT coordinator (digital expertise)

Investigation Steps:

  1. Review Holly's activity on school-issued devices and school wifi network (legally permissible)

  2. Interview teachers about Holly's behavior, academic performance, social interactions

  3. Review public social media (posts without privacy settings)

  4. Speak with Holly's peers confidentially about any concerns

  5. Contact FBI liaison about whether their investigation might be related

  6. Interview Holly's parents about home behavior and any concerns


FBI Connection:

When Evergreen's school resource officer contacts the FBI liaison about Holly, the FBI realizes their anonymous-account investigation may involve the same individual. They share (within legal bounds) that an investigation is ongoing into someone at the school.


Mid-August 2025 - Assessment:

Team determines Holly presents elevated risk:

  • Online activity suggesting radicalization ✓

  • Social isolation reported by peers ✓

  • Academic decline in recent months ✓

  • Recent family stressors (parents divorcing) ✓

  • No protective factors identified (supportive relationships, future plans, mental health support) ✗

Risk level: Moderate to High


Late August 2025 - Intervention:

Team implements comprehensive intervention:

  1. Mental Health Support: Mandatory counseling with school psychologist specializing in threat assessment and radicalization. External referral to therapist experienced with extremist disengagement.

  2. Parent Engagement: Meeting with Holly's parents explaining concerns, enlisting them as partners in intervention. Discussion of home internet monitoring, account access, ensuring firearms secured.

  3. Monitoring Plan: Increased informal staff oversight. Counselor check-ins twice weekly. Team review every two weeks.

  4. Positive Supports: Connection to mentoring program, club activities creating prosocial connections. Academic support addressing recent decline.

  5. Law Enforcement Coordination: FBI notified that subject of their investigation is identified. School resource officer maintains ongoing communication with Holly's family.


September 10, 2025 - Outcome:

Instead of Evergreen High School experiencing a shooting, Desmond Holly is:

  • Engaged in regular counseling addressing underlying issues

  • Connected to adults who provide support and monitoring

  • Participating in activities creating positive peer relationships

  • Under coordinated oversight from school, family, and law enforcement

  • Moving away from extremist content through therapeutic disengagement process


Violence never occurs because intervention happened during ideation/planning phases—before weapons acquisition, before final preparations, before the point of no return.


That's prevention. That's what digital threat assessment enables.


The Investment Required vs. The Cost of Inaction

Implementing comprehensive digital threat assessment requires investment. But consider the alternative.


Cost of the Evergreen Shooting:

  • Two students critically injured with long-term medical needs

  • Lifetime trauma for hundreds of students and staff

  • Community reputation damage affecting enrollment and property values

  • Litigation costs (lawsuits inevitable after school violence)

  • Insurance premium increases for district

  • Security enhancement costs district will now implement

  • Mental health services for traumatized community

  • Staff turnover and recruitment challenges

  • Shooter's family's lifetime impact

  • Incalculable harm to victims and families


Conservative Estimate: $10-30 million in direct and indirect costs over time

Investment in Prevention:

Comprehensive threat assessment program including digital capabilities for a mid-sized school district:


Year 1 Implementation:

  • Threat assessment team formation and training: $25,000-$35,000

  • Digital threat assessment capability development: $15,000-$20,000

  • Anonymous reporting system: $5,000-$10,000

  • Staff training on digital warning signs: $10,000-$15,000

  • Law enforcement partnership development: $3,000-$5,000

  • Policy and procedure documentation: $5,000-$8,000

Total Year 1: $63,000-$93,000


Ongoing Annual Costs:

  • Training updates and refreshers: $10,000-$15,000

  • Case management and consultation: $8,000-$12,000

  • Anonymous reporting system maintenance: $2,000-$3,000

  • Partnership coordination: $2,000-$3,000

Total Annual: $22,000-$33,000

10-Year Investment: $280,000-$390,000

Return on Investment:


Prevent ONE shooting over 10 years = 3,500-10,700% ROI
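The return figure above can be reproduced with a few lines of arithmetic. A minimal sketch, using the article's own avoided-cost and 10-year investment figures, and assuming the simple cost-avoided-to-investment ratio that the stated percentages imply:

```python
def roi_percent(cost_avoided: float, investment: float) -> float:
    """Simple ROI ratio: dollars of cost avoided per dollar invested, as a percentage."""
    return cost_avoided / investment * 100

# Article's figures: $10-30 million in shooting costs avoided versus a
# $280,000 10-year prevention investment (low end of the stated range).
low = roi_percent(10_000_000, 280_000)   # ~3,571%
high = roi_percent(30_000_000, 280_000)  # ~10,714%

print(f"ROI range: {low:,.0f}% to {high:,.0f}%")
```

Rounded, that recovers the roughly 3,500-10,700% range above; substituting the $390,000 high-end investment instead gives a still-dramatic but more conservative range of about 2,600-7,700%.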

And the unmeasurable return: Lives saved. Trauma prevented. Futures preserved.


What You Must Do Now

Every day your organization operates without comprehensive digital threat assessment capabilities, you remain vulnerable to threats developing in spaces you cannot see.

Desmond Holly's online radicalization progressed for months while Evergreen High School had no awareness. The FBI investigated but couldn't identify him. The ADL flagged concerns but couldn't notify the school. Students likely noticed concerning behavior but had no safe way to report it.


Every element needed for prevention existed—but no system connected them.

Your organization faces the same structural vulnerability. Right now, a student or employee may be posting extremist content, researching attack methods, acquiring tactical equipment, and moving toward violence. And you have no way to know.


Schedule Your Free 30-Minute Threat Assessment Consultation

Discuss your organization's digital threat assessment capabilities with a BTAM-certified expert who has spent 40 years preventing violence across diverse environments.

In this consultation, we'll address:


✓ Your current ability to detect digital warning signs
✓ Gaps in information sharing with law enforcement
✓ Anonymous reporting system effectiveness
✓ Staff training on online radicalization indicators
✓ Legal and ethical boundaries of digital monitoring
✓ Specific steps to implement comprehensive digital threat assessment


📧 crisiswire@proton.me
🌐 bit.ly/crisiswire


If you're dealing with a concerning situation involving potential digital radicalization RIGHT NOW:


CrisisWire provides 24/7 emergency threat consultation. Whether you've discovered concerning online behavior, received reports from peers, or identified warning signs requiring immediate assessment, we can provide expert guidance.


📧 crisiswire@proton.me (monitored continuously)




About Warren Pulley and CrisisWire Threat Management Solutions

Warren Pulley is founder of CrisisWire Threat Management Solutions, bringing 40 years of experience preventing violence across military, law enforcement, diplomatic, and educational environments.


Professional Credentials:

  • BTAM Certified - Behavioral Threat Assessment & Management (University of Hawaii West Oahu)

  • 20+ FEMA Certifications - IS-906 (Workplace Violence), IS-907 (Active Shooter), IS-915 (Insider Threats), Complete ICS/NIMS

  • Former LAPD Officer - 12 years investigating violent crimes and organized crime

  • U.S. Embassy Baghdad Security Director - 6+ years protecting diplomats under daily threat (zero incidents)

  • Former Director of Campus Safety - Chaminade University of Honolulu

  • U.S. Air Force Veteran - 7 years nuclear weapons security


CrisisWire serves organizations nationwide with:

  • Behavioral Threat Assessment Team Development

  • Digital Threat Assessment Capability Building

  • School Safety Programs with Online Radicalization Detection

  • Workplace Violence Prevention Including Cyber Indicators

  • Active Shooter Preparedness Training

  • Law Enforcement Partnership Development

  • 24/7 Emergency Threat Consultation


When violence is preventable, inaction is negligence.


The next school shooter is online right now. The question is whether your threat assessment team can see them. Contact CrisisWire today.


© 2025 CrisisWire Threat Management Solutions. All rights reserved.
