Secure Use Of AI Project


About

Science research projects currently use or will use an array of AI resources, from machine learning (ML) for elements of the research data lifecycle to generative AI large language models (LLMs) that help facilitate scientific research. These resources are part of a rapidly evolving landscape in which there is limited guidance on their cybersecurity and research security impacts. The Trusted CI Secure Use of AI project is gathering and disseminating information to help research cyberinfrastructure organizations and institutions of higher education understand the impacts of AI on their research and cybersecurity programs, including the inherent limitations and vulnerabilities of different types of AI. Our team is working to identify community needs and the challenges organizations face in adopting AI in effective yet safe ways. We aim to socialize guidance and other outputs from this project among NSF and the broader federally funded research community.


Want to talk?

If you have a question about, or need help with, securing AI in your research project or organization, contact Trusted CI at help@trustedci.org.

Previous events

● January 14, 2025 — RRCoP Webinar on AI & Cybersecurity

● January 13, 2026 — CI Compass Virtual Workshop

○ Virtual Workshop - AI Meets CI: Intelligent Infrastructure for Major & Midscale Facilities

● October 20-24, 2025 — NSF Cybersecurity Summit, highlighting AI-related sessions:

○ Oct 22, 12:30-1:30pm - Poster session including AI-related posters

○ Oct 22, 1:30-3:30pm - Trustworthy AI with Hands-on Defense Against Model Attacks (Training)

○ Oct 22, 1:30-5:00pm - Security Log Analysis (Training) with new AI content

Related resources

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)

NIST AI Risk Management Framework