Secure Use Of AI Project
About
Science research projects currently use, or will use, an array of AI resources, from machine
learning (ML) applied across the research data lifecycle to generative AI large language models
(LLMs) that help facilitate scientific research. These resources are part of a rapidly evolving
landscape with limited guidance on their cybersecurity and research security impacts.
The Trusted CI Secure Use of AI project is gathering and disseminating information to aid
research cyberinfrastructure organizations and institutions of higher education in understanding
the impacts of AI on their research and cybersecurity programs, including the inherent limitations
and vulnerabilities of different types of AI. Our team is working to identify community needs and
the challenges organizations face in adopting AI in effective yet safe ways. We aim to socialize
guidance and other outputs from this project among NSF and the broader federally-funded
research community.
Want to talk?
If you have a question or need help securing AI in your research project or organization,
contact Trusted CI at help@trustedci.org.
Previous events
● January 14, 2026 — RRCoP Webinar on AI & Cybersecurity
○ Combined Slides: 2026_01_AI & Cybersecurity.pdf
○ Recorded Video: https://youtu.be/WYNlQ4ssly4?si=ABRbg4E5zWwEX5qU
● January 13, 2026 — CI Compass Virtual Workshop
○ Virtual Workshop - AI Meets CI: Intelligent Infrastructure for Major & Midscale Facilities
○ Trusted CI talk (AI Meets CI 2026, Day 2): speakers Drew Paine and Dhwanit Pandya of Trusted CI & Indiana University
● October 2025 — NSF Cybersecurity Summit (October 20-24) highlighting AI-related sessions:
○ Oct 21, 11am-12pm - Artificial Intelligence Panel
○ Oct 21, 2:30-3pm - AI, Vibe Coding and Cybersecurity: How AI Coding Can Actually Improve Cybersecurity
○ Oct 22, 9-10:30am - Trustworthy AI with Hands-on Defense Against Model Attacks (Training)
○ Oct 22, 12:30-1:30pm - Poster session including AI-related posters
○ Oct 22, 1:30-3:30pm - Trustworthy AI with Hands-on Defense Against Model Attacks (Training)
○ Oct 22, 1:30-5pm - Security Log Analysis (Training) with new AI content
Related resources
● MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)