Secure Use of AI
Project Overview
Science research projects use, or plan to use, an array of AI resources to facilitate scientific research, from machine learning (ML) applied across the research data lifecycle to generative AI large language models (LLMs). These resources are part of a rapidly evolving landscape in which there is limited guidance on their cybersecurity and research security impacts.
The Trusted CI Secure Use of AI project is gathering and disseminating information to help research cyberinfrastructure organizations and institutions of higher education understand the impacts of AI on their research and cybersecurity programs, including the inherent limitations of, and vulnerabilities in, different types of AI. Our team is working to identify community needs and the challenges organizations face in adopting AI in ways that are both effective and safe. We aim to socialize guidance and other outputs from this project among NSF and the broader federally funded research community.
Want to talk?
If you have a question about, or need help with, securing AI use in your research project or organization, contact Trusted CI at help@trustedci.org.
Previous events
January 13, 2026 – CI Compass Virtual Workshop – AI Meets CI: Intelligent Infrastructure for Major & Midscale Facilities
October 20–24, 2025 – NSF Cybersecurity Summit. AI-related sessions:
Oct 21, 11:00am–12:00pm – Artificial Intelligence Panel – Recording
Oct 21, 2:30–3:00pm – AI, Vibe Coding and Cybersecurity: How AI Coding Can Actually Improve Cybersecurity – Recording
Oct 22, 9:00–10:30am – Trustworthy AI with Hands-on Defense Against Model Attacks (Training)
Oct 22, 12:30–1:30pm – Poster session, including AI-related posters
Oct 22, 1:30–3:30pm – Trustworthy AI with Hands-on Defense Against Model Attacks (Training)
Oct 22, 1:30–5:00pm – Security Log Analysis (Training), with new AI content
January 14, 2025 – RRCoP Webinar on AI & Cybersecurity – Recording | Slides
Related resources