Monday, October 20: Day 1

 
  • Description

    We aim to give attendees the ability to run a real-world Zeek installation on their own hardware. We will start by introducing the basic architecture of Zeek, including how to run and customize Zeek on the command line and how to do basic log analysis. We will also discuss what real-world deployments of Zeek look like, drawing on examples from R&E institutions, and we will teach attendees how to set up their own Zeek cluster deployments in production, together with all the cluster components and the new Zeek management framework. Other topics include the Zeek package manager, the configuration framework, the intelligence framework, customizing logging, and the input framework. This is a hands-on training in which attendees will run Zeek on their laptops. New this year, we are planning a training session on Unicor, a tool that uses MISP, pDNSSOC, and Zeek together for threat hunting on the network.
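    As a flavor of the basic command-line log analysis this training covers, standard Unix tools go a long way with Zeek’s TSV logs. The sketch below is illustrative only: the records and field layout are fabricated, and real conn.log files carry many more fields plus header metadata.

```shell
# Create a tiny, made-up excerpt in the spirit of Zeek's TSV conn.log
# (real files have more fields and additional #-prefixed header lines).
printf '#fields\tts\tid.orig_h\tid.resp_h\tid.resp_p\tproto\n'  > conn_sample.log
printf '1697800000.1\t10.0.0.5\t192.0.2.10\t443\ttcp\n'        >> conn_sample.log
printf '1697800001.2\t10.0.0.5\t192.0.2.11\t53\tudp\n'         >> conn_sample.log
printf '1697800002.3\t10.0.0.7\t192.0.2.10\t443\ttcp\n'        >> conn_sample.log

# Count connections per responder port (id.resp_p is field 4),
# skipping header lines -- the kind of quick triage zeek-cut makes easy.
awk -F'\t' '!/^#/ {count[$4]++} END {for (p in count) print p, count[p]}' \
    conn_sample.log | sort
```

    The same per-field counting pattern extends to any of Zeek’s TSV logs; in practice the training’s workflow would typically use zeek-cut for field selection.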

    Presenter bios

    Keith Lehigh is the Information Security Officer at the University of Colorado. He served at the UISO at Indiana University for 10 years, running Zeek on large .edu networks, and is currently a Zeek LT member.

    Christian Kreibich is the technical lead (Open Source) at Corelight by day and a Zeek wizard with exceptional Zeek-magic skills by night. He is also a Zeek LT member.

    Romain Wartel is a security engineer at ESnet. He comes from CERN, where he did forensics and threat hunting together with day-to-day security incident response.

    Fatema Bannat Wala is a security engineer at ESnet. She has more than ten years of experience in cybersecurity. She is a member of the Zeek Leadership Team, and an active contributor to the Zeek community and other open-source projects.

  • Description:

    Incident Response with Zeek - This beginner/intermediate training provides material and exercises to (i) get a sense of Zeek logs and their data richness, (ii) understand incident response (IR) with Zeek, and (iii) learn how to detect incidents using Zeek and write your own detections.

    Presenter bio

    Aashish Sharma is a member of the cybersecurity team of the Lawrence Berkeley National Laboratory. He is a long-time daily user of Zeek, part of the Zeek leadership team, and active in the Zeek community. Aashish is a prolific script-writer, as well as an author of several papers using Zeek data.

Tuesday, October 21: Day 2

  • Description

    Welcome & National Science Foundation (NSF) Address

    Presenter Bio

  • Description

    UCAR & NSF NCAR Welcome

    Presenter Bio

  • Description

    Trusted CI Update

    Presenter Bio

  • Description

    Keynote: TBD

    Presenter Bio

  • Description

    Federated identity is evolving. New technologies include decentralized identifiers, OpenID Federation, passkeys, and verifiable credentials. New guidelines (e.g., NIST SP 800-63-4) specify new requirements (e.g., phishing-resistant authentication). Identity and access management practices on campus and at NSF facilities are also evolving. This panel brings together federated identity practitioners from the academic community to discuss challenges, current practices, and future plans for enabling secure access to cyberinfrastructure.

    Presenter Bio

    Jim Basney leads the CILogon project at NCSA. Jim received his PhD in computer sciences from UW-Madison.

  • Description

    This presentation will cover the basics of AI/vibe coding and how it can be used for scientific studies, presenting a six-step process of:

    • Experiment design

    • Code design

    • AI code development

    • Code verification

    • Experimentation

    • Experimental validation

    Then the benefits, limitations, and drawbacks of AI/vibe coding in general will be discussed, followed by the cybersecurity implications. These will include:

    A discussion of drawbacks, including:

    • AI code with inadvertent defects

    • AI as an attack vector (e.g., model tampering, fooling the AI, AI as an exfiltration vector)

    • Risks posed by AI code to infrastructure

    A discussion of the benefits, including:

    • Reduced coding time facilitates more time for assurance activities

    • AI being able to catch defects that programmers may miss (e.g., due to inexperience and/or review from a different perspective)

    • AI's holistic understanding of the software being able to prevent and potentially detect/correct complex defects

    Next, a brief demonstration of an AI/vibe coding tool will be provided, followed by a brief discussion of the concerns, benefits, risks, and mitigations listed above. The overall implications of AI coding will then be discussed, followed by time for audience Q&A.

    Presenter Bio

    Jeremy Straub is the director of the North Dakota State University Cybersecurity Institute and an associate professor. His research focuses on the intersection of AI and cybersecurity. His work has been supported by the U.S. Department of Defense, the U.S. National Science Foundation, the U.S. National Institutes of Health, NASA, and others. He focuses on technical advancement, policy implication analysis, and the entrepreneurial advancement of technology.

  • Description

    Trusted CI has been engaging with the scientific community on software security issues for a decade. These engagement activities have been in the form of code vulnerability analyses, software security training sessions, written guides on software security, advice on software security practices, and community surveys on software security practices in the scientific community. Throughout these activities, we found that many organizations and projects were looking for guidance on what practices they should follow and how they should establish them.

    Over this same decade, Trusted CI developed its Trusted CI Framework, which gives science projects guidance on how to establish a security program. The TCI Framework was designed with the scientific community in mind: it is a minimum standard for cybersecurity programs, providing guidance on establishing a mission-focused approach to managing cybersecurity. The Framework has been successfully adopted by over 50 organizations during this period.

    Inspired by the TCI Framework’s impact, we are setting out to develop a Trusted CI Software Security Framework (SoF), whose goal is to help scientific organizations establish software security programs. The guiding principles for the SoF are that it should:

    • Be tailored to the needs, practices, and resources of scientific research projects.

    • Help an organization establish minimum standards for software security practices across the organization.

    • Be prescriptive, in that it provides a clear process for the organization or project to establish a software security program.

    • Be role based, in that at each step of establishing a software security program, it is clear who in the organization or project is responsible for each task.

    • Provide an evolutionary path for an organization or project to establish a software security program, and be able to evolve and refine that program over time.

    • When there are multiple similar choices available in the design of the SoF, take the one most closely aligned with the TCI Framework.

    The SoF will be designed such that organizations that have already established security programs based on the TCI Framework will find the process familiar and be able to more quickly get started on the SoF process. For organizations that have not established a security program, the SoF will provide the step-by-step processes needed to establish a software security program.

    The SoF team has developed the pillars of the SoF Framework and has started the effort on the Software Framework Implementation Guide.

    This presentation will provide a summary of the Software Framework, a report on the status of the work to date, and upcoming plans. We will allow plenty of time for questions, comments, and suggestions.

    Team members: Elisa Heymann (UW-Madison), Scott Russell (IU), Hauke Bahr (IU), Kate Rasmussen (LBL), Damian Rouson (LBL), Dan Gunter (LBL), Sarthak Khattar (UW-Madison)

    Presenter Bio

    Barton Miller is the Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison. He leads the UW-Madison effort on Trusted CI, the National Science Foundation Cybersecurity Center of Excellence, where he helps lead the software assurance effort. His research interests include software security, in-depth vulnerability assessment, and binary code analysis. In 1988, Miller founded the field of fuzz random software testing, which is the foundation of many security and software engineering disciplines. In 1992, Miller (working with his then-student Prof. Jeffrey Hollingsworth) founded the field of dynamic binary code instrumentation and coined the term “dynamic instrumentation”. Miller is a Fellow of the ACM and recipient of the IFIP WG 10.4 Jean-Claude Laprie Award in Dependable Computing and an R&D 100 Award.

    Miller has been chair of the Institute for Defense Analysis Center for Computing Sciences Program Review Committee; member of the FAA VECTOR Task Force; member of the National Nuclear Security Agency Los Alamos and Lawrence Livermore National Labs Security Review Committee; member of the Los Alamos National Laboratory Computing, Communications and Networking Div. Review Committee; member of the U.S. Secret Service Electronic Crimes Task Force (Chicago area); and is currently an advisor to the Wisconsin National Guard and advisor to the Wisconsin Security Research Consortium.

  • Description

    CMMC is not easy, but it may not be as hard as you are making it for yourself. There are many controls that have “well-known” solutions, but those solutions are not the only way to meet those control requirements. For example, firewalls are the most common way to deny network communications traffic by default and allow network communications traffic by exception (3.13.6), but they are not the only way. This talk will cover some of the ways universities may be making CMMC harder for themselves, and how to rationalize any “non-standard” solution to an assessor.

    Presenter Bio

    Laura Raderman has been in the cybersecurity industry for over 25 years, first as a consultant to Fortune 50 companies, and now as a Policy and Compliance Coordinator for Carnegie Mellon University’s Information Security Office for the last 11 years. At Carnegie Mellon University, Laura is responsible for assisting all departments at the University to comply with various laws, regulations and contractual agreements controlling information security. Laura is an active contributor to EDUCAUSE including comments on NIST standards and Federal rulemaking related to research security.

    Laura is a Lead CMMC Certified Assessor (CCA) and authorized to participate in Cybersecurity Maturity Model Certification (CMMC) Level 2 Assessments under the US Department of Defense’s CMMC Program.

  • Description

    The National Institutes of Health (NIH) and its partner organizations maintain a variety of datasets that are invaluable to researchers in the biological and healthcare sciences.

    While many such datasets are openly accessible, there are also many that contain confidential controlled-access data from human subjects.

    Effective January 25, 2025, the NIH requires all new or renewed projects using or storing such controlled-access data to adhere to its new NIH Security Best Practices for Users of Controlled-Access Data.

    New or renewed Data Use Certifications (DUCs) incorporate these requirements. The core of the new policy is that systems storing and processing such data must be compliant with NIST SP 800-171 “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations”, or ISO/IEC 27001/27002 for non-US users.

    While less cumbersome than the environment required for export-controlled ITAR/EAR projects, these regulations are still a substantial expansion over what engineers, facilitators, and researchers under the NIH’s subject-matter umbrella have previously been accustomed to.

    To support researchers in complying with these requirements, the Rosen Center for Advanced Computing (RCAC) at Purdue University has built and deployed the Rossmann cluster, a NIST SP 800-171v3 compliant, downsized version of the Top500 Gautschi cluster, providing both AMD Zen 4-based CPU capacity and Nvidia H100-based GPU capacity.

    Lessons learned from years of supporting export-controlled CUI work enabled us to successfully calibrate technical control choices against smooth user experience.

    Key points of interest included split routing at the secure network boundary, integration of existing storage systems compliant with prior NIH policies, web-based access to the cluster, user-local package management, and interactive notebook-centric workflows.

    Since deployment, Rossmann has been successfully serving projects using data from multiple NIH institutes, as well as data from private entities requiring compatible controls, with users reporting only minimal disruption to their established scientific workflows.

    Presenter Bios

    Charles Christoffer is a Senior Computational Scientist with the Rosen Center for Advanced Computing at Purdue University. Over more than a decade of experience in structural bioinformatics, he has leveraged cluster computing resources to solve problems in molecular biology. He received a 2019 fellowship through the NIH T32 GM132024 Molecular Biophysics Training Program, and received his PhD in 2023 from the Department of Computer Science at Purdue University.

    Alex Younts is a Principal Research Engineer with the Rosen Center for Advanced Computing at Purdue University with two decades of experience in academic, commercial, and government HPC environments.

Wednesday, October 22: Day 3

  • Description

    During the investigation of security incidents, the ability to gather information from the affected system in real-time is invaluable. Live forensics, though challenging, often uncovers critical insights that would be lost once the system is powered down, making it a crucial skill for system administrators.

    This training introduces the essential principles of digital forensics and basic techniques for acquiring and analyzing data from a live system. Aimed at system administrators, the session focuses on practical methods that can be easily adopted using commonly available tools. The emphasis will be on Linux-based systems.

    The session will begin with a presentation covering the main principles and introducing fundamental techniques. The rest of the session will be a hands-on experience. Participants will be provided access to live Linux environments and will work through a series of tasks, each targeting a specific aspect of the investigation. The format will resemble a capture-the-flag challenge, allowing participants to tackle tasks in any order and at their own pace.

    We will conclude the training with a discussion on related initiatives and explore opportunities for collaboration on training activities. We will discuss the potential of the WISE community platform for fostering ongoing training activities.

    Presenter Bio

    Daniel Kouril works with CESNET and Masaryk University as a senior researcher focusing on various aspects of security in distributed environments. Daniel has a strong background in incident response and dealt with a number of incidents of various levels. He is a member of several security teams and platforms.

  • Description

    With AI increasingly integrated into scientific research and cyberinfrastructure, ensuring its trustworthiness is now an urgent operational priority. AI models have become essential tools in tasks ranging from autonomous data labeling and classification to real-time threat detection, but their growing importance also makes them vulnerable targets for adversarial attacks. This 90-minute hands-on workshop offers a practical and approachable introduction to AI threats and defences, specifically curated for cybersecurity professionals and researchers working within NSF-supported infrastructures.

    Participants will explore real-world adversarial scenarios including data poisoning, backdoor attacks, model extraction, and membership inference, all of which pose significant risks to model integrity, availability, privacy, and fairness. Through a guided lab activity using a simplified Google Colab notebook, attendees will simulate a model extraction pipeline and evaluate how an adversary can reconstruct a deployed model and launch downstream privacy attacks. The workshop will also explore how these threats interact with broader issues such as data governance, explainability, and ethical AI use in regulated domains (e.g., healthcare, education, justice).
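    The model-extraction idea behind the guided lab can be reduced to a toy sketch. The “victim” below is a hypothetical one-dimensional threshold classifier, not the workshop’s Colab model: with only black-box query access, an attacker pins down the secret decision boundary.

```shell
# Toy illustration of model extraction (hypothetical example): the "victim"
# is a black-box binary classifier whose secret decision boundary is 37.
# The attacker is limited to query access.
victim() { if [ "$1" -ge 37 ]; then echo 1; else echo 0; fi; }

# Attacker: binary-search the input range using only black-box queries
# until the boundary is recovered -- a minimal model-extraction attack.
lo=0; hi=100; queries=0
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    queries=$((queries + 1))
    if [ "$(victim "$mid")" -eq 1 ]; then
        hi=$mid   # mid classified positive: boundary is at or below mid
    else
        lo=$mid   # mid classified negative: boundary is above mid
    fi
done
echo "recovered boundary: $hi after $queries queries"
```

    Real extraction attacks query high-dimensional models and train a surrogate on the responses, but the economics are the same: each query leaks information about the decision surface.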

    Drawing from educational materials developed for a graduate-level course on Trustworthy AI, the session is designed for cybersecurity practitioners, AI researchers, and NSF program leads who want to understand the operational implications of adversarial machine learning and the trade-offs between model performance, interpretability, and resilience. No deep background in machine learning is required; participants will be walked through key concepts such as threat models, fairness metrics, and defensive strategies, including differential privacy, adversarial training, and model calibration.

    By the end of the session, attendees will:

    • Understand how AI systems can be distorted through black-box queries or poisoned data.

    • Experiment with adversarial and fairness-aware pipelines using pre-written Python notebooks.

    • Analyse the trade-offs between predictive performance and fairness/security guarantees.

    • Take home reusable lab materials for further training, simulation, or education.

    This workshop bridges theory and application, highlighting how NSF-funded infrastructures can adopt proactive and technically grounded approaches to safeguard AI deployments. The workshop is designed to engage a diverse audience, including system administrators, red team professionals, educators, and policy makers who are looking for practical approaches to deploying AI systems securely and ethically in high-risk, mission-critical settings.

    Session Length (1 hour and 30 minutes):

    • 30 minutes introduction of trustworthy AI triangle, attack types and threat models

    • 25 minutes walkthrough and demo of a model extraction and fairness attack using Google Colab

    • 15 minutes group hands-on lab with Q&A and mentoring support

    • 20 minutes wrap-up discussion on operational defences and resource sharing

    All workshop materials will be publicly available under an open license to promote further adoption and adaptation.

    Presenter Bio

    Maryam Taeb is an Assistant Professor of Cybersecurity and Information Technology. Dr. Taeb holds a Ph.D. in Electrical Engineering; her doctoral research focused on deepfake detection, enhancing judicial systems by developing a reliable chain of custody for evidence acquisition and authentication using machine learning and blockchain technology. Her research focuses on Trustworthy AI, exploring the intersection of cybersecurity and generative AI with an emphasis on robustness, transparency, and accountability in AI models. Her prior work aimed to mitigate bias and address ethical challenges in deploying AI for critical applications. She is actively studying the AI threat landscape, tackling issues such as privacy-sensitive data inference and the safe deployment of AI systems. Her recent publications examine how large language models (LLMs) and large vision-language models (LVLMs) can be exploited as attack vectors.

    Currently, Dr. Taeb is researching multi-agent planning in cyber threat intelligence, focusing on automating incident triage, phishing detection, and social engineering attacks. Additionally, she is investigating AI applications in academic advising, particularly in personalized portfolio generation, course recommendations, and career pathway guidance.

  • Description

    As organizations rapidly adopt container technologies like Docker and Kubernetes to drive digital transformation, the security of these environments becomes paramount. Containers, while agile and efficient, introduce unique security challenges—ranging from image vulnerabilities to runtime threats and network segmentation complexities[1][2][3][6]. At the same time, attackers are increasingly targeting containerized infrastructures, exploiting misconfigurations and weak access controls.

    This plenary session will explore how integrating honeypots—deceptive systems designed to attract and analyze malicious activity—into containerized environments can significantly enhance network defenses. Attendees will learn:

    • The fundamentals and best practices of container security, including image scanning and network segmentation[1][2][6].

    • How honeypots can be deployed on containers, VMs, and edge devices (such as Raspberry Pi) to detect, analyze, and respond to evolving threats.

    • Real-world insights from deploying honeypots, including attack patterns, lessons learned, and how honeypot data can inform security policies[7][8].

    • Strategies for merging honeypots and container security to create adaptive, proactive network defense mechanisms.

    The session will provide actionable recommendations for security architects, DevOps teams, and IT leaders seeking to harden their container infrastructure while gaining valuable threat intelligence through honeypot technology.
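    As one concrete illustration of the honeypot-on-containers pattern discussed above, an SSH/Telnet honeypot such as Cowrie can be run via Docker Compose. This is a minimal, illustrative sketch only; the image tag, port mapping, and volume path are assumptions that should be verified against the project’s documentation before use:

```yaml
# docker-compose.yml -- minimal, illustrative honeypot deployment (verify
# image name, exposed port, and log path against Cowrie's documentation).
services:
  cowrie:
    image: cowrie/cowrie:latest    # community image; pin a tag in practice
    restart: unless-stopped
    ports:
      - "2222:2222"                # honeypot SSH listener
    volumes:
      - ./cowrie-logs:/cowrie/logs # persist logs for later analysis; path varies by image
```

    Isolating the honeypot on its own network segment and shipping its logs to a SIEM keeps the deception system from becoming a pivot point while still yielding the threat intelligence the session describes.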

    References

    [1] SentinelOne. (2025, May 8). 10 Container Security Best Practices in 2025. https://www.sentinelone.com/cybersecurity-101/cloud-security/container-security-best-practices/

    [2] Sysdig. (2025, February 18). Comprehensive best practices for container security. https://sysdig.com/learn-cloud-native/container-security-best-practices/

    [3] ActiveState. (2025, June 17). 10 Container Security Best Practices for Engineering Teams. https://www.activestate.com/blog/10-container-security-best-practices-every-engineering-team-should-know/

    [4] Legit Security. (2024, December 12). 10 Container Security Best Practices: A Guide. https://www.legitsecurity.com/aspm-knowledge-base/container-security-best-practices

    [5] Hey, Valdemar. (2025, March 14). Master Container Security in 2025 - Best Practices & Live Demo. https://www.heyvaldemar.com/master-container-security-in-2025-best-practices-and-live-demo/

    [6] Aqua Security. (2025, May 25). 10 Container Security Best Practices. https://www.aquasec.com/blog/container-security-best-practices/

    [7] HoneyDB. (2025, May). HoneyDB Statistics. https://honeydb.io/stats/

    [8] Gcore. (n.d.). Compromised container detection with honeypot containers. Retrieved June 19, 2025, from https://gcore.com/learning/compromised-container-detection-with-honeypot-containers

    Presenter Bio

    Vishal Singh is a Security Analyst at Indiana University’s REN-ISAC OmniSOC, where he specializes in threat detection, vulnerability management, and incident response. With extensive hands-on experience in honeypots, container security, SOC triage, and real-world attack simulation, Vishal develops innovative detection strategies to strengthen organizational defenses.

    He holds a Master’s degree in Cybersecurity from Purdue University and has earned several industry certifications, including SC-900, CC, and Fortinet. Vishal is also an active knowledge sharer, having conducted internal training sessions on container escape techniques. Currently, he is expanding his expertise by exploring advanced honeypot deployments and Red Teaming methodologies.

  • Description

    The Trusted CI Framework is a minimum standard for cybersecurity programs that can be used by any organization, regardless of age, size, or sector. However, the specifics of implementing the Framework will vary considerably depending on the specifics of the organization. One particularly challenging and important area is how to “get started” adopting the Framework. Implementing all 16 of the Framework’s Musts can be a daunting task, and organizations need help determining what to prioritize.

    This training will explore strategies for organizations seeking to adopt the Framework, with particular attention paid to the diversity of starting postures organizations can have. The training will focus on setting effective priorities, crafting realistic timelines, and overcoming common obstacles. Substantial time will be dedicated to Q&A with the trainees, brainstorming potential solutions to their real world challenges.

    Presenter Bio

    Scott Russell (scolruss@iu.edu) is a Chief Security Analyst with the Indiana University Center for Applied Cybersecurity Research (CACR) and the Deputy Director of Trusted CI, the NSF Cybersecurity Center of Excellence. A lawyer and researcher, Scott’s expertise ranges from privacy and cybersecurity to international law. He is the program lead for the Trusted CI Framework, a co-author of Security from First Principles: A Practical Guide to the Information Security Practice Principles, played a central role in building the PACT and Cybertrack assessment methodologies, and served as temporary faculty with Naval Surface Warfare Center Crane. He received his B.A. in Computer Science and History from the University of Virginia, received his J.D. from Indiana University, interned at MITRE, and served as a postdoctoral fellow at CACR.

    Craig Jackson is Deputy Director at the Indiana University Center for Applied Cybersecurity Research (IU CACR), where his research and development interests include cybersecurity program development and governance, cybersecurity assessment design and conduct, legal and regulatory regimes' impact on information security and cyber resilience, evidence-based security, and innovative defenses. He has led collaborative work with critical infrastructure and national security stakeholders, as well as interdisciplinary assessment and guidance teams for Trusted CI, the NSF Cybersecurity Center of Excellence. He is the principal architect of the Trusted CI Framework, and played a central role in building the PACT and Cybertrack assessment methodologies. Craig has served as a temporary faculty at Naval Surface Warfare Center Crane. He is a graduate of the IU Maurer School of Law, IU School of Education, and Washington University in St. Louis.

  • Description

    The goal of security log analysis is to more efficiently leverage log collection in order to identify threats and anomalies in your research organization. This half-day training will help you tie together various log and data sources to provide a more rounded, coherent picture of a potential security event. It will also help you understand log analysis as a life cycle (collection, event management, analysis, response) that becomes more efficient over time. Interactive demonstrations will cover both automated and manual analysis using multiple log sources, with examples from real security incidents. 45% of the session will be devoted to hands-on exercises in which participants analyze real log files to find security incidents. Knowledge of Unix commands such as grep, awk, and wc is ideal for this class but not required, as the algorithmic methods can be applied to other systems; a brief primer on these commands will be provided. We have expanded our exercises to include both command-line and Elastic Stack based analysis, and we also plan to show Windows device security logs this time. This will be an interactive session allowing Q&A, and it will feature interactive polls to enhance the audience’s learning experience.

    New This Year

    We added new slides on Windows security last year, and we plan to add an Artificial Intelligence (AI) aspect to the training this year.

    Why the topic is relevant to conference attendees

    Nearly all computers create log files of some kind, and for systems connected to the network this inevitably means those logs will contain indicators of attacks. This tutorial will help attendees find those indicators and deal with them, in order to determine whether an attack was successful as well as to prepare the logs as evidence.

    Targeted audience (researchers, students, developers, practitioners, etc.)

    Security professionals, developers, students, and system administrators. This tutorial will be especially useful to those who describe themselves as DevOps, since they may have less experience in system administration and log analysis but are responsible for maintaining their servers.

    Content level (% beginner, % intermediate, % advanced)

    25% beginner, 60% intermediate, 15% advanced

    Audience prerequisites

    Attendees should bring a laptop with a terminal emulation program and an SSH client installed. Attendees who understand common Unix commands such as grep, awk, and wc will benefit most from this tutorial; however, all attendees should be able to benefit from the algorithmic methods described. The trainer will explain the goal of each analysis and advise on how it can be applied with other tools.
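    As a taste of the command-line techniques taught here, the classic grep/awk/sort/uniq pipeline can surface brute-force activity in seconds. The log lines below are fabricated in the style of an OpenSSH auth log, purely for illustration:

```shell
# Made-up sample lines in the style of /var/log/auth.log (sshd entries).
cat > auth_sample.log <<'EOF'
Oct 20 10:01:01 host sshd[101]: Failed password for root from 198.51.100.9 port 4022 ssh2
Oct 20 10:01:03 host sshd[102]: Failed password for admin from 198.51.100.9 port 4023 ssh2
Oct 20 10:02:11 host sshd[103]: Accepted password for alice from 203.0.113.4 port 5100 ssh2
Oct 20 10:03:40 host sshd[104]: Failed password for root from 192.0.2.77 port 6001 ssh2
EOF

# Count failed logins per source IP, most active first: select the failure
# lines, pull the token after "from", then tally and rank.
grep 'Failed password' auth_sample.log \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' \
  | sort | uniq -c | sort -rn
```

    The same select-extract-tally pattern generalizes across log sources, and it maps directly onto the aggregation queries the Elastic Stack portion of the training performs in Kibana.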

    General description of tutorial content

    1. Overview of the Log Analysis Lifecycle

    2. Log analysis examples of real attacks, with logs from Zeek, Apache, Postfix, Duo, OpenSSH, and more.

    3. Interactive hands-on exercises in which attendees analyze a web server log and a Linux system log using both command line and ELK (Kibana)

    4. Takeaways:

      • Ideas on how to improve attendee’s own security logging & monitoring

      • Command examples that attendees can customize for use on their own logs

      • Methods that can generalize to explore and connect events across logs

      • Experience analyzing log files

      • Introduction to the Kibana Query Language and how to use it for deep-dive investigations

      • Information about community resources available to log analysis

      • Windows security

      • AI aspect to log analysis

    Presenter Bios

    Chief Security Analyst Mark Krenz is focused on cybersecurity operations, research and education. He has more than two decades of experience in system and network administration and has spent the last decade focused on cybersecurity. He serves as the CISO of Trusted CI and the CISO of OSG.

    Phuong Cao is a cybersecurity researcher at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. His work focuses on securing high-performance computing (HPC) cyberinfrastructure against accidental failures and intentional cyberattacks. His research is driven by network traffic analysis, cyberattack detection using probabilistic models, and formal verification of authentication protocols. As a National Science Foundation Trusted CI Fellow, he provides security expertise to research and education partners, including the FABRIC testbed. He has received awards for his research and mentorship, including a Best Paper Award, a Best Hackathon Award, and recognition as an Outstanding Mentor for Fiddler Innovation Fellowship recipients.

    Ishan Abhinit has been working as a Senior Security Analyst at Indiana University for six years and has been associated with Trusted CI since then, where he has worked on creating security policies, conducting tabletop exercises, and managing cybersecurity risks to the organization. He has a master’s degree in Cybersecurity from Northeastern University, Boston. He also serves as the deputy CISO for CACR.

  • Description

    As Operational Technology (OT) devices become more and more capable and connected, securing them has moved from the realm of "We wish decent security was possible" to "We wish someone would bother to use the security features."

    More and more vendors are providing OT systems with security options, but few system integrators enable them and few customers insist on them, so for many new installations, OT security is just as bad as it's ever been.

    With the tremendous growth in automation and interconnected devices, the number of attack surfaces being exposed is increasing geometrically while the familiar challenges of insecure equipment, minimal budgets, strict operational requirements, and lack of staff continue to plague the OT cybersecurity sector.

    This class will look at some of the security options which are now available in newer OT systems, some of the third-party hardware and software which can be added to help secure older systems, and what options we as users and customers have to improve the overall security of OT infrastructure.

    Presenter Bio

    Phil Salkie has been working in Industrial Controls and Automation for over 40 years, and has presented training sessions on Operational Technology security at CACR since 2017. He is a Managing Member of Jenariah Industrial Automation, serving a diverse array of clients including food packaging, medical and laboratory equipment manufacture, power generation, broadcasting, wastewater processing, and the chemical industry.

  • More information coming soon.

Thursday, October 23: Day 4

  • Description

    NIST's finalization of Post-Quantum Cryptography (PQC) standards in 2024 means we need to plan the transition to PQC-compliant algorithms. NIST has established that existing public key cryptography algorithms should be deprecated by 2030, and disallowed by 2035. The “harvest now, decrypt later” threat - where adversaries collect encrypted research data for future decryption - is already underway. PQC risk assessments will soon be mandatory for obtaining cybersecurity insurance, making this transition not just a technical necessity but a campus and business imperative.

    Scientific infrastructure faces particular challenges due to complex workflows, legacy systems, and diverse IT environments that often span multiple institutions. Unlike other environments, governance and compliance decisions about PQC transition risks and timelines aren't straightforward. Operators of scientific infrastructure will likely need to partner with campus and/or lab IT for some aspects of the transition. Doing a cryptographic asset discovery now can help R&E organizations prepare for PQC risk assessments and ultimately make informed decisions about upgrade priorities and timelines.

    This two-hour session will: (1) give an overview of what a PQC risk assessment looks like, (2) demonstrate PQSee, an NSF-funded open-source cryptographic asset discovery tool, and (3) conduct an open discussion to foster a community of interest around PQC adoption across the scientific community, including but not limited to sharing best practices, sharing data on PQC assessments and adoption, and coordinating transition timelines across projects and institutions.

    The format will be:

    1. PQC Risk Assessment Overview (30 minutes)

    2. PQSee Tool Demonstration (15 minutes)

    3. Community Discussion (75 minutes)

    Presenter Bios

    Anita Nikolich is a Research Scientist at the School of Information Sciences at UIUC and an Affiliate at NCSA who works at the intersection of networking and security. Among other projects, she is the Co-PI of FABRIC and PQSee.

    Phuong Cao is a cybersecurity researcher at NCSA. His work focuses on securing high-performance computing (HPC) against accidental failures and intentional cyberattacks. His research is driven by network traffic measurements, cyberattack detection using probabilistic models, and formal verification of authentication protocols. Phuong is a former Trusted CI Fellow and has received awards for his research and mentorship, including a Best Paper Award, a Best Hackathon Award, and recognition as an Outstanding Mentor for Fiddler Innovation Fellowship recipients.

    Santiago Núñez-Corrales serves as Quantum Lead Research Scientist at NCSA, as Faculty Affiliate at the Illinois Quantum Information Science and Technology Center and as Core Faculty at the Arms Control & Domestic and International Security Program at UIUC. His expertise includes research computing and quantum programming language design, building digital twins of superconducting quantum devices, HPC-QPU integration, post-quantum security, and dependable classical-quantum computer systems engineering. He is a Co-PI in PQSee.

  • More information coming soon.

  • More information coming soon.

  • Description

    Secure by Design would like to host a Birds of a Feather session to discuss operational technology, the challenges faced with securing OT systems, and share solutions/mitigation strategies.

    The SbD team is prepared to discuss and/or provide guidance on the Vendor Procurement Matrix (https://zenodo.org/records/15014703).

    Attendees are encouraged to bring their own hardware (demos or stories) to share.

    Presenter Bios

    Michael M. Simpson is the manager of the Security Services team at OmniSOC, part of REN-ISAC, hosted at Indiana University. Michael serves as the Chief Information Security Officer (CISO) for the United States Academic Research Fleet (ARF), as part of contracted services with OmniSOC. Michael's 20-plus-year career in IT has primarily been focused on cybersecurity in higher education and research. Michael is also a Senior Security Analyst with Trusted CI, the NSF Cybersecurity Center of Excellence.

    Mikeal Jones is a Security Analyst at OmniSOC, bringing over two decades of professional experience in higher education. His background encompasses IT leadership and operational strategy, security and policy, systems architecture and administration, and customer service and support. Mikeal is passionate about safeguarding the technology utilized in the pursuit of knowledge, advancement of science, and discovery of solutions to real-world problems.

  • Description

    The Security Catch and Release Automation Manager (SCRAM) is an open-source, web-based service that helps automate the management of security data, such as blocked IPs. This training will introduce SCRAM, explain its core features, and demonstrate how it can help security organizations secure their networks. During this training, attendees will learn how SCRAM can be integrated with existing tooling (such as Zeek), see a live demo, and discover ways to get involved with the project.

    Presenter Bio

    Chris Cummings is a Software Engineer with a background in network engineering, software engineering, and network programmability on the Security team at ESnet, where he develops tooling to support ESnet's security operations.

  • Description

    In late 2003 and early 2004, NSF Facilities faced a series of computer intrusions known as the "Stakkato Incident." NSF responded by forming an internal working group called FACSEC. Initially an ad hoc committee, it addressed intrusions affecting multiple facilities such as TeraGrid and NCAR. This presentation provides an overview of how FACSEC improved cybersecurity practices within NSF and external communities. The group contributed to Cooperative Agreements for FFRDCs and Large Facilities, shared "Best Practices" guidance, and facilitated dialogue among cybersecurity professionals through the initiation of Cybersecurity Summit meetings.

    Presenter Bio

    Clifford Jacobs (NSF employee, retired), George Strawn (NSF employee, retired)

  • More information coming soon.

  • Description

    Join us at the poster session and help yourself to a sweet treat to conclude the Summit.