As part of the 2024 NSF Secure and Trustworthy Cyberspace (SaTC) PI meeting, breakout sessions will be held with the goal of identifying important new challenges and trends in securing cyberspace, new directions for research, and areas in which the SaTC community can contribute to an improved future society. Breakout sessions will take place on Day 1 (Wednesday, September 4, 2:45 p.m. - 4:15 p.m.) and Day 2 (Thursday, September 5, 2:30 p.m. - 3:55 p.m.) to discuss the topics, followed by a brief report from the co-leads of each topic back to the whole group at the end of Day 2. There are 14 breakout topics.
SaTC PI meeting participants are asked to choose one breakout group and participate in it during both breakout sessions.
Final Breakout Session Categories and Slides
- AI for Detecting and Fixing Vulnerabilities (Slides)
- AI to Prevent Phishing, Scams, and Online Hate (Slides)
- Applied Artificial Intelligence for Operational Cybersecurity (Slides)
- Cyberinfrastructure for Reproducible Experimentation (Slides)
- Preventing Disinformation and Deepfakes (Slides)
- Hardware Security for Emerging Computing Systems (Slides)
- Network and NextG Security (Slides)
- Post Quantum Security (Slides)
- Privacy, Policy, and Social Factors (Slides)
- Security Education and Workforce Development (Slides)
- Space Security (Slides)
- Software Supply Chain Security (Slides)
- Safeguarding Cyber-Physical Systems (Slides)
- Usable Privacy and Security (Slides)
Breakout Session Goals
For each of the 14 breakout topics, the detailed goals are to address the following questions and issues:
- What is the topic? Why is it important to society? to a secure and trustworthy cyberspace? in other ways?
- Is there an existing body of research and/or practice? What are some highlights or pointers to it?
- What are important challenges that remain? Have new challenges arisen based on new models, new knowledge, new technologies, new uses, etc.?
- Are there promising directions for addressing them? What kinds of expertise and collaboration are needed (disciplines and subdisciplines)?
- Any other topic-specific questions/issues not covered by the earlier questions.
Breakout Group Topic Descriptions
1. AI for Detecting and Fixing Vulnerabilities
Co-Leads: Carlos Rubio-Medrano (Texas A&M University - Corpus Christi), Gianluca Stringhini (Boston University)
Description: The growing capabilities of Generative Artificial Intelligence (Gen-AI) models have enabled novel techniques for automatically locating and assessing security vulnerabilities in software products. However, the research community still lacks an understanding of potential biases and shortcomings introduced by using these technologies for vulnerability detection, as well as their ability to handle non-trivial, complex vulnerabilities, e.g., logic-based, multi-module, and corner cases. This session will discuss the current state-of-the-art Gen-AI techniques for vulnerability detection, emerging research trends, and opportunities for collaborations and future work.
2. AI to Prevent Phishing, Scams, and Online Hate
Co-Leads: Hongxin Hu (University at Buffalo, SUNY), Shirin Nilizadeh (University of Texas at Arlington)
Description: In the era of Generative AI, the landscape of artificial intelligence has undergone a dramatic transformation, presenting both unprecedented opportunities and new challenges in combating phishing, scams, and online hate. This breakout group will explore how emerging Generative AI technologies can be utilized to develop advanced detection systems and countermeasures against these malicious activities. Additionally, we will address the dual-edged nature of Generative AI, particularly its potential to inadvertently facilitate more sophisticated phishing schemes, scams, and hate speech. Our discussion will focus on understanding the capabilities and limitations of these AI models, sharing cutting-edge research, and fostering collaborations to develop robust solutions that enhance online safety. This session is ideal for PIs who are keen to explore the frontiers of AI applications in cybersecurity and contribute to innovative efforts that ensure safer digital interactions.
3. Applied Artificial Intelligence for Operational Cybersecurity
Co-Leads: Sagar Samtani (Indiana University Bloomington), Anita Nikolich (University of Illinois Urbana-Champaign)
Description: The last few years have seen an unprecedented use of Artificial Intelligence (AI) methods, including deep learning, machine learning, large language models, and others, for various cybersecurity applications. Increasingly, AI is being employed in various operational cybersecurity contexts, especially in Security Operations Centers (SOCs), which are the heart of cybersecurity efforts in many firms. However, significant research efforts are needed to effectively adapt and design viable AI capabilities for different types of operational cybersecurity applications. Therefore, in this breakout, we propose delving into various areas where AI can play a significant role in operational cybersecurity, including alert prioritization, vulnerability discovery and detection, optimal allocation of human talent for SOC tasks, human-in-the-loop AI-enabled operational intelligence systems, and others. Time will be reserved specifically to discuss viable strategies for overcoming significant and common impediments in AI for operational cybersecurity research, including dataset access/sharing, infrastructure capabilities, and artifact development and reuse.
4. Cyberinfrastructure for Reproducible Experimentation
Co-Leads: David Balenson (University of Southern California Information Sciences Institute), Patrick Traynor (University of Florida)
Description: Enabling reproducible experimentation on shared hardware that is easily and remotely accessible by all researchers has the potential to democratize security and privacy research and especially benefit underserved researchers and students, enabling them to compete on an equal standing with those from top-tier institutions. This breakout session will explore NSF-funded research infrastructure such as Chameleon, CloudLab, FABRIC, POWDER, and SPHERE (formerly DETER) and the hardware, software, and other capabilities needed to support reproducible experimental research in cybersecurity and privacy. The session will explore questions such as: What is needed for experimentation in different fields of cybersecurity and privacy research? How do researchers experiment today (e.g., in a lab, in a testbed, on the real Internet)? What would a testbed have to offer for researchers to experiment in it? And how can we as a community improve the quality and increase the reuse of cybersecurity artifacts (code, datasets, experiment scenarios, etc.) in published papers?
5. Preventing Disinformation and Deepfakes
Co-Leads: Matthew Wright (Rochester Institute of Technology), Vandana Janeja (University of Maryland, Baltimore County)
Description: Held during a year when half the global population, including the US, will vote in elections, this session will delve into the critical challenge of countering the spread of false information facilitated by advanced generative AI technologies. We will explore the current landscape of deepfake generation, highlighting the sophistication of modern text, audio, and video manipulation tools. Participants will discuss the lag in detection technologies and the pressing need for robust and reliable models to ensure that truthful narratives outshine false and misleading ones. One example direction is the intersection of audio deepfake detection and linguistics, looking into innovative approaches to enhance detection capabilities. We will also address the needs of journalists, forensic analysts, and other potential users of detection tools. In the end, we seek to foster a deeper understanding of the multifaceted problem of deepfakes and brainstorm strategies to mitigate the risks of disinformation posed by generative AI.
6. Hardware Security for Emerging Computing Systems
Co-Leads: Yunsi Fei (Northeastern University), Jakub Szefer (Yale University)
Description: New and emerging computing systems introduce novel computational capabilities, but at the same time they come with potential new vulnerabilities or attack surfaces. To fully realize the benefits of emerging computing systems, they must be secured against attacks that can undermine users' privacy and security, and the data and intellectual property of these systems must be protected. For each novel computing system, there is a need to develop a scientific and engineering understanding of the security threats and then develop defenses. This breakout session will focus on any and all aspects of hardware security pertaining to emerging computing systems. The session will encompass materials, circuits, and systems. Discussion will focus on topics such as emerging materials, cryogenic CMOS, DNA storage, analog/RF circuits, analog AI hardware accelerators, digital AI hardware accelerators, quantum computing systems, PQC hardware, FHE hardware, and other emerging systems that the audience may bring up during the discussion. The session will first present the state of the art in hardware security for some of these systems, followed by a broad discussion among the audience. The goals of the session are to gather collective thoughts on the challenges in hardware security of emerging systems and to identify gaps in research and new types of emerging systems not yet considered.
7. Network and NextG Security
Co-Leads: Wenjing Lou (Virginia Tech), Syed Rafiul Hussain (Penn State University)
Description: As next-generation networks such as 5G and beyond evolve, they promise transformative advancements in bandwidth, connectivity, and user experience. These developments bring fundamental changes in hardware, software, and network architecture, shaping the future of network infrastructure. As networks become increasingly complex and integral to daily operations, they face a growing array of threats, from data breaches to sophisticated cyber-attacks. This breakout session will explore the pressing research needs for securing these advanced networks, focusing on challenges in technologies such as 5G and 6G, Open RAN, and edge computing. The discussion aims to address the security implications of, and solutions for, emerging network systems.
8. Post Quantum Security
Co-Leads: Kirsten Eisentraeger (Penn State University), Dakshita Khurana (University of Illinois Urbana-Champaign)
Description: In recent years, there has been a substantial amount of research on quantum computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. Historically, it has taken almost two decades to deploy our modern public key cryptography infrastructure. Therefore, regardless of when exactly large-scale quantum computers can be built, we must begin now to prepare our information security systems to be able to resist quantum attacks. At the same time, quantum mechanical principles can be harnessed to obtain security properties that are classically impossible to achieve. It is important to understand what these properties are, and whether they can be obtained from near-term quantum devices. This discussion will focus on post-quantum and quantum cryptography including quantum attacks, post-quantum assumptions, upgrading systems to post-quantum security, and new cryptographic capabilities enabled by quantum devices.
9. Privacy, Policy, and Social Factors
Co-Leads: Sarah Rajtmajer (Penn State University), Shomir Wilson (Penn State University)
Description: Recent studies have shown that the overwhelming burden of privacy self-management is not felt equally by all. For example, low-socioeconomic-status (low-SES) and minority populations report feeling substantially more concerned about their digital privacy, and these concerns occur within a broader context of limited resources and a historical context of targeted surveillance. This session will bring together researchers interested in the intersections of privacy and social and demographic factors. The session will explore the intersectional nature of privacy vulnerabilities and envision equitable privacy-enhancing technologies.
10. Security Education and Workforce Development
Co-Leads: Colleen Lewis (University of Illinois Urbana-Champaign), Bhavani Thuraisingham (University of Texas at Dallas)
Description: Over the years, the SaTC EDU program has funded numerous cutting-edge cybersecurity and privacy education projects. To develop directions for future efforts, the program organized a workshop in November 2023 focused on emerging directions and on integrating education research and evaluation techniques into projects. In particular, the workshop focused on critical areas such as AI/ML (Artificial Intelligence/Machine Learning), quantum computing, and space systems as they relate to cybersecurity and privacy. Specifically, the objectives of the workshop were the following: (i) involve the larger cybersecurity and privacy research and education community in the SaTC EDU program; (ii) ensure that the cybersecurity and privacy education community has an understanding of education research and evaluation techniques to ensure the success of the projects; and (iii) explore the inclusion of projects in critical areas such as AI/ML, quantum computing, and space systems as they relate to cybersecurity and privacy. We plan to continue and expand on the discussions at the workshop during this breakout session at the SaTC PI meeting. In particular, we will examine emerging directions in addition to those discussed at the workshop and explore how education research and evaluation techniques could be integrated into projects. The discussions will also include specific examples of how cybersecurity and privacy educators could collaborate with education and evaluation researchers to strengthen proposals and projects.
11. Space Security
Co-Leads: Bruce DeBruhl (Cal Poly San Luis Obispo), Tao Shu (Auburn University)
Description: In recent years we have witnessed growing interest in utilizing the space above the Earth to conduct communications, elevating communication from a 2-D plane to a 3-D space. The most prominent example is the emerging Low Earth Orbit (LEO) satellite networks, which are capable of providing seamless, full coverage for communications around the globe and have been deployed at massive scale in recent years. As critical societal infrastructure, LEO satellite networks carry high security stakes, but the unique features of LEO satellites, e.g., global distribution and mobility, low altitude, high speed, dense connectivity, and geo-political sensitivity, render these systems susceptible to new threats and attacks in both the physical and cyber spaces. The research community still lacks a comprehensive understanding of the vulnerabilities of these systems and the countermeasures that can protect them. This session will discuss everything related to the security of space communications, including but not limited to LEO satellite networks, UAV or drone networks, and stratospheric communication platforms, from not only the technical and engineering perspective but also the geo-political and policy-making standpoint.
12. Software Supply Chain Security
Co-Leads: Laurie Williams (North Carolina State University), Justin Cappos (New York University)
Description: Software supply chain security is a relatively new area of concern for academia and industry. Come and learn about this vibrant and increasingly important area. The session will discuss state-of-the-art research, ways to improve universities' educational offerings on the topic, and how to engage effectively with the key players in this space.
13. Safeguarding Cyber-Physical Systems
Co-Leads: Xiali (Sharon) Hei (University of Louisiana at Lafayette), Yuan Tian (University of California, Los Angeles), Alfred Chen (University of California, Irvine)
Description: Investment in cyber-physical systems (CPSs), including self-driving vehicles, robotic devices, drones, smart grids, medical devices, etc., continues to increase, and the decision systems of robots and self-driving vehicles may leverage the powerful understanding capabilities of large language models. The coexistence of human beings and these autonomous moving systems may introduce new security, safety, and privacy leakage issues. CPSs also face security challenges from physical-layer threats such as sensor attacks and physical-world attacks, against which many traditional cyber-defense methods have become fundamentally ineffective. With smartphones running various applications and using diverse interfaces to control these systems, the enlarged control chain will introduce new attack surfaces to CPSs. The emerging extensive use of AI, including LLMs, in safety-critical CPSs such as autonomous driving systems further exacerbates these security challenges due to the lack of interpretability and formal specifications of the CPS AI components. Such substantial research gaps on the CPS, AI, and security fronts call for fundamentally new security design strategies, theories, and principles.
14. Usable Privacy and Security
Co-Leads: Michelle Mazurek (University of Maryland), Lujo Bauer (Carnegie Mellon University)
Description: “Usable security” as a subdiscipline is now 25 years old. In this breakout, we’ll take stock of where we are as a research community and where we should be headed. We expect to discuss questions like (but definitely not limited to):
- What research topics and questions need more attention in our community? Are there topics and questions that we should start moving on from?
- Now that we have 25 years of history, are there any topics or questions from the early years that we should revisit in a more current context? Relatedly, when is it reasonable to repeat a study on a specific population if a similar study has previously been done on a more general population?
- What kinds of new tools and methods will we need to tackle new research questions? Are there any methodological errors or problems that we as a community are particularly prone to, and how can they be corrected?
- Are we doing as much as we should to benefit participants? How can we improve in this area?