
2026 GET-AI SERIES: 2. Trust in AI Systems: Detecting, Defending, and Securing Intelligent Agents

May 26 @ 4:45 pm - 7:00 pm

We are excited to continue the Orange County Computer Society (OCCS) Global Emerging Technologies and Artificial Intelligence (GET-AI) Series—a monthly platform focused on transformative innovations in computer science and technology. Hosted by the IEEE Orange County Computer Society Chapter, this series brings together professionals, students, and tech enthusiasts to explore the cutting edge of what’s possible.
Following a highly engaging April session on Generative AI, where we explored LLMs, RAG, Agents, MCP, and hands-on AI application development, we are pleased to bring you our May Tech Talk on “Security in AI.”
—————————————————————
🔒 May Focus: Securing Generative AI
As AI systems evolve—from traditional models to LLM-powered agents interacting with enterprise systems and real-world tools—they introduce powerful capabilities along with new security challenges, including:
– Data leakage and prompt injection
– Model misuse and unauthorized access
– Risks in agent-driven automation
– Governance and compliance concerns
This session combines technical insights and practical demonstrations to explore how to build secure, trustworthy AI systems at scale.
—————————————————————
Session 1: Intelligent Attack Detection & Provenance (45 mins)
Modern enterprises generate massive, fragmented logs, making it difficult to derive meaningful security insights.
This session explores how AI enhances detection and forensic analysis:
– Graph-Based Intrusion Detection
Use unsupervised graph learning to uncover multi-step attacks in network activity
– LLM-Powered Security Intelligence
Convert low-level alerts into high-level, actionable insights for faster response
👉 Takeaway: Move from fragmented alerts to intelligent, end-to-end attack understanding
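To make the graph-based detection idea concrete, here is a minimal illustrative sketch (not the speakers' actual method): network flows are modeled as a graph of host-to-host edges, rare edges are scored by frequency, and multi-hop chains in which every hop is rare are flagged as possible multi-step attacks. All host names and the threshold are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical flow log of (source_host, destination_host) pairs.
# Hosts, traffic mix, and threshold are illustrative assumptions.
flows = (
    [("workstation-1", "file-server")] * 5 +
    [("workstation-2", "file-server")] * 5 +
    [("workstation-1", "mail-server")] * 4 +
    [("workstation-2", "mail-server")] * 4 +
    # A rare two-hop chain that could indicate lateral movement:
    [("workstation-3", "jump-host"), ("jump-host", "db-server")]
)

edge_counts = Counter(flows)
total = sum(edge_counts.values())

def rarity(edge):
    """Score an edge by rarity: seldom-seen links are more suspicious."""
    return 1.0 - edge_counts[edge] / total

# Build adjacency and walk two-hop paths whose every hop is rare.
adj = defaultdict(list)
for src, dst in edge_counts:
    adj[src].append(dst)

THRESHOLD = 0.9  # illustrative cutoff
suspicious_chains = []
for src, mid in edge_counts:
    for dst in adj.get(mid, []):
        if rarity((src, mid)) > THRESHOLD and rarity((mid, dst)) > THRESHOLD:
            suspicious_chains.append((src, mid, dst))

print(suspicious_chains)  # the workstation-3 → jump-host → db-server chain
```

Real systems replace the frequency score with learned graph embeddings, but the core move is the same: reason over connected chains of events rather than isolated alerts.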
—————————————————————
Session 2: Securing AI Agents — MCP Threats & Defense (45 mins)
As AI agents integrate with tools, APIs, and external systems, they introduce new attack surfaces.
This session includes a live demo of how agents can be compromised—and secured:
– Understanding MCP Architectures
How agents invoke tools and why trust boundaries blur
– Live Demo: Tool Poisoning & Agent Manipulation
See how adversarial inputs can:
– Manipulate agent behavior
– Trigger unintended actions
– Lead to data exfiltration
– Layered Security Framework
Practical defenses:
– Tool authentication
– Response sanitization
– Schema validation
– Context isolation
– Real-Time Evaluation
Prevent attacks without impacting performance
👉 Takeaway: Practical strategies to secure AI agents in enterprise environments
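Two of the defenses above can be sketched in a few lines. This is an illustrative toy, not the demo's implementation: the tool schema, tool name, and injection patterns are all assumptions made for the example.

```python
import re

# Hypothetical schema for a search tool: each argument has an expected type.
SEARCH_TOOL_SCHEMA = {"query": str, "max_results": int}

# Deliberately simple, illustrative patterns that often signal
# prompt-injection attempts smuggled into tool responses.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def validate_args(args: dict, schema: dict) -> dict:
    """Schema validation: reject unknown keys and wrong types
    before the agent is allowed to invoke the tool."""
    unknown = set(args) - set(schema)
    if unknown:
        raise ValueError(f"unexpected arguments: {unknown}")
    for key, expected in schema.items():
        if key not in args or not isinstance(args[key], expected):
            raise ValueError(f"argument {key!r} must be {expected.__name__}")
    return args

def sanitize_response(text: str) -> str:
    """Response sanitization: redact injection-like instructions
    before tool output re-enters the agent's context."""
    for pattern in INJECTION_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

A production system would use signed tool manifests and a proper JSON Schema validator, but the layering principle is visible even here: check inputs on the way in, scrub outputs on the way back.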
—————————————————————
About the Organizer
Pradyumna Kodgi
Principal Product Manager | Oracle Health & AI
IEEE Senior Member | Vice Chair, IEEE EMBS – Orange County
Member, IEEE AI Agentic Systems & AI Policy Committees
📍 California, USA
📧 pkodgi@ieee.org
🔗 linkedin.com/in/pkodgi
Co-sponsored by: Pradyumna Kodgi
Speaker(s): Zhou, Sreekanth
Agenda:
Securing AI: From Innovation to Resilience
AI is rapidly transforming how we build intelligent systems—but as capabilities grow, so do security risks. From LLM-powered agents to tool-integrated architectures, the question is no longer just what AI can do—but how do we secure it?
In this interactive session, we cut through the noise and break down AI security in practical, real-world terms—so you can understand not just the risks, but how to defend against them.
—————————————————————
🔍 What You’ll Explore
– How modern AI systems (LLMs, agents, MCP) introduce new attack surfaces
– The shift from traditional security to AI-driven threat models
– Key security concepts—explained clearly and practically
– Real-world attack scenarios and emerging threat patterns
—————————————————————
💡 What Makes This Session Different
This isn’t just theory—you’ll see AI systems attacked and defended in action.
Through a live, end-to-end demonstration, we’ll show how AI agents can be manipulated—and how layered security approaches can prevent these attacks in real time.
—————————————————————
🛠️ Practical Takeaways
You’ll walk away with actionable strategies and frameworks you can apply immediately, including:
– Securing AI agents interacting with external tools
– Validating and sanitizing untrusted inputs
– Designing trust boundaries in AI-driven architectures
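As one way to picture the trust-boundary idea, here is a minimal sketch under assumed names ("operator", "tool", the `<untrusted_data>` wrapper): every message carries its provenance, and untrusted tool output is fenced off as data so the model is less likely to follow instructions embedded in it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    content: str
    source: str  # "operator" (trusted) or "tool" (untrusted) — illustrative labels

def build_prompt(messages: list) -> str:
    """Context isolation: only operator-authored content enters the
    prompt as-is; everything else is wrapped and marked as data."""
    parts = []
    for msg in messages:
        if msg.source == "operator":
            parts.append(msg.content)
        else:
            parts.append(f"<untrusted_data>\n{msg.content}\n</untrusted_data>")
    return "\n".join(parts)
```

Delimiters alone do not stop a determined injection, which is why the session pairs this with sanitization, schema checks, and real-time evaluation as layered defenses.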
—————————————————————
🎯 Who Should Attend
– Security professionals and architects working with AI systems
– Engineers and developers building AI/LLM-based applications
– Product managers and leaders driving AI adoption
– Anyone interested in understanding AI risks and defenses
—————————————————————
✨ What You’ll Walk Away With
– A clear understanding of emerging AI security risks
– Practical knowledge of how to secure AI agents and systems
– Real-world insights into attack prevention and defense strategies
—————————————————————
As AI systems become more autonomous and integrated into enterprise workflows, security becomes foundational—not optional. This session will equip you with the mindset and tools to build AI systems you can trust.
5270 California Ave, Irvine, California, United States, 92617, Virtual: https://events.vtools.ieee.org/m/557806
