How Generative AI Is Reshaping Fraud, Security, and Abuse Detection with Bobbie Chen


About this title

In this episode of Open Tech Talks, host Kashif Manzoor sits down with Bobbie Chen, a product manager working at the intersection of fraud prevention, cybersecurity, and AI agent identification in Silicon Valley.

As generative AI and large language models rapidly move from experimentation into real products, organizations are discovering a new reality. The same tools that make building software easier also make abuse, fraud, and attacks easier. Vibe coding, AI agents, and LLM-powered workflows are accelerating innovation, but they are also lowering the barrier for bad actors.

This conversation breaks down why security, identity, and access control matter more than ever in the age of LLMs, especially as AI systems begin to touch authentication, customer data, financial workflows, and enterprise knowledge. Bobbie shares practical insights from real-world security and fraud scenarios, explaining why many AI risks are not entirely new but become more dangerous when speed, automation, and scale increase.

The episode explores how organizations can adopt AI responsibly without bypassing decades of hard-earned security lessons. From bot abuse and credit farming to identity-aware AI systems and OAuth-based access control, this discussion helps listeners understand where AI changes the threat model and where it doesn't.

This is not a hype-driven episode. It is a grounded, experience-backed conversation for professionals who want to build, deploy, and scale AI systems without creating invisible security debt.

Episode # 177

Today's Guest: Bobbie Chen, Product Manager, Fraud and Security at Stytch

Bobbie is a product manager at Stytch, where he helps organizations like Calendly and Replit fight against fraud and abuse.

  • LinkedIn: Bobbie Chen

What Listeners Will Learn:

  • How LLMs and AI agents change the economics of fraud and abuse, making attacks cheaper, faster, and more customized
  • Why vibe coding is powerful for experimentation, but risky when used without security review in production systems
  • The difference between exploring AI ideas and asking users to trust you with sensitive data
  • Common security blind spots in AI-powered apps, especially around authentication, parsing, and edge cases
  • Why organizations should not give AI systems blanket access to enterprise data
  • How identity-aware AI systems using OAuth and scoped access reduce risk in RAG and enterprise search
  • Why many AI security failures are process and organizational problems, not tooling problems
  • How fraud patterns like AI credit farming and automated abuse are emerging at scale
  • Why security teams must shift from being gatekeepers to continuous partners in AI adoption
  • How professionals in security, product, and engineering can stay current as AI threats evolve
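
To make the scoped-access idea from the list above concrete, here is a minimal, hypothetical sketch of identity-aware retrieval for a RAG pipeline: documents are filtered by the OAuth scopes carried on the user's token before anything reaches the model's context window. All names here (`Document`, `filter_by_scope`, the example scopes) are illustrative assumptions, not an API from any product discussed in the episode.

```python
# Hypothetical sketch: scope-aware document filtering for a RAG pipeline.
# The filter runs *before* retrieved documents reach the LLM, so the model
# never holds data the end user could not access directly.
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    required_scope: str  # OAuth scope needed to read this document


def filter_by_scope(docs: list[Document], user_scopes: set[str]) -> list[Document]:
    """Drop any retrieved document the caller's token is not scoped to see."""
    return [d for d in docs if d.required_scope in user_scopes]


# Example: a user whose OAuth token carries only the "hr:read" scope.
corpus = [
    Document("Q3 revenue forecast", required_scope="finance:read"),
    Document("Employee onboarding guide", required_scope="hr:read"),
]
visible = filter_by_scope(corpus, user_scopes={"hr:read"})
print([d.text for d in visible])  # only the HR document survives
```

The design choice matters: enforcing access at retrieval time, rather than trusting the model or a prompt to withhold restricted data, is what keeps an AI assistant from becoming a bypass around existing permission boundaries.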

Resources:

  • Bobbie Chen
  • The two blogs mentioned in the episode:
    • Simon Willison: https://simonwillison.net
    • Drew Breunig: https://www.dbreunig.com