Testing United
SUPER EARLY BIRD TICKETS ARE NOW AVAILABLE
Full Day Workshop (6 hours)
Beyond Assert(true): Hands-on Testing for LLMs and AI Agents
Nov 25 (Day 1)
9:00 - 16:00
Workshop room 4

This workshop focuses on designing structured workflows and Quality Gates for AI-powered QA agents. Rather than building the agents themselves, participants focus on defining how these agents should operate within a controlled workflow, including how their outputs are validated at each step.

The main objective is to ensure the reliability and consistency of AI agents by defining minimum acceptable quality for outputs, creating deterministic validation steps (Quality Gates) across the workflow, and identifying when and where adjustments are needed. We approach this as a pipeline of tasks, where each step is validated to guarantee quality before moving forward.
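The pipeline-of-tasks idea can be sketched in a few lines of Python. This is a minimal illustration, not the workshop's actual code: the step names, the `Step` dataclass, and the gate functions are all hypothetical, but the shape is the one described above — each agent task's output must pass a deterministic Quality Gate before the next step runs.

```python
from dataclasses import dataclass
from typing import Callable

# All names here are hypothetical, for illustration only.
@dataclass
class Step:
    name: str
    run: Callable[[str], str]    # the agent task (probabilistic in practice)
    gate: Callable[[str], bool]  # deterministic Quality Gate on the output

class QualityGateError(RuntimeError):
    """Raised when a step's output fails its Quality Gate."""

def run_pipeline(steps: list[Step], payload: str) -> str:
    """Run each step in order, validating its output before moving forward."""
    for step in steps:
        payload = step.run(payload)
        if not step.gate(payload):
            raise QualityGateError(f"gate failed at step '{step.name}'")
    return payload

# Example: a fake "summarize" step whose gate enforces a length budget,
# followed by a "tag" step whose gate checks the expected format.
steps = [
    Step("summarize", lambda text: text[:50], lambda out: len(out) <= 50),
    Step("tag", lambda text: text.upper(), lambda out: out.isupper()),
]
result = run_pipeline(steps, "some long agent input " * 5)
```

The gates themselves stay deterministic (length budgets, schema checks, format assertions) even though the steps they validate are probabilistic, which is what makes failures reproducible and actionable.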

In addition, we introduce observability practices to monitor agent behavior, detect failure points, and enable continuous improvement.
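As a taste of what "observability" can mean here, the sketch below wraps each agent step in a context manager that records its duration and outcome using Python's standard `logging` module. The step name and the stand-in agent call are hypothetical; the workshop may use dedicated tooling instead.

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

@contextmanager
def observe(step_name: str):
    """Record duration and success/failure of one agent step."""
    start = time.perf_counter()
    try:
        yield
        log.info("step=%s status=ok duration=%.3fs",
                 step_name, time.perf_counter() - start)
    except Exception:
        log.error("step=%s status=failed duration=%.3fs",
                  step_name, time.perf_counter() - start)
        raise

# Hypothetical usage: the body stands in for a real agent call.
with observe("draft_test_cases"):
    output = "generated test cases"
```

Structured log lines like these make it possible to spot which step in the workflow is slow or failing, which is the starting point for continuous improvement.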

Who is this for?
Designed for QA Engineers, SDETs, and Developers working with, or transitioning into, Generative AI systems, LLM-powered applications, or AI agents. Especially valuable for practitioners who understand traditional testing but need practical techniques to validate probabilistic outputs, orchestrated workflows, and multi-agent systems. Participants should have basic familiarity with Python and APIs.
Main Objectives
  • Defining minimum acceptable quality for agent outputs
  • Creating deterministic validation steps (Quality Gates) across the workflow
  • Identifying when and where adjustments are needed
  • Introducing observability practices to monitor agent behavior