SUPER EARLY BIRD TICKETS ARE NOW AVAILABLE
Half-Day Workshop (3 hours)
Decide First, Prompt Second
Nov 25 (Day 1)
9:00 AM - 12:00 PM
Workshop room 1

The majority of testers I've surveyed describe AI output as noisy, incorrect, or requiring heavy rework. In workshops, I hear the same diagnosis every time: better prompting, more context, a different model. These are answers to the wrong question.

The correction loop starts before the prompt. It starts the moment a team decides which task to hand to the AI without asking whether the task is well-defined enough for a machine to handle, or whether the team would recognise a good result if they saw one. When the use case is poorly chosen, the AI produces something plausible-looking and the human spends their time correcting output they cannot properly evaluate. That isn't humans and machines working together. That's humans cleaning up after a decision they didn't consciously make.

This workshop teaches a practical framework for making that decision deliberately. You'll work through your own real AI use cases using a structured diagnostic that draws on skills you already have from test analysis and risk assessment. This is not slide-heavy instruction: the session is pair work, structured critique, and hands-on practice against examples from your own work.

Who is this for?
QA engineers, test leads, SETs, and developers working with AI tools and finding the results uneven. Most useful for people who have been handed AI tools and told to use them, or who are advising their teams on where AI should and shouldn't be applied.
What you'll leave with
  • An audit of at least two of your current AI use cases against explicit diagnostic criteria, with a clear view of which are worth continuing and which should be redesigned or dropped
  • An explicit definition of "working" for one use case, specific enough to share with your team and act on the same week
  • Language to explain your diagnostic reasoning to a stakeholder who believes the tool is the solution, without making it a debate about the tool
  • A reusable, tool-agnostic framework that applies across team sizes and testing contexts