Testing United
SUPER EARLY BIRD TICKETS ARE NOW AVAILABLE
Track Talk
When Green Dashboards Silence Human Judgment
Nov 26 (Day 2)
13:00 - 13:40
Track 1

AI agents are increasingly embedded in our testing ecosystems: generating tests, triaging defects, evaluating risk, and even making release recommendations. At the same time, something subtle is happening. The more capable our systems become, the more responsibility we quietly transfer to them.

In many organizations, green dashboards and high metrics are beginning to replace critical questioning. If accuracy is high and the gates are green, we move forward. We trust the numbers. We assume we are in control.

But autonomy changes the equation. AI agents do not simply execute instructions – they reason, interpret, act, and drift within dynamic boundaries. Metrics alone cannot capture intent, context, or long-term impact. And when humans gradually stop challenging machine outputs, we don't gain intelligence – we lose it.

This talk explores a key question for our industry: what does real collaboration actually look like – what we might call United Intelligence – when humans and AI agents share responsibility? Participants will leave reflecting not just on better metrics, but on better judgment – and on the deliberate role humans must play in increasingly autonomous systems.

Key Topics Explored
  • How cognitive offloading affects tester judgment.
  • How over-trusting green dashboards weakens critical thinking.
  • How teams can design explicit human–agent interaction boundaries that preserve accountability.
Who is this for?
Test architects, quality leaders, senior testers, AI/ML engineers, and engineering managers responsible for introducing AI agents into testing workflows and decision processes.