I want to talk about the end of writing test scripts from scratch. In 2025 I started using Claude Code to write Playwright test automation. Not as a helper that suggests snippets. As a collaborator that reads my application, understands what I want to test, and writes the tests. That changed how I think about this job.
The hard part of test automation was never the code. It was the decisions around the code. Which user flows matter most? Where are the real risks? What does "done" even mean for this feature? Those are tester questions. Not coding questions. Yet we've spent a decade treating test automation as if it were primarily a coding problem.
My vision: within two years, writing test scripts from scratch will feel as outdated as manual regression spreadsheets feel today. AI coding agents won't replace testers. They'll expose that what we actually do has never been about writing assertions.
But here's the part nobody talks about. I've seen Claude Code generate confidently wrong tests. Tests that pass but verify nothing meaningful. Beautiful automation that misses the point entirely. The gap between "a test exists" and "we're actually testing something" is where testers become indispensable.
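To make that gap concrete, here is a hypothetical sketch in plain Python rather than a real Playwright suite. The `apply_discount` function is invented, standing in for the application under test; the pattern is the point. The vacuous test passes no matter what the code does, while the meaningful test asserts the outcome a user actually cares about and catches the bug.

```python
# Hypothetical function standing in for the app under test.
def apply_discount(total: float, code: str) -> float:
    # Bug: the discount code is silently ignored.
    return total

def vacuous_test():
    # Looks like a test, runs green, verifies nothing about the discount.
    result = apply_discount(100.0, "SAVE10")
    assert result is not None          # always true
    assert isinstance(result, float)   # always true

def meaningful_test():
    # Asserts the behaviour the user cares about: 10% off.
    result = apply_discount(100.0, "SAVE10")
    assert result == 90.0

vacuous_test()  # passes despite the bug
try:
    meaningful_test()
    bug_caught = False
except AssertionError:
    bug_caught = True
print("bug caught by meaningful test:", bug_caught)
```

A real Playwright version of the vacuous test might assert only that a confirmation element is visible, which is true whether or not the discount was applied. Spotting that difference is exactly the tester judgment the tooling can't supply.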
The talk includes a live demo of Claude Code writing Playwright tests from plain-language intent. I'll show the magic and, deliberately, the failures. Then we'll have the bigger conversation about what this means for testers, teams, and careers.
Key takeaways:
- How to think differently about what test automation skills actually are (not syntax or frameworks, but the ability to ask the right questions).
- How to spot when AI-generated tests look correct but test nothing.
- How to judge what's worth automating in the first place.
Who should attend:
- Testers and test automation engineers who currently write test scripts as a core part of their job.
- QA leads who are trying to figure out how AI changes their team's skill requirements.
- Anyone who feels a mix of excitement and anxiety about AI coding agents entering the testing space.