
Leveraging Large Language Models in Automated Testing: Navigating the Promise and Perils

Track Talk | Friday | Track 1 | 1 PM – 1:40 PM

Description

As software systems grow increasingly complex, test automation must evolve beyond execution to cover test planning, analysis, design, and implementation. Traditional testing methods, while effective, often struggle with requirements intake, inefficient test case design and selection, script maintenance, and errors in result analysis. Could Large Language Models (LLMs), recently extended with AI Agents, help bridge these gaps by automating tasks prone to human error and improving the efficiency of software testing activities?

LLMs open new possibilities: transforming vague requirements into workable models, generating optimized test cases, and adapting test scripts to changing conditions. As AI advances, its capabilities and limitations force us to rethink established testing methodologies. Yet innovation brings challenges: questions of trust, correctness, interpretability, and ethics remain key obstacles to widespread adoption.

This talk examines the real-world impact of LLMs in software testing, cutting through the hype to critically assess their strengths and limitations. While LLMs offer intriguing possibilities, they also introduce uncertainties: How reliable are their outputs? Is the technology mature enough to be applied with acceptable risk? Can they truly replace human intuition in critical testing scenarios? We will explore several use cases for blending LLMs into the testing process, highlighting both successes and unexpected failures, and discuss the risks of over-reliance on LLMs.

Key Takeaways

1. Understand how Large Language Models (LLMs), enhanced with AI Agents, can support and optimize various aspects of software testing.
2. Identify the strengths and limitations of LLMs in real-world testing scenarios.
3. Explore the ethical and trust-related considerations involved in AI-driven testing.

About the Speaker

Predrag Skokovic

Managing Director at Quality House

Predrag Skokovic has over 25 years of experience in the IT field, specializing in software testing and quality assurance. He currently serves as the Managing Director at Quality House, which focuses on delivering exceptional software testing services. Previously, Predrag spent more than a decade as a Test Manager, leading international project teams, particularly in test automation for safety-critical industries such as the medical and petrochemical sectors. Along the way, he gained valuable experience in fintech, banking, e-commerce, and other domains. Predrag is dedicated to advancing software testing practices and processes. Throughout his career, he has provided consultancy services to assess and refine testing strategies and frameworks, and has mentored and coached teams to enhance their skills and achieve sustainable quality improvements. Predrag is a frequent speaker at international conferences and a guest lecturer, sharing insights from his extensive professional journey.

Nov 13 - 14

NH Milano Congress Centre
For general inquiries contact info@testingunited.com

Follow our social media platforms


