Bruno Legeard

Head of the AI Lab

Join my presentation on: AI Test Execution Agents vs Scripted Test Automation: A Practical Decision Framework


Who is Bruno Legeard?

Head of the AI Lab

I’ve been working at Testinium as a Senior Test Solutions Architect for over seven years, leading automation and performance testing projects and managing teams. I specialize in mobile, web, and API test automation, and I mentor my teammates.

What will Bruno Legeard be discussing?

AI Test Execution Agents vs Scripted Test Automation: A Practical Decision Framework

AI test execution agents can now execute natural-language test scenarios (manual or Gherkin) directly on a GUI, without automation code. Interacting visually, they perform functional tests like a human, generating structured evidence (screenshots, explanations, PASS/FAIL verdict).

This capability challenges the long‑standing assumption that functional GUI testing must be fully scripted to be automated. In many contexts, AI agents can replace scripted automation for functional test execution, significantly reducing the cost of test creation and the ongoing maintenance burden caused by fragile locators in frequently changing UIs. I propose a strategic shift: use AI agents as the default functional GUI test execution engine during the high-volatility development and qualification phases to enable fast feedback. Scripted automation, ideally AI-assisted, should be reserved for a smaller set of stable, high-value tests run frequently in the CI/CD pipeline.

Based on experiments across twenty projects, I will introduce a Decision Radar to help teams select the right test execution approach using three dimensions: execution cadence, UI evolution rate, and oracle strictness. I will also demonstrate how to measure and govern AI agent reliability, comparing the agent's PASS/FAIL verdicts against human reference verdicts to quantify true and false results and avoid false confidence.
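The Decision Radar itself will be presented in the talk; as a rough illustration only, the three dimensions could be combined in a simple heuristic like the sketch below (the thresholds, field names, and scoring are invented for this example and are not the speaker's actual framework):

```python
from dataclasses import dataclass

@dataclass
class TestProfile:
    """Hypothetical inputs for the three Decision Radar dimensions."""
    runs_per_week: int          # execution cadence
    ui_changes_per_month: int   # UI evolution rate
    strict_oracle: bool         # True if the verdict needs exact, rule-based checks

def recommend_execution(profile: TestProfile) -> str:
    """Toy heuristic: favour scripted automation for high-cadence,
    stable-UI, strict-oracle tests; otherwise favour an AI agent."""
    score = 0
    if profile.runs_per_week >= 20:        # frequent CI/CD gate runs
        score += 1
    if profile.ui_changes_per_month <= 1:  # stable UI, locators rarely break
        score += 1
    if profile.strict_oracle:              # deterministic assertions required
        score += 1
    return "scripted" if score >= 2 else "ai-agent"

# A volatile feature under active development: agent execution fits better.
print(recommend_execution(
    TestProfile(runs_per_week=3, ui_changes_per_month=8, strict_oracle=False)))
```

The point of such a scoring scheme is that no single dimension decides the question: a rarely run test on a fast-changing UI leans toward an agent even if its oracle is strict.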

Key Takeaways:

  1. How to decide “agent vs script” using three practical questions: how often the test runs, how fast the UI changes, and whether the oracle is visual or rule‑based.
  2. How to measure and control trust in AI test agents with accuracy, correct FAIL detection, and correct PASS detection—so agent execution can be used safely in production test processes.
  3. How to build a cost‑effective QA test execution strategy: replace a large portion of scripted functional tests with AI agent execution, and keep only high‑frequency CI/CD gate tests as scripted automation for speed and determinism.
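As a minimal sketch of the reliability measurement idea in takeaway 2, the three metrics (accuracy, correct FAIL detection, correct PASS detection) can be computed by comparing agent verdicts against human reference verdicts; the verdict lists below are invented example data:

```python
def agent_reliability(agent: list[str], human: list[str]) -> dict[str, float]:
    """Compare agent PASS/FAIL verdicts to human reference verdicts.
    'fail_detection' = share of human-judged FAILs the agent also flags;
    'pass_detection' = share of human-judged PASSes the agent confirms."""
    pairs = list(zip(agent, human))
    accuracy = sum(a == h for a, h in pairs) / len(pairs)
    on_fails = [a for a, h in pairs if h == "FAIL"]
    on_passes = [a for a, h in pairs if h == "PASS"]
    fail_detection = on_fails.count("FAIL") / len(on_fails) if on_fails else float("nan")
    pass_detection = on_passes.count("PASS") / len(on_passes) if on_passes else float("nan")
    return {"accuracy": accuracy,
            "fail_detection": fail_detection,
            "pass_detection": pass_detection}

# Invented example: 5 test runs judged by both the agent and a human.
agent_verdicts = ["PASS", "PASS", "FAIL", "PASS", "FAIL"]
human_verdicts = ["PASS", "FAIL", "FAIL", "PASS", "PASS"]
print(agent_reliability(agent_verdicts, human_verdicts))
```

Tracking FAIL detection separately from overall accuracy matters because a missed FAIL (the agent says PASS on a real defect) is exactly the false confidence the talk warns about.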

Sign up for my presentation
