
2023.03.01

⏱ 8 min read

Tester in the Age of AI: From Test Execution to Quality Design

Nguyễn Văn Minh — CTO / AI Lead

Vareal Vietnam

There is a question many testers have probably heard lately, even if they have not said it out loud yet.

AI can write test cases.
AI can generate scripts.
AI can read logs.
It can even help produce test data, suggest edge cases, and draft Playwright scripts that look surprisingly usable.

So where does that leave testers?

It sounds uncomfortable, but it is not an unreasonable question. The real issue is not that testers are running out of work. The real issue is that the old way of working as a tester is losing value.

If someone’s contribution is mostly receiving requirements, writing test cases in a fixed format, executing tests, logging bugs, and repeating that cycle sprint after sprint, then yes, AI is moving quickly into that territory. And that happens to be the kind of work machines are often good at: fast, consistent, tireless, and comfortable with repetition.

But if we look at testing at a higher level, the story changes.

AI is getting good at execution, not at understanding

That distinction matters.

AI can help draft test scenarios from requirements.
It can generate automation scripts.
It can suggest happy paths and even propose some reasonable exception cases.
Used properly, it can help create traceability matrices, review coverage, identify gaps, and point out redundant cases.

All of that is real. And it will keep improving.

But every one of those benefits depends on something else being true: the input must be clear enough, the user must know what they are asking for, and the output must still be reviewed with real judgment.

AI does not truly understand how your system behaves in a real business context.
It does not automatically know which business flow is sensitive, even if the requirement sounds simple.
And it does not take responsibility when a logical gap slips through and turns into a release problem.

It is very good at acceleration. It is not automatically good at turning quality into something trustworthy if people are still testing on autopilot.

The value of a tester is no longer mainly about writing test cases faster

This is the uncomfortable part.

For a long time, many teams have unintentionally reduced the tester role to execution:

  • read the document
  • write the test cases
  • run the tests
  • log the bugs
  • retest
  • close the ticket

Those activities still matter. But if that is where the role is defined, AI will put pressure on it very quickly. This kind of work is structured, repeatable, and easy to speed up with tools.

What makes a tester strong is not the ability to write more test cases in one afternoon.
What matters more is:

  • whether they understand the business flow
  • whether they detect ambiguity in requirements
  • whether they see missing scenarios
  • whether they know what needs deeper testing and what needs only reasonable coverage
  • whether they protect quality at the system level, not just at checklist level

In other words, a strong tester is not just someone who “tests software.”
They help the team design quality, control quality, and expose risk early.

Testers in the AI era need to move closer to business analysis

This is one of the most valuable shifts a tester can make.

In practice, many strong testers have already been doing this for years, even if they did not call it that. They do not wait until code is finished. They read requirements early, challenge business logic, ask about unclear conditions, bring up exception paths, and help improve quality before implementation begins.

In the AI era, that becomes even more important.

Because once AI can help generate artifacts quickly, the human role becomes more valuable in the areas that require:

  • clarifying requirements
  • breaking down business rules
  • detecting missing logic
  • challenging assumptions
  • identifying boundary conditions
  • connecting business intent with quality expectations

A tester who understands the business will not just ask:

  • “Does the system work?”

They will also ask:

  • “Work for whom?”
  • “Under what conditions?”
  • “What happens when the user does not follow the ideal path?”
  • “What happens when two business rules collide?”
  • “If the system technically passes but violates business expectations, is that really a pass?”

That is where quality starts to mean something real.

AI can make testers much stronger, if used in the right places

The most useful way to look at AI is this: AI is not replacing testers. It is helping testers move away from mechanical work and spend more time on work that actually matters.

1. Using AI to draft test scenarios

From requirements, user stories, business flows, or meeting transcripts, AI can quickly create a first draft of scenarios. That is especially useful when moving from raw documents to something reviewable.

But the value is not simply that AI writes fast.
The value is that testers can use the draft to:

  • spot missing flows
  • add exception paths
  • prioritize scenarios by risk
  • identify logical gaps earlier in the cycle
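
To make the triage step concrete, here is a minimal sketch of risk-based prioritization over an AI-drafted scenario list. The `Scenario` shape and the 1–3 scoring scale are illustrative assumptions, not a standard; the point is that the human supplies the impact and likelihood judgments, and the sorting is the mechanical part.

```typescript
// Hypothetical shape of an AI-drafted scenario after a quick human triage.
interface Scenario {
  id: string;
  title: string;
  businessImpact: 1 | 2 | 3;    // 3 = money, data, or compliance at stake
  failureLikelihood: 1 | 2 | 3; // 3 = complex or historically buggy area
}

// Sort a draft so review time goes to the riskiest flows first.
function prioritizeByRisk(scenarios: Scenario[]): Scenario[] {
  return [...scenarios].sort(
    (a, b) =>
      b.businessImpact * b.failureLikelihood -
      a.businessImpact * a.failureLikelihood
  );
}

const drafts: Scenario[] = [
  { id: "S1", title: "Happy-path checkout", businessImpact: 3, failureLikelihood: 1 },
  { id: "S2", title: "Refund after partial shipment", businessImpact: 3, failureLikelihood: 3 },
  { id: "S3", title: "Update profile avatar", businessImpact: 1, failureLikelihood: 1 },
];

console.log(prioritizeByRisk(drafts).map(s => s.id)); // riskiest first: S2, S1, S3
```

The scores themselves are exactly the kind of judgment AI cannot make for you: whether a refund flow is riskier than a checkout flow depends on the business, not the document.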

2. Using AI for test matrix and coverage review

This is an area where testers can gain a lot of leverage.

From requirements and scenario lists, AI can help:

  • organize coverage by feature
  • compare conditions and expected results
  • draft traceability matrices
  • identify overlap
  • highlight areas that look under-tested

Instead of spending most of the time filling out spreadsheets, testers can focus on the more important question: is this coverage strong enough to protect the release?
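
As a sketch of what a traceability matrix looks like once the spreadsheet work is stripped away, the following builds a requirement-to-scenario map and surfaces uncovered requirements. The `covers` tags are an assumption about how scenarios get linked to requirements (by a tester, or by an AI draft the tester has reviewed).

```typescript
// Hypothetical inputs: requirement IDs plus scenarios tagged with the
// requirements they exercise.
type TraceMatrix = Map<string, string[]>; // requirement id -> scenario ids

function buildTraceMatrix(
  requirements: string[],
  scenarios: { id: string; covers: string[] }[]
): TraceMatrix {
  const matrix: TraceMatrix = new Map();
  for (const r of requirements) matrix.set(r, []);
  for (const s of scenarios) {
    for (const req of s.covers) {
      matrix.get(req)?.push(s.id); // silently skip tags for unknown requirements
    }
  }
  return matrix;
}

// Requirements no scenario touches: the gaps worth a human's attention first.
function uncovered(matrix: TraceMatrix): string[] {
  return [...matrix].filter(([, ids]) => ids.length === 0).map(([req]) => req);
}

const matrix = buildTraceMatrix(
  ["REQ-1", "REQ-2", "REQ-3"],
  [
    { id: "S1", covers: ["REQ-1"] },
    { id: "S2", covers: ["REQ-1", "REQ-2"] },
  ]
);
console.log(uncovered(matrix)); // REQ-3 has no scenario yet
```

A gap list like this is a starting point for judgment, not a verdict: REQ-3 may genuinely need scenarios, or it may be covered implicitly, and only someone who understands the flow can tell.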

3. Using AI to review test artifacts

This is practical and underrated.

A tester can use AI as a second layer of review to:

  • check consistency between requirements and test cases
  • detect business rules not yet reflected in scenarios
  • identify vague expected results
  • see whether the suite is too biased toward happy paths
  • suggest additional edge cases

AI does not replace real review. But it is a fast and patient reviewer that does not get tired of reading long documents again and again.
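
A few of these checks do not even need a model. Here is a minimal rule-based sketch of the same idea: flagging vague expected results and measuring happy-path bias. The `TestCase` shape and the phrase list are illustrative assumptions; a real suite would have richer fields and a longer vocabulary.

```typescript
// Hypothetical minimal shape of a test case pulled from a suite.
interface TestCase {
  id: string;
  expected: string;     // expected-result text
  isHappyPath: boolean;
}

// Phrases that usually signal an unverifiable expected result.
const VAGUE = /works (correctly|as expected)|should be (ok|fine)|no error/i;

function reviewSuite(cases: TestCase[]): {
  vagueExpected: string[];
  happyPathRatio: number;
} {
  return {
    vagueExpected: cases.filter(c => VAGUE.test(c.expected)).map(c => c.id),
    happyPathRatio:
      cases.filter(c => c.isHappyPath).length / Math.max(cases.length, 1),
  };
}

const report = reviewSuite([
  { id: "TC-1", expected: "Order total shows 120.00 USD", isHappyPath: true },
  { id: "TC-2", expected: "It works as expected", isHappyPath: true },
  { id: "TC-3", expected: "Error 409 shown on double submit", isHappyPath: false },
]);
// report.vagueExpected flags TC-2; happyPathRatio is 2 of 3 cases
```

The value of pairing a dumb linter like this with AI review is that the cheap checks catch the mechanical problems, leaving the model and the human to argue about the interesting ones.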

4. Using AI to accelerate automation, such as Playwright E2E

This is one of the clearest use cases.

From a reasonably well-defined scenario set, AI can help:

  • generate Playwright test skeletons
  • draft locator strategies
  • create assertions
  • suggest test data variations
  • refactor repetitive scripts
  • organize suites more efficiently

Used properly, this can save a huge amount of time in setup and repetitive work.

But one thing needs to be said clearly: AI helping to write scripts does not mean AI understands whether those tests are worth automating in the first place. A meaningless test suite can still be generated very quickly. And a long, flaky, high-maintenance E2E suite is still bad whether a person or a model wrote it.

So the faster AI helps you generate automation, the more important it becomes to ask:

  • which tests deserve automation
  • which tests should remain exploratory or manual
  • which tests belong at unit or integration level instead of E2E
  • which assertions actually protect something important
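
One way to make that judgment explicit is a small scoring heuristic a team can argue about. Everything below is an assumption for illustration: the attributes, the weights, and the threshold are things each team would calibrate, not a standard.

```typescript
// Hypothetical attributes a tester might record per automation candidate.
interface Candidate {
  id: string;
  runsPerMonth: number;          // how often the flow is exercised
  businessCritical: boolean;
  stableUi: boolean;             // flaky selectors make E2E expensive
  coveredAtLowerLevel?: boolean; // already protected by unit/integration tests
}

// Return candidates worth automating at E2E level, best first.
function worthAutomating(candidates: Candidate[]): string[] {
  return candidates
    .map(c => ({
      id: c.id,
      score:
        (c.businessCritical ? 3 : 0) +
        (c.stableUi ? 2 : 0) +
        Math.min(c.runsPerMonth / 10, 3) -          // frequency, capped
        (c.coveredAtLowerLevel ? 4 : 0),            // push it down the pyramid
    }))
    .filter(c => c.score >= 4)
    .sort((a, b) => b.score - a.score)
    .map(c => c.id);
}

const picks = worthAutomating([
  { id: "checkout", runsPerMonth: 30, businessCritical: true, stableUi: true },
  { id: "field-validation", runsPerMonth: 30, businessCritical: false, stableUi: true, coveredAtLowerLevel: true },
  { id: "beta-widget", runsPerMonth: 2, businessCritical: false, stableUi: false },
]);
console.log(picks); // only "checkout" clears the bar
```

The heuristic is deliberately crude. Its purpose is not precision but forcing the conversation the section above describes: field validation drops out because it belongs at a lower level, and the unstable beta widget stays exploratory.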

From “test executor” to “quality designer”

This may be the most important shift of all.

In many teams, testers have historically been treated as the final gate:

  • development finishes
  • testing starts
  • bugs are reported
  • fixes are verified

But today, the role becomes much more valuable when it starts earlier and reaches wider.

Not just someone who runs checklists.
But someone who:

  • helps clarify requirements
  • helps the team see risk
  • designs meaningful scenarios
  • controls coverage depth
  • decides what should be automated
  • uses AI to speed up execution without giving up judgment

In other words, testers should not compete with AI in the places where machines are naturally better.
They should use AI to move themselves toward the part of quality work that requires deeper thinking, stronger business understanding, and better judgment.

The real problem is not that AI can write test scripts

The real problem is continuing to test the same old way while everything around the role is changing.

Waiting until requirements are “done” before joining the conversation.
Writing test cases like filling out a form.
Running checklists without questioning business logic.
Treating automation as someone else’s responsibility.
Using bug count as the main indicator of quality.

If that mindset stays unchanged, AI will absolutely make people feel left behind.

But if testers use AI as a tool to accelerate the mechanical part of the job while expanding into business analysis, scenario design, matrix review, and automation thinking, then the story looks very different.

In that case, AI does not make testers smaller.
It simply pushes them to become more complete.

Conclusion

Testers are not running out of work in the age of AI. But the old model of manual execution, template-driven test cases, and routine checking is losing value faster than before.

The bigger opportunity lies elsewhere:

  • understanding the business more deeply
  • getting involved earlier in requirements
  • using AI to accelerate scenario design
  • controlling matrix and coverage more effectively
  • reviewing artifacts more intelligently
  • pushing automation where it truly matters

So the better question is not:

“Will AI replace testers?”

It is:

“Which testers will become stronger now that AI handles execution so well?”

The answer is probably not the ones who run the most tests.

It is the ones who design quality better.

VAREAL Vietnam

AI-first software company, building intelligent solutions with AI at the core.

© 2026 Vareal Vietnam Co., Ltd. All rights reserved.

Legal representative: Teramoto Masahiro, Chairman