Did You Look at AI's Answer from a Different Angle?
Author: Tomotaka ASAGI Published: Mar 01, 2026
Introduction
When I work with AI on test analysis and design, I never trust the first output.
I provide the context and what I want to analyze, review the output, share my thinking, and have it generate again. Over and over. For example, when analyzing why a project has a high bug count, I don't just accept AI's suggestions at face value — I use them as a starting point to deepen my own understanding.
Through this process, I realized something: evaluating AI's output requires thinking on multiple axes.
What Security Testing Taught Me
A while back, I had the opportunity to analyze security testing with AI on a project.
The project had solid penetration testing in place. It's a fundamental practice in security — probing system vulnerabilities from the outside using known attack techniques. In many industries, you can't release without passing third-party security testing, so it makes sense to invest heavily here.
But when I asked AI to analyze bug trends, it came back with an interesting question: "Is the testing for role-based access control sufficient?"
Think about a bug where a regular user can escalate to admin privileges. That's a security hole, but look at it from another angle — it's also a functional testing issue. It's about whether access control per role is working correctly.
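That reframing can be made concrete. Here is a minimal sketch, with invented role and action names, showing how the same privilege-escalation bug is catchable by an ordinary functional test of role-based access control:

```python
# Hypothetical sketch: role-based access control expressed as a plain
# functional-testing concern. All role/action names are illustrative.

def can_access(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    permissions = {
        "admin": {"view_reports", "manage_users", "delete_records"},
        "user": {"view_reports"},
    }
    return action in permissions.get(role, set())

# A functional test catches the privilege-escalation bug directly:
assert can_access("admin", "manage_users") is True
assert can_access("user", "view_reports") is True
assert can_access("user", "manage_users") is False  # a regular user must NOT reach admin actions
```

The last assertion is simultaneously a functional check and a security check: if it fails, you have both a broken feature and a security hole.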
In practice, many bugs found during security testing could have been caught earlier through proper functional testing. Behind the pattern of "we do thorough security testing, yet bugs keep appearing" often lie gaps in functional testing. When you have AI analyze bugs and ask "which phase should have caught this?", that's one of the first things it points out.
No matter how thorough your penetration testing is, if the functional-testing perspective is missing, security holes remain. AI's observation reminded me that working logically within a single framework isn't enough.
Three Ways of Thinking
Reflecting on this experience, I realized I use several distinct modes of thinking when designing tests.
Vertical Thinking — Logical, Step by Step
Vertical thinking is about progressing logically, step by step, within a single framework. You move from one step to the next, verifying correctness along the way.
Penetration testing is exactly this. Given the question "Can we break into this system?", you systematically try known attack techniques one by one. Is this path vulnerable? What about this weakness? — a sequential process of verification.
The deeper your knowledge of the target system's domain (the business area or industry the system serves), the more you can design tests that go beyond what general testing would cover. For a financial system, that means verification based on regulatory requirements. For a healthcare system, exception handling specific to clinical workflows. These test perspectives emerge precisely because you're following the domain's logic step by step.
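As a concrete (and entirely hypothetical) illustration of following a domain's logic step by step, imagine a financial system where a regulation caps a single transfer. The limit value and function name below are invented for the sketch:

```python
# Hypothetical domain-driven check for a financial system.
# The regulatory limit here is an illustrative value, not a real regulation.

REGULATORY_TRANSFER_LIMIT = 1_000_000

def validate_transfer(amount: int) -> bool:
    """Reject transfers that exceed the regulatory limit or are non-positive."""
    return 0 < amount <= REGULATORY_TRANSFER_LIMIT

# Vertical thinking: walk the boundaries the domain's rule implies, one by one.
assert validate_transfer(1) is True
assert validate_transfer(REGULATORY_TRANSFER_LIMIT) is True       # exactly on the boundary
assert validate_transfer(REGULATORY_TRANSFER_LIMIT + 1) is False  # one step past it
assert validate_transfer(0) is False
```

The boundary values themselves come from the regulation, not from generic testing heuristics; that is what domain knowledge adds to vertical thinking.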
However, when you focus too much on following the logic within one framework, you risk missing problems in the framework right next to you.
Lateral Thinking — Switching Frameworks
Lateral thinking is about intentionally stepping away from your current framework and looking at the problem from a different angle. By switching from one pattern to another, you gain fresh insights.
The role-based testing example is exactly this. Switching your perspective from "a security problem" to "a functional testing problem." A bug where a regular user can escalate to admin privileges is both a security hole and a functional bug in access control. It's a perspective that only becomes visible when you switch frameworks.
What's interesting is that the deeper someone's domain expertise, the more likely they are to stay within that domain's framework. They can dig deep within their area's logic, but general perspectives — "is this kind of pattern actually safe?" — tend to get overlooked. Front and back, left and right — I've learned through experience that consciously looking for the opposite perspective matters.
Critical Thinking — Connections and Sufficiency
The third mode is stepping back to ask, "Is this enough?"
Even if individual tests are good, that alone isn't reassuring. You start with requirements, identify risks through analysis, and then ask: are those risks covered by tests? When a test fails, which requirement's risk level goes up? Can you derive test priority and severity from risk assessments?
As you move from one phase to the next, are the connections — the traceability — actually there? From test analysis to test design to test case creation, you need to examine the entire flow with a critical eye.
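The traceability idea above can be sketched as data. The identifiers (REQ-1, TC-1, and so on) are invented for illustration; the point is that each test case links back to a requirement and its assessed risk, so a failing test immediately tells you whose risk just went up:

```python
# Minimal traceability sketch with invented identifiers.

requirements = {
    "REQ-1": {"risk": "high", "desc": "only admins may manage users"},
    "REQ-2": {"risk": "low", "desc": "users can view reports"},
}
test_cases = {
    "TC-1": {"covers": "REQ-1", "passed": False},
    "TC-2": {"covers": "REQ-2", "passed": True},
}

# When a test fails, which requirement's risk level goes up?
elevated = [tc["covers"] for tc in test_cases.values() if not tc["passed"]]
print(elevated)  # → ['REQ-1']  (the failed TC-1 traces back to high-risk REQ-1)

# Critical thinking's "is this enough?": is any requirement left uncovered?
covered = {tc["covers"] for tc in test_cases.values()}
uncovered = set(requirements) - covered
assert uncovered == set()
```

Real traceability lives in a test management tool rather than dictionaries, but the questions it must answer are exactly these two lookups.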
Vertical thinking digs deep within a framework. Lateral thinking switches frameworks for different perspectives. Critical thinking verifies whether all of that is truly sufficient. I believe test design coverage only becomes visible when all three come together.
Where AI Fits In
So how do these three ways of thinking relate to working with AI?
When you ask AI to generate test perspectives, it's good at logical enumeration within a single framework. Ask for "security test perspectives" and you'll get a solid list. For the lateral direction — "look at security from a functional testing perspective" — AI can nominally cover that too if instructed.
But the critical thinking part — examining traceability between phases and pointing out "this connection is weak" — is difficult without understanding the project context.
What's interesting is that recent AI approaches involve assigning different roles to multiple agents. For example, one agent that digs deep vertically, one that expands laterally, and one that critically reviews.
In practice, the lateral agent tends to generate many ideas and diverge easily. The critical agent then steps in: "Given the current scope and phase, which of these should we actually focus on?" This mirrors the role assignment you'd see in a human test analysis team.
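The diverge-then-filter structure can be sketched in a few lines. This is a conceptual sketch only: in a real setup each function would call an LLM with a role prompt, but here they return canned illustrative data so the shape of the pipeline is visible. All names and strings are hypothetical.

```python
# Conceptual sketch of three thinking-mode "agents". Real implementations
# would replace the canned return values with LLM calls; every name here
# is invented for illustration.

def vertical_agent(topic: str) -> list[str]:
    # Digs deep within one framework (e.g., known attack techniques).
    return [f"{topic}: SQL injection probe", f"{topic}: session fixation probe"]

def lateral_agent(topic: str) -> list[str]:
    # Reframes the topic from adjacent frameworks; tends to diverge.
    return [f"{topic} as functional testing: role-based access control",
            f"{topic} as usability: error messages leaking internals"]

def critical_agent(ideas: list[str], scope: str) -> list[str]:
    # Filters divergent ideas against the current scope and phase.
    return [idea for idea in ideas if scope in idea]

ideas = vertical_agent("security") + lateral_agent("security")
focused = critical_agent(ideas, scope="functional testing")
print(focused)  # → ['security as functional testing: role-based access control']
```

The interesting design choice is that the critical agent never generates ideas of its own; its only job is sufficiency and relevance, just as in the human team structure described above.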
In other words, understanding these thinking modes directly translates to designing how you use AI. A person who knows what perspectives are needed can design what roles to assign to AI.
Closing Thoughts
Vertical thinking, lateral thinking, critical thinking. These three make a useful framework, but knowing them alone doesn't make you wise.
The instinct to naturally think "if there's a front, check the back" comes from designing tests over and over, noticing what you missed, learning from mistakes, and thinking again — it builds up through that cycle. Wisdom comes from the accumulation of thinking deeply and continuing to act on it. It's messy, unglamorous work.
But now we have AI as a partner. AI will go through the cycle of thinking and trying with you as many times as you need. It never gets tired of it (and I'm genuinely grateful for that). So you can run far more cycles, far faster than before.
Think and try. Think and try again. Through that cycle, knowledge transforms into wisdom.
The more you use AI, the more your own thinking ability is tested. It sounds contradictory, but that's what I'm experiencing.
This is Part 2 of the "AI × Software Testing" series.
- Do You Review AI-Generated Test Code?
- Did You Look at AI's Answer from a Different Angle? (this article)
Arrangility Sdn. Bhd.