Latest Content
The $15 Million Problem: Why Your Cloud Data Testing Strategy Is Costing You More Than You Think
How modern organizations can ensure data integrity in cloud environments through robust testing strategies.
Quality Engineering Leader, Building AI into Testing
I build AI-powered testing systems that find defects faster, reduce manual effort, and make enterprise software reliable at scale. 10+ years of quality engineering experience, now focused on agentic testing, AI-assisted validation, and building the future of how software gets tested.
I've spent the last decade making enterprise systems reliable by building test automation frameworks, data validation pipelines, and quality processes from scratch. Now I'm focused on what comes next: using AI to fundamentally change how we test software.
Every day, I use AI tools in production work, not as experiments but as core infrastructure. I use Claude Code to generate test suites, build API validations, write documentation, and manage test workflows. I've built GPT-based anomaly detection systems that catch data quality issues such as schema drift, distribution shifts, and transformation errors that traditional rule-based checks miss.
I'm now building toward agentic testing: AI agents that can autonomously generate test cases, execute them, analyze results, and triage failures with human oversight at critical checkpoints. This is the future of quality engineering, and I'm building it in production, not just talking about it.
My background spans insurance, enterprise software, and healthcare data systems. These are complex, regulated environments where getting quality wrong has real consequences. That's what drives my approach: AI makes testing faster, but human judgment makes it trustworthy.
I use Claude Code daily to generate Pytest test suites, API validations, and data pipeline checks. This is not one-off prompting. It is a structured workflow where AI generates tests following our framework conventions, and I review and refine them. Coverage has expanded to edge cases and boundary conditions we previously skipped due to time constraints.
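To make that concrete, here is the shape of a test this workflow produces. The `validate_premium` function and its limits are hypothetical stand-ins for real business logic, not code from any client system; the point is the boundary-condition coverage that becomes cheap to enumerate when AI drafts the cases.

```python
# Hypothetical example: validate_premium and its limits are illustrative
# stand-ins for real business logic, not code from a real system.
import pytest

def validate_premium(amount: float) -> float:
    """Toy function under test: accept premiums in (0, 1,000,000]."""
    if amount <= 0:
        raise ValueError("premium must be positive")
    if amount > 1_000_000:
        raise ValueError("premium exceeds policy maximum")
    return round(amount, 2)

# Boundary cases like these are exactly what gets skipped under time
# pressure, and exactly what AI generation makes cheap to enumerate.
@pytest.mark.parametrize("amount,expected", [
    (0.01, 0.01),            # smallest valid premium
    (1_000_000, 1_000_000),  # exact upper bound
    (99.999, 100.0),         # rounding at a boundary
])
def test_validate_premium_accepts_edge_values(amount, expected):
    assert validate_premium(amount) == expected

@pytest.mark.parametrize("amount", [0, -1, 1_000_000.01])
def test_validate_premium_rejects_out_of_range(amount):
    with pytest.raises(ValueError):
        validate_premium(amount)
```

The review step matters here: the AI proposes the parameter table, and a human confirms the boundaries actually match the business rules before the tests merge.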
I built AI-powered validation that monitors data pipelines across cloud databases. It detects schema drift, distribution shifts, and transformation anomalies that rule-based checks can't catch. Integrated into CI/CD as quality gates, these checks stop bad data before it reaches production or customers.
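A minimal sketch of what such a gate can look like, assuming a pandas-based pipeline. The column names and thresholds are illustrative, and the mean-based shift heuristic is a deliberate simplification; a production check would use a proper statistical test such as Kolmogorov-Smirnov.

```python
# Illustrative sketch of a data quality gate; schema, column names, and
# thresholds are assumptions, not a real pipeline's configuration.
import pandas as pd

EXPECTED_SCHEMA = {"claim_id": "int64", "amount": "float64", "state": "object"}

def check_schema_drift(df: pd.DataFrame) -> list[str]:
    """Flag missing, retyped, or unexpected columns against the contract."""
    issues = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"type drift on {col}: {df[col].dtype} != {dtype}")
    for col in df.columns:
        if col not in EXPECTED_SCHEMA:
            issues.append(f"unexpected column: {col}")
    return issues

def check_distribution_shift(baseline: pd.Series, current: pd.Series,
                             threshold: float = 0.25) -> bool:
    """Crude shift detector: relative change in mean beyond a threshold.
    A production gate would use a statistical test (e.g. KS) instead."""
    base_mean = baseline.mean()
    if base_mean == 0:
        return current.mean() != 0
    return abs(current.mean() - base_mean) / abs(base_mean) > threshold
```

Wired into CI/CD, a nonzero exit on any reported issue is what turns these checks from monitoring into a gate that actually blocks the deploy.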
I'm building toward fully autonomous test agents that can explore applications, identify testable scenarios, generate and execute test cases, analyze results, and file bugs with human review checkpoints. The goal: testing that scales with AI while maintaining the judgment that keeps systems trustworthy.
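To show where the human checkpoints sit, here is a hypothetical skeleton of that loop. The agent, app, and tracker interfaces are assumptions, and the terminal prompt stands in for a real review UI; the structure, not the implementation, is the point.

```python
# Hypothetical skeleton of an agentic test cycle. The agent, app, and
# tracker objects are assumed interfaces used only to illustrate where
# human oversight interrupts the autonomous flow.
from dataclasses import dataclass

@dataclass
class TestResult:
    scenario: str
    passed: bool
    evidence: str

def human_approves(item) -> bool:
    """Stand-in for a review UI; here, a simple terminal prompt."""
    return input(f"Approve {item}? [y/N] ").strip().lower() == "y"

def run_agentic_cycle(agent, app, tracker) -> None:
    # 1. Autonomous: the agent explores the app and proposes scenarios.
    scenarios = agent.explore_and_propose(app)

    # 2. Checkpoint: a human approves the plan before anything executes.
    approved = [s for s in scenarios if human_approves(s)]

    # 3. Autonomous: generate, execute, analyze (each result a TestResult).
    results = [agent.execute(app, s) for s in approved]
    failures = [r for r in results if not r.passed]

    # 4. Checkpoint: a human triages before any bug is filed.
    for failure in failures:
        if human_approves(failure):
            tracker.file_bug(failure.scenario, failure.evidence)
```

The design choice is that autonomy lives inside the steps while humans own the transitions between them; the agent never moves from planning to execution, or from analysis to filing, without sign-off.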
Testing isn't just writing code. I use AI tools for test generation, API validation, documentation, and test workflow management. What used to take hours now takes minutes. The compounding effect: when test writing is cheap, coverage naturally expands.
AI in testing isn't magic. Here's what I've learned: the teams that win with AI testing won't be the ones that trust it blindly. They'll be the ones that build the right guardrails around it.
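One small example of the kind of guardrail I mean: a pre-merge check that rejects AI-generated test files containing no assertions, a common failure mode for generated tests. The `ast`-based check below is a sketch of one such rule, not a complete review process.

```python
# Sketch of a pre-merge guardrail: reject generated test files with no
# assertions. One rule among many; illustrative, not a full review gate.
import ast
import sys

def has_assertions(source: str) -> bool:
    """Return True if the file asserts or expects a raise somewhere."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Assert):
            return True
        # pytest.raises(...) used as a context manager also counts.
        if isinstance(node, ast.With):
            for item in node.items:
                call = item.context_expr
                if isinstance(call, ast.Call) and \
                        getattr(call.func, "attr", "") == "raises":
                    return True
    return False

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            if not has_assertions(f.read()):
                sys.exit(f"guardrail failed: {path} has no assertions")
```

Run against every generated test file in CI, a check like this catches the tests that execute code but verify nothing, which is where blind trust in generation quietly fails.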
Insurance Technology Company
2020 - Present
Enterprise Software Company
2016 - 2020
Healthcare Technology Firm
2015 - 2016
Technology Solutions Provider
2011 - 2013
AI won't replace testers. But testers who use AI will replace testers who don't. The skill isn't prompting. It's knowing what to test and why. AI handles the how.
Most "AI testing tools" are wrappers around GPT with a nice UI. The real value isn't the tool. It's building the workflow that makes AI output trustworthy enough to act on.
I've seen AI generate 50 test cases in 10 seconds. 40 were useful. 8 were redundant. 2 were dangerously wrong. The future of QA isn't generating tests. It's building the guardrails that catch the 2.
Agentic testing is coming. AI agents that explore, test, and report autonomously. The QA leaders who build these systems today will define the field for the next decade.