Episodes

  • How To Test With ContextQA - Deep Barot
    Apr 23 2026

    Deep Barot, founder and CEO of ContextQA, an IBM partner and G2-recognized platform, built a company solving a problem he experienced as a DevOps engineer: QA teams filing bug reports without enough context, costing him 3–4 hours a day just to reproduce issues. His answer? The world's first context-aware testing platform. And his belief: AI should handle 80% of repetitive tests so humans can focus on what requires judgment.


    To find out more:
    🎧 Listen to the full conversation: https://youtu.be/D-Ba3qT36_g
    🔗 Connect with Deep Barot: https://www.linkedin.com/in/deep-barot-98b7bb33/
    🔗 ContextQA Website: https://contextqa.com/
    🔗 Book a demo with ContextQA: https://contextqa.com/book-a-demo/

    48 min
  • How to Test With Playwright CLI & AI - Lucas Smit
    Apr 17 2026

    Lucas Smit, SDET II at Optro and instructor at Codemify Tech Bootcamp, shares how he led the migration of 1,000+ test cases to Playwright, built AI agent infrastructure for his QA team, and is now pioneering AI red teaming for production systems.

    43 min
  • How To Test As an Agentic Quality Engineer – Dragan Spiridonov
    Apr 8 2026

    Most testers won’t be replaced by AI.

    But many will fall behind because they don’t evolve with it.


    Episode #22 – How to Test as an Agentic Quality Engineer

    Dragan Spiridonov, Founder of Quantum Quality Engineering, shares how AI is not replacing testers but elevating them into orchestrators of intelligent systems.


    👉 Top 5 Advice to Become an Agentic QA Engineer:


    1. Master the basics before the agents
    Start with prompting → then context → then agents.


    2. Invest in thinking, not just tools
    Critical thinking, risk analysis, and exploratory testing are the differentiators.


    3. Design systems, not just tests
    From executing test cases → to orchestrating agents that do the testing.


    4. Build your own AI-powered QA projects
    Small tools, experiments, or open-source projects to learn faster.


    5. Verify everything (10% build / 90% validate)

    Agentic QA is about designing strong validation loops.
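    The "10% build / 90% validate" idea can be sketched as a loop that refuses any agent output until every check passes. This is a hypothetical illustration, not code from the episode; the function names and the stand-in "agent" are invented for the example.

```python
from typing import Callable

def validated_run(generate: Callable[[], str],
                  validators: list[Callable[[str], bool]],
                  max_attempts: int = 3) -> str:
    """Run a generator (e.g. an AI agent) and accept its output
    only once every validator passes; retry otherwise."""
    for _ in range(max_attempts):
        candidate = generate()
        if all(check(candidate) for check in validators):
            return candidate
    raise RuntimeError("no candidate passed validation")

# Usage: a stand-in "agent" that improves on retry, plus two checks.
outputs = iter(["", "SELECT 1"])
result = validated_run(
    generate=lambda: next(outputs),
    validators=[lambda s: bool(s.strip()),              # non-empty
                lambda s: s.upper().startswith("SELECT")],  # expected shape
)
print(result)  # prints "SELECT 1"
```

    The point of the sketch is the ratio: the generation step is one line, while most of the design effort goes into the validators.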


    Which of these levels are you currently at: AI Assistant, AI Augmented, or full Agentic QE?

    38 min
  • How to Test With an Agentic AI Automation Tool - Geosley Andrades
    Mar 29 2026

    Teams don't fail at AI adoption because of the technology, but because of how they evaluate and adopt it.


    Geosley Andrades, Product Evangelist & Community Builder at ACCELQ, shares how Agentic AI is changing test automation.


    And what QA teams need to know before jumping in.


    👉 5 Mistakes Teams Make When Adopting AI Testing Tools:


    ⛔ Using AI everywhere vs ✅ Know where AI actually solves a problem.

    AI without the right context will hallucinate. Be skillful, not just enthusiastic.


    ⛔ Picking a tool before defining scope vs ✅ Define what you need to automate first.

    Web? API? Mobile? Desktop? If the tool can't do it, you're wasting time evaluating it.


    ⛔ Going all-in at once vs ✅ Start with a proof of concept.

    Give the tool your hardest test case. If it handles that, the basics are a given.


    ⛔ Using AI just to generate code vs ✅ Adopt a design-first approach.

    AI-generated code without design principles creates tech debt, not solutions.


    ⛔ Ignoring where your data goes vs ✅ Check the architecture before you buy.

    If your test data is going to a public LLM, you have a data breach waiting to happen.


    Which of these mistakes have you seen on your team?


    Connect with Geosley or find out more:


    • LinkedIn: https://www.linkedin.com/in/geosley/
    • Personal Blog: https://geosley.blogspot.com/
    • ACCELQ Website: https://www.accelq.com
    • Blog (articles by Geosley): https://www.accelq.com/blog/autonomous-testing/
    51 min
  • How to Test a Release – Oleksandr Bolzhelarskyi
    Mar 27 2026

    Today, Oleksandr Bolzhelarskyi, Director of QA & Release Management at Salesfloor, shares why successful releases are impossible without responsible testing and quality management, and how teams can stop firefighting production bugs and start shipping with confidence.


    👉 5 Golden Rules for a Successful Release


    ⛔ Testing features individually → ✅ Testing the release as a package

    Features that work perfectly in isolation can break when merged together. Two developers changing the same component won't know about each other's work until it's combined. Always retest the full package.


    ⛔ Bug bashes → ✅ Combining structured testing with fresh eyes.

    Involving non-testers brings valuable perspective, but it's not systematic. Bug bashes catch random issues, not targeted risks. Use them as a complement, never as your only release testing.


    ⛔ Testing everything → ✅ Testing based on risk and product knowledge

    You'll never have time to cover everything. Know your product architecture, understand what the new changes could break, and focus regression testing where the real risks are.


    ⛔ Release pressure → ✅ Communicating untested risks clearly

    Pressure to ship means the feature matters. Instead of pushing back emotionally, tell your manager: here's what we tested, here's what we didn't, and here's what could go wrong. Let them make an informed decision.


    ⛔ Allowing "one last quick fix" → ✅ Enforcing a strict code freeze

    Last-minute changes during regression testing cause the biggest surprises. Say no. If it's urgent, ship it as a hot fix after the current release is done.


    Connect on LinkedIn: https://www.linkedin.com/in/oleksandr-bolzhelarskyi/

    58 min
  • How to Test with Independent QA | Guest: Tudor Brad
    Mar 19 2026

    If the chef shouldn't certify his own dish, why should your dev team validate their own code? Today, Tudor Brad shares why independent QA is non-negotiable. With 15+ years in QA, Tudor founded BetterQA in 2018, a team of 50+ engineers across 24+ countries. They've built in-house tools like BugBoard, Flows, and Better Flow to bring full transparency and quality to the testing process.

    40 min
  • How to Test This with AI and MCP - Deepak Kamboj
    Mar 17 2026

    Today, Deepak Kamboj, Senior Software Engineer and Solution Architect at Microsoft, shares how he scaled Playwright automation across 40 teams and 14,000 test cases and why AI agents are the next leap for test engineering.


    👉 Key takeaways from Deepak:


    🔹 Build infrastructure, not just test cases

    "My work was focused on building a large scale automation framework, not just writing test cases."


    🔹 Use AI across the full test lifecycle

    "I started building AI agents that can generate Playwright test cases, analyze failures, run accessibility checks, do performance checks, visual comparison, and ultimately create pull requests automatically."


    🔹 Learn prompt engineering

    "Prompt engineering is very good when you are about to do automation. The way you are writing a system prompt or your user prompt will play a big role in the way your agent will behave."


    🔹 Don't fear AI — use it

    "AI is your co-pilot or your companion. Don't take it as your replacement. It will complement you. It will provide you with more efficiency. It will make you more effective."


    🔹 Let AI handle the repetitive work

    "Testers can use AI agents to automate a lot of repetitive activities they were performing. They can use agents to understand why a system fails, why a particular test case fails, what type of test cases to write."
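    Deepak's point about prompt engineering is that the system prompt largely determines how an agent behaves. A minimal sketch of assembling such prompts for a test-generation agent (the function, field names, and page details are hypothetical, not from the episode):

```python
def build_prompts(page_description: str, user_goal: str) -> dict:
    """Assemble system and user prompts for a (hypothetical)
    test-generation agent. The system prompt pins down role, output
    format, and constraints so the agent behaves predictably."""
    system = (
        "You are a QA automation agent. "
        "Generate Playwright tests in TypeScript. "
        "Only use selectors present in the page description. "
        "Output a single test file, no commentary."
    )
    user = f"Page under test:\n{page_description}\n\nGoal:\n{user_goal}"
    return {"system": system, "user": user}

# Usage with an invented login page and goal.
prompts = build_prompts(
    page_description="Login form with #email, #password, #submit",
    user_goal="Verify that invalid credentials show an error banner",
)
print(prompts["system"])
```

    The design choice worth noting: constraints ("only use selectors present in the page description") live in the system prompt, while per-page facts live in the user prompt, so the same agent can be reused across pages.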

    27 min
  • How to Test with HIST - Ruslan Desyatnikov
    Mar 7 2026

    Today, Ruslan Desyatnikov, CEO of QA Mentor and creator of the Human Intelligence Software Testing (HIST) framework, explains why the QA industry is at risk of losing its strategic role and how teams can bring human intelligence back into software testing.


    👉 Testing Problems solved with the HIST mindset


    1️⃣ STOP treating requirements as a Bible — START challenging them early

    => Actively question requirements during review sessions. Always ask "why" and "what if" to eliminate ambiguities and prevent costly downstream defects.


    2️⃣ STOP obsessing over automation — START using it strategically

    => Automation absolutely supports testing but doesn't replace human thinking. Focus on risk-based coverage and business value, not big test-case counts.


    3️⃣ STOP being a passive button-pusher — START thinking like an investigator

    => Go beyond front-end clicking. Analyze backend logic, business rules, integrations, and real user purpose to uncover meaningful defects, not just cosmetic ones.


    4️⃣ STOP reporting isolated bugs — START connecting defects to business impact

    => Map quality issues to revenue generation, client retention, and business value — so stakeholders understand what testers bring to the table.


    5️⃣ STOP blindly trusting AI output — START keeping human intelligence in control

    => Whether it's AI-generated test cases or automation predictions, always verify, spot-check, and apply human judgment before acting on results.


    Resources mentioned in this episode:

    - QA Mentor - https://www.qamentor.com/

    - HIST Testing Methodology - https://www.qamentor.com/what-is-hist/

    - Ruslan's LinkedIn - https://www.linkedin.com/in/ruslandesyatnikov/

    1 hr 1 min