Exploring Six Types of Decisions Orgs Make When Working with Test Automation Frameworks That Often Lead to Trouble
by Trevor Wagner | at Minnebar20 | 3:15 – 3:55 in Georgia
While tooling that supports test automation is tasked with discharging some unique responsibilities, the principles that set the stage for success with testing tooling closely resemble those that benefit any first-class software solution.
In particular, test automation frameworks carry a lot of weight: they provide the means to define and execute tests, to produce reliable test result data, and to supply scaffolding for extension and adaptation that improves coverage and reach. And as AI becomes more closely intertwined with how we deliver software, it will become increasingly important not just to release with confidence, but also to analyze, evaluate, and interact with work product with confidence.
Despite this, organizations seem to encounter remarkably similar sorts of dysfunction when working with test automation frameworks, ultimately the sort of dysfunction that presents unwelcome trade-offs related to sunk cost. This dysfunction seems to follow six types of decisions with distinct characteristics and trade-offs. If those in charge of execution and decision-making understood what these decisions look like and how they incur risk of dysfunction, perhaps they could anticipate the trouble and make decisions that manage that risk better.
I’ve been brought in by a couple of different organizations interested in resolving dysfunction in existing test automation frameworks and in building new tooling that stands a chance of circumventing it. I’m hosting this session to share what I see — in terms of who seems to be involved in or affected by the dysfunction, where the dysfunction happens, and what organizations do (and how, and why) that perennially opens them to risk.
Format: 30-ish minutes of presentation, 8-10 minutes of Q&A.
Trevor Wagner
Trevor is a Consulting Programmer Analyst / SDET / Quality Engineer with Upstream Consulting LLC.
With nearly 20 years of success helping development organizations deliver production solutions, testing, automation, and strategy for products of various sizes in a diverse set of environments, he helps organizations develop vision for visibility and insight into the functional state of work product.
Similar Sessions
Does this session sound interesting? You may also like these:
- What the fuck are passkeys and why are they everywhere now? by Dan Lew
- No One Has the Full Picture (Especially in Complex Systems) by Tom Harren
- 📡🕸️ Preppers & Comrades Unite: Building a Decentralized Mesh Network for Resilient Communication
- End-to-end tests considered harmful (securing credentials for E2E and synthetic testing) by Katie Kodes
- Play Well With Others: Creativity Through Collaboration