Starting new company - learning well established applications

Morning All

When moving to a new company which has some well-established applications, how do you approach learning that new application?
How long would you expect it to take?
What sort of artefacts and/or methods do you want or use?
Have there been good or bad examples you remember?



I start with collecting all the introduction-level info I can. User manuals, website info, and most of all people who know what they’re on about to give me tours and assistance and answer questions.

Testing is learning, so essentially I then test it. I do a sort of series of long recon sessions where the object is to see as much as possible, rather than see it fail - what it can do rather than what it might do. For each session I’ll take notes on how things work, maybe testing ideas or testability issues, and definitely any questions or confusion I have so that I can ask a bunch of questions.

Then I take my info to someone who can help me to understand it better. Go back and look at more technical documents, find new people who know about subsystems, meet teams responsible for different parts of the product.

Then I form new testing charters and begin again, narrowing the scope of things I find interesting or complex. Iterate within the time and resources you have. This can also serve as useful learning for testing, so I can learn the product during the testing I do, as long as I have enough overview of the basics.

How long would you expect it to take?

Depends on what I need to learn, what I already know and my ability to learn quickly. So it’s different for every person and product.

What sort of artefacts and/or methods do you want or use?

I’ll make session notes, build risk catalogues, maps of the product, reminders on how things work, anything I think is valuable to write down.

Methods include any and all testing methods I can pull out that fit the strategy. For example I’m unlikely to consider many equivalence partitions of test data, excepting those that allow me access to the product - so I might need different kinds of users so that I can explore as an admin or standard user and get a feel for the two kinds of functionality.

Have there been good or bad examples you remember?

I find that the hardest systems to learn are based in a background I’m unfamiliar with. So it’s the complexities of the industry that make it hard, not the software in itself - I’d need to know a lot about house insurance terms and processes and concepts to work on software dealing with house insurance, for example.


Great reply there - thank you.

Out of interest, is this a task that you’d be happy doing for a few weeks, or do you find you need to also have some testing of new features slotted in to break things up a bit?

(sorry, I know this is a bit of a vague, open question)


Most welcome. And I like testing, so either is fine for me. The more I know about a product the better placed I am to test it, but it’s also a concern for the business that I provide value. Depending on the business this might mean anything from a lot of study, through lots of recon, through enough recon to get some testing done, through pair testing to learn and provide value, all the way to just starting testing and learning as I go. All are valid; it depends on the company and product.

Products in familiar spaces need much less domain learning, like a calculator app I already understand enough about to get started before I research other apps and common failures and so on.


The things that I ponder on this topic - and I’ve not said, perhaps I should, but I’m looking at this from the context of the Employer here, not the new guy.

How much time are you happy to hear people drone on and on, compared to how much time you want to sit there and do some exploring yourself (provided you have access to the right artefacts)? I’m considering the value and importance lower-level test cases might have in this process. I.e. if you have some brief explanation and then detailed test cases, that might be enough to get you started at least (to get a gist, then do some exploration).

For example - one of the applications I work with, it could take at least a month to skim across the topics a bit. This would then give you the foundation to continue learning further features (which might be lower importance), or to go back and dive deeper into things.


The things that I ponder on this topic - and I’ve not said, perhaps I should, but I’m looking at this from the context of the Employer here, not the new guy.

The main thing I should add, then, is not every tester is me. I worked very hard to get very good at evaluating software quickly. Some testers have been told that following boring instructions or writing simplistic checks are a valid substitute for what I consider testing, and those people may struggle more to learn quickly, or apply what they’ve learned effectively. Not everyone considers themselves responsible for product learning, and may need more hand-holding. So if you’re looking at me as an example I’m an example of one particular kind of tester and ymmv.

However, if you’re taking time to make your position explicit I should ask why you’re asking about learning applications. Are you trying to make on-boarding easier or quicker?

How much time are you happy to hear people drone on and on, compared to how much time you want to sit there and do some exploring yourself (provided you have access to the right artefacts)?

Interesting people talking about interesting things? Otherwise? Depends on the people, and the substance and quality of the droning. My job is to take whatever I can to learn in a useful and efficient way. If someone talking to me about the domain is helping that, then I want to hear. Again, just my answer, everyone’s different.

I.e. if you have some brief explanation and then detailed test cases, that might be enough to get you started at least

If I have them or if any new tester has them?

I despise written test cases, and because of the kind of tester I am I’m hired to be very good at exploring, learning and reporting. I don’t accept being told what to do by people who don’t trust me to be good at my job and I’ve never worked for any company that says that I must. Test cases restrict freedom of learning - if you want to guide learning I recommend charters with checklists. I have written so much about test cases on MoT in the last few years, so feel free to check my history if you want to know why I think they’re detrimental and unnecessarily cruel to testers - or I can talk about them with you if you have specific questions for me.

If I can explore a product and gain, through previous experience of software and living in society, some understanding of how I think it achieves what it’s supposed to achieve (and I understand what it’s supposed to achieve in a general sense) then I don’t need instruction, I can just learn. Others might need more guidance, and for them I recommend setting them tasks. “Create an admin user” is a perfectly okay task, and if it needs detailed instructions on how to achieve it perhaps that’s an indicator there’s a problem with your software. If your software is convoluted and obtuse then you’ll have to take the hit on the costs that come with it, including testability and learning for new staff. You may be better off pairing the tester with someone and getting them to show them how to achieve their tasks (also makes the task list much easier to maintain).

For example - one of the applications I work with, it could take at least a month to skim across the topics a bit.

The topics of the domain, or the topics of the application itself? Because domain learning is part of what can make testing harder.

I assume that you’re investigating how to on-board new testers, because you haven’t before or you’re looking to make it better or easier or faster. I didn’t know how to sum that up, so I wrote a 2000 word article draft about it and submitted it to MoT, and I’ll link it here if it gets published.


Thanks again. You’ve taken time out to give decent, thought-out replies, and it’s appreciated.

Yeah, I’ve seen some of your comments/threads about test cases in the past (and responded to them etc). From the way you describe yourself, I’d say I fit in a similar camp to you in your approach. I don’t think testing should be driven exclusively by detailed test cases (there are some scenarios, though, where I’ve needed to do this and it was fine for the specific examples - more related to e2e integration testing across multiple organisations).

Learning a domain and an application is a double battlefront - though the former will definitely aid the latter. I think some people just “get” things faster (the penny drops faster). E.g. in my case, as I understand the domain, I’m going to generally understand new developments and applications quickly as well as spotting holes and issues in design specs that others might not see.

So on another thread I’ve also mentioned test cases and a role they may or could play in learning an application/onboarding. They could in essence play the part of an artefact such as an SOP/work instruction, or point you in the direction of at least a “happy path” test example to start your learning.

A very simple example: in an e-commerce site, we have functionality for discounts. Discounts are a combination of 3 factors (bits of data) together with a discount value - i.e. A, B, C = 0.10 (a 10% saving). So in the test system there are some items and data set up so you can easily run a test to show discounts are working OK. This test, with the examples and the expected outcome, and perhaps a paragraph of overview (i.e. tables involved, what the 3 bits of data are and where they come from), would hopefully be enough to give a quick demo - and then off you go, you can explore the discount functionality.


“There are lots of ways for testers to learn about the product, and almost all of them would foster learning better than procedurally scripted test cases. Giving a tester a script tends to focus the tester on following the script, rather than learning about the product, how people might value it, and how value might be threatened.”
– Michael Bolton

If you want people to mentally engage with a system you can’t give them a series of detailed instructions because those instructions become the goal. Instead of a test case you’ll find it’s more powerful to have a person explain it and ask them to achieve it themselves, as they can make sure they’re in a good place to begin and early questions are all answered.

What’s the advantage of having a test case perform this communication? You chose to explain it to me with a series of sentences that told me more than the test case did - notice that you did not give me a test case. You would have to show a new tester how to access and set up the test system and data anyway in order to do the case. It just feels like an attempt to find a use for test cases where better tools are available.

If you do decide to use test cases you still have to find which ones you believe communicate suitable learning for a new tester, which you could use to give a higher-level explanation adapted to new testers. You’d also be relying on the accuracy of those test cases. Existing testers, who treat them like the onerous oracles they are, understand that a difference between a test case and reality doesn’t mean that it’s reality that’s incorrect - but new testers may treat them with undeserved authority. Test cases are also interpreted differently by different people, and new testers are different in a particular way: they don’t have the same domain understanding and mental models of the product. You’re also robbing yourself of the potential of a fresh look at your product by dictating how it should be used - finding that a tester cannot perform simple tasks in your product through general use indicates interesting usability and explainability issues.

Most of all testers are human beings, and an interactive system is more useful and engaging, with smaller feedback loops. You can manage expectations, praise good work and answer questions.

Test cases are pointlessly expensive and low value. There are just better ways to form that communication that have more power and flexibility with lower maintenance and writing costs. I think that people grip onto test cases because they think that code makes computers work, so human code makes humans work - but it doesn’t, and when we realise that, the finicky formality of test cases just looks silly.


Yeah, and look I do get your points above. I agree, and this is my style of learning also.

To expand on the discount scenario/example then.

Some bumpf might be:

Discounts are calculated as a combination of Customer Discount Code (Set against the customer), Item Discount Code (Set against the Item), and Order Type Discount Code (Set against the Order Type). The discount table (DiscountValues) will have combinations of the above 3 values together with a discount percentage.

The test case to give you your “Blue Peter” moment - “here’s one I prepared earlier” - is:

Select account: X
Go to Item: Y (Base price of £1)
Select Order Type : Standard
Expected Result: Item price displayed as £0.90 (10% discount).
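As a toy sketch, the lookup behind that expected result might look like this - the table shape, the code values and the rounding are my assumptions for illustration, not taken from the real system:

```python
# Hypothetical sketch of the DiscountValues lookup described above.
# Codes, table layout and rounding are invented for illustration only.

DISCOUNT_VALUES = {
    # (customer_code, item_code, order_type_code) -> discount fraction
    ("X", "Y", "Standard"): 0.10,  # the 10% saving from the worked example
}

def discounted_price(base_price, customer_code, item_code, order_type_code):
    """Apply any matching discount; unmatched combinations pay full price."""
    discount = DISCOUNT_VALUES.get(
        (customer_code, item_code, order_type_code), 0.0
    )
    return round(base_price * (1 - discount), 2)
```

Running it against the worked example, a £1.00 item for account X, item Y, order type Standard comes out at £0.90, and any combination not in the table pays full price - which mirrors the “here’s one I prepared earlier” test above.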

Now this should be enough for you to go and have a dig around/explore this functionality a bit more and ask any questions you want to ask.

Granted, it would probably take me no more than 5 mins to explain to you in person.

I’m trying to have some artefacts for simple functional elements that allow someone to learn (or rather, give enough to start an exploration process) without perhaps needing to shadow someone round the clock.

But yeah, I’m in your camp for sure, and I know this is a divisive topic.

If the article gets published it has various approaches included, but in short if you want hands off artefacts there’s lots of tools available.


Hopefully we’ll see that then :+1::+1:

Tag me in if it does.


Hi @testerfromliverpool,

An approach that’s worked well for me in various contracting contexts is to use reconnaissance charters.

I’d run them after having some chit-chats with product people and developers to get a sense of what they’ve been working on and what’s next. I start by creating exploratory testing goals. They might look something like this:

Explore the login feature to discover useful information about how login works
Explore the bookings API to see what requests and responses are available
Explore the checkout feature to discover how it works and capture questions to ask my new team

The exploration goal provides a handy high-level constraint to keep me focused yet still allows me to discover stuff for myself. I’d then set a strict timer for each exploratory session, typically 45 to 60 minutes. And for each session, I’d capture written notes about what I’m observing. For a reconnaissance session, I’m more likely to capture lots of questions - which I would then ask whoever I think I need to ask after the session, unless I’m blocked. Or I might answer them myself a bit later in the session as I explore things.

This high-level approach has come in handy when faced with something I have pretty much zero experience with. Here’s a video I created of an exploratory testing session which touches on the approach.


The application can be split up beforehand into charters as a framework. Then, provided there are some notes for each charter, that should be sufficient for the simple ones; for others we may need to go through them verbally as well.
Then set a target for going through a few each day - e.g. 1 hour of exploration, 1 hour for notes.


That sounds like a great plan. Nice one, @testerfromliverpool.