Testing is essential. But when it comes down to it, there's still one big question: are you testing manually, or are you automating the process?

And if you're wondering: yes, there's a difference. Both have their place, but in this chapter we're going to focus on manual testing, where real humans (that's you) are the ones driving the tests. We'll cover what it is, who does it and how to organize it all without losing your sanity.

Manual testing is exactly what it sounds like: testing done by humans. It's the opposite of automated testing, where specialized software handles the heavy lifting.

However, the whole idea of dividing testing into "manual" and "automated" is a bit odd if you think about it. The only part that's truly automatable is the test execution itself. Everything else, like creating test cases, analyzing results and figuring out what even needs testing, is still pretty manual in both processes.
But as AI keeps advancing, that boundary is getting thinner every day. Soon enough, humans may have even less work to do. But hey, let's not get ahead of ourselves: Chapter 8 dives into the future of AI-driven testing. For now, let's stick to the manual side of things.

The people who perform manual tests are often called "manual testers." And, naturally, manual testers come in two main breeds:

- Professional testers
- Occasional testers

If we're being cheeky, we could add a third category: accidental testers, those users who stumble across bugs in production that weren't caught earlier. But if you're reading this far, we trust you wouldn't let that happen. Right?
Professional testers are well-versed in testing methodologies and know how to root out software bugs. For them, the line between "manual" and "automated" doesn't really exist; they're skilled at both.

Occasional testers, on the other hand, are usually employees recruited to test software they'll be using on the job. They're the SMEs (subject matter experts) in their respective business processes, and that's why you'll often hear them referred to as "business testers."
When planning a manual testing effort, there are a few key questions you need to answer. Don't worry, they're straightforward, but it's the details that will make or break your success:

1. What needs to be tested?
2. How does testing fit into your development process?
3. Who will do the testing?
4. How will you brief your testers?
5. Which testing approach will you use?
6. How will testers access the application?
7. How will problems be reported?

Sounds like a lot, right? Don't panic. These questions are simpler than they seem. The real challenge comes when you skip over them entirely, and that's where most organizations trip up. But don't worry, we've got you covered. Let's break each one down.
Before diving into any other types of testing, your first priority should be making sure developers perform unit testing on their code. This helps catch trivial programming errors early. If anyone objects, just tell them it's called shifting left; trust me, they'll be so impressed with your jargon that they won't dare push back.
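As a sketch of what this looks like in practice, here is a minimal unit test in Python. The `lead_score` function and its business rule are invented for illustration; any language with a test framework works the same way:

```python
def lead_score(revenue, employees):
    """Toy business rule (hypothetical): score a sales lead from company size."""
    if revenue < 0 or employees < 0:
        raise ValueError("inputs must be non-negative")
    return min(100, revenue // 10_000 + employees)

# Unit tests catch trivial programming errors early: this is "shifting left".
def test_lead_score_caps_at_100():
    assert lead_score(2_000_000, 500) == 100

def test_lead_score_rejects_negative_input():
    try:
        lead_score(-1, 10)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the guard clause works as intended

test_lead_score_caps_at_100()
test_lead_score_rejects_negative_input()
print("unit tests passed")
```

Developers run checks like these on every change, so the most trivial bugs never reach your manual testers at all.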
When you add something new or change something existing in your application, anything can break. But you don't have the luxury of testing everything all the time, right? Even if you did, it probably wouldn't make sense. You need a strategy. Here's a recipe that works for manual testing most of the time:

1. Test everything that's new.
2. Test everything that has changed.
3. Test the features directly connected to what's new or changed.
4. Test anything that's at risk of breaking as a side effect.
5. Test anything that would cause serious damage if it broke.

Points 1, 2 and 3 are pretty straightforward. But points 4 and 5? That's where things get tricky. There's no magic method for knowing what's at risk of breaking. If you've worked with the application long enough, you'll develop an instinct for what tends to go haywire when things change. But what if you're new or just inherited a new application? So far, Marvel hasn't released a superhero with test intuition superpowers, so you're on your own there.
It's important to note that "test everything that's new" is a bit more vague than it sounds. For example, let's say a new report feature is introduced. You'll want to open it, check the results and make sure it works as expected. Great, right? Well, not exactly. Have you tried to break it? Did you test it in scenarios where things could go wrong, like whether it's accessible to users who shouldn't have access, or editable by people who should only be viewing?

This is where risk-driven testing comes in. The goal is to prioritize what you test based on the likelihood of something breaking and the severity of the consequences if it does. New features are more likely to break than ones that have been around for a while. But that doesn't mean you can ignore existing features, especially those related to new changes.

In summary, always prioritize testing that focuses on high-risk areas. New or changed features are the most obvious, but don't forget to test components that are indirectly affected and could cause significant damage if they fail.
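One way to make "likelihood times consequences" concrete is a simple risk score. A minimal sketch in Python; the feature names and the 1-to-5 scales are illustrative assumptions, not from this chapter:

```python
# Each area gets a likelihood-of-breaking and an impact-if-broken score (1-5).
test_areas = [
    {"name": "new report feature", "likelihood": 5, "impact": 4},
    {"name": "lead conversion",    "likelihood": 2, "impact": 5},
    {"name": "legacy export",      "likelihood": 1, "impact": 2},
]

def risk_score(area):
    """Risk-driven testing: prioritize by likelihood multiplied by impact."""
    return area["likelihood"] * area["impact"]

# Spend your limited manual-testing time on the riskiest areas first.
prioritized = sorted(test_areas, key=risk_score, reverse=True)
for area in prioritized:
    print(f'{area["name"]}: risk {risk_score(area)}')
```

Even a rough scoring pass like this beats no prioritization: it forces you to write down what could break and how much it would hurt.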
We know you are so looking forward to testing, but first, take a deep breath and make a plan. Once you know what needs testing, you'll be able to estimate the effort and decide what skills your testers will need.

If you're practicing DevOps or agile software development, your test plan should already be baked into your process. Right? The whole idea of agile is frequent, incremental deployments with minimal disruption, which means testing should be ongoing and almost continuous; ideally, much of it automated.

But if you're not quite there yet, that's okay. Most teams aren't. Just know that if you're not practicing agile fully, you're missing out on some of the benefits of DevOps, like the ability to catch defects early and accelerate software development.

There's Waterfall, there's Agile, and then there's what most companies actually do, which is somewhere in between. Let's call it "more agile" or "less agile." Assess where you are honestly, and aim for continuous improvement.
In agile, the first round of testing is often manual testing to give developers quick feedback. This testing is ideally fast and continuous, whether done by a dedicated tester or someone balancing testing as a part-time role. And yes, manual testing can sound costly, but frequent, incremental testing is usually faster and cheaper than massive, infrequent testing campaigns.

Once new functionality is manually tested, the next step is to automate those tests so you don't need to keep testing the same things manually. These automated tests can run whenever needed, reducing the burden on your testers, though someone will still need to click that start button until you fully integrate automated testing into your pipeline.

Now, you might ask, "Why manual testing first if automated testing is supposed to be faster?" The truth is, while automated tests are faster to run, setting up automation takes time, and a quick manual test first can save you that time.
If you're deploying infrequently and in larger chunks, you'll need full testing campaigns with more people. This may sound expensive compared to agile's frequent testing, but it's not necessarily. The difference is that continuous testing spreads the workload over time, while a "big chunk" approach is less frequent but higher-effort. The latter can be slower and riskier, but it's sometimes necessary.

Even in agile environments, there's room for bigger testing campaigns. Many teams deploy frequently to pre-production environments but roll out to production less often. In this case, larger User Acceptance Testing (UAT) campaigns can make sense, especially if they're aimed at ensuring real users are comfortable with the system or giving formal approval for a release.

Your testing plan will depend on your overall development process. Agile? You're likely testing continuously in small increments. Less agile? You're probably organizing bigger testing campaigns. Either way, the goal remains the same: find and fix defects fast to improve productivity and reduce risks.
The people you have available for testing play a huge role in how you organize your manual testing effort. If you're working with business users, you can't expect them to design tests from scratch. That's where you or a testing pro comes in to create the tests for them. There are two primary approaches for business users:

- Scripted testing
- Task-based testing

In scripted testing, you provide testers with detailed "type this, click that" instructions. It's essentially robotic work for humans, but there's a better way. Enter task-based testing, where you give your testers a broader task like "create a new lead" or "run a report on recently opened opportunities."

The beauty of task-based testing is that it's more realistic. You won't know every step your testers take, but you'll know if they complete the task successfully or encounter any issues. The tasks should be well scoped, but unlike scripted tests, the tester isn't hand-held through every click. They're given a goal, and it's their job to achieve it and report anything unexpected along the way.
The third approach, exploratory testing, is best suited for professional testers and subject matter experts (SMEs). Now, a business user might occasionally be motivated enough to dive into exploratory testing, but let's be honest, that's rare.

Exploratory testing takes things up a notch. Instead of rigid instructions, the tester is given a general brief about the business process and then asked to, well, explore. It sounds more fun, right? Imagine telling Julie, "You're not working, you're exploring!"

We'll dive deeper into these approaches in later sections, but here's the truth: in real life, you often don't get the ideal testers you'd hope for. Instead, you get the ones who happen to be available.
If you have business users testing, let them focus on testing business processes: that's their wheelhouse, and they'll do it well. If you have professional testers, they can push the application's limits and test those "unlikely scenarios" that, trust us, will happen in real use. People always manage to find creative ways to break things.

Having at least one experienced professional tester on your team is invaluable. Not only will they improve your testing quality, but they'll also free up your business testers to focus on what they do best.
The skill level of your testers will dictate the kind of guidance they need. You've essentially got three options for briefing them:

- A detailed step-by-step script
- A task description
- A goal with a general brief

This applies to three very different kinds of testers:

- Testers who know neither the business process nor testing
- Business users who know the business process but not testing
- Professional testers who know testing but not the business process

There's also a fourth category: people who are experts in both the business process and in testing. But like unicorns, they are rare. If you have one on your team, congratulations!
The testers who "know nothing" need a detailed step-by-step description of each task. For example:

1. Log in as the test user.
2. Open the Sales app and go to the Leads tab.
3. Click "New" and fill in the required fields.
4. Click "Save."
5. Verify that the new lead appears in the Leads list.

For this kind of testing, you don't need an expert in the business process; anyone who knows the basics of Salesforce can follow a scripted test. But, ideally, you shouldn't be doing this kind of testing at all. If you've got time to prepare such detailed instructions, you've got time to automate those tests.

With modern test automation tools (especially AI-based ones), creating automated tests isn't much different from writing out a detailed, human-readable script. Excited to read Chapter 8 on automated testing yet?
This script example also highlights the essential elements of good test design. A well-defined test will include these key parts:

- Preconditions: the state the system and the test user must be in before starting
- Steps: the actions to perform, in order
- Expected results: what the tester should observe if everything works
- Test data: the specific values to use
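These elements map naturally onto a simple data structure, which is handy if you manage test cases outside a dedicated tool. A minimal sketch in Python; the field names and example values are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One well-defined manual test: preconditions, steps, expected result, data."""
    name: str
    preconditions: list
    steps: list
    expected_result: str
    test_data: dict = field(default_factory=dict)

create_lead = TestCase(
    name="Create a new lead",
    preconditions=["Logged in as a sales user"],
    steps=[
        "Open the Leads tab",
        "Click New and fill in the required fields",
        "Click Save",
    ],
    expected_result="The new lead appears in the Leads list",
    test_data={"last_name": "Example", "company": "Acme Corp"},
)

print(create_lead.name, "-", len(create_lead.steps), "steps")
```

Whether you store tests in a spreadsheet, a document or code, keeping these same fields for every test makes them easy to review, hand off and eventually automate.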
For business testers, a task-based approach works best. By assigning them a task, like "Create a new lead" or "Find and run a report of recently opened opportunities," you're seeing if they can accomplish the task without a detailed script. This approach mimics real-life scenarios.

Here's an example:

Task: Create a new lead for a prospect who contacted you by email, and verify that it shows up in your list of open leads.

For professional testers or more advanced business testers, a goal-based test is ideal:

Goal: Verify that the new report feature works correctly, and that it cannot be accessed or edited by users who shouldn't have those permissions.
This kind of test is more abstract but gives the tester the freedom to explore areas where things could go wrong in the application. Goal-based tests are perfect for testers who know how the application should work and what problems to look for.

Yes, you read that right: areas where things could go wrong. Professional testers aren't just checking to see if the application works; they're trying to prove that it doesn't. That's the attitude that makes them effective.
When briefing your testers, ask yourself: what don't they know? Professional testers generally need a thorough brief about the purpose of the application, any new or changed features and the areas with the highest risk. Once they understand the context, they know how to test.

On the other hand, teaching testing strategies to business users can be a waste of time, no offense to them. It's not that they couldn't learn, but it's not their focus. They've been asked to test the application, and they'll do it, but their main job isn't testing. If you have a curious business user, take the opportunity to introduce them to some basic test design concepts; they might become a great asset to your team, a potential unicorn in the making.

There's one thing, though, that you must teach all your testers: how to report their findings. We'll cover this in detail later in the chapter.

Also, make it a habit to always explain the purpose of a test: what are we testing, and why does it matter? Giving testers this context improves their motivation and sharpens their focus, helping you get better, more effective results.

And don't forget: testing isn't intuitive for most people. Remind your business testers that finding a problem, or not knowing what to do, is a win. That's the entire point of testing: to discover what's not working as expected.
In task-based testing, a tester is given a task, like "Create an opportunity" or "Add a date filter to a report." Sometimes this task can be as complex as an entire business process, end to end. The advantage of task-based testing is that the tester is simulating how a real user would interact with the application. As a result, they're more likely to encounter issues that a scripted test might miss. For example, the app might technically function as intended, but a button might have an odd label that confuses the tester. This is a usability issue that should be fixed. A scripted tester, however, would follow instructions and might miss the issue entirely.
Exploratory testing takes task-based testing a step further. In addition to trying to accomplish the task (often referred to as the "happy path"), an exploratory tester will ask questions like:

- What happens if I enter invalid or unexpected data?
- What happens if I skip a step, or do the steps in a different order?
- What happens if I hit the back button, or click Save twice?

In short, exploratory testers don't just follow instructions; they explore the app, looking for unexpected behaviors. And yes, sometimes they even ask ridiculous questions like, "What if this solved world peace?"
Scripted testing, by contrast, is more rigid and less creative. Its merit lies in its repeatability, but for that very reason, it's better suited for a test automation robot. If a human tester is simply following a script, it's likely a waste of their potential.

That said, exploratory testing, while powerful, has its drawbacks: it lacks repeatability. A tester might stumble across a problem but not remember how they got there. Worse yet, after the problem is fixed, they might not know how to test if it was resolved correctly. Don't worry, we'll cover these challenges later in the book.
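To see why a script is robot-friendly, note that it is just an ordered list of deterministic steps. Here is a toy sketch of a script "runner" in Python; the step format and the fake UI dictionary are invented for illustration, not any real automation tool's API:

```python
# A scripted test is just data: "type this, click that" steps a robot can replay.
script = [
    ("type", "lead_name", "Acme Corp"),
    ("click", "save_button", None),
    ("expect", "status_message", "Lead created"),
]

def run_script(script, ui):
    """Replay scripted steps against a (fake) UI; return a list of problems found."""
    problems = []
    for action, target, value in script:
        if action == "type":
            ui[target] = value
        elif action == "click":
            # Toy stand-in for real UI automation: clicking Save "creates" the lead.
            if target == "save_button" and ui.get("lead_name"):
                ui["status_message"] = "Lead created"
        elif action == "expect" and ui.get(target) != value:
            problems.append(f"{target}: expected {value!r}, got {ui.get(target)!r}")
    return problems

print(run_script(script, {}))
```

The point is not this toy runner itself: it is that every step is unambiguous and repeatable, which is exactly the property that makes scripted tests a natural fit for automation and a dull job for a human.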
It may sound ridiculous, but one of the biggest challenges in manual testing is simply getting the application into the hands of the testers. Most Salesforce testing campaigns start with confusion: finding the right Salesforce org, using the correct user credentials and ensuring everyone has the appropriate permissions.

Besides just wasting time, these issues kill the motivation of testers. So make sure you've planned ahead: ensure the right version of the application is in the test environment, verify test user IDs and double-check that permissions are correct. And don't forget to provide written instructions to every tester. Why? Because they'll forget most of what you tell them (they're human, after all). Unicorns, on the other hand, don't forget anything. But good luck finding one.
Before diving into the testing tasks, you should perform a smoke test. This is a quick round of basic tests to check if the application is stable enough for more in-depth testing. Occasionally, during active development, developers release a version that's... let's just say, not quite there. The smoke test ensures the app's basic functionality is intact, so you're not wasting your testers' time.

The best part? Smoke tests should be fully automated. They're a simple but crucial step to prevent major headaches later.
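A smoke suite is essentially a short list of fast checks that run before any deeper testing begins. A minimal sketch of such a runner in Python; the individual checks here are placeholders, since real ones would hit your test org (a login page, a key API endpoint, and so on):

```python
# Each smoke check is a fast, basic function returning True if the app looks alive.
def can_reach_login_page():
    return True  # placeholder: in reality, e.g., an HTTP GET that must return 200

def can_open_home_tab():
    return True  # placeholder: in reality, a quick UI or API sanity check

smoke_checks = [can_reach_login_page, can_open_home_tab]

def run_smoke_tests(checks):
    """Return names of failed checks; an empty list means stable enough to test."""
    return [check.__name__ for check in checks if not check()]

failed = run_smoke_tests(smoke_checks)
if failed:
    print("Stop: build not ready for testers:", failed)
else:
    print("Smoke tests passed; proceed with in-depth testing.")
```

Because the suite is tiny and fully automated, you can run it on every new build and never again send a broken version to a room full of waiting business testers.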
Testing is essentially useless if the problems your testers find aren't properly recorded. And let's face it, even the most professional testers tend to forget one issue the moment they stumble on the next. That's why problem reporting needs to be seamlessly integrated into your process, and testers need constant reminders to report problems as soon as they find them.

If you have a problem tracking system like Jira, use it. If not, a document with a problem report template will do. But the key is to capture the right information. A great problem report typically includes these components:

- A short, descriptive title
- The steps to reproduce the problem
- What was expected, and what actually happened
- The environment: which org, user and version was being tested
- Screenshots or other attachments, where helpful

Finally, problem reports usually note the severity of the issue. But don't leave this to your testers. It's better to assess the severity yourself or hand it off to someone who can evaluate it from a business perspective.
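Even without a tracking tool, a tiny template keeps reports consistent and complete. A sketch in Python; the field names are illustrative assumptions, and a shared document with the same blanks works just as well:

```python
def problem_report(title, steps, expected, actual, environment):
    """Render a consistent, fill-in-the-blanks problem report as plain text."""
    lines = [
        f"Title: {title}",
        "Steps to reproduce:",
        *[f"  {i}. {step}" for i, step in enumerate(steps, 1)],
        f"Expected: {expected}",
        f"Actual: {actual}",
        f"Environment: {environment}",
        "Severity: (to be assessed by the test lead)",
    ]
    return "\n".join(lines)

report = problem_report(
    title="Save button mislabeled on new report page",
    steps=["Open the new report feature", "Click the button labeled 'OK'"],
    expected="Button is labeled 'Save'",
    actual="Button is labeled 'OK', which confused the tester",
    environment="UAT sandbox, business-tester profile",
)
print(report)
```

Note that the severity line is deliberately left for the test lead to fill in, matching the advice above: testers report facts, and someone with the business view assigns severity.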
When business testers are involved, it's best to have them test simultaneously. While technology allows people to test at their convenience, these testers usually have other full-time responsibilities, making it easy for them to forget about testing altogether if you don't set a specific time. Delivering your brief just before the tasks also ensures that the testing pipeline runs smoother and more efficiently. Plus, your dedicated supervision and assistance during testing will increase productivity and help keep everything on track.

If you have multiple testers, assign different tasks to each one. This reduces the number of tests any one person has to complete and ensures they're more likely to focus and engage with the tasks. It also increases the chances they'll show up for future testing cycles. In short, respect their time by being organized and giving them manageable workloads.

A good strategy is to have every tester run through the happy path of the business process to catch any major usability issues. Then, assign different testers to handle edge cases and error behaviors. This way, you maximize coverage and ensure all potential issues are addressed.

Finally, provide clear, written instructions to all testers, and be available to clarify any questions that may arise during the testing process. This proactive approach will keep things running smoothly and make sure all testers stay on the same page.
Manual testing will always have a role in your testing strategy. As we've discussed, sometimes it's more cost-effective than automation, depending on the context. In the next chapter, we'll dive into calculating the Return on Investment (ROI) for test automation and how to determine when automation makes sense.

Planning is non-negotiable. Even if your testing needs are straightforward, you'll still need to consider the seven key questions we covered in this chapter. Simpler features might call for a simpler plan, but you can't skip the planning phase.

Your human testing resources are valuable. Respect their time and plan thoughtfully, especially if you want to maintain a strong, engaged team of testers in the future.

So, ready to reduce your reliance on humans? In the next chapter, we'll explore how AI is transforming the testing game and improving the ROI of automation.