Hire the Right Person: 5 Traits to Look for in a Software Tester


One of the biggest obstacles to releasing products faster is a lack of skilled people to do the necessary work. Acquiring skilled software testers in particular is time-consuming and expensive. Few small and mid-size organizations have the budget to employ many testers, so they have to ensure every hire brings maximum value to the organization.

To find your ideal fit, here are five qualities to pay attention to when hiring software testers.

1. Curiosity

This is one of the most important traits of a software tester. Curiosity is in a tester’s DNA. Testers should question the product like an end-user, consider edge cases that other people may not think of, and have the mindset to learn anything that would help them become better at their craft. All of this starts with the curiosity to learn, practice and explore the application.

Here are some ways you can identify if a tester is curious:

  • Ask them a question and see if they ask clarifying questions to understand the context of what you are asking
  • Give them a problem to solve and ask them to take you through their thought process
  • Tell them about a new product in your company and see if they ask follow-up questions to know more
  • Ask them about a time when there were many unknowns in a project or an application and have them explain how they went about handling the situation

2. Experience and skill set

In this day and age, experience trumps qualifications. If one candidate has a master’s degree in computer science from a reputable school but no experience, and another has 10 years of experience but no degree, I would pick the second person.

Take time to scan through the tester’s profile and identify different companies, projects and applications the person has worked on. Try to understand how those experiences are beneficial to your current project needs. Yes, you learn a lot of great things in school, but the corporate environment is a different beast.

Also, you need to look at the tester’s skill sets. For example, if a job requires extensive programming in Python, then the tester should definitely have skills in programming and preferably some in the Python language. Similarly, if you are looking for someone to do API testing, it helps to have a tester who has used tools like Postman, SoapUI and other API tools.

However, if a tester has an aptitude for learning and a proven record of contributing to teams, then some skill sets can be acquired during the job. Lack of prior experience with a specific language or tool should not necessarily be a deal-breaker.

3. Interest in growing

It is important to figure out if a tester has a growth mindset. Apart from things learned at work, the tester should show interest in acquiring different skill sets by attending conferences, taking courses, and contributing to the testing community in some shape or form. This is what makes them contribute more effectively and broaden the team’s horizons.

As Steve Jobs once said, “It doesn’t make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.”

4. A team player

You can be smart and have various accomplishments on your resume, but if you are not ready to work with a team, nothing matters. You succeed or fail as a team; one individual contribution alone does not make a product successful.

During the hiring process, ask for examples of when the tester has collaborated with different teams. Some questions you could ask are:
  • When have you had to collaborate with different teams to fix a problem?
  • Do you like to work alone, or in groups? Why?
  • What does your ideal workday look like?
  • How do you react when your task is delayed by another person finishing a dependent task?

5. Communication

A considerable portion of a tester’s day is communicating product information to various stakeholders. Testers are essentially information brokers, helping teams make educated decisions on the product before it is released to the customer. The information could be related to bugs, feature requests, red flags, test data, testability challenges, test coverage or any ideas to make the product better.

Before you hire a tester, pay attention to the way they communicate — not just their words, but also the tone of voice and body language — and ask questions to gain more insight into their communication style when working within a team.

All these traits help you identify software testers who can contribute effectively to the team and to the company’s growth.



Thinking Out-of-the-Box in an AI-Centric World


According to 2018 research by the management consulting firm McKinsey & Company, AI could deliver about $13 trillion in additional global economic activity by 2030.

As more companies move toward incorporating AI into their existing business systems, it becomes crucial for software testers to consider how this technology will change the way they — and their product’s users — interact with these systems.

AI’s impact on end-users

AI-based systems have hugely influenced our lives already. Things we thought weren’t possible have become a reality.

Researchers at UC San Francisco built an AI model that could detect the onset of Alzheimer’s disease an average of six years before a clinical diagnosis. They did two rounds of testing; in the first round, the machine-learning algorithm correctly identified patients who developed Alzheimer’s with 92% accuracy, and in the second round, with 98% accuracy.

But while AI-based systems have been able to comb through millions of datasets to find patterns and gain new insights, this practice has also caused significant problems in the area of data, privacy, security and biases.

The lifeline of AI-based systems is data. A large amount of user information is needed to train AI models to make the right predictions. But when consumer data feeds these models, there is a real risk of security breaches as data flows between different systems. According to one risk report, there were 5,183 data breaches in the first nine months of 2019 alone — a 33.3% increase over the same period the year before. A total of 7.9 billion records were exposed.

Another toxic byproduct of AI-based systems is the impact on race, culture, diversity and other human social aspects. Do you recall such unsettling news as Google Photos classifying Black people as gorillas; Microsoft’s Tay, an AI chatbot that quickly began spitting out racist tweets; and the Beauty.AI algorithm deeming only white people beautiful?

When AI models are being used to make decisions about humans, rather than humans using AI models as an aid to make informed decisions, we risk becoming slaves to these algorithms, whether we realize it or not.

How do testers ensure that AI is safe for human consumption, and how do we interact with these systems?

Interacting with AI-based systems

As testers, our minds are trained to think of different failure scenarios that could happen in production. We put ourselves in the shoes of an end-user and exercise the application the way they would use it. This helps to uncover a lot of critical information about the application.

The same applies to AI-based systems. We have to think about edge cases when providing different data sets to train the AI model. For example, say we are training an AI model for autonomous cars. Instead of only feeding the model clear images of stop signs, we should also supply images of stop signs covered with snow or graffiti. This tests the AI-based system under real conditions it would encounter. These are the edge cases we need to think about when interacting with these systems.
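
As a rough sketch of what generating such edge cases might look like in practice, the short Python script below produces occluded variants of training images (grey patches standing in for snow or graffiti) before they are fed to a model. The directory names, patch sizes and use of the Pillow library are assumptions for illustration, not a description of any specific team’s pipeline.

    import random
    from pathlib import Path

    from PIL import Image, ImageDraw  # Pillow

    def add_random_occlusion(image, patches=5):
        """Paste grey rectangles over the image to mimic snow or graffiti."""
        augmented = image.convert("RGB")  # work on an RGB copy
        draw = ImageDraw.Draw(augmented)
        width, height = augmented.size
        for _ in range(patches):
            x0 = random.randint(0, max(1, width - 10))
            y0 = random.randint(0, max(1, height - 10))
            x1 = min(width, x0 + random.randint(5, max(6, width // 4)))
            y1 = min(height, y0 + random.randint(5, max(6, height // 4)))
            draw.rectangle([x0, y0, x1, y1], fill=(200, 200, 200))
        return augmented

    if __name__ == "__main__":
        source_dir = Path("data/stop_signs")            # hypothetical locations
        target_dir = Path("data/stop_signs_occluded")
        target_dir.mkdir(parents=True, exist_ok=True)
        for path in source_dir.glob("*.png"):
            add_random_occlusion(Image.open(path)).save(target_dir / path.name)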

Also, remember that an AI system’s inner workings are a black box. We do not know how the AI model forms different relationships based on the data sets or how it makes decisions. Keeping this in mind, use more inclusive data sets to reduce biases, have an audit process to ensure the learning of the AI model is according to your expectations, and test for adversarial attacks. (Just like other applications, AI-based systems are also prone to attacks.)

Finally, while advancements in AI continue to evolve, it is essential to upgrade our skills by learning new technologies and programming languages to stay relevant in the industry. After all, being curious, continually learning, and applying critical thinking skills is the essence of what makes us human and differentiates us from algorithms and machines.


The Power of Bug Bashes


Organizations have many checkpoints to catch defects as early as possible: unit tests, automated tests, manual tests, smoke tests, regression tests, acceptance tests and more. All these types of testing are done to find defects, fix them quickly, and prevent them from leaking into production, saving the company considerable time and cost.

Apart from these testing approaches, organizations can also use another powerful defect-finding technique: the bug bash.

What is a bug bash?

A bug bash is a testing event with internal employees. Teams from different parts of the company collaborate to find bugs in the application within a specific time-boxed testing session, with prizes or awards given out at the end.

The bash is typically a 60-minute session, organized as follows:

  • 5 minutes for the introduction
  • 40 minutes of focused group testing
  • 15 minutes of debriefing

The time splits may vary based on who organizes the event, but the general format remains the same. Usually, the organizers prepare a couple of slides showing different areas to be tested in the application. This helps teams when they are stuck and run out of ideas. The organizers also select a channel for teams to communicate with each other and document defects — typically an app such as Slack, Zoom, Skype, etc.

Advance notification is given to the teams about the event so that they come prepared with prerequisites such as a set of devices, a certain environment setup, system access, and other resources they may need during the session. Food and drinks are provided to all the teams to incentivize participation and encourage a fun atmosphere.

The people conducting the bug bash session do not participate in the event themselves; instead, they go around the room and support the participants.

Bug bashes are usually organized just before a major release of a product so that the teams’ insights will help inform the final rounds of testing. This is not a replacement for any part of the test team’s testing process, but rather a complement to it.

Advantages of having a bug bash

Finding different kinds of bugs

When a team works on a particular application for a long time, they start to develop biases by seeing the same features repeatedly. Subtle changes and differences in the application may go unnoticed.

With bug bashes, you get participation from people who do not work directly with the application, like UX engineers, business analysts, product managers, ScrumMasters and customer support. They see the application from different viewpoints and can find bugs that have been missed during the regular development and testing process, before the product is released to customers.

Collaborating across teams

Since people from different teams and roles collaborate to find bugs, a bug bash encourages open communication among employees who may not usually interact. They get to learn the application while exploring the features, which gives your product visibility to the entire company. And because it’s structured like a competition, team members get a fun and collaborative experience while getting to try out a job role that’s new to them.

Encouraging friendly competition

To encourage teams to take the event seriously, bug bashes are incentivized with prizes. Based on the budget allocated for the event, teams can get rewards for the best bug, the most bugs, and the most interesting bug. This makes the teams focus on finding as many bugs as possible during the testing session.


Bug bashes are an innovative way to harness the whole company’s power to find bugs in your application. It takes time and effort to organize such an event, but the benefits outweigh the costs. It is amazing to see the different types of information you can uncover about your product when people from different roles collaborate to test it out.


Write Tests That Fail


High-quality projects need to deliberately practice failing tests in order to help ensure that successful tests succeed properly. It may sound controversial or counter-intuitive, but these examples show how this idea embodies basic principles of test-driven development (TDD) in four distinct ways.

1. TDD best practice

TDD’s key methodology is “Red, Green, Refactor” (RGF). In other words, the very first step of proper programming is a test that fails. Numerous articles elaborate the intentions of RGF; for the moment, just recognize that development only begins with a Red.
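
As a minimal illustration of that first step, here is a pytest-style test written before any implementation exists; the module and function names are hypothetical. The very first run is Red by design.

    # test_pricing.py -- written before the production code, so the first
    # run fails (Red): the import itself raises ModuleNotFoundError until
    # a pricing module with apply_discount() is created.
    from pricing import apply_discount  # hypothetical module, not yet written

    def test_ten_percent_discount_is_applied():
        assert apply_discount(price=100.0, percent=10) == 90.0

Only after recording that failure does the programmer write just enough code to turn the test Green, and then refactor.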

As I’ve written elsewhere, even this simplest possible model presents at least one severe practical problem: Red never leaves the individual contributor’s desktop. It’s nearly universal practice for Red to have no shared artifact. It’s equally universal, at least from the reports dozens of senior developers have shared with me, to skip Red accidentally on occasion. Even the best programmers do it. The consequence is inevitable: implementations that do something other than what was intended. That’s a true failure.

At this point, I have no remedy more satisfying than moral exhortation. Conscientious programmers need to take Red seriously, at the start of every RGF cycle. Red is essential, even though experience shows that it’s often overlooked, and its neglect always harms the quality of the final implementation.

2. Handling exceptions

An entirely different kind of test failure is a unit test confirming that an implementation correctly handles an error. Test frameworks generally support this kind of test well: It looks like just another verification of a requirement. Instead of verifying, for instance, the “happy path” that 2 + 2 is indeed 4, a distinct test might confirm that 2 + “goose” accurately reports, “The string ‘goose’ cannot be added to the integer ‘2.’”

Test frameworks usually have the technical capability for this requirement. If a framework can verify that 2 + 2 yields 4, it can equally well verify that 2 + “goose” yields a specified error.
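
In Python, for example, the article’s arithmetic illustration maps directly onto a pytest check: 2 + "goose" raises a TypeError, and the test asserts both the happy path and the error path. (The matched wording below is Python’s own message, not the article’s illustrative one.)

    import pytest

    def test_happy_path_addition():
        assert 2 + 2 == 4

    def test_adding_string_to_int_reports_an_error():
        # The failure itself is the requirement: the operation must be
        # rejected, and the report must name the offending types.
        with pytest.raises(TypeError, match="unsupported operand type"):
            2 + "goose"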

The problem is that organizations too rarely specify these requirements. In isolation, the incentives for marketing, product, engineering or other departments are to focus on features — ”affordances” — with positive capabilities. Decision-makers don’t often sign for services based on high-quality exception-handling. The best that can happen, it appears, is that user experience, technical support or another secondary department makes a point of advocating for and documenting requirements having to do with errors. Once those are in writing, engineering and QA generally test these requirements adequately. Teams need to be aware, though, of the importance of testing error-handling, as well as the need to be alert to its unintended absence.

3. Warning lights

Return, for a moment, to the second and third steps of RGF. TDD teaches that the whole programming sequence should be relatively quick and lightweight; “heavy” development segments into multiple manageable RGF cycles.

Sometimes it happens, though, that a Green or Refactor step doesn’t go as planned. Errors turn up. Progress stalls.

This is important information. These errors in RGF are a symptom of a design or architecture that deserves improvement. Be sensitive to errors that turn up in these stages, as they can be guides to hotspots that might deserve rework — that is, additional RGF cycles.

4. Validating the validators

A fourth and final kind of test failure for your attention has to do with false positives that turn up in systematic validation.

We construct quality assurance practices and continuous testing (CT) implementations, deliver software artifacts to them, and then relax when an “all good” result emerges. This is exactly as it should be. It’s the way our system is supposed to work.

One of its frailties, though, is that it leaves us with no immediate evidence that the tests themselves are reliable. One failure mode for tests is to pass more artifacts than they should. They stay Green even when an error is present.

This is especially common for locally customized validators. Suppose a particular system is dependent on a number of XML sources. (The analysis applies equally to JSON, INI or other human-readable formats.) The CT for this system is good enough to include a validator for the XML. Notice that the validator is specific to this system, at least in its configuration. The configuration embodies local rules about the semantics of the XML.

At the product level, what we want is for the validator to pass all the system’s XML. It’s easy, though, to misconfigure such validators so they pass too much. Passing this kind of validator tells us less about the XML than we expect — maybe, in an extreme case, nothing at all.

A good remedy for this vulnerability exists, though. It’s generally easy to automate generation of perturbations or mutations of XML instances into invalid instances, and verify that those result in appropriate error reports. With these supplementary tests in place, we can have confidence not just that the XML passes a test for validity, but that the XML passes a discriminating test for validity.
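
A minimal Python sketch of that supplementary check might look like the following. The validate_order callable and the XML structure are hypothetical stand-ins for a project’s real validator; the point is only that mutants are generated automatically and the validator is required to reject every one of them.

    import copy
    import xml.etree.ElementTree as ET

    from order_validation import validate_order  # hypothetical project validator

    GOOD_XML = "<order><id>42</id><total>19.99</total></order>"

    def mutations(xml_text):
        """Yield invalid variants by dropping each required child element in turn."""
        root = ET.fromstring(xml_text)
        for index in range(len(root)):
            mutant = copy.deepcopy(root)
            mutant.remove(mutant[index])
            yield ET.tostring(mutant, encoding="unicode")

    def test_validator_accepts_the_good_document():
        assert validate_order(GOOD_XML)

    def test_validator_rejects_every_mutant():
        for mutant in mutations(GOOD_XML):
            assert not validate_order(mutant), f"validator passed invalid XML: {mutant}"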

Conclusion

Software’s purpose is to produce correct results. Careful thinking about different kinds of failure, though, helps bring certainty about correctness.


Exactly Who is Doing the Testing?


Who tests the software at your company? Is it people trained to test software? Maybe people who have been trained to write test cases and execute the steps as written? What if they are people who have NOT been trained to write test cases? What if they are given steps to follow but they don’t always follow them precisely?

What if “testing” is not something they ever expected to do? What if they don’t work in IT or software development? Do you still call them “testers?”

Over the last several years, a growing number of companies — large and small, established and new — have been doing something different from the usual. Instead of having testing done by people trained and expected to do testing-related work, they have other people do the testing.

The “State of Testing, 2020” report compiled by Tea Time with Testers shows this interesting trend. More organizations are looking to people who understand the business needs and functions to test their software than to “testers.” In some ways this makes a great deal of sense.

Business expertise over testing expertise?

The first response from many “traditional” testers or software professionals is often something like “Well, sure. But they aren’t trained to follow scripts and can’t do any really technical work.” Do we really want to require people to “follow scripts” all the time? Does that lead to good testing work, in and of itself?

As for the technical aspects, how many “manual” testers are comfortable running SQL queries against the database? Are they comfortable digging into system logs to find evidence of behaviors not reflected in the UI? Are they comfortable moving from one tool to another to help them examine parts of the system that might not be examined through “following a script?”

Loads of people learn about things by experimentation. When you get a new mobile phone or laptop, do you read the available documentation? If there is no user guide or “quick start” guide included in the packaging, do you go online and look one up? Does anyone read all the tips and tricks on a device before doing anything with it?

I do not. Most people I know don’t. Instead, we apply what we already know from previous experience and look to how this device behaves in light of that. We know how to use a phone. We know how to use a laptop or computer. We have our preferences based on comfort and experience.

We start exercising the new device and comparing it against our experience with other devices and expectations. We look for the behavior of the device against the model of our experience.

We are testing the new device.

People applying knowledge gained from working in a variety of roles can likewise evaluate software intended to meet their needs and expectations. They can test the software based on their understanding of the business processes which need to be achieved.

Traps and training

A common statement about experienced business users doing testing is they “need to be trained” to use the software. A case can be made that when the new software is radically different from what they have worked on, some training might be needed. Following a “cookbook” collection of scripts does little toward actually testing the software.

A short tutorial on how the new system works and how the pieces interact, including how it differs from the old system, is often more effective training. Explaining the differences, then watching and gently guiding people as they work through broadly stated scenarios, often leads to greater success in learning the new software and greater effectiveness in testing it.

However, a fair number of organizations try to combine “training” and “testing” into the same activity. The “testers” are instructed to follow scripts to “test” the software. The theory is they will learn to use the new system correctly.

The trap is that they are focused on what they are “supposed” to do and not what the software is supposed to do. They are not actually testing the software.

A better approach: for most adults, a demonstration of the task is a good start to show people how to do something. Then let people learn and experiment around what they need to do for their work. This gives them a chance to apply what they were shown in the demonstration to their actual work.

People who know the business needs and workflows can very quickly transition to actually testing the new system instead of following a step-by-step recipe in order to “learn.”

For some organizations, this concept presents a challenge in managing the process of testing. There is a consistent belief that detailed scripts will always provide measurable proof of progress and efficacy of testing. After all, if these scripts find defects in specific steps, then they will be able to show value.

The most common response I get when I suggest this to companies using business experts to test new software is that they don’t have a good way to measure what is being done or what areas are showing problems.

Measuring progress without detailed scripts

Here is what I have done to address the need for measurement. Begin by building a list of the tasks people need to do their work. This likely will require conversations with multiple business areas to make a “punch list” of high- and mid-level tasks. The result is a list of tasks that can be worked through with a “How do I do THIS?” focus. It also gives an in-depth understanding of the work itself and allows for creation of targeted training material if it does not already exist.

The list of high and mid-level tasks also becomes the measure for progress and system readiness. The people working on testing the software are the same ones who will need to use it after it “goes live.” If they are satisfied that each function they need to be successful works as they need it to, testing for those tasks is complete.

Borrowing the idea of “Just In Time” and making it “Just Enough” training, have an expert in the new system available to answer questions after there have been some demonstrations and basic exercises. You can also have testing experts available to answer questions and help communicate problems found to the development team.

What about automated testing?

The business experts testing the application will also have provided real scenarios that can be incorporated into scenario-based automated tests. These can be built and used as models for regression testing and scenario-based smoke tests.

Using the combined expertise of business and testing together, organizations can work to build realistic test scenarios which will cover the most common and the most critical interest points for the business. Testers can help structure the work and look for likely areas that would not have been thought of without experience testing. However, the bulk of the testing will be done by people with an eye to what is needed for them and their customers.

This type of working environment improves the relationship and builds a partnership between the IT and testing groups and the people who use the software every day. It helps both sides feel more connected to the challenges each faces, and to how they can help each other overcome them to improve their work, their product and their organization.


Six Benefits of Developer Testing


Software testing is one of the most critical aspects of the development process. Teams must find as many defects as possible before they reach production. The impact is far greater when customers find issues for you and decide not to buy your product anymore or, worst of all, leave bad reviews on websites and social media. It hurts your brand.

However, it should not be the responsibility of only the testers to catch defects. Developers can play a crucial role in detecting defects earlier by doing their fair share of testing too. These tests can be a combination of unit, integration or system-level tests.

Here are six benefits you get from testing that’s done by developers prior to pushing code to QA.

1. Catch defects early

Naturally, a feature has to be developed before testers can start testing it. But developers can not only build the feature during the development phase, but also test it before it moves on to the next phase. This helps to catch defects early so developers can fix them immediately, rather than testers finding the defects later in the development process and notifying the developer to fix them. Also, it’s been well established that finding defects earlier helps organizations save a considerable amount of time, effort and cost.

2. Write better code

Developer-conducted tests help developers write better code, faster. When teams follow agile practices like XP, TDD or BDD, they write tests before the application code. This forces developers to write only the application code needed to make the test pass, which helps reduce complex and highly coupled code — often a nightmare to maintain in the long run.

3. Contribute to documentation

Any test written during the development phase becomes part of the documentation. For example, when developers write unit tests, it helps them gain a better understanding of the developed feature and allows them to reuse the same code in the future without ambiguity. Having well-written unit tests also makes the code more understandable in large teams where everyone works on different parts of the code.

4. Make changes with confidence

We live in an age where customer demands are exponentially increasing, and new features have to be developed faster than ever before. When new features are introduced, developer tests help to get quick feedback on the existing functionalities of the system even before code gets pushed to QA. With DevOps and continuous testing becoming the standard in organizations, making rapid changes and getting quick feedback about the application is crucial.

5. Reduce manual testing

Most developer tests, such as unit, API or even integration tests, can be automated. This means testers could spend less time testing features manually and instead work on exploring other parts of the application that are more complex. When developers automate simpler tests, testers can use their creativity to spend more time on testing important and intricate scenarios, and less time on mundane testing activities.
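
For instance, a developer-written API check can be as small as the pytest sketch below, which uses the requests library; the endpoint URL and the fields in the response are hypothetical.

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_get_user_returns_the_expected_contract():
        response = requests.get(f"{BASE_URL}/users/42", timeout=5)
        assert response.status_code == 200
        body = response.json()
        # Spot-check the contract rather than the entire payload.
        assert body["id"] == 42
        assert "email" in body

Checks like this run in seconds in a CI pipeline, freeing manual testing time for the more intricate scenarios.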

6. Improve team performance

Testing is a team activity. Contrary to popular belief, it is not only the testers who are responsible for delivering good products; the entire team owns quality. If the product succeeds or fails, it affects the entire team. Part of a developer’s responsibility is to make sure features work correctly and meet expectations. They have to write unit tests, do spot checks, and perform some manual tests to ensure their code did not break other existing parts of the application. This is part of their job, and there are no excuses.

With teams implementing CI/CD pipelines to release faster, it helps to include developer testing as part of this process. It enables delivering higher-quality features to the customer, so everybody wins.


6 Questions to Help You Know When to Stop Testing


Continuous testing has become more and more popular in recent years. Teams strive to have automated tests at every stage of the software development lifecycle in order to evaluate risks and obtain immediate feedback. But when the aim is to test continuously, there is one question that teams often struggle to answer: When do we stop testing?

We can validate whether a product is working as expected, but testing has to stop at some point so the product can be released to the customer. Complete exhaustive testing is impossible.

Here are six questions whose answers can help you make the decision that the time is right to stop testing.

1. What risks have been mitigated?

There are risks associated with every feature developed. A good approach to decide when to stop testing is to analyze whether the team has mitigated all the identified risks.

This could have different meanings based on the context of the project, such as:

  • Have the tests related to the identified risks been executed? What were the results?
  • Are there missing risks that need to be addressed before releasing the product to the customer?
  • Does another round of test case execution need to happen to retest the fixed defects?
  • How confident are you in releasing the product in its current state?

Risk-based testing helps evaluate the product through the customer’s lens, and mitigating all the identified risks will ultimately define when testing is complete.

2. Are there open critical defects?

Realistically, there are always going to be defects in the product, even after releasing it to production. The only thing we can do is identify and fix defects that would significantly impact the customer. One way to do this is to ensure all the identified critical or high defects are fixed and retested.

3. Are you meeting project deadlines?

There are often strict release schedules to ensure features are released on time. This could be due to multiple reasons, such as signed contracts, getting a competitive edge in the market, or helping retain customers by providing value.

As a result, teams are on the hook to deliver features by certain dates. Those dates should be backed by weeks of planning meetings, multiple release schedules, and clear criteria for deliverables within a certain time period. This helps get more clarity on stakeholders’ goals and expectations.

4. Do you have acceptable requirements coverage?

Before the start of feature development, there is a list of requirements documented in user stories that the team has to work on. One way to know whether testing is complete is to ensure that all the requirements identified for a given release cycle have been tested. Usually, a release cycle is split into different sprints to make this effort more manageable and measurable.

If there are user stories moved to the backlog, the stakeholders have to make an informed decision as to whether those user stories are important for the current release or could be scheduled to go out at a later time.

5. Is the product good to release?

When the project deadline comes around, the stakeholders have to collectively decide whether a product is at an acceptable level to be released to their customers. The factors that aid in this decision-making process could vary based on the project’s context, the planned features to be released, signed contracts, and more.

6. Has the difficulty exceeded the value?

Sometimes it is glaringly clear that the product has reached a certain level of stability or maturity. A great indication for this is when teams are:

  • Finding fewer defects with less severity over a certain time period
  • Spending more time discussing than testing
  • Starting to repeat the same testing steps multiple times and ending up with the same expected results
  • Sharing the same status updates in multiple standup meetings

When this is the case, it is common sense to stop testing and move the focus to other modules with higher risks.

Testing teams should have various metrics to measure whether it is time to stop testing and move on to other areas. It all boils down to what the team’s priorities are and how to make your customers happy.


Shift Security Left: Solving the Challenges of DevSecOps


Identify and correct problems sooner, rather than later: That’s the heart of the “shift left” slogan. But to do so with security — to cultivate not just DevOps, but DevSecOps — is one of application development’s thornier problems.

Here are a few ideas to help your team shift security work left effectively.

Challenges: Span, Incentives, Openness

It helps first to understand what can go wrong. DevOps has been a success largely because, left to themselves, development and operations teams too often optimize for partial goals that inadequately serve businesses and their customers. Putting them together has improved software development lifecycle (SDLC) reliability and quality.

So if DevOps is good, then DevSecOps must be even better, right? Not necessarily. Cultivating workers who simultaneously juggle development and operations perspectives is difficult. To go from these two perspectives to three, though, with the inclusion of security, isn’t 50% harder; it’s more like three times as hard. One index of that difficulty: Neuvoo reports that the entry salary for beginning DevSecOps employees in the US is $78,000 annually, compared to DevOps at $61,175.

Moreover, a span that includes security also introduces real cultural strain. DevOps workers are accustomed to thinking of what they do in terms of positive actions: They construct features, they raise performance, they fix bugs, and so on. Security is frequently about what does not happen. To prevent hostile actions is so different from building new features that quite a few DevOps employees are never able to make the transition. Even when they can do the work of identifying and patching vulnerabilities, say, many are never able to negotiate their value within an organization the way they can carry on conversations and make plans about positive features.

Another cultural difference of security work is its infinitude. DevOps recognizes the need for continuous learning: New tools, frameworks and libraries emerge daily. Still, when a particular implementation accurately fulfills a specific collection of requirements, DevOps workers know they can relax, at least temporarily. A program without bugs and with the right functionality is good enough. Security has no such clear limits; it is more open-ended and never reaches as clear-cut a stopping point.


Payoff: Faster Delivery, Fewer Vulnerabilities

If we can overcome these difficulties, though — if an organization nurtures a team that thinks in simultaneous terms of development and operations and security — the potential payoffs are large. To identify and solve security problems as early as possible promises to slash their cost of repair by a factor perhaps as large as 15, accelerate the speed of delivery to customers, and protect the organization and its customers from the alarming costs of security vulnerabilities. DevSecOps is expensive, but less so than the alternatives.

How does an organization win these benefits? Start with mindset and attitude. Practice continuous delivery. Allow and encourage the whole team to be accountable for security. Align product, development and security to be equally cloud-native.

With the whole team aspiring to the same consolidated achievement of continuously delivered, high-quality, secure software, appropriate technical milestones include automated left-shifted security scans, training in security topics for those coming from a DevOps background, and explicit attention in product plans to security expectations.

Opportunities for leadership will abound. When someone in a daily standup raises concern about how authentication functions, for example, the group response defines the organization’s long-term security prospects. Does the team as a whole believe something closer to “That’s a security problem; we can return to that after you finish coding the functionality,” or “Thanks for spotting that problem. Let’s get help in and make sure we thoroughly settle the security questions before they have a chance to impact the rest of the software”?

An attitude like the former means that the team isn’t ready for true DevSecOps yet. It’s not in a position to pay the costs or gain the advantages.

Conclusion

DevSecOps, like agile, is more about culture than processes or tools. Certainly, different individual contributors will have more or less background in the development, security or operations legs of DevSecOps. For DevSecOps to work, though, the entire team, from marketing and product through to QA, needs to share the attitude that software is not just a bundle of features, but a construction that must be trustworthy.

Build in security from the beginning. Take security problems as seriously as visual designs, lookup algorithms or scalability measurements. Support the team in the effort not only to get security right from the beginning, but to continue learning how to get security right and to obtain the tools to support correctness.

Purchasing and installing left-shifted security tools is easy once the right culture is in place.



SQL in a Serverless Landscape


SQL is from 1970, meaning it’s now half a century old. Does it still have a place in 2020’s cloud-based serverless implementations?

Conventional wisdom

Recognize first that plenty of voices teach that serverless architectures and NoSQL go together nicely, for a range of reasons. Serverless computing emphasizes scalability, statelessness, flexibility, thrift, and ease of deployment, among other virtues. NoSQL advertises a similar line-up of benefits. A natural conclusion is that serverless and NoSQL partner well.

And they certainly can. Showcase serverless applications sometimes leverage NoSQL persistence that, from all appearances, plays its role well.

The cloud industry is only in its infancy, though, and best practices remain far from clear. We’re not yet certain whether the association between serverless and NoSQL is a necessary one, a matter of fashion, a historical accident or something more nuanced. While serverless and NoSQL have teamed up successfully, so have serverless and SQL — leading cloud providers offer Cloud SQL, Amazon Relational Database Service and so on.

Principles

Perhaps the most certain result for now is that serverless computing is compatible with a variety of implementation technologies. Serverless computing can succeed with NoSQL, but also with SQL, if the latter is managed nimbly and with such modern conveniences as on-demand provisioning, SSD-backed storage and IPsec isolation.

Accept for a moment that both NoSQL and SQL have the potential to complement serverless — how do we decide between them? Think architecturally: Consider where the value of the processing lies. Do a variety of different applications need to retrieve precise, objective data and present that data in different perspectives, as is common in fintech and some kinds of IoT work? Those are jobs where SQL has been in the lead. Is the emphasis on filtering massive collections of approximate data with experimental queries in search of actionable results? Does the programming team think only of objects, and never of structured queries? NoSQL better fits those situations.

Inevitably, as soon as we generalize usefully about what differentiates these technologies, clever innovators hybridize constructions that promise the advantages of both sides. ArangoDB, for instance, is a “multi-model” database that allows graph, document and key-value stores. It is resolutely NoSQL, yet to express all that it does requires a unified query language, AQL, that most closely resembles SQL.


Serverless for success

The way to win with serverless starts by expressing requirements without reference to an architectural choice; let the architecture be part of the solution, rather than the problem. Identify the abstract nature of the data, their transactions, types and life cycles.

Do customers receive value because they can retrieve precisely what was earlier captured about them, or does value arrive from relatively stateless algorithms applied to newly collected data? The latter is likely to fit naturally with artificial intelligence components, often backed by NoSQL technologies. When data is highly transactional, though, or when security requirements are complex and “least privilege” is an essential, and especially when data needs to persist far beyond the manipulations of one particular application, cloud-savvy SQL starts with several advantages.

Another way to think about customer value is how tightly data is bound to a particular application. If data belongs to a small number of closely related applications, NoSQL is a natural choice, and much of the design likely emphasizes stateless programming. If state is primary, though, and different applications are perspectives on that long-lasting state, SQL probably is the vehicle of choice.

Does the data admit a persistent model? That suggests SQL, perhaps managed by a dedicated database administrator. Is it more important that the relations in an instance be flexible and easy to change throughout the lifetime of the data? Several NoSQL technologies emphasize this kind of flexibility.

Whatever the database technology, security deserves care from the first design. Make sure secrets — access passwords, resource tokens and so on — are always rigorously protected. Even one accidental commit of secrets to a version control system or a transient password-free exposure of a datastore to all employees can have devastating consequences. Barricading all data accesses behind an explicit API gateway, for instance, gives a welcome extra layer of security.
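
One small building block of that protection is to resolve credentials from the runtime environment — typically populated by the platform’s secret manager — instead of writing them into source files. The variable names in this Python sketch are illustrative only.

    import os

    # Credentials come from the environment (injected by a secret manager),
    # never from values committed to version control.
    DB_USER = os.environ["DB_USER"]
    DB_PASSWORD = os.environ["DB_PASSWORD"]      # raises KeyError if not provided
    DB_HOST = os.environ.get("DB_HOST", "localhost")

    def connection_string():
        # Keep the assembled string out of logs and error messages:
        # it contains the secret.
        return f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:5432/appdb"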

Be ready for change. However perfect last year’s choices of technologies, and however large the investment in those technologies across an entire enterprise, new business circumstances can make migration to a different technology appropriate and desirable.

Ultimately, SQL assumes a role with serverless that’s small but certainly greater than zero. Analyze the true business requirements of your serverless work with care, and any decisions about datastore technology should follow with a minimum of controversy.

5 Ways to Create Faster and More Stable Web UI Tests


Automated UI testing is one of the biggest investments companies make as part of the testing process. The investment pays off, though, because these tests are easier to create for both technical and nontechnical testers, as various tools can aid in the process. These types of tests also simulate real user experiences via the user interface and validate different functionalities.

But many companies are hesitant to implement automated UI tests because they tend to become slow and brittle as the number of tests increases. How can you make the test suite faster, leaner and more stable?

Here are five practices that can help you build a robust, reliable and fast suite of automated UI tests.

1. Use explicit waits

Web applications have a lot of Ajax, JavaScript and other server-side calls taking place in real time. This impacts page load times, as the elements display at different time intervals. A common practice to handle this kind of situation is to use a wait statement.

There are different ways to make a UI test wait for a particular element, such as Thread.Sleep() or implicit, explicit and fluent waits. A good practice is to use explicit waits as much as possible, because the test then waits only until a specified condition occurs. This speeds up tests: they are not sleeping for a fixed interval, as with Thread.Sleep(), or polling against a blanket timeout, as with implicit waits.

Conditional waits like explicit waits help to make the test smarter and much faster.
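
Here is a minimal Selenium (Python) sketch of an explicit wait; the page URL and element id are hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical page

    # Wait at most 10 seconds, but continue as soon as the element is
    # clickable -- no fixed Thread.Sleep()-style delay.
    submit = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit-button"))
    )
    submit.click()
    driver.quit()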

2. Run tests in parallel

Organizations want quick feedback about the application under test before it is released. In the past, teams had one server machine and ran all their tests on that single server. Times have changed, and there are now various options for running more tests, faster and in parallel.

There are open source solutions like Selenium Grid and third-party tools such as SauceLabs, BrowserStack and Ranorex that can run multiple tests in parallel and in the cloud. There is no need for a physical machine anymore.
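
A minimal Python sketch of pointing tests at such an environment uses Selenium’s Remote WebDriver; the hub URL below assumes a locally running Selenium Grid, and cloud providers expose an equivalent remote endpoint with their own URLs and credentials.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    def make_remote_driver():
        # Each parallel worker opens its own session on the Grid hub,
        # so tests can run side by side on different nodes.
        options = Options()
        return webdriver.Remote(
            command_executor="http://localhost:4444/wd/hub",
            options=options,
        )

A runner such as pytest-xdist (for example, pytest -n 4) can then fan the tests out across parallel workers, each holding its own remote session.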

3. Create granular tests

Every test you create should have a single responsibility and test a single feature. This is one of the best practices for writing stable tests. Trying to validate multiple functionalities within one test is bad test design: it bulks up the tests and makes troubleshooting and maintenance a nightmare in the long run.

Structure your tests so that the object definitions and the implementations are separated. For example, you can use the Page Object Model to separate the IDs of elements from the actual tests and call the IDs during run time. It makes the tests more reusable and maintainable.
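
As a rough Page Object sketch in Python with Selenium — the page, URL and element locators are hypothetical:

    from selenium.webdriver.common.by import By

    class LoginPage:
        """Page Object: locators and page actions live here, not in the tests."""
        URL = "https://example.com/login"                   # hypothetical
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get(self.URL)
            return self

        def log_in(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

The test itself then reads as intent only, for example LoginPage(driver).open().log_in("demo", "secret"), and a changed element id needs editing in exactly one place.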

If you’re using Ranorex Studio to automate your tests, the Object Repository handles this for you. To learn how it works, watch our video tutorial below.

4. Focus on the best location strategy

One of the biggest reasons UI tests are brittle is that teams do not pay attention to the location strategy they use to find elements on the page. The easiest way to locate an element is often an XPath, but it is also one of the worst location strategies: XPath expressions change along with the DOM structure, so they tend to be unreliable and should be used only when there are no other options.

A good rule of thumb is to prefer the locators below, which are more stable than XPath (see the sketch after this list):

  • ID (but avoid using dynamic IDs)
  • Name
  • Class name
  • Tag name
  • Link text
  • Partial link text
  • CSS selector
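
A small Selenium (Python) sketch of that preference order follows; the element ids and selectors are invented for illustration, and the function assumes an already-open WebDriver session.

    from selenium.webdriver.common.by import By

    def locate_checkout_elements(driver):
        """Illustrates the preferred order of locators on a hypothetical page."""
        driver.find_element(By.ID, "checkout-button")                # best: unique, static id
        driver.find_element(By.NAME, "email")                        # stable form-field name
        driver.find_element(By.CSS_SELECTOR, "[data-test='total']")  # dedicated test hook
        # Last resort only -- tied to the current DOM layout:
        driver.find_element(By.XPATH, "//div[3]/span[2]")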

Ranorex Studio uses a stable, proprietary location strategy, the RanoreXPath, which offers a robust, flexible, and reliable way to locate elements. 

5. Remember that not everything has to go through the UI

The more test scenarios you run through a full browser UI, the longer it takes for tests to complete and for you to get feedback on the application. Not everything has to go through the UI; you can interact with the page under the hood without rendering it in a visible browser.

There are tools like Headless Chrome, PhantomJS, Zombie.js, HtmlUnit, Watir-webdriver and many more to validate page elements through the DOM without needing to visually launch the web browser. This drastically reduces the execution time of web UI tests. Try to have a combination of UI and headless tests in your overall UI testing strategy.
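
For example, Selenium can drive Chrome in headless mode with a couple of options; the URL and title check below are placeholders.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")        # no visible browser window
    options.add_argument("--window-size=1920,1080")

    driver = webdriver.Chrome(options=options)
    driver.get("https://example.com")             # hypothetical page under test
    assert "Example" in driver.title              # DOM-level check, no rendering needed
    driver.quit()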

Following the above strategies will help to make your UI tests much faster, leaner and more stable.


How to Improve the Automate Everything Approach


A little over 10 years ago, I heard many, many people at a conference insist their management expected them to “automate everything.” I heard very similar statements five years ago. I heard the same thing this spring.

In some ways it seems perfectly reasonable. You have some tests you work through. Then you exercise the function and see how the entire piece works. Then you look at how it interacts with what was already in the system or on the screen. When you are comfortable with what is going on and you understand it, you turn around and write the code to make what you just did repeatable. Simple, right?

Recently I was talking with a tester who was a bit frustrated. At her shop there are functional tests written for every story. Each test starts like this:

  • Log on to the system
  • Navigate to the screen under test
  • Locate this menu item

Then you test it. Every change to every field on every screen is exercised the same way.

Every script is executed by a person sitting at a desk, stepping through it manually. They verify the results and click pass or fail on every single step. The goal is to make certain everything matches the acceptance criteria in detail. So far this makes sense.

Then all the scripts are gathered and sent to the automation team, who automate the tests exactly as they were written and executed by the testers working “by hand.”

This is the result of taking the “automate everything” reasoning to its full, logical conclusion. But is this really the best that we can do? 

Collaborate, don’t isolate

If the developers are doing any level of unit testing, there should be conversation between the people writing code (and unit testing it) and the people exercising the code. Once it makes it to the test environment, a sanity check of those same unit tests in the new environment likely will give the first level of confirmation of behavior. If they fail, take the failing code and fix it. Then test again.

Then, the people exercising the code can do a deeper level of evaluation of the behavior. Check links, drop-down lists, communication with other modules, response codes, messages in the logs (application, DB, system, whatever) and the normal testing “stuff.”

Check the acceptance criteria and requirements. Make sure those are handled properly. Also, check the exceptions to them that likely were not called out. How many times are there “requirements” and “acceptance points” that explain only one path? What happens if something ELSE happens? Exercise the “something else.”

Focus on function and flow

You have now reasonably confirmed the software addresses the change it was intended to address, at least to the level many people will exercise it. Likely, you have already done more than most would and we have not yet automated anything. Here is where most “testing” stops and people begin to “automate.” This is the wrong place to do this.

Instead, take a look at intended usage of the software. How does it get used, in the wild? What do the customers, external or internal, reasonably intend to use the software for? Can you emulate what they need to do? Can you emulate the “business flows” they will use?

Many will say “No.” I understand that. At one point in my working life, I would have agreed. A wise woman gently asked me once, “Have you tried asking anyone?” I hadn’t. That was a lesson I have never forgotten.

It often isn’t in the requirements or in the acceptance criteria, and is not often addressed in the “justification” or “statement of business purpose” or “problem/need” statement. Most of the time those are not prepared by the people who use the software to do what needs to be done. Ask the people who need it for their jobs, if at all possible. It may not be, I get that. But someone can likely describe how the software gets used.

Talk with them.

Then, build scenarios to exercise what they describe and review them with the people you talked to. Show them what the software does to make sure you understand what need is being addressed.

The scenarios you scripted and reviewed have one vital purpose.

They define the main business “flows” going through the software you are supposed to test. Once you have that done, you now have a meaningful set of test scenarios which make sense to the actual customers.

Automate the right tests

Now that you have meaningful test scenarios, automate those scripts — with the following caveats:

  • Be cautious in automating scenarios that require a lot of manual intervention (i.e., a card swipe). 
  • Avoid automating tests of features that resist automation, like image-based ReCaptchas. 
  • Wait to automate a test until a feature is relatively stable and the pass/fail criteria is clear.

Otherwise, the effort required to maintain or execute your test automation may outweigh the benefit of automating your test in the first place.

For more recommendations on what to automate, check out the article: 10 Best Practices in Test Automation #1: Know What to Automate. 

Are Your Testing Metrics Misleading You?

If you are like me, there is nothing better than loading up the car with your favorite beverages and snacks and hitting the wide open road. Nothing compares with rolling down country highways and roads, good music playing while watching everything unfold in the rear-view mirror and the back windows.

Too much sun can come in the windshield (windscreen) and front windows, so I like to keep them covered and use the mirrors and sometimes take a look out the back window. That is the best way to know where you are going on a road trip, right?

I mean, you have speed and mileage indicators, and a fuel level indicator. It is even easier now with GPS either built in or on your mobile device. What else do you need?

If this does not sound like a good idea when driving, why do so many people manage testing efforts the same way?

Indicators

Several “indicators” are mentioned above. Speed has a speedometer. Mileage has the odometer. Fuel obviously has a fuel gauge. I expect an electric vehicle has something comparable, but I’ve never driven one so that is merely a guess.

Indicators are important. They can tell us a great deal about what is going on. They can tell us the rate of fuel usage, speed and distance travelled. An on-board compass can tell us what direction we are currently travelling. A GPS can give us a path to get to where we need to go.

All of these things can add up to something along the lines of “that is what we need.” It may be.

I have had engagements where clients or their “partners” had extremely detailed test plans and cases prepared. These were the roadmaps for the test effort. These were the guides to be used. They tracked progress by counting the test cases executed per day; I remember one that counted the number of test steps executed each day.

Maps

A focused, alert vehicle operator will do far more than focus on the roadmap, or even the GPS. She will find information from the indicators helpful for shaping her thinking and progress toward her destination. Still, that is not all that she will do.

Just as reference points from the map or GPS give her information, so too do the speedometer and fuel gauge. To make sure she reaches her destination safely, she does far more than consume these pieces of information.

She will view the road ahead, shifting focus near and far, side to side. She will be alert to potential risks and stay aware as she drives the vehicle. A stopped or slowed vehicle on the side of the road, or even in her traffic lane is a reference point not contained elsewhere. Animals on the road or wildlife crossing the road need avoiding. Other obstructions, from branches, rubbish or even entire trees also need navigating.

Miles or hours driven can be easily measured. Test cases or test steps executed can be easily measured. With both we can “measure progress.” We might be able to estimate time of arrival or delivery. We might attempt to extrapolate when we arrive, or deliver software, based on how long we have been on the road, or testing.

We are estimating based on an estimate. We then presume the rate of progress will be consistent based on what we have done thus far. The correct term for this is “guessing.”

Looking Out the Window

If we are not actively engaged in what is around us, we will absolutely miss things. We will not see the lovely stand of trees. We will not notice the odd flash on the screen that is there and gone. We might not notice the group of deer by the side of the road, until they try to cross it. We might not notice the wrong value presented on the screen.

Measures and indicators can only tell us what we have seen so far. They are trailing measures, at best.

We must be aware of where we are going. We must be aware of why we are going there. We must look out the window to see what is around us.

The simple path defined by maps and detailed test plans will probably get us to where we are going. The way we get there is another question. By strictly following the easily measured test scripts, or the route provided by GPS, we will definitely get to an end point.

The typical testing metrics used to show “progress” show us what we have done, based on what we believed we needed to do before we started. They can show us the number of interesting things, anomalies, defects or bugs we discovered along the way.  They can tell us something about how our test automation is performing. None of them can accurately predict what is about to happen.

If we restrict our test efforts to the pre-planned scripts, we will miss issues that customers, who are not following our path, will encounter. Limiting the number of interesting side trips, for the sake of not hurting the performative metrics in use (e.g., test steps or cases executed per day or week, or test steps or cases remaining to execute), will give an inaccurate view of the state of the software product.

If we allow for detours and interesting stops along the way, things might take longer than the simple, straight path would take. Taking side trips from the path is often where interesting and fascinating things are discovered. I have found this to be true for road trips, as well as testing.

Four Strategies for a Scalable Continuous Testing Process

In the age of DevOps, one agile approach has become more relevant than ever: continuous testing. This is a process where testing happens in each stage of the software development process, aligning with the shift-left paradigm. The goal is to evaluate risks and obtain feedback as quickly as possible.

Companies have tried to include this approach as part of their development process, but quite often, they do not know where to start with continuous testing or how to implement it to a greater scope.

Here are four strategies that will help you strategically implement a scalable continuous testing process across the enterprise.

1. Prioritize your testing

100% exhaustive testing is impossible. There are a variety of components to be tested — requirements, code, business logic, application services, infrastructure, etc. — and only a limited amount of time and resources.

Therefore, it is critical to prioritize what to test along the way. These are some of the more important aspects to consider:

  • Your critical paths
  • How your business makes money
  • How your users use the application
  • How your application services are advertised
  • What has been a problem in the past

With this list in hand, you can begin your prioritization.

2. Expand automation

Automated testing can be done at every level, starting right from the requirements phase and all the way through to the user acceptance and deployment phases. This is especially true in the realm of DevOps and continuous testing.

DevOps has helped software development and operations teams better collaborate, thereby ensuring constant automation and monitoring throughout the software development lifecycle (SDLC), which includes infrastructure management. Continuous testing has helped to shift testing left, or to ensure that testing starts as early as possible in the SDLC.

With these current agile practices, everything we do as part of testing is changing, and automation will be needed in various development phases.

3. Have clearly defined roles and responsibilities

Having clearly defined lines of responsibility and communication is key to successful testing. Teams have to collaboratively come up with answers to these critical questions:

  • Who is going to write unit, API and UI tests?
  • Who maintains this test over time?
  • Who is responsible for running the test?
  • Who submits changes to the tests?
  • Who is responsible for updating the frameworks and libraries used?
  • Who writes the issues in the ticketing systems?
  • Who closes bugs?

Generally, this will come down to a choice between developers and testers. Each has their benefits and drawbacks, so consider the possibilities and decide as a team.

4. Ensure infrastructure support

One of the biggest challenges of implementing continuous testing is providing the right infrastructure to ensure different testing activities happen at each stage of the development process.

First of all, you need multiple environments so you can isolate the tests and test with different test data. At a minimum, you need development, test and production environments. The types of tests that run in each environment may vary depending on the context of the project.

Next, assuming your teams are following a DevOps approach, your infrastructure may have to support containerized workloads using tools such as Docker, container-orchestration systems such as Kubernetes, version control systems, and complete CI/CD integration. This helps you organize your application into logical units and get quick feedback on different features.

Finally, automated security and performance testing have become vital to delivering quality products to the customer. The infrastructure would have to support this effort as well.

Continuous testing is beneficial in so many ways, but it has its own set of challenges for implementation across the enterprise. Organizations should be prepared to invest a considerable amount of time and resources into building a sustainable and scalable continuous testing practice across the delivery pipeline. However, in the long run, the benefits outweigh the effort.

How Aggressively Should an Application Update Its Dependencies?

Should an application or service in production aim to keep its library dependencies as current as possible? Or should they go to the other extreme, and update only as a last resort? What tactic yields the best result?

These questions turn out to be surprisingly central to DevOps practice in 2020. Resolution of dependency conflicts is one of the most frequent stumbles I see. Even those living in a continuously integrated, fully containerized environment with modern package management and Everything-as-a-Service at least occasionally fight off the latter-day infections of what the 1990s called “DLL Hell.”

Image source: https://www.netlify.com/blog/2018/08/23/how-to-easily-visualize-a-projects-dependency-graph-with-dependency-cruiser/

Software has so many dependencies, and our requirements on them are so complex, that conflicts at least occasionally require expert intervention.

A first step to any resolution is a systematic approach to dependencies. Consider these three broad strategies:

  • Import or include or reference libraries without qualification. Pick up whatever the operating environment thinks is best, or at least most recent. Leave details of release management to operating-system specialists who can best handle them.
  • “Pin” dependencies to specific releases that are “known good,” and change that configuration only when security patches for those releases become unavailable. Programmers concentrate on their own code and accept that, when a migration to new versions is necessary, it is likely to involve shutting down all other progress, for anything from a weekend to a month.
  • “Pin” dependencies, but update them on a short cycle. Refresh dependencies every week or even daily.

Each of these strategies has its place. Let’s take a deeper look.

Comparing strategies for updating

First, understand that one team might even juggle distinct strategies at different levels. It can manage its operating system conservatively, expecting to minimize and even eliminate updates during the lifetime of a server used largely to run containers, at the same time as programmers aggressively require the latest versions of the packages for their chosen languages. Or the rhythms can go the other way: Every weekend patches for the operating system are checked and installed, while programmers code against language-specific modules from months or even years earlier.

The computing world is large and complex, and it’s difficult to generalize about all the ways security, compliance, marketing, operations, engineering culture, quality assurance and other dimensions come together to decide these choices.

To help appreciate the possibilities, consider a few concrete instances. A high-volume web application balances load across a fleet of servers, each running a special production container. The attack surface of the container runners is minimal; they’re automatically provisioned and deployed, with rare need to update or maintain them.

The containers themselves execute large Java-based monoliths. Gradle manages packages the application requires. This particular application relies on a combination of public and private repositories. The repositories are generally fixed for three months at a time. What happens if an error turns up in a library on which the application depends?

One possibility is that the error’s resolution might be scheduled for a release a few quarters in the future. More severe defects can be fixed, while maintaining the rigor of the system, by coding around the dependence on the approved library. Suppose, for instance, that an open-source cryptographic function has been discovered to be flawed. Rather than update the dependency directly, and risk a cascade of transitive dependency changes, the application might supply its own patched implementation of the corrected function. A few release cycles later, the public version of the function presumably will be approved for use in the application’s configuration, and the local copy of the implementation can be discarded as redundant.

An internally used extract-transform-load (ETL) tool for business intelligence reporting coded in Python might be handled quite differently. In this case, the source code barely exceeds 10,000 lines, and it emphasizes use of the latest data science libraries. Maintainers keep a requirements.txt with specific version numbers, to help ensure results are reproducible. At the same time, the tool’s unit tests and other tooling are strong enough to allow its programming team to advance those versions routinely and aggressively once a week. When new functionality becomes available in an external data-science library, that functionality can be available for production-level experiments just a day or two later.

Advice

The first architectural goal for dependence management should be clarity. No one configuration is best for all situations, and effort to optimize an abstract ideal is nearly always misplaced. Instead, concentrate on documenting existing practices, ensuring their consistency and sustainability. Expose assumptions with explicit language and even tooling, so they become more manageable.

Do developers program in the same operating system as the production data center? Is a dedicated staging environment available to the quality assurance team? What mechanisms keep dependencies uniform across environments? Do requirements issued as security or business-continuity dictates properly respect the fact that the mainstream cultures around different programming languages often set different defaults for package maintenance? Does someone on the team understand how PHP packages are managed differently from Rust ones?

Once accurate information of all these sorts is available, it becomes easier and more effective to decide between different alternatives.

Whenever possible, use computers to help with these tasks and constraints. Write simple tools to scan configurations and report dependencies (a minimal sketch follows the list below):

  • On libraries or versions that appear no longer to be actively maintained
  • On releases that are near an end-of-life maintenance deadline
  • On packages that are in conflict
  • On the distance between current package releases and those in use
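
For Python-based services, one lightweight way to produce the last kind of report is to ask pip which pinned packages have newer releases available. The sketch below assumes dependencies are pinned as name==version lines in a requirements.txt file; adapt the idea to whatever package manager your stack uses.

```python
# Report the distance between pinned releases and the newest available ones.
# Assumes dependencies are pinned in requirements.txt and pip is on the PATH.
import json
import subprocess


def pinned_packages(path="requirements.txt"):
    """Return {package: pinned_version} for simple 'name==version' lines."""
    pins = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "==" in line and not line.startswith("#"):
                name, version = line.split("==", 1)
                pins[name.lower()] = version
    return pins


def outdated_report():
    """Combine pip's view of outdated packages with our own pins."""
    raw = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    outdated = {pkg["name"].lower(): pkg for pkg in json.loads(raw)}
    for name, pinned in sorted(pinned_packages().items()):
        if name in outdated:
            latest = outdated[name]["latest_version"]
            print(f"{name}: pinned {pinned}, latest {latest}")


if __name__ == "__main__":
    outdated_report()
```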

This kind of reporting helps inform higher-level planning, as well. Your programmers might be eager for a new release of JavaScript that will help them write functionality crucial for a new marketing campaign. If a targeted browser won’t implement that JavaScript version for another 14 months, though, or if the new JavaScript obsoletes an implementation of an external dependence … well, it’s far better to recognize such conflicts before a major project starts.

Modern programming depends on a dense forest of dependencies across languages, operating systems and other technologies. Manage those dependencies with care and precision to minimize unpleasant surprises. Take initiative to control dependencies, rather than have dependencies control your programming.

How Much Testing is Enough?

One common question on software development teams is “When should we stop testing?” That simple question can be answered in many ways. Books and numerous papers have been written, and many presentations, lectures, and workshops delivered, around answering it.

Recently, I was asked a question that was different. The team had a variety of tests, each exercising interesting aspects of the application. They had progressed from simple checks against requirements to testing around requirements. They were doing deep dives into areas that caused them problems in the past.

They were looking at areas of concern not covered in “requirements” or “acceptance criteria” but their experience with the product told them these were likely trouble spots. They also used their knowledge of how their customers used the software and expected it to behave.

They had a mix of manual and automated tests defined. They had multiple levels of complexity covered by the tests, depending on what they intended to find out about the system. Some were simple “smoke tests” they could use in their CI environment. Some were more complex integration tests looking at interactions between segments of the system.

Their question was, “We have all these tests we regularly run. We are developing more tests as the product changes. How much testing is really enough?”

What does the team mean by “enough”?

Asking a question about “enough testing” might be confusing to some organizations. They might understand “testing” to consist of positive confirmation of what is “expected,” without looking beyond the “happy path.” The idea of “one requirement: one test” is normal there. The challenge is that while this may be acceptable for some organizations, it falls short for many others.

Then there are other organizations, like the one I described in the introduction:

  • They cover as many scenarios as possible.
  • Tests that make sense are included in their CI test suite.
  • Other tests are included in their automated integration and regression test suites.
  • They are using tools to run tests that have provided interesting or unexpected results in the past, so their skilled analysts can focus on new work and not waste their time repeating steps that could be run by an automation tool.

Nearly every other organization is somewhere between these two extremes.

The “not as much testing as we think” trap

Many organizations are closer to the minimal “one requirement: one test” than they realize. A simple script is created and intended to be an open-ended question. The steps are expected to be run several times with different values which can be correct or incorrect in various ways.

Testing around a requirement is what is expected with such a test. Except, when deadlines are looming or past, and massive pressure is being applied to “finish testing,” such tests might be run two or three times instead of the seven or eight times they might otherwise be executed. Testers might not exercise all the possible logic paths, even if they are aware of them.

Corners get cut for the sake of time. People who are looking only for the checkbox of “this test verifies this requirement” are likely not going to consider what “verifies” actually means or implies. They have fallen into the “not as much testing as we think” trap.

While the intent is there and we can recognize the goal, they are falling short of that goal.

The “kitchen sink” trap

Some teams or organizations look for testing to cover every possible behavior and combination of values. They look for testers to evaluate everything possible in the system and fully document, or at least fully exercise those possibilities.

Then testers are expected to repeat the tests. All of them, for every release and every build.

The volume and amount of work needed to be done are overwhelming. Even if they try to automate tests, continuing to run every single functional test in the name of regression testing becomes impossible. Tests get left out. Tests that are quick and easy to run often are selected in place of more complex tests that take much longer.

Why? Because when the management team realizes the Herculean nature of the task, they often settle for some percentage of the tests being executed. If the test team can get 80% of the tests run by doing small simple tests, then they can focus time on the new features that need more careful thought.

Find the right balance

What does the idea of “enough” mean? There is a balance between the two extremes. What and where that balance is depends on the situation.

I find that a level of testing which allows for in-depth exploration of key features, along with reasonable coverage of secondary features, works much of the time. What will be tested more or less than other features should be discussed with the stakeholders and project team so everyone is in agreement. Then, regression and integration tests need to be updated accordingly to handle the changes.

Where these fall will vary by organization, team, and project. In short, the first team I described did a very good job finding their balance. It rarely happens on the first try and can take some effort and patience. It is worth it.

10 Best Practices in Test Automation #1: Know What to Automate

Welcome to the first article in the series, 10 Best Practices in Test Automation.

Efficient product development is always about trade-offs. One of the first considerations in undertaking any test automation project is where to focus your efforts. Resources are invariably limited, so which efforts will give you the greatest payoff? Which test cases will give you the highest return on the time invested? This article provides recommendations for three types of test cases: those to automate, those that will be challenging to automate, and those that shouldn’t be automated at all.

What to Automate

In principle, any software test can be automated: humans who understand the requirements for an application can create tests that express those requirements. Wise testers always ask, though, whether a particular test will cost more to develop and maintain than it will save in the effort of manual testing. To get the best return on your effort, focus your automation strategy on test cases that meet one or more of the following criteria:

Tests for stable features

Automating tests for unstable features may end up costing significant maintenance effort. To avoid this, test a feature manually as long as the requirement remains experimental, or under development.

Note that this is different from the stability of the implementation. Once functionality has been settled, it’s particularly valuable to have good automated tests if the development team continues to experiment with alternative implementations.

Regression tests

A regression test is one that the system passed in a previous development cycle. Re-running your regression tests in subsequent release cycles helps to ensure that a new release doesn’t reintroduce an old defect or introduce a new one. Since regression tests are executed often, they belong at the top of your priority list for automation. Why does frequent execution matter? Because each automated execution saves the manual effort otherwise needed to perform the test. Multiply that savings by a large count of executions, and the overall gain is proportionally large.

To learn more about regression testing, refer to the Ranorex Regression Testing Guide.

High-risk features

Use risk analysis to determine which features carry the highest cost of failure, and focus on automating those tests. Then, add those tests to your regression suite. To learn more about how to prioritize test cases based on risk, see the section on risk assessment in the Ranorex GUI Testing Guide.

Smoke tests

Depending on the size of your regression suite, it may not make sense to execute the entire suite for each new build of the system. Smoke tests are a subset of your regression tests which check that you have a good build prior to spending time and effort on further testing. Smoke testing typically includes checks that the application will open, allow login, and perform other high-profile functions. Include smoke tests in your Continuous Integration (CI) process and trigger them automatically with each new build of the system.

A smart test team labels and actively maintains different categories of tests. A test of a specific functionality might move in and out of the smoke test suite at different times during the lifecycle of the application. When a particular login method is widely used, it deserves to be a smoke test. If later it’s deprecated in favor of a different method, it might safely be moved away from the smoke tests. Similarly, a test that once was too time-consuming to be a smoke test can become a smoke test if it’s accelerated enough to fit in CI/CT (“CT” abbreviates “Continuous Testing”).
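
If your automated tests live in a code-based framework, those labels can be as lightweight as test markers. The sketch below uses pytest markers purely as an illustration; the marker names would be registered in pytest.ini, and the test bodies are hypothetical placeholders.

```python
# Label tests by category so the CI job can pick the right subset.
# Run only the smoke suite with:  pytest -m smoke
# (Register custom markers such as smoke/regression in pytest.ini.)
import pytest


@pytest.mark.smoke
def test_application_starts():
    # Hypothetical placeholder for "the application opens" check.
    assert True


@pytest.mark.smoke
def test_login_with_valid_credentials():
    # High-profile path: keep it fast enough to run on every build.
    assert True


@pytest.mark.regression
def test_export_report_to_pdf():
    # Valuable, but slower; run it in the nightly regression job instead.
    assert True
```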

Data-driven tests

Any tests that will be repeated are good candidates for test automation, and chief among these are data-driven tests. Instead of manually entering multiple combinations of username and password, or email address and payment type to validate your entry fields, let an automated test do that for you. How to design good data-driven tests will be explored further in articles on parameterized and property-based tests in this series.
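
As a small illustration of the idea, a parameterized test feeds many input combinations through a single test body. The sketch below uses pytest; the validation function and the data values are hypothetical.

```python
# One data-driven test replaces many manually repeated entry-field checks.
import pytest


def is_valid_login(username, password):
    # Hypothetical stand-in for the real validation logic under test.
    return bool(username) and len(password) >= 8


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("alice", "correct-horse-battery", True),   # happy path
        ("alice", "short", False),                  # password too short
        ("", "correct-horse-battery", False),       # missing username
        ("bob", "", False),                         # missing password
    ],
)
def test_login_validation(username, password, expected):
    assert is_valid_login(username, password) is expected
```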

Load tests

Load tests are simply a variation on data-driven testing, where the goal is to test the response of the system to a simulated demand. Combine a data-driven test case with a tool that can execute the test in parallel or distribute it on a grid to simulate the desired load.

Load and other performance tests often are too expensive and time-consuming to execute with each commit. This just means that they need their own schedule for execution, one that doesn’t slow the CI cycle.
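
A purpose-built load-testing tool is usually the right choice, but the underlying idea can be sketched in a few lines: replay a data-driven request in parallel and time the responses. The endpoint, concurrency level, and request count below are hypothetical.

```python
# A toy load test: fire the same request in parallel and time the responses.
# A real effort would use a purpose-built tool, but the principle is the same.
from concurrent.futures import ThreadPoolExecutor
import time

import requests

URL = "https://example.com/api/search?q=flights"   # hypothetical endpoint


def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start


with ThreadPoolExecutor(max_workers=20) as pool:         # ~20 concurrent users
    results = list(pool.map(timed_request, range(200)))  # 200 total requests

errors = sum(1 for status, _ in results if status != 200)
slowest = max(elapsed for _, elapsed in results)
print(f"errors: {errors}, slowest response: {slowest:.2f}s")
```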

Cross-browser tests

Cross-browser tests help ensure that a web application performs consistently regardless of the version of the web browser used to access it. It is generally not necessary to execute your entire test suite against every combination of device and browser, but instead to focus on the high-risk features and most popular browser versions currently in use. As of October 2020, Google Chrome is the leading browser on both desktop and mobile, and the second-largest on tablets behind Safari. So, it would make sense to run your entire test suite against Chrome, and then your high-risk test cases against Safari, Firefox, Internet Explorer, and Microsoft Edge.

Along with other reasons, it’s good to automate cross-browser and cross-device tests because humans typically do not perform well in this tedious and repetitive role. Automation has proven far better at spotting environment-specific problems such as browser incompatibility.

Cross-device tests

Mobile apps must be able to perform well across a wide range of sizes, screen resolutions, and O/S versions. According to Software Testing News, in 2018, a new manual testing lab would need almost 50 devices just to provide 80% coverage of the possible combinations. Automating cross-device tests can reduce testing costs and save significant time.

What is Difficult to Automate

The following types of test cases are more difficult to automate. That doesn’t mean that they shouldn’t be automated – only that these test cases will have a higher cost in terms of time and effort to automate. Whether a particular test case will be challenging to automate varies depending on the technology basis for the AUT (application under test). If you are evaluating an automation tool or doing a Proof of Concept, be sure that you understand how the tool can help you overcome these difficult-to-automate scenarios.

This last point is so important it bears repeating: a test might be worth automating even though it’s difficult to automate. Sometimes the difficulty of automation of a test reflects that the corresponding manual test is particularly time-consuming or error-prone or sensitive. That case probably means that such a test is especially valuable to automate, or perhaps redefine to be less expensive.

One general response to difficult automations is to seek help, whether the leverage of a high-quality test framework that solves hard problems, or counsel from fellow professionals who’ve faced similar problems.

Mixed-technology tests

Some automated tests require a mix of technologies, such as a hybrid mobile app or a web app with backend database services. To make automating end-to-end tests in this type of environment easier, the ideal solution is to implement an automation framework that supports all of the technologies in your stack. To see whether Ranorex Studio is a good fit for your stack, visit our Supported Technologies page.

Dynamic content

There are many types of dynamic content, such as web pages built based on stored user preferences, PDF documents, or rows in a database. Testing this type of content is particularly challenging given that the state of the content is not always known at the time the test runs. Learn about the issues with dynamic content and how Ranorex helps overcome them in our User Guide.

Waiting for events

Modern user interface technologies make some aspects of testing having to do with time particularly difficult. For technical reasons having to do with Web browsers’ Document Object Model (DOM), it’s much easier to instruct a human tester, “when the login form pops up”, than it is to communicate the corresponding, “when the login form finishes rendering”, to an automated test. Waiting for events, especially those having to do with completion of a visual display element, is a consistent programming challenge.

Automated tests can fail when an expected response is not received. It’s important to handle waits rigorously so that a test doesn’t fail just because the system is responding slower than normal. However, you must also ensure that a test does fail in a reasonable period of time so that the entire test suite is not stuck waiting for an event that will never happen. Issues around synchronization and waits are particularly important when comparing test frameworks. To learn how to configure waits in Ranorex automated tests, refer to the description of the Wait for action in the Ranorex User Guide.
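
In code-based frameworks, the same discipline looks like an explicit, bounded wait rather than a fixed sleep. The sketch below uses Selenium's WebDriverWait; the page URL, element IDs, and timeout are hypothetical.

```python
# Wait for the login form to finish rendering, but give up after a bounded time
# so the whole suite is never stuck on an event that will not happen.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.get("https://example.com/login")   # hypothetical page

try:
    login_form = WebDriverWait(driver, timeout=15).until(
        EC.visibility_of_element_located((By.ID, "login-form"))
    )
    login_form.find_element(By.ID, "username").send_keys("demo_user")
except TimeoutException:
    # Fail fast with a clear reason instead of hanging indefinitely.
    raise AssertionError("Login form did not appear within 15 seconds")
finally:
    driver.quit()
```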

Handling alerts/popups

Similar to waiting for events, automated tests can fail due to unexpected alerts or pop-ups. To make them more stable, be sure to include logic in your test to handle these special events. Ranorex Studio includes an automation helper that makes it easy to handle alerts and pop-ups.

Complex workflows

Automation of a workflow brings several challenges. Typically, a workflow test will consist of a set of test cases that each check steps in the workflow. When one step fails, it’s pointless to run subsequent test steps: the failure means that results which arrive afterward can’t be trusted. Because the steps must be performed in order, they can’t be split across multiple endpoints to run in parallel. Another challenge is that automating a workflow involves choosing one particular path through the application, possibly missing defects that occur if a user chooses a different path in production.

To minimize these types of issues, make your test cases as modular and independent of each other as possible, and then manage the workflow with a keyword-driven framework. Measure source coverage accurately to minimize the count of unexercised lines of source.

Challenging aspects of web applications

Web applications have aspects that present unique challenges to automation. One of the primary issues is recognizing UI elements with dynamic IDs. Ranorex provides “weight rules” to tweak the RanoreXPath for specific types of elements, which helps ensure robust object recognition even on dynamic IDs. Other challenges in automating web applications include switching between multiple windows and automating iframes — especially those with cross-domain content. Ranorex Studio detects and automates objects inside cross-domain iframes, even when web security is enabled.
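
For reference, this is roughly what switching between windows and iframes looks like in plain Selenium; the element IDs, window ordering, and frame name are hypothetical, and cross-domain iframes may still be restricted by browser security.

```python
# Switching context between browser windows and an iframe in Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")                 # hypothetical app under test

original_window = driver.current_window_handle
driver.find_element(By.ID, "open-help").click()   # hypothetically opens a new window

# Move to the newly opened window, then back again.
new_window = [h for h in driver.window_handles if h != original_window][0]
driver.switch_to.window(new_window)
driver.close()
driver.switch_to.window(original_window)

# Enter an iframe by name, interact, then return to the top-level document.
driver.switch_to.frame("payment-frame")           # hypothetical frame name
driver.find_element(By.ID, "card-number").send_keys("4111111111111111")
driver.switch_to.default_content()

driver.quit()
```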

Challenging aspects of mobile applications

Mobile apps also can be challenging to automate. For example, you must ensure that your application responds appropriately to interruptions such as the phone ringing or a low battery message. You must further ensure that your tests provide adequate device coverage, which is a particular challenge for Android apps due to the wide variety of screen sizes, resolutions, and O/S versions found in the installed base. Finally, due to differences between iOS and Android, tests that are automated for a native app on one platform will likely require adaptation to perform as expected on the other platform. As with other difficult-to-automate tests, it’s essential to have a testing framework that supports the full technology stack for your application under test.

What You Shouldn’t Automate

There are some types of tests where automation may not be feasible or advisable. This includes any test where the time and effort required to automate the test exceeds the potential savings. Plan to perform these types of tests manually.

Single-use tests

It may take longer to automate a single-use test than to execute it manually once. Note that the definition of “single-use tests” does not include tests that will become part of a regression suite or that are data-driven.

Tests with unpredictable results

Automate a test when the result is objective and can be easily measured. For example, a login process is a good choice for automation because it is clear what should happen when a valid username and password are entered, or when an invalid username or password is entered. If your test case doesn’t have defined pass/fail criteria — if it lacks clarity — it would be better to have a tester perform it manually.

Features that resist automation

Some features are designed to resist automation, such as CAPTCHAs on web forms. Rather than attempting to automate the CAPTCHA, it would be better to disable the CAPTCHA in your test environment or have the developers create an entry into the application that bypasses CAPTCHA for testing purposes. If that isn’t possible, another solution is to have a tester manually complete the CAPTCHA and then execute the automated test after passing the CAPTCHA. Just include logic in the test that pauses until the tester is able to complete the CAPTCHA, and then resumes the test once login success is returned.
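
One simple way to implement that pause in a code-based test is to wait, with a generous timeout, for an element that only appears after the tester has completed the CAPTCHA and the login has succeeded. The sketch below uses Selenium; the page URL, element IDs, and timeout are hypothetical.

```python
# Pause until a human completes the CAPTCHA, then continue the automated flow.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")   # hypothetical login page with a CAPTCHA

print("Please complete the CAPTCHA and submit the login form...")

# Wait up to 5 minutes for a post-login element that only a human can reach.
WebDriverWait(driver, timeout=300).until(
    EC.presence_of_element_located((By.ID, "account-dashboard"))
)

# From here on, the automated steps continue as usual.
driver.find_element(By.ID, "profile-menu").click()
```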

Unstable features

It is best to test unstable features manually. As mentioned above, invest the effort in automation once the feature has reached a stable point in development.

Native O/S features on mobile devices

Particularly on Apple iOS, non-instrumented native system apps are difficult or impossible to automate due to built-in security.

Conclusion

To ensure that you achieve your automation goals, focus your automation efforts on the right test cases. And be sure to build in time for exploratory testing and UX/usability testing – by their nature, these types of tests can’t and shouldn’t be automated.

To help determine whether or not to automate a particular test case, you can use the Test Case ROI Calculator spreadsheet. This simple spreadsheet compares the estimated time and costs to automate a test case vs. the time and costs to execute the same test case manually; it is not designed to determine the ROI of a test automation project as a whole. With a little up-front analysis, though, you can make tactical decisions about individual tests that yield the best possible results for your project: the biggest bang of completed, informative tests for the buck of the effort invested in testing.

Watch our on-demand webinar

Strategies for a Successful Automation Project: Learn how to ensure that your automation project accomplishes your goals.

A Practical Approach to Risk-Based Testing

Software applications are all designed to serve customers in some shape or form. Each of these applications has features that are expected to work when released in production, and teams are under constant pressure to deliver them at a rapid pace. As a result, there are tight deadlines for getting all of the features tested.

How do we prioritize the features to test first and evaluate which modules need more testing effort than others? The answer is risk-based testing.

This is a type of testing approach that is focused on mitigating risks and prioritizing the testing effort accordingly. It involves assessing the risks based on software complexity, impact on the customer, and history of production problems, and finally arriving at a risk score that determines the level of testing effort needed for each module.

For example, say you are testing two requirements. Requirement A is to build a payment functionality so that customers can make a payment via the web page, and requirement B is to change the font size throughout the webpage from 14 to 16. If requirement A is not implemented correctly, it has a high impact on the business and the customer, as none of the payments for your services may go through. This may result in a huge financial loss and a bad customer experience. If requirement B does not work as expected, it is an issue, but the customer and the business are not financially affected, and the problem may even go unnoticed by the customer.

So, if there are three days to test the above requirements, it would make more sense to allocate a majority of the time for testing requirement A and the remaining time (if any) for testing requirement B. This way you are prioritizing the testing effort based on risks and impact.

How to Perform Risk-Based Testing

Risk-based testing consists of two stages:

  • Getting clarity on what needs to be tested
  • Conducting a formal risk analysis

Getting Clarity

The members of a development team should develop a common understanding of the different features that have to be tested as part of the application.

Some questions that help in the process are:

  • What features have to be tested?
  • How much time do you have to test?
  • How many resources do you have for the testing effort?
  • What are some of the high-risk areas of the application?
  • What should be outside the scope of testing?
  • What metrics will be used to measure testing progress?

Once there is a clear understanding of the answers to these questions, a team is ready to conduct a risk analysis.

Conducting a formal risk analysis

The first step toward risk-based testing is doing a formal risk analysis. Begin by:

  • Targeting different modules to test
  • Identifying different risks associated with each module
  • Gathering scores related to impact, complexity and history of production problems
  • Calculating risk scores

The next step is to identify different types of testing to be performed and how much effort should be dedicated to each module based on the risk score.

For example, say we have a flight-booking application. There are various features that the application offers, but the impact on the customer if some of the features do not work varies.

If we do a formal risk analysis, these are things we may identify:

  • Module: Something we might want to test or test for
  • Risks: Direct, concise descriptions of potential problems and their impact
  • Impact rating: The business representatives’ severity rating of potential impact on the customer from 1 to 5, where 5 = worst impact and 1 = least impact
  • Technical rating: The solutions architect or tech lead’s rating of the likelihood of a problem occurring in the module based on complexity, churn, dependencies or other technical aspects of the implementation from 1 to 5, where 5 = highest likelihood and 1 = least likelihood
  • Historical rating: The problem subject matter expert’s rating based on historical production problems from 1 to 5, where 5 = highest likelihood and 1 = least likelihood
Module: Flight search
Risks:
  • Customers are not able to search for required flights
  • Customers are not able to search for flights with more than one adult
  • Customers are able to search for flights for dates that are past the current date
  • Customers are not getting the same search results in the desktop, mobile web, and mobile native apps
  • Prices do not change when I add minors to the trip
Impact Rating: 5 | Technical Rating: 5 | Historical Rating: 5 | Module Rating¹: 5 | Risk Score²: 25

Module: Flight booking
Risks:
  • Customers are not able to book round-trip flights
  • Customers are not able to book one-way flights
  • Customers are not able to book multi-city flights
  • Customers are not able to book flights in incognito mode
  • Customers are not able to book Basic status flights
Impact Rating: 5 | Technical Rating: 4 | Historical Rating: 3 | Module Rating¹: 4 | Risk Score²: 20

¹ Max of technical and historical ratings
² Impact rating × module rating
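
Because the arithmetic is simple (the module rating is the maximum of the technical and historical ratings, and the risk score is the impact rating times the module rating), the scoring is easy to script so the team can rerun it whenever ratings change. Below is a minimal sketch, with the two modules above hard-coded as sample data.

```python
# Compute module ratings and risk scores from the three 1-to-5 ratings.
# Module rating = max(technical, historical); risk score = impact * module rating.

modules = [
    # (module, impact, technical, historical)
    ("Flight search", 5, 5, 5),
    ("Flight booking", 5, 4, 3),
]


def risk_score(impact, technical, historical):
    module_rating = max(technical, historical)
    return module_rating, impact * module_rating


# Highest-risk modules first, so they get the bulk of the testing effort.
scored = sorted(
    ((name, *risk_score(i, t, h)) for name, i, t, h in modules),
    key=lambda row: row[2],
    reverse=True,
)
for name, module_rating, score in scored:
    print(f"{name}: module rating {module_rating}, risk score {score}")
```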

After identifying the risk scores, we can start prioritizing our testing in terms of:

  • How much manual testing effort is needed
  • What flows have to be automated
  • What regression tests have to be run
  • How many exploratory testing sessions would be needed, and on what flows

In this way, testing is focused on high-risk modules, which have a bigger impact on the customer and the company. This method also helps find critical defects as soon as possible.

We only have a certain amount of time for testing, so it pays to use that time wisely. If you’re looking to make your testing more impactful, start testing based on risk.

10 Best Practices in Test Automation #7: Integrate with a CI Pipeline

Rapid application development practices such as Continuous Integration/Continuous Deployment (CI/CD), and DevOps have a common goal: small, frequent releases of high-quality, “working” software. Whether your development cycle is measured in weeks or days, integrated automated tests are essential to maintaining the pace of development.

Automated tests in a CI pipeline

The image below shows a typical CI pipeline. A developer checks out code from the shared repository in the version control system, such as Git, TFS or Subversion. Once code changes are complete, the developer commits the change back to the version control system, triggering a CI job. The CI server builds the application under test and triggers automated tests to verify whether the new code results in a good, “green” build. The results of testing are reported back to the entire team for a decision regarding deployment of the application. In a CD environment, the application is deployed automatically to the production environment.

A continuous integration pipeline

Continuous integration with automated testing offers several benefits to organizations, including the following:

  • Developers get fast feedback on the quality, functionality, or system-wide impact of their code changes at the point in the cycle when defects are easier and less expensive to fix.
  • Frequent integration of small changes reduces the volume of merge conflicts that can occur when several developers are working on the same application code, and makes merge conflicts easier to resolve when they do happen.
  • Everyone on the team has a clear understanding of the status of the build.
  • A current “good build” of the application is always available for testing, demonstration, or release.
  • Frequent releases make for good practice in a successful release process. Rather than hazards to be avoided, updates become routine events in a healthy software development lifecycle (SDLC).

Recommendations for automated testing in a CI pipeline

The recommendations below focus on test automation in a CI pipeline, some of which overlap the best practices for the CI process itself. Read more about best practices for a CI process in the Wikipedia article on Continuous Integration.

Use source control for your automated tests

The Twelve-Factor App is a methodology for building software-as-a-service that is widely regarded as authoritative for general best practices in modern coding. Among its factors, the very first is “one codebase tracked in revision control.”

That’s the starting point for all automation, integration, and release-management efforts: maintain your automated tests under revision control, and in fact in the same repository as your code. Good control over test sources pays off in many ways, first among these being to make it easier to match correctly the version of a test to the version of the source code. Ranorex Studio integrates with popular solutions for source control including Git, Microsoft Team Foundation Server, and Subversion.

An independent quality assurance (QA) team responsible for specialized tests might maintain sources in a separate repository. In all cases, though, the main points remain:

  • Sources, including test sources, need to be under revision control
  • Programming source and CI test source need to be coordinated

Don't rely solely on unit tests

Unit testing in an individual developer’s local environment doesn’t tell you enough about how that code will work once it is introduced to the production application. Integration of new or revised code may cause a build to fail for several reasons. For example, changes made by another developer may conflict with the new code, or there may be differences between the developer’s local environment and the production environment. Therefore, it’s important to run integration tests, regression tests and high-priority functional UI tests as part of the build verification process.

Notice, too, the distinction between “unit testing in an individual developer’s local environment”, and the “green” build mentioned above. The latter is critical: a standardized, repeatable criterion with a definite relationship to a consistent test environment. Reliance on developers to submit what passes in a local environment introduces too much uncertainty.

With that standardized test environment in place, extension of CI from unit testing to integration tests (and more) is natural.

Make your build self-testing

Component testing checks individual units of code. Component testing is often called unit testing, but may also be called module testing or program testing. Developers write and execute unit tests to find and fix defects in their code as early as possible in the development process. This is critical in agile development environments, where short release cycles require fast test feedback. Unit tests are white-box tests because they are written with a knowledge of the code being checked.

Refactor your automated tests

A build that takes a long time to complete disrupts the CI process. To keep your testing efficient, approach your automated testing code as you would the application code itself. Regularly look for redundancies that can be eliminated, such as multiple tests that cover the same feature or data-driven tests that use repetitive data values. Techniques such as boundary value analysis and equivalence partitioning can help reduce your data-driven testing to just the essential cases.

Automated tests frequently bottleneck on network operations, or, more generally, use of external resources. When developers notice tests slowing, one of the remedies your refactoring can take is to mock those resources for fast results.

Keep your build fast

It’s essential that build tests complete as quickly as possible so that developers aren’t discouraged from committing frequently. To keep the process fast, trigger the minimum automated tests required to validate your build.

Due to their more complex nature, integration tests are usually slower than unit tests. Run your smoke and sanity tests first to rapidly identify a broken build before spending time on additional tests. If your team merges frequently, it may be more efficient to run integration tests only for daily builds rather than every merge.

Run a full regression only when necessary, such as in preparation for deployment to the production system. For example, Ranorex Studio supports the use of run configurations to run a subset of tests, such as smoke tests or tests just for features or modules of the application that have changed. Exclude from the build test set any low-priority regression test cases that haven’t found a defect in several test cycles.

Test in the right environment

To minimize the chance of test failures due to issues such as incorrect O/S version or missing services, test in an environment that is stable. Ideally, you will have an isolated test platform that is dedicated solely to testing. Your test environment should also be as identical as possible to the production environment, but this can be challenging. Realistically, it may be necessary to virtualize or mock certain dependencies such as third-party applications. In complex environments, a virtualization platform or solution such as Docker containers may be an efficient approach to replication of the production environment.

Test in parallel

Speed is essential in a CI/CD environment, as two previous sections already mentioned. Quick return of results pays off enormously in making the most of developers’ flow. Save time by distributing your automated tests on a Selenium grid or running them in parallel on multiple physical or virtual servers. As mentioned earlier in this series, keep your automated tests as modular and independent of each other as possible so that you can test in parallel.

Include functional UI and exploratory testing

It takes a combination of automated testing approaches to confirm that your application is ready for deployment to the production environment. In addition to your automated unit and integration tests, include automated user interface tests to verify core functionality, check common user paths through the application end-to-end and validate complex workflows. Exploratory testing can uncover defects that automated tests miss.

Verify your deployment

After deploying the new build to the production environment, run your smoke tests in the production environment as a quick check to ensure a successful deployment.

To learn more about how to integrate Ranorex Studio tests in your CI pipeline, read our blog article Integrate Automated Testing into Jenkins. While this article focuses on Jenkins, Ranorex tests can be triggered from any CI server process, including Bamboo and TeamCity.

The Power of Session-Based Exploratory Testing

The field of software testing has changed significantly in the past several decades, but at least one thing will remain the same: Testers will continue to perform exploratory testing. Exploratory testing is a powerful approach that helps testers tap into their creativity and experience. It is simultaneous learning, test design and execution.

Historically, teams have performed exploratory testing in several ways using different tools and strategies. But if testers explore the product in an unstructured manner, it may be difficult to gather specific information to understand how this process fits with the overall testing effort, including automation and manual testing. Unstructured testing might lead to a misconception that an exploratory approach is not useful, particularly when teams already have robust test processes and strategies in place.

This is where session-based exploratory testing (SBET) can help.

What is SBET?

SBET uses uninterrupted testing sessions that are time-boxed, usually from 45 to 90 minutes, focused on a particular module, feature or scenario. During the session, testers document various information about their testing in what’s called a charter document.

This is a document that contains all the details about the session: the goal of the session, resources used, task breakdowns with time spent performing different tasks, notes containing helpful information along with test ideas and observations, issues uncovered during the session, and any screenshots.

With this document, everyone knows the details about the session and how much time was spent on it. The document can be attached to a story or any repository where you house your test artifacts.

When should you use SBET?

SBET can be used in various scenarios to cover edge cases and complement the existing testing effort. Startup companies that do not have any testing process can start doing SBET to learn about the product, test user stories, use the documentation from the session to create scripted test cases, and automate features.

Mid- and large-sized companies that have a formal testing process can use SBET at various stages of their development process, to test high-risk areas, and to evaluate features that are hard to automate. They can even use it in parallel to the regression tests they already do.

To release products faster, teams want quick feedback about the system. One way to do this is by using SBET before pushing builds to QA, to get more test coverage, to cover edge cases that people may not think of, and during user acceptance testing before pushing the feature to production.

How does SBET support automation efforts?

SBET is a great way to learn about different application features and which ones carry higher risk than others. Based on this, teams can start prioritizing features for automation and will have a better idea of what to automate — and what not to. It also helps to gather test cases for automation, as all the effort during an SBET session is documented for future reference.

There are various tools available to document details of an SBET session and directly export the recorded documentation to test cases. This is a powerful approach to help new or less experienced testers pair up with more experienced ones to learn about an application while simultaneously testing it.

Remember, SBET is not a replacement for scripted test case execution; it is performed complementary to it. It is an approach that helps testers exercise their creativity and experience while getting more information about the product. As a result, stakeholders can make informed decisions, and the test team can better prioritize their efforts.
