Automated Testing Best Practices: WebdriverIO with JavaScript

Building a Robust Test Automation Framework with WebdriverIO: Best Practices

Efficient test automation is crucial for reliable software testing, and WebdriverIO provides a robust framework to achieve this. This blog will highlight best practices that enhance the performance and maintainability of your automation efforts. We’ll cover key topics such as setting up your test environment, adopting the Page Object Model (POM) for better test organization, and leveraging WebdriverIO commands effectively.

Additionally, we’ll explore strategies for parallel test execution to reduce runtime, best practices for locating elements to avoid flakiness, and optimizing test reliability with custom waits. We’ll also address common pitfalls, cross-browser testing integration with platforms like BrowserStack, and maintaining test stability with retry mechanisms.

By the end of this blog, you’ll have practical insights to enhance your WebdriverIO automation strategy, ensuring a smoother and more efficient testing process.


Setting Up Your Test Environment Efficiently

To harness the full potential of WebdriverIO (WDIO), you need a solid test environment. Here's how to get one in place.

For setting up WebdriverIO, you can refer to our WebdriverIO Setup guide.
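
If you are starting from scratch, a minimal scaffold looks roughly like the following (the wizard's prompts and the generated wdio.conf.js will vary by project):

# Scaffold a new WebdriverIO project with the configuration wizard
npm init -y
npm install --save-dev @wdio/cli
npx wdio config
npx wdio run wdio.conf.js

The config wizard asks about your framework (Mocha, Jasmine, or Cucumber), reporters, and services, then generates a wdio.conf.js you can commit and extend.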

Integrating the Page Object Model (POM) for Better Test Organization

The Page Object Model (POM) is a design pattern that improves test automation by organizing page elements and their methods in separate files, away from test files. This makes your tests more manageable, especially when working with applications that have multiple pages or complex workflows.

🎯 Benefits of POM in Large-Scale Projects

  • Centralized Maintenance
    • UI changes are only updated in the relevant page object file, reducing effort.
  • Code Reusability
    • Page methods (like login or search) can be reused across multiple test cases.
  • Improved Readability
    • Test scripts become concise, focusing only on business logic and assertions.
  • Better Scalability
    • Adding new pages or features becomes easier by extending existing page objects.
  • Reduced Flakiness
    • Encapsulating waits or interactions inside page objects makes tests more stable and reliable.

Example: Refactoring Tests Using POM

  • Without POM (Direct Test Logic in Tests)

describe('Login Test', () => {
  it('should log in successfully', async () => {
    await browser.url('https://www.saucedemo.com/v1/');
    const username = await $('#user-name');
    const password = await $('#password');
    const loginButton = await $('#login-button');

    await username.setValue('standard_user');
    await password.setValue('secret_sauce');
    await loginButton.click();

    const message = await $('//div[@class="product_label"]').getText();
    expect(message).toBe('Products');
  });
});
  • With POM (Refactored Approach)

Login Page Object (login.page.js):

class LoginPage {
 get username() { return $('#user-name'); }
 get password() { return $('#password'); }
 get loginButton() { return $('#login-button'); }
 get welcomeMessage() { return $('//div[@class="product_label"]'); }


 async open() {
   await browser.url('https://www.saucedemo.com/v1/');
 }
 async login(user, pass) {
   await this.username.setValue(user);
   await this.password.setValue(pass);
   await this.loginButton.click();
 }
 async getWelcomeMessage() {
   return this.welcomeMessage.getText();
 }
}
export default new LoginPage();
  • Refactored Test Script (login.test.js)
import login from '../../PageObjects/SauceLabPo/login.page.js';


describe('Login Test using POM', () => {
 it('should log in successfully', async () => {
   await login.open();
   await login.login('standard_user', 'secret_sauce');
   const message = await login.getWelcomeMessage();


   expect(message).toBe('Products');
 });
});

🎯 Benefits of Refactoring with POM

  • Centralized Maintenance: Any change to the login form only requires updates in login.page.js.
  • Clean Test Scripts: Tests now focus on validation rather than page interactions.
  • Scalability: Adding new tests becomes easier by reusing the Login Page methods.

Efficient Use of WebdriverIO Commands

Let’s explore some best practices for using WebdriverIO commands to enhance test efficiency and reliability.

1. When to Avoid Protocol Methods in WebdriverIO

Protocol methods (e.g., browser.elementClick(), browser.executeScript()) communicate directly with the WebDriver protocol, bypassing WebdriverIO’s built-in error handling, automatic retries, and waits. Using these methods can lead to flaky tests, especially if elements aren’t available yet due to dynamic content or latency.

When to Avoid:

  • UI is not fully loaded: Use WDIO commands like element.click() which automatically retry until the element is ready.
  • Inconsistent behaviour: Use methods like .waitForDisplayed() to ensure stability before performing actions.
const button = await $('#login-button');


// Avoid this
await browser.elementClick(button.elementId);


// Use this instead
await button.click(); // WDIO retries on failure

2. Use of .waitFor and Dynamic Waits over Static Timeouts

Static waits (browser.pause()) halt execution for a fixed duration (e.g., pause(5000)), which slows tests unnecessarily and makes them fragile: even if an element becomes available earlier, the test still waits the full duration. Dynamic waits, such as .waitForDisplayed(), pause only until the element is ready, improving both stability and speed.

  • Better Alternative: Use dynamic waits such as .waitForDisplayed() to pause only as long as needed.

Example:

const button = await $('#login-button');
await button.waitForDisplayed({ timeout: 5000 }); // Waits up to 5 seconds


// Avoid this
await browser.pause(5000); // Always waits 5 seconds, even if unnecessary

3. Configuring WDIO for Parallel Execution

Running tests in parallel shortens test execution time, especially for large suites. Configure parallel execution in the wdio.conf.js file using the maxInstances setting.

Parallel execution enables WebdriverIO to run multiple test cases or browser instances simultaneously. Instead of running tests sequentially (one after another), it divides the workload across available instances (browsers or devices), significantly reducing overall test time.

For example:

  • If you have 10 test files and set maxInstances: 5, WDIO will launch 5 tests at once, then start the next 5 when the first batch completes.
  • In cloud platforms like BrowserStack, parallel execution spreads tests across multiple devices or browsers, ensuring faster coverage and scalability.
exports.config = {
 maxInstances: 5, // Run 5 tests in parallel
 capabilities: [{ browserName: 'chrome' }],
};

4. Reducing Test Runtime in CI Environments with Parallel Workers

When running tests in CI/CD, use parallel workers to distribute tests across multiple runners. Tools like Jenkins, GitHub Actions, or GitLab CI allow splitting test suites by tags or groups.

Example: Split tests using tags or groups.

npx wdio run wdio.conf.js --suite login
npx wdio run wdio.conf.js --suite checkout
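
Note that the --suite flag only works if the suites are declared in wdio.conf.js first; a minimal sketch (the spec paths are assumptions):

// wdio.conf.js: declare the suites referenced by --suite
exports.config = {
 suites: {
   login: ['./test/specs/login.test.js'],
   checkout: ['./test/specs/checkout.test.js'],
 },
 // ...rest of the configuration
};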

In your CI pipeline, assign suites to parallel workers:

jobs:
 test-login:
   runs-on: ubuntu-latest
   steps:
     - run: npx wdio run wdio.conf.js --suite login


 test-checkout:
   runs-on: ubuntu-latest
   steps:
     - run: npx wdio run wdio.conf.js --suite checkout

Using parallel workers reduces the execution time by distributing tests across multiple agents, which is essential for fast feedback in CI pipelines.

🎗️ Best Practices for Locating Elements in WebdriverIO

Strategies to Avoid Flaky Element Selectors

Flaky selectors can break tests when UI elements change. Here are key strategies to make selectors reliable:

  • Use Unique Attributes: Prefer id, data-testid, or custom attributes (e.g., data-test) over CSS classes, which may change during UI updates.
  • Avoid Absolute XPaths: Instead, use relative XPath (e.g., //button[text()='Submit']).
  • Wait for Element States: Use dynamic waits like .waitForDisplayed() to ensure elements are ready.
  • Use CSS over XPath: CSS selectors are often faster and more readable.
// Good: Reliable CSS selector with custom attribute
const submitButton = await $('[data-test="submit-button"]');


// Avoid: Unreliable XPath with complex hierarchy
const submitButton = await $('//div[2]/form/button[1]');

Using Custom Locators Effectively in Complex Applications

In complex UIs, elements may not have unique attributes. You can define custom locators to improve test reliability.

Example of Custom Locator:

// Define a custom selector strategy (e.g., locating elements by partial text)
// Note: the callback runs in the browser context, so only DOM APIs are available
browser.addLocatorStrategy('partialText', (text) => {
  return Array.from(document.querySelectorAll('*'))
    .filter((el) => el.textContent.includes(text));
});

// Use the custom locator in a test via custom$
const element = await browser.custom$('partialText', 'Welcome');
await element.click();

This approach makes interacting with tricky elements simpler and more maintainable over time.

Optimizing Test Reliability with Custom Waits

Custom wait utilities improve test stability, especially in scenarios where standard wait methods (like .waitForDisplayed()) aren’t sufficient.

Crafting Custom Wait Utilities for Flaky Scenarios

Sometimes, elements may take longer to appear or change state due to dynamic content, animations, or network delays. A custom wait utility ensures your tests only proceed when specific conditions are met, reducing flakiness.

Example: Custom waitForElementText Utility

async function waitForElementText(selector, expectedText, timeout = 5000) {
  await browser.waitUntil(
    async () => (await $(selector).getText()) === expectedText,
    { timeout, timeoutMsg: `Text not found: ${expectedText}` }
  );
}

Usage:

await waitForElementText('#status', 'Success', 3000);

Implementing waitForShadowDom for Shadow DOM Elements

Shadow DOM elements are encapsulated and require special handling. A custom wait method ensures you can reliably interact with them.

Example: Custom waitForShadowDom Utility

async function waitForShadowDom(selector, timeout = 5000) {
  await browser.waitUntil(
    async () => {
      const host = await $(selector);
      // Evaluate in the browser context: does the host element have a shadow root?
      return browser.execute((el) => el.shadowRoot !== null, host);
    },
    { timeout, timeoutMsg: `Shadow DOM not found for ${selector}` }
  );
}

Usage:

await waitForShadowDom('#shadow-host');
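
Once the shadow root is present, WebdriverIO's built-in shadow$ command can query inside it. A short sketch (the host and inner selectors are assumptions):

// Query inside the shadow root with WebdriverIO's shadow$ command
const host = await $('#shadow-host');
const innerButton = await host.shadow$('button.submit');
await innerButton.click();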

Avoiding Common Pitfalls in WDIO Tests

Caching Elements—Why It’s a Bad Practice

Caching elements means storing references to them (e.g., const button = $('#btn');) and reusing them throughout the test. This practice is problematic because DOM elements may change between interactions (due to re-renders or state changes), causing stale element exceptions.

Example of Stale Element Issue:

// Cache element reference
const button = await $('#btn');
// If the DOM updates, this button reference becomes stale
await button.click(); // Might throw an error

Solution: Always fetch elements fresh right before interacting with them.

// Get element fresh before each interaction
await $('#btn').click();

Avoiding these pitfalls helps keep tests fast, maintainable, and stable, reducing flakiness in WebdriverIO automation.

Optimizing Cross-Browser Testing with WDIO

Best Practices for Handling Browser Compatibility

  • Use Standard Web Locators: Avoid browser-specific selectors that may behave differently across browsers.
  • Incorporate Dynamic Waits: Different browsers may render elements at different speeds. Use .waitForDisplayed() instead of pause().
  • Set Browser-Specific Capabilities: Define capabilities for browsers (like Chrome, Firefox) to handle known differences.
  • Enable Headless Testing: Use headless mode in CI pipelines to speed up cross-browser tests (see the sketch below).
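
As a sketch, headless Chrome can be enabled through goog:chromeOptions in wdio.conf.js (the exact flags may need adjusting for your Chrome version):

// wdio.conf.js: run Chrome headless in CI
exports.config = {
 capabilities: [{
   browserName: 'chrome',
   'goog:chromeOptions': {
     args: ['--headless=new', '--disable-gpu', '--window-size=1920,1080'],
   },
 }],
};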

Example: Integrating BrowserStack and Sauce Labs

BrowserStack Configuration in wdio.conf.js:

exports.config = {
   user: process.env.BROWSERSTACK_USERNAME,
   key: process.env.BROWSERSTACK_ACCESS_KEY,
   services: ['browserstack'],
   capabilities: [
     { browserName: 'chrome', os: 'Windows', os_version: '10' },
     { browserName: 'firefox', os: 'OS X', os_version: 'Monterey' },
   ],
 };

With services like BrowserStack or Sauce Labs, you can run tests across multiple browsers and platforms without managing local environments. This ensures better compatibility coverage and faster feedback in CI/CD pipelines.

Maintaining Test Stability with Retry Mechanisms

Configuring Retries in the WebdriverIO (WDIO) Configuration

You can implement retries in the WDIO configuration to rerun failed tests. Here’s how to do it:

Example: wdio.conf.js

exports.config = {
   // Retry failed tests (Mocha's test-level retries)
   mochaOpts: {
     retries: 2, // Retries each failed test up to 2 times
   },
   // Retry whole spec files that contain failures
   specFileRetries: 2, // Retries individual spec files
   specFileRetriesDelay: 5, // Delay (in seconds) before a retry
   // Retry tests based on worker level (optional)
   specFileRetriesDeferred: true, // Defers retries to the end of the run
   // Other configurations
   runner: 'local',
   framework: 'mocha',  // or 'cucumber', 'jasmine'
   capabilities: [{
       maxInstances: 5,
       browserName: 'chrome',
     }
   ],
   reporters: ['spec'],
};

Explanation:

  • mochaOpts.retries: Retries each failed test within a spec up to the given number of times.
  • specFileRetries: Retries a particular test/spec file.
  • specFileRetriesDelay: Introduces a delay before retrying the spec file.
  • specFileRetriesDeferred: If true, retries are deferred until all other tests have run.

This helps maintain test stability by reducing transient failures, especially useful in CI pipelines.

Strategies for Reducing Flakiness in CI Pipelines

Here are some strategies to reduce flakiness in CI:

Stabilize Test Data and Environments

  • Use Mock Data: Avoid relying on external systems by mocking APIs and databases (see the sketch after this list).
  • Isolate Test Environments: Run tests on fresh, isolated environments (e.g., Docker containers).
  • Set Timeouts Carefully: Adjust timeouts based on expected response times and network variability.
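
For API mocking specifically, WebdriverIO ships a browser.mock command (note it requires a Chromium-based browser over DevTools, or WebDriver Bidi in recent versions). A sketch with an assumed endpoint and payload:

// Stub a backend call so the test does not depend on a live API
const mock = await browser.mock('**/api/products', { method: 'get' });
mock.respond([{ id: 1, name: 'Mock Product' }], {
 statusCode: 200,
 headers: { 'Content-Type': 'application/json' },
});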

Optimize Test Execution

  • Parallel Execution: Run independent tests in parallel to cut overall runtime.
  • Rerun Failed Tests: Use retries for intermittent issues, as configured above.
  • Queue Management: Limit parallel jobs if your CI/CD infrastructure faces bottlenecks.

Use Better Synchronization Techniques

  • Avoid Hard Waits: Replace static waits with WebDriver waits (e.g., waitForDisplayed).
  • Poll for State Changes: Use retries or polling for state-dependent elements (see the sketch below).
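
A minimal polling sketch using browser.waitUntil, assuming a #cart-count badge that updates asynchronously:

// Poll every 500 ms until the badge reflects the new state
await browser.waitUntil(
 async () => (await $('#cart-count').getText()) === '1',
 { timeout: 10000, interval: 500, timeoutMsg: 'Cart count never updated' }
);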

Monitor CI Infrastructure Performance

  • Reduce Browser Resource Usage: Use headless browsers or adjust resolution to save resources.
  • Detect Bottlenecks: Analyze job durations and resource utilization to identify bottlenecks.

Improve Test Quality

  • Modularize Tests: Break down large tests into smaller, independent ones.
  • Handle External Dependencies Gracefully: Add fallbacks for API rate limits or timeouts.
  • Log and Debug Better: Enable logs, screenshots, or video capture for failed tests to make debugging easier.

Employ Smart Retries with CI Tools

  • Use tools like Jenkins, GitHub Actions, or CircleCI to configure test reruns based on exit codes. Example: Use retry plugins in Jenkins or workflows with retry steps in GitHub Actions.
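
One minimal pattern is re-running the command once on a non-zero exit code in a shell step (a dedicated retry action or plugin is usually cleaner):

# GitHub Actions: naive one-shot retry in a shell step
- name: Run tests with one retry
  run: npx wdio run wdio.conf.js || npx wdio run wdio.conf.js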

Utilize Hooks for Setup Tasks in WebdriverIO

What Are Hooks in WebdriverIO?

Hooks are lifecycle methods that run before or after specific events in a test’s execution, like test suites, test cases, or session initialization. They help streamline repetitive tasks like test environment setup, logging, and resource cleanup.

Types of Hooks in WebdriverIO

  • beforeSession and afterSession
    • Use case: Setup and teardown tasks that need to run once per session.
    • Example: Configuring environment variables or clearing logs.
beforeSession: function (config, capabilities, specs) {
 console.log("Starting a new test session.");
 // Set environment-specific variables
 process.env.TEST_ENV = 'staging';
},
afterSession: function (config, capabilities, specs) {
 console.log("Test session ended.");
 // Perform cleanup tasks
}
  • before and after
    • Use case: Run setup/cleanup logic once before or after all tests.
    • Example: Database connection or API token generation.
before: async function (capabilities, specs) {
 console.log("Running global setup.");
 // e.g., establish a database connection or generate an API token
},
after: async function (result, capabilities, specs) {
 console.log("Running global teardown.");
 // e.g., close the database connection
}
  • beforeSuite and afterSuite
    • Use case: Manage pre-test preparations for a specific suite.
    • Example: Seeding test data or resetting a particular app state.
beforeSuite: function (suite) {
 console.log(`Preparing suite: ${suite.title}`);
 // Seed database with mock data
 seedDatabase();
},
afterSuite: function (suite) {
 console.log(`Finished suite: ${suite.title}`);
 // Clear any suite-specific data
}
  • beforeTest and afterTest
    • Use case: Handle setup/cleanup at the individual test level.
    • Example: Resetting app state before each test, or capturing a screenshot when a test fails.
beforeTest: async function (test) {
 console.log(`Starting test: ${test.title}`);
 // Reset app state before each test
 await browser.reloadSession();
},
afterTest: async function (test, context, { error, result, duration, passed }) {
 if (!passed) {
   console.error(`Test failed: ${test.title}`);
   // Capture a screenshot on failure
   await browser.saveScreenshot(`./screenshots/${test.title}.png`);
 }
}
  • onComplete Hook
    • Use case: Actions after all test executions, such as generating reports.
onComplete: function (exitCode, config, capabilities) {
 console.log("All tests completed.");
 // Generate test report
 generateTestReport();
}

Benefits of Using Hooks in WebdriverIO

  • Code Reusability: Centralize common setup tasks, reducing duplication.
  • Improved Test Reliability: Ensure the environment is ready before tests run.
  • Clean Up Resources: Free up memory and avoid state issues by running teardown logic.
  • Consistency: Reduce human error by automating initialization and teardown across all tests.
  • Simplified CI/CD Pipelines: Automatically generate reports and manage logs at the session or test level.

Best Practices for Using Hooks in WebdriverIO

  • Avoid using browser.pause() inside hooks to maintain test speed.
  • Use conditional logic for environment-specific setups (e.g., different actions for staging vs. production); see the sketch after this list.
  • Modularize reusable functions (e.g., seedDatabase()) to keep hook logic concise and maintainable.
  • Capture relevant test data (like screenshots or logs) in afterTest hooks for failed tests.
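
For the environment-specific point above, a conditional inside a hook keeps one config serving several targets; a sketch assuming a TEST_ENV variable and hypothetical URLs:

// wdio.conf.js: environment-aware setup (TEST_ENV and the URLs are assumptions)
before: async function () {
 const baseUrl = process.env.TEST_ENV === 'staging'
   ? 'https://staging.example.com'
   : 'https://example.com';
 await browser.url(baseUrl);
},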

Use Custom Commands for Reusability

What Are Custom Commands in WebdriverIO?

Custom commands are user-defined functions that extend WebdriverIO’s default command set with reusable logic tailored to your testing needs. Instead of repeating the same code for recurring tasks (like login flows, form submissions, or complex assertions) across multiple tests, you encapsulate it in a command and call it throughout your test suite, improving maintainability and readability.

How to Create Custom Commands in WebdriverIO

Basic Syntax of a Custom Command

You can define custom commands inside the WebdriverIO configuration or in a separate file.

Syntax for Registering a Custom Command:

browser.addCommand('login', async (username, password) => {
 await $('#username').setValue(username);
 await $('#password').setValue(password);
 await $('#login-button').click();
});

In this example:

  • The custom login command accepts username and password as parameters.
  • It interacts with the username and password fields, then clicks the login button.

Using the Custom Command in Tests

Once added, you can use the login command as part of your test scripts:

it('should login with valid credentials', async () => {
 await browser.url('https://example.com/login');
 await browser.login('testuser', 'securepassword');
});

Adding Commands to Specific Elements

You can also define commands for specific WebdriverIO elements:

browser.addCommand('waitAndClick', async function () {
 await this.waitForDisplayed();
 await this.click();
}, true); // Pass `true` to make it an element-level command

// Usage in test
it('should wait and click on the button', async () => {
 const button = await $('#submit-button');
 await button.waitAndClick();
});

Best Practices for Custom Commands

  • Encapsulate complex logic: Commands should handle intricate or repetitive flows like login, navigation, or data setup.
  • Promote test readability: Use descriptive names for commands to make tests more intuitive.
  • Keep commands modular: Create commands that handle small, discrete tasks to avoid bloated logic.
  • Use error handling: Ensure commands account for potential issues, such as missing elements or timeouts.

Benefits of Custom Commands

  • Improves Reusability: You can reuse custom commands across multiple test files, reducing code duplication.
  • Increases Test Readability: By abstracting complex flows into commands, test cases become easier to understand.
  • Centralized Maintenance: Any changes in the logic (e.g., element locators) need to be updated only once within the command.
  • Supports Complex Scenarios: Commands allow the combination of multiple Webdriver commands for more sophisticated test flows.

Example: Login Command with Error Handling

browser.addCommand('safeLogin', async (username, password) => {
 await $('#username').setValue(username);
 await $('#password').setValue(password);
 await $('#login-button').click();
 const errorMessage = await $('#error-message');
 if (await errorMessage.isDisplayed()) {
   throw new Error('Login failed: Invalid credentials');
 }
});
// Usage
it('should login safely', async () => {
 await browser.safeLogin('invalidUser', 'wrongPassword');
});

Custom Commands and Parallel Execution

Custom commands are especially useful when running tests in parallel. Encapsulating logic into commands helps ensure consistency across different threads and simplifies debugging.

Where to Define Custom Commands?

  1. In the Configuration File (wdio.conf.js): Good for project-wide custom commands (see the sketch below).
  2. In Helper Files or Page Objects: Useful for project-specific flows. For example, define a login command in the LoginPage object.
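
Project-wide commands are typically registered in the before hook of wdio.conf.js so they exist before any spec runs; a sketch reusing the login flow from earlier:

// wdio.conf.js: register custom commands once per session
before: function () {
 browser.addCommand('login', async (username, password) => {
   await $('#username').setValue(username);
   await $('#password').setValue(password);
   await $('#login-button').click();
 });
},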

Enhance Debugging with Screenshots in WebdriverIO

Taking screenshots during test execution is a critical strategy to improve debugging by capturing the application state at key moments. Screenshots provide visual feedback that helps identify issues like UI changes, element loading failures, or test flakiness. Here’s how you can use screenshots effectively with WebdriverIO.

How to Capture Screenshots with WebdriverIO

Taking Page Screenshots

You can use browser.saveScreenshot() to capture the visible viewport of the page.

it('should take a full-page screenshot', async () => {
 await browser.url('https://example.com');
 await browser.saveScreenshot('./screenshots/fullPage.png');
});

Capturing Element-Level Screenshots

You can also capture a specific element’s screenshot, which is useful for debugging element-specific issues.

it('should take a screenshot of a specific element', async () => {
 const logo = await $('#logo');
 await logo.saveScreenshot('./screenshots/logo.png');
});

Saving Screenshots on Test Failures (Using Hooks)

To automatically capture screenshots when a test fails, you can use WebdriverIO’s hooks like afterTest in the configuration file.

afterTest: async function (test, context, { passed }) {
 if (!passed) {
   const screenshotPath = `./screenshots/${test.title}.png`;
   await browser.saveScreenshot(screenshotPath);
   console.log(`Saved screenshot for failed test: ${test.title}`);
 }
}

Best Practices for Using Screenshots for Debugging

Use Descriptive File Names

Save screenshots with dynamic names based on the test title or timestamp to easily identify them in large test suites.

const timestamp = new Date().toISOString().replace(/:/g, '-'); // ':' is invalid in filenames on some systems
await browser.saveScreenshot(`./screenshots/test_${timestamp}.png`);

Capture Screenshots During Critical Flows

Take screenshots at important checkpoints in your tests, such as after page navigation or form submission, to trace where failures occur.

it('should navigate and verify screenshot', async () => {
 await browser.url('https://example.com');
 await browser.saveScreenshot('./screenshots/page_loaded.png');
 await $('#submit').click();
 await browser.saveScreenshot('./screenshots/after_click.png');
});

Integrate Screenshots with CI/CD Pipelines

Store screenshots in your CI/CD reports (e.g., Jenkins or GitHub Actions) for easier debugging.
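
In GitHub Actions, for example, the screenshots folder can be published with the upload-artifact action (a sketch; the path must match where saveScreenshot writes):

# GitHub Actions: keep screenshots from failed runs as build artifacts
- name: Upload screenshots
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: wdio-screenshots
    path: ./screenshots/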

Combine Screenshots with Logs and Videos

Use cloud platforms like BrowserStack or LambdaTest to capture screenshots alongside video recordings for enhanced debugging. These services also provide automatic screenshots for failed tests, network logs, and browser console logs.

Example: Parallel Tests with Screenshots

If you’re running tests in parallel (e.g., on BrowserStack), it’s important to handle screenshots carefully to avoid name collisions between threads. Use a unique ID or timestamp in the file names.

const workerID = browser.capabilities['bstack:options'].sessionName || 'default';
const screenshotPath = `./screenshots/${workerID}_${Date.now()}.png`;
await browser.saveScreenshot(screenshotPath);

Benefits of Debugging with Screenshots

  • Visual Context: Provides a clear view of what the user interface looked like during the test failure.
  • Faster Issue Resolution: Allows developers and testers to quickly spot UI issues without reproducing the test manually.
  • Reduces Flakiness: Helps identify subtle UI changes or timing issues that could cause flaky tests.
  • Seamless CI Integration: Automated screenshots provide immediate insights in CI reports.

Optimize Test Structure for Readability in WebdriverIO

Creating a well-structured test suite is essential for improving readability, making your code easier to maintain, debug, and scale. An optimized test structure ensures that tests remain intuitive for both current and future team members.

Key Practices for Improving Test Readability

Use Descriptive Test Names

  • Test names should clearly describe the purpose and expected behaviour of the test.

Example:

it('should display an error message for invalid login', async () => {
 // Test logic here
});

Organize Tests Using Describe Blocks

  • Group related tests logically using describe blocks to improve clarity.

Example:

describe('Login Page Tests', () => {

 it('should load the login page successfully', async () => { /*...*/ });

 it('should display an error for invalid credentials', async () => { /*...*/ });

});

Keep Tests Short and Focused

  • Each test should ideally verify one behaviour or feature to keep it focused. Long tests are harder to read and maintain.

Implement the Page Object Model (POM)

  • Use the Page Object Model to separate UI elements and logic from test scripts, improving code readability and maintainability.

Example:

class LoginPage {
 get username() { return $('#username'); }
 get password() { return $('#password'); }
 get loginButton() { return $('#login-button'); }


 async login(user, pass) {
   await this.username.setValue(user);
   await this.password.setValue(pass);
   await this.loginButton.click();
 }
}
const loginPage = new LoginPage();

Use Hooks for Setup and Cleanup

  • Use before and after hooks to handle test setup and teardown logic, keeping tests focused only on the actual behaviour they validate.

Example:

before(async () => {
 await browser.url('https://example.com');
});

after(async () => {
 await browser.deleteSession();
});

Modularize Repetitive Logic Using Custom Commands

  • Use WebdriverIO’s custom commands to encapsulate frequently used logic and make tests cleaner.

Example:

browser.addCommand('loginAsAdmin', async () => {
 await browser.url('/login');
 await $('#username').setValue('admin');
 await $('#password').setValue('admin123');
 await $('#login-button').click();
});
it('should allow admin to login', async () => {
 await browser.loginAsAdmin();
 expect(await browser.getTitle()).toBe('Admin Dashboard');
});

Avoid Hardcoding Test Data

  • Use external data files or configuration files to manage test data, reducing duplication and increasing flexibility.

Example:

const credentials = require('./data/credentials.json');
it('should login with valid credentials', async () => {
 await loginPage.login(credentials.username, credentials.password);
});

Consistent Naming Conventions

  • Follow consistent naming patterns for test files, functions, variables, and Page Objects. This makes code easier to read and understand.

Add Meaningful Assertions

  • Ensure your test assertions reflect the intent of the test, so anyone reading the code can understand what’s being validated.

Example:

expect(await $('.error-message').getText()).toBe('Invalid username or password');

Benefits of an Optimized Test Structure

  • Easier Maintenance: Clearer structure allows easier updates and bug fixes.
  • Reduced Duplication: Using Page Objects, custom commands, and data files minimizes redundant code.
  • Faster Onboarding: New team members can quickly understand and contribute to the test suite.
  • Improved Debugging: Cleaner, focused tests make it easier to identify the root cause of issues.

Conclusion

In this guide, we’ve explored what makes WebdriverIO (WDIO) a standout choice for automation testing. We’ve covered essential practices such as setting up your testing environment and utilizing the Page Object Model (POM), which significantly enhance both the reliability and maintainability of your tests.

By applying techniques like custom waits and smart element locators, you can effectively address flakiness and improve test stability. Additionally, leveraging cloud platforms for cross-browser testing ensures that your application performs smoothly across various environments.

Overall, adopting these best practices will help you streamline your automation efforts and deliver high-quality software. By staying informed about these strategies, your team will be well-equipped to maximize the benefits of WDIO in your testing endeavors.

Witness how our meticulous approach and cutting-edge solutions elevated quality and performance to new heights. Begin your journey into the world of software testing excellence. To know more, refer to Tools & Technologies & QA Services.

If you would like to learn more about the awesome services we provide, be sure to reach out.

Happy testing! 🙂