Visual testing has become essential in modern test automation, ensuring applications not only function correctly but also look flawless. Unlike functional testing, which checks behaviour, visual testing focuses on UI appearance, detecting changes in layouts, fonts, colours, and placements.
WebdriverIO, combined with TypeScript, offers a powerful solution for automating visual tests. It simplifies capturing snapshots, comparing them to baselines, and identifying visual discrepancies. This approach is especially valuable for dynamic web applications, where frequent UI changes can introduce unnoticed visual bugs. With WebdriverIO’s integrations and TypeScript’s type safety, teams can efficiently deliver visually consistent user experiences across browsers and devices.
- What is Visual Testing?
- Setting Up Your Environment for Visual Testing in WDIO
- Visual Regression Testing: The Basics
- Integrating Visual Testing Tools with WebdriverIO
- Writing Visual Testing Scripts in TypeScript with WebdriverIO
- Debugging Visual Test Failures
- Automating Visual Testing in CI/CD Pipelines
- Benefits of Automating Visual Testing in CI/CD
- Integrating WebdriverIO visual tests into Jenkins/GitHub Actions
- Example YAML configuration for a pipeline
- Example: Integrating Visual Testing with Jenkins
- Running tests on BrowserStack or Sauce Labs for cross-browser coverage
- Leverage Reporting and Debugging Tools
- Best Practices for Visual Testing in WDIO
- Real-World Implementation of Visual Testing
- Conclusion
What is Visual Testing?
Visual testing is a quality assurance process that verifies the visual aspects of an application's UI are presented to the end user as intended. It differs from functional testing, which validates the behaviour of an application; visual testing ensures that the application looks right to the end user. Its basic idea is to take snapshots of web pages or application screens and compare them to a predefined baseline image, thus capturing the look and feel of the UI. Any unintentional visual differences, including layout shifts, broken alignments, missing elements, and wrong colours, are flagged as regressions.
Visual testing is especially useful for applications with dynamic or frequently updated UIs, where a small modification can inadvertently affect the overall experience.
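Under the hood, most visual testing tools reduce this comparison to a pixel-level diff against the stored baseline. As a rough illustration of the idea (a minimal sketch using the open-source pixelmatch and pngjs libraries, not how WebdriverIO itself is implemented):
import fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

// Load the baseline and the freshly captured screenshot (assumed to be the same size)
const baseline = PNG.sync.read(fs.readFileSync('baseline/homepage.png'));
const actual = PNG.sync.read(fs.readFileSync('screenshots/homepage.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count differing pixels and write a diff image highlighting them
const mismatched = pixelmatch(baseline.data, actual.data, diff.data, width, height, {
  threshold: 0.1, // per-pixel colour tolerance
});
fs.writeFileSync('screenshots/homepage-diff.png', PNG.sync.write(diff));

// Express the result as a mismatch percentage, the way visual testing tools report it
const misMatchPercentage = (mismatched / (width * height)) * 100;
console.log(`Mismatch: ${misMatchPercentage.toFixed(2)}%`);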
The importance of visual validation in modern web applications
In today’s competitive digital landscape, user experience (UX) is as important as functionality. Visual bugs, even minor ones, can harm an application’s credibility and user satisfaction. Modern web applications must ensure:
- Cross-browser compatibility: Visual consistency across multiple browsers like Chrome, Firefox, and Safari.
- Responsive design validation: Proper rendering across various screen sizes and devices.
- Brand integrity: Ensuring fonts, colours, logos, and layouts align with brand guidelines.
Without visual testing, these aspects are difficult to validate systematically, leaving teams to rely on time-consuming and error-prone manual checks. Visual testing automates this process, ensuring consistency while saving time and effort.
Advantages of using WebdriverIO for Visual Testing
WebdriverIO is a powerful, JavaScript-based end-to-end testing framework with robust visual testing capabilities. Here’s why it’s an excellent choice for visual validation:
- Comprehensive Ecosystem: WebdriverIO supports integrations with leading visual testing tools like Applitools, Percy, and Resemble.js. These tools enhance visual comparison with advanced features like AI-driven diff detection and cross-browser testing.
- Ease of Use with TypeScript: When paired with TypeScript, WebdriverIO allows for type-safe, maintainable test scripts, reducing bugs in test code and improving developer productivity.
- Flexible Automation: WebdriverIO makes it easy to combine functional and visual tests in a single framework, streamlining test execution. For example, you can validate a button's functionality and its appearance in the same test run.
- Scalability for CI/CD Pipelines: With its built-in support for cloud-based platforms like BrowserStack and Sauce Labs, WebdriverIO can perform visual tests across a wide range of browsers, devices, and resolutions in parallel.
- Dynamic Content Handling: WebdriverIO allows testers to address common challenges in visual testing, such as ignoring dynamic content (e.g., timestamps) or animations, by configuring specific elements to exclude during comparisons.
By leveraging WebdriverIO for visual testing, teams can ensure their applications not only work flawlessly but also deliver visually consistent experiences across all environments.
Setting Up Your Environment for Visual Testing in WDIO
To implement visual testing in WebdriverIO (WDIO) using TypeScript, it’s essential to have a properly configured environment. This involves installing necessary dependencies, setting up WebdriverIO, and integrating it with visual testing tools. Below is a detailed guide with examples to help you get started.
Required Tools and Dependencies
Before you begin, ensure you have the following tools and libraries installed:
- Node.js: A runtime environment for executing JavaScript code.
- WebdriverIO: The core framework for test automation.
- For more information about WebdriverIO framework setup, you can refer to our blog WebdriverIO Setup.
- TypeScript: For writing type-safe test scripts.
- Visual Testing Tool: Choose a tool like Resemble.js, Applitools, or Percy for snapshot comparison.
Installing WebdriverIO with TypeScript
Step 1: Install WebdriverIO and TypeScript
Run the following commands to set up your project:
# Initialise a Node.js project
npm init -y
# Install WebdriverIO CLI
npm install @wdio/cli --save-dev
# Install TypeScript and its dependencies
npm install typescript ts-node @wdio/types --save-dev
# Install the visual comparison service (wdio-image-comparison-service)
npm install wdio-image-comparison-service --save-dev
Configuring the wdio.conf.ts file for visual testing
Run the WebdriverIO configuration wizard:
npx wdio config
When prompted, choose the following options:
- Select TypeScript as the language.
- Choose a framework like Mocha or Cucumber.
- Add the wdio-image-comparison-service for visual testing.
After completing the wizard, update the wdio.conf.ts file to include the image comparison service:
import { join } from 'path';
import { config as baseConfig } from './base.conf';

export const config = {
  ...baseConfig,
  services: [
    [
      'image-comparison',
      {
        // Folder where approved baseline images are stored
        baselineFolder: join(process.cwd(), './baseline/'),
        formatImageName: '{tag}-{browserName}-{width}x{height}',
        // Folder where actual and diff screenshots are written
        screenshotPath: join(process.cwd(), './screenshots/'),
        // Save a baseline automatically on the first run
        autoSaveBaseline: true,
        // Return the full compare data (misMatchPercentage, paths, ...)
        // instead of a bare mismatch number, so results can be inspected
        // as in the assertions below
        returnAllCompareData: true,
      },
    ],
  ],
};
Write a Visual Test Script
Create a test script to capture and compare screenshots. For example:
import { expect } from 'chai';

describe('Visual Testing Example', () => {
  it('should compare a webpage screenshot', async () => {
    // Navigate to the application
    await browser.url('https://example.com');
    // Capture a full-page screenshot
    const screenshot = await browser.checkFullPageScreen('homepage');
    // Assert that there are no visual differences
    expect(screenshot.misMatchPercentage).to.be.lessThan(
      1,
      'Visual differences found!'
    );
  });
});
Execute the Test
Run the test using the WebdriverIO CLI:
npx wdio run ./wdio.conf.ts
Example Output
- If the test passes: "Visual comparison succeeded: No differences detected."
- If the test fails: A visual diff highlighting the changes will be generated and stored in the screenshots folder.
With this setup, you can capture baselines, compare snapshots, and automate visual regression testing for your web applications. This approach ensures consistent UI experiences across updates and deployments.
Visual Regression Testing: The Basics
What is Visual Regression Testing?
Visual regression testing checks for unintended changes in a web or mobile application's user interface. This technique works by taking snapshots of an application's UI (baseline images) and comparing them against snapshots from subsequent runs. Differences, such as layout shifts, colour mismatches, or misaligned elements, are flagged as regressions.
Unlike functional tests, which verify that an application behaves as it should, visual regression tests make sure that the look and feel of an application does not change across updates. That is why this technique is so important in the quality assurance process, particularly for applications where UI design and user experience are vital.
Scenarios where visual testing is essential
Visual regression testing is particularly valuable in the following situations:
- Frequent UI Updates: For projects with rapid development cycles, visual testing ensures that new changes don't disrupt existing UI elements.
  - Example: Testing a redesigned homepage for layout consistency.
- Responsive Design Validation: Ensures the application appears correctly across various screen sizes and resolutions.
  - Example: Validating that a navigation menu renders properly on desktop and mobile views.
- Cross-Browser Compatibility: Detects inconsistencies in how browsers render the application.
  - Example: Ensuring that fonts and button styles are identical in Chrome and Firefox.
- Dynamic Content Validation: For pages with animations, dynamic elements, or frequent data updates, visual regression testing confirms that these don't affect the overall appearance.
  - Example: Testing a news feed where content dynamically loads without breaking the layout.
Challenges and Limitations of Visual Regression Testing
Dynamic Content Handling:
Pages with frequently changing elements like timestamps or ads can lead to false positives.
- Solution: Use tools that allow you to ignore specific regions or configure tolerance thresholds for acceptable differences.
Environment Differences:
Tests may fail due to variations in browser rendering, screen resolutions, or even OS-specific fonts.
- Solution: Run tests in consistent environments using tools like BrowserStack or Sauce Labs.
Performance Overhead:
Capturing and comparing screenshots can be time-consuming, especially for large-scale applications.
- Solution: Optimise by running visual tests only for critical paths or key pages (see the suite configuration sketch after this list).
Baseline Maintenance:
Maintaining accurate baseline images can be challenging, particularly for applications with frequent UI changes.
- Solution: Automate baseline updates after verified UI updates.
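For the performance point above, one lightweight approach is to isolate visual specs in a dedicated WebdriverIO suite so they run only for critical paths. A minimal sketch (the spec file paths are illustrative):
// wdio.conf.ts - group visual checks into a dedicated suite
export const config = {
  // ...other WebdriverIO options...
  suites: {
    visual: [
      './test/visual/home.spec.ts',
      './test/visual/checkout.spec.ts',
    ],
  },
};
The suite can then be run on demand with npx wdio run wdio.conf.ts --suite visual, the same flag used in the CI/CD examples later in this post.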
Example: Visual Regression Testing with WebdriverIO
Here’s a practical implementation of visual regression testing using WebdriverIO with the wdio-image-comparison-service:
Test Script:
describe('Visual Regression Testing Example', () => {
  it('should detect visual differences on the login page', async () => {
    // Navigate to the login page
    await browser.url('https://example.com/login');
    // Capture and compare the screenshot
    const result = await browser.checkFullPageScreen('login-page');
    // Report whether visual differences were found
    if (result.misMatchPercentage > 1) {
      console.error('Visual differences detected!', result);
    } else {
      console.log('No visual differences found.');
    }
  });
});
Output:
- Pass: "No visual differences found."
- Fail: "Visual differences detected!" (A diff image highlighting the changes is generated.)
By incorporating visual regression testing into your workflow, you can ensure that UI changes don’t disrupt the user experience, enhancing both quality and confidence in your application.
Integrating Visual Testing Tools with WebdriverIO
Integrating visual testing tools with WebdriverIO enhances test automation by adding powerful visual validation capabilities. These tools simplify capturing UI snapshots, comparing them to baseline images, and detecting unintended changes. Popular tools like Applitools, Percy, and Resemble.js offer advanced features such as AI-powered visual diffs, cross-browser testing, and dynamic content handling.
In this blog, we’ll explore how to integrate a visual testing tool, Applitools, with WebdriverIO and demonstrate its use with an example test case.
Overview of Popular Tools
- Applitools:
- Uses AI to identify visual discrepancies intelligently.
- Supports cross-browser and cross-device visual testing.
- Offers a cloud-based dashboard for managing test results.
- Percy:
- Specialises in automated visual reviews.
- Integrates seamlessly with CI/CD pipelines.
- Handles dynamic content with snapshot stabilisation (a minimal usage sketch follows this list).
- Resemble.js:
- An open-source library for pixel-based image comparison.
- Lightweight and suitable for local or small-scale projects.
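To give a feel for how lightweight these integrations can be, here is a minimal Percy sketch. It assumes the @percy/webdriverio package and a PERCY_TOKEN in your environment, and is run through the Percy CLI (npx percy exec -- wdio run wdio.conf.ts); depending on the SDK version, percySnapshot may also expect the browser instance as its first argument:
import percySnapshot from '@percy/webdriverio';

describe('Percy snapshot example', () => {
  it('should capture a snapshot of the homepage', async () => {
    await browser.url('https://example.com');
    // Uploads a DOM snapshot; Percy renders and diffs it in the cloud
    await percySnapshot('Homepage');
  });
});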
Step-by-step guide to integrating a tool
Integrating Applitools with WebdriverIO
Step 1: Install Dependencies
# Install the Applitools SDK for WebdriverIO
npm install @applitools/eyes-webdriverio --save-dev
Step 2: Configure Applitools in wdio.conf.ts
export const config = {
  // Other WebdriverIO configurations...
  services: [
    [
      'applitools',
      {
        apiKey: process.env.APPLITOOLS_API_KEY, // Store your API key securely
        viewportSize: { width: 1366, height: 768 }, // Set the viewport size for testing
      },
    ],
  ],
};
Step 3: Write a Visual Test Script
Create a test case to capture and validate the appearance of a webpage:
import { Target } from '@applitools/eyes-webdriverio';

describe('Applitools Integration Example', () => {
  it('should visually validate the homepage', async () => {
    // Navigate to the homepage
    await browser.url('https://example.com');
    // Start a visual test and check the full window
    await browser.eyesCheck('Homepage', Target.window().fully());
    // End the test and validate results
    const result = await browser.eyesClose();
    // Log the visual test results
    console.log('Visual test result:', result);
  });
});
Step 4: Execute the Test
Run the test using the WebdriverIO CLI:
npx wdio run ./wdio.conf.ts
Example Output
- Baseline Image Creation: The first run captures and stores a baseline image in the Applitools cloud.
- Snapshot Comparison: Subsequent runs compare new snapshots to the baseline, highlighting any differences directly in the Applitools dashboard.
- Visual Diff Report:
  - Pass: "No differences detected."
  - Fail: The dashboard highlights differences (e.g., colour changes, misaligned elements).
Benefits of Integrating Visual Testing Tools
- Improved Accuracy: Tools like Applitools use AI to reduce false positives from dynamic content or minor rendering differences.
- Cross-Browser Testing: Validate your application across browsers and devices effortlessly.
- Enhanced Collaboration: Dashboards make it easy for teams to review and manage visual defects.
By integrating a tool like Applitools with WebdriverIO, teams can automate visual regression testing efficiently. This ensures that UI changes don’t break the user experience, giving developers confidence in their updates. Whether you’re building a new feature or maintaining an existing application, visual testing tools are an indispensable part of modern test automation.
Writing Visual Testing Scripts in TypeScript with WebdriverIO
Advanced Concepts in Visual Testing
Visual testing has become a cornerstone of modern UI testing, ensuring that web applications render consistently across different devices, browsers, and resolutions. While basic visual tests capture and compare snapshots, advanced concepts like responsive testing, handling dynamic content, and creating custom matchers elevate your test coverage and accuracy.
In this blog, we’ll explore these advanced techniques, how they address complex UI scenarios, and provide examples to demonstrate their implementation with WebdriverIO and TypeScript.
Testing Responsive Designs Across Multiple Viewports
Responsive design ensures your application adapts gracefully to various screen sizes. Visual testing across multiple viewports validates that layouts, typography, and interactive elements remain consistent.
Example: Multi-Viewport Testing
import { expect } from 'chai';

describe('Responsive Design Testing', () => {
  // Define an array of viewports for different device sizes
  const viewports = [
    { width: 320, height: 568 }, // Mobile
    { width: 768, height: 1024 }, // Tablet
    { width: 1366, height: 768 }, // Desktop
  ];

  // Iterate through each viewport for testing
  viewports.forEach((viewport) => {
    it(`should validate layout on ${viewport.width}x${viewport.height}`, async () => {
      // Set the browser window size to the current viewport
      await browser.setWindowSize(viewport.width, viewport.height);
      // Navigate to the target URL
      await browser.url('https://example.com');
      // Capture a full-page screenshot and compare it with the baseline
      const result = await browser.checkFullPageScreen(`layout-${viewport.width}x${viewport.height}`);
      // Assert that the visual difference is within acceptable limits
      expect(result.misMatchPercentage).to.be.lessThan(1, 'Visual differences detected!');
    });
  });
});
Benefits:
- Validates layout changes for different devices.
- Ensures design consistency for mobile-first or responsive applications.
Strategies for Handling Animations and Dynamic Content
Animations, dynamic data, or timestamps often lead to false positives in visual testing. Strategies to handle these include ignoring specific regions, waiting for animations to complete, or stabilising dynamic content.
Example: Ignoring Dynamic Regions
import { expect } from 'chai';

describe('Handling Dynamic Content', () => {
  it('should ignore dynamic elements during comparison', async () => {
    // Navigate to the target page
    await browser.url('https://example.com');
    // Perform visual regression testing while ignoring specific regions
    const result = await browser.checkFullPageScreen('dynamic-content', {
      ignoreRegions: [
        { x: 500, y: 100, width: 200, height: 50 }, // Exclude the dynamic banner
      ],
    });
    // Assert that visual differences are within acceptable limits
    expect(result.misMatchPercentage).to.be.lessThan(1, 'Visual differences detected!');
  });
});
Example: Waiting for Animations to Complete
it('should wait for animations to finish', async () => {
  // Navigate to the target webpage
  await browser.url('https://example.com');
  // Pause to allow animations to complete
  await browser.pause(2000);
  // Perform a visual regression check
  const result = await browser.checkFullPageScreen('after-animation');
  // Assert that the visual differences are within the acceptable limit
  expect(result.misMatchPercentage).to.be.lessThan(1, 'Visual differences detected!');
});
Benefits:
- Reduces false positives from dynamic UI elements.
- Ensures tests focus on relevant visual aspects.
Leveraging Custom Matchers for Advanced Validations
Custom matchers allow you to define advanced validation logic for specific scenarios, such as ensuring a button remains visible or a component’s colour changes as expected.
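The examples below pair visual checks with standard chai assertions. If you want a literal custom matcher, you can register one through chai's plugin API; a minimal sketch (the matcher name withinMismatch is our own invention, and it assumes the comparison returns an object with a misMatchPercentage property):
import chai from 'chai';

declare global {
  namespace Chai {
    interface Assertion {
      withinMismatch(tolerance: number): Assertion;
    }
  }
}

// Register a reusable matcher for visual comparison results
chai.use((c) => {
  c.Assertion.addMethod('withinMismatch', function (tolerance: number) {
    const { misMatchPercentage } = this._obj as { misMatchPercentage: number };
    this.assert(
      misMatchPercentage < tolerance,
      `expected mismatch of ${misMatchPercentage}% to be below ${tolerance}%`,
      `expected mismatch of ${misMatchPercentage}% to be at least ${tolerance}%`,
      tolerance,
      misMatchPercentage
    );
  });
});

// Usage:
// const result = await browser.checkFullPageScreen('homepage');
// expect(result).to.withinMismatch(1);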
Example: Custom Matcher for Button Visibility
import { expect } from 'chai';

it('should validate button visibility and position', async () => {
  // Locate the button element
  const button = await $('button#submit');
  // Perform visual validation of the button
  const result = await browser.checkElement(button, 'submit-button', {
    blockOut: [
      { x: 0, y: 0, width: 50, height: 50 }, // Ignore the specified overlay region
    ],
  });
  // Assert that the button is displayed
  expect(await button.isDisplayed()).to.be.true;
  // Assert that the visual differences are within the acceptable threshold
  expect(result.misMatchPercentage).to.be.lessThan(1, 'Button position mismatch!');
});
Example: Validating Colour Changes
it('should validate button colour change on hover', async () => {
  const button = await $('button#hoverButton');
  await button.moveTo(); // Trigger the hover state
  const result = await browser.checkElement(button, 'hover-button');
  expect(result.misMatchPercentage).to.be.lessThan(1, 'Colour change not as expected!');
});
Benefits:
- Tailored validations for specific UI behaviours.
- Combines visual testing with functional assertions for robust coverage.
Advanced concepts in visual testing, such as responsive testing, dynamic content handling, and custom matchers, empower teams to tackle complex UI scenarios effectively. By integrating these techniques with WebdriverIO and TypeScript, you can ensure that your application’s visual fidelity is maintained across all devices and conditions, providing users with a seamless experience.
Adopting these strategies in your test automation pipeline enhances reliability, reduces maintenance overhead, and keeps visual regressions at bay—making them indispensable for modern QA practices.
Debugging Visual Test Failures
Visual regression tests are essential for ensuring a consistent user interface, but debugging failures can sometimes be challenging. Differences in rendering engines, dynamic content, or test environment inconsistencies can lead to failures. Understanding the root causes and having strategies to address them is crucial for maintaining reliable tests.
In this blog, we will explore common causes of visual test failures, how to analyse diff images, strategies for minimising false positives, and tips for updating baselines effectively.
Common causes of failures in visual regression testing
- Dynamic Content:
- Content such as timestamps, animations, or ads that change between test runs can cause unexpected differences.
- Environment Differences:
- Variations in browser versions, operating systems, or resolutions might produce inconsistent rendering.
- Flaky Network Conditions:
- Delayed resource loading (e.g., images or fonts) can alter the appearance of UI elements (see the stabilisation sketch after this list).
- Incorrect Baselines:
- Outdated or inaccurate baseline images lead to false negatives.
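For the flaky-network cause above, a small stabilisation helper can make snapshots more deterministic. A sketch (our own helper, not part of any visual testing service):
// Wait until the document and its web fonts have finished loading
async function waitForStableRender(timeout = 10000): Promise<void> {
  await browser.waitUntil(
    () => browser.execute(() => document.readyState === 'complete'),
    { timeout, timeoutMsg: 'Page did not finish loading in time' }
  );
  // document.fonts.ready resolves once all declared fonts are available
  await browser.executeAsync((done: () => void) => {
    document.fonts.ready.then(() => done());
  });
}

// Usage before a snapshot:
// await waitForStableRender();
// const result = await browser.checkFullPageScreen('homepage');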
Tips for analysing diffs and updating baselines
When a visual test fails, the diff image highlights the areas where differences were detected. Tools like Applitools or wdio-image-comparison-service provide detailed reports:
Steps to Analyse Diff Images:
- Review the Highlighted Areas: Identify if changes are intentional (e.g., a new feature) or unintended (e.g., a rendering bug).
- Compare the Context: Look at screenshots from previous runs to understand what caused the change.
- Validate Dynamic Elements: Ensure no unnecessary regions (e.g., ads or animations) are included in the comparison.
Example with WebdriverIO:
const result = await browser.checkFullPageScreen('homepage');

// Log the mismatch percentage to understand the differences
console.log('Mismatch percentage:', result.misMatchPercentage);

// Log the location of the difference image for debugging purposes
if (result.misMatchPercentage > 0) {
  console.log('Diff saved at:', result.diffFilePath);
}
Tips for Updating Baselines
When intentional UI changes occur, updating baselines becomes necessary:
- Automate Updates for Approved Changes:
After reviewing and approving the changes, use tools to update baselines automatically.
await browser.saveFullPageScreen('homepage', { autoSaveBaseline: true });
- Review Baseline Updates in Teams:
Ensure that baseline updates are reviewed by developers and designers to avoid overwriting unintended changes.
- Maintain Version Control:
Store baseline images in version control systems (e.g., Git) to track changes over time and revert if necessary.
Strategies for Minimising False Positives
- Ignore Dynamic Regions:
Exclude areas of the page that contain dynamic content from the comparison.
const result = await browser.checkFullPageScreen('page', {
  ignoreRegions: [
    { x: 100, y: 200, width: 300, height: 100 }, // Dynamic banner
  ],
});
- Wait for Stability:
Use pauses or wait conditions to ensure the page is fully rendered before taking snapshots.
await browser.pause(2000); // Wait for animations to finish
- Environment Consistency:
Run tests in a controlled environment using tools like Docker, BrowserStack, or Sauce Labs to ensure consistent browser versions and settings.
- Set Tolerances:
Define acceptable visual differences to avoid failures due to minor pixel-level variations.
const result = await browser.checkFullPageScreen('homepage', {
  misMatchTolerance: 1, // Allow 1% difference
});
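Putting these strategies together, a small helper (our own sketch, built on the commands used throughout this post) can standardise the wait-then-check pattern:
// Wait for the page to settle, then run a tolerant full-page check
async function checkStableFullPage(tag: string, tolerance = 1) {
  // Wait until the document has finished loading
  await browser.waitUntil(
    () => browser.execute(() => document.readyState === 'complete'),
    { timeout: 10000, timeoutMsg: 'Page did not finish loading' }
  );
  // Small settle time for late animations
  await browser.pause(500);
  // Compare against the baseline with an explicit tolerance
  return browser.checkFullPageScreen(tag, { misMatchTolerance: tolerance });
}

// Usage:
// const result = await checkStableFullPage('homepage');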
Example Workflow
Debugging a Visual Test Failure:
- Step 1: Run the test and capture the failure report.
- Step 2: Open the diff image to identify discrepancies.
- Step 3: Determine the cause—dynamic content, environment issue, or intentional change.
- Step 4: If intentional, update the baseline after approval.
- Step 5: Rerun the test to confirm stability.
Debugging visual test failures involves understanding the root cause of discrepancies and implementing strategies to minimise false positives. By analysing diffs, maintaining a stable test environment, and adopting practices like ignoring dynamic regions or setting tolerances, you can make your visual regression tests more reliable.
With these strategies, you’ll not only improve your testing accuracy but also ensure your application’s UI maintains a consistent and high-quality user experience.
Automating Visual Testing in CI/CD Pipelines
Integrating visual testing into a CI/CD pipeline ensures continuous validation of your application's UI and catches regressions early in the development cycle. By automating these tests, teams can achieve faster feedback, ensure consistent UI quality, and reduce manual testing efforts. WebdriverIO, when combined with tools like Jenkins or GitHub Actions, makes it easy to set up and run visual tests on platforms like BrowserStack or Sauce Labs for comprehensive cross-browser and device coverage.
We'll cover the advantages of automating visual testing, how to integrate it into your CI/CD pipeline, and example configurations.
Benefits of Automating Visual Testing in CI/CD
- Early Detection of UI Issues: Catch visual regressions as soon as code changes are pushed.
- Cross-Browser Consistency: Validate UI across multiple browsers and devices.
- Scalable Testing: Leverage cloud platforms like BrowserStack or Sauce Labs to run tests in parallel.
- Increased Productivity: Automate repetitive visual checks to free up QA resources for exploratory testing.
Integrating WebdriverIO visual tests into Jenkins/GitHub Actions
Steps to Integrate Visual Testing in CI/CD Pipelines
- Set Up Visual Testing in WebdriverIO:
Ensure your WebdriverIO project is configured with a visual testing service, like wdio-image-comparison-service.
const result = await browser.checkFullPageScreen('home-page');
expect(result.misMatchPercentage).toBeLessThan(1, 'Visual differences detected!');
- Install CI/CD Tools:
Use Jenkins, GitHub Actions, or another CI/CD platform to automate the test execution.
- Configure Cross-Browser Testing:
Set up platforms like BrowserStack or Sauce Labs for cross-browser and device coverage.
- Include Visual Testing in the Pipeline:
Add visual testing scripts to your build pipeline, ensuring they run automatically after code commits or during scheduled builds.
Example YAML configuration for a pipeline
Example: Integrating Visual Testing with GitHub Actions
name: Visual Regression Tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Set Up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 16

      - name: Install Dependencies
        run: npm install

      - name: Run Visual Tests
        env:
          BROWSERSTACK_USERNAME: ${{ secrets.BROWSERSTACK_USERNAME }}
          BROWSERSTACK_ACCESS_KEY: ${{ secrets.BROWSERSTACK_ACCESS_KEY }}
        run: npx wdio run wdio.conf.js --suite visual
How It Works:
- Trigger: Runs on every push to the main branch or a pull request.
- Setup: Install dependencies and configure Node.js.
- Execution: Runs visual tests on BrowserStack, using environment variables for authentication.
Example: Integrating Visual Testing with Jenkins
Jenkins Pipeline Script
pipeline {
  agent any

  environment {
    BROWSERSTACK_USERNAME = credentials('browserstack-username')
    BROWSERSTACK_ACCESS_KEY = credentials('browserstack-access-key')
  }

  stages {
    stage('Checkout Code') {
      steps {
        checkout scm
      }
    }
    stage('Install Dependencies') {
      steps {
        sh 'npm install'
      }
    }
    stage('Run Visual Tests') {
      steps {
        sh 'npx wdio run wdio.conf.js --suite visual'
      }
    }
  }
}
How It Works:
- Environment Variables: Credentials are securely stored and injected into the pipeline.
- Stages: Steps are clearly defined for checking out the code, installing dependencies, and running visual tests.
Running tests on BrowserStack or Sauce Labs for cross-browser coverage
Integrating cross-browser testing platforms ensures your visual tests cover multiple environments:
- Add capabilities for BrowserStack or Sauce Labs in your WebdriverIO configuration file.
- Example configuration for BrowserStack:
exports.config = {
  user: process.env.BROWSERSTACK_USERNAME, // BrowserStack username from environment variables
  key: process.env.BROWSERSTACK_ACCESS_KEY, // BrowserStack access key from environment variables
  capabilities: [
    {
      browserName: 'chrome', // The browser on which the test will run (Chrome)
      browserVersion: 'latest', // The latest version of Chrome will be used
      'bstack:options': {
        os: 'Windows', // The operating system (Windows)
        osVersion: '10', // The specific version of Windows (Windows 10)
      },
    },
  ],
};
Automating visual testing in CI/CD pipelines ensures continuous monitoring of your application’s UI, providing rapid feedback on any visual inconsistencies. With tools like GitHub Actions and Jenkins, and cloud testing platforms like BrowserStack, you can seamlessly integrate visual tests into your development workflow.
By following the steps and examples shared, teams can achieve robust and scalable visual regression testing, ensuring a flawless user experience across all environments.
Leverage Reporting and Debugging Tools
Effective debugging and reporting streamline the resolution of visual test failures.
Recommendations:
- Enable Detailed Diff Reports:
  - Highlight mismatched areas to make debugging easier.
  - Example using wdio-image-comparison-service:
const result = await browser.checkFullPageScreen('dashboard');
if (result.misMatchPercentage > 0) {
  console.log(`Differences detected: ${result.misMatchPercentage}% mismatch`);
  console.log('Diff file path:', result.diffFilePath);
} else {
  console.log('No visual differences detected');
}
- Use Reporting Dashboards:
  - Tools like Applitools and Percy provide centralised dashboards to view test results, diffs, and baselines.
- Retry Mechanism:
  - Re-run failed tests to rule out flakiness:
exports.config = {
  mochaOpts: {
    retries: 2, // Retry failed tests up to 2 times
  },
};
Implementing visual testing in WDIO requires a balance between thoroughness and efficiency. By maintaining consistent baselines, optimising test coverage, and integrating robust reporting, teams can ensure accurate and scalable UI validation. Adopting these best practices will help you catch regressions early, streamline collaboration, and enhance the reliability of your test suite.
With a solid visual testing strategy, you can confidently deliver visually consistent applications that delight users across browsers and devices.
Best Practices for Visual Testing in WDIO
Visual testing helps maintain the consistency and integrity of an application’s UI as it evolves. However, without a systematic approach, these tests can become flaky, generate false positives, or lead to inefficient processes. By adhering to best practices, teams can optimise their visual testing strategy for reliability, scalability, and ease of maintenance.
This blog explores detailed best practices for implementing visual testing in WebdriverIO (WDIO), supported by practical examples.
Maintain Consistent Baselines
Baselines are the reference images used to detect UI changes. Inconsistent baselines can lead to false positives or missed regressions.
Key Practices:
- Standardised Test Environments:
- Use tools like Docker, BrowserStack, or Sauce Labs to ensure consistent browsers, operating systems, and resolutions.
- For example:
exports.config = {
  capabilities: [
    {
      browserName: 'chrome',
      'goog:chromeOptions': {
        args: ['--headless', '--disable-gpu'], // Run Chrome in headless mode
      },
    },
  ],
};
- Version Control Baselines:
- Store baseline images in version control systems (e.g., Git) to track and audit changes.
- Organise images by features or modules:
/visual-baselines
  /login
    login-page.png
  /profile
    profile-section.png
- Review Baseline Updates:
- Automate the process of updating baselines but require manual approval for changes:
if (updateApproved) {
  await browser.saveFullPageScreen('dashboard', { autoSaveBaseline: true });
}
Optimising test scripts for speed and reliability
Testing every page or element visually can be overkill. Instead, focus on scenarios where visual regressions are most likely.
Strategies:
- Test Critical UI Elements:
- Limit tests to high-impact pages like landing pages, checkout flows, or dashboards.
- Ignore Dynamic Content:
- Exclude areas with ads, timestamps, or animations:
const result = await browser.checkElement('.banner', {
  ignoreRegions: [{ x: 20, y: 30, width: 100, height: 50 }],
});
- Set Pixel Tolerances:
- Accept minor pixel differences caused by rendering variations:
const result = await browser.checkFullPageScreen('homepage', {
  misMatchTolerance: 0.5, // Accept 0.5% variation
});
- Run Tests in Parallel:
- Reduce execution time by running tests across multiple environments:
exports.config = {
  maxInstances: 5,
  capabilities: [
    { browserName: 'firefox' },
    { browserName: 'chrome' },
  ],
};
Use Advanced Comparison Techniques
Modern tools provide powerful features to enhance comparison accuracy and reduce false positives.
Examples:
- Thresholding:
- Use fuzzy matching for content with slight variations:
const result = await browser.checkScreen('header', {
  misMatchTolerance: 1, // 1% tolerance
});
- Region-Specific Testing:
- Focus only on specific sections of the page:
const result = await browser.checkElement('#menu', { blockOut: [{ x: 10, y: 10, width: 200, height: 100 }] });
Regularly reviewing and cleaning up outdated snapshots
Over time, unused or outdated snapshots can pile up and complicate baseline management.
Best Practices:
- Automate Snapshot Cleanup:
- Use a script to remove unused images:
find ./visual-baselines -type f -mtime +30 -delete
- Organise Snapshots by Feature:
- Categorise snapshots for easier navigation:
/visual-baselines
  /authentication
    login.png
    signup.png
  /dashboard
    overview.png
- Tag Baselines by Version:
- Maintain snapshots for multiple application versions:
/baselines-v1.0
/baselines-v2.0
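To wire version-tagged folders into the setup, the baseline directory can be derived from an environment variable. A sketch (APP_VERSION is an assumption you would define yourself):
import { join } from 'path';

// Pick the baseline set that matches the application version under test
const appVersion = process.env.APP_VERSION ?? '2.0';

export const config = {
  // ...other WebdriverIO options...
  services: [
    [
      'image-comparison',
      {
        baselineFolder: join(process.cwd(), `baselines-v${appVersion}`),
      },
    ],
  ],
};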
Real-World Implementation of Visual Testing
Configuring wdio.conf.ts for Visual Testing with WebdriverIO
To set up WebdriverIO for visual testing, it's crucial to configure your wdio.conf.ts file properly. Below is an example configuration and the steps you need to follow to ensure your tests are ready to perform visual comparisons effectively.
wdio.conf.ts file:
import { join } from 'path';

export const config = {
  runner: 'local',
  specs: [
    './test/saveTabbablePageTest.js', // Path to your test file
  ],
  maxInstances: 10, // Max number of browser instances to run concurrently
  capabilities: [
    {
      browserName: 'chrome',
      'goog:chromeOptions': {
        args: ['--window-size=1920,1080'], // Set a consistent viewport size
      },
    },
  ],
  logLevel: 'info', // Log level for WebdriverIO logs
  bail: 0, // Continue testing even if some tests fail
  waitforTimeout: 10000, // Timeout for waiting for elements to appear
  connectionRetryTimeout: 200000, // Timeout for retrying connection
  connectionRetryCount: 3, // Retry connection 3 times if it fails
  services: [
    [
      'visual',
      {
        baselineFolder: join(process.cwd(), './baseline/'), // Folder for baseline images
        screenshotPath: join(process.cwd(), './screenshots/'), // Folder for test screenshots
        compareOptions: {
          blockOutStatusBar: true, // Ignore the status bar in comparisons
          blockOutToolBar: true, // Ignore the toolbar in comparisons
          misMatchTolerance: 0.01, // Allowed mismatch tolerance (0.01%)
        },
      },
    ],
  ],
  framework: 'mocha', // Mocha testing framework
  reporters: ['spec'], // Console output of the tests
  mochaOpts: {
    ui: 'bdd', // Use BDD style
    timeout: 120000, // Max timeout for a test (2 minutes)
  },
};
Implementing Various WebdriverIO Visual Testing Methods with Assertion
saveScreen() and checkScreen()
- Purpose: Capture the current viewport and compare it with a saved baseline.
- Usage:
- saveScreen('screenshot-name'): Saves the current viewport as a baseline image.
- checkScreen('screenshot-name'): Compares the current viewport with the baseline and returns the difference.
- Ideal for: Verifying visible portions of the page without scrolling.
saveScreenTest test file:
import { expect } from 'chai';

describe('saveScreen method with WebdriverIO', () => {
  before(async () => {
    // Open the target webpage
    await browser.url('https://jignect.tech/qa-services/'); // Replace with your target URL
  });

  it('should save the current viewport screenshot as a baseline', async () => {
    // Capture and save the current viewport screenshot
    await browser.saveScreen('saveScreen-Baseline');
    // Log a message indicating success
    console.log('Viewport screenshot saved as baseline for future comparisons.');
  });

  it('should compare the current screen with the saved baseline', async () => {
    // Compare the current viewport screenshot with the baseline
    const result = await browser.checkScreen('saveScreen-Baseline');
    // Assert that there are no visual differences
    expect(result).to.equal(0); // 0 means no differences detected
  });

  after(async () => {
    console.log('saveScreen test completed.');
  });
});
[Images omitted: baseline screenshot, actual screenshot, the diff highlighting the differences between baseline and actual, and the assertion output.]
saveFullPageScreen() and checkFullPageScreen()
- Purpose: Capture the entire page, including content beyond the visible viewport, and compare it with a baseline.
- Usage:
- saveFullPageScreen('fullpage-baseline'): Saves a full-page screenshot as a baseline.
- checkFullPageScreen('fullpage-baseline'): Compares the full-page screenshot with the baseline.
- Ideal for: Long, scrollable pages or SPAs with dynamically loaded content.
saveFullPageScreenTest test file:
import { expect } from 'chai';

describe('saveFullPageScreen method with WebdriverIO', () => {
  before(async () => {
    // Open the target webpage
    await browser.url('https://magento.softwaretestingboard.com/gear/bags.html'); // Replace with your target URL
  });

  it('should save the full-page screenshot as a baseline', async () => {
    // Capture and save the current full-page screenshot
    await browser.saveFullPageScreen('saveFullPageScreen-baseline');
    // Log a success message
    console.log('Full-page screenshot saved as baseline for future comparisons.');
  });

  it('should compare the current full-page screen with the saved baseline', async () => {
    // Compare the current full-page screenshot with the saved baseline
    const result = await browser.checkFullPageScreen('saveFullPageScreen-baseline');
    // Assert that there are no visual differences
    expect(result).to.equal(0); // 0 means no differences detected
  });

  after(async () => {
    console.log('saveFullPageScreen test completed.');
  });
});
[Images omitted: baseline screenshot, actual screenshots for the passing and failing runs, the diff highlighting the differences between baseline and actual, and the assertion output.]
saveElement() and checkElement()
- Purpose: Capture and compare a specific element on the page.
- Usage:
- saveElement(element, 'element-baseline'): Saves a screenshot of the specified element as a baseline.
- checkElement(element, 'element-baseline'): Compares the element's current state with the baseline.
- Ideal for: Testing critical UI components like buttons, logos, or form fields.
saveElementTest test file:
import { expect } from 'chai';

describe('saveElement method with WebdriverIO', () => {
  before(async () => {
    // Open the target webpage
    await browser.url('https://jignect.tech/'); // Replace with your URL
  });

  it('should save the screenshot of a specific element', async () => {
    // Find the element you want to capture
    const element = await $('//a[text()="QA Services"]/parent::div'); // Replace with your element selector
    // Save the element's image for baseline comparison
    await browser.saveElement(element, 'header-logo-baseline');
    // Log that the image was saved successfully
    console.log('Element screenshot saved as baseline for future comparisons.');
  });

  it('should compare the current element against the saved baseline', async () => {
    const element = await $('//a[text()="QA Services"]/parent::div'); // Same element selector
    // Compare the current element screenshot with the baseline
    const result = await browser.checkElement(element, 'header-logo-baseline');
    // Assert that there are no visual differences
    expect(result).to.equal(0); // 0 means no differences detected
  });

  after(async () => {
    console.log('saveElement test completed.');
  });
});
[Images omitted: baseline screenshot, actual screenshot, the diff highlighting the differences between baseline and actual, and the assertion output.]
saveTabbablePage() and checkTabbablePage()
- Purpose: Capture and compare all tabbable (interactive) elements on the page, such as buttons, links, and form inputs.
- Usage:
- saveTabbablePage('tabbable-baseline'): Saves the current state of tabbable elements as a baseline.
- checkTabbablePage('tabbable-baseline'): Compares the tabbable elements against the baseline.
- Ideal for: Ensuring that interactive elements maintain their layout, focus states, and overall appearance.
saveTabbablePageTest test file:
import { expect } from 'chai';

describe('checkTabbablePage method with WebdriverIO', () => {
  before(async () => {
    // Open the target webpage
    await browser.url('https://magento.softwaretestingboard.com/training/training-video.html'); // Replace with your target URL
  });

  it('should save the tabbable elements as a baseline', async () => {
    // This is useful for creating the initial baseline image
    await browser.saveTabbablePage('tabbable-baseline');
    console.log('Tabbable elements screenshot saved as baseline for future comparisons.');
  });

  it('should compare the tabbable elements on the page with the baseline', async () => {
    // Navigate via the logo link to demonstrate how a changed page is flagged
    await $('//a[@class="logo"]').click();
    // Compare tabbable elements on the current page with the baseline
    const result = await browser.checkTabbablePage('tabbable-baseline');
    // Assert that there are no visual differences in tabbable elements
    expect(result).to.equal(0); // 0 means no differences detected
  });

  after(async () => {
    console.log('checkTabbablePage test completed.');
  });
});
[Images omitted: baseline screenshot, actual screenshot, the diff highlighting the differences between baseline and actual, and the assertion output.]
Conclusion
In conclusion, visual testing with WebdriverIO and TypeScript is a critical practice for ensuring the integrity and consistency of modern web applications. By leveraging the power of tools like Applitools, Percy, or Resemble.js, testers can automate the detection of UI regressions, streamline cross-browser validations, and maintain high-quality user experiences across updates.
The structured approach outlined in this guide—from understanding the basics of visual regression testing to integrating tools, debugging failures, and automating pipelines—empowers QA teams to implement reliable and efficient visual testing strategies. With advanced concepts like responsive testing and handling dynamic content, testers can address even the most challenging scenarios, while adhering to best practices ensures scalability and maintainability in their frameworks.
Through real-world implementation, it becomes evident that visual testing is not just about detecting changes but about building confidence in the UI’s stability and delivering polished, error-free applications. Adopting these methods allows teams to future-proof their testing efforts, reduce manual overhead, and provide seamless, visually consistent experiences to end users.
Witness how our meticulous approach and cutting-edge solutions elevated quality and performance to new heights. Begin your journey into the world of software testing excellence. To know more, refer to Tools & Technologies & QA Services.
If you would like to learn more about the awesome services we provide, be sure to reach out.
Happy Testing 🙂