JMeter: Writing Sampler Data into the File

The method requires two post-processors on the Sampler element:

Post-Processor 1:
Type: Regular Expression Extractor
Extracts some data from the request into a variable (say, “myData”)
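
For illustration, assuming the sampler data contains a fragment like orderId=12345 (a made-up example), the extractor could be configured roughly like this, using the field names from the JMeter GUI:

Reference Name:     myData
Regular Expression: orderId=(\d+)
Template:           $1$
Match No.:          1
Default Value:      N/A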

Post-Processor 2:
Type: BSF Post-Processor
Saves the data from that variable into a file:

// The file name and the data to write both come from JMeter variables
String filename = vars.get("userVariableWithFileName");
String message = "At " + System.currentTimeMillis() + " data is " + vars.get("myData");

// Append to the file by redirecting the interpreter's output to it
FileOutputStream f = new FileOutputStream(filename, true);
PrintStream p = new PrintStream(f);
this.interpreter.setOut(p);
print(message);
p.flush();
f.close();

vars.put("myData", "N/A"); // Make sure we don't accidentally record the same data twice

I would not recommend this method for large production-scale load tests, but for smaller tests it can be a quick and convenient way to retrieve and save some data from a request without changing the script in any significant way. The performance of this piece largely depends on the performance of the underlying hard disk, which is why it does not suit large-scale tests. But it gives a lot of flexibility in how and what you write (e.g. each user could have its own file; the format of the file is entirely up to you; you may compute some statistics on the fly, and so on), so it might be worth it.
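
As a minimal sketch of the "one file per user" idea, assuming a BeanShell PreProcessor placed earlier in the thread (the directory below is made up), the file name variable could be set per thread like this:

// Give every JMeter thread (virtual user) its own output file
String filename = "C:/results/user_" + ctx.getThreadNum() + ".log"; // hypothetical location
vars.put("userVariableWithFileName", filename);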


Parameterized JUnit Tests

Parameterized tests are useful when multiple tests share the same structure but differ in inputs and expected results. Here's the skeleton:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SumTests
{
	@Parameters(name="{index}: {0}+{1}={2}") // name is optional; {index} is used by default; {0}, {1}, {2} refer to the parameters passed to the constructor, in order
	public static Collection<Object[]> data() 
	{
		return Arrays.asList(
			new Object[][] 
			{ 
				{1, 1, 2},
				{3, 5, 2}, // fails
				{-1, 1, 0}
			}
		);
	}

	public SumTests(int a, int b, int sum)
	{
		this.a = a;
		this.b = b;
		this.sum = sum;
	}

	@Test
	public void sum()
	{
		assertEquals("Sum is not correct", sum, a + b);
	}
	
	private int a, b, sum;
}
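
With the name pattern above, a JUnit 4.11+ runner should label the generated tests roughly as 0: 1+1=2, 1: 3+5=2 and 2: -1+1=0, which makes the failing combination easy to spot in the test report.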

Selenium WebDriver on Firefox: Working with Add-Ons

Running Selenium WebDriver on Firefox with Static Add-Ons

  1. Create a special profile for Firefox
  2. Install add-ons on that profile
  3. Start Firefox with that profile (a minimal sketch follows the list)
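
A minimal sketch of step 3, assuming a Selenium 2.x-era API (where ProfilesIni lives under org.openqa.selenium.firefox.internal) and a hypothetical profile named "seleniumProfile":

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.firefox.internal.ProfilesIni;
// ...
// Load the previously prepared profile (with the add-ons already installed) by its name
FirefoxProfile profile = new ProfilesIni().getProfile( "seleniumProfile" ); // hypothetical profile name
WebDriver driver = new FirefoxDriver( profile );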

Installing an Add-On when Starting Selenium WebDriver on Firefox

import java.io.File;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.firefox.FirefoxDriver;
// ...
// Note: addExtension() throws IOException, so call it from code that declares or handles it
final String addOnPath = "C:\\Temp\\youraddon.xpi";
File addOnFile = new File( addOnPath );
FirefoxProfile profile = new FirefoxProfile();
profile.addExtension( addOnFile );
WebDriver driver = new FirefoxDriver( profile );

Getting the List of Installed / Active Add-Ons with Selenium WebDriver on Firefox

Unfortunately, there's no easy way to achieve this. The method below is really an ugly hack, but it gets the job done:

  • Firefox is loaded with the about:addons page
  • The page source contains the list of installed add-ons in JSON format, which can be parsed

import java.net.URLDecoder;
import org.json.JSONArray;
import org.json.JSONObject;
// ...
// Open the Add-ons Manager page
FirefoxProfile profile = new FirefoxProfile();
profile.setPreference( "browser.startup.homepage", "about:addons" );
WebDriver driver = new FirefoxDriver( profile );
driver.get( "about:addons" );

// The "Get Add-ons" pane embeds the installed add-ons as URL-encoded JSON
// in the src attribute of its <browser id="discover-browser"> element
String source = driver.getPageSource();
final String addOnsSectionStart = "<browser id=\"discover-browser\"";
source = source.substring( source.indexOf( addOnsSectionStart ) + addOnsSectionStart.length());
source = source.substring( source.indexOf( "{%22" ) );               // "{%22" is the URL-encoded opening of the JSON: {"
source = source.substring( 0, source.indexOf( "\" clickthrough" ) ); // cut off everything after the JSON
source = URLDecoder.decode( source, "UTF-8" );

// Each key of the decoded JSON object is an add-on id mapped to that add-on's properties
JSONObject addonObjects = new JSONObject(source);
JSONArray jsonAddonArray = addonObjects.names();
System.out.println(
	String.format(
		"%-10s\t%-15s\t%-15s\t%-15s\t%-15s\t%s",
		"type",
		"version",
		"user disabled",
		"is compatible",
		"is blocklisted",
		"name"
	)
);
		
// Iterate over the add-on ids and print selected properties of each add-on
for(int i = 0; i < jsonAddonArray.length(); i++)
{
	JSONObject jsonAddonObject = addonObjects.getJSONObject(jsonAddonArray.getString(i));
	System.out.println(
		String.format(
			"%-10s\t%-15s\t%-15s\t%-15s\t%-15s\t%s",
			jsonAddonObject.getString("type"),
			jsonAddonObject.getString("version"),
			jsonAddonObject.getString("userDisabled"),
			jsonAddonObject.getString("isCompatible"),
			jsonAddonObject.getString("isBlocklisted"),
			jsonAddonObject.getString("name")
		)
	);
}

The output will look like this:

type      	version        	user disabled  	is compatible  	is blocklisted 	name
plugin    	7.7.1.0        	false          	true           	false          	QuickTime Plug-in 7.7.1
plugin    	6.0.260.3      	false          	true           	false          	Java(TM) Platform SE 6 U26
theme     	17.0.1         	false          	true           	false          	Default
plugin    	2.4.2432.1652  	false          	true           	false          	Google Updater
extension 	2.28.0         	false          	true           	false          	Firefox WebDriver

Test Automation vs. Mechanization

Wikipedia has clear definitions of both terms:

Mechanization provided human operators with machinery to assist them with the muscular requirements of work. Whereas automation is the use of control systems and information technologies reducing the need for human intervention. In the scope of industrialization, automation is a step beyond mechanization.

In testing, however, I often see those two concepts dangerously mixed: people tend to mechanize testing, thinking they are automating it. As a result, the estimated value of such “automation” is completely wrong: it will not take risks into account, it will not find important regressions, and it will give the tester a false sense of safety and completeness. Why? Because of the way mechanization is created.

Both automation and mechanization have their value in testing. But they are different, and therefore should always be clearly distinguished. Automation is the result of application risk analysis, based on knowledge of the application and an understanding of the testing needs (and thus of finding tools and ways to cover those risks). Mechanization comes from the opposite direction: it looks for out-of-the-box tools and the easiest ways to mechanize some testing or code review procedures, and makes opportunistic use of those methods. Mechanization, for instance, is great for evaluating a module/function/piece of code. It is, if you will, a quality control tool, but it does not eliminate the need for quality assurance testing, either manual or automated.

It’s like making a car: every part of the car was inspected (probably in a mechanized, or at least very standardized, way) and has a “QC” stamp on it. But after all the parts are assembled, do they know whether and how the car will drive? If car manufacturers did, they would not have to pay their test drivers. Yet test drivers are the ones who provide the most valuable input, the input closest to the actual consumer’s experience. In software, testers can play both roles: quality control personnel and test drivers. Taking away one of those roles, or thinking that the test drive can be replaced with a “QC” stamp, is not the way to optimize testing. And the mere existence of mechanized testing should not be a reason to consider an application “risk free” or “regression proof”. Otherwise you may end up in a situation where your car does not drive, and you don’t even know it.
