Integration testing your ASP.NET Core App – Dealing with Anti-Forgery Tokens (CSRF), Form Data and Cookies

  1. Integration testing your ASP.NET Core app with an in-memory database
  2. Integration testing your ASP.NET Core app: dealing with anti-forgery tokens (CSRF), form data and cookies (this)

This post can be considered a sequel to Setting up integration testing in ASP.NET Core; it builds upon some code provided there.

Running integration tests is awesome. (Be mindful though: They Are A Scam as well.)

So great, you got it up and running. You will probably run into a few practical issues when dealing with slightly more useful scenarios. In this post I describe visiting a page (GET) and POSTing a form. We also deal with the case where you have anti-forgery protection set up.

I will give you a few hints of code. For your convenience I have also put them up as GitHub Gists.

Disclaimer about the presented code:
I arrived at this code after some investigation, but I lost track of which source led to which piece of code. So if you are (or know) the source of any of it, please notify me and I will give credit where credit is due.

Now that we have that out of the way, let’s get started!

Context – some example code

To keep the explanation simple, let’s say you have a GET and a POST action defined for the same URL. The GET delivers a view with a form; in your integration test you want to POST to that same URL as if the user filled in the form. (Since we’re not UI testing, we don’t care about the HTML – no need to play browser here.)

In our example, let’s say we have some code like this:

// GET request to awesomesauce
[Route("awesomesauce")]
public async Task<ActionResult> AwesomeSauce()
{
	var model = new MyAwesomeModel();
	return View(model);
}


// POST request to awesomesauce
[HttpPost, Route("awesomesauce"), ValidateAntiForgeryToken]
public async Task<IActionResult> AwesomeSauce(MyAwesomeModel myAwesomeModel)
{
	// if valid, do stuff
	
	// else...
	return View(myAwesomeModel);
}

Your integration test would look like this:


[Collection("Integration tests collection")]
public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		// How do we do this? Send data (POST), with anti-forgery token and all?... let's find out!
		//var response = await client.SendAsync(requestMessage);
	}
}

In the test we marked our questions: how do we POST data? And how do we send this anti-forgery token?

Let’s begin with POSTing data. In an ideal world I would present a Dictionary of keys and values, pass that along as the request body, and let some helper method transform it into a proper HttpRequestMessage. I built such a helper myself; it looks like this:
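(A minimal sketch; I’m assuming FormUrlEncodedContent for the body here. The CreateWithCookiesFromResponse variant uses the CookiesHelper covered further below.)

using System.Collections.Generic;
using System.Net.Http;

public static class PostRequestHelper
{
	// Creates a form POST HttpRequestMessage from a dictionary of form keys and values.
	public static HttpRequestMessage Create(string path, Dictionary<string, string> formPostBodyData)
	{
		return new HttpRequestMessage(HttpMethod.Post, path)
		{
			// FormUrlEncodedContent encodes the pairs as application/x-www-form-urlencoded
			Content = new FormUrlEncodedContent(formPostBodyData)
		};
	}

	// Like Create, but copies the cookies from a (previous) response onto the request.
	public static HttpRequestMessage CreateWithCookiesFromResponse(string path,
		Dictionary<string, string> formPostBodyData, HttpResponseMessage response)
	{
		var request = Create(path, formPostBodyData);
		return CookiesHelper.CopyCookiesFromResponse(request, response);
	}
}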

With the PostRequestHelper we can now POST data like so:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		var formPostBodyData = new Dictionary<string, string>
			{
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);

		// TODO: AntiRequestForgery token

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

Well, that is easy, isn’t it?

If you paid attention, you already saw a hint in the PostRequestHelper about a CookiesHelper. Although it is not needed for dealing with the anti-forgery token, it is a handy tool. I’ll explain it below.

Dealing with the anti-forgery token

In general it is easy: you do a GET, and in its response you receive a token. You extract that token, put it on your next POST request, and you’re done.

To extract the token, you can use this:
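(Again a minimal sketch; I’m assuming the token is rendered as a hidden __RequestVerificationToken input. The attribute order in your markup may differ, so adjust the regex if needed.)

using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

public static class AntiForgeryHelper
{
	// Matches the value of the hidden __RequestVerificationToken input in the form HTML.
	private static readonly Regex AntiForgeryFormFieldRegex = new Regex(
		@"\<input name=""__RequestVerificationToken"" type=""hidden"" value=""([^""]+)""");

	public static string ExtractAntiForgeryToken(string htmlResponseText)
	{
		var match = AntiForgeryFormFieldRegex.Match(htmlResponseText);
		return match.Success ? match.Groups[1].Captures[0].Value : null;
	}

	public static async Task<string> ExtractAntiForgeryToken(HttpResponseMessage response)
	{
		var responseAsString = await response.Content.ReadAsStringAsync();
		return ExtractAntiForgeryToken(responseAsString);
	}
}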

Now we can use it in our test like so:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);

		var formPostBodyData = new Dictionary<string, string>
			{
				{"__RequestVerificationToken", antiForgeryToken}, // Add token
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

And voilà, you can now do a POST request that passes the token and makes the POST happen. By omitting the token (or using a different one) you can test whether your action is protected against CSRF. Although I would not try to test the framework itself, I would advise having tests in place that make sure specific controller actions are protected.
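For instance, a guard test could look something like this (a sketch; I’m assuming here that a missing token results in a 400 Bad Request, so verify what your setup actually returns):

[Fact]
public async Task Posting_Without_AntiForgeryToken_Is_Rejected()
{
	var formPostBodyData = new Dictionary<string, string>
		{
			// note: no __RequestVerificationToken entry here
			{"Awesomesauce.Foo", "Bar"}
		};

	var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);
	var response = await client.SendAsync(requestMessage);

	Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
}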

Dealing with Cookies

As a bonus, let’s deal with cookies; you will probably need to at some point. Sometimes you need to POST the data again (as if you were a real browser). To make life easier in that case, there is a method on the PostRequestHelper called CreateWithCookiesFromResponse. This basically creates a POST request and copies over the cookies from a (previous) GET response.

The CookiesHelper looks like this:
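(A minimal sketch once more; I’m assuming the Microsoft.Net.Http.Headers package here for parsing Set-Cookie headers.)

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using Microsoft.Net.Http.Headers;

public static class CookiesHelper
{
	// Reads the Set-Cookie headers of a response into name/value pairs.
	public static IDictionary<string, string> ExtractCookiesFromResponse(HttpResponseMessage response)
	{
		var result = new Dictionary<string, string>();
		IEnumerable<string> values;
		if (response.Headers.TryGetValues("Set-Cookie", out values))
		{
			foreach (var cookie in SetCookieHeaderValue.ParseList(values.ToList()))
			{
				result[cookie.Name.ToString()] = cookie.Value.ToString();
			}
		}
		return result;
	}

	// Puts the given cookies on a request as Cookie headers.
	public static HttpRequestMessage PutCookiesOnRequest(HttpRequestMessage request, IDictionary<string, string> cookies)
	{
		cookies.Keys.ToList().ForEach(key =>
			request.Headers.Add("Cookie", new CookieHeaderValue(key, cookies[key]).ToString()));
		return request;
	}

	// Copies cookies from a (previous) response onto a new request;
	// this is what PostRequestHelper.CreateWithCookiesFromResponse uses.
	public static HttpRequestMessage CopyCookiesFromResponse(HttpRequestMessage request, HttpResponseMessage response)
	{
		return PutCookiesOnRequest(request, ExtractCookiesFromResponse(response));
	}
}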

In our example test above, we could have used it like this:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce"); // this returns cookies in response
		response.EnsureSuccessStatusCode();

		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);

		var formPostBodyData = new Dictionary<string, string>
			{
				{"__RequestVerificationToken", antiForgeryToken}, // Add token
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		// Copy cookies from response
		var requestMessage = PostRequestHelper.CreateWithCookiesFromResponse("/awesomesauce", formPostBodyData, response);

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

Conclusion

After we have set up integration testing, we want to do some basic interactions with our application. Using the AntiForgeryHelper we can extract a token, and using the PostRequestHelper we can easily construct a new POST request. Combined, they make any scenario work with CSRF protection in place.

In case you need to pass cookie information along, you can use the CookiesHelper.

Integration Testing your ASP.NET Core app with an in-memory database

Parts:

  1. Integration testing your ASP.NET Core app with an in-memory database (this)
  2. Integration testing your ASP.NET Core app: dealing with anti-forgery tokens (CSRF), form data and cookies

Revisions:

  • 14th August 2016 – updated for .NET Core 1.0
  • 29th April 2016 – first version covering RC1

Recently I have been working with .NET (C#), and we’re working in .NET Core, which is awesome (new stuff, wooh yeah!). I wanted to set up integration testing, and it was tough to find resources to make it all happen. Which is pretty obvious considering how new some of this stuff is.

I found some articles scattered around this topic. But there was no full guide going from getting started all the way to ‘full integration testing’ + an in-memory database. So, because of that lack, here is my take.

I couldn’t have made it this far without some notable resources (see below) and the answer to my GitHub question (with a friendly and very constructive response, thanks ASP.NET guys!).

Overview: What this blog post covers

  1. Setting everything up – your first integration test
  2. Running your tests against an in-memory database + making sure the in-memory database has its tables set up
  3. Then making it as fast as possible

Do note: an in-memory database is NOT the same as your SQL Server. But if that does not bother you (or does not matter in these test cases), no worries there.

Step 1: First make it work – setting everything up

I assume you don’t have any integration test running yet. I am using xUnit. Read this well-written article[#1] on how to set up your integration test base. I summarise it quickly here, but if you get stuck, read the article. Then get back here.

Hook up your dependencies; here are mine (taken from project.json):

...
"dependencies": {
    ... // my project dependencies
    "FluentAssertions": "4.2.1",
    "xunit": "2.1.0",
    "xunit.runner.dnx": "2.1.0-rc1-build204",
    "Microsoft.AspNetCore.TestHost": "1.0.0",
  }
...

Within your test class, define:


	public TestServer server { get; }

	public HttpClient client { get; }

Then in the constructor of your test class:

var builder = new WebHostBuilder().UseStartup<Startup>();

server = new TestServer(builder);

client = server.CreateClient();

Now also create a test case. Something along the lines of:


[Fact]
public async Task TestVisitRoot() {
    var response = await client.GetAsync("/");
    response.EnsureSuccessStatusCode();
}

This is basically the example from the original article, but stripped down where applicable.

Try running the test case first. It should run the app as if you ran it normally, visiting the homepage. It also connects to your real database, webservices and whatnot.

Congrats, you completed step one. On to the next, where we will be…

Step 2: Replacing database with an in-memory SQLite database

I assume you use SQL Server in your ‘real world’ scenario. For integration testing you want a predictable state before running each test. An empty database is pretty predictable (after you fill it up with test data ;-)).

Also, an in-memory database saves you the hassle of dealing with files, permissions, removing (temp) files, etc.

In order to inject our in-memory database, we need to override the Startup class. We create a seam in that class so we can put our test-specific (i.e. overriding) database setup code there.

We start by creating a class TestStartup which extends Startup. Then we make sure we use TestStartup in the constructor of our test class:

var builder = new WebHostBuilder()
	.UseStartup<TestStartup>(); // use our TestStartup version
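The TestStartup class itself can stay almost empty. A bare-bones sketch (what the constructor needs depends on your own Startup; I’m assuming an IHostingEnvironment parameter here):

public class TestStartup : Startup
{
	public TestStartup(IHostingEnvironment env) : base(env)
	{
	}

	// the SetupDatabase and EnsureDatabaseCreated overrides described
	// below will go into this class
}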


In your Startup class you need a method where you set up your database. You probably do this within the Configure or ConfigureServices method. Instead of wiring up the database within that method, extract that code into a separate method and call it SetupDatabase.

The code in that method might look a bit like this:

public virtual void SetupDatabase(IServiceCollection services)
{
	services
		.AddEntityFramework()
		.AddSqlServer()
		.AddDbContext<YourDatabaseContext>(options =>
			options.UseSqlServer(
				Configuration["Data:DefaultConnection:ConnectionString"]
		));
}

Make sure you define this method as virtual; this allows us to override it within our TestStartup class.

Before we override the method, we need to make sure we have the appropriate SQLite dependency defined in our project.json. So add it. This should make the dependencies look like:

...  
"dependencies": {
    ...
    "FluentAssertions": "4.2.1",
    "xunit": "2.1.0",
    "xunit.runner.dnx": "2.1.0-rc1-build204",
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    "Microsoft.EntityFrameworkCore.InMemory": "1.0.0",
    "Microsoft.EntityFrameworkCore.Sqlite": "1.0.0",
  }

Now, in your TestStartup, override the SetupDatabase method and let it set up your SQLite in-memory database:

public override void SetupDatabase(IServiceCollection services)
{
	var connectionStringBuilder = new SqliteConnectionStringBuilder { DataSource = ":memory:" };
	var connectionString = connectionStringBuilder.ToString();
	var connection = new SqliteConnection(connectionString);

	services
		.AddEntityFrameworkSqlite()
		.AddDbContext<YourDatabaseContext>(
			options => options.UseSqlite(connection)
		);
}

Try running your app and see how it behaves.

You might run into problems where it complains about not having a database set up, or about tables not being found. No worries, there are a few things left to do.

Ensure creation of database

Somewhere in your webapp you most likely create your dbContext, along the lines of:

//Create Database
using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
	.CreateScope())
{
	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();

	// run Migrations
	dbContext.Database.Migrate();
}

To make sure your in-memory database actually has its schema set up (tables, etc.), you generally want to do this:

//Create Database
using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
	.CreateScope())
{
	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();

	dbContext.Database.OpenConnection(); // see Resource #2 link why we do this
	dbContext.Database.EnsureCreated();

	// run Migrations
	dbContext.Database.Migrate();
}

Of course you only want this for integration tests, so don’t leave it like that. Again, create a seam. You end up with:

// method in Startup.cs
public virtual void EnsureDatabaseCreated(YourDatabaseContext dbContext) {
	// run Migrations
	dbContext.Database.Migrate();
}

// within your Configure method:
using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
	.CreateScope())
{
	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();
	EnsureDatabaseCreated(dbContext);
}

And in your TestStartup you override it like so:

// method in TestStartup.cs
public override void EnsureDatabaseCreated(YourDatabaseContext dbContext) {
	dbContext.Database.OpenConnection(); // see Resource #2 link why we do this
	dbContext.Database.EnsureCreated();

	// now run the real thing
	base.EnsureDatabaseCreated(dbContext);
}

Same trick. Now re-run your test; it should work now. You could leave it like this. There are a few challenges up ahead, like dealing with cookies, anti-forgery and so on. I might blog about those too.

Note: overriding like this might not be the only/best way after RC1, as changes are coming in RC2 that should make it way easier to add your own dependencies/setup.

Now, the downside of integration tests is that booting them up is very slow compared to unit tests. So preferably you want to do that only once and then run all your tests. Yes, that also has downsides: your tests should be careful when sharing state throughout one webapp run. So make sure you keep your tests isolated.

Step 3: Speed up your integration tests! Use a TestFixture + xUnit collection

Inspired by another article[#3] and xUnit’s CollectionDefinition feature, you can make sure your webapp is booted only once.

Sharing the webapp between test cases using a TestFixture

This solves the problem of creating a web app for each test case.

To do this, in short: create a new class, for instance TestServerFixture, and move your client/server setup into this class, so it looks like this:

public class TestServerFixture : IDisposable
{
	public TestServer server { get; }

	public HttpClient client { get; }

	public TestServerFixture()
	{
		// Arrange
		var builder = new WebHostBuilder()
			.UseEnvironment("Development")
			.UseStartup<TestStartup>();
			// anything else you might need?....

		server = new TestServer(builder);

		client = server.CreateClient();
	}

	public void Dispose()
	{
		client.Dispose();
		server.Dispose();
	}
}

Note the differences: the TestServerFixture implements the IDisposable interface, and the setup which was in your test class constructor has moved to the constructor of the fixture.

Now, to make things easier (as you will create more and more integration test classes), create an abstract test class which is set up to receive the TestServerFixture. You can then extend this abstract class in your concrete test classes.

The abstract class would look like this:

public abstract class AbstractIntegrationTest : IClassFixture<TestServerFixture>
{
	public readonly HttpClient client;
	public readonly TestServer server;

	// here we get our testServerFixture, also see above IClassFixture.
	protected AbstractIntegrationTest(TestServerFixture testServerFixture)
	{
		client = testServerFixture.client;
		server = testServerFixture.server;
	}
}

As you can see, we use an IClassFixture, which is used for shared context between test cases.

This little bit of boilerplate code now allows our concrete test classes to look like:

public class MyAwesomeIntegrationTest : AbstractIntegrationTest
{
	public MyAwesomeIntegrationTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task TestVisitRoot() {
		var response = await client.GetAsync("/");
		response.EnsureSuccessStatusCode();
	}

	// etc more tests...
}

Sharing your webapp between test classes using xUnit’s CollectionFixture

This solves the problem of creating a web app (via a TestFixture) for each test class.

So, awesome: you have multiple test classes and you notice that you boot up your webapp every time, and you want to solve this for (some) classes. Well, that is possible, with obvious pros and cons which I won’t dive into. (Beware of state! ;-))

Setting up a CollectionFixture is also explained here, but for completeness’ sake, let me rephrase:

First create a class that we will use to define a collection, like so:

[CollectionDefinition("Integration tests collection")]
public class IntegrationTestsCollection : ICollectionFixture<TestServerFixture>
{
	// This class has no code, and is never created. Its purpose is simply
	// to be the place to apply [CollectionDefinition] and all the
	// ICollectionFixture<> interfaces.
}

Now, for every test class you want to include in the ‘Integration tests collection’, simply add this line above the class definition:

[Collection("Integration tests collection")]

Which, in our example above, makes it look like this:

[Collection("Integration tests collection")]
public class MyAwesomeIntegrationTest : AbstractIntegrationTest
{
	public MyAwesomeIntegrationTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	// your tests here...
}

Now run your tests again and note that you boot up your webapp only once per collection. Hence, if you put everything in one collection, your webapp will boot only once.

Conclusion

We can run integration tests, and if we want, we can run them against an in-memory database. Using seams we can inject our own test setup code (which will probably change in RC2 or later). If we want to speed up our tests, we can put them in one ‘Integration tests collection’ and use a single TestFixture.

Resources:

1. ASP.NET docs – Integration testing
2. SQLite in memory database create table does not work
3. Fast testing
4. My original question/ticket at GitHub

Don’t tell me how – tell me your intent

Since I founded my own company, I have worked for/with multiple companies. During that time I made a few observations I’d like to share. This is one of them.

TLDR: Give a team an intent and the team will give you the best path to that intent (goal). A much better path (how) than you could ever figure out yourself.

A lot of (IT) organizations work in iterations (so-called ‘Agile’), often using a process called (but not limited to) Scrum.

Whatever you’d like to call the process, there is a need to build things. This need is often presented as a list that is (or should be) ordered by priority; in Scrum this is called the Product Backlog. This list is often discussed in so-called refinement sessions, where the Product Backlog Items are prepared for the next (coming) iteration.

One of the questions I’ve heard is: “how much work/effort would it cost to do [backlog item]?”

This is a very useful question, but also a dangerous one.

Business/Product Owner/Managers – Be careful what you ask for! – Example #1

Let’s use a more concrete example:

[Backlog Item] says: “As [Organisation X] I’d like to have a JSON implementation so that I can work with [party Y]”.

Considering the question we posed (“how much effort…” etc.), the answer will be a quantity expressed in story points, hours or whatnot. This is valuable information: as a business owner you can (roughly) translate it into the amount of money this feature will cost. You can then evaluate whether it ‘is worth it’.

You have the answer to your question. But did you actually get that much further? What did you miss?

Business: Use your team’s brainpower! – express intent! (business value) – Example #2

Let’s reformulate the backlog item so we express intent first:

[Backlog Item] says: “To offer service X to our customer(s), we’d like to send order information to [party Y] so that we can deliver orders to our customers.”

Ah, so the intent is to ship orders to customers, which (apparently) only party Y is able to deliver!

This is a totally different question.

So where does JSON come from? Is it really required for sending orders? Could we perhaps export a CSV (which might be way easier to do)? None of these possibilities are explored in the first example, simply because the question was asked differently.

Business: be careful how you ask for things to be done on your product backlog.

Team: ask for the intent when you’re presented with an ‘implementation’ question

Of course, in software development ‘the business’ is not the only party playing a role. In fact, working Agile simply means that we’re working together. As long as ‘IT’ and ‘business’ exist as separate entities, you will never get the full potential and effectiveness out of your people. True collaboration means that all ‘entities’ (people!) within the organization are (or should be) working together to make that organization as successful as possible, right?

If that’s not the case for you, just get to know each other better (grab a beer?) first. 🙂

So, back to the backlog item. Suppose you get a question as in the first example: how do you get to the intent from there?

One of those ways is to ‘ask why five times’. If you take that too literally you might sound like a spoiled brat (although there is some sense in asking why all the time).

A bit more practical would be to ask questions like:

  • why should [implementation] happen?
  • what do you need [implementation] for?
  • once [implementation] is finished, what will it be used for?

Usually this leads to the intent, as in: “once we have a JSON message, we can send it to [party Y] so they can process orders”. The ‘so that’ part is important; if it’s not clear, don’t be afraid to ask. You don’t ask because you doubt the business or their competence. You want to be professional, not waste money, and deliver the actual value.

Business value is usually represented in money. When it is not about money, it is usually tied to the company vision. If you don’t think this is the case, keep asking.

Sometimes there are exceptions. Let’s say your product needs a payment provider, and the company has an agreement to use Buckaroo for payments. In that case you might get the “build Buckaroo to process payments” story. Ideally, though, it is not the story you want. (Especially if it is critical and getting Buckaroo to work takes ages, while you know some other payment provider could be implemented with a fraction of the effort.)

Team: By exploring intent, you can answer the ‘true’ business need.

This eliminates ‘waste’ on several levels: waste of time, technology and missed opportunities.

So what does this boil down to? – trust your team

So what does this all mean? That you as a manager involve your developers sooner? Preferably before you need to build services/couplings with 3rd parties? Yes!

Before you get into contract negotiations with these parties? Yes!

That the team tells you how to process payments? you bet!

That the team decides when to go live? Yup!

Let the team think of how to do cross-selling? Or how to make your checkout flow easier? (So don’t hire an external party to deliver a ‘report’ to the team, so they can ‘fix’ it.) Yes!

Give the team autonomy (= responsibility). Give the team an intent, and the team will do its best to fulfil it.

You have to get used to it.

It is exciting.

But above all, it’s awesome in many ways!

Go forth and be great!

Migrating from Spring 3.2.x to Spring 4 and using ‘spring-mock 2.0.8’ gives “java.lang.NoSuchMethodError: org.springframework.core.CollectionFactory.createLinkedMapIfPossible”

So this is a very short post with a ‘gotcha’. I wasn’t able to find anything about this, which is why I’m writing it down here:

If you are migrating from Spring 3 to 4 and you have the following dependencies in your pom.xml:

    <properties>
        <spring.version>3.2.4.RELEASE</spring.version>
        <junit.version>4.9</junit.version>
    </properties>
...
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-mock</artifactId>
            <version>2.0.8</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>

Once you migrate to Spring 4 (let’s say 4.0.3.RELEASE) and run your tests, you might run into the following stacktrace:

java.lang.NoSuchMethodError: org.springframework.core.CollectionFactory.createLinkedMapIfPossible(I)Ljava/util/Map;
	at org.springframework.mock.web.MockHttpServletRequest.<init>(MockHttpServletRequest.java:107)
	at org.springframework.mock.web.MockHttpServletRequest.<init>(MockHttpServletRequest.java:210)
	at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:171)
	at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:100)
	at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:319)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:212)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:232)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:89)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:292)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:175)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Then all you need to do is make sure that you *DO NOT* still have ‘spring-mock’ configured in your dependencies, as it seems that ‘spring-test’ has assimilated it into its own JAR in Spring 4.

Remove it from your pom.xml, re-run your tests and be happy again. It took me a while to figure this out; I hope it saves you some time!

Facilitating the Global Day of Coderetreat 2013 in Amsterdam

On the 14th of December 2013 the Global Day of Coderetreat was held at Zilverline. I have experience with coderetreats: I organised one on the 7th of January 2012, and also the GDCR12.

This time I both hosted and facilitated the event. This means that besides the practical stuff, I also did the talking, which I will explain further in this post. This was the first time I did this, and I’d like to share how it went. If you want to get an impression of the day, you can have a look at this slideshow.

A big thanks to Bob Forma and Diana Sabanovic, who helped me with the hosting aspects throughout. This enabled me to focus mostly on facilitating.

I was anxious, especially since last year’s GDCR was very well done. Back then I had a great experience, and I was not sure if I could give the participants the same experience. Yet I wanted to do this: I just love sharing knowledge and giving people something to learn or think about.

After attending the GDCR Facilitator Training by Jim Hurne, I had a clear image of how I wanted the participants to experience the coderetreat: people having fun, learning from each other and from the constraints given.

That’s it.


Paypal, vulnerability through obscurity?

I have been a member of PayPal for quite some time, and I use it rarely.

When I use it, I want it to be a quick, seamless experience: I log in, do my business, log out. That’s it.

Reality is different. Although I must admit it does not help that I forget my password every time; since I use PayPal only every 6 to 12 months, I can’t get it into my muscle memory.

I bought 1Password a while back to help me remember only one password (you don’t say?) and let it generate strong, secure passwords. I have been changing my passwords on all websites I visit ever since.

And so PayPal and I meet again. And I want to change my password.

The last time I wanted to change my password at PayPal, it was a very, very unpleasant experience. I was actually glad I got through the process and wanted to forget about it. This time I decided to write about it, because it was a long while back and it really is bad.

In a sense you could say PayPal has been compromised, not technically but through usability.

Before I could change my password I had to answer my security questions, which I filled in whaaaay back and could hardly remember. Since I did know the answers to the security questions but could not write them down *exactly*, I had a hard time getting past the first step. So I get it: you want to protect us from others changing our password when we forget to log out and such. But why not ask for the (old) password *again* at this very step? (This step did not happen after I wanted to change my password again, so it seems this kicks in when the user has not logged in for a while.)

Once I got past the ‘security questions’ page, I actually got the familiar three fields: old password, new password and new password again.

I open 1Password, let it generate a strong password, and then I get smacked in the face again: you may not copy and paste a password into the ‘new password’ fields. PayPal deliberately blocks any copy/paste actions.

We’re not finished though, because PayPal is also very specific about what your password may or may not be.

– It may not contain your name or email address (which makes sense).

– It must contain a symbol, a number and a capital letter, even though that hardly matters for your password strength. (It is not like computers actually *read* your password as humans do.)

– It has a maximum length. What!? Worried that passwords take up too much space? I can’t possibly imagine why you would restrict this.

– Your password should be hard to guess for a relative or friend (which kind of overlaps with the first point).

Since I cannot copy/paste the password into the fields, I have to paste it into an editor and re-arrange my windows so I can fill in my password and see it at the same time. After I fill in the first new password field, I actually get a warning that my password is at maximum length. As if it is a bad thing that my password is 20 characters long.

I go on, type the password again, and (of course) I make a typo, which results in a red message saying the passwords are not the same.

So here I am, trying to change my password and about to give up, because it is as if PayPal does not want me to have a secure account.

I believe we have a case of ‘we think too much for the user’ syndrome here. I believe PayPal does want their users to have secure accounts (the what part), but the way they implemented it is having the opposite effect (at least on me). So how could they have done it better?

– Get rid of the security questions first (*).

– Don’t restrict maximum password length; keep your minimum. Seriously, there is no reason to do this.

– Don’t enforce special symbols, capitals or numbers. Instead, hint at how to create easy-to-remember yet very strong passwords.

– Allow copying and pasting. If you are afraid of some users being compromised by that, then they are probably being compromised on several levels already.

And perhaps the most important suggestion: make it easy, seamless and effortless to change your password.

(*) – Yes, this might mean that if someone knew my password they could change it, which is perhaps what the security questions were meant to prevent. However, if someone knows my password, that is a problem in itself, and you’re probably trying to fix the wrong problem.

Stuff I’ve learned #04

Time has passed…

  • In commit messages, describe intent rather than implementation details. (Thanks Remco!)
  • Using GitHub? Then you can refer to issues in your commit messages using #. I.e. when there is an issue (#12) and you want to fix it, just refer to it with #12 and GitHub will automatically link your commit to the issue.
  • It always takes more time than you think to revamp your project(s).
  • Start with why. Easier said than done, though. (Reading the book, so I might write a review about that soon.)
  • PayPal’s functionality to change your password sucks. Perhaps I’ll write a blog about it…
  • GitHub already rocks by making it so easy to host repos, and with the Travis integration it made me drool, especially when I wanted to merge a pull request.

Stuff I’ve learned #03

Another week has passed:

  • Unlike in Windows, in Chrome on Mac OS X you cannot easily focus on your bookmarks bar with a keyboard shortcut.
  • If you want to run rake tasks in your specs in a before block, be sure to call

    Rake::Task[name].reenable

    so you can re-execute them every time. Rake seems to remember which task has been executed, so you cannot execute it twice.

  • If you want to stub out STDOUT messages (like with ‘puts’) in your spec, use:

    STDOUT.stubs(:puts)
  • When in doubt, speak up. Always.
  • With Scrum, big stories are big risks. Split them up.
  • Don’t use PID files to remember which process has been started and when it should be stopped, especially if you want to restart a daemon process automatically once it has died. Instead, wait for the daemon to quit and act upon a non-normal exit code.
  • Sometimes using ‘git fetch -p’ is not enough to prune all your local branches (which no longer exist on remote). You can use a rather long command instead (see below, from a Stack Overflow question):

    git branch -r | awk '{print $1}' | egrep -v -f /dev/fd/0 <(git branch -vv | grep origin) | awk '{print $1}' | xargs git branch -d
  • With editorconfig (*) you can create code formatting rules. Nothing new here, but editorconfig has plugins for a lot of well-known editors (I tested it in Vim & Sublime), meaning you can now share these rules across editors. Now that is cool!
  • With C++, when your function argument is const, and you’re calling a non-const member function on that argument, you will end up with a message like:

    “error: passing ‘const xxx’ as ‘xxx’ argument of ‘function you were trying to call on xxx’ discards qualifiers”

    You can fix this by marking the member function as const:

    bool myFunction() const { /* code here */ }

* Thx to Arjen for the editorconfig tip.