Automating your SQL migrations in your Java (web) Application with ActiveJDBC’s DB Migrator

Regardless of your language or framework: if you write a (web) app and read/write data to a SQL database you manage, you’re bound to make changes to that database at some point.

Usually this is tackled by writing SQL migration scripts: SQL files you execute as part of your release procedure. You might even pass these SQL files, along with a ‘release document’, to a separate (ops) department.

But what if we could automate this? Wouldn’t it be much nicer if your application could detect which SQL files to execute, and then execute them? And when things break, it stops.

This blog post covers one of the many ways to do this. I start from a more general process and move to a more concrete Java implementation.

An automated deploy process

In this case I assume the following:

  1. You are not allowed to run any build process on the environment you deploy to. I.e. there is no Maven on your production environment (nor on acceptance or test).
  2. Up until now you have executed SQL migrations manually: SQL snippets you run from a SQL editor (or the command line).
  3. You want to automate these steps. (of course you do!)

So let’s make the manual process a bit more formal:

  1. You write your SQL script (you execute the bits you need, or you write it in such a way that you can execute it fully every time)
  2. You stop your (web)app
  3. You open your SQL editor / prompt and execute your SQL script
  4. On failure, you roll back and reboot your app (and figure out what went wrong)
  5. On success you boot your (web)app

And what would an automated SQL migration process look like?

  1. It should use SQL files that can be executed multiple times with the same result (the 2nd or 3rd run should not result in other behaviour; i.e. they are idempotent — think CREATE TABLE IF NOT EXISTS rather than a bare CREATE TABLE)
  2. When your (web) app boots it should check if there are any SQL files left to execute
  3. The SQL files are executed in a specific order (from oldest to newest)
  4. It remembers which migrations have been executed
  5. Upon failure, stop executing migrations and stop booting


Notice the similarities!
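To make the automated process a bit more concrete, here is a minimal sketch of what such a boot-time check boils down to. This is not ActiveJDBC’s actual implementation (we get to that below); the directory and the bookkeeping set are made up for illustration:

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Set;

public class MigrationCheckSketch {

  public static void main(String[] args) {
    // pretend these file names were recorded in the database by earlier runs
    Set<String> alreadyExecuted = new HashSet<>(
        Arrays.asList("20180711170817_create_table_Books.sql"));

    File[] files = new File("src/main/resources/sql/db-migrations").listFiles();
    if (files == null) return; // no migrations directory: nothing to do

    Arrays.stream(files)
        .filter(f -> f.getName().endsWith(".sql"))
        .filter(f -> !alreadyExecuted.contains(f.getName()))
        // a timestamp prefix makes lexicographic order chronological order
        .sorted(Comparator.comparing(File::getName))
        .forEach(f -> System.out.println("would execute: " + f.getName()));
    // upon any failure while executing a script: stop, and do not boot the app
  }
}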

Of course, you should have created a backup before doing any migration, though that is out of scope for this post. It usually boils down to:

  • create backup
  • execute release procedure
    • upon failure, report back: restore the backup and boot the old app.
    • upon success, good for you!

Using ActiveJDBC’s DB Migrator

I have worked on a few projects using Ruby on Rails (abbreviated as RoR), which uses Rake as its build tool (kind of like Maven). With it you can easily create new database migrations, execute them, and so on. I was charmed by the simplicity.

In the land of Java, where I have mostly used Spring and Hibernate, I found ActiveJDBC (part of JavaLite), which seems to be inspired by the approach RoR has taken.

ActiveJDBC also comes with a migrator, which reminds me of RoR’s rake db:migrate approach. It is a Maven plugin, and you call mvn db-migrator goals (such as db-migrator:new, shown below) instead. It is very convenient for creating new migrations and for running them.

Now the running part is the problem. While on your local machine you can run the mvn db-migrator goals, I would not suggest doing that on a production (or any other) environment. So we need a way to run the migrator without the use of Maven.

Fortunately this is easy to fix.

Let’s get into some code.


Setting up ActiveJDBC in your Maven project

If you follow ActiveJDBC’s documentation it is easy to set up within your project. I take that as a starting point here and expand upon it.

In your pom.xml you add the following as a plugin within the plugins section:

<!-- for creating db migrations with maven -->
<plugin>
  <groupId>org.javalite</groupId>
  <artifactId>db-migrator-maven-plugin</artifactId>
  <version>${activejdbc.version}</version>
  <configuration>
    <migrationsPath>src/main/resources/sql/db-migrations</migrationsPath>
    <environments>MY ENVIRONMENT</environments>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>${jdbc.groupId}</groupId>
      <artifactId>${jdbc.artifactId}</artifactId>
      <version>${jdbc.version}</version>
    </dependency>
  </dependencies>
</plugin>

A few details:

jdbc.groupId, jdbc.artifactId and jdbc.version are the Maven coordinates of your JDBC driver; for MySQL, for instance, that is groupId mysql and artifactId mysql-connector-java. You probably have this dependency defined in your project already, so reuse those coordinates here.

For activejdbc.version I currently use 2.0.

One other thing to note is that I have deviated a bit from the default migrationsPath: in my case the SQL files live in src/main/resources/sql/db-migrations.

If you set this up, you can now create a migration. Test it by executing:

mvn db-migrator:new -Dname=<your name for your migration script>

The naming of migration scripts does not really matter, although it is good to have some structure in it. I tend to write them as <action>_<type>_<name of column/table>_extra_info, which results in something like create_table_Books.

Suppose you took the last example as input:

mvn db-migrator:new -Dname=create_table_Books

It will result in:

/src/main/resources/sql/db-migrations/20180711170817_create_table_Books.sql

As you can see, a timestamp is prepended to the name you have given.

The timestamp is used by ActiveJDBC to determine whether a SQL script should still be executed. It basically keeps a record of what it has executed so far and diffs that against the SQL scripts it finds. Any newer scripts are executed in the correct order (oldest to newest). In other words: do not touch the timestamp.

I’d suggest you play a bit with the plugin first, before going to the next step.


Executing the SQL Migrations from your (web) application

Now the real fun begins.

Include db-migrator as dependency in your project

So not only as a plugin, but also as a dependency. Like so:

<!-- DB Migrations -->
<!-- yes we use it as a plugin but also as a dependency! -->
<dependency>
  <groupId>org.javalite</groupId>
  <artifactId>db-migrator-maven-plugin</artifactId>
  <version>${activejdbc.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.sonatype.sisu</groupId>
      <artifactId>sisu-guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- DB Migrations -->

NOTE: I exclude sisu-guava because it is older than Guava 20.0 and causes RuntimeExceptions when Preconditions is used. Since I use version 25.x or newer in my own projects, it clashes with that old version. So if you use a newer Guava yourself, make sure you exclude the old one like above (mvn dependency:tree will show you which versions end up on your classpath).

Using it as a dependency makes it reachable from our code, while we can still use the mvn db-migrator goals alongside it.


Making sure your migrations run before your ORM is loaded

Usually web apps use an ORM (like Hibernate), which requires the state of the DB to match the objects in the code. So your migrations (which alter the state of the DB) must be executed before the ORM boots.

So let’s start with that. You do this by adding a listener to your web.xml, placed above all other listeners, making sure it is executed first. If you do not have any listeners yet: in my case they are positioned right after the filters, but before the servlet definitions.

<!-- order is important, the database migrator must run before everything else -->
<listener>
  <listener-class>com.fundynamic.webapp.listener.DatabaseMigratorListener</listener-class>
</listener>

Now we have to make sure the class exists. Create the class and let it implement ServletContextListener, like so:

package com.fundynamic.webapp.listener;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DatabaseMigratorListener implements ServletContextListener {

  // Public constructor is required by servlet spec
  public DatabaseMigratorListener() {
  }

  // -------------------------------------------------------
  // ServletContextListener implementation
  // -------------------------------------------------------
  public void contextInitialized(ServletContextEvent sce) {
  }

  public void contextDestroyed(ServletContextEvent sce) {
      /* This method is invoked when the Servlet Context 
         (the Web application) is undeployed or 
         Application Server shuts down.
      */
  }
}

I’d suggest you put some System.out or logging there and see if it is executed in your application in the correct order before going to the next step.
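For example, a temporary sanity check (hypothetical; any print or log statement will do) could be as simple as:

public void contextInitialized(ServletContextEvent sce) {
  // temporary sanity check: this line should show up in the logs
  // before your ORM (Hibernate, etc.) starts initializing
  System.out.println("DatabaseMigratorListener runs before everything else");
}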

Executing the migrations

Here we get into the meat of the problem.

Let’s start with the code:

package com.fundynamic.webapp.listener;

import org.javalite.db_migrator.maven.MigrateMojo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DatabaseMigratorListener implements ServletContextListener {

  private static final Logger log = LoggerFactory.getLogger(DatabaseMigratorListener.class);

  // Public constructor is required by servlet spec
  public DatabaseMigratorListener() {
  }

  // -------------------------------------------------------
  // ServletContextListener implementation
  // -------------------------------------------------------
  public void contextInitialized(ServletContextEvent sce) {
    String configFilePath = sce.getServletContext().getRealPath("/WEB-INF/classes/dbmigrator.properties");
    String migrationsPath = sce.getServletContext().getRealPath("/WEB-INF/classes/sql/db-migrations");

    try {
      MigrateMojo mojo = new MigrateMojo();

      mojo.setConfigFile(configFilePath);
      mojo.setMigrationsPath(migrationsPath);

      // we do not use environments like this, but we process the properties file in our build process with a maven profile
      mojo.setEnvironments("myenv");

      mojo.execute();
    } catch (Exception e) {
      log.error("Exception when executing migrator: ", e);
      System.exit(-1); // stop further execution of the webapp with a non zero error code
    }
  }

  public void contextDestroyed(ServletContextEvent sce) {
      /* This method is invoked when the Servlet Context
         (the Web application) is undeployed or
         Application Server shuts down.
      */
  }
}


Basically we do two things:

  1. We point to a dbmigrator.properties file, which provides the configuration for ActiveJDBC’s database migrator
  2. We point to the directory containing the SQL migration files (the same folder we configured for the plugin)

You can also see I set a single environment named myenv. I can get away with one environment because I use Maven profiles, which interpolate environment-specific values for me at build time. Another way would be to read an environment variable, so you can have one properties file with different property prefixes per environment.
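A minimal sketch of that environment-variable approach (a sketch only; I do not use this myself, and the variable name DB_MIGRATOR_ENV is made up):

// pick the db-migrator environment at runtime instead of at build time
String environment = System.getenv("DB_MIGRATOR_ENV"); // e.g. "test", "acceptance", "production"
if (environment == null) {
  throw new IllegalStateException("DB_MIGRATOR_ENV is not set; refusing to guess an environment");
}
mojo.setEnvironments(environment);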

Last but not least: upon any exception we call System.exit(-1) to signal that we should stop whatever we are doing.
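Note that System.exit(-1) takes down the entire JVM, which on a shared servlet container means every deployed app; for a broken database that may be exactly what you want. If you would rather fail only this deployment, an alternative (a sketch; I use System.exit in my setup) is to rethrow and let the container abort the startup of this webapp:

} catch (Exception e) {
  log.error("Exception when executing migrator: ", e);
  // instead of System.exit(-1): fail only this webapp's deployment
  throw new IllegalStateException("Database migration failed, aborting startup", e);
}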

Within the dbmigrator.properties file you will need at least the credentials for the user that executes the migrations (I suggest you create a separate account per environment for this). The file might look something like this:

myenv.driver=${jdbc.driverClassName}
myenv.username=${db.migrator.username}
myenv.password=${db.migrator.password}
myenv.url=${jdbc.url}

Here, too, the values are interpolated (filtered) by Maven. Do note that the variables all start with myenv, which is the environment configured in the listener. The ${jdbc.driverClassName} and friends are defined in my pom.xml; for MySQL, the driver class would be com.mysql.jdbc.Driver, for instance.

Making execution optional

There are cases where you do not want to execute the migrations. For example, you want to execute migrations automatically only on specific environments (i.e. up to and including acceptance, but not yet on production). In that case:

Add this to your web.xml:

<!-- put this here somewhere on top of your web.xml file -->
<context-param>
  <param-name>executeDatabaseMigrationsOnBoot</param-name>
  <param-value>${db.migrator.execute}</param-value>
</context-param>

Again, the param-value is a property filtered by Maven. If you want to know more about this, see the maven-war-plugin filtering options.

Either way, you want to have the context-param so you can read the value from your listener and act on it.

I use a BoxingUtil class, which is nothing but a couple of simple helpers that make parsing a String into a boolean easier. The implementation of canParseAsBoolean is:

public static boolean canParseAsBoolean(String value) {
  return Boolean.TRUE.toString().equalsIgnoreCase(value) ||
      Boolean.FALSE.toString().equalsIgnoreCase(value);
}

parseBoolean does exactly what its name says, but makes sure the value is not null first. It is not of much importance here.
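For completeness, a sketch of what parseBoolean could look like (hypothetical, but matching the described behaviour):

public static boolean parseBoolean(String value) {
  // guard against null, as described above
  // (Boolean.parseBoolean would return false for null anyway)
  if (value == null) return false;
  return Boolean.parseBoolean(value);
}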

Reading the property and acting on it then looks like this:

String value = sce.getServletContext().getInitParameter("executeDatabaseMigrationsOnBoot");
if (!BoxingUtil.canParseAsBoolean(value)) {
  log.error("Cannot parse value given for context-param (in web.xml) 'executeDatabaseMigrationsOnBoot', value is [" + value + "], but expected either [true] or [false]");
  System.exit(-1); // stop further execution of the webapp with a non zero error code
}

boolean executeMigrations = BoxingUtil.parseBoolean(value);

Now you can execute the rest of the listener code only when this value is true.


The listener, complete code example

For completeness’ sake, here is the full listener, with a bit of logging and with the flag that decides whether to execute the migrations.

package com.fundynamic.webapp.listener;

import com.fundynamic.util.BoxingUtil;
import org.javalite.db_migrator.maven.MigrateMojo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DatabaseMigratorListener implements ServletContextListener {

  private static final Logger log = LoggerFactory.getLogger(DatabaseMigratorListener.class);

  // Public constructor is required by servlet spec
  public DatabaseMigratorListener() {
  }

  // -------------------------------------------------------
  // ServletContextListener implementation
  // -------------------------------------------------------
  public void contextInitialized(ServletContextEvent sce) {
    log.info("--------------------------------------------------------------");
    log.info("--------------------------------------------------------------");
    log.info("---              DATABASE MIGRATOR STARTING                 --");
    log.info("--------------------------------------------------------------");
    log.info("--------------------------------------------------------------");
    String value = sce.getServletContext().getInitParameter("executeDatabaseMigrationsOnBoot");
    if (!BoxingUtil.canParseAsBoolean(value)) {
      log.error("Cannot parse value given for context-param (in web.xml) 'executeDatabaseMigrationsOnBoot', value is [" + value + "], but expected either [true] or [false]");
      System.exit(-1); // stop further execution of the webapp with a non zero error code
    }

    boolean executeMigrations = BoxingUtil.parseBoolean(value);

    log.info("executeMigrations flag is [" + executeMigrations + "]");

    if (executeMigrations) {
      executeMigrations(sce);
    } else {
      log.info("Execute migrations was disabled, skipping executing migrations automatically!");
    }

    log.info("--------------------------------------------------------------");
    log.info("--------------------------------------------------------------");
    log.info("---              DATABASE MIGRATOR FINISHED                 --");
    log.info("--------------------------------------------------------------");
    log.info("--------------------------------------------------------------");
  }

  private void executeMigrations(ServletContextEvent sce) {
    String configFilePath = sce.getServletContext().getRealPath("/WEB-INF/classes/dbmigrator.properties");
    String migrationsPath = sce.getServletContext().getRealPath("/WEB-INF/classes/sql/db-migrations");

    try {
      MigrateMojo mojo = new MigrateMojo();

      mojo.setConfigFile(configFilePath);
      mojo.setMigrationsPath(migrationsPath);

      // we do not use environments like this, but we process the properties file in our build process with a maven profile
      mojo.setEnvironments("myenv");

      mojo.execute();
    } catch (Exception e) {
      log.error("Exception when executing migrator: ", e);
      System.exit(-1); // stop further execution of the webapp with a non zero error code
    }
  }

  public void contextDestroyed(ServletContextEvent sce) {
      /* This method is invoked when the Servlet Context
         (the Web application) is undeployed or
         Application Server shuts down.
      */
  }
}


What if I want to rollback?

So there are 2 scenarios for this.

  1. you’re releasing, and something went wrong.
  2. your release went fine, but you want to revert some changes in the next release

In scenario #1, restore the backup you created before executing the release and the migrations.

In scenario #2 your release went fine, so you create a new migration that rolls back the changes. There is no reverting of SQL scripts, just a continuation (roll-forward).

Conclusion

Automating your SQL migrations is pretty straightforward. If you write idempotent SQL scripts and configure your (web) app to execute them automatically with ActiveJDBC’s db-migrator, you get convenience for free. With this you are a step closer to automating your deploy process, which is very powerful for both developers and non-developers.

You can use either Maven profiles or ActiveJDBC’s environments to differentiate properties; which you use is up to you.

If you found this helpful or have any questions, let me know.

RTS Game: Incubator week started!

Today I have begun an ‘Incubator week’. This means I have allocated a whole week to work on my game: one week without any distractions, at a separate location, fully focused on the game.

And even more awesome, I have some help from Arjen van der Ende for a few days.

In this week we’re going to work on:

  • Merging the Harvester logic
  • Minimap
  • Power Resource
  • AI

A lot of the things needed to get the Minimal Playable Game finished!


I am very grateful that my family supports me in chasing my dream and gives me the opportunity to work on my game like this. Thanks! Also, thanks Arjen for helping me out!

RTS Game: Building the game, plan of attack

In my previous post I announced the start of my dream: building my own game.

In this post I will elaborate further on the goal I have set and how I intend to reach it. Writing a game is, like any project, quite a challenge. It is a continuous process of ‘zooming in’ (doing the actual work) and ‘zooming out’ (keeping an eye on the bigger picture). Only that way can you be sure you reach the goal in the most efficient manner.

Like any project, to keep clear what to do, there is a list of tasks. It is wise to (ball-park) estimate how much time they will consume. At the same time there is a desired ‘launch date’. This brings a certain tension, as you want as much value (features/tasks) done by the launch date, while there is usually a minimum set of tasks you have to get done.

Since I am now the developer and ‘product owner‘ at the same time, I experience both sides. I need to investigate what to do to reach my goals, and at the same time estimate and do the actual work.

This blog post covers the questions:

  • What needs to be done?
  • When will they be done?
  • What is the ‘end-date’?

What needs to be done?

In the grand scheme of things (zoomed out), roughly:

  • make the game ‘feature complete’
  • make my own graphics, sounds, etc
  • easily distributable & installable

What is ‘feature complete’?

I am heavily inspired by the first versions of C&C (Red Alert), and I feel that a minimum playable game should at least have:

  • 1 resource to ‘mine’ to earn money with (est: 8 hours)
  • 1 ‘power’ resource (est: 4 hours)
  • 1 faction to play with as a player (est: 0 hours)
  • A simple tech-tree (structures) (est: 8 hours)
    • Central structure to produce other structures (Const yard in C&C)
    • Power plant
    • Barracks
    • Factory
    • Refinery
  • A few units which have a Rock-Paper-Scissors mechanism (est: 16 hours)
    • Infantry
      • Two types: rocket launchers and soldiers
    • Light unit (fast, trike/quad)
      • 1 or 2 types
    • Heavy unit (tank)
      • 1 or 2 types
  • A (random generated) map to play on (est: 4 hours)
  • A very simple AI to play against (est: 12 hours)
  • A clear objective (ie, destroy enemy) (est: 4 hours)
  • A beginning and ‘end’ of the game. (menu to start from, and ‘game won/lost’ screen). (est: 8 hours)

And I think if you really want to push it, you can shave off some features. I believe the first step is to come ‘full circle’ as soon as possible, meaning you have a complete concept of a game: it has a beginning and an end.

From here on I can expand scope to my ideal game. But not before I have done the two other important things.

Estimation: 64-80 hours (20% deviation)

Make my own graphics, sounds, …

The next important part is graphics, sounds, music, story, etc. As for graphics: at the moment I use Dune 2 graphics as placeholders. Once I have the game mechanics in place and I know which units/structures I need for the minimum version, I can start creating/acquiring these. So in a way they are not required immediately, but they are required whenever I want to commercially release my own game.

Changing the graphics will surely impact some implementations; although I set up the code to be as flexible as possible, there are always cases I have not thought about.

There is a caveat. Creating graphics is hard. It is not my primary skill. To tackle this I could:

  • learn to create my own (? hours)
  • find someone else who is willing to do graphics for me (0 hours, but $$$)
  • find premade graphics in a market place and use that (4 hours searching, and $$$)

The same logic applies to sounds, and possibly music.

Estimation: 0 – 10.000 hours

Seriously: this is a risk, and I need to decide on a strategy to get a more reliable number of hours.

Easily distributable & installable

Release early, release often. It is a phrase I use all the time when working at clients, and the same goes for my game. There is a difference, however. Usually I work on mid-to-large web applications; there is no need for customers to install anything to use the latest version of the website. When an organisation has implemented Continuous Deployment it can deploy new versions at will, and customers automatically have the latest version on their next visit.

For applications that need to be installed, there are several platforms out there. There is a reason the concept of an ‘app store’ is famous: it delivers a web-like experience.

There is a lot to win here. The first step, though, is to make sure any user is able to download a distribution for their platform (Windows, Mac OS, Linux) and install the game. I already took the liberty of choosing to support these platforms. So yes, I target the PC platform; no mobile, game consoles, etc.

In that sense there are a few steps to be taken:

  1. Offer a distribution somewhere (website? distribution platform? need to figure out)
  2. Provide an easy installation procedure (how? install4j? etc)
  3. Provide an app-store like experience. (use a distribution platform like Steam?)

Estimation: 8-16 hours

Estimation is based on creating an installer.

When will it be done?

Ok great, so there is a lot to do. So when will all this be done?

Let’s bring a few numbers together: the estimates and the available time.

Bringing the estimates together

I just take the hours from all 3 paragraphs above, and sum them.

Min: 64 + 0 + 8 = 72

Max: 80 + 0 + 16 = 96

(I purposely did not add the 10.000 hours from the graphics section; that is seriously way too uncertain.)

So basically I am estimating that 72 to 96 hours should be enough. Given an 8-hour work day, that means it would take 9 to 12 days to get a minimum version that is distributable in the easiest way, without doing anything about graphics, sounds, etc. (using the Dune 2 graphics as ‘stub’).

Estimated time needed: 72 to 96 hours

This excludes graphics.

How much time do I have?

To calculate the amount of time I realistically have, I also do a min/max calculation. The minimum being the amount I am 100% sure I have; the max being the added hours I can probably spend, but should not count on.

The minimum hours are easy: I have at least 8 hours a week (1 work day a week) to spend, and every 8 weeks I take a week off to spend even more time. This means that in a period of 10 weeks I can spend 14 days.

The max would be 0 to 8 hours more per week. I can sometimes spend a weekend, an evening, sometimes more evenings. It really depends on a lot of things.

I like to think in periods of 10 weeks; I consider a full 10 weeks an ‘iteration’. When working on Magic Gatherers we used this mechanism and it worked out pretty well. Also, every iteration had a particular focus. The focus of the first iteration there was ‘from idea to launched product’, for instance. For this game it will be different, of course.

One other aspect of time is: ‘when should it be done?’. The easiest thing is to use the iteration end date. Considering that the first work day falls within this week, this week counts as the first of the iteration. An iteration of 10 weeks then means the end date is the 29th of October.

Meaning:

End date: 29th of October 2017

Time available: 14 days to 19 days (112 to 152 hours)

Looks like an easy feat!… oh wait, I have seen this before. This probably means I forgot something. Ah yes, the graphics… and so many more unknowns. Looks like it will be a close one.

So when will it be done?

Yes, good question. In this case I choose the fixed-time, flexible-scope approach, although I do know that if the minimal scope is not met, I will not launch the product. Then again, I really DO want to launch a product, so I will probably be very harsh on the scope and just make it fit.

This brings me to another topic, priorities and goal. What do I try to achieve with releasing something at the end of iteration #1? I will elaborate on that in a different blog post.

Conclusion

It looks like it is feasible to get a minimalistic, feature-complete game done within the first iteration. That would mean that on the 29th of October (at the latest) a downloadable and installable game should be available.

However, there are more things to be done that were not explicitly defined in the overall 3 phases. Think of writing dev blogs, YouTube videos demoing features, a monthly in-depth video for patrons, and so forth.

The only way to know is to just do it!

Kickstarting my dream! – building my own Real-Time-Strategy Game

My dream is to build my own games and make a living out of it. It is fairly obvious to me now, but at the same time I never dared to make it this explicit.

Once you decide to make work of your dreams, you get into some kind of planning stage. How am I going to build any game? What kind of game? And how do I find the time? Practical stuff, dreamy stuff. There is so much needed to build a game; where to begin?

For me it is obvious I want to build an RTS game. I have fond memories of Dune 2, Command & Conquer and Warcraft. Those games got me into programming. No wonder I built Dunedit, Arrakis, Dune 2 – The Maker, and am still working on my own engine today. You could say: I have a strong desire to build my own RTS.

So what is needed to build a game on your own? Obviously you need practical skills: coding, graphics, sounds, music, etc. I’m convinced you can learn just about anything, although that does not make you an expert at it. I can code, and I have some skill in music. And if I can’t ‘create’ something myself, there are various resources out there…

Then the next question is: when am I going to build this? It requires some serious time. How am I going to make time for this, especially with a family, work, etc.?

The answer to that is: you ‘make’ time. My experience with working on Magic Gatherers is that I can free up at least 1 day a week, and every 8/9 weeks 5 days. Freeing up basically means taking unpaid leave.

This means that over a period of 10 weeks I can allocate 14 days: a full two weeks of time I can spend on my own stuff. That is 10 ‘day a week’ days plus 4 extra days (from the 5-day week).

As a freelancer I can free up time relatively easily, but it also means I need to watch my cash flow and make sure my family can rely on a steady income. This is my starting point; from here I want to free up more time (but that needs preparation).

In the meantime, given that I know how much time I can spend on the game this year, it becomes obvious that I have to prioritize the things to do. This relates to goals, which I will talk about in a later post.

If you want to follow my progress and get a more in-depth view of what I encounter while working towards my dream, follow (or support) me on Patreon.

Integration testing your Asp .Net Core App – Dealing with Anti Request Forgery (CSRF), Form Data and Cookies

  1. Integration testing your ASP.NET Core app with an in-memory database
  2. Integration testing your ASP.NET Core app: dealing with anti request forgery (CSRF), form data and cookies (this post)

This post can be considered a sequel to Setting up integration testing in Asp.Net Core. It builds upon some code provided there.

Running integration tests is awesome. (Be mindful though: They Are A Scam as well.)

So great, you got it up and running. You will probably run into a few (practical) things when dealing with more useful scenarios. In this post I describe visiting a page (GET) and POSTing a form. We also deal with the case where you have set up anti-request-forgery protection.

I will give you a few hints of code. For your convenience I have also put them up as Github Gists.

Disclaimer about the presented code:
I arrived at this code after some investigation, but I lost track of which source led to which piece of code. So if you are (or know) the source of a piece of code, please notify me and I will give credit where credit is due.

Now that we have that out of the way, let’s get started!

Context – some example code

To explain things more easily, let’s say you have a GET and a POST action defined for the same URL. The GET delivers a view with a form; in your integration test you want to POST to that same URL as if the user filled in the form. (Since we’re not UI testing, we don’t care about the HTML; no need to play browser here.)

In our example, let’s say we have some code like this:

// GET request to awesomesauce
[Route("awesomesauce")]
public async Task<IActionResult> AwesomeSauce()
{
	var model = new MyAwesomeModel();
	return View(model);
}


// POST request to awesomesauce
[HttpPost, Route("awesomesauce"), ValidateAntiForgeryToken]
public async Task<IActionResult> AwesomeSauce(MyAwesomeModel myAwesomeModel)
{
	// if valid, do stuff
	
	// else...
	return View(myAwesomeModel);
}

Your integration test would look like this:


[Collection("Integration tests collection")]
public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		// How do we do this? Send data (POST) - with anti request forgery token and all?... lets find out!
		//var response = await client.SendAsync(requestMessage);
	}
}

In the test we marked our questions: how do we POST data, and how do we send this anti-forgery token?

Let’s begin with POSTing data. In a convenient world I would like to fill a Dictionary<string, string> with keys and values, simply pass that as the request body, and let some helper method transform it into a true HttpRequestMessage. I made such a thing myself: the PostRequestHelper (the code is in the GitHub Gists mentioned above).

With the PostRequestHelper we can now POST data like so:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		var formPostBodyData = new Dictionary<string, string>
			{
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);

		// TODO: AntiRequestForgery token

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

Well, that was easy, wasn’t it?

If you paid attention, you already saw a hint in the PostRequestHelper about a CookiesHelper. Although it is not needed for dealing with anti-request-forgery, it is a handy tool. I’ll explain it below.

Dealing with the AntiRequestForgery token

In general it is easy: you do a GET, and in its response you receive a token. You extract that token, put it on your next POST request, and you’re done.

To extract the token, you can use the AntiForgeryHelper (also found in the Gists mentioned above).

Now we can use it in our test like so:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce");
		response.EnsureSuccessStatusCode();

		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);

		var formPostBodyData = new Dictionary<string, string>
			{
				{"__RequestVerificationToken", antiForgeryToken}, // Add token
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

And voilà, you can now do a POST request that passes the token along and makes the POST happen. By omitting the token (or using a different one) you can test whether your action is protected against CSRF. Although I would not try to test the framework itself, I would advise having tests in place that make sure specific controller actions are protected.

Dealing with Cookies

As a bonus, let’s deal with cookies; you will probably need to handle those too. Sometimes you need to POST data as if you were a real browser, cookies included. To make life easier there is a method on the PostRequestHelper called CreateWithCookiesFromResponse. It basically creates a POST request and copies over the cookies from a (previous) GET response.

The CookiesHelper it uses can also be found in the Gists mentioned above.

In our example test above, we could have used it like this:

public class AwesomeSauceTest : AbstractControllerIntegrationTest
{
	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
	{
	}

	[Fact]
	public async Task Visits_AwesomeSauce_And_Posts_Data()
	{
		var response = await client.GetAsync("/awesomesauce"); // this returns cookies in response
		response.EnsureSuccessStatusCode();

		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);

		var formPostBodyData = new Dictionary<string, string>
			{
				{"__RequestVerificationToken", antiForgeryToken}, // Add token
				{"Awesomesauce.Foo", "Bar"},
				{"Awesomesauce.AnotherKey", "Baz"},
				{"Any_Other_Form_Key", "Any_Other_Value"}
			};

		// Copy cookies from response
		var requestMessage = PostRequestHelper.CreateWithCookiesFromResponse("/awesomesauce", formPostBodyData, response);

		response = await client.SendAsync(requestMessage);

		// Assert
	}
}

Conclusion

After we have set up integration testing, we want to perform some basic interactions with our application. Using the AntiForgeryHelper we can extract a token, and using the PostRequestHelper we can easily construct and send a POST request. Combined, they can make any scenario work with CSRF protection.

In case you need to pass along cookie information, you can use the CookiesHelper.