Crazy Chess: board representation

First a note: I spent a bunch of time writing code without blogging. The first issue is pretty far along. This first code analysis entry though is going to be just about the board representation.

At this point it’s tempting to anticipate the future and to make something that addresses concerns we expect to come up. This is best avoided though, and so at this stage of development the requirements on this board are quite trivial.

First the unit tests. Really all we need to do is make sure we can build a new board and then fill it with pieces:

    auto board = crazychess::board{};

    for (auto const& space : board)
        BOOST_CHECK(space == crazychess::pieces::empty);

    auto board = crazychess::board{};

    board[0] = crazychess::pieces::white_pawn;
    board[1] = crazychess::pieces::black_king;
    board[2] = crazychess::pieces::white_rook;

    BOOST_CHECK(board[0] == crazychess::pieces::white_pawn);
    BOOST_CHECK(board[1] == crazychess::pieces::black_king);
    BOOST_CHECK(board[2] == crazychess::pieces::white_rook);

    std::for_each( std::begin(board)+3, std::end(board)
                 , [](crazychess::piece p) { BOOST_CHECK(p == crazychess::pieces::empty); });

The first test just checks that the default constructor creates an empty board. I don't know yet if this is the right thing to do, but if not I can always delete this test, and that is something important to get used to. As soon as a test becomes obsolete or becomes a maintenance burden you should look for ways to decrease that burden or just delete the test entirely; maybe start over, maybe not.

The second ensures I can modify the board to have non-empty squares. I don't do an exhaustive check, which would be overkill, but I do want to make sure that the basic interface works.

The code that passes these tests is quite trivial:

enum struct piece
{
    empty = 0
  , white_pawn
  , white_rook
  , white_knight
  , white_bishop
  , white_queen
  , white_king
  , black_pawn
  , black_rook
  , black_knight
  , black_bishop
  , black_queen
  , black_king
};

using pieces = piece;

using board = std::array<piece, 64>;

First, a note on what this code does, because it may look a bit funny to people used to C++03 or learning at a college where they're not teaching the new language yet.

The first bit defines a new enumeration type. Unlike in previous C++, this one defines a truly unique type and not just a bunch of integer values; I couldn't pass a black_rook as a color, for example. This introduces some bits of inconvenience, as you'll see later in some position work that uses bitmasks, but it also introduces clarity and safety to the code, and I value the latter more than convenience. The explicit use of 0 to assign empty is superfluous but adds clarity. One could go either way on that, since a C++ developer should know that the first enumerator will be 0 unless otherwise stated, but I prefer to add clarification when I think of it and it costs nothing, as in this case.

Next I create an alias for piece so that the plural can be used when it reads better. This simply increases the vocabulary options in the language we are creating to speak about the constructs in the code. Code should be expressive to humans, so we want to enable humans to communicate well. As you create a new program you're also creating a language that describes the construction of that program, and that language will shape your ability to communicate as you go.

Next I just create an alias for an array and call it "board". This means that the board identifier throughout the code (when that identifier isn't shadowed in a more limited scope) will have the same syntactic meaning as the array type, though semantically, to humans, the name carries slightly more meaning. We have to watch out here though, because "board" and "array<piece, 64>" are exactly the same type; you can interchange them implicitly, so if you use the same basic type to represent a different concept you might have troubles.

An argument here can certainly be made that this code represents a case of “primitive obsession”. In fact we can already get a sense that we’re on a bad track by looking at some unit test code I created:

std::string quick_board_string(crazychess::board const& board)
{
    auto result = std::string(64, ' ');

    std::transform( std::begin(board), std::end(board)
                  , std::begin(result)
                  , [](crazychess::piece p)
                    {
                        constexpr auto pc = " PRNBQKprnbqk";
                        return pc[static_cast<int>(p)];
                    } );

    return result;
}

This seems like pretty innocuous code, and to some degree it is, but it represents a potential to spread logic around the system rather than encapsulating it locally. If I made a fen_generator(board const&) function it would necessarily contain a lot of the same logic. That is quite undesirable and is one of the first things that will make your project unmaintainable pretty darn quickly. When something about the board type changes, for example, every function that implements this same logic will have to change with it. Some may get lost in the shuffle if the type system doesn't catch it, and since I used a rather primitive type here that becomes more likely.

Here also though is a great temptation to be too darn perfect. If we spend all our time analyzing every potential problem and dealing with it we’ll stagnate and get nothing done. There’s a bit of a balance to hunt down here and there’s not really any clear way of deciding who’s being too sloppy and who’s being too analytical. Except for extremes on either end, which are both quite detrimental to your success as a professional developer, there’s a lot of wiggle room and subjective likes that come into play.

One way you can handle this uncertainty, and it's the procedure recommended in agile development, is to just write whatever code you need to pass the tests you have. Then later, before preparing your commit changeset, you look at what you've written for smells that already exist and can be cleaned out. If you have a bunch of copy-pasta (code that's been copied to numerous locations and then perhaps modified slightly) you look for ways to consolidate the shared logic. This has the advantage of getting you to something working pretty rapidly, so that if your employer or customer cuts you off because they need the feature NOW, you can toss it out the door.

This latter action is a decision that is sometimes made, perhaps by you, perhaps not; maybe you're being forced into it against your objections. The choice to release code that is not quite the cleanest it could be but gets the job done is known as "technical debt". This is a debt you absorb by deciding to make your later self more miserable in some way. Either you'll have to go back and pay the debt by fixing the unclean code, preferably fairly soon, or you'll have to constantly pay interest on it as you move forward. When too much technical debt develops your project will collapse, usually long before management actually decides to discontinue it or pay the debt.

Some people or employers consider bugs technical debt. They might let bugs build up and then do a "technical debt payment rush" to fix as many as they can and get the defect count down. I do not agree that they're the same thing. Technical debt is code that actually does what it is meant to; it just represents a continued cost in time and mental work needed to maintain the product. Bugs are actual defects in the program that will impact some customers. It is true that we often put off fixing them because they only affect a few customers or the cost of fixing them is too great (usually because of a decision to take on actual technical debt earlier in the process), but technical debt is debt that a company or team takes on of its own accord; its only effect on the customer is in feature-request or issue turnover.

At this point I’m taking on this technical debt. As I get closer to actually considering this task complete and submit a pull request (or code review if you’re not using github) I will look back and see if there’s debt I do not want to take on. I have this luxury since I’m in charge, usually you won’t.

As a general rule I disfavor primitive obsession and think it’s a huge mistake so I imagine I’ll find some practical excuse that requires I create real types rather than aliases. I certainly see forces that might come in the future that would require them, but at the same time the temptation to go off trying to anticipate the future is probably one of the strongest (at least for me) so I may instead force myself to stop where I am. I will let the future decide.

Much more than this has been checked in and you’re encouraged to look into it. This entry though represents the code as it was at this point in the history.

As I move forward you should feel free, if you have a github account, to comment on changesets. Ask questions, suggest alternatives, etc. Probably the most important part of being a professional developer is collaboration. It can also be a pretty hard skill to develop; many of us would prefer to be lone-wolf developers. One great way to learn, besides working under the guidance of peers, is to ask questions and critique others' code. I therefore encourage you to comment on mine, especially as it pertains to this project; you can comment on any changeset, including this one.


Crazy Chess: acceptance tests

First a little side note for anyone that’s been waiting for updates: I am on sabbatical but taking little side jobs once in a while. This will be going on, off and on, for a while. Sometimes I’ll be too tired to blog.

I almost forgot an important part of professional development because it’s very often done by a different team: that of acceptance or functional testing. This level of testing actually runs the product and tests it from the user’s perspective to ensure that it meets their needs, which of course includes actually functioning. It is very nice to have it automated so that as a developer I can just run the tests before I even propose my changes for review. Any time it’s difficult for a developer to run a well designed and specified set of acceptance/regression tests the turnaround time for any change either increases or it becomes a more uncertain process–meaning more mistakes make it into the trunk.

Although it’s not really my direct area I do have some experience with helping test teams come up with tests (theoretically it’s more a job for the customer and PM to help testers decide what to test but theory and practice don’t always coincide). I looked into some of the automated acceptance testing frameworks I’ve heard of, such as FitNesse, but none seem to really fit into the TravisCI setup. It seemed easier just to leverage Boost.Test to drive a custom class I made to spin up the command-line program and talk to it.

The first stage in development involves writing a program that just “draws” a chess position on the console. So the first set of acceptance tests just checks that the startup position creates the right board view, one variation that is not startup, and an invalid position. Certainly far from exhaustive but this seems sufficient to me, especially as the classes that implement this behavior will be unit tested. That and we haven’t really introduced a lot of complexity yet; at this point it’s more important to set off in a good direction.

So in the test directory I moved unit test stuff into a ‘unit’ subdirectory, and created an ‘acceptance’ subdirectory. Both create boost tests with slightly different targets. I made a basic `process` class that uses posix functionality to fork out and exec a command line program–it then uses boost’s IOStreams library to communicate with the program through stdin and stdout. Normally you’d think about how you might test this class but:

  1. It would be really tough to test something like this.
  2. It really does very little–after abstracting a testable interface there’d not be much to test.

So this class isn’t going to be tested in an automated way. It was created and tested a couple times to make sure it worked. Beyond that not much is needed.

Using this class, the three tests were created. We just send the fen to the program via `argv[1]` and expect to read a series of lines that represent the board. There's some funkiness here, but there are a couple of practical reasons to put up with it, the most important being that this whole set of tests will be replaced rather quickly and the program created will go away too; both will be replaced by programs with more complex behavior.

The test code is here.

The simple, failing program is here.

The important thing to note here is that we have our three tests before the code that will pass them exists. This tests the whole of the board issue. Now that this is done we'll write unit tests to exercise the classes and functions that will build up a program that passes these acceptance tests. This process is known as "Test Driven Development", and the single deciding aspect that distinguishes it is that tests are written before the code that passes them.

Crazy Chess: managing the project

Usually when you are working for someone you will be required to report progress in some manner. Bosses and customers are obsessed with knowing that stuff is getting done for some reason. There are a myriad ways to do this. I prefer it being as lightweight a process as meets the needs of my employer or customer.

Some people insist on having gantt charts and all kinds of planning diagrams and such. I've not found them all that helpful, and especially when working alone this is some pretty heavyweight process just to create a chess program. So no, not going to do that.

Most of my agile experience is with Scrum. It too can get pretty heavyweight, and so in the last few years people have been tearing the time sinks out of the process to create lean variants, one of the more common being Kanban.

Project management isn't going to be a huge part of this blog series, but since the object is to provide a view of what working on real C++ projects is like, there will be some. At this time I've decided to use the github issues system with a kanban-style board on top. This is a fairly feature-starved setup but sometimes that's a good thing: the more features you have in your tools, the more your process tends to turn into a slog fest where you spend more time ticking off little process requirements than simply writing code and tests.

At this point there are very few issues in the queue. This doesn't mean there isn't much to do to reach our first release milestone; the issues just represent a very rough outline of the things to do. As we move forward we'll need to look at these ('epic' issues of a sort) and split them up into smaller tasks.

On the other hand we may find that they don’t really split down any further. In agile a development task should represent the smallest grain of a complete feature and no further. You don’t want to create tasks that take months to resolve and result in massive changesets, but you want to limit the measure of completion at every level to something that is actually complete. At any time in the process the customer or employer may decide to release–they should be able to do so.

We use a rather liberal definition of “complete feature” though in order to make this possible. In this case we have a general idea that the first release is going to be a rather simple chess playing program. To create releasable features that can be completed in short time we need to split that down a bit. So I have planned the following sort of roadmap toward that end, and at each stage in the process we’ll have something that could potentially be released as a product–even if a rather pointless one.

  1. A program that prints out an ASCII drawing of a chessboard with pieces on it. Takes a position from the user and makes the drawing.
  2. A program that takes a starting position and whose turn it is, and enters a read loop that requests moves from the user and alters the position and game state based on those moves.
  3. A computer opponent that’s rather dumb–doesn’t know that the game has ended nor some of the other complex rules. Just finds good moves and makes them.
  4. A complete set of rules the computer opponent will use in its analysis–it should never try to make an illegal move.
  5. Enforcing those rules on the human opponent so that any attempt to make an illegal move results in error and re-request.

It is preferable to have a more parallel set of issues so that multiple developers can work on different things at the same time. It's not unusual though for that parallelism to require some ramping-up time, as we see here. At around stage 3 we begin to see opportunity for it: someone could work on the computer opponent while someone else works on the complex rules, and maybe someone else enforces those rules on the human opponent.

With that in place we can begin development. To begin that process I have assigned the first issue to myself and moved it into the ‘In Progress’ state. I will create a new branch to work on as you would in a production environment. The process of work will be:

  1. Grab an issue, assign it to myself, and begin progress.
  2. Create a branch to work on the issue–named in a way to reflect what’s being worked on.
  3. Do work, committing changes on the way.
  4. When finished do a pull request to start the code review process–I will be involving the community in this.
  5. Upon receiving input and finishing any changes that may be needed merge that pull into master.
  6. Observe the CI server to ensure I didn't screw up somewhere and break the build.
  7. Close the issue as complete.

Crazy Chess: pre-development setup

In the beginning stages of development our “customer” doesn’t really know what they want more than at the most vague level. They want a program that plays chess. They have agreed that this can be a command line program (this wouldn’t ever happen but it simplifies my task). We’ve agreed to a billing system based on work performed, not an all encompassing rate to finish the project…or we’ve been hired on a salary to implement this thing for our boss. We’ve agreed to an agile process based not only on the vague requirements but also because it is comfortable for us. So we’ll work a bit on tasks the customer says they want for a short time, show the customer the result, and get input about changes and future direction. Releases happen when the customer decides the project has implemented enough features to release to customers.

So in other words we have a go.

The first step is to set up a development environment. The customer has dictated we use Open Source frameworks and services. They’ve also stated that the product will be released under an Open Source license. This makes writing about it easy 😉 We’re also expected to limit expenses, requiring we use free services. After some minor debate we’ve decided to use github to host the project.

Every agile project should have Continuous Integration. There is indeed a free CI service that ties directly in with github: Travis CI.

We’ll use C++ because well, that’s what we do. We also know that chess engines need to be pretty close to the machine in order to perform well so C++ is a good choice for this need.

We’ll use Boost and CMake because we have experience with them and know they work relatively well. If something about that changes we’ll revisit this choice.

Setting up github is fairly straightforward so I won't go into much detail. Create an account, add a new project, follow the directions. The hardest part of the whole thing is probably setting up git on your computer, which isn't hard at all (`apt-get install git` on Ubuntu).

Setting up Travis to build your project is also incredibly easy. Just register/sign-in with your github account. If your repository already exists it will appear in your profile. Click the wrench icon to configure it and turn on, “Build only if .travis.yml is present.” Then go back to your profile page and turn on the repository. You’re now continuously integrating the project.

Making a C++ project build though is a bit more work (which is why we turned on the yml requirement). C++ doesn't have really good dependency support. In fact it has none at all, so we have to do everything by hand. The Travis help tells us how, but it still requires some messing about. Namely, Travis uses an Ubuntu release that is older than what you'd currently download; its latest boost version is 1.48. See the history of the .travis.yml file for some of what I went through to figure this out. We believe we can live with this.

The final result of the travis configuration thus far looks like this:

notifications:
  email: false

language: cpp

compiler:
- clang

install:
- sudo apt-get install --force-yes --yes libboost1.48-all-dev

script:
- mkdir debug &&
  cd debug &&
  cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_COMPILER=clang++ .. &&
  make check

This sets up our CI server to install our only dependency, boost, and build the project from scratch. Travis doesn’t do incremental builds so this happens every time.

Normally we might choose something more powerful and customizable like Jenkins. In this case though the requirement to use free, online services sort of forces our hand. Travis will do the build and run the unit tests. It will then give us a green or red light based on whether this succeeded or not. Later it'll push releases into github for us. These are the bare minimum requirements of a CI server. It would be nice to have complete reports, graphs, and such for unit tests, static analysis, code coverage, leak detection, etc. As we learn more we may add some of these steps, but it'll be a pass/fail type of thing without graphs.

The first section in this configuration turns off the email notifications. In a corporate environment you’d probably have these. As my github is connected to my personal account I don’t want this, and besides there are better ways. Instead of email notification I installed the tray application that shows the status of my CI projects on my desktop.

I also added the status icon to the main github project page via the README content. Doing so was not all that hard:

 [![Build Status](](

This creates a Markdown link on top of an image. I got the information from directions supplied in the Travis help on status images. Unfortunately it was not entirely correct so I had to additionally look up how to make a Markdown link out of an image.

With this and some cmake file setup we have the beginning of a project. It has a revision repository and it builds and runs unit tests on every check-in. In the future we will add additional levels of testing as well as release pushing (continuous delivery) but for now we have a solid starting setup.

With this done we have a minimally building project (it builds a single unit test that tests nothing) and a little status light:

Crazy Chess

I have been struggling to think up content for this blog. It is difficult to come up with examples that show reasonable techniques in completely context-less settings. One has to invent some contrived reason to do something, and that is actually a bad foundation for teaching good engineering principles. I have ideas about what techniques should be talked about, or at least can be, but coming up with examples that are simple enough for a short blog entry yet can't instantly be refuted with, "But why would you do that?? It overcomplicates the issue," is really tough.

Some recent interactions online have given me an idea for a good project I can use to examine my approach to software engineering: chess. There are a lot of reasons this domain provides a good test-bed to learn from. First of all, the most basic engines are pretty thin. We need to only learn a couple things to get started. On the other hand, there’s a basically endless road to travel enhancement wise–we might even branch out and play with things like CUDA and distributed computing.

Somewhere between 12 and 15 years ago I began the process of creating my own chess AI. I had decided at that point to write a Chinese Chess program, since regular, western chess is basically a solved problem; why not take a slight deviation, in other words. I began it as a personal project through college and eventually turned it into my team's senior project for a BS in computer science. I spent a lot of time reading the research of others (a lot of people call this 'research', but real research isn't just using google to learn new stuff; it's laying out completely new track) and then leveraging it to create my own AI. I succeeded, even if the AI was always rather weak.

I spent some time after college working on it. I thought I was going to make a complete set of tools for Chinese Chess players to use to play and analyze games. At some point I sort of lost interest. Others tried to leverage my work at this point and I feel a bit sorry for them :p I left it in a very bad state and it's very out of date in its dependencies. That work can be found here. I have a decade of professional software development experience since then.

This time I am going to write a western chess AI, at least to start. We are going to pretend that we are responding to requirements from a customer or a boss. This is still going to represent a rather utopian view of the career you may be entering into, since working on what we call “green field” code is pretty uncommon. After some time though the project should begin to gain some legacy weight that will have to be resolved as our “customer” comes up with new requirements and feature requests.

One issue we're not likely to run into though is a large blob of legacy code that has no unit tests; I'm going to start using them right away. Honestly I don't have a firm grasp on how many firms out there use unit tests to begin with. In my experience it's not many; firms that use C++ tend to be working with large legacy code bases that existed before a lot of the important agile methods were really well known. Things like automated unit tests, continuous integration, etc. are often not in active use. Often the company will be in an extended effort to become "agile", and there will be a whole massive mess surrounding that. I'm not even going to get into that bit with this project; maybe I'll address some of those issues in off-category posts (this isn't the only thing I'll blog about here).

We’ll see where it takes us. One big issue in this career is the massive weight of personal ignorance. We like to think we can anticipate the future but eventually we learn we just can’t. Software engineering then becomes not the well thought out process of planning something out and implementing it, but in constantly struggling to respond to completely unanticipated requests from customers and business people with their new, brilliant ideas. Nobody knows how to do this, myself included, so I’m just going to try working through the process and writing it down as I go.

With that in mind, our customer has asked us to create a program that will play chess with a human opponent. They think this is great and new and they’re awesome for coming up with this great plan. This is all the information they’ve given us. Our first task is to do a bit of googling and to set up a new project. The latter is going to be what the next entry in this category will be about.

Introduction to unit testing

One of the most important parts of writing maintainable code in any language, and that includes C++, is writing unit tests. There are some people who might disagree, and not all of those disagreements are completely ill-conceived, but leveraging unit tests is a huge part of my process and so they’ll be a large part of entries in this blog. This post provides a very brief introduction to get us started.

Even developers who don't agree with unit testing generally agree that developers need to do some kind of testing of their own code before passing it on. In my experience it's best if that is automated, and that's easiest if done in the same language that the production code is in. So even those who disagree with me that unit testing is an important part of writing sane code can leverage similar methods to write whatever kind of tests they're writing.

I use Boost.Test for all my unit testing, and more besides. There's no great technical reason why I chose this framework over, say, googletest or one of the many new ones that keep coming out. I use it because it's easy to use, I'm already using boost, and I find that it's more than sufficient for the task. I've heard of there being issues around maintenance and the library author being a PITA about fixing things, but I've never run into any of that. There are many times when, "Who cares, it's working," is more than adequate reason to use something; you know, until it stops working. I do believe you need more than just C asserts, so I'm not doing that; many people find asserts work just fine for them, and that's a great answer too.

Integrating unit tests into the build

Unit tests should be easy to run as a target in the usual build setup that developers always use. For C++ this is generally make. Not many people are insane enough these days to use make directly but instead use something else that generates the makefile for them. I have so far found cmake to be good enough; better than autotools at least. It’s got its problems but it’s good enough and until I find something else I like, this blog will use it.

First thing you need to do specific to testing (see code on github for the rest) is to tell cmake that you will be adding tests. This must be done before you ever call add_test:

enable_testing()
Next you need to tell cmake how to find Boost.Test. I do this in the CMakeLists.txt located in the test code directory:

find_package(Boost COMPONENTS unit_test_framework)

I also like to add a check target to run the tests much like I would do with autotools in the past. This target builds all, builds the tests, and then runs them by activating cmake’s ctest system. I also add a tests target to just build the tests without running them:

add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND})
add_custom_target(tests)

Finally, you need to ensure that the boost headers can be found, and I also like to have a function for adding unit test programs rather than needing to repeatedly do the same thing over and over:

include_directories(${Boost_INCLUDE_DIRS})

function(add_boost_test name)
  add_executable(${name} EXCLUDE_FROM_ALL ${ARGN})

  target_link_libraries(${name} ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})

  add_test(${name} ./${name})

  add_dependencies(check ${name})
  add_dependencies(tests ${name})
endfunction()

Both cmake and Boost.Test support adding all of your unit tests into a single, monolithic executable and then running tests as a whole or individually. I have found it easier to use the old paradigm of keeping each suite of tests separate, or at least separate them by related modules.

Our first unit test

This example test won’t actually test any code. It instead just shows some of the features of Boost.Test that you’ll use in writing real tests, or that you’ll see here in this blog or attached code. I’ll just dump the whole code on you now and explain after:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE helloworld
#include <boost/test/unit_test.hpp>

void fun() { throw std::runtime_error(""); }

BOOST_AUTO_TEST_CASE(hello_world)
{
    // BOOST_FAIL("hello");

    BOOST_CHECK(0 == 0);
    BOOST_CHECK_EQUAL(1, 1);
    BOOST_CHECK_NE(1, 2);
    BOOST_CHECK_THROW(fun(), std::exception);

    std::vector<int> ints{3,2,1};

    BOOST_REQUIRE_EQUAL(ints.size(), 3U);
    BOOST_CHECK_EQUAL(ints[1], 2);
}
Lines 1-3 are the minimum you need for a unit test program. If you want to make a monolithic executable spanning multiple cpp files then this is all you’ll put in main.cpp. Otherwise you put it as the base for each test cpp file. Line #1 is some sort of magic that you need if you’re using dynamic libraries; leave it out if you are not. Line #2 defines the name of the top level test suite and also tells the preprocessor to generate the main function–the rules to do so are in the unit test headers. Line #3 includes the unit test framework so you can then use the various macros and other constructs to create unit tests and test suites.

Line #5 defines a function that throws an exception. It is there for illustration only.

Line #7 starts a new unit test function. The BOOST_AUTO_TEST_CASE macro creates and names a new test case and automatically registers it with the test framework. It does a bunch of stuff you can do by hand but I’ve found little reason not to use the macro. It defines a function so is followed by a brace enclosed block of code.

Line #9 is a good first version of any unit test. This ensures that your test is actually being compiled and run. The automated registration of tests that Boost.Test creates for us makes that less likely, but you might have forgotten to add the whole file to your build system. If you then do a bunch of coding assuming your tests are passing because you’re not seeing any failures…well that’s an ungood situation.

Lines 11-14 are a few examples of the common test macros you will use to assert behavior:

  • BOOST_CHECK: passes if the boolean expression yields true. If it fails you get the text of the test in the failure message.
  • BOOST_CHECK_EQUAL: passes if the two operands compare equal using ‘==’. If it fails you get text including the values of both operands. This is very useful but requires that both operands are streamable (can apply << to them).
  • BOOST_CHECK_NE: passes if the two operands compare non-equal. It behaves similarly to the equal version.
  • BOOST_CHECK_THROW: passes if the expression given as the first operand results in an exception being thrown that is the same type or a subtype of the second operand.

Lines 16-19 illustrate how you protect your tests from crashing. All ‘check’ macros in Boost.Test allow the test to proceed, reporting the error but not aborting. If you naively checked the size of the vector ints with a check macro then the next check, where you retrieve a value within the vector, would possibly cause a crash. You can replace ‘CHECK’ with ‘REQUIRE’ in any of the boost macros to get a version that will abort any further tests in the current case if the check fails.

Test run

To run these tests you first initialize your build environment. I usually just make build directories in the source root, but you can actually make them anywhere; you just have to give the right path argument to cmake. So to make a debug build I do:

$ mkdir debug ; cd debug
$ cmake -DCMAKE_BUILD_TYPE=Debug ..
$ make check

The second and third generate a bunch of text including the cmake analysis and the compilation. After all that is done though the tests are run via the check target and you’ll get something like this:

[100%] Built target hello_world
Test project /home/eddie/github/sanecpp/20140810-unittest/debug/test
    Start 1: hello_world
1/1 Test #1: hello_world ......................   Passed    0.00 sec

100% tests passed, 0 tests failed out of 1

Total Test time (real) =   0.01 sec
[100%] Built target check


That’s it for now. This has been a very, VERY rudimentary introduction, but you can actually use these tools to do quite a bit without needing to know a lot more. I encourage you to pull the code and play with it. You can use the CMakeLists.txt files as a basis for your own work if you’d like.