Sunday, June 22, 2008

Monkeypatching doctest

The Python doctest module rocks. Lately, I have been using it to write unit tests for Crunchy: for each module, I write a reStructuredText file containing sample tests written as simulated interpreter sessions, which I run with doctest.testfile(). This has worked really well in general ... however, I encountered one small annoyance, which I managed to get rid of in an "elegant" way using monkeypatching.

Doctests allow the use of directives. One "powerful" directive is ELLIPSIS. Quoting from the documentation:
When specified, an ellipsis marker (...) in the expected output can match any substring in the actual output. This includes substrings that span line boundaries, and empty substrings, so it's best to keep usage of this simple. Complicated uses can lead to the same kinds of "oops, it matched too much!" surprises that .* is prone to in regular expressions.
Unfortunately, I encountered a case where the ellipsis marker did not allow enough matching! Consider the following situation: I have a program (Crunchy!) that saves the user's preferences (including the language) in a configuration file each time one of them is changed. It also gives some feedback to the user whenever this happens.

>>> original_value = crunchy.language
>>> set_language('en') # setting this value for some standardized tests
Language has been set to English

At the end of the test, I want to restore the original value.
>>> set_language(original_value) #doctest: +ELLIPSIS
...
Here I want the ellipsis (...) to match the string that is going to be printed out in the original language, as I have no idea what this string will look like. The problem is that a line consisting only of ... is interpreted as a Python continuation prompt, not as output to be matched. One workaround I had been using was to give set_language an extra parameter ("verbose"), True by default, that I could turn off when running tests. While this is simple enough that it surely would never (!) introduce spurious bugs, it does not feel right; one should not modify functions only for the purpose of making them satisfy unit tests.

According to the documentation,
register_optionflag(name)

Create a new option flag with a given name, and return the new flag's integer value. register_optionflag() can be used when subclassing OutputChecker or DocTestRunner to create new options that are supported by your subclasses. register_optionflag should always be called using the following idiom:
  MY_FLAG = register_optionflag('MY_FLAG')

This is great ... except that I want to use doctest.testfile(), which does not allow me to specify a subclass of OutputChecker to use instead of the default. Also, I wanted to reuse as much as possible of the existing doctest module, with as little new code as possible.
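For reference, the subclassing route from the documentation does work if you bypass testfile() and drive the lower-level API yourself: DocTestRunner accepts a checker argument. A sketch of what that would look like (run_file and the flag name are mine, not part of doctest):

```python
import doctest

IGNORE_ERROR = doctest.register_optionflag("IGNORE_ERROR")

class MyOutputChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        # When our flag is set, accept any output at all.
        if optionflags & IGNORE_ERROR:
            return True
        return doctest.OutputChecker.check_output(self, want, got, optionflags)

def run_file(filename):
    # Parse the file into a DocTest and run it with our checker,
    # instead of letting testfile() build a default runner.
    with open(filename) as f:
        text = f.read()
    parser = doctest.DocTestParser()
    test = parser.get_doctest(text, {}, filename, filename, 0)
    runner = doctest.DocTestRunner(checker=MyOutputChecker())
    runner.run(test)
    return runner.failures, runner.tries
```

This avoids monkeypatching, but it also means re-creating the file handling that testfile() already does for you.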

This is where monkeypatching comes in.

After a bit of work, I came up with the following solution:

from doctest import OutputChecker
original_check_output = OutputChecker.check_output
import doctest

IGNORE_ERROR = doctest.register_optionflag("IGNORE_ERROR")

class MyOutputChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        if optionflags & IGNORE_ERROR:
            return True
        return original_check_output(self, want, got, optionflags)

doctest.OutputChecker = MyOutputChecker

failure, nb_tests = doctest.testfile("test_doctest.rst")
print "%d failures in %d tests" % (failure, nb_tests)

And here's the content of test_doctest.rst

Test of the new flag:

>>> print 42
42
>>> print 2 # doctest: +IGNORE_ERROR
SPAM!


This yields a test with no failures. There might be a more elegant way of doing this; if so, I would be very interested in hearing about it.