Saturday, December 20, 2014

Review of IPython Notebook Essentials

Disclaimer: following a post on Google+ by a Packt representative, I volunteered to do a review of IPython Notebook Essentials and got a free copy as an ebook (I used the pdf format).

The verdict: Not recommended in its current state.

IPython Notebook Essentials starts very well.  It suggests using the Anaconda 3.4 distribution (which I have on my computer - more on that later) or working online using Wakari.  Both methods worked fine for simple examples.

In chapter 1, we are given a quick tour of what the IPython notebook can do, using the modeling of a cup of coffee as an example. The idea, on its own, works well.  [As a scientist, I found the use of degrees Fahrenheit a bit odd; it reminded me of books written 50 or more years ago.]  However, while the author used variables that follow the normal Python convention, separating words with underscores as in temp_cream, the code formatting is at times atrocious, as variable names are sometimes split after an underscore character, as in the following two lines of code:

initial_temp_mix_at_shop = temp_mixture(temp_coffee, vol_coffee, temp_
cream, vol_cream)

which are, ironically enough, in bold in the text because the author wants us to focus our attention on their meaning (with no mention of the formatting error).

While I usually prefer holding a paper book to reading an ebook on my screen, I must admit that being able to copy-paste code samples beats having to retype everything.  So, it was possible to reproduce the examples quickly by copy-pasting and fixing the typos.

However, a much better way exists, one often used for Packt books: making code samples available for download.  IPython notebooks have been designed for easy sharing.  Here, the author has chosen not to take advantage of this, which, in my opinion, is a major flaw in this case.


Chapter 2 covers the notebook interface in more detail. This is undoubtedly the most important chapter of the book for anyone who wishes to learn about the IPython notebook.  It covers a lot of ground.

However, I encountered many problems, some more serious than others. The first, which is perhaps more an annoyance than a real problem, is that one example intended to show code completion using the tab key is given as follows:

print am

Python programmers will immediately recognize this as a Python 2 example.  However, as far as I could tell (using ctrl-f to search for "python 2"), there isn't a single mention anywhere that Python 2 is used.
I happened to have the Anaconda 3.4 distribution installed (as recommended), but with Python 3.4 rather than Python 2.  Python 3 is now 6 years old and there is no excuse to focus on an old, soon-to-be-obsolete version of Python without at least mentioning why that choice was made. Minor syntax differences, like adding parentheses for print statements, are easily fixed; some more subtle ones are not.   Furthermore, since I was still using the online Wakari site to work through the examples, this was not a major problem at that point.
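
To give a quick illustration of my own (not from the book): the print change is caught immediately as a syntax error, while the change in integer division is silent and can quietly alter numerical results:

print "hello"   # valid in Python 2 only; Python 3 requires print("hello")
print 3 / 4     # Python 2 prints 0 (integer division); Python 3 would print 0.75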

While still in chapter 2, we are invited to replace "%pylab inline" by "%pylab" and run an example again to see the plot appear in a separate window (I first assumed a separate browser window) instead of being embedded in the document.    This does not work on the online Wakari site: the window is actually a Qt window, not something that works in a browser.  So, to try this example, I had to start my local version and recopy the code, which worked correctly.
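
For reference, the difference amounts to a single magic line at the top of the notebook (a minimal example of my own):

%pylab inline   # plots are embedded in the notebook document
plot([0, 1, 2], [0, 1, 4])

versus

%pylab          # plots open in a separate (Qt) window when run locally
plot([0, 1, 2], [0, 1, 4])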

Shortly thereafter, we are introduced to more "magic" methods and invited to try running a Cython example, loading from a local file. This did not work.  The recommended "%%cython" magic is no longer available in the latest IPython version included with the Python 3.4 Anaconda distribution.  After a while, I found the proper way to run Cython code with the latest version, BUT the example provided raised a (numpy-related?) syntax error.  I copy-pasted the code from my browser to the Wakari online version and it worked correctly, confirming that I had not made an error in reproducing the code given by the author. However, I was not able to figure out the source of the error in the local version.
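
For the record, what eventually worked locally was along the following lines, with the magic now provided by Cython itself rather than by IPython (the cell contents are my own minimal example):

%load_ext Cython

%%cython
def cython_sum(int n):
    cdef int i, total = 0
    for i in range(n):
        total += i
    return total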

After finishing Chapter 2, I stopped trying to run every single example and simply skimmed the book.

Chapter 3 focuses on producing plots with matplotlib, including animations. While not specific to the IPython notebook, this topic felt like an appropriate one to cover.

In Chapter 4, we learn about the pandas library, which has very little to do with the IPython notebook.  The situation is similar with Chapter 5, which covers SciPy, Numba and NumbaPro, again three libraries that have very little to do with the notebook as such.  The choice of NumbaPro struck me as a bit odd since it is a commercial product.  True enough, it can be tried for free; however, it is not something that I would consider an "essential" for the IPython notebook.

I know little more about the IPython notebook than what I have learned from this book. However, I do know that it is possible to write extensions for the IPython notebook, which is something I would have expected to be included in any book titled "IPython Notebook Essentials", well before discussing specialized libraries such as pandas, SciPy, Numba and NumbaPro.

There might very well be other notebook-specific topics that should be included, but I have no way to know from this book.

The book includes three appendices: a brief IPython notebook reference card, a brief review of Python, and an appendix on Numpy arrays.   Both the reference card and the Numpy arrays appendices seem like worthwhile additions.  However, the brief review of Python seems a bit out of place.  By including code like:

def make_lorenz(sigma, r, b):
    def func(statevec, t):
        x, y, z = statevec
        return [sigma * (y - x),
                r * x - y - x * z,
                x * y - b * z]
    return func


in Chapter 2, the author seems to assume, rightly so in my opinion, that the reader will be familiar with Python.  However, the appendix only covers the standard Python constructs that one may find in a beginner's book intended for programmers who are familiar with other languages.  As such, the Python review appendix seems like mere filler, increasing the page count while adding nothing of value to the book. Thankfully, it is relegated to an appendix instead of being inserted in an early chapter.
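
As an aside, a function built this way is exactly the kind of thing one would hand to SciPy's odeint; a minimal usage sketch (my own, not from the book):

import numpy as np
from scipy.integrate import odeint

lorenz = make_lorenz(10.0, 28.0, 8.0/3.0)  # the classic Lorenz parameters
t = np.linspace(0, 20, 2000)
trajectory = odeint(lorenz, [1.0, 1.0, 1.0], t)  # array of shape (2000, 3)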

In summary, about half of the book contains information of value for someone who wants to learn about the IPython notebook itself; the choice of Python 2 over Python 3 is odd, and almost inexcusable given that it is not even mentioned anywhere; and the lack of downloadable code samples (mostly IPython notebooks in this case) greatly reduces the value of this book and is something that could be remedied by the author.  In fact, given the typos mentioned above (where variable names are split over two lines), downloadable copies of the notebooks should be made available.

As I write this review, Packt is having a sale during which ebooks are available for $5.  At that price, I would say that IPython Notebook Essentials is worth it if one wants to learn about the IPython Notebook; however, based on a quick scan of other Packt books covering the IPython notebook, I believe that better choices exist from the same publisher.

Tuesday, December 09, 2014

Reloadable user-defined library with Brython/Python


[Note: this post is a more detailed explanation of something that is briefly described in a previous post on supporting multiple human languages for Reeborg's World.]

In Reeborg's World, I want programmers (read: Python beginners who have never programmed before) to be able to learn about using libraries in Python.  As usual, I was looking for the simplest way to introduce the idea of libraries.  Since almost all of the programs that programmers write make use of their own definition for turning right:

def turn_right():
    turn_left()
    turn_left()
    turn_left()

it makes sense to put this "reusable" function in a library. So, instead of a single code editor, the code editor has two tabs: one for the basic program and one for the library. Initially, I only had a Javascript version of Reeborg's World, but I still wanted to introduce the concept of using libraries. So, when I first introduced support for using a library with Javascript, I cheated.  And I continued to cheat when I added Python support.  If the programmer wanted to use the functions (or anything else) defined in their library, I required them to insert the following statement in their program:

from my_lib import *

Before running the program using Brython's exec(), I would scan the code in the editor's program tab. If I found this general import statement, I would replace that line with the entire content of the editor's library tab and execute this modified source instead.   [Note: I have since removed the idea of using a library in the Javascript version since there is no natural syntax for importing a library using Javascript.]
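
In other words, the "cheat" amounted to a simple source transformation along these lines (a simplified sketch, not the actual code of Reeborg's World):

def insert_library(program, library_content):
    '''Replaces the general import statement by the content of the library tab.'''
    lines = []
    for line in program.split("\n"):
        if line.strip() == "from my_lib import *":
            lines.append(library_content)
        else:
            lines.append(line)
    return "\n".join(lines)

# then: exec(insert_library(program_tab_content, library_tab_content))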

However, this approach had two problems:
  1. It did not support the other ways to use an import statement in Python (see below for an example).
  2. It encouraged the bad practice of importing everything, polluting the program's namespace.  I had already shown the idiom "from some_lib import *" when explaining how to have Reeborg understand instructions in other human languages: from reeborg_fr import * for the French version, or from reeborg_es import * for the Spanish version; other translations welcome! ;-)
I wanted to encourage good programming practice, such as using

from my_lib import turn_right

One problem I had is that Brython's import mechanism is based on retrieving files from the server using ajax calls.  This by itself might not be a problem ... except that I do not want to store anything created by users on the server: Reeborg's World is meant to run entirely inside the browser, with no login required. (The content of the editor and library tabs is saved in the browser's local storage so that it is available when the user comes back to the site using the same browser with local storage enabled.)

Another problem I had is that, once a module is imported, future import statements for that module make use of the cached version.  If the programmer modifies the code in their library (tab), the corresponding module needs to be reloaded.  I need this to be done automatically, without the programmer having to do anything special.
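
In standard Python, one would force a reload explicitly, as shown below for Python 3; I did not want to require anything like this from beginners, nor to depend on how well Brython supports it:

import importlib
import my_lib

importlib.reload(my_lib)  # re-executes the module, bypassing the cached version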

One solution to these problems might have been to create a special importer class that could import code directly from the library tab, and to add it to sys.meta_path.  Then, after a program has been run, remove all traces of the imported module (the user's library) so that the next time the program is executed, the import takes place all over again.
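
A rough sketch of that rejected approach, using the old-style finder/loader protocol (my own illustration; only library.getValue(), described further below, comes from the actual code):

import sys
import types

class TabImporter:
    '''Imports the module named my_lib directly from the library tab.'''
    def find_module(self, name, path=None):
        return self if name == "my_lib" else None

    def load_module(self, name):
        mod = types.ModuleType(name)
        exec(library.getValue(), mod.__dict__)  # content of the library tab
        sys.modules[name] = mod
        return mod

sys.meta_path.insert(0, TabImporter())
# ... run the user's program ...
sys.modules.pop("my_lib", None)  # remove all traces for the next run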

I decided instead on a different approach.  I created a simple module called my_lib.py (and another one, biblio.py, for the French version) and put it in Brython's site packages directory.  The content of that module is simply:

from reeborg_en import *

which ensures that all normal robot commands can be used in that module.  When Reeborg's World is first loaded, I import that module so that it is cached.  Then, whenever the programmer's code needs to be executed, instead of simply having exec(src) called, the following is called instead:

def generic_translate_python(src, lib, lang_import, highlight):
    ''' Translate Python code into Javascript and execute

        src: source code in editor
        lib: language specific lib (e.g. my_lib in English, biblio in French)
             already imported in html file
        lang_import: something like "from reeborg_en import *"
        highlight: if True, pre-process src so that the line being
             executed can be shown (see note below)
    '''
    # save the initial state of lib so that it can be restored afterwards
    initial_lib_dict = {}
    for key in lib.__dict__:
        initial_lib_dict[key] = lib.__dict__[key]

    # execute the current content of the library tab in lib's namespace
    exec(library.getValue(), lib.__dict__)
    exec(lang_import)
    if highlight:
        src = insert_highlight_info(src)
    exec(src)

    # remove the definitions added by the library tab
    # and restore those that were redefined
    new_keys = []
    for key in lib.__dict__:
        if key not in initial_lib_dict:
            new_keys.append(key)
        else:
            lib.__dict__[key] = initial_lib_dict[key]

    for key in new_keys:
        del lib.__dict__[key]


In the above, highlight refers to some pre-processing of the code which allows showing which line of the code is being executed, as illustrated in two previous blog posts.  library.getValue() is a method that returns the content of the library tab.
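
For the English version, this amounts to a call along the following lines, where editor.getValue() (by analogy with library.getValue()) is assumed to return the content of the program tab:

generic_translate_python(editor.getValue(), my_lib,
                         "from reeborg_en import *", True)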

Friday, December 05, 2014

Still baffled by the Python 2/3 discussions

I'm ... baffled...

For the past few years, I've been focused mostly on doing my own things, and not really following what was happening in the "core" Python community.   Reading this post today by Brett Cannon about the "consensus" that apparently emerged at the language summit at PyCon 2014 about writing code compatible with both Python 2 and 3, I was reminded of the release of version 1.0 of Crunchy.

Crunchy 1.0 is compatible with Python 2.4, 2.5, 2.6 ... and 3.1. It is also compatible with Jython 2.5 modulo some bugs when trying to work with examples containing unicode strings.

That was in 2009.   At 2.1 MB (zipped), Crunchy was not exactly a small script...

Why has it taken so long for this to become the norm?....

Due to a lack of interest in Crunchy, I have essentially not developed it much further past that point, and it is almost certainly not compatible with newer versions of Python... 

Sunday, November 30, 2014

Step by step execution ... and reverse steps

A few years ago, Greg Wilson mentioned that a useful feature when teaching students would be the ability to record programs and play them back one step at a time, in either the forward or backward direction.   Actually, I am paraphrasing what Greg said or wrote, as I don't exactly remember the context: it could have been in a web post, a tweet, or the one time we met, when I gave him a brief demo of what Crunchy was capable of doing at the time.   While I cannot say for sure what Greg said/wrote and when he did it, the idea stuck in my head as something that I should implement at some point in the future.

This idea is something that the Online Python Tutor, by Philip Guo, makes possible.

It is now possible to do this with Reeborg's World as well. :-)


Thursday, November 27, 2014

Practical Python and OpenCV: conclusion of the review

I own a lot of programming books and ebooks; with the exception of the Python Cookbook (1st and 2nd editions) and Code Complete, I don't think that I've come close to reading an entire book from cover to cover.  I write programs for fun, not for a living, and I almost never write reviews on my blog.  So why did I write one this time?

A while ago, I entered my email address to receive 10 emails meant as a crash course on OpenCV, using Python (of course), provided by Adrian Rosebrock.  The content of the emails and the various posts they linked to intrigued me. So, I decided to fork out some money and get the premium bundle, which included an introductory book (reviewed in part 1) and a Case Studies book (partly reviewed in part 3), both of which come with code samples and (as part of that package) free updates to future versions.  Also included in the bundle were a Ubuntu VirtualBox image (reviewed in part 2) and a commitment by the author to respond quickly to emails - a commitment that I have severely tested, with no complaints.

As I mentioned, I program for fun, and I had fun going through the material covered in Practical Python and OpenCV.  I've also read through most of both books and tried a majority of the examples - something that is really rare for me.  On that basis alone, I thought it deserved a review.

Am I 100% satisfied with the premium bundle I got, with no idea about how it could be improved upon?  If you read the 3 previous parts, you know that the answer is no.  I have some slightly idiosyncratic tastes and tend to be blunt (I usually prefer to say "honest") in my assessments.

If I were 30 years younger, I might seriously consider getting into computer programming as a career and learn as much as I could about Machine Learning, Computer Vision and other such topics.  As a starting point, I would recommend to my younger self to go through the material covered in Practical Python and OpenCV, read the many interesting posts on Adrian Rosebrock's blog, as well as the Python tutorials on the OpenCV site itself.  I would probably recommend to my younger self to get just the Case Studies bundle (not including the Ubuntu VirtualBox): my younger self would have been too stubborn/self-reliant to feel like asking questions to the author and would have liked to install things on his computer in his own way.

My old self still feels the same way sometimes ...

Tuesday, November 25, 2014

Practical Python and OpenCV: a review (part 3)

In part 1, I did a brief review of the "Practical Python and OpenCV" ebook which I will refer to as Book 1.  As part of the bundle I purchased, there was another ebook entitled "Case Studies" (hereafter referred to as Book 2) covering such topics as Face Detection, Web Cam Detection, Object Tracking in Videos, Eye Tracking, Handwriting Recognition, Plant Classification and Building an Amazon.com Cover Search.

Each topic in Book 2 is presented as part of a story with different characters (Jeremy, a student; Laura, a bank software programmer; etc.).   I have often read that framing topics within a story is a good way to keep the interest of a reader.  However, I personally tend to prefer getting right down to the topic at hand (show me the executive summary! ... I gather from discussions that this is a trait shared by others who had a job like my previous one) and so these stories do not really appeal to me, but I do recognize their creativity and the work that went into creating them.

As a test of what I have learned to do while reading these books, I thought I should combine various topics of both books into a single experiment which constitutes the bulk of this post.  All the image processing (other than the screen captures) was done using OpenCV.

I decided to start with a photo, taken a few years ago with my daughter while visiting Montreal.
This photo was too large to fit entirely on my computer screen, so I used a slightly modified version of the resize.py script included in Book 1 to shrink it to a size I could view in full, as shown below on the left.
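
The heart of that step is a proportional cv2.resize; a minimal sketch in the same spirit (my own code, not the book's resize.py, with hypothetical file paths):

import cv2

image = cv2.imread("images/montreal.jpg")
height = 700                             # target height, in pixels
ratio = height / float(image.shape[0])  # shape is (rows, columns, channels)
resized = cv2.resize(image, (int(image.shape[1] * ratio), height))
cv2.imwrite("images/resized.jpg", resized)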

Then, I combined samples from a few scripts from Book 1 (load_display_save.py, drawing.py, cropy.py) together with ideas from this OpenCV tutorial covering mouse control and callbacks. The idea was to take the (resized) full image, show a rectangular selection (drawn as a blue rectangle) and display the corresponding cropped image in a separate window, as shown below.


As the mouse moves, the selection changes.  The code to do so is listed below. Note that, when the selection is saved, the rectangular outline on the original image is changed to a green colour (not shown here) so as to give feedback that the operation was completed.


'''Displays an image and a cropped selection {WIDTH} x {HEIGHT} in a second
   window. Use ESC to quit the program; press "s" to save the image.

   Note: This program is only meant to be used from the command line and
   not as an imported module.
'''

import argparse
import cv2
import copy

WIDTH = 640
HEIGHT = 480
SELECT_COLOUR = (255, 0, 0)  # blue
SAVE_COLOUR = (0, 255, 0)    # green
_drawing = False
_x = _y = 0
_original = _cropped = None
default_output = "cropped.jpg"


def init():
    '''Initializes display windows, images and paths
       init() is meant to be used only with script invoked with
       command line arguments'''
    ap = argparse.ArgumentParser(
                    description=__doc__.format(WIDTH=WIDTH, HEIGHT=HEIGHT))
    ap.add_argument("-i", "--image", required=True,
                    help="Path to the original image")
    ap.add_argument("-o", "--output", default=default_ouput,
                    help="Path to saved image (default: %s)"%default_ouput)
    args = vars(ap.parse_args())

    original = cv2.imread(args["image"])
    cv2.namedWindow('Original image')
    cv2.imshow('Original image', original)

    cropped = original[0:HEIGHT, 0:WIDTH]  # [y, x] instead of the usual [x, y]
    cv2.namedWindow('Cropped')
    cv2.imshow("Cropped", cropped)
    return args, original, cropped


def update(x, y, colour=SELECT_COLOUR):
    '''Displays original image with coloured rectangle indicating cropping area
       and updates the displayed cropped image'''
    global _x, _y, _original, _cropped
    _x, _y = x, y
    _cropped = _original[y:y+HEIGHT, x:x+WIDTH]

    cv2.imshow("Cropped", _cropped)
    img = copy.copy(_original)
    cv2.rectangle(img, (x, y), (x+WIDTH, y+HEIGHT), colour, 3)
    cv2.imshow('Original image', img)


def show_cropped(event, x, y, flags, param):
    '''Mouse callback function - updates position of mouse and determines
       if image display should be updated.'''
    global _drawing

    if event == cv2.EVENT_LBUTTONDOWN:
        _drawing = True
    elif event == cv2.EVENT_LBUTTONUP:
        _drawing = False

    if _drawing:
        update(x, y)


def main():
    '''Entry point'''
    global _original, _cropped
    args, _original, _cropped = init()

    cv2.setMouseCallback('Original image', show_cropped)

    while True:
        key = cv2.waitKey(1) & 0xFF  # keep only the lowest 8 bits (needed on some 64-bit systems)
        if key == 27:  # escape
            break
        elif key == ord("s"):
            cv2.imwrite(args["output"], _cropped)
            update(_x, _y, colour=SAVE_COLOUR)

    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Using this script, I was able to select and save a cropped version of the original image.

With the cropped image in hand, I was ready to do some further experimentation, including face and eye detection as well as blurring faces.  I decided to combine all these features into the single program listed below.  While the code provided with Book 2 worked perfectly fine for feature detection [using the appropriate version of OpenCV...] and gave me the original idea, I decided instead to adapt the code from the OpenCV face detection tutorial, as I found it simpler to use as a starting point for my purpose.  I also used what I had learned from Book 1 about blurring.

The following code was put together quickly and uses hard-coded paths. Since incorrect paths given to classifiers raise no errors or exceptions, I included some assert statements to ensure that I was using the correct files, for reasons that you can probably guess...

import cv2
import os
import copy

face_classifiers = 'cascades/haarcascade_frontalface_default.xml'
eye_classifiers = 'cascades/haarcascade_eye.xml'

cwd = os.getcwd() + '/'
assert os.path.isfile(cwd + face_classifiers)
assert os.path.isfile(cwd + eye_classifiers)

face_cascade = cv2.CascadeClassifier(face_classifiers)
eye_cascade = cv2.CascadeClassifier(eye_classifiers)

original = cv2.imread('images/cropped.jpg')
cv2.namedWindow('Image')
cv2.imshow('Image', original)

gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
blue = (255, 0, 0)
green = (0, 255, 0)

faces = face_cascade.detectMultiScale(gray, 1.3, 5)

def blur_faces(img):
    '''Blurs each detected face region and displays the result.'''
    for (x, y, w, h) in faces:
        cropped = img[y:y+h, x:x+w]
        cropped = cv2.blur(cropped, (11, 11))
        img[y:y+h, x:x+w] = cropped
    cv2.imshow('Image', img)


def show_features(img, factor=1.1):
    '''Draws a rectangle around each face and a circle around each eye.'''
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), blue, 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray, scaleFactor=factor)
        for (ex, ey, ew, eh) in eyes:
            # radius: average of the eye box's half-width and half-height
            cv2.circle(roi_color, (ex+ew/2, ey+eh/2), (ew+eh)/4, green, 1)
        cv2.imshow('Image', img)

while True:
    key = cv2.waitKey(1) & 0xFF  # keep only the lowest 8 bits (needed on some 64-bit systems)
    if key == 27 or key == ord("q"):
        break
    elif key == ord("o"):
        cv2.imshow('Image', original)
    elif key == ord("f"):
        show_features(copy.copy(original))
    elif key == ord("b"):
        blur_faces(copy.copy(original))
    elif key == ord("5"):
        show_features(copy.copy(original), factor=1.5)

cv2.destroyAllWindows()

The results are shown below; first the original (reduced, cropped) image:

This is followed by the automated face and eye detection.  Note that the eye detection routine could not detect my eyes; my original thought was that this could be due to my glasses.  I did look for, and found, some other training samples in the OpenCV sources ... but the few additional ones I tried did not succeed in detecting my eyes.



The author mentions in Book 2 that the "scaleFactor" parameter can be adjusted, sometimes resulting in improved detection (or reduced false positives).  However, no matter what value I chose for the scale factor (or for the other possible parameters listed in Book 2), it did not detect my eyes ... but found that my daughter apparently had four eyes:




Finally, using a simple blur method adapted from Book 1, I could also blur the faces as shown below:




One important point to note though: I had initially downloaded and installed the latest version of OpenCV (3.0 Beta) and found that the face detection script included in Book 2 did not work -- nor (but for a different reason) did the one provided in the tutorial found on the OpenCV website.  So, in the end, and after corresponding with Adrian Rosebrock, the author of Books 1 and 2 (who has been very patient in answering all my questions, always doing so with very little delay), I downloaded the previous stable version of OpenCV (2.4.9) and everything worked fine.

As an aside, while I found the experience of using a VirtualBox a bit frustrating, as mentioned in part 2 of this review, I must recognize that all the scripts provided worked within the VirtualBox.
However, the VirtualBox cannot capture the web camera.  Having OpenCV installed directly on my computer, I was able to run the scripts provided by the author together with my webcam ... and found that face tracking using the web cam works very well; the eye tracking was a bit quirky (even without my glasses) until I realised that my eyes are rarely fully open: if I do open them wide, the eye tracking works essentially flawlessly.
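
For the curious, such webcam face tracking boils down to running the same cascade classifier on every captured frame; a stripped-down sketch of my own (not the author's script):

import cv2

face_cascade = cv2.CascadeClassifier('cascades/haarcascade_frontalface_default.xml')
camera = cv2.VideoCapture(0)  # default webcam

while True:
    grabbed, frame = camera.read()
    if not grabbed:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow('Tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press "q" to quit
        break

camera.release()
cv2.destroyAllWindows()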

Stay tuned for part 4, the conclusion.



Sunday, November 23, 2014

Practical Python and OpenCV: a review (part 2)

In part 1, I mentioned that I intended to review the "Case Studies" part of the bundle I got from Practical Python and OpenCV and that I would discuss using the included Ubuntu VirtualBox later.  However, after finishing the blog post on part 1, I started looking at the "Case Studies" and encountered some "new" problems using the VirtualBox, which I will mention near the end of this post.  So, I decided to forego using it altogether and install OpenCV directly.

Note: If you have experience using VirtualBoxes, then it might perhaps be useful to get the premium bundle that includes one; for me, it was not.  Including a Ubuntu VirtualBox already set up with all the dependencies and the code samples from the two books is a very good idea, and one that may work very well for some people.

If you need to use VirtualBoxes on Windows for other reasons, perhaps you will find the following useful.

Setting up the VirtualBox

Running Windows 8.1, I encountered an error about VT-x not being enabled.   My OS is in French, and googling French error messages is ... hopeless.  So, I used my best guess as to what the relevant English pages were.

From http://superuser.com/questions/785672/linux-64bit-on-virtual-box-with-window-7-profession-64-bit  I understood that I needed to access the BIOS to change the settings so that I could enable virtualization mode.

Unfortunately, I no longer saw an option to access the BIOS at boot time.     There are *many* messages about how to re-enable BIOS access at boot time, most of which simply did not work for me.  The simplest method I found was to follow (at least initially) the explanation given at http://www.7tutorials.com/how-boot-uefi-bios-any-windows-81-tablet-or-device.

(However, I found out afterwards that the BIOS not being accessible was possibly/likely simply because I had the fast startup option checked in the power settings.)

Once I got access to the BIOS, I changed my settings to enable virtualization; there were two such settings ... I enabled them both, not knowing which one was relevant.  I do not recall exactly which settings they were (I did this a month ago and did not take notes of that step) ... but, navigating through the options, it was very easy to identify them.

This made it possible to start the VirtualBox, but for the first few tries, I had to use the option to run it as Administrator for it to work.

The first time I tried to start the image (as an administrator), it got stuck at 20%.  I killed the process.  (I might have repeated this twice.)   Eventually, it started just fine and I got to the same stage as shown in the demonstration video included with the bundle.   I started the terminal - the file structure is slightly different from what is shown in the video, but easy enough to figure out.

Using the VirtualBox

I've used the VirtualBox a few times since setting it up.  For some reason, it now runs just fine as a normal user, without needing the run as Administrator option anymore.

My 50+ year old eyes not being as good as they once were, I found it easier to read the ebook on my regular computer while running the programs inside the VirtualBox.  Running the included programs, and making some small modifications, was easy to do and made me appreciate the possibility of using VirtualBoxes as a good way to either learn to use another OS or simply use a "package" already set up, without having to worry about downloading and installing anything else.

As I set out to start the "Case Studies" samples, I thought it would be a good opportunity to do my own examples.  And this is where I ran into another problem - which may very well be due to my lack of experience with VirtualBoxes.

I wanted to use my own images.  However, I did not manage to find a way to set things up so that I could see a folder on my own computer.  There is an option to take control of a USB device ... but, while activating the USB device on the VirtualBox clearly deactivated it under Windows (and deactivating it enabled it again on Windows, indicating that something was done correctly), I simply was not able to figure out how to see any files on the USB key from the Ubuntu VirtualBox.  (Problem between keyboard and chair, perhaps?)

I did find a workaround: starting Firefox on the Ubuntu VirtualBox, I logged into my Google account and got access to my Google Drive.  I used it to transfer one image and ran a quick program to modify it using OpenCV.  To save the resulting image (and/or modified script) onto my Windows computer, I would have had to copy the files back to my Google Drive ...

However, as I thought of the experiments I wanted to do, I decided that this back-and-forth copying (and the lack of my usual environment and editor) was neither a very efficient nor a very pleasant way to do things.

So, I gave up on using the VirtualBox, used Anaconda to install Python 2.7, Numpy, Matplotlib (and many other packages not required for OpenCV), installed OpenCV (3.0 Beta), and ran a quick test using the first program included with Practical Python and OpenCV (loading, viewing and saving an image), which just worked.

Take away

If you have some experience running VirtualBoxes successfully, including being able to easily copy files between the VirtualBox and your regular OS, then you are in a better position than I am to figure out whether getting the premium bundle that includes a VirtualBox might be worth your while.

If you have no experience using and setting up VirtualBoxes, then unless you want to use this opportunity to learn about them, my advice would be not to consider this option.

Now that I have all the required tools (OpenCV, Numpy, Matplotlib, ...) already installed on my computer, I look forward to spending some time exploring the various Case Studies.

---
My evaluation so far:  Getting the "Practical Python and OpenCV" ebook with code and image samples was definitely worth it for me.   Getting the Ubuntu VirtualBox and setting it up was ... a learning experience, but not one that I would recommend as being very useful for people with my own level of expertise or lack thereof.

My evaluation of the "Case Studies" will likely take a while to appear - after all, it took me a month between purchasing the premium bundle and writing the first two blog posts.  (Going through the first book can easily be done in one day.)

I do intend to do some exploration with my own images and I plan to include them with my next review.