
Introduction

In the Basic osgART Application tutorial, a convenient high-level osgART scene setup is introduced, providing a foundation upon which many of the other tutorials on this website build. To recap, the following four lines of code configure both live video input and marker tracking, and introduce them into the scene graph, to produce an osgART scene.

osgART::Scene* scene = new osgART::Scene();
scene->addVideoBackground("osgart_video_artoolkit2");
scene->addTracker("osgart_tracker_artoolkit2");
scene->addTrackedTransform("single;data/patt.hiro;80;0;0");

For many osgART applications, this high-level interface is sufficient, and we can ignore what the Scene node encapsulates. However, some more advanced osgART applications need to reveal the low-level configuration process, which provides greater control and more flexibility.

The purpose of this tutorial, therefore, is to demonstrate an alternative, low-level version of the osgART scene setup process.

The Low-Level Scene Setup Process

First we must create an osgViewer and add various event handlers, etc. as always. In addition, we require a root node for our application; this will be the root of the entire scene graph.
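For example, a minimal viewer setup might look like the following sketch (the particular event handlers chosen here are an assumption; any standard osgViewer configuration will do):

osgViewer::Viewer viewer;

// Common convenience handlers: window resizing, on-screen statistics,
// and runtime control of the threading model.
viewer.addEventHandler(new osgViewer::WindowSizeHandler());
viewer.addEventHandler(new osgViewer::StatsHandler());
viewer.addEventHandler(new osgViewer::ThreadingHandler());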

osg::ref_ptr<osg::Group> root = new osg::Group;

Setup the Live Video Stream

Next we need to load and configure a video plugin, and open the video stream. In this tutorial we use the ARToolKit video capture plugin. The first step is to preload the plugin (this finds and loads the dynamic library), which gives us a video ID. We can then use this video ID to get an instance of the video plugin.

Once we have a valid instance of the video plugin, we can open the video. This will not yet start the video stream, but it will retrieve information about the video format, which is essential for the connected tracker.

int video_id = osgART::PluginManager::instance()->load("osgart_video_artoolkit2");

osg::ref_ptr<osgART::Video> video =
    dynamic_cast<osgART::Video*>(osgART::PluginManager::instance()->get(video_id));

if (!video.valid()) {
    // Without video an AR application cannot work. Quit if none was found.
    osg::notify(osg::FATAL) << "Could not initialize video plugin!" << std::endl;
    exit(-1);
}
video->open();

// If the video source exposes an osg::ImageStream, attach a callback to the
// scene graph root so new video frames are handled on each update traversal.
if (osg::ImageStream* imagestream = dynamic_cast<osg::ImageStream*>(video.get())) {
    osgART::addEventCallback(root.get(), new osgART::ImageStreamCallback(imagestream));
}

Setup the Tracker

We now need to load and configure a tracking plugin. This follows a similar process to loading the video plugin. First we preload the tracker plugin, then we get an instance of it. We also add a callback to the scene graph root that ensures the tracker is updated each frame. This in turn means that all markers we eventually load from the tracker will also be updated each frame. Here we use the ARToolKit tracking plugin.

int tracker_id = osgART::PluginManager::instance()->load("osgart_tracker_artoolkit2");

osg::ref_ptr<osgART::Tracker> tracker =
        dynamic_cast<osgART::Tracker*>(osgART::PluginManager::instance()->get(tracker_id));

if (!tracker.valid()) {
    // Without a tracker an AR application cannot work. Quit if none was found.
    osg::notify(osg::FATAL) << "Could not initialize tracker plugin!" << std::endl;
    exit(-1);
}

osgART::TrackerCallback::addOrSet(root.get(), tracker.get());

In order to operate accurately, trackers often require information about the camera being used. This information typically includes the intrinsic parameters of the camera, which, in addition to being required for computer vision algorithms, are also used to construct the correct projection matrix for rendering 3D objects. Therefore, we next provide our tracker with the camera calibration information.
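As background (this is the standard pinhole camera model, not osgART-specific notation), the intrinsic parameters are commonly written as the matrix

\[
K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\]

where \(f_x\) and \(f_y\) are the focal lengths in pixels, \((c_x, c_y)\) is the principal point, and \(s\) is the skew. Together with the lens distortion parameters, this is the kind of information the calibration file below provides.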

Camera Calibration

In osgART, this calibration information is provided through a calibration object associated with the tracker. Here the information is loaded from a file named “camera_para.dat”, which is generated by the ARToolKit calibration application calib_dist.

osg::ref_ptr<osgART::Calibration> calibration = tracker->getOrCreateCalibration();

if (!calibration->load("data/camera_para.dat")) {
    // The calibration file was missing or couldn't be loaded.
    osg::notify(osg::FATAL) << "Non-existing or incompatible calibration file" << std::endl;
    exit(-1);
}

Connect Video Stream and Tracker

Once the calibration information is loaded we need to connect the video and tracker plugins so that the tracker knows where to get live video.

tracker->setImage(video.get());

The OSG Camera

Once the calibration information is loaded and the video stream and tracker are connected, we need to create an OSG camera object from the calibration; the camera’s projection matrix is generated from the calibrated intrinsic parameters.

In the scene graph, the 3D objects we eventually display on markers will be child nodes of this camera, thus we add the camera as a child of the root node.

osg::ref_ptr<osg::Camera> cam = calibration->createCamera();
root->addChild(cam.get());

Add Video Stream to Scene Graph

We already have the live video stream, but need some way to display it in the Viewer. The support function createImageBackground takes our video stream and returns a video background node which we can add to the scene graph, as a child of the camera node.

In augmented reality applications the video stream should always appear behind everything else - this is achieved by placing the video background node in render bin zero.

osg::ref_ptr<osg::Group> videoBackground = createImageBackground(video.get());

videoBackground->getOrCreateStateSet()->setRenderBinDetails(0, "RenderBin");

cam->addChild(videoBackground.get());

createImageBackground

This creates the graphical objects necessary to display the video stream in the viewer. osgART provides the classes VideoLayer and VideoGeode for this.

A VideoLayer is a node that sets up the correct rendering state for displaying the video. The VideoLayer sets up an orthographic projection, disables lighting and disables depth testing.
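The following sketch illustrates the kind of state a VideoLayer establishes (an illustrative assumption, not osgART’s actual implementation):

// Orthographic projection covering the viewport, with an absolute
// modelview so the video ignores the scene camera's position.
osg::Projection* projection = new osg::Projection(
    osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0));

osg::MatrixTransform* modelview = new osg::MatrixTransform;
modelview->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
projection->addChild(modelview);

// Lighting and depth testing are disabled for the video geometry.
osg::StateSet* state = projection->getOrCreateStateSet();
state->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
state->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);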

A VideoGeode sits beneath the VideoLayer and contains the actual textured geometry that is displayed onscreen. Given an osgART::Video, the VideoGeode constructs the geometry (possibly including a mesh to undistort the video), creates and attaches a texture, and sets up the appropriate rendering states.

This method returns a node containing the video graphics, which can be added directly to the scene graph.

osg::Group* createImageBackground(osg::Image* video, bool useTextureRectangle = false) {

    osgART::VideoLayer* layer = new osgART::VideoLayer();

    osgART::VideoGeode* geode = new osgART::VideoGeode(video, NULL, 1, 1, 20, 20,
        useTextureRectangle ? osgART::VideoGeode::USE_TEXTURE_RECTANGLE
                            : osgART::VideoGeode::USE_TEXTURE_2D);

    layer->addChild(geode);

    return layer;
}

Load One or More Markers

Now we load information about any marker we want to recognise and track.

The details of the marker are provided as a string argument, in this case “single;data/patt.hiro;80;0;0”, which loads the “hiro” pattern, printed at a size of 80mm. The last two items in the string specify the centre point of the marker; these values are almost always zero. For more information, please refer to the ARToolKit-specific documentation.

We add the marker to the tracker and set the marker to ‘active’. At some point we may wish to deactivate the marker, which we can do using setActive(false).

osgART::Marker* marker = tracker->addMarker("single;data/patt.hiro;80;0;0");

if (!marker) {
    // Without a marker an AR application cannot work. Quit if none was loaded.
    osg::notify(osg::FATAL) << "Could not add marker!" << std::endl;
    exit(-1);
}

marker->setActive(true);

We now have a video plugin providing live video, and a tracking plugin using that video to compute the position, orientation and visibility of the marker we have loaded.

For a true AR application, we will of course want to “attach” something to the marker. In many AR applications, this would be a 3D object, such as a model created in a 3D-modelling program. However, there are other things that could be associated with the marker, such as an audio or video file.

Before we do that though, we may simply wish to test that the marker is in fact being tracked. To do this we could use this (optional) callback provided by osgART, which prints information about a marker to the console. We add the callback to the root node, so that the information will be updated and printed every update traversal.

osgART::addEventCallback(root.get(), new osgART::MarkerDebugCallback(marker));

Tracking Multiple Markers

For each additional marker, we can simply repeat the above process, adding every marker to the same tracker, as the sketch below shows.
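A minimal sketch for a second marker (the “kanji” pattern is an assumption; any additional ARToolKit pattern file works the same way):

osgART::Marker* markerKanji = tracker->addMarker("single;data/patt.kanji;80;0;0");

if (markerKanji) {
    markerKanji->setActive(true);
}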

Match 3D Object to Marker

All that remains is to add one or more objects to our marker for a complete AR application. In this tutorial, we will use the real-time information about the tracked marker to ‘attach’ a simple 3D object to it and transform the 3D object along with the marker, so it looks as if it sits on the marker. This is the essence of Augmented Reality. The 3D model we use here is osgART’s test cube.

osgART provides some useful callbacks, which apply the current state of a marker to a scene graph node. (One of these, the MarkerDebugCallback, was introduced in the previous section.)

To display an object on a marker we can add the object into the scene graph as a child of an osg::Transform node (either a MatrixTransform or a PositionAttitudeTransform). We can then add osgART callbacks to the transform node to apply real-time information about the current state of the marker. Typically, we want the transform node (and its child nodes) to follow the position of the marker, and the object to be visible only when the marker is. osgART provides both a transformation and a visibility callback; however, there is also a shortcut - attachDefaultEventCallbacks - which adds both of these callbacks to a node.

Finally, we want to ensure that our 3D object is rendered later than the video background. One way to achieve this is to place the object into a higher render bin. Render bins are an Open Scene Graph concept for managing rendering order. For more information consult the Open Scene Graph documentation. For the purposes of this tutorial, it is enough to know that nodes in higher render bins render later than sibling nodes in lower render bins.

osg::ref_ptr<osg::PositionAttitudeTransform> patTransformHiro = new osg::PositionAttitudeTransform();

// The shortcut: attaches both the transform and visibility callbacks.
osgART::attachDefaultEventCallbacks(patTransformHiro.get(), marker);

// Equivalent alternative: add the two callbacks individually.
// osgART::addEventCallback(patTransformHiro.get(), new osgART::MarkerTransformCallback(marker));
// osgART::addEventCallback(patTransformHiro.get(), new osgART::MarkerVisibilityCallback(marker));

// Optionally keep the debug callback from the previous section.
osgART::addEventCallback(patTransformHiro.get(), new osgART::MarkerDebugCallback(marker));

patTransformHiro->addChild(osgART::testCube());
patTransformHiro->getOrCreateStateSet()->setRenderBinDetails(100, "RenderBin");
cam->addChild(patTransformHiro.get());

Finally …

Finally, we hand the completed scene graph to the viewer, start the video stream, run the simulation loop, and close the video stream once the loop has finished.

viewer.setSceneData(root.get());

video->start();
int r = viewer.run();
video->close();
return r;

As this application is equivalent to the Basic osgART Application tutorial, the resulting scene is the same - simply a cube on a marker.

[Screenshot: lowLevelBasicSShot.png - the resulting scene, a cube on the marker]