Adding a menu & dialog in Second Life

I thought that I would quickly add an option to the menu bar in my Second Life client to pop up a dialog (a window) to display the data streaming in over the serial port, plus maybe some options to enable/disable controls & whatnot. Unfortunately the page on the wiki for how to add dialogs hasn’t been updated since 2008 & the code/instructions no longer work, so I had to explore the viewer source to see how dialogs are currently implemented. With that, & some help from the mailing list, I succeeded.

(Note on terminology – “Windows and dialogs in Second Life are implemented as LLFloater objects”, which is why the word ‘floater’ appears a lot.)

Dialogs

The content & layout of the floater is defined in an XML file that goes in newview/skins/default/xui/en/floater_serial_monitor.xml (mine is called floater_serial_monitor; you can obviously call yours whatever you want). This very simple example just creates an empty floater & the only point of interest is the layout="topleft" line, which means that 0,0 will be in the top left of the floater rather than the default bottom left (OpenGL style).

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<floater
 border="true"
 can_close="true"
 can_minimize="true"
 can_resize="false"
 save_rect="true"
 height="320"
 width="480"
 name="serial_monitor_floater"
 title="Serial Monitor"
 layout="topleft">

The implementation of the stuff in the floater goes in newview/llfloaterserialmonitor.h & newview/llfloaterserialmonitor.cpp. Of course you need to make the build system aware of these new files – on Linux by adding references to them in the relevant sections of newview/CMakeLists.txt.
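The header itself is just a small LLFloater subclass. Here is a minimal sketch of what it might look like – the exact method signatures vary between viewer versions, so crib from an existing floater in newview rather than trusting this verbatim;

// newview/llfloaterserialmonitor.h – minimal sketch
#ifndef LL_LLFLOATERSERIALMONITOR_H
#define LL_LLFLOATERSERIALMONITOR_H

#include "llfloater.h"

class LLFloaterSerialMonitor : public LLFloater
{
public:
	// LLFloaterReg constructs floaters with an LLSD key
	LLFloaterSerialMonitor(const LLSD& key);
	virtual ~LLFloaterSerialMonitor();

	// called once the XUI file has been loaded – wire up controls here
	/*virtual*/ BOOL postBuild();
};

#endif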

You then have to register your new floater in newview/llviewerfloaterreg.cpp by adding something like this, along with an include statement at the top of the file like #include "llfloaterserialmonitor.h".

LLFloaterReg::add("serial_monitor", "floater_serial_monitor.xml", (LLFloaterBuildFunc)&LLFloaterReg::build<LLFloaterSerialMonitor>);

Once you’ve done this, you’ll be able to show the dialog using showInstance.

LLFloaterReg::showInstance("serial_monitor");

For examples from the viewer source, particularly to give you an idea of what to put in the .h/.cpp files, take a look at the existing LLFloater subclasses in newview.

Menus

The wiki page for adding menu items hasn’t been updated since 2009, however it is still mostly relevant. So if you want to display your new dialog from the main menu bar that runs along the top of the viewer window, first add a menu item to an existing menu on the menu bar, or add your own menu to the menu bar & then add a menu item to that, by editing newview/skins/default/xui/en/menu_viewer.xml. This example adds a menu called ‘Serial’ that contains a single item called ‘Serial Monitor’.

<menu
      create_jump_keys="true"
      label="Serial"
      name="Serial"
      tear_off="true">

      <menu_item_call
        label="Serial Monitor"
        name="Serial Monitor">
        <menu_item_call.on_click
          function="ShowSerialMonitor"
          parameter="agent" />
      </menu_item_call>
</menu>

The function line refers to a listener that you add to newview/llviewermenu.cpp – first by registering it.

view_listener_t::addMenu(new LLShowSerialMonitor(), "ShowSerialMonitor");

Then by adding an event handler class that uses the showInstance method mentioned above.

class LLShowSerialMonitor : public view_listener_t
{
	bool handleEvent(const LLSD& userdata)
	{
		LLFloaterReg::showInstance("serial_monitor",userdata);
		return true;
	}
};

So now you know how to add a menu entry that displays a blank dialog, but you still need to put something in it…

Arduino + HMC6343 + u-blox MAX-6

Part of my investigation into simultaneous presence in real & virtual environments involves streaming real world position, heading & orientation data into a modified Second Life viewer. After much faffing about evaluating different accelerometers, magnetometers & GPS receivers, as well as learning about GNSS & exciting concepts like magnetic declination, hard/soft iron offsets & tilt compensation, the result is an Arduino with an HMC6343 tilt-compensated magnetometer & a u-blox MAX-6 GPS receiver.

For the interaction style that I want to investigate with the VTW project I need to know 3 things about the user;

  • position – where they are
  • heading – the direction they are looking
  • pitch – the angle they are facing (up, down, level)

Pitch is the easiest of the three – you simply use an accelerometer. I tried both the ADXL335 with signal conditioned voltage outputs & the MMA8452 which uses I2C. These are both 3-axis accelerometers so can tell you roll & pitch at the same time (yaw can’t be derived from the gravity vector alone – that’s where the magnetometer comes in, as we’ll see beneath). They both did what I wanted, but the MMA8452 was easier to use as it didn’t involve experimenting to find out what values represented the rest/peak positions of each axis like the ADXL335 did.
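For the curious, deriving pitch from the gravity vector is a one-liner. This sketch shows the usual calculation – the axis naming & sign conventions depend on how the IC is mounted, so treat it as illustrative;

#include <math.h>

// pitch = the angle between the device's X axis & the horizontal plane,
// derived from the gravity vector (ax, ay, az) reported by the accelerometer
float pitchDegrees(float ax, float ay, float az) {
  return atan2(-ax, sqrt(ay * ay + az * az)) * 180.0 / M_PI;
}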

Heading (by which I mean a compass heading) also seems easy to begin with – you just use a 3-axis magnetometer (aka ‘digital compass’ or ‘e-compass’) & perform a calculation that uses the readings of the 2 axes that represent pitch & roll. I tried the HMC5883L but soon discovered that things get much more complicated if you aren’t holding the magnetometer flat – because the calculation doesn’t take the third axis into consideration, the heading becomes more useless the more you tilt it! To calculate a heading even when not held level (which is important for many applications) you need to integrate the reading from the third axis into the heading calculation – it can be thought of as recording the magnetic field lost by the other two axes when tilted out of alignment. But to do this you need to know how the magnetometer is tilted – good thing we have a 3-axis accelerometer!
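If you want to see what the tilt compensation actually involves, the standard calculation looks something like this. Again a sketch only – the axis & sign conventions differ between devices, so check your magnetometer’s datasheet before lifting it verbatim;

#include <math.h>

// Tilt-compensated compass heading from the raw magnetometer readings
// (mx, my, mz) & the pitch/roll derived from the accelerometer (in radians).
float tiltCompensatedHeading(float mx, float my, float mz,
                             float pitch, float roll) {
  // project the magnetic readings back onto the horizontal plane
  float xh = mx * cos(pitch) + mz * sin(pitch);
  float yh = mx * sin(roll) * sin(pitch) + my * cos(roll)
           - mz * sin(roll) * cos(pitch);

  float heading = atan2(yh, xh);         // radians, -pi..pi
  if (heading < 0) heading += 2 * M_PI;  // normalise to 0..2pi
  return heading * 180.0 / M_PI;         // degrees, 0..360
}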

There are a number of good tutorials/guides for this, including these two that I followed. However, if you are lazy &/or scared by maths, you can also buy ICs such as the HMC6343 that combine a 3-axis accelerometer with a 3-axis magnetometer & perform the tilt compensation algorithms onboard.

Of course there must be a downside to this convenience & most obvious is the price. If you are a SparkFun sort of person the HMC6343 will set you back $149.95. To put that into perspective the ADXL335 is $24.95, the MMA8452 a paltry $9.95 & the HMC5883L just $14.95.

The HMC6343

The HMC6343 ‘3-Axis Compass with Algorithms’ is an I2C device so it is easy to hook up (SDA to analog 4, SCL to analog 5 on an Arduino Uno) & interact with. The datasheet contains the protocol definition & communicating with it looks something like this (the example here changes the rate of measurements from the default of 5Hz to the fastest of 10Hz). HMC6343_ADDRESS is 0x19 & obviously you will also need to include Wire.h at the top of your sketch & call Wire.begin() in setup().

Wire.beginTransmission(HMC6343_ADDRESS);
Wire.write(0xF1);
Wire.write(0x05);
Wire.write(0x02);
Wire.endTransmission();

The first Wire.write tells the device what command we want to run; in this example 0xF1 is the ‘Write to EEPROM’ command, which expects 2 binary byte arguments. The first argument is which register we want to write to; in this example 0x05 is the address of ‘Operational Mode Register 2’, which stores the measurement speed. The second argument is the value that we actually want to write to this register & in this case 0x02 means that we want measurements at 10Hz. Simple enough, no?
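You can check that the write took by reading the register back. 0xE1 is the corresponding ‘Read from EEPROM’ command (again, see the datasheet – the delay here is a cautious pause to give the device time to respond);

Wire.beginTransmission(HMC6343_ADDRESS);
Wire.write(0xE1);  // 'Read from EEPROM' command
Wire.write(0x05);  // the register to read (Operational Mode Register 2)
Wire.endTransmission();

delay(10);  // give the device a moment to fetch the value

Wire.requestFrom(HMC6343_ADDRESS, 1);  // the response is a single byte
byte rate = Wire.read();               // 0x02 if the write above succeeded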

If you look through the sketch you’ll see this pattern repeated for setting the variation angle correction, the Infinite Impulse Response filter & temporarily changing the orientation (so that you can mount the IC in different orientations without having to manually swap the axes, which is very handy).

The HMC6343 also allows you to access just the accelerometer readings, separately from the tilt-compensated magnetometer readings, so this single IC gives me both the (tilt compensated) heading & the pitch.

One more handy feature of the HMC6343 is the hard-iron offset calibration. Metallic stuff around the chip (wires, the GPS module, the Arduino, etc.) can affect the magnetic field detected by the magneto resistors & throw off the readings, so we can compensate for this using the built-in calibration mode. Read the datasheet to see how this works, but essentially all you have to do is send command 0x71, rotate the device in a certain manner, then send command 0x7E.
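In code the calibration is as simple as it sounds – something along these lines, with the rotation procedure between the two commands described in the datasheet;

// enter user calibration mode
Wire.beginTransmission(HMC6343_ADDRESS);
Wire.write(0x71);  // 'Enter User Calibration Mode' command
Wire.endTransmission();

// ...rotate the device as the datasheet describes...

// exit calibration mode, storing the computed offsets
Wire.beginTransmission(HMC6343_ADDRESS);
Wire.write(0x7E);  // 'Exit User Calibration Mode' command
Wire.endTransmission();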

In case you’re wondering what the difference between hard-iron & soft-iron offsets is, the former is for things that won’t change during operation (eg the wires connecting the IC to the Arduino are always going to be there) whereas the latter refers to things that will change during operation (such as moving the IC around your lodestone collection). Hard-iron offsets need only be calculated once, whereas soft-iron offsets require dynamic attention.

u-blox MAX-6

My research into GNSS led me to the u-blox MAX-6 as my best option: impressive specification, solid performance & compatibility with SBAS, including EGNOS in Europe (Wikipedia page on GNSS augmentation/SBAS here). It turns out that it’s a popular GPS receiver amongst high-altitude ballooners (the kind of people who attach Arduinos & cameras to weather balloons & let them go) as it operates correctly at high altitudes, which some receivers evidently don’t. So after some discussion on the #highaltitude channel on freenode I ended up ordering a MAX-6 with the appropriate level conversion to work with an Arduino (the MAX-6 uses 3v logic, whereas the Arduino uses 5v, so conversion is required so as not to fry the module). I ordered it from here & based my sketch on the example code here.

The MAX-6 is a serial device (no convenient I2C here) so interfacing with an Arduino is slightly different. The wiki page linked above recommends not using SoftwareSerial, however as I also want to run the HMC6343 on the same Arduino I have no choice – so far I haven’t experienced any trouble with it.
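For completeness, the SoftwareSerial setup amounts to this. The pin choice here is mine & arbitrary – the point is that the hardware UART stays free for USB/debugging;

#include <SoftwareSerial.h>

SoftwareSerial gps(8, 9);  // RX, TX – any free digital pins will do

void setup() {
  gps.begin(9600);  // the MAX-6 talks at 9600 baud by default
}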

The protocol specification for the MAX-6 is substantially longer & the messages much more complex than the HMC6343’s. If you want to have a go at assembling the messages by hand you can do so by reading the 220+ page document, & I actually had some success with this. However, I then discovered that the easier way is to use the u-center software, which allows you to configure the receiver from Windows using tickboxes & menus, then see/copy the messages from the relevant console window to paste into your sketch. The u-center software worked fine for me in a Windows 7 VM running on a Linux host, using pins 0 & 1 on an Arduino to connect the receiver directly to the USB connection via the Arduino’s UART.

Once you have your message, sending it to the receiver looks something like this (the example here sets the dynamic platform model to ‘pedestrian’);

uint8_t CFG_NAV5[] = {0xB5, 0x62, 0x06, 0x24, 0x24, 0x00, 0xFF, 0xFF,
                    0x03, 0x03, 0x00, 0x00, 0x00, 0x00, 0x10, 0x27,
                    0x00, 0x00, 0x05, 0x00, 0xFA, 0x00, 0xFA, 0x00,
                    0x64, 0x00, 0x2C, 0x01, 0x32, 0x3C, 0x00, 0x00,
                    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                    0x00, 0x00, 0x00, 0x00};
calculateUBXChecksum(CFG_NAV5, (sizeof(CFG_NAV5)/sizeof(uint8_t)));

bool success = false;
while (!success)
{
  sendUBX(CFG_NAV5, (sizeof(CFG_NAV5)/sizeof(uint8_t)));
  success = getUBX_ACK(CFG_NAV5);
}

The first thing we do is create an array of type uint8_t that contains the message itself. The final 2 hex values are the checksum & the call to calculateUBXChecksum calculates these values & substitutes them into the array. To send the message we use the sendUBX method & keep on trying until the receiver sends us a confirmation.
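The helpers themselves are short. These sketches follow the UBX protocol’s 8-bit Fletcher checksum, which is calculated over everything between the 0xB5 0x62 header & the two checksum bytes – the versions in the example code linked above may differ in detail;

// fill in the last two bytes of a UBX message with its Fletcher checksum
void calculateUBXChecksum(uint8_t *msg, uint8_t len) {
  uint8_t ckA = 0, ckB = 0;
  for (int i = 2; i < len - 2; i++) {  // skip the header, stop before the checksum
    ckA += msg[i];
    ckB += ckA;
  }
  msg[len - 2] = ckA;
  msg[len - 1] = ckB;
}

// push the raw bytes of the message out to the receiver
void sendUBX(uint8_t *msg, uint8_t len) {
  for (int i = 0; i < len; i++) {
    gps.write(msg[i]);  // 'gps' being the SoftwareSerial instance from earlier
  }
}

getUBX_ACK is a little more involved – it has to watch the incoming stream for an acknowledgement message matching the class/id of what was just sent – so see the example code for that one.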

Again you will see this pattern repeated a number of times in the sketch, to set the dynamic platform model, enable SBAS using EGNOS, disable all the messages that I don’t want & enable the ones that I do want.

Getting readings

Getting data from the HMC6343 is a simple case of sending the correct command (0x50), requesting the correct number of bytes & then reading them into sensible variables (heading, pitch, etc.).

Getting data from the MAX-6 isn’t much harder, using SoftwareSerial.available() & SoftwareSerial.read(), however as with most GPS receivers the data that it returns is in the form of NMEA 0183 messages, which look a bit like this;

$GPRMC,225446,A,4916.45,N,12311.12,W,000.5,054.7,191194,020.3,E*68
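(For the record, that example sentence decodes as: fix taken at 22:54:46 UTC, status A = valid, latitude 49°16.45′ N, longitude 123°11.12′ W, speed 0.5 knots, track 54.7°, date 19/11/94, magnetic variation 20.3° E, checksum 68.)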

It’s not hard to parse it yourself, but why go to the effort when there are libraries like TinyGPS that can do it for you?
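Feeding the receiver’s output through TinyGPS looks roughly like this (a sketch reusing the SoftwareSerial instance from earlier – note that TinyGPS reports positions as degrees multiplied by 100,000);

#include <SoftwareSerial.h>
#include <TinyGPS.h>

SoftwareSerial gps(8, 9);  // RX, TX – as before
TinyGPS parser;

void setup() {
  gps.begin(9600);
}

void loop() {
  while (gps.available()) {
    if (parser.encode(gps.read())) {    // true once a full sentence has parsed
      long lat, lon;
      parser.get_position(&lat, &lon);  // degrees * 100000
      // ...do something with lat & lon...
    }
  }
}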

…using readings?

So what was the point of all this again? Well, my modified Second Life client listens on the serial port using Boost.Asio, receives these messages & then uses them to control the avatar & camera – more on this soon!

OpenSim Region Modules & GPS

Something else I did late last year/early this year was investigate how one might combine OpenSim region modules with GPS readings to control the movement of an avatar, & other exciting things. These experiments weren’t developed to fruition but may be of interest to somebody.


From the OpenSim wiki

Region modules are .net/mono DLLs. During initialization of the simulator, the OpenSimulator bin directory (bin/) and the scriptengines (bin/ScriptEngines) directory are scanned for DLLs, in an attempt to load region modules stored there.

Region modules execute within the heart of the simulator and have access to all its facilities. Typically, region modules register for a number of events, e.g. chat messages, user logins, texture transfers, and take what ever steps are appropriate for the purposes of the module.

Essentially they allow you to develop much more complex extensions to the OpenSim platform than in-world LSL does, but are easier & more accessible than directly modifying the client &/or server source code.

Instructions for making your own region modules can be found on the OpenSim wiki (follow the link above) but I also wrote a heavily commented version of the boilerplate code for a shared region module that might make it easier to understand which parts you actually need to implement/change to get a basic region module doing something. This was the first time I had worked with C# so apologies if some of the comments seem superfluous ;)

Instead of having hundreds of lines of code in this post, I’ve put all of the examples in public Bitbucket repositories – here is the first one for the boilerplate region module code. All you really need to do is change names in a few places & then add some functionality starting in PostInitialise() to get a basic region module that can result in some visible effect in world.

One of the most basic visible effects is making something move & this example does just that. Despite its name it doesn’t actually have anything to do with GPS yet, it simply creates a spherical prim in each region & moves it a short distance on each tick of a timer. This shows the most basic usage of the SceneObjectGroup class to get a reference to a primitive in a scene & then do something with it (move it).

Moving on to something that actually begins to involve GPS, or at least begins to make some connection between real world latitude & longitude values & ‘equivalent’ positions in the virtual world, this next example waits for a latitude & longitude to be reported via a TCP connection & then moves the avatar to the equivalent position in the region. This approach assumes that there is one position within the OpenSim region for which the equivalent real world latitude & longitude is known (referred to as the ‘anchor’ point) & that the scale of the OpenSim region compared to the real world is also known (eg that every meter in the real world is represented by 1.2m in the OpenSim region).

When a latitude & longitude is received via TCP the haversine formula is used to calculate the real world ‘great circle’ distance between the anchor point & this new point. This distance is then scaled according to the scale of the real world to the OpenSim region & thus the equivalent OpenSim position is calculated as a displacement from the anchor point – to which the avatar is then moved.
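For reference, the haversine formula gives the great circle distance d between two points (latitude φ, longitude λ, in radians) on a sphere of radius r (roughly 6371km for Earth);

$$d = 2r \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right)$$

The OpenSim displacement is then just this distance, multiplied by the region scale, applied from the anchor point.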

This is a fairly ‘rough & ready’ proof-of-concept – the avatar’s name & the position of the anchor point are currently hard-coded & movement across region boundaries isn’t supported. The implementation of the haversine formula & the GPSSanitizer method (which checks for both DMS & signed decimal latitude/longitude representations using regular expressions) may be useful in other applications. It has also been tested by manually piping in latitude/longitude values via a simple TCP client – a rudimentary example is included below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Net.Sockets;

namespace TCPClient {

    /*
     * A very simple TCP client test program. When run (with no command line arguments) it will send a message from the CLI to
     * localhost port 13000 via a TCP connection. The message, address & port can be changed by changing the hardcoded values.
     */
    class SimpleTcpClient {

        static void Main(string[] args) {
            Console.WriteLine("Enter GPS value to move to (eg '56.340626, -2.808015' or '56 20 26 N, 2 48 28 W'");
            string s = Console.In.ReadLine();
            Connect("127.0.0.1", s);
        }

        static void Connect(String server, String message) {
            try {
                // Create a TcpClient.
                // Note, for this client to work you need to have a TcpServer 
                // connected to the same address as specified by the server, port
                // combination.
                Int32 port = 13000;
                TcpClient client = new TcpClient(server, port);

                // Translate the passed message into ASCII and store it as a Byte array.
                Byte[] data = System.Text.Encoding.ASCII.GetBytes(message);

                // Get a client stream for reading and writing.
                NetworkStream stream = client.GetStream();

                // Send the message to the connected TcpServer. 
                stream.Write(data, 0, data.Length);

                Console.WriteLine("Sent: {0}", message);

                //send & forget, don't bother waiting for a response

                // Close everything.
                client.Close();
            }
            catch (ArgumentNullException e) {
                Console.WriteLine("ArgumentNullException: {0}", e);
            }
            catch (SocketException e) {
                Console.WriteLine("SocketException: {0}", e);
            }

            Console.WriteLine("\n Press Enter to continue...");
            Console.Read();
        }
    }
}

The final example is something a bit different, but still GPS related. This one takes a latitude & longitude via TCP in the same manner as the previous example, but instead of moving something it instead queries the Google Maps API for a satellite image centered about this position & applies this image as a texture to a prim that covers the entire region.

This was written to make testing of the previous example easier – it’s easier to visualise whether the movements of the avatar are correct if s/he is walking over imagery of the real world than over blank terrain. This was written a long time ago, but I think I started extending it such that when an avatar moved into a neighbouring region it would automatically be textured with another satellite image – at any rate the code is in a very unfinished state, but please feel free to harvest any bits that might be useful to you.
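For reference, fetching the image boils down to a single HTTP GET against the Static Maps API, along these lines (the centre coordinates, zoom & size here are illustrative; the sensor parameter was mandatory at the time of writing);

http://maps.googleapis.com/maps/api/staticmap?center=56.340626,-2.808015&zoom=18&size=512x512&maptype=satellite&sensor=false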


It became apparent during these experiments that it would make more sense to handle GPS/accelerometer/magnetometer avatar movement on the client side rather than the server side, thus these experiments were abandoned – however they still serve as an interesting demonstration of what can be achieved with region modules & a bit of imagination :)

In addition, much of the GPSAvatar code will be transferred into my modified Second Life viewer & will help to speed up development there.

Arduino + accelerometer as joystick for Second Life

Update – example code available here.

Here’s something I hacked together earlier this year & although I didn’t end up developing it further it may prove interesting to others.


First off a quick video showing the Arduino + accelerometer ‘joystick’ controlling the avatar & then the flycam in the official Second Life client. If this piques your interest, read on to find out how (& why) it was done.

I wanted to investigate how I might use real world orientation data, such as that recorded by an accelerometer attached to an Arduino, to control a Second Life avatar without having to modify the source code of the Second Life client. This approach would have two major advantages;

  • conceivably much less work required
  • compatibility with any Second Life client sans modification – no reliance on a bespoke modified client

Unfortunately the official Second Life client (upon which all third party clients are based) doesn’t really present any interfaces for input devices apart from the usual mouse, keyboard & joystick. But some work that my colleague John was doing at the time, getting Xbox controllers to move avatars using the joystick interface, got me thinking. A Google search for ‘Arduino joystick’ led to my discovery of the Arduino UNO Joystick HID firmware based on LUFA (the Lightweight USB Framework for AVRs), which allows an Arduino to appear to a computer as a standard USB HID joystick, instead of as a serial device, by reprogramming the USB-to-serial converter. Of course this ‘joystick’ can in fact be used by any program/game, not just Second Life.

Note: this is only possible with the Arduino Uno & Mega, which use an ATmega chip for USB-to-serial conversion. It does not work with older Arduinos, such as the Duemilanove, which use an FTDI chip. My experiments used an Uno R3, which has the ATmega16U2 – check compatibility before you attempt the following with an Uno/Mega R1/R2, which use the ATmega8U2.

Reprogramming the ATmega16U2 involves putting the Arduino into DFU mode & using dfu-programmer. Unfortunately the latest version of dfu-programmer (0.5.4) doesn’t know about the ATmega16U2 so it has to be patched. Grab the latest dfu-programmer source & apply this patch as discussed in this thread on the Arduino.cc forums.

With your Arduino connected to your computer as normal you should see something similar to the following if you run lsusb.

[root@flatline /]# lsusb
Bus 003 Device 007: ID 2341:0043 Arduino SA Uno R3 (CDC ACM)

To enter DFU mode find the 6-pin AVR header near the USB socket & briefly connect the 2 pins closest to the USB socket (see picture beneath) using the tip of a screwdriver, a paperclip, a piece of wire, etc.

The Arduino should no longer register in lsusb, but a mysterious Atmel Corp. device should have appeared in its place. The serial port (eg /dev/ttyACM0) should also have disappeared.

[root@flatline /]# lsusb
Bus 003 Device 011: ID 03eb:2fef Atmel Corp.

You can then go ahead & erase the original firmware & flash the joystick HID firmware.

[root@flatline /]# dfu-programmer atmega16u2 erase
[root@flatline /]# dfu-programmer atmega16u2 flash Arduino-joystick-0.1.hex
Validating...
4076 bytes used (33.17%)
[root@flatline /]# dfu-programmer atmega16u2 reset

And you’re done! At this point you will need to physically disconnect & reconnect the Arduino for the computer to recognise it as a joystick. After you have done so, lsusb should report something like this.

[root@flatline /]# lsusb
Bus 003 Device 012: ID 03eb:2043 Atmel Corp. LUFA Joystick Demo Application

With the joystick firmware in place you will no longer be able to upload sketches as normal. If you don’t have a USBtiny ISP or similar you will have to revert back to the original USB-to-serial firmware each time you want to upload a new sketch. This process is exactly the same as above, but substituting the joystick .hex file with the USB-to-serial one.

[root@flatline /]# dfu-programmer atmega16u2 erase
[root@flatline /]# dfu-programmer atmega16u2 flash Arduino-usbserial-atmega16u2-Uno-Rev3.hex 
Validating...
4034 bytes used (32.83%)
[root@flatline /]# dfu-programmer atmega16u2 reset

Once again, you will have to disconnect & reconnect the Arduino for your computer to register the change.


As for the sketch itself, mapping accelerometer readings to the joystick axes is simply a case of inserting them into the correct variables in the joyReport struct & sending it over Serial – take a look at the example sketch that comes with the joystick firmware & you should soon see how to do it. Beneath is a rudimentary example using readings from the Honeywell HMC6343 from Sparkfun (see here for how to use this with an Arduino), mapping the accelerometer’s roll to the joystick’s X axis & its pitch to the Y axis.

 
#include <Wire.h>
 
#define HMC6343_ADDRESS 0x19
#define HMC6343_HEADING_REG 0x50

// data structure as defined by the joystick firmware
struct {
    int8_t x;
    int8_t y;
    uint8_t buttons;
    uint8_t rfu;
} joyReport;

void setup() {
  Wire.begin();          // initialize the I2C bus
  Serial.begin(115200);  // initialize the serial bus
}
 
void loop() {
  byte highByte, lowByte;
 
  Wire.beginTransmission(HMC6343_ADDRESS);    // start communication with HMC6343
  Wire.write(0x74);                           // set HMC6343 orientation (0x74 = level)
  Wire.write(HMC6343_HEADING_REG);            // 0x50 = 'Post Heading Data' command
  Wire.endTransmission();
 
  Wire.requestFrom(HMC6343_ADDRESS, 6);       // request six bytes of data from the HMC6343
  while(Wire.available() < 6);                // busy wait until all six bytes have arrived
 
  highByte = Wire.read();
  lowByte = Wire.read();
  float heading = ((int16_t)((highByte << 8) + lowByte)) / 10.0; // heading in degrees

  highByte = Wire.read();
  lowByte = Wire.read();
  float pitch = ((int16_t)((highByte << 8) + lowByte)) / 10.0;   // pitch in degrees (the cast keeps negative values intact)

  highByte = Wire.read();
  lowByte = Wire.read();
  float roll = ((int16_t)((highByte << 8) + lowByte)) / 10.0;    // roll in degrees (signed, as above)

  joyReport.buttons = 0;
  joyReport.rfu = 0;
  
  joyReport.x = constrain(((int)(map(roll, -90, 90, -100, 100))), -100, 100);
  joyReport.y = constrain(((int)(map(pitch, -90, 90, -100, 100))), -100, 100);
  
  Serial.write((uint8_t *)&joyReport, 4);

  delay(100); // do this at approx 10Hz
}

When you fire up your Second Life client (either the official client or a third-party client) go into Preferences -> Move & View -> Other Devices to open the Joystick Configuration window & you should see something like the screenshot beneath. Note that ‘Arduino Arduino Joystick’ has been recognised – however also note that it is a limitation of the client that it only recognises the first joystick device connected to the computer. Depending on how you have mapped your axes & what you want to control you will have to change the numbers in this window accordingly – with the above sketch 0 is the X axis & 1 is the Y axis (-1 disables a control).


In the end this approach proved to be unsuitable for my purposes, due to the difficulty of mapping readings to discrete virtual world orientations rather than to relative movements from the previous orientation. But it was still interesting to do :)

Open Virtual Worlds at the University of St Andrews

I thought it was time to start posting something here other than the occasional photo (not that I even do that very often…) so I’m going to start writing about the exciting things I’m doing for my PhD. There is a new category ‘Academic’ for these posts, so you can easily follow/ignore them.


I am part of the Open Virtual Worlds group at the School of Computer Science at the University of St Andrews. We have an official blog, we own the openvirtualworlds.org domain & we even have a Facebook page because that’s the done thing these days. Stolen from the blog…

Open Virtual Worlds are multi-user 3D environments within which users are represented by the proxy of an avatar. They are similar to multi-player computer games but differ in the important respect that their appearance, interactive characteristics, content and purpose are all programmable. In addition they can act as a portal for organising multiple media, including web pages, video streams, textual documents and simulations.

Unlike computer games they have no pre-set goals; users or groups of users are free to make up their own goals. They offer the potential of providing the core of the future 3D Internet. Our research addresses issues that need to be addressed for this potential to be realised.

My interest lies in the concept of simultaneous presence in real & virtual environments via the cross reality paradigm & investigating solutions to the vacancy problem – the inability with current technologies to simultaneously immerse oneself in both the real world & a complete virtual world.

This is different to the concept of augmented reality, which many people are now familiar with thanks to the popularity of augmented reality smartphone apps: cross reality deals with the combination of a complete virtual world with the real world, rather than the sparse digital augmentations of the real world that augmented reality performs.

I am investigating this concept via a case study that builds upon existing work within the Open Virtual Worlds group, by developing a system that allows visitors to the ruined cathedral at St Andrews to simultaneously explore our virtual world reconstruction of the cathedral in a natural & intuitive manner via a tablet computer, in a project dubbed the Virtual Time Window (VTW).

Stay tuned for some more frighteningly exciting updates on what I’m working on, along with details of how you can log into our reconstructions & explore them yourselves!