Mirrorshades update

It’s been a while since I posted anything about the Mirrorshades project, so here are a few things I’ve been working on!

rift

I’ve stopped using the PlayStation Eye cameras because the drivers were too buggy; I would frequently sit down to work on the project only to find that Unity couldn’t find the DLL, & there was seemingly no fix other than to repeatedly reinstall & restart for hours until the cameras started working.

I’m now using a pair of Logitech C310s & whilst the resolution is higher than the PlayStation Eye cameras (1280×960 vs 640×480), the refresh rate is lower (30Hz vs 60Hz). To my eyes, the PlayStation Eye cameras actually gave a nicer experience, but of course when they weren’t working they were no use!

I’m using the same 3D printed clips (red) with the cameras epoxied to thermoplastic (white) so they can be adjusted via the nuts & bolts with rubber washers. Once again, inspiration taken from William Steptoe’s AR Rift project.

I quickly measured the latency introduced by the C310 webcams (& then realised that it would’ve been interesting to have run the same experiment on the PlayStation Eye cameras!). I placed the Rift, with the lenses removed, facing an LCD monitor displaying a timer from flatpanels.dk. I placed a camera behind it such that it could see both the monitor & the Rift’s screen, then cranked the sensitivity up on the camera so that it could record 50fps video with a 1/4000th shutter speed.

The monitor & the Rift were both refreshing at 60fps, each frame lasting 16.67ms, whilst the 1/4000 shutter speed meant the camera’s shutter was open for only 0.25ms per frame. The Rift’s response time was evidently much higher than the monitor’s (quoted by its manufacturer as 8ms GTG): the tenths & even hundredths digits on the monitor were usually legible in each frame of the video, whereas on the Rift the hundredths & thousandths digits were always illegible. So I went through the video frame-by-frame looking for adjacent frames where a transition from one tenths digit to the next was readable on the Rift & the hundredths/thousandths digits were readable on the monitor, such as this pair;

[video frame: 00000.MTS.Still001]

[video frame: 00000.MTS.Still002]

From these we can infer that the tenths digit on the Rift screen (right eye) changed from 9 to 0 sometime between readings of 181 & 198 (milliseconds) on the monitor’s timer, meaning a latency of between 181ms & 198ms. Out of 11 pairs of frames like this, 7 pairs showed this 181-198ms latency, whilst 4 pairs showed 198-215ms, as in the pair below;

[video frame: 00000.MTS.Still015]

[video frame: 00000.MTS.Still016]

I was also able to take some still photos with the same 1/4000th shutter speed, all of which showed the same 181-215ms latency (3 images following); however, as timing shots to get legible digits was entirely down to luck, it was easier to shoot 50fps video to get enough frames to work from.

[image: still1]

[image: still2]

[image: still3]

This latency of 181-215ms is substantially worse than the 60ms between head movement & the resultant changes being displayed that is often quoted as the upper limit for an acceptable VR experience. The gap between this camera latency & the tracker-to-VR latency (quoted at the same link as typically 30-60ms for applications running at 60fps on the Rift DK1) will probably show up in experimental results when users actually try out the platform.

sallies

I’ve mapped St Salvator’s Chapel using IndoorAtlas. We plan to use this site for our case studies as it fits with one of my research group’s interests, cultural heritage, whilst also providing a good example of where mobile cross reality is useful. I wasn’t expecting IndoorAtlas to work well in this building, as it doesn’t have a metal frame, but I was pleasantly surprised. Perhaps the addition of central heating & electricity later in the building’s history helped?

[image: diagram]

Other than that, I’ve been focussing on theoretical work & designing experiments – after all, the platform is no good without evaluation!

Control Second Life with GPS, accelerometer & magnetometer

As part of my PhD work I have produced a modified version of the Second Life viewer, dubbed Pangolin after the Open Virtual Worlds research group’s previous Mongoose, Armadillo & Chimera projects. It allows;

  • connecting a serial device to the viewer
  • controlling movement of the avatar according to GPS readings
  • controlling the camera according to accelerometer & magnetometer readings

The combination of these features allows you to do things like connect an accelerometer, magnetometer & GPS receiver to an Arduino, have it dump readings into the viewer & have the viewer use them to control the avatar & camera. The motivation behind this was to address the ‘vacancy problem’ by creating a mobile cross reality interface; allowing a user to experience simultaneous presence in a real environment & an equivalent synthetic environment, using their physical position & orientation as an implicit method of control for their synthetic representation. I’m presenting a paper about this at the iED 2013 Boston summit in June – if I can secure funding to actually get me there!

Getting Code & Building

The Pangolin source code is available on Bitbucket, with my additions & modifications licensed under the GNU General Public License. The serial IO functionality uses Terraneo Federico’s AsyncSerial class which is licensed under the Boost Software License. The viewer codebase was forked from Linden Lab’s viewer-release before the removal of the --loginuri flag so Pangolin is compatible with OpenSim grids/servers.

Instructions for building the viewer are available on the Second Life wiki. I build using 32-bit Debian GNU/Linux (specifically by chrooting into a 32-bit debootstrap install from a 64-bit Arch Linux host, see my instructions) which produces a binary that runs on 32-bit Linux & on 64-bit Linux with 32-bit compatibility libraries installed.

The serial connectivity makes use of Boost.Asio. The Linden-provided Boost pre-built library is missing some of the features that my modifications make use of, so I build with LightDrake’s alternative; his public libraries are available here. To use these libraries, edit the corresponding entry in the autobuild.xml file in the root of the codebase. You’re looking for the section below; here I’ve commented out the original library & hash for the Linux version & replaced them with LightDrake’s;

<key>boost</key>
  <map>
    <key>license</key>
    <string>boost</string>
    <key>license_file</key>
    <string>LICENSES/boost.txt</string>
    <key>name</key>
    <string>boost</string>
    <key>platforms</key>
    <map>
      <key>darwin</key>
      <map>
        <key>archive</key>
        <map>
          <key>hash</key>
          <string>d98078791ce345bf6168ce9ba53ca2d7</string>
          <key>url</key>
          <string>http://automated-builds-secondlife-com.s3.amazonaws.com/hg/repo/3p-boost/rev/222752/arch/Darwin/installer/boost-1.45.0-darwin-20110304.tar.bz2</string>
        </map>
        <key>name</key>
        <string>darwin</string>
      </map>
      <key>linux</key>
      <map>
        <key>archive</key>
        <map>
          <key>hash</key>
          <!--<string>a34e7fffdb94a6a4d8a2966b1f216da3</string>-->
          <string>2523af5082f44628e553635de6bbea70</string>
          <key>url</key>
          <!--<string>http://s3.amazonaws.com/viewer-source-downloads/install_pkgs/boost-1.45.0-linux-20110310.tar.bz2</string>-->
          <string>https://bitbucket.org/LightDrake/public-libs/downloads/boost-1.45.0-linux-20120213.tar.bz2</string>
        </map>
        <key>name</key>
        <string>linux</string>
      </map>
      <key>windows</key>
      <map>
        <key>archive</key>
        <map>
          <key>hash</key>
          <string>98be22c8833aa2bca184b9fa09fbb82b</string>
          <key>url</key>
          <string>http://s3.amazonaws.com/viewer-source-downloads/install_pkgs/boost-1.45.0-windows-20110124.tar.bz2</string>
        </map>
        <key>name</key>
        <string>windows</string>
      </map>
    </map>
  </map>

I’ve been building with the Linux 1.45.0 version; if you try the more recent Linux version or the Windows/Darwin versions let me know how it goes! Using this new library leads to a rather nasty namespace collision (at least with the Linux 1.45.0 version) for which I have uploaded a fix.
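Incidentally, if you’ve not met Boost.Asio’s serial support before, the gist of reading newline-separated messages from a port looks something like this. This is a minimal synchronous sketch only, not Pangolin’s actual code (the viewer uses the asynchronous AsyncSerial class), & the device path & baudrate are placeholders;

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::io_service io;
    // Device path & baudrate are placeholders; use the same values
    // you'd enter into the Serial Monitor floater.
    boost::asio::serial_port port(io, "/dev/ttyACM0");
    port.set_option(boost::asio::serial_port_base::baud_rate(115200));

    boost::asio::streambuf buf;
    for (;;)
    {
        // Block until a complete newline-terminated message has arrived
        boost::asio::read_until(port, buf, '\n');
        std::istream is(&buf);
        std::string line;
        std::getline(is, line);
        std::cout << "received: " << line << std::endl;
    }
}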

I’ve also put an example Arduino sketch on Bitbucket; it uses an HMC6343 accelerometer/magnetometer & a u-blox MAX-6 GPS receiver, both of which I wrote about previously.

Binary

I’ve uploaded a 32-bit Linux binary to Bitbucket if you just want to try it out without the rigmarole of successfully setting up the (rather particular) build environment.

Usage

Start the viewer & log in as normal, then take a look at the Serial menu, which contains a single entry, Serial Monitor. Click it & you will see something like this;

[screenshot: the Serial Monitor floater]

Put the path to the serial device & the baudrate into the fields in the Device settings section at the top & click [Connect]. For me, using an Arduino on Linux boxes, the serial device normally appears at /dev/ttyACM0 or /dev/ttyS0 if it’s the first serial device, /dev/ttyACM1 or /dev/ttyS1 if it’s the second, etc. If you’re using an Arduino & are having trouble finding it, just start the Arduino IDE & look at the Serial Port entry in the Tools menu.

Pangolin expects messages, separated by newline characters, in the following format;

<bearing> <pitch> <roll> <latitude> <longitude>

For example;

183.90 75.80 -59.30 56.339991 -2.7875334

So make sure that your serial device adheres to this message format; the example Arduino sketch linked above does this & might be a useful starting point for you.
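If you’d rather roll your own firmware than start from my sketch, the emitting side can be as simple as the following. Note that the read* functions here are hypothetical stand-ins for your own sensor code, & the baudrate is just an example;

// The read* functions are hypothetical stand-ins for real HMC6343/GPS reads.
float readBearing()   { return 183.90; }    // degrees clockwise from north
float readPitch()     { return 75.80; }     // degrees
float readRoll()      { return -59.30; }    // degrees
float readLatitude()  { return 56.339991; } // decimal degrees
float readLongitude() { return -2.787533; } // decimal degrees

void setup()
{
  Serial.begin(115200); // match whatever baudrate you enter into the floater
}

void loop()
{
  Serial.print(readBearing(), 2);
  Serial.print(' ');
  Serial.print(readPitch(), 2);
  Serial.print(' ');
  Serial.print(readRoll(), 2);
  Serial.print(' ');
  Serial.print(readLatitude(), 6);    // 6 decimal places, matching the floater
  Serial.print(' ');
  Serial.println(readLongitude(), 6); // println appends the newline separator
  delay(100); // ~10Hz, the HMC6343's update rate
}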

The fields in the Anchor settings section are for entering a single point for which you know both the real world latitude/longitude & the corresponding virtual world coordinates, along with the scale of the virtual world relative to the real world – eg if 1m in the real world is represented by 1.2m in the virtual world then enter 1.2 into the Scale field.

Sim X & Sim Y of the anchor point are global; this is necessary to allow Pangolin to work across multiple regions (such as mega regions). Because regions are 256×256m, you can easily calculate the global coordinate of a point by doing (256 * region position) + local coordinate. For example, if my anchor point is at 127,203,23 in a region at 1020,1043, then the global X coordinate of the anchor point is (1020 * 256) + 127 = 261247 & the global Y coordinate of the anchor point is (1043 * 256) + 203 = 267211. Height isn’t implemented (vertical accuracy of GPS is substantially worse than horizontal) so just pick a Sim Z of around your sim’s ground level. The latitude & longitude fields have 6 decimal places of accuracy.
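To make the global-coordinate arithmetic concrete, here’s the example above as a few lines of C++;

#include <iostream>

int main()
{
    // Anchor point at 127,203 in a region at 1020,1043 on the grid.
    // Regions are 256x256m, so: global = (256 * region) + local.
    const int regionX = 1020, regionY = 1043;
    const int localX  = 127,  localY  = 203;

    std::cout << "Sim X: " << 256 * regionX + localX << std::endl; // 261247
    std::cout << "Sim Y: " << 256 * regionY + localY << std::endl; // 267211
}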

Once you have input all of the anchor settings, click [Set] & if everything is okay you should see the data from the serial device in the Received data section & the processed position values in the Calculated data section. The Pause checkbox will stop the fields from updating so you can copy the values, etc. If you get garbled received data then you have probably set the baudrate incorrectly – Pangolin will ignore this data rather than try to process it & send your avatar’s movement haywire.

To enable/disable control of the camera & your avatar’s movement from the received/calculated data use the Orientation control & Position control checkboxes in the Controls section. The various spinners in this section allow you to alter the smoothing, high pass filters & frequency of updates. Good values for these will depend upon what hardware/sensors you are using, the scale of your sim, etc.
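To give a feel for what the smoothing does conceptually, here’s a generic exponential moving average – an illustration of the technique only, not Pangolin’s exact filter;

#include <iostream>

int main()
{
    // Exponentially smooth a noisy series of readings.
    // Note: a real bearing filter also has to handle the 0/360 wraparound.
    const float alpha = 0.2f; // lower = smoother but laggier
    const float readings[] = { 10.0f, 12.0f, 55.0f, 11.0f, 9.0f }; // 55 is a noise spike
    float smoothed = readings[0];

    for (unsigned i = 1; i < sizeof(readings) / sizeof(readings[0]); ++i)
    {
        smoothed += alpha * (readings[i] - smoothed);
        std::cout << readings[i] << " -> " << smoothed << std::endl;
    }
}

The lower the smoothing factor, the steadier but laggier the result – which is exactly the trade-off these spinners let you tune for your particular sensors & sim.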

Does it work?

The code itself works nicely. I tested it by walking around the ruins of the cathedral at St Andrews, for which the Open Virtual Worlds group has produced an OpenSim reconstruction, using an MSI WindPad 110W tablet computer. This sim is accessible on our own grid & on OSgrid (search for regions called ‘StAndrewsOVW’), though the OSgrid version is usually older than the one on our local grid.

[image: tablet]

Camera control from orientation was a very rewarding experience, but will benefit from a faster update rate than the 10Hz of the HMC6343 that I used.

[image: test]

The accuracy attainable from a GPS receiver, even a fairly high-spec one like the u-blox MAX-6 that I used, isn’t enough to have the avatar ‘follow in your footsteps’ as I’d originally envisaged, but it is good enough to move/teleport the avatar between different points of interest when you move within a certain distance/threshold of them.

Extending

Hopefully my additions & modifications to the viewer will provide a convenient starting point or reference for other people wishing to interface serial devices with the viewer &/or to leverage real world sensor data for controlling the avatar/camera.

The code is well commented throughout (more so than the rest of the viewer codebase!) & should be fairly self-explanatory. The commit logs on Bitbucket will reveal where the additions/modifications reside, but as a brief overview to get you started;

  • Terraneo Federico’s AsyncSerial class is at indra/newview/AsyncSerial
  • most of my added functionality resides in indra/newview/llviewerserialmovement
  • see LLAppViewer::mainLoop() in indra/newview/llappviewer.cpp for how the serial functionality is actually started
  • the floater is in indra/newview/skins/default/xui/en/floater_serial_monitor.xml

Credits

As well as the people I’ve mentioned whose code I have used, I received huge amounts of help from various people on the Second Life opensource development IRC channel (#opensl on Freenode) & the opensource-dev mailing list – thanks guys!