I tested my Mirrorshades platform at St Salvator’s chapel in St Andrews for the first time today!
Compared to my earlier tests walking around the Computer Science building, this is a much less contrived scenario in which to use a mobile cross reality platform like Mirrorshades: the real & virtual environments in this test share largely the same layout, but have substantial visual differences which are intriguing to contrast & compare.
The virtual component here shows the chapel as it stood in 1450-1460; since then there have been substantial changes, not least the removal of the original stone roof, its replacement with a wooden one, & changes to the room division.
It’s been a while since I posted anything about the Mirrorshades project, so here’s a few things I’ve been working on!
I’ve stopped using the PlayStation Eye cameras because the drivers were too buggy; I would frequently sit down to work on the project only to find that Unity couldn’t find the DLL, with seemingly no fix other than to repeatedly reinstall & restart for hours until the cameras started working.
I’m now using a pair of Logitech C310 webcams; whilst their resolution is higher than the PlayStation Eye’s (1280×960 vs 640×480), their refresh rate is lower (30Hz vs 60Hz). To my eyes the PlayStation Eye cameras actually gave a nicer experience, but of course when they weren’t working they were no use!
I’m using the same 3D printed clips (red) with the cameras epoxy’d to thermoplastic (white) so they can be adjusted via the nuts & bolts with rubber washers. Once again, inspiration taken from William Steptoe’s AR Rift project.
I quickly measured the latency introduced by the C310 webcams (& then realised that it would’ve been interesting to run the same experiment on the PlayStation Eye cameras!). I placed the Rift, with its lenses removed, facing an LCD monitor displaying a timer from flatpanels.dk. I placed a camera behind the Rift such that it could see both the monitor & the Rift’s screen, then cranked up the camera’s sensitivity so that it could record 50fps video at a 1/4000th shutter speed.
The monitor & the Rift were both refreshing at 60fps, each frame lasting 16.67ms, whilst the 1/4000 shutter speed meant the camera’s shutter was open for only 0.25ms. The Rift’s panel evidently has a much slower response time than the monitor (quoted by its manufacturer as 8ms GTG): the tenths & even hundredths digits on the monitor were usually legible in each frame of the video, whereas on the Rift the hundredths & thousandths digits were always illegible. So I went through the video frame by frame looking for adjacent frames where a transition from one tenths digit to the next was legible on the Rift & the hundredths/thousandths digits were legible on the monitor, such as this pair;
From these we can infer that the tenths digit on the Rift screen (right eye) changed from 9 to 0 somewhere between readings of 181 & 198 on the monitor, meaning a latency of between 181ms & 198ms. Of the 11 such pairs of frames I found, 7 showed this 181-198ms latency, whilst the other 4 showed 198-215ms, as in the pair below;
I was also able to take some still photos at the same 1/4000th shutter speed, which all showed the same 181-215ms latency (3 images following); however, as timing shots to get legible digits was entirely down to luck, it was easier to shoot 50fps video to get enough frames to work from.
This latency of 181-215ms is substantially worse than the 60ms between head movement & the resultant change being displayed that is often quoted as the upper limit for an acceptable VR experience. The gap between this camera latency & the tracker-to-VR latency (quoted as typically 30-60ms for applications running at 60fps on the Rift DK1, same link) will probably show up in experimental results when users actually try out the platform.
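The inference above is just interval arithmetic: the Rift’s tenths digit rolled over somewhere between two adjacent video frames, so the latency is bounded by the monitor readings in those frames. A minimal sketch of that reasoning (in Java purely for illustration; this isn’t measurement code):

```java
// Bound display latency from two adjacent frames of 50fps video.
// beforeMs & afterMs are the monitor's readings (in ms past the instant
// the Rift's tenths digit rolled over) in the frames immediately before
// & after the rollover was seen on the Rift.
public class LatencyBounds {
    static final double VIDEO_FRAME_MS = 1000.0 / 50; // 20ms between video frames

    static double[] boundsMs(double beforeMs, double afterMs) {
        // adjacent video frames can't be more than one frame interval apart
        if (afterMs - beforeMs > VIDEO_FRAME_MS + 0.5) {
            throw new IllegalArgumentException("frames are not adjacent");
        }
        return new double[] { beforeMs, afterMs };
    }
}
```

For the pair above, `boundsMs(181, 198)` gives the quoted 181-198ms interval; note that the two readings are 17ms apart, consistent with roughly one 60Hz display refresh (16.67ms) falling between the two 20ms-spaced video frames.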
I’ve mapped St Salvator’s Chapel using IndoorAtlas. We plan to use this site for our case studies as it fits with one of my research group’s interests, cultural heritage, whilst also providing a good example of where mobile cross reality is useful. I wasn’t expecting IndoorAtlas to work well in this building, as it doesn’t have a metal frame, but I was pleasantly surprised. Perhaps the addition of central heating & electricity later in the building’s history helped?
Other than that, I’ve been focussing on theoretical work & designing experiments – after all, the platform is no good without evaluation!
Did some more work on Mirrorshades & reached the point where I could actually give it an early go, walking around the building I work in! IndoorAtlas didn’t behave as well as in previous experiments, but the fact that I didn’t walk into any walls or fall on my face bodes well for future progress.
I’m not actually pressing anything on the phone, just tapping the screen occasionally to stop it from sleeping.
The setup isn’t exactly graceful at the moment… Add to this the Xbox controller used to toggle between real & virtual, plus the Android smartphone that provides the real-world position.
Scroll down for a fun video if you don’t want to read ;)
A long-overdue update on Mirrorshades, my project that aims to let you walk around wearing an Oculus Rift, using cameras to see your real surroundings whilst the IndoorAtlas indoor positioning system tracks your position & moves you around a matching Unity environment, which you can switch to viewing through the Rift whenever you want.
New camera mounts
I realised from William Steptoe’s Rift-based AR platform (incidentally a much more professionally approached endeavour than mine!) that I had made a glaring error with my camera mount by orienting the cameras horizontally rather than vertically. The Rift’s 1280×800 display is split down the middle into two 640×800 halves, one for each eye, so the area each camera renders to is actually ‘portrait’ rather than ‘landscape’. So I went back to the 3D printer & made some new mounts.
They’re much simpler & still allow the interocular/interpupillary distance to be altered. I also switched from metal hex spacers & washers to rubber washers, which both makes toe-in adjustments easier & keeps the sensors closer to the eyes, so the ‘eyes on stalks’ feeling of the cameras sitting physically several inches in front of your eyes should be marginally reduced compared to the old mount.
IndoorAtlas
I’ve now got things set up such that position data from an Android device running IndoorAtlas is dumped into a MySQL database, & a Unity app containing a nice model of the building I work in (obtained from another student; my 3D modelling skills are much more rudimentary!) queries the database for the device’s current location & uses it to move a camera controller.
As a quick first test, & to show things starting to work, I simply scripted a sphere with a camera pinned above it to jump instantaneously to each new position value, & built the app for Android so I could try it out without having to carry a laptop around. This is obviously a very rudimentary approach; for a proper implementation you would almost certainly want to move the marker/camera smoothly, maybe with some pathfinding &/or extrapolation.
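For illustration, the simplest smooth-movement option, exponential smoothing of the incoming fixes, might look something like this (sketched in plain Java rather than Unity C#, & not code from the actual project):

```java
// Exponential smoothing of 2D position fixes: each new IndoorAtlas fix
// pulls the marker a fraction ALPHA of the way towards it, rather than
// teleporting the marker instantly.
public class PositionSmoother {
    private static final double ALPHA = 0.25; // 0 = never move, 1 = teleport
    private double x, y;
    private boolean initialised = false;

    // Feed in a raw position fix; returns the smoothed position {x, y}.
    public double[] update(double rawX, double rawY) {
        if (!initialised) {          // first fix: snap straight to it
            x = rawX;
            y = rawY;
            initialised = true;
        } else {                     // later fixes: move part of the way
            x += ALPHA * (rawX - x);
            y += ALPHA * (rawY - y);
        }
        return new double[] { x, y };
    }
}
```

Each new fix pulls the marker a fraction of the way towards it, so jitter in the raw positions is damped at the cost of the marker lagging slightly behind; pathfinding or extrapolation, as mentioned above, would go further.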
This could of course have been achieved with a single device by integrating the IndoorAtlas code into the Unity app, but by using two devices I could start playing around straight away & as the Rift will most likely be running from a Windows laptop carried in a backpack it was a good test to use a separate device for collecting the position data.
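On the reading side, the query loop amounts to polling the server for the latest row & parsing the response. A rough sketch of such a client (in Java for consistency with the Android code; the get_location.php endpoint name & the plain-text “x,y” response format are made up for the example):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PositionPoller {
    // Build the query URL for a given device's latest position
    static String queryUrl(String base, String deviceId) {
        return base + "/get_location.php?deviceId="
                + URLEncoder.encode(deviceId, StandardCharsets.UTF_8);
    }

    // Parse a minimal "x,y" plain-text response into a coordinate pair
    static double[] parsePosition(String body) {
        String[] parts = body.trim().split(",");
        return new double[] { Double.parseDouble(parts[0]), Double.parseDouble(parts[1]) };
    }

    // Fetch the device's current position from the server
    static double[] fetchPosition(String base, String deviceId) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(queryUrl(base, deviceId)).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            return parsePosition(in.readLine());
        }
    }
}
```

Polling like this once per frame (or on a timer) is all a camera controller needs to follow the phone around.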
Next steps?
Next comes the fun part – building the app for Windows & walking around with the Rift on, switching between looking at the real world through the cameras & the Unity world when pressing a button!
As you may have seen on YouTube, I’ve recently been experimenting with the beta of IndoorAtlas, a service that allows a smartphone to determine its position within a building using its magnetometer, by measuring the fluctuations in the Earth’s magnetic field caused by metal building materials;
I’m not so much interested in using IndoorAtlas to make a location aware smartphone app, but instead in using it to provide indoor positioning to a Unity application that displays to an Oculus Rift. To do this I had to make the positioning data available to devices other than the phone itself; the IndoorAtlas API allows a device to obtain its own position, but not to make this position available to other devices (obviously there are privacy concerns involved here!).
So as a quick hack (I believe the research term is ‘proof of concept’) I modified the example Eclipse project that comes with the IndoorAtlas Android API download to additionally submit the position data that the phone receives to a MySQL database on one of my servers, which can then be accessed by whatever I see fit. I borrowed a lot of code from here.
On the phone, it’s simply a case of editing the onServiceUpdate method to build an HTTP POST (with the relevant data as POST parameters) & to send it to a piece of server-side PHP which adds or updates the details for the particular phone in the database. The PHP looks something like this;
<?php
// array for JSON response
$response = array();

// check for required fields in POST data
if (isset($_POST['deviceId']) && isset($_POST['buildingId']) && isset($_POST['levelId']) && isset($_POST['floorplanId']) && isset($_POST['latitude']) && isset($_POST['longitude']) && isset($_POST['x']) && isset($_POST['y']) && isset($_POST['i']) && isset($_POST['j']) && isset($_POST['heading']) && isset($_POST['probability']) && isset($_POST['roundtrip']) && isset($_POST['time'])) {

    // include database connect class & connect to the database (connecting
    // first, as mysql_real_escape_string needs an open connection)
    require_once __DIR__ . '/db_connect.php';
    $db = new DB_CONNECT();

    // extract data from POST into variables, escaping each value before
    // it is interpolated into the SQL below
    $deviceId = mysql_real_escape_string($_POST['deviceId']);
    $buildingId = mysql_real_escape_string($_POST['buildingId']);
    $levelId = mysql_real_escape_string($_POST['levelId']);
    $floorplanId = mysql_real_escape_string($_POST['floorplanId']);
    $latitude = mysql_real_escape_string($_POST['latitude']);
    $longitude = mysql_real_escape_string($_POST['longitude']);
    $x = mysql_real_escape_string($_POST['x']);
    $y = mysql_real_escape_string($_POST['y']);
    $i = mysql_real_escape_string($_POST['i']);
    $j = mysql_real_escape_string($_POST['j']);
    $heading = mysql_real_escape_string($_POST['heading']);
    $probability = mysql_real_escape_string($_POST['probability']);
    $roundtrip = mysql_real_escape_string($_POST['roundtrip']);
    $time = mysql_real_escape_string($_POST['time']);

    // check whether a row for this device already exists
    $result = mysql_query("SELECT * FROM devicelocations WHERE deviceId = '$deviceId';");

    // if the deviceId does not already exist in the database, do an INSERT
    if (mysql_num_rows($result) == 0) {
        $result = mysql_query("INSERT INTO devicelocations (deviceId, buildingId, levelId, floorplanId, latitude, longitude, x, y, i, j, heading, probability, roundtrip, time) VALUES ('$deviceId', '$buildingId', '$levelId', '$floorplanId', '$latitude', '$longitude', '$x', '$y', '$i', '$j', '$heading', '$probability', '$roundtrip', '$time');");
    }
    // if the deviceId does already exist in the database, do an UPDATE
    else {
        $result = mysql_query("UPDATE devicelocations SET buildingId = '$buildingId', levelId = '$levelId', floorplanId = '$floorplanId', latitude = '$latitude', longitude = '$longitude', x = '$x', y = '$y', i = '$i', j = '$j', heading = '$heading', probability = '$probability', roundtrip = '$roundtrip', time = '$time' WHERE deviceId = '$deviceId';");
    }

    // report whether the INSERT/UPDATE succeeded
    if ($result) {
        $response["success"] = 1;
        $response["message"] = "Entry successfully inserted or updated.";
    } else {
        $response["success"] = 0;
        $response["message"] = "Entry was not successfully inserted or updated.";
    }
    // echoing JSON response
    echo json_encode($response);
} else {
    // required POST field is missing
    $response["success"] = 0;
    $response["message"] = "Required POST field(s) missing";
    // echoing JSON response
    echo json_encode($response);
}
?>
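For completeness, the phone-side half (building the form-encoded POST inside onServiceUpdate) amounts to something like the following sketch; this isn’t the actual project code, & the update_location.php endpoint name is illustrative:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class LocationUploader {
    // Encode key/value pairs as an application/x-www-form-urlencoded body,
    // matching the $_POST fields the PHP script checks for.
    static String encodeForm(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    // POST one position fix to the server.
    static void post(String serverBase, Map<String, String> fields) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(serverBase + "/update_location.php").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(encodeForm(fields).getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // response is the JSON success/failure message
    }
}
```

On Android the network call has to happen off the main thread (it throws NetworkOnMainThreadException otherwise), e.g. from an AsyncTask or a background handler kicked off inside onServiceUpdate.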
Once the data are in the database, I can access them from pretty much anything I want. As a quick test I put together a simple Web page that displays the floor plan image with a red dot to represent the phone’s last known location as well as dumping the data in text form;
The PHP for this page looks something like this (it’s a quick hardcoded hack, but you get the idea);