Mirrorshades update

It’s been a while since I posted anything about the Mirrorshades project, so here are a few things I’ve been working on!

rift

I’ve stopped using the PlayStation Eye cameras because the drivers were too buggy; I would frequently sit down to work on the project only to find that Unity couldn’t find the DLL, & there was seemingly no fix other than to repeatedly reinstall & restart for hours until they started working.

I’m now using a pair of Logitech C310 webcams, & whilst their resolution is higher than the PlayStation Eye’s (1280×960 vs 640×480), their refresh rate is lower (30Hz vs 60Hz). To my eyes, the PlayStation Eye cameras actually gave a nicer experience, but of course when they weren’t working they were no use!

I’m using the same 3D printed clips (red) with the cameras epoxied to thermoplastic (white), so they can be adjusted via the nuts & bolts with rubber washers. Once again, inspiration taken from William Steptoe’s AR Rift project.

I quickly measured the latency introduced by the C310 webcams (& then realised that it would’ve been interesting to have done the same experiment on the PlayStation Eye cameras!). I placed the Rift, with the lenses removed, facing an LCD monitor displaying a timer from flatpanels.dk. I placed a camera behind the Rift such that it could see both the monitor & the Rift’s screen, then cranked the sensitivity up on the camera so that it could record 50fps video with a 1/4000th shutter speed.

The monitor & the Rift were both refreshing at 60fps, each frame lasting 16.67ms, whilst the 1/4000 shutter speed on the camera meant that the shutter was open for 0.25ms. The response time of the monitor (quoted by the manufacturer as 8ms GTG) was evidently much lower than that of the Rift: the tenths & even hundredths digits on the monitor were usually legible in each frame of the video, whereas on the Rift the hundredths & thousandths digits were always illegible. So I went through the video frame-by-frame looking for adjacent frames where a transition from one tenths digit to the next was good enough to read on the Rift & the hundredths/thousandths digits were good enough to read on the monitor, such as this pair:

[image: 00000.MTS.Still001]

[image: 00000.MTS.Still002]

From these we can infer that the tenths digit on the Rift screen (right eye) changed from 9 to 0 sometime between the monitor showing 181 & 198 (ms past its own rollover), meaning a latency of between 181ms & 198ms. Out of 11 pairs of frames like this, 7 pairs showed this 181-198ms latency, whilst 4 pairs showed 198-215ms, as in the pair below:

[image: 00000.MTS.Still015]

[image: 00000.MTS.Still016]

I was also able to take some still photos with the same 1/4000th shutter speed, all of which showed latencies in the same 181-215ms range (3 images following); however, as timing the shots to catch legible digits was entirely down to luck, it was easier to shoot 50fps video to get enough frames to work from.

[image: still1]

[image: still2]

[image: still3]

This latency of 181-215ms is substantially worse than the 60ms latency between head movement & the resultant changes being displayed that is often quoted as the upper limit for an acceptable VR experience. The difference between this camera latency & the tracker-to-VR latency (quoted as typically 30-60ms for applications running at 60fps on the Rift DK1, same link) will probably show up in the experimental results when users actually try out the platform.
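To spell out the arithmetic behind that bound, here’s the calculation from a single frame pair, using the readings from the first pair above:

```csharp
using System;

// Working through one frame pair: the Rift's tenths digit is seen rolling over
// between two adjacent 50fps video frames, & in those same frames the monitor
// (showing the live timer) reads 181ms & 198ms past its own rollover.
class LatencyBound
{
    static void Main()
    {
        double readingBeforeMs = 181;  // last frame before the rollover appears on the Rift
        double readingAfterMs  = 198;  // first frame where the rollover has appeared on the Rift
        double boundWidthMs = readingAfterMs - readingBeforeMs;  // 17ms, roughly one 16.67ms refresh

        Console.WriteLine("Camera-to-display latency between " + readingBeforeMs + "ms and "
                          + readingAfterMs + "ms (bound width " + boundWidthMs + "ms)");
    }
}
```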

sallies

I’ve mapped St Salvator’s Chapel using IndoorAtlas. We plan to use this site for our case studies as it fits with one of my research group’s interests, cultural heritage, whilst also providing a good example of where mobile cross reality is useful. I wasn’t expecting IndoorAtlas to work well in this building, as it doesn’t have a metal frame, but I was pleasantly surprised. Perhaps the addition of central heating & electricity later in the building’s history helped?

[image: diagram]

Other than that, I’ve been focussing on theoretical work & designing experiments – after all, the platform is no good without evaluation!

Mirrorshades project update – first walking test!

Did some more work on Mirrorshades & reached the point where I could actually give it an early go, walking around the building I work in! IndoorAtlas didn’t seem to behave as well as in previous experiments, but the fact that I didn’t walk into any walls or fall on my face bodes well for future progress.

I’m not actually pressing anything on the phone, just tapping the screen occasionally to stop it from sleeping.

The setup isn’t exactly graceful at the moment… Add to this the Xbox controller used to toggle between real & virtual, plus the Android smartphone that provides the real-world position.

[image: IMG_20140129_181553]

Mirrorshades project update – new camera mounts, IndoorAtlas into Unity

Scroll down for a fun video if you don’t want to read ;)

A long-overdue update on Mirrorshades, my project that aims to let you walk around wearing an Oculus Rift, using cameras to see your real surroundings, whilst the IndoorAtlas indoor positioning system tracks your position & moves you around a Unity environment that you can switch to viewing through the Rift whenever you want.

New camera mounts

I realised from William Steptoe’s Rift-based AR platform (incidentally a much more professionally approached endeavour than mine!) that I had made a glaring error with my camera mount by having the cameras horizontal rather than vertical. The Rift’s 1280×800 display is split down the middle into two 640×800 segments, one for each eye, so the area that each camera feed renders to is actually ‘portrait’ rather than ‘landscape’. So I went back to the 3D printer & made some new mounts.
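For anyone unfamiliar with the DK1’s layout, here’s a minimal Unity sketch of that split. It’s illustrative only, not the project’s actual rendering code, & it assumes the two eye cameras are assigned in the Inspector:

```csharp
using UnityEngine;

// Illustrative only: splits the DK1's 1280x800 panel into two side-by-side
// 640x800 viewports, one per eye, which is why each eye's image is 'portrait'.
public class EyeViewports : MonoBehaviour
{
    public Camera leftEye;    // assumed to be assigned in the Inspector
    public Camera rightEye;

    void Start()
    {
        // Viewport rects are normalised: each eye gets half the width & the full height.
        leftEye.rect  = new Rect(0.0f, 0.0f, 0.5f, 1.0f);   // left 640x800
        rightEye.rect = new Rect(0.5f, 0.0f, 0.5f, 1.0f);   // right 640x800
    }
}
```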

[image: rift]

They’re much simpler & still allow the interocular/interpupillary distance to be altered. I also switched from metal hex spacers & washers to rubber washers, which both makes toe-in adjustments easier & keeps the sensors closer to the eyes, so the ‘eyes on stalks’ feeling of the cameras sitting physically several inches in front of your eyes should be marginally reduced compared to the old mount.

[image: back]

[image: side]

IndoorAtlas

I’ve now got things set up such that position data from an Android device running IndoorAtlas is dumped into a MySQL database, & a Unity app containing a nice model of the building I work in (obtained from another student – my 3D modelling skills are much more rudimentary!) queries the database for the device’s current location & uses it to move a camera controller.
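For concreteness, here’s roughly what the Unity side of that polling could look like. This is a minimal sketch only: it assumes the MySQL Connector/NET DLL has been dropped into the Unity project, uses made-up table & column names (positions, x, y, recorded_at) & connection details, & glosses over mapping IndoorAtlas coordinates into the building model’s local space:

```csharp
using System;
using MySql.Data.MySqlClient;   // MySQL Connector/NET DLL added to the Unity project
using UnityEngine;

// Sketch only: periodically fetches the latest position fix written by the
// Android/IndoorAtlas side & moves the camera rig to it.
public class PositionPoller : MonoBehaviour
{
    public string connectionString = "Server=...;Database=...;Uid=...;Pwd=...;"; // placeholder
    public Transform cameraRig;          // the camera controller to move
    public float pollInterval = 0.5f;    // seconds between queries

    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < pollInterval) return;
        timer = 0f;

        // Blocking query on the main thread: fine for a sketch, not for a real app.
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new MySqlCommand(
                "SELECT x, y FROM positions ORDER BY recorded_at DESC LIMIT 1", conn))
            using (var reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    float x = Convert.ToSingle(reader["x"]);
                    float y = Convert.ToSingle(reader["y"]);
                    // IndoorAtlas coordinates would still need converting into the
                    // model's local space; here they're dropped straight onto x/z.
                    cameraRig.position = new Vector3(x, cameraRig.position.y, y);
                }
            }
        }
    }
}
```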

As a quick first test & to show things starting to work, I simply scripted a sphere with a camera pinned above it to move instantaneously to each new position value & built the app for Android so I could quickly try it out without having to carry around a laptop. This is obviously a very rudimentary approach – for a proper implementation you would almost certainly want to move the marker/camera smoothly, maybe with some pathfinding &/or extrapolation.
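For what it’s worth, the simple smoothing mentioned above might look something like this in Unity. It’s a sketch rather than what’s running in the video; the SetTarget method is hypothetical & would be called from wherever new position fixes arrive:

```csharp
using UnityEngine;

// Sketch of smoothing the marker instead of snapping it: ease towards the latest
// position fix at a fixed speed. Pathfinding/extrapolation would sit on top of this.
public class SmoothedMarker : MonoBehaviour
{
    public float moveSpeed = 1.4f;   // metres per second, roughly walking pace
    private Vector3 target;

    void Start()
    {
        target = transform.position;
    }

    // Hypothetical hook: called whenever a new position fix arrives.
    public void SetTarget(Vector3 newFix)
    {
        target = newFix;
    }

    void Update()
    {
        // Constant-speed easing towards the latest fix rather than an instant jump.
        transform.position = Vector3.MoveTowards(transform.position, target, moveSpeed * Time.deltaTime);
    }
}
```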

The position capture could of course have been achieved with a single device by integrating the IndoorAtlas code into the Unity app, but by using two devices I could start playing around straight away, &, as the Rift will most likely be running from a Windows laptop carried in a backpack, it was a good test to use a separate device for collecting the position data.

Next steps?

Next comes the fun part – building the app for Windows & walking around with the Rift on, switching between looking at the real world through the cameras & the Unity world when pressing a button!
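The toggle itself can be very simple. Here’s a sketch of the sort of thing I have in mind, with a couple of assumptions: the camera feed is drawn on a quad via a WebCamTexture, & "ToggleReality" is a hypothetical input button mapped to the Xbox controller:

```csharp
using UnityEngine;

// Sketch only: switches between the webcam view of the real world & the Unity scene
// by showing or hiding the quad that carries the camera feed.
public class RealityToggle : MonoBehaviour
{
    public GameObject cameraFeedQuad;     // quad sitting in front of the eye camera(s)
    private WebCamTexture feed;
    private bool showingRealWorld = true;

    void Start()
    {
        feed = new WebCamTexture();       // default device; select the C310s by name in practice
        cameraFeedQuad.GetComponent<Renderer>().material.mainTexture = feed;
        feed.Play();
    }

    void Update()
    {
        if (Input.GetButtonDown("ToggleReality"))   // hypothetical controller binding
        {
            showingRealWorld = !showingRealWorld;
            cameraFeedQuad.SetActive(showingRealWorld);   // hiding the feed reveals the Unity world
        }
    }
}
```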