St Salvator’s Chapel gigapixel panorama part 3

We finally invested in a KRPano license at work, so now I can show off the gigapixel panoramas I’ve taken in their full resolution glory on the Web :)

Click the rightmost icon to go full screen, click & drag to move, scroll to zoom – I particularly recommend the stained glass windows, as hinted at in the previous post!

St Salvator’s Chapel gigapixel panorama part 2

After Color Efex Pro refused to properly process the full 47,980 x 23,990 panorama (even on a computer with 64GB of RAM & 4x striped SSDs!), I resorted to splitting the image into 4x pieces with overlap between them to apply the post-processing. Not an ideal situation, but after an afternoon’s work I managed it without any visible joins between the pieces.
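The actual splitting was done by hand in an image editor, but for the curious, here’s a rough Python sketch of the idea – cutting the panorama into vertical strips that share an overlapping border, so the pieces can be blended back together after processing. The filenames & overlap width below are just placeholders.

```python
# Rough sketch of the overlapping-split idea (not my actual workflow, which was
# done by hand). Filenames & the overlap width are placeholders.
from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # Pillow's default limit rejects gigapixel images

PIECES = 4
OVERLAP = 1024                  # shared border so the joins can be blended away

src = Image.open("chapel_equirect.tif")
width, height = src.size
piece_width = width // PIECES

for i in range(PIECES):
    left = max(0, i * piece_width - OVERLAP)
    right = min(width, (i + 1) * piece_width + OVERLAP)
    src.crop((left, 0, right, height)).save(f"chapel_piece_{i}.tif")
```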

One of the next stages is to decide how to host it online so it can be viewed in a browser at its full resolution, letting you zoom right in to the detail. My experiments with Google Maps over the summer revealed a 100 megapixel limit, while Round.me seems to have a 25MB file size limit – neither of which is even close to enough for a 1.15 gigapixel image that is 224MB as a jpeg! So far krpano looks like it is going to be the winner, but we’re going to investigate some other options before we invest in a license.
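For a sense of scale, a quick back-of-envelope check using the numbers above:

```python
# Back-of-envelope check on the hosting limits, using the figures in this post.
width, height = 47_980, 23_990
print(width * height)            # 1,151,040,200 pixels ~= 1.15 gigapixels
print(width * height / 100e6)    # ~11.5x Google Maps' 100 megapixel limit
print(224 / 25)                  # the 224MB jpeg is ~9x Round.me's 25MB cap
```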

The Chaplaincy have expressed an interest in using the image in one of next year’s printed booklets; however, as with many equirectangular panoramas, the image looks rather odd when viewed flat – see the first image in the first post. But having the whole sphere around the camera & so many pixels to play with means that I can use PTGui to produce high resolution rectilinear images at arbitrary orientations. It’s a rather odd experience – a bit like taking a photo in post…!
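PTGui handles the reprojection for me, but conceptually this is just a gnomonic (rectilinear) projection out of the equirectangular sphere. Here’s a rough Python/NumPy sketch of that idea – nearest-neighbour sampling only, & the function name, parameters & sign conventions are mine rather than anything from PTGui.

```python
# Conceptual sketch of pulling a rectilinear view out of an equirectangular
# panorama (what PTGui does far better, with proper interpolation).
import numpy as np
from PIL import Image

def rectilinear_view(equi, yaw_deg, pitch_deg, hfov_deg, out_w, out_h):
    """Render a flat perspective view looking at (yaw, pitch) with the given
    horizontal field of view, by sampling the equirectangular source."""
    src = np.asarray(equi)
    src_h, src_w = src.shape[:2]

    # One ray per output pixel, camera space: x right, y down, z forward.
    f = (out_w / 2) / np.tan(np.radians(hfov_deg) / 2)
    xs = np.arange(out_w) - (out_w - 1) / 2
    ys = np.arange(out_h) - (out_h - 1) / 2
    x, y = np.meshgrid(xs, ys)
    z = np.full_like(x, f)

    # Rotate the rays: pitch about the x axis, then yaw about the vertical axis.
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    y2 = y * np.cos(p) - z * np.sin(p)
    z2 = y * np.sin(p) + z * np.cos(p)
    x3 = x * np.cos(t) + z2 * np.sin(t)
    z3 = -x * np.sin(t) + z2 * np.cos(t)

    # Ray direction -> longitude/latitude -> pixel in the source panorama.
    lon = np.arctan2(x3, z3)                        # -pi .. pi
    lat = np.arctan2(-y2, np.hypot(x3, z3))         # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * src_w).astype(int) % src_w
    v = np.clip(((0.5 - lat / np.pi) * src_h).astype(int), 0, src_h - 1)
    return Image.fromarray(src[v, u])
```

With this sketch’s conventions, something like rectilinear_view(Image.open("pano.jpg"), 90, 10, 70, 1920, 1080).save("view.jpg") would frame a 70° view looking 90° to the right & tilted 10° up.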

Here’s one example – the full size version would be 26,542 x 15,272.

17 Chapel (large) west small

And just to demonstrate the level of detail the full panorama has – which of course means you can essentially zoom in when framing these rectilinear images in post – if you click the image beneath & view it at full size (1920 x 1080), the magnified section is shown at 1:1.

17 Chapel (large) east detail

Mirrorshades Stills

I exported some still frames from videos taken during the user studies of my Mirrorshades platform for inclusion in my thesis, so I thought I’d share them here too!

participant-f

participant-f-2

participant-f-3

participant-f-4

participant-f-5

participant-m

participant-m-2

participant-m-3

Mirrorshades Update

As promised, a video showing excerpts of people taking part in Mirrorshades user studies! Once again this is in St Salvator’s Chapel in St Andrews, with the participants comparing what the chapel looks like today against a virtual reconstruction of what it looked like in 1450.

For those who haven’t read previous posts about it, Mirrorshades is…

…a hardware & software platform which allows its user to observe & move around their real world environment whilst wearing a wide field of view, stereoscopic 3D, head mounted display which allows them to alternatively view an immersive virtual reality environment from the equivalent vantage point.

This is achieved by combining an Oculus Rift DK1 with the IndoorAtlas indoor positioning system & the Unity game engine. The user places the Rift on their head & wears a satchel containing a laptop & other accessories over one shoulder. An Android smartphone held in one hand determines their position via IndoorAtlas, such that as they walk around the real environment their position within the Unity environment moves accordingly, while an Xbox controller held in the other hand lets them trigger transitions between real & virtual visuals.
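That position mapping is the only “glue” the system really needs: the phone reports geographic coordinates, & the Unity scene wants local metres. The snippet below isn’t the project’s Unity code or the IndoorAtlas API – just a rough Python sketch of the kind of flat-earth conversion involved, with a placeholder origin point & scale.

```python
# Rough sketch (not the project's Unity C# or the IndoorAtlas API) of turning
# latitude/longitude fixes into local x/z metres for a virtual scene, using a
# flat-earth approximation around a fixed origin. Origin values are placeholders.
import math

ORIGIN_LAT, ORIGIN_LON = 56.3417, -2.7960   # placeholder: somewhere in St Andrews
METRES_PER_DEG_LAT = 111_320.0              # roughly constant over small areas

def geo_to_local(lat, lon):
    """Map a lat/lon fix to (x, z) metres relative to the scene origin."""
    metres_per_deg_lon = METRES_PER_DEG_LAT * math.cos(math.radians(ORIGIN_LAT))
    x = (lon - ORIGIN_LON) * metres_per_deg_lon   # east-west
    z = (lat - ORIGIN_LAT) * METRES_PER_DEG_LAT   # north-south
    return x, z
```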

In the first stage, participants had 3x different transitions to play around with, mapped to the [A] & [B] buttons & the right trigger. Pressing the [A] button would instantly switch the visuals of the Rift from real to virtual;

switching-hard-with-controller

Pressing the [B] button would fade the visuals from real to virtual over the course of about a second;

switching-soft-with-controller

While pulling the right trigger would fade the real visuals by an amount that depends upon how far down the trigger is pulled (so pulling the trigger all the way displays only virtual, releasing the trigger displays only real, pulling the trigger 43% of the way displays a 57/43 real/virtual split, etc.);

switching-analogue-with-controller
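To make the three transitions concrete, here’s a rough sketch of the per-frame blend they drive – Python rather than the actual Unity C#, with the fade time, method names & input plumbing purely illustrative. An alpha of 0 shows only the real (camera) view & an alpha of 1 shows only the virtual reconstruction.

```python
# Not the actual Unity implementation - a Python sketch of the per-frame blend
# the three transitions drive. alpha = 0.0 shows only the real view, 1.0 shows
# only the virtual reconstruction. Names & the fade time are illustrative.
class SwitchController:
    FADE_SECONDS = 1.0            # the [B] fade takes roughly a second

    def __init__(self):
        self.alpha = 0.0          # start fully in the real world
        self.fade_target = None   # where a [B] fade is heading, if any

    def on_a_pressed(self):       # [A]: instant hard switch to the other side
        self.alpha = 0.0 if self.alpha >= 0.5 else 1.0
        self.fade_target = None

    def on_b_pressed(self):       # [B]: begin a gradual fade to the other side
        self.fade_target = 0.0 if self.alpha >= 0.5 else 1.0

    def update(self, dt, trigger=None):
        """Advance one frame. Pass the analogue right-trigger value (0-1) only
        while that transition is in use - e.g. 0.43 gives a 57/43 split."""
        if trigger is not None:
            self.alpha = trigger
            self.fade_target = None
        elif self.fade_target is not None:
            step = dt / self.FADE_SECONDS
            if self.fade_target > self.alpha:
                self.alpha = min(self.fade_target, self.alpha + step)
            else:
                self.alpha = max(self.fade_target, self.alpha - step)
        return self.alpha
```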

Next, participants still had the same 3x transitions available to them, but their view would momentarily switch to virtual for a fraction of a second every 3 seconds;

timed-switch
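In sketch form, this condition is just a periodic override sitting on top of whatever alpha the transitions produce – the flash length below is illustrative, since all the post commits to is “a fraction of a second”.

```python
# Timed-switch condition (sketch): every 3 seconds the view flashes fully
# virtual for a moment, overriding whatever alpha the transitions produced.
FLASH_SECONDS = 0.25   # illustrative; "a fraction of a second"
PERIOD_SECONDS = 3.0

def timed_override(alpha, elapsed_seconds):
    in_flash = (elapsed_seconds % PERIOD_SECONDS) < FLASH_SECONDS
    return 1.0 if in_flash else alpha
```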

Finally, participants had only the [B] button transition, but the base view was changed to a 75/25 real/virtual split & then a 50/50 real/virtual split (here visualized using the [A] button transition instead);

base-opacity-hard-switch
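And the base-opacity conditions amount to remapping the blend so its “real” end is already a mix – again just a sketch, not the actual implementation.

```python
# Base-opacity condition (sketch): the "real" end of the fade is itself a
# real/virtual mix, so the displayed virtual opacity never drops below the
# base value (0.25 for the 75/25 split, 0.5 for the 50/50 split).
def with_base_opacity(alpha, base):
    return base + (1.0 - base) * alpha
```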

There’s a lot of questionnaire & log data to work through, as well as interview transcripts & videos to analyse, but a few opinions seemed to stand out (& will probably be backed up by the data).

Of the 3x transitions, people definitely seemed to prefer the right-trigger one that let them control exactly how fast they switched from real to virtual & back again, plus choose any intermediate position to linger at. The momentary switch to virtual every 3 seconds was almost unanimously disliked. Preference between the 75/25 & 50/50 splits is less clear: some participants said they didn’t like the 50/50 split because they couldn’t see enough of the real environment to feel safe walking around, while others preferred it as it made switching fully to virtual more comfortable & encouraged them to switch more often.

Mirrorshades Update

I’ve just finished putting a bunch of people through the Mirrorshades treatment & will have a highlights video up sometime soon hopefully :)

mirrorshades