OpenSim Region Modules & GPS

Something else I did late last year/early this year: investigating how one might combine OpenSim region modules with GPS readings to control the movement of an avatar, among other exciting things. None of it was developed to fruition, but it may be of interest to somebody.


From the OpenSim wiki

Region modules are .net/mono DLLs. During initialization of the simulator, the OpenSimulator bin directory (bin/) and the scriptengines (bin/ScriptEngines) directory are scanned for DLLs, in an attempt to load region modules stored there.

Region modules execute within the heart of the simulator and have access to all its facilities. Typically, region modules register for a number of events, e.g. chat messages, user logins, texture transfers, and take whatever steps are appropriate for the purposes of the module.

Essentially they allow you to develop much more complex extensions to the OpenSim platform than in-world LSL allows, while being easier & more accessible than directly modifying the client &/or server source code.

Instructions for making your own region modules can be found on the OpenSim wiki (follow the link above), but I also wrote a heavily commented version of the boilerplate code for a shared region module that might make it easier to understand which parts you actually need to implement/change to get a basic region module doing something. This was the first time I had worked with C#, so apologies if some of the comments seem superfluous ;)

Instead of having hundreds of lines of code in this post, I’ve put all of the examples in public Bitbucket repositories – here is the first one for the boilerplate region module code. All you really need to do is change names in a few places & then add some functionality starting in PostInitialise() to get a basic region module that can result in some visible effect in world.
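
To give a flavour of it, here is a stripped-down sketch of a shared region module based on the ISharedRegionModule interface. The exact details (& the Mono.Addins metadata that newer OpenSim versions expect) vary between versions, so treat this as an illustration & the commented boilerplate in the repository as the thing to actually start from.

using System;
using System.Collections.Generic;
using Nini.Config;
using OpenSim.Region.Framework.Interfaces;
using OpenSim.Region.Framework.Scenes;

namespace ExampleModule {

    // Note: recent OpenSim versions also expect Mono.Addins [Extension] metadata on the
    // class - see the wiki page linked above for the version you are running.
    public class ExampleSharedModule : ISharedRegionModule {

        // Scenes (regions) handed to us by the simulator, so later code can act on all of them.
        private readonly List<Scene> m_scenes = new List<Scene>();

        public string Name { get { return "ExampleSharedModule"; } }

        // Returning null means no other module can replace this one.
        public Type ReplaceableInterface { get { return null; } }

        // Called once, before any regions exist - read configuration here.
        public void Initialise(IConfigSource config) { }

        // Called once for every region hosted by this simulator instance.
        public void AddRegion(Scene scene) { m_scenes.Add(scene); }

        // Called when a region has finished loading - a good place to hook scene events.
        public void RegionLoaded(Scene scene) { }

        public void RemoveRegion(Scene scene) { m_scenes.Remove(scene); }

        // Called once after all regions have been added - start the module's behaviour here.
        public void PostInitialise() { }

        public void Close() { }
    }
}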

One of the most basic visible effects is making something move & this example does just that. Despite its name it doesn’t actually have anything to do with GPS yet, it simply creates a spherical prim in each region & moves it a short distance on each tick of a timer. This shows the most basic usage of the SceneObjectGroup class to get a reference to a primitive in a scene & then do something with it (move it).
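
For illustration, the heart of such a module might look something like the following sketch. AddNewPrim, PrimitiveBaseShape.CreateSphere & ScheduleGroupForTerseUpdate are the OpenSim calls I believe are involved, but this is a sketch under those assumptions - the Bitbucket repository holds the working version.

// Lives inside the region module from the previous sketch, using the m_scenes list
// populated in AddRegion(). Also requires 'using OpenMetaverse;' for UUID, Vector3 & Quaternion.
private System.Timers.Timer m_timer;
private readonly List<SceneObjectGroup> m_spheres = new List<SceneObjectGroup>();

public void PostInitialise() {
    foreach (Scene scene in m_scenes) {
        // Rez a sphere prim near the centre of each region, owned by the estate owner.
        SceneObjectGroup sphere = scene.AddNewPrim(
            scene.RegionInfo.EstateSettings.EstateOwner, UUID.Zero,
            new Vector3(128f, 128f, 30f), Quaternion.Identity,
            PrimitiveBaseShape.CreateSphere());
        m_spheres.Add(sphere);
    }

    m_timer = new System.Timers.Timer(1000); // tick once per second
    m_timer.Elapsed += (sender, args) => {
        foreach (SceneObjectGroup sphere in m_spheres) {
            sphere.AbsolutePosition += new Vector3(0.5f, 0f, 0f); // nudge half a metre along the x axis
            sphere.ScheduleGroupForTerseUpdate();                 // push the move out to connected viewers
        }
    };
    m_timer.Start();
}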

Moving on to something that actually begins to involve GPS, or at least begins to make some connection between real-world latitude & longitude values & ‘equivalent’ positions in the virtual world, this next example waits for a latitude & longitude to be reported via a TCP connection & then moves the avatar to the equivalent position in the region. This approach assumes that there is one position within the OpenSim region for which the equivalent real-world latitude & longitude is known (referred to as the ‘anchor’ point) & that the scale of the OpenSim region relative to the real world is also known (eg that every metre in the real world is represented by 1.2m in the OpenSim region).

When a latitude & longitude is received via TCP the haversine formula is used to calculate the real world ‘great circle’ distance between the anchor point & this new point. This distance is then scaled according to the scale of the real world to the OpenSim region & thus the equivalent OpenSim position is calculated as a displacement from the anchor point – to which the avatar is then moved.
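
In code the calculation is fairly compact. The sketch below shows the haversine formula & one illustrative way of turning the result into a region position - the class & parameter names are mine rather than the repository's, & it splits the displacement into east/north components instead of working with a bearing.

using System;
using OpenMetaverse; // for Vector3

static class GeoMapping {

    // Great-circle distance between two lat/lon points in metres (haversine formula).
    public static double Haversine(double lat1, double lon1, double lat2, double lon2) {
        const double R = 6371000.0; // mean Earth radius in metres
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return R * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    // Map a real-world fix to an in-world position, given an anchor point whose real-world
    // coordinates & region position are both known, plus a scale factor (in-world metres
    // per real-world metre).
    public static Vector3 ToRegionPosition(double lat, double lon,
                                           double anchorLat, double anchorLon,
                                           Vector3 anchorPos, float scale) {
        // Hold one coordinate fixed at a time to get signed east & north displacements.
        double east  = Haversine(anchorLat, anchorLon, anchorLat, lon) * Math.Sign(lon - anchorLon);
        double north = Haversine(anchorLat, anchorLon, lat, anchorLon) * Math.Sign(lat - anchorLat);

        return anchorPos + new Vector3((float)(east * scale), (float)(north * scale), 0f);
    }

    private static double ToRadians(double degrees) { return degrees * Math.PI / 180.0; }
}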

This is a fairly ‘rough & ready’ proof-of-concept – the avatar’s name & the position of the anchor point are currently hard-coded & movement across region boundaries isn’t supported. The implementation of the haversine formula & the GPSSanitizer method (which checks for both dms & signed decimal latitude/longitude representations using regular expressions) may be useful in other applications. It has also been tested by manually piping in latitude/longitude values via a simple TCP client, a rudimentary example of which is included below.
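
To give an idea of how the pieces fit together on the simulator side, here is a heavily simplified sketch of the listening loop. The port, the hard-coded avatar name, the TrySanitize stand-in for GPSSanitizer, the Anchor*/Scale fields & the use of ScenePresence.Teleport are all illustrative assumptions rather than a copy of the repository code.

// Run on a background thread started from PostInitialise(). Requires System.IO, System.Net,
// System.Net.Sockets & System.Text. TrySanitize stands in for the module's GPSSanitizer;
// Teleport is assumed to do an in-region move (setting AbsolutePosition is an alternative).
private void ListenForFixes(Scene scene) {
    TcpListener listener = new TcpListener(IPAddress.Any, 13000);
    listener.Start();

    while (true) {
        using (TcpClient client = listener.AcceptTcpClient())
        using (StreamReader reader = new StreamReader(client.GetStream(), Encoding.ASCII)) {
            string line = reader.ReadToEnd();

            double lat, lon;
            if (!TrySanitize(line, out lat, out lon))
                continue; // ignore anything that isn't a recognisable lat/lon

            // Convert to a region position via the anchor point & scale (GeoMapping is sketched above).
            Vector3 target = GeoMapping.ToRegionPosition(lat, lon, AnchorLat, AnchorLon, AnchorPos, Scale);

            // Find the (currently hard-coded) avatar & move it.
            scene.ForEachScenePresence(sp => {
                if (!sp.IsChildAgent && sp.Firstname == "Test" && sp.Lastname == "Avatar")
                    sp.Teleport(target);
            });
        }
    }
}

The simple TCP client used to pipe test values in follows.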

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Net.Sockets;

namespace TCPClient {

    /*
     * A very simple TCP client test program. When run (with no command line arguments) it will send a message from the CLI to
     * localhost port 13000 via a TCP connection. The message, address & port can be changed by changing the hardcoded values.
     */
    class SimpleTcpClient {

        static void Main(string[] args) {
            Console.WriteLine("Enter GPS value to move to (eg '56.340626, -2.808015' or '56 20 26 N, 2 48 28 W'");
            string s = Console.In.ReadLine();
            Connect("127.0.0.1", s);
        }

        static void Connect(String server, String message) {
            try {
                // Create a TcpClient.
                // Note, for this client to work you need to have a TcpServer 
                // connected to the same address as specified by the server, port
                // combination.
                Int32 port = 13000;
                TcpClient client = new TcpClient(server, port);

                // Translate the passed message into ASCII and store it as a Byte array.
                Byte[] data = System.Text.Encoding.ASCII.GetBytes(message);

                // Get a client stream for reading and writing.
                NetworkStream stream = client.GetStream();

                // Send the message to the connected TcpServer. 
                stream.Write(data, 0, data.Length);

                Console.WriteLine("Sent: {0}", message);

                //send & forget, don't bother waiting for a response

                // Close everything.
                client.Close();
            }
            catch (ArgumentNullException e) {
                Console.WriteLine("ArgumentNullException: {0}", e);
            }
            catch (SocketException e) {
                Console.WriteLine("SocketException: {0}", e);
            }

            Console.WriteLine("\n Press Enter to continue...");
            Console.Read();
        }
    }
}
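
For reference, the two coordinate formats accepted above boil down to a pair of regular expressions. The following is a standalone sketch of what such a sanitiser might look like - it is not the GPSSanitizer from the repository.

using System;
using System.Globalization;
using System.Text.RegularExpressions;

static class GpsParser {

    // Signed decimal pairs, eg "56.340626, -2.808015"
    static readonly Regex DecimalPair = new Regex(
        @"^\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*$");

    // Degrees/minutes/seconds pairs, eg "56 20 26 N, 2 48 28 W"
    static readonly Regex DmsPair = new Regex(
        @"^\s*(\d+)\s+(\d+)\s+(\d+(?:\.\d+)?)\s*([NS])\s*,\s*(\d+)\s+(\d+)\s+(\d+(?:\.\d+)?)\s*([EW])\s*$",
        RegexOptions.IgnoreCase);

    static double D(string s) { return double.Parse(s, CultureInfo.InvariantCulture); }

    // Returns true & fills in signed decimal degrees if the input matches either format.
    public static bool TryParse(string input, out double lat, out double lon) {
        lat = lon = 0;

        Match m = DecimalPair.Match(input);
        if (m.Success) {
            lat = D(m.Groups[1].Value);
            lon = D(m.Groups[2].Value);
            return true;
        }

        m = DmsPair.Match(input);
        if (!m.Success)
            return false;

        lat = D(m.Groups[1].Value) + D(m.Groups[2].Value) / 60.0 + D(m.Groups[3].Value) / 3600.0;
        if (m.Groups[4].Value.ToUpperInvariant() == "S") lat = -lat;

        lon = D(m.Groups[5].Value) + D(m.Groups[6].Value) / 60.0 + D(m.Groups[7].Value) / 3600.0;
        if (m.Groups[8].Value.ToUpperInvariant() == "W") lon = -lon;

        return true;
    }
}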

The final example is something a bit different, but still GPS related. This one takes a latitude & longitude via TCP in the same manner as the previous example, but instead of moving something it queries the Google Maps API for a satellite image centred on this position & applies this image as a texture to a prim that covers the entire region.

This was written to make testing of the previous example easier – it’s easier to visualise whether the movements of the avatar are correct if s/he is walking over imagery of the real world rather than over blank terrain. This was written a long time ago, but I think I had started extending it so that when an avatar moved into a neighbouring region that region would automatically be textured with another satellite image – at any rate the code is in a very unfinished state, but please feel free to harvest any bits that might be useful to you.
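
Fetching the imagery itself is the simple part & looks roughly like the sketch below. The URL parameters come from the public Static Maps documentation (current versions of that API also require an API key, which this old code predates), & converting the downloaded bytes into a JPEG2000 texture asset & applying it to the region-covering prim is left to the repository code.

using System;
using System.Globalization;
using System.Net;

static class SatelliteImagery {

    // Download a satellite image centred on the given lat/lon from the Google Static Maps API.
    // Returns the raw image bytes; turning them into an in-world texture happens elsewhere.
    public static byte[] Fetch(double lat, double lon, int zoom) {
        string url = string.Format(CultureInfo.InvariantCulture,
            "https://maps.googleapis.com/maps/api/staticmap?center={0},{1}&zoom={2}" +
            "&size=512x512&maptype=satellite",
            lat, lon, zoom);

        using (WebClient client = new WebClient())
            return client.DownloadData(url);
    }
}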


It became apparent during these experiments that it would make more sense to handle GPS/accelerometer/magnetometer avatar movement on the client side rather than the server side, so these experiments were abandoned – however, they still serve as an interesting demonstration of what can be achieved with region modules & a bit of imagination :)

In addition, much of the GPSAvatar code will be transferred into my modified Second Life viewer & will help to speed up development there.

Arduino + accelerometer as joystick for Second Life

Update – example code available here.

Here’s something I hacked together earlier this year & although I didn’t end up developing it further it may prove interesting to others.


First off a quick video showing the Arduino + accelerometer ‘joystick’ controlling the avatar & then the flycam in the official Second Life client. If this piques your interest, read on to find out how (& why) it was done.

I wanted to investigate how I might use real world orientation data, such as that recorded by an accelerometer attached to an Arduino, to control a Second Life avatar without having to modify the source code of the Second Life client. This approach would have two major advantages:

  • conceivably much less work required
  • compatibility with any Second Life client sans modification – no reliance on a bespoke modified client

Unfortunately the official Second Life client (upon which all third party clients are based) doesn’t really present any interfaces for input devices apart from the usual mouse, keyboard & joystick. But some work that my colleague John was doing at the time, getting Xbox controllers to move avatars using the joystick interface, got me thinking, & a Google search for ‘Arduino joystick’ led me to the Arduino UNO Joystick HID firmware based on LUFA (the Lightweight USB Framework for AVRs). By reprogramming the USB-to-serial converter, this firmware allows an Arduino to appear to a computer as a standard USB HID joystick instead of as a serial device. Of course this ‘joystick’ can in fact be used by any program/game, not just Second Life.

Note: this is only possible with the Arduino Uno & Mega, which use an ATmega chip for USB-to-serial conversion. It does not work with older Arduinos, such as the Duemilanove, which use an FTDI chip. My experiments used an Uno R3, which has the ATmega16U2; check compatibility before you attempt the following with a Uno/Mega R1/R2, which use the ATmega8U2.

Reprogramming the ATmega16U2 involves putting the Arduino into DFU mode & using dfu-programmer. Unfortunately the latest version of dfu-programmer (0.5.4) doesn’t know about the ATmega16U2, so it has to be patched. Grab the latest dfu-programmer source & apply this patch as discussed in this thread on the Arduino.cc forums.

With your Arduino connected to your computer as normal, you should see something similar to the following if you run lsusb.

[root@flatline /]# lsusb
Bus 003 Device 007: ID 2341:0043 Arduino SA Uno R3 (CDC ACM)

To enter DFU mode, find the 6-pin AVR header near the USB socket & briefly connect the 2 pins closest to the USB socket (see picture beneath, click for full size) using the tip of a screwdriver, a paperclip, a piece of wire, etc.

The Arduino should no longer register in lsusb, but a mysterious Atmel Corp. device should have appeared in its place. The serial port (eg /dev/ttyACM0) should also have disappeared.

[root@flatline /]# lsusb
Bus 003 Device 011: ID 03eb:2fef Atmel Corp.

You can then go ahead & erase the original firmware & flash the joystick HID firmware.

[root@flatline /]# dfu-programmer atmega16u2 erase
[root@flatline /]# dfu-programmer atmega16u2 flash Arduino-joystick-0.1.hex
Validating...
4076 bytes used (33.17%)
[root@flatline /]# dfu-programmer atmega16u2 reset

And you’re done! At this point you will need to physically disconnect & reconnect the Arduino for the computer to recognise it as a joystick. After you have done so, lsusb should report something like this.

[root@flatline /]# lsusb
Bus 003 Device 012: ID 03eb:2043 Atmel Corp. LUFA Joystick Demo Application

With the joystick firmware in place you will no longer be able to upload sketches as normal. If you don’t have a USBtiny ISP or similar, you will have to revert to the original USB-to-serial firmware each time you want to upload a new sketch. This process is exactly the same as above, but substituting the joystick .hex file with the USB-to-serial one.

[root@flatline /]# dfu-programmer atmega16u2 erase
[root@flatline /]# dfu-programmer atmega16u2 flash Arduino-usbserial-atmega16u2-Uno-Rev3.hex 
Validating...
4034 bytes used (32.83%)
[root@flatline /]# dfu-programmer atmega16u2 reset

Once again, you will have to disconnect & reconnect the Arduino for your computer to register the change.


As for the sketch itself, mapping accelerometer readings to the joystick axes is simply a case of inserting them into the correct variables in the joyReport struct & sending it over Serial – take a look at the example sketch that comes with the joystick firmware & you should soon see how to do it. Beneath is a rudimentary example using readings from the Honeywell HMC6343 from Sparkfun (see here for how to use this with Arduino), mapping the accelerometer’s roll to the joystick’s X axis & its pitch to the Y axis.

 
#include <Wire.h>
 
#define HMC6343_ADDRESS 0x19
#define HMC6343_HEADING_REG 0x50

// data structure as defined by the joystick firmware
struct {
    int8_t x;
    int8_t y;
    uint8_t buttons;
    uint8_t rfu;
} joyReport;

void setup() {
  Wire.begin();          // initialize the I2C bus
  Serial.begin(115200);  // initialize the serial bus
}
 
void loop() {
  byte highByte, lowByte;
 
  Wire.beginTransmission(HMC6343_ADDRESS);    // start communication with HMC6343
  Wire.write(0x74);                           // set HMC6343 orientation
  Wire.write(HMC6343_HEADING_REG);            // send the address of the register to read
  Wire.endTransmission();
 
  Wire.requestFrom(HMC6343_ADDRESS, 6);       // request six bytes of data from the HMC6343
  while(Wire.available() < 6);                // busy wait until all six bytes have arrived
 
  highByte = Wire.read();
  lowByte = Wire.read();
  float heading = ((int16_t)(((uint16_t)highByte << 8) | lowByte)) / 10.0; // heading in degrees
 
  highByte = Wire.read();
  lowByte = Wire.read();
  float pitch = ((int16_t)(((uint16_t)highByte << 8) | lowByte)) / 10.0;   // pitch in degrees (signed)
 
  highByte = Wire.read();
  lowByte = Wire.read();
  float roll = ((int16_t)(((uint16_t)highByte << 8) | lowByte)) / 10.0;    // roll in degrees (signed)

  joyReport.buttons = 0;
  joyReport.rfu = 0;
  
  joyReport.x = constrain(((int)(map(roll, -90, 90, -100, 100))), -100, 100);
  joyReport.y = constrain(((int)(map(pitch, -90, 90, -100, 100))), -100, 100);
  
  Serial.write((uint8_t *)&joyReport, 4);

  delay(100); // do this at approx 10Hz
}

When you fire up your Second Life client (either the official client or a third-party client), go into Preferences -> Move & View -> Other Devices to open the Joystick Configuration window & you should see something like the screenshot beneath (click for full size). Note that ‘Arduino Arduino Joystick’ has been recognised – however, a limitation of the client is that it only recognises the first joystick device connected to the computer. Depending on how you have mapped your axes & what you want to control, you will have to change the numbers in this window accordingly – with the above sketch 0 is the X axis & 1 is the Y axis (-1 disables a control).


In the end this approach proved to be unsuitable for my purposes, due to the difficulty of mapping readings to discrete virtual world orientations rather than to relative movements from the previous orientation. But it was still interesting to do :)

Open Virtual Worlds at the University of St Andrews

I thought it was time to start posting something here other than the occasional photo (not that I even do that very often…), so I’m going to start writing about the exciting things I’m doing for my PhD. There is a new category, ‘Academic’, for these posts, so you can easily follow/ignore them.


I am part of the Open Virtual Worlds group at the School of Computer Science at the University of St Andrews. We have an official blog, we own the openvirtualworlds.org domain & we even have a Facebook page because that’s the done thing these days. Stolen from the blog…

Open Virtual Worlds are multi-user 3D environments within which users are represented by the proxy of an avatar. They are similar to multi-player computer games but differ in the important respect that their appearance, interactive characteristics, content and purpose are all programmable. In addition they can act as a portal for organising multiple media, including web pages, video streams, textual documents and simulations.

Unlike computer games they have no pre-set goals; users or groups of users are free to make up their own goals. They offer the potential of providing the core of the future 3D Internet. Our research addresses issues that need to be addressed for this potential to be realised.

My interest lies in the concept of simultaneous presence in real & virtual environments via the cross reality paradigm & investigating solutions to the vacancy problem – the inability with current technologies to simultaneously immerse oneself in both the real world & a complete virtual world.

This is different to the concept of augmented reality, which many people are now familiar with thanks to the popularity of augmented reality smartphone apps: cross reality deals with the combination of a complete virtual world with the real world, rather than the sparse digital augmentations of the real world that augmented reality provides.

I am investigating this concept via a case study that builds upon existing work within the Open Virtual Worlds group, by developing a system that allows visitors to the ruined cathedral at St Andrews to simultaneously explore our virtual world reconstruction of the cathedral in a natural & intuitive manner via a tablet computer, in a project dubbed the Virtual Time Window (VTW).

Stay tuned for some more frighteningly exciting updates on what I’m working on, along with details of how you can log into our reconstructions & explore them yourselves!