Introduction
This is the second article in a series explaining how to build Augmented Reality applications with MonoTouch. My first article covered the basics; now it's time to move on to sensor integration. iPhones prior to the iPhone 4 had problems providing consistent and accurate sensor data, and even with the later models (4 and 4S) some manual work is still required.
Luckily, this has changed with the advent of iOS 5, which introduces reference frames for attitude (note that "attitude" refers to the orientation of a body relative to a given frame of reference). By setting the desired reference frame on the MotionManager we can obtain the attitude relative to that frame. If we accept the CPU overhead, we can use the CMAttitudeReferenceFrameXArbitraryCorrectedZVertical reference frame and collect the data needed for our application logic. Data collected this way is by far the most accurate the device can provide: readings from all available sensors are fused to achieve optimal results.
The only thing we have to do is to utilize all available sensor information and present it to the user.
Note that we should have only one instance of each sensor per application - a single shared instance. This is accomplished using the Singleton pattern.
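A minimal sketch of how such a shared instance might be exposed through the AppDelegate (the lazy-initialization style is an assumption; only the DeviceListenerShared property name appears in the delegate code later in this article):

```csharp
// Sketch: a single shared DeviceListener exposed through the AppDelegate,
// so every class in the application reads the same sensor state.
public partial class AppDelegate : UIApplicationDelegate
{
    DeviceListener deviceListener;

    // Lazily created on first access; all callers share this one instance.
    public DeviceListener DeviceListenerShared {
        get {
            if (deviceListener == null)
                deviceListener = new DeviceListener ();
            return deviceListener;
        }
    }
}
```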
Don't forget to read comments in the AppDelegate:
// You have 17 seconds to return from this method, or iOS will terminate your application.
We want to monitor our location using GPS and our attitude using the other sensors. GPS data is retrieved through the LocationManager by creating a LocationManagerDelegate class. We simply need to override the UpdatedLocation and UpdatedHeading methods to populate the location values.
Using LocationManagerDelegate

using System;
using MonoTouch.CoreLocation;

namespace iPhoneApp
{
    public class LocationManagerDelegate : CLLocationManagerDelegate
    {
        AppDelegate app;

        public LocationManagerDelegate (AppDelegate appDelegate) : base ()
        {
            this.app = appDelegate;
        }

        public override void UpdatedLocation (CLLocationManager manager, CLLocation newLocation, CLLocation oldLocation)
        {
            this.app.DeviceListenerShared.Altitude = newLocation.Altitude;
            this.app.DeviceListenerShared.Longitude = newLocation.Coordinate.Longitude;
            this.app.DeviceListenerShared.Latitude = newLocation.Coordinate.Latitude;
            this.app.DeviceListenerShared.HorizontalAccuracy = newLocation.HorizontalAccuracy;
            this.app.DeviceListenerShared.VerticalAccuracy = newLocation.VerticalAccuracy;
        }

        public override void UpdatedHeading (CLLocationManager manager, CLHeading newHeading)
        {
            this.app.DeviceListenerShared.TrueHeading = newHeading.TrueHeading;
            this.app.DeviceListenerShared.HeadingAccuracy = newHeading.HeadingAccuracy;
        }
    }
}
Note the HorizontalAccuracy and VerticalAccuracy values - you can decide to propagate readings only if they are accurate enough.
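As a sketch, such a guard could be added at the top of UpdatedLocation (the 100 m threshold is an assumed example value, not a recommendation from the original code):

```csharp
// Sketch: inside UpdatedLocation, skip fixes that are invalid
// (negative accuracy means the fix is unusable) or too coarse.
const double MaxAcceptableAccuracy = 100.0; // meters, assumed threshold

if (newLocation.HorizontalAccuracy < 0 ||
    newLocation.HorizontalAccuracy > MaxAcceptableAccuracy)
{
    return; // do not propagate an unreliable fix
}
```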
DeviceListener
To start using the sensors you need to set up parameters for each sensor, start them, and handle value changes.
The location and orientation of the device should always be accessible. Additionally, we should be notified when sensor values change, be able to configure the sensors, activate and deactivate them on demand, and lock the user's position on demand. Because the device's heading orientation can change (when the phone is rotated), we also need the ability to restart all sensors and update the heading orientation and our reference frame.
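A sketch of how a rotation could be handled (the method name OnOrientationChanged is hypothetical; HeadingOrientation is the CLLocationManager property that tells Core Location which device edge points "up" for heading purposes):

```csharp
// Sketch: keep the compass heading and the motion reference frame
// aligned with the device's current orientation after a rotation.
void OnOrientationChanged (CLDeviceOrientation newOrientation)
{
    if (locationMngr != null)
        locationMngr.HeadingOrientation = newOrientation;

    // Re-establish the attitude reference frame for the new orientation.
    RestartAllMotionUpdates ();
}
```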
We will proceed by initializing our sensors:
private void InitMonitors()
{
//Init LocationManager
if (locationMngr == null)
{
throw new NullReferenceException("Cannot start monitor. LocationManager is null!");
}
locationMngr.Purpose = "Sample Application Purpose";
locationMngr.DesiredAccuracy = CLLocation.AccuracyBest;
locationMngr.HeadingFilter = -1;
locationMngr.DistanceFilter = -1;
//Init MotionManager
if (motionMngr == null)
{
throw new NullReferenceException("Cannot start monitor. MotionManager is null!");
}
motionMngr.ShowsDeviceMovementDisplay = true;
motionMngr.AccelerometerUpdateInterval = 1.0/10.0;
motionMngr.GyroUpdateInterval = 1.0/10.0;
motionMngr.DeviceMotionUpdateInterval = 1.0 / 10.0;
}
We define the application purpose string, which is displayed in the info window when the application asks the user for permission to access location services, and proceed by setting the desired accuracy and the update interval of each sensor. In our code all LocationManager accuracy settings are set to the best available accuracy.
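If battery life matters more than precision, the same properties can be tuned down instead; the values below are assumed examples, not settings from the original code:

```csharp
// Sketch: a battery-friendlier alternative configuration.
// Report location only after the user moves at least 10 m,
// and heading only when it changes by 5 degrees or more.
locationMngr.DesiredAccuracy = CLLocation.AccuracyNearestTenMeters;
locationMngr.DistanceFilter = 10.0;  // meters
locationMngr.HeadingFilter = 5.0;    // degrees
```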
Now we can start the sensors:
/// <summary>
/// Starts the device position monitor.
/// </summary>
public void StartDevicePositionMonitor ()
{
    InitMonitors ();

    //Start monitoring updates
    if (locationMngr != null)
    {
        locationMngr.StartUpdatingLocation ();
    }
    if (motionMngr != null)
    {
        //Attach handlers
        RestartAllMotionUpdates ();
    }
}
Since we need to be able to restart the motion sensors, each sensor is stopped before being started again:
public void RestartAllMotionUpdates ()
{
    if (motionMngr != null)
    {
        RestartMotionUpdates ();
        motionMngr.StopAccelerometerUpdates ();
        motionMngr.StartAccelerometerUpdates ();
        motionMngr.StopGyroUpdates ();
        motionMngr.StartGyroUpdates ();
        motionMngr.StopMagnetometerUpdates ();
        motionMngr.StartMagnetometerUpdates ();
    }
}
public void RestartMotionUpdates ()
{
    if (motionMngr != null)
    {
        motionMngr.StopDeviceMotionUpdates ();
        // Start updates with our reference frame and deliver the results
        // to MotionData_Received on the main queue.
        motionMngr.StartDeviceMotionUpdates (CMAttitudeReferenceFrame.XArbitraryCorrectedZVertical,
            NSOperationQueue.MainQueue, MotionData_Received);
    }
}
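The MotionData_Received handler itself is not shown above; a minimal sketch of what it might do is below. The attitude property names on DeviceListener (Pitch, Roll, Yaw) are assumptions for illustration - CMDeviceMotion.Attitude is expressed relative to the reference frame we selected:

```csharp
// Sketch: handler matching the CMDeviceMotionHandler signature,
// copying the fused attitude readings into the shared listener.
void MotionData_Received (CMDeviceMotion motion, NSError error)
{
    if (error != null || motion == null)
        return;

    var listener = app.DeviceListenerShared;
    listener.Pitch = motion.Attitude.Pitch; // radians
    listener.Roll = motion.Attitude.Roll;   // radians
    listener.Yaw = motion.Attitude.Yaw;     // radians
}
```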
As always, I am attaching a complete code sample to this article.