OSVR-Core
Takes care of the initial room calibration startup step, when we learn the pose of the camera in space and the yaw offset of the IMU.
#include <RoomCalibration.h>
Public Member Functions
RoomCalibration (Eigen::Vector3d const &camPosition, bool cameraIsForward=true)
bool wantVideoData (TrackingSystem const &sys, BodyTargetId const &target) const
    Since during startup we only want video data on a single target, we can save processing power by asking before we compute.
void processVideoData (TrackingSystem const &sys, BodyTargetId const &target, util::time::TimeValue const &timestamp, Eigen::Vector3d const &xlate, Eigen::Quaterniond const &quat)
void processIMUData (TrackingSystem const &sys, BodyId const &body, util::time::TimeValue const &timestamp, Eigen::Quaterniond const &quat)
bool postCalibrationUpdate (TrackingSystem &sys)
    When you have finished feeding data, this method checks whether calibration has completed and performs updates accordingly.
bool calibrationComplete () const
    Accessors are only valid once postCalibrationUpdate() has returned true!
boost::optional< util::Angle > getCalibrationYaw (BodyId const &body) const
    Gets the calibration yaw for the IMU on a body. Returns an empty optional if the given body ID was not the one with the IMU used to calibrate.
Eigen::Isometry3d getCameraPose () const
    Gets the pose of the camera in the room (the transform from camera space to room space).
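The intended call ordering of this interface can be sketched with a hypothetical stand-in (the `MockCalib` type below is illustrative only; the real methods also take a TrackingSystem, timestamps, and Eigen poses):

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in that records calls, purely to illustrate the
// intended ordering: IMU data first, then (gated) video data, then
// postCalibrationUpdate() before using the accessors.
struct MockCalib {
    std::string log;
    void processIMUData() { log += "imu;"; }
    bool wantVideoData() const {
        // Video reports are not useful before any IMU report has arrived.
        return log.find("imu;") != std::string::npos;
    }
    void processVideoData() { log += "video;"; }
    bool postCalibrationUpdate() const { return calibrationComplete(); }
    bool calibrationComplete() const {
        // In this toy model, a single accepted video report "completes"
        // calibration; the real class accumulates a filtered estimate.
        return log.find("video;") != std::string::npos;
    }
};
```

A driver would ask wantVideoData() before spending cycles computing a video-based pose, and would only call getCalibrationYaw()/getCameraPose() after postCalibrationUpdate() returns true.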
Takes care of the initial room calibration startup step, when we learn the pose of the camera in space and the yaw offset of the IMU.
bool osvr::vbtracker::RoomCalibration::postCalibrationUpdate (TrackingSystem &sys)
When you have finished feeding data, this method checks whether calibration has completed and performs updates accordingly.
Coordinate systems involved here:
- i: IMU
- c: Camera
- r: Room

Keep in mind the right-to-left convention: iTc is a transform that takes points from camera space to IMU space. Matching coordinate systems cancel: rTi * iTc = rTc.
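The cancellation rule can be checked with a minimal 2D rigid transform (the `Xform` type below is purely illustrative; the real code uses Eigen types such as Eigen::Isometry3d):

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D rigid transform (yaw + translation), enough to illustrate
// the naming convention: aTb maps b-space points into a-space, so the
// matching inner letters cancel: rTi * iTc = rTc.
struct Xform {
    double yaw = 0, tx = 0, ty = 0;
    // Apply to a point: rotate, then translate.
    void apply(double &x, double &y) const {
        double c = std::cos(yaw), s = std::sin(yaw);
        double rx = c * x - s * y, ry = s * x + c * y;
        x = rx + tx;
        y = ry + ty;
    }
};

// Compose like matrix multiplication: (a * b) applies b first, then a.
Xform compose(Xform const &a, Xform const &b) {
    Xform out;
    out.yaw = a.yaw + b.yaw;
    // Rotate b's translation into a's frame, then add a's translation.
    double c = std::cos(a.yaw), s = std::sin(a.yaw);
    out.tx = c * b.tx - s * b.ty + a.tx;
    out.ty = s * b.tx + c * b.ty + a.ty;
    return out;
}
```

Applying compose(rTi, iTc) to a camera-space point gives the same room-space point as applying iTc and then rTi in sequence.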
We'll start by saying that the camera space orientation is unknown and that the IMU space orientation is aligned with the room, so we want that transformation. (That's what we've kept track of in a running fashion using the filter all along: iTc)
We discard the positional component of iTc because that varies depending on where the device is held during calibration.
If the user has set "camera is forward", we'll then extract the yaw component and create another transform so that the camera is always looking down the z axis of our room space: this will provide a rotation component of rTi.
Finally, we will set the position of the camera based on the configuration.
Because of this yaw adjustment, the IMUs need to know the calibration yaw so they can un-reverse it.
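The yaw-extraction step can be sketched with a hand-rolled quaternion, taking +z as the up axis purely for illustration (the actual axis conventions and Eigen types used by OSVR may differ):

```cpp
#include <cassert>
#include <cmath>

// Minimal unit quaternion (w, x, y, z); illustrative only.
struct Quat { double w, x, y, z; };

// Heading about the +z axis (assumed "up" here for illustration).
double yawOf(Quat const &q) {
    return std::atan2(2.0 * (q.w * q.z + q.x * q.y),
                      1.0 - 2.0 * (q.y * q.y + q.z * q.z));
}

// A rotation about +z by the given yaw, and nothing else.
Quat yawOnly(double yaw) {
    return {std::cos(yaw / 2.0), 0.0, 0.0, std::sin(yaw / 2.0)};
}

// Hamilton product a * b (apply b's rotation, then a's).
Quat multiply(Quat const &a, Quat const &b) {
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}
```

Pre-multiplying by yawOnly(-yawOf(q)) removes exactly the heading component, which is the sense in which an IMU that knows the calibration yaw can "un-reverse" it.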
void osvr::vbtracker::RoomCalibration::processVideoData (TrackingSystem const &sys, BodyTargetId const &target, util::time::TimeValue const &timestamp, Eigen::Vector3d const &xlate, Eigen::Quaterniond const &quat)
Puts an end to the progress dots printed during calibration.
bool osvr::vbtracker::RoomCalibration::wantVideoData (TrackingSystem const &sys, BodyTargetId const &target) const
Since during startup, we only want video data on a single target, we can save processing power by asking before we compute.
- We only want video data once we have IMU data.
- If it's our first video report, the body ID should match the one with the IMU.
- If it's not our first video report, it should be from the same target as our first video report.
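These three rules amount to a small gate; a sketch with simplified, hypothetical types (the real code uses BodyTargetId and consults the TrackingSystem):

```cpp
#include <cassert>
#include <optional>
#include <utility>

// Hypothetical simplified identifiers: a target is a (body, target) pair.
using BodyId = int;
using TargetId = std::pair<BodyId, int>;

struct VideoGate {
    bool haveIMU = false;                // set once IMU data has arrived
    BodyId imuBody = 0;                  // body carrying the calibration IMU
    std::optional<TargetId> firstTarget; // target of first accepted video

    bool wantVideoData(TargetId const &t) const {
        if (!haveIMU) {
            return false; // no video until we have IMU data
        }
        if (!firstTarget) {
            // First video report: body must match the one with the IMU.
            return t.first == imuBody;
        }
        // Later reports: must come from the same target as the first.
        return *firstTarget == t;
    }

    void acceptVideo(TargetId const &t) {
        if (!firstTarget) {
            firstTarget = t;
        }
    }
};
```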