Box Projection Exercise

29 09 2011




My Job

25 09 2011

My role in our group was to get the Kinect working with Unity. We had a feeling this might be the biggest challenge of the project, and it really was. After doing quite a bit of research on sites like Kinect Hacks, I found that there were three things to install: SensorKinect, OpenNI, and NITE. These had to be installed in the correct order and had to be compatible versions in order to work correctly. Once I had these packages installed, I was able to test the Kinect on my computer with some of the included sample programs, which produced images like this:

Once I knew my computer was receiving data from the Kinect, I tested the Unity wrapper. Luckily, the package I downloaded came with some sample Unity projects; the most useful one was Blockman. Once the Kinect has captured your position and calibrated, the sample moves a primitive skeleton based on your movements. This told me that OpenNI already had code that specifically looked for the human form and kept track of where each of your joints was located.

The next step was to locate where in the code the joint data was passed into Unity. The Unity wrapper consists of multiple scripts that all work together to make the Kinect usable. After examining each one, I found that OpenNISkeleton.cs was the script I needed. I located where the script iterates through the joints and saved the values for the Head and RightHand joints into my own static variables, which FollowHead.cs then uses to move the character controller.

OpenNISkeleton.cs

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using OpenNI;

public class OpenNISkeleton : MonoBehaviour
{
	public Transform Head;
	public Transform Neck;
	public Transform Torso;
	public Transform Waist;

	public Transform LeftCollar;
	public Transform LeftShoulder;
	public Transform LeftElbow;
	public Transform LeftWrist;
	public Transform LeftHand;
	public Transform LeftFingertip;

	public Transform RightCollar;
	public Transform RightShoulder;
	public Transform RightElbow;
	public Transform RightWrist;
	public Transform RightHand;
	public Transform RightFingertip;

	public Transform LeftHip;
	public Transform LeftKnee;
	public Transform LeftAnkle;
	public Transform LeftFoot;

	public Transform RightHip;
	public Transform RightKnee;
	public Transform RightAnkle;
	public Transform RightFoot;

	public bool UpdateJointPositions = false;
	public bool UpdateRootPosition = false;
	public bool UpdateOrientation = true;
	public float RotationDamping = 15.0f;
	public Vector3 Scale = new Vector3(0.001f,0.001f,0.001f);

	private Transform[] transforms;
	private Quaternion[] initialRotations;
	private Vector3 rootPosition;

	private SkeletonJointTransformation[] jointData;
	public bool absolute = true;

	// MY VARIABLES
	public static int updateHeadJointCount=0; // used to create the initial positions, store value only once
	public static int updateHandJointCount=0; // used to create the initial positions, store value only once

	public static float myHeadPositionZ;
	public static float myHeadInitPositionZ; //the change in Z will be in respect to this number

	public static float myHandPositionX;
	public static float myHandInitPositionX; //the change in X will be in respect to this number
	public static float myHandPositionY;
	public static float myHandInitPositionY; //the change in Y will be in respect to this number

	public void Awake()
	{
		int jointCount = Enum.GetNames(typeof(SkeletonJoint)).Length + 1; // Enum starts at 1

		transforms = new Transform[jointCount];
		initialRotations = new Quaternion[jointCount];
		jointData = new SkeletonJointTransformation[jointCount];

		transforms[(int)SkeletonJoint.Head] = Head;
		transforms[(int)SkeletonJoint.Neck] = Neck;
		transforms[(int)SkeletonJoint.Torso] = Torso;
		transforms[(int)SkeletonJoint.Waist] = Waist;
		transforms[(int)SkeletonJoint.LeftCollar] = LeftCollar;
		transforms[(int)SkeletonJoint.LeftShoulder] = LeftShoulder;
		transforms[(int)SkeletonJoint.LeftElbow] = LeftElbow;
		transforms[(int)SkeletonJoint.LeftWrist] = LeftWrist;
		transforms[(int)SkeletonJoint.LeftHand] = LeftHand;
		transforms[(int)SkeletonJoint.LeftFingertip] = LeftFingertip;
		transforms[(int)SkeletonJoint.RightCollar] = RightCollar;
		transforms[(int)SkeletonJoint.RightShoulder] = RightShoulder;
		transforms[(int)SkeletonJoint.RightElbow] = RightElbow;
		transforms[(int)SkeletonJoint.RightWrist] = RightWrist;
		transforms[(int)SkeletonJoint.RightHand] = RightHand;
		transforms[(int)SkeletonJoint.RightFingertip] = RightFingertip;
		transforms[(int)SkeletonJoint.LeftHip] = LeftHip;
		transforms[(int)SkeletonJoint.LeftKnee] = LeftKnee;
		transforms[(int)SkeletonJoint.LeftAnkle] = LeftAnkle;
		transforms[(int)SkeletonJoint.LeftFoot] = LeftFoot;
		transforms[(int)SkeletonJoint.RightHip] = RightHip;
		transforms[(int)SkeletonJoint.RightKnee] = RightKnee;
		transforms[(int)SkeletonJoint.RightAnkle] = RightAnkle;
		transforms[(int)SkeletonJoint.RightFoot] = RightFoot;

    }

    void Start()
    {
		// save all initial rotations
		// NOTE: Assumes skeleton model is in "T" pose since all rotations are relative to that pose
		foreach (SkeletonJoint j in Enum.GetValues(typeof(SkeletonJoint)))
		{
			if (transforms[(int)j])
			{
				// we will store the relative rotation of each joint from the gameobject rotation
				// we need this since we will be setting the joint's rotation (not localRotation) but we
				// still want the rotations to be relative to our game object
				initialRotations[(int)j] = Quaternion.Inverse(transform.rotation) * transforms[(int)j].rotation;
			}
		}

		// start out in calibration pose
		RotateToCalibrationPose();
	}

	public void UpdateRoot(Vector3 skelRoot)
	{
        // +Z is backwards in OpenNI coordinates, so reverse it
		rootPosition = Vector3.Scale(new Vector3(skelRoot.x, skelRoot.y, -skelRoot.z), Scale);
		if (UpdateRootPosition)
		{
			transform.localPosition = transform.rotation * rootPosition;
		}
	}

	public void UpdateJoint(SkeletonJoint joint, SkeletonJointTransformation skelTrans)
	{
		// save raw data
		jointData[(int)joint] = skelTrans;

		// make sure something is hooked up to this joint
		if (!transforms[(int)joint])
		{
			return;
		}

		// modify orientation (if confidence is high enough)
        if (UpdateOrientation && skelTrans.Orientation.Confidence > 0.5)
        {
			// Z coordinate in OpenNI is opposite from Unity
			// Convert the OpenNI 3x3 rotation matrix to unity quaternion while reversing the Z axis
			Vector3 worldZVec = new Vector3(-skelTrans.Orientation.Z1, -skelTrans.Orientation.Z2, skelTrans.Orientation.Z3);
			Vector3 worldYVec = new Vector3(skelTrans.Orientation.Y1, skelTrans.Orientation.Y2, -skelTrans.Orientation.Y3);
			Quaternion jointRotation = Quaternion.LookRotation(worldZVec, worldYVec);
			Quaternion newRotation = transform.rotation * jointRotation * initialRotations[(int)joint];

			transforms[(int)joint].rotation = Quaternion.Slerp(transforms[(int)joint].rotation, newRotation, Time.deltaTime * RotationDamping);
        }

		// modify position (if needed, and confidence is high enough)
		if (UpdateJointPositions)
		{
            Vector3 v3pos = new Vector3(skelTrans.Position.Position.X, skelTrans.Position.Position.Y, -skelTrans.Position.Position.Z);
			transforms[(int)joint].localPosition = Vector3.Scale(v3pos, Scale) - rootPosition;
		}

		//HEAD DETECTION
		// Sets the initial value for the head's Z position. Needs to be done only once and not updated!
		if(updateHeadJointCount == 0 && joint == SkeletonJoint.Head){
			myHeadInitPositionZ = skelTrans.Position.Position.Z;
			updateHeadJointCount++;
		}

		// FollowHead script uses these variables so that the camera can move accordingly
		if(joint == SkeletonJoint.Head){
			myHeadPositionZ = skelTrans.Position.Position.Z;
		}

		//HAND DETECTION
		// Sets the initial values for the hand's X and Y position. Stored only once, and only after at least one non-zero hand reading has come in, then never updated!
		if(myHandPositionX != 0 && updateHandJointCount == 0 && joint == SkeletonJoint.RightHand){
			myHandInitPositionX = skelTrans.Position.Position.X;
			myHandInitPositionY = skelTrans.Position.Position.Y;
			updateHandJointCount++;
		}

		// FollowHand script uses these variables so that the camera can move accordingly
		if(joint == SkeletonJoint.RightHand){
			myHandPositionX = skelTrans.Position.Position.X;
			myHandPositionY = skelTrans.Position.Position.Y;
		}

	}

	public void RotateToCalibrationPose()
	{
		foreach (SkeletonJoint j in Enum.GetValues(typeof(SkeletonJoint)))
		{
			if (null != transforms[(int)j])
			{
				transforms[(int)j].rotation = transform.rotation * initialRotations[(int)j];
			}
		}

		// calibration pose is skeleton base pose ("T") with both elbows bent in 90 degrees
		if (null != RightElbow) {
			RightElbow.rotation = transform.rotation * Quaternion.Euler(0, -90, 90) * initialRotations[(int)SkeletonJoint.RightElbow];
		}
		if (null != LeftElbow) {
        	LeftElbow.rotation = transform.rotation * Quaternion.Euler(0, 90, -90) * initialRotations[(int)SkeletonJoint.LeftElbow];
		}
	}

	public Point3D GetJointRealWorldPosition(SkeletonJoint joint)
	{
		return jointData[(int)joint].Position.Position;
	}

	public Hashtable JSONJoint(SkeletonJoint j)
	{

		ArrayList positionList = new ArrayList();
		positionList.Add(jointData[(int)j].Position.Position.X);
		positionList.Add(jointData[(int)j].Position.Position.Y);
		positionList.Add(jointData[(int)j].Position.Position.Z);
		ArrayList orientationList = new ArrayList();
		orientationList.Add(jointData[(int)j].Orientation.X1);
		orientationList.Add(jointData[(int)j].Orientation.X2);
		orientationList.Add(jointData[(int)j].Orientation.X3);
		orientationList.Add(jointData[(int)j].Orientation.Y1);
		orientationList.Add(jointData[(int)j].Orientation.Y2);
		orientationList.Add(jointData[(int)j].Orientation.Y3);
		orientationList.Add(jointData[(int)j].Orientation.Z1);
		orientationList.Add(jointData[(int)j].Orientation.Z2);
		orientationList.Add(jointData[(int)j].Orientation.Z3);
		Hashtable ret = new Hashtable();
		ret.Add("Position", positionList);
		ret.Add("Orientation", orientationList);
		return ret;
	}
    public ArrayList JSONSkeleton()
	{
		ArrayList data = new ArrayList();
		foreach (SkeletonJoint j in Enum.GetValues(typeof(SkeletonJoint)))
		{
			data.Add(this.JSONJoint(j));
		}
		return data;
	}
    public void SkeletonFromJSON(ArrayList data)
	{
		foreach (SkeletonJoint j in Enum.GetValues(typeof(SkeletonJoint)))
		{
			this.JointFromJSON(j, (Hashtable)data[(int)j]);
		}
	}
	public void JointFromJSON(SkeletonJoint j, Hashtable dict) {

		ArrayList positionList = (ArrayList)dict["Position"];

		ArrayList orientationList = (ArrayList)dict["Orientation"];
		SkeletonJointOrientation sjo = new SkeletonJointOrientation();
		sjo.X1 = 1.0f;
		SkeletonJointPosition sjp = new SkeletonJointPosition();
		SkeletonJointTransformation xform = new SkeletonJointTransformation();
		// object -> double ->float is okay, but object->float isn't
		// (the object is a Double)
		sjp.Position = new Point3D((float)(double)positionList[0],
		                           (float)(double)positionList[1],
		                           (float)(double)positionList[2]);
		sjo.X1 = (float)(double)orientationList[0];
		sjo.X2 = (float)(double)orientationList[1];
		sjo.X3 = (float)(double)orientationList[2];
		sjo.Y1 = (float)(double)orientationList[3];
		sjo.Y2 = (float)(double)orientationList[4];
		sjo.Y3 = (float)(double)orientationList[5];
		sjo.Z1 = (float)(double)orientationList[6];
		sjo.Z2 = (float)(double)orientationList[7];
		sjo.Z3 = (float)(double)orientationList[8];
		xform.Orientation = sjo;
		xform.Position = sjp;
		UpdateJoint(j, xform);
	}

}

FollowHead.cs was not a pre-existing script; I wrote it to move the player based on the change in the Head joint. I initially moved the character with its transform instead of the Move function, and it caused a lot of problems. Essentially, the character was moving visibly, but the physics never registered the movement, so the character would never collide with anything even if that object had a collider. Using the character controller's Move function instead solved the problem. Another problem we ran into was that your small size in the second room made you move far too slowly. The value coming in from the Kinect already needed to be divided down so that the change in position wasn't so large. Dividing by 2000 created a good speed for the "big" room, but it wasn't fast enough for the "small" room. I created two static booleans that change the value of magnitude (an int in FollowHead.cs) to a smaller divisor, 400, when you are small and back to 2000 when you are large.

FollowHead.cs

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using OpenNI;

public class FollowHead : MonoBehaviour
{
	private CharacterController controller;
	int magnitude = 2000;

	void Awake ()
	{
		controller = GetComponent<CharacterController>();
	}

	public void LateUpdate()
	{
		float CurrentZ = ((OpenNISkeleton.myHeadPositionZ - OpenNISkeleton.myHeadInitPositionZ) * -1); // current position of the head on Z axis in respect to the original position, multiplied by -1 because it is flipped

		// If you are within 30 units of the original spot (either way) then stop moving; makes the controls a little less touchy
		if (CurrentZ < 30 && CurrentZ > -30) CurrentZ = 0;

		Vector3 forwardFace = transform.TransformDirection(Vector3.forward); //always point forward

		// If you are small you walk far too slowly within the space; a smaller divisor speeds you up because CurrentZ isn't divided by such a big number
		if (decreaseSpeed.BigSpeed && !increaseSpeed.SmallSpeed) magnitude = 2000;
		if (increaseSpeed.SmallSpeed && !decreaseSpeed.BigSpeed) magnitude = 400;
		controller.Move(forwardFace.normalized * (CurrentZ/magnitude));

		print("magnitude = " + magnitude);
	}

}
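The collision problem described above is easiest to see side by side. Here is a minimal sketch (not part of the project) contrasting the transform-based movement I tried first with the CharacterController.Move call that FollowHead.cs ended up using:

using UnityEngine;

public class MoveComparison : MonoBehaviour
{
	private CharacterController controller;

	void Awake()
	{
		controller = GetComponent<CharacterController>();
	}

	void Update()
	{
		Vector3 step = transform.forward * 0.05f;

		// Problematic: writing to the transform skips the controller's collision handling,
		// so the character slides straight through objects even when they have colliders.
		// transform.position += step;

		// What FollowHead.cs does instead: Move() sweeps the controller against the scene,
		// so it stops at (and reports) collisions.
		controller.Move(step);
	}
}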

I also modified the MouseLook.cs script to read in the position of your right hand instead of the mouse. This allows your hand to control the camera angle.

MouseLook.cs

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using OpenNI;

/// MouseLook rotates the transform based on the mouse delta.
/// Minimum and Maximum values can be used to constrain the possible rotation

/// To make an FPS style character:
/// - Create a capsule.
/// - Add the MouseLook script to the capsule.
///   -> Set the mouse look to use LookX. (You want to only turn character but not tilt it)
/// - Add FPSInputController script to the capsule
///   -> A CharacterMotor and a CharacterController component will be automatically added.

/// - Create a camera. Make the camera a child of the capsule. Reset its transform.
/// - Add a MouseLook script to the camera.
///   -> Set the mouse look to use LookY. (You want the camera to tilt up and down like a head. The character already turns.)
[AddComponentMenu("Camera-Control/Mouse Look")]
public class MouseLook : MonoBehaviour {

	public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 }
	public RotationAxes axes = RotationAxes.MouseXAndY;
	public float sensitivityX = 15F;
	public float sensitivityY = 15F;

	public float minimumX = -360F;
	public float maximumX = 360F;

	public float minimumY = -60F;
	public float maximumY = 60F;

	float InitY; //Initial position of the right hand on the Y axis
	float CurrentY; //Current position of the right hand on the Y axis
	float InitX; //Initial position of the right hand on the X axis
	float CurrentX; //Current position of the right hand on the X axis

	float rotationY = 0F;

	void Update ()
	{
		// Get value from OpenNISkeleton script
		InitY = OpenNISkeleton.myHandInitPositionY;
		InitX = OpenNISkeleton.myHandInitPositionX;
		// Current position is relative to the initial position, also divided by 1000 to limit the rotation
		CurrentY = ((OpenNISkeleton.myHandPositionY - InitY)/1000);
		CurrentX = ((OpenNISkeleton.myHandPositionX - InitX)/1000);

		// This range will allow the viewer to have some leeway as to how close they need to be to the initial position to stop/slow down
		if (CurrentY < 0.025 && CurrentY > -0.025) CurrentY = 0;

		if (axes == RotationAxes.MouseXAndY)
		{
			float rotationX = transform.localEulerAngles.y + CurrentX * sensitivityX;

			rotationY += CurrentY * sensitivityY;
			rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);

			transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
		}
		else if (axes == RotationAxes.MouseX)
		{
			transform.Rotate(0, CurrentX * sensitivityX, 0);
		}
		else
		{
			rotationY += CurrentY * sensitivityY;
			rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);

			transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0);
		}

	}

	void Start ()
	{
		// Make the rigid body not change rotation

		if (rigidbody)
			rigidbody.freezeRotation = true;
	}
}

Finally, I needed a couple of scripts to handle the minor changes between the two rooms. increaseSpeed.cs and decreaseSpeed.cs are attached to the portals between the rooms and make these changes: they set the static booleans that change the value of magnitude within the FollowHead script, and they also switch the audio for each room.

increaseSpeed.cs

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using OpenNI;

public class increaseSpeed : MonoBehaviour
{
	public static bool SmallSpeed = false;

	public AudioSource SmallRoomAudio;
	public AudioSource BigRoomAudio;

	void OnTriggerEnter(Collider FollowHead)
    {
		print("Door Collider entered!");

		SmallSpeed = false;
		decreaseSpeed.BigSpeed = true;

		// Switches the audio clip
		SmallRoomAudio.Stop();
		BigRoomAudio.Play();
    }
}

decreaseSpeed.cs

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using OpenNI;

public class decreaseSpeed : MonoBehaviour
{
	public static bool BigSpeed = true;

	public AudioSource SmallRoomAudio;
	public AudioSource BigRoomAudio;

	void OnTriggerEnter(Collider FollowHead)
    {
		print("Book Collider entered!");

		BigSpeed = false;
		increaseSpeed.SmallSpeed = true;

		// Switches the audio clip
		SmallRoomAudio.Play();
		BigRoomAudio.Stop();
    }
}
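One assumption worth making explicit (it isn't stated above): OnTriggerEnter only fires if the portal objects have colliders marked Is Trigger, and both AudioSource fields have to be assigned in the Inspector. A small hypothetical check like this one would flag a mis-set portal at startup:

using UnityEngine;

public class portalSetupCheck : MonoBehaviour
{
	public AudioSource SmallRoomAudio;
	public AudioSource BigRoomAudio;

	void Awake()
	{
		// The portal's collider must be a trigger or OnTriggerEnter will never be called
		if (collider == null || !collider.isTrigger)
			Debug.LogWarning(name + ": portal collider is not marked Is Trigger");

		// Both clips are needed to switch the room audio
		if (SmallRoomAudio == null || BigRoomAudio == null)
			Debug.LogWarning(name + ": assign both AudioSources in the Inspector");
	}
}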

After the controls were working correctly, I made some final changes, such as inserting Autumn’s fan and lantern models into the room and adding the audio clips.





Honey, I Shrunk Something!

25 09 2011

The project is now complete. For the most part we accomplished what we set out to do. The room turned out nicely, and the transition between the “big” room and “small” room is fairly smooth. The rooms are just different enough to give you the sense that a change has occurred, but it is still obvious that you are in the same room. We could use a different piece of audio for the “small” room, though. Although we ran into quite a few problems, the final project only had a couple of glitches. The portal back to the “big” room must be entered at just the right spot in order for you to come back through correctly and at the right speed.

At first, when we began the project, we were all working on everything together. Once we were able to get our hands on a Kinect, we divided up the tasks: my job was to get the Kinect working with Unity, while Autumn modeled and Mike built the Unity scene. The models (and scene) are featured in these screenshots. The code will be discussed in the next post.

When interviewing other people who played our game, we found that they thought it was fun and immersive, but they definitely struggled to learn the controls. When using both your feet and your hand to navigate, it is extremely easy to forget about one or the other. When players concentrated on where the camera was pointed, they forgot to move forward or backward in the space. Also, many times a player would take a step forward and wonder why the camera started dropping; they didn’t remember to keep their hand at the same height while they moved around.

If we had more time to continue working on this project, we would like to develop the story of why you are exploring this bedroom. There would be a more explicit reason to become small and to get large again. Maybe there would be more than just one room to bounce back and forth between: after you get the hang of navigating with your feet and hand, you could find a portal that takes you to larger spaces, on a quest to find something fitting at the end of the trail. To add more interest, we would also like to add animations inside the room; the fan could easily be spinning, or the bugs under the dresser could have moving legs. Finally, we would make the navigation a little more forgiving. For example, if your hand dropped as you moved forward, the system would know not to drop the camera angle.

Here is a demo of the project:





Presentation Slides

23 09 2011





Kinect Control

12 09 2011

After analyzing the scripts included in the Blockman sample, I was able to narrow down where the data for each joint on the skeleton was stored. I decided to focus on the head joint; this way the viewer can simply lean in the direction they want to move within the physical space and see it happen in the virtual one. After isolating the head joint data, I used a series of print statements to make sure the logic was correct.
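The exact print statements aren’t preserved, but the check looked something like this rough sketch, which uses the static head variables from the OpenNISkeleton.cs script shown in the “My Job” post above:

using UnityEngine;

public class headDebug : MonoBehaviour
{
	void Update()
	{
		// How far the head has moved (in Kinect units) from where it started
		float leanZ = OpenNISkeleton.myHeadPositionZ - OpenNISkeleton.myHeadInitPositionZ;
		print("head Z = " + OpenNISkeleton.myHeadPositionZ + ", lean = " + leanZ);
	}
}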

Next, I coded the camera to update according to the data being read in from the Kinect. The result works quite nicely: the further the viewer leans, the faster they accelerate in that direction. As of now the camera does not move along the Y-axis at all. The only thing still missing is rotation.
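The mapping is the same idea that later became FollowHead.cs: the distance the head has drifted from its starting Z position becomes the per-frame step for the character controller. A stripped-down sketch of that idea (the dead zone and divisor here are illustrative, not the final tuned values):

using UnityEngine;

public class leanMoveSketch : MonoBehaviour
{
	public float deadZone = 30f;    // ignore small leans so standing still stays still
	public float divisor = 2000f;   // scales the Kinect's millimetre values down to a step size

	private CharacterController controller;

	void Awake()
	{
		controller = GetComponent<CharacterController>();
	}

	void LateUpdate()
	{
		// Leaning forward decreases Z in the OpenNI data, hence the negation
		float lean = -(OpenNISkeleton.myHeadPositionZ - OpenNISkeleton.myHeadInitPositionZ);
		if (Mathf.Abs(lean) < deadZone) lean = 0f;

		// The further the viewer leans, the larger the step each frame
		controller.Move(transform.forward * (lean / divisor));
	}
}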





From the Ground Up

12 09 2011

Along with figuring out the Kinect, our group has also been modeling the room. We have the major items finished but would like to keep adding smaller items for interest. Then we will begin texturing and tweaking the lights.





Beginning with the Kinect

12 09 2011

It took a while to figure out how to install all of the drivers and middleware to get the Kinect working with my PC. It requires SensorKinect, OpenNI, NITE, and eventually a Unity Wrapper to get it to integrate with Unity 3D. The middleware came with a few samples to make sure the Kinect worked properly:

Stick Figure Sample

PrimeSense NITE Scene Segmentation Viewer

There were also sample Unity projects that came with the wrapper:

This one tracks your hand to control the movement of the sphere.

This one uses a primitive skeleton that matches your body movements.