<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?action=history&amp;feed=atom&amp;title=Kinect</id>
	<title>Kinect - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?action=history&amp;feed=atom&amp;title=Kinect"/>
	<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;action=history"/>
	<updated>2026-05-05T18:48:20Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.32.0</generator>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=115&amp;oldid=prev</id>
		<title>Diegom: /* Temporal aliasing of aperiodic, nonmonotonic motion */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=115&amp;oldid=prev"/>
		<updated>2011-12-16T20:43:28Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Temporal aliasing of aperiodic, nonmonotonic motion&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 20:43, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l40&quot; &gt;Line 40:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 40:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;common &lt;/del&gt;example is a punch by a highly skilled martial artist. (&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Th &lt;/del&gt;motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:punch1.png|border|punch]][[Image:punch2.png|border|punch 2]][[Image:punch3.png|border|punch 3]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. Since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually quite different from the original gesture. 
The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling needs to be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;obvious &lt;/ins&gt;example is a punch by a highly skilled martial artist. (&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;The &lt;/ins&gt;motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;It may be difficult to capture the movements of this guy, for example: http://www.youtube.com/watch?v=qdSY-_qs_mg &lt;/ins&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:punch1.png|border|punch]][[Image:punch2.png|border|punch 2]][[Image:punch3.png|border|punch 3]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. 
Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. Since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually quite different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling needs to be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=114&amp;oldid=prev</id>
		<title>Diegom: /* Temporal aliasing of periodic motion */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=114&amp;oldid=prev"/>
		<updated>2011-12-16T20:37:04Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Temporal aliasing of periodic motion&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 20:37, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l36&quot; &gt;Line 36:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 36:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Since &lt;/del&gt;the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;We assume here &lt;/del&gt;that the Kinect &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;can sample equally well along all three &lt;/del&gt;spatial &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;dimensions&lt;/del&gt;.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. 
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|border|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|border|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;For this particular example, since &lt;/ins&gt;the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;It should be noted at this point &lt;/ins&gt;that the Kinect&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;'s depth sampling performance falls quadratically with the distance from the lens. See the section below on &lt;/ins&gt;spatial &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;resolution errors&lt;/ins&gt;.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. 
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|border|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|border|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=113&amp;oldid=prev</id>
		<title>Diegom at 20:30, 16 December 2011</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=113&amp;oldid=prev"/>
		<updated>2011-12-16T20:30:44Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 20:30, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l33&quot; &gt;Line 33:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 33:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]][[Image:y_g_t.PNG|y(t)]][[Image:z_g_t.PNG|z(t)]]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|x(t)]][[Image:y_g_t.PNG&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|y(t)]][[Image:z_g_t.PNG&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|z(t)]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. 
(We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (Th motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:punch1.png|punch]][[Image:punch2.png|punch 2]][[Image:punch3.png|punch 3]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;/&amp;gt;&lt;/del&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. Since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually quite different from the original gesture. 
The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling needs to be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (Th motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:punch1.png&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|punch]][[Image:punch2.png&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|punch 2]][[Image:punch3.png&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|border&lt;/ins&gt;|punch 3]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. 
In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. Since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually quite different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling needs to be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=112&amp;oldid=prev</id>
		<title>Diegom: /* References */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=112&amp;oldid=prev"/>
		<updated>2011-12-16T20:27:18Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;References&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 20:27, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l53&quot; &gt;Line 53:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 53:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Khoshelham, K. (2011). Accuracy analysis of Kinect depth data. ''ISPRS Workshop Laser Scanning'' (Vol. 38, p. 1). Retrieved December 16, 2011, from http://www.isprs.org/proceedings/XXXVIII/5-W12/Papers/ls2011_submission_40.pdf&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; Fernandez, D. (2011, June 16). Skeletal Tracking Fundamentals. ''Kinect for Windows SDK Quickstarts''. Retrieved December 16, 2011, from http://channel9.msdn.com/Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals#time=1m24s&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Khoshelham, K. (2011). Accuracy analysis of Kinect depth data. ''ISPRS Workshop Laser Scanning'' (Vol. 38, p. 1). Retrieved December 16, 2011, from http://www.isprs.org/proceedings/XXXVIII/5-W12/Papers/ls2011_submission_40.pdf&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; Fernandez, D. (2011, June 16). Skeletal Tracking Fundamentals. ''Kinect for Windows SDK Quickstarts''. Retrieved December 16, 2011, from http://channel9.msdn.com/Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals#time=1m24s&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Smisek, J., Jancosek, M., &amp;amp; Pajdla, T. (2011). 3D with Kinect. Presented at the 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision, Barcelona, Spain. Retrieved from ftp://cmp.felk.cvut.cz/pub/cvl/articles/pajdla/Smisek-CDC4CV-2011.pdf&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=111&amp;oldid=prev</id>
		<title>Diegom: /* Spatial resolution errors */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=111&amp;oldid=prev"/>
		<updated>2011-12-16T20:26:52Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Spatial resolution errors&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 20:26, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l44&quot; &gt;Line 44:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 44:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Khoshelham (2011) reports that the random error of Kinect depth measurements increases quadratically with increasing distance from the sensor. The maximum random error is 4 cm. Khoshelham concludes that at distances beyond the optimal range of 1-3 meters, the quality of the data is degraded by noise and low spatial resolution. Keep this in mind when you plan your motion capture activities with the Kinect. For more &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;about &amp;lt;br /&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Khoshelham (2011) reports that the random error of Kinect depth measurements increases quadratically with increasing distance from the sensor. The maximum random error is 4 cm. Khoshelham concludes that at distances beyond the optimal range of 1-3 meters, the quality of the data is degraded by noise and low spatial resolution. Keep this in mind when you plan your motion capture activities with the Kinect. For more &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;experimental results on Kinect's spatial resolution, see also Smisek, Jancosek, &amp;amp; Pajdla (2011).&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Occlusion errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Occlusion errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=109&amp;oldid=prev</id>
		<title>Diegom at 19:50, 16 December 2011</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=109&amp;oldid=prev"/>
		<updated>2011-12-16T19:50:46Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 19:50, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l32&quot; &gt;Line 32:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 32:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]][[Image:y_g_t.PNG|y(t)]][[Image:z_g_t.PNG|z(t)]]&amp;lt;br /&amp;gt;&amp;lt;br /&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt; &lt;/del&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]][[Image:y_g_t.PNG|y(t)]][[Image:z_g_t.PNG|z(t)]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's motion must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly-skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&amp;lt;br /[[Image:punch1.png|punch]][[Image:punch2.png|punch 2]][[Image:punch3.png|punch 3]]/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency that is greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually significantly different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling must be carefully assessed for any movement that contains very rapid changes in velocity (which is the first derivative of position as a function of time) or acceleration (which is the second derivative). 
For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly-skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&amp;lt;br /&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;gt;&lt;/ins&gt;[[Image:punch1.png|punch]][[Image:punch2.png|punch 2]][[Image:punch3.png|punch 3]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;/&amp;gt;&amp;lt;br &lt;/ins&gt;/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency that is greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. 
And since this component is high amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually significantly different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the usefulness of the Kinect for sampling must be carefully assessed for any movement that contains very rapid changes in velocity (which is the first derivative of position as a function of time) or acceleration (which is the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=108&amp;oldid=prev</id>
		<title>Diegom at 19:47, 16 December 2011</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=108&amp;oldid=prev"/>
		<updated>2011-12-16T19:47:57Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 19:47, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l32&quot; &gt;Line 32:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 32:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;{|&lt;/del&gt;[[Image:x_g_t.PNG|x(t)]] &lt;del class=&quot;diffchange diffchange-inline&quot;&gt; | &lt;/del&gt;[[Image:y_g_t.PNG|y(t)]] &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;| &lt;/del&gt;[[Image:z_g_t.PNG|z(t)]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|} &lt;/del&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect's sampling rate is 30 Hz, the Nyquist theorem requires that the frequency of the finger's motion not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. 
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]][[Image:y_g_t.PNG|y(t)]][[Image:z_g_t.PNG|z(t)]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; &lt;/ins&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect's sampling rate is 30 Hz, the Nyquist theorem requires that the frequency of the finger's motion not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;{|&lt;/del&gt;[[Image:punch1.png|punch]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|&lt;/del&gt;[[Image:punch2.png|punch 2]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|&lt;/del&gt;[[Image:punch3.png|punch 3]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|}&lt;/del&gt;/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually very different from the original gesture. 
The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the Kinect's usefulness for sampling must be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&lt;/ins&gt;[[Image:punch1.png|punch]][[Image:punch2.png|punch 2]][[Image:punch3.png|punch 3]]/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. 
(An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually very different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the Kinect's usefulness for sampling must be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=107&amp;oldid=prev</id>
		<title>Diegom: /* Temporal aliasing of aperiodic, nonmonotonic motion */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=107&amp;oldid=prev"/>
		<updated>2011-12-16T19:46:18Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Temporal aliasing of aperiodic, nonmonotonic motion&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 19:46, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l36&quot; &gt;Line 36:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 36:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;[[Image:punch1.png|punch]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&lt;/del&gt;[[Image:punch2.png|punch 2]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&lt;/del&gt;[[Image:punch3.png|punch 3]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br &lt;/del&gt;/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually very different from the original gesture. 
The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the Kinect's usefulness for sampling must be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;{|&lt;/ins&gt;[[Image:punch1.png|punch]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|&lt;/ins&gt;[[Image:punch2.png|punch 2]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|&lt;/ins&gt;[[Image:punch3.png|punch 3]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|}&lt;/ins&gt;/&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. 
In the case of this example of a martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude signal with a high frequency might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceptually very different from the original gesture. The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the Kinect's usefulness for sampling must be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position with respect to time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Spatial resolution errors ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=106&amp;oldid=prev</id>
		<title>Diegom: /* Temporal aliasing of periodic motion */</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=106&amp;oldid=prev"/>
		<updated>2011-12-16T19:43:53Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Temporal aliasing of periodic motion&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 19:43, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l32&quot; &gt;Line 32:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 32:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of periodic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;/del&gt;[[Image:y_g_t.PNG|y(t)]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&lt;/del&gt;[[Image:z_g_t.PNG|z(t)]]&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; &lt;/del&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts the frequency of the finger moving must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than the 15Hz &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;span&amp;gt;''''&amp;lt;/span&amp;gt;&lt;/del&gt;can be reconstructed from Kinect sampling data. 
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconsturcted]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;{|&lt;/ins&gt;[[Image:x_g_t.PNG|x(t)]] &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; | &lt;/ins&gt;[[Image:y_g_t.PNG|y(t)]] &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;| &lt;/ins&gt;[[Image:z_g_t.PNG|z(t)]]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|} &lt;/ins&gt;The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's movement must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
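The Nyquist argument in the revision above (a 30 Hz Kinect sampling rate gives a 15 Hz Nyquist frequency) can be illustrated with a short numerical sketch. This is an illustration only, not part of the recorded wiki revisions; the 25 Hz signal is a hypothetical motion chosen simply because it exceeds 15 Hz. Sampled at 30 Hz, it yields the same samples as a 5 Hz motion, which is what temporal aliasing means in practice.

```python
import math

FS = 30.0         # Kinect skeleton sampling rate, in Hz
NYQUIST = FS / 2  # 15 Hz: frequencies above this alias

def sample_cosine(freq_hz, n):
    """Sample cos(2*pi*freq_hz*t) at rate FS for n samples."""
    return [math.cos(2 * math.pi * freq_hz * k / FS) for k in range(n)]

# A hypothetical 25 Hz motion (above the 15 Hz Nyquist frequency) produces
# the same samples as a 5 Hz motion, since 25 = 30 - 5 and cosine is even:
fast = sample_cosine(25.0, 30)
slow = sample_cosine(5.0, 30)

max_diff = max(abs(a - b) for a, b in zip(fast, slow))
print(max_diff)  # on the order of 1e-16: the sampled signals are indistinguishable
```

Because the two sampled sequences coincide, any reconstruction from the Kinect data would report the slow 5 Hz motion, never the true 25 Hz one.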
	<entry>
		<id>http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=97&amp;oldid=prev</id>
		<title>Diegom: Added a lot of stuff</title>
		<link rel="alternate" type="text/html" href="http://blackbox.wiki.iat.sfu.ca/blackbox/index.php?title=Kinect&amp;diff=97&amp;oldid=prev"/>
		<updated>2011-12-16T19:39:11Z</updated>

		<summary type="html">&lt;p&gt;Added a lot of stuff&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 19:39, 16 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;'''&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;TODO&lt;/del&gt;'''&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;:&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;= &lt;/ins&gt;'''&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Kinect libraries and APIs&lt;/ins&gt;''' &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;* An overview of the Kinect and Kinect libraries;  A description of the Kinect libraries available on the Blackbox machines&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;** OpenNI&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;** libfreenect&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;** Microsoft SDK&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;** [http://www.ipisoft.com/company.php iPi]&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;* Where &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Vicon equipment lives&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Several different libraries exist for &lt;/ins&gt;the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Kinect. Some of these have been installed in the Blackbox computers. &amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Kinect &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;resources&lt;/del&gt;:  &lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;* Microsoft SDK: Microsoft's official SDK for the &lt;/ins&gt;Kinect &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;is installed on the only Windows 7 machine in the Blackbox and is clearly labeled as such.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;* OpenNI&lt;/ins&gt;: &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;The OpenNI library&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Other libraries (which have not been installed in the Blackbox machines yet) include the following:&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;* libfreenect&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;* iPi&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;One word of caution: some of these libraries cannot be installed on the same machine. For instance, the OpenNI library should not be installed alongside the Microsoft Kinect SDK.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;== Using Kinect motion data ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;You can export Kinect motion capture data into BVH format, a standard motion data format that can be imported into, say, Credo Interactive's DanceForms 2.0 choreography and animation software. The software is available on http://tech.integrate.biz/kinect_mocap.htm&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;You can export Kinect motion capture data into BVH format, a standard motion data format that can be imported into, say, Credo Interactive's DanceForms 2.0 choreography and animation software. The software is available on http://tech.integrate.biz/kinect_mocap.htm&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;= Errors in using the Kinect for motion tracking =&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Three kinds of errors can arise from using the Kinect for motion tracking: &amp;lt;span&amp;gt;''temporal aliasing''&amp;lt;/span&amp;gt;, &amp;lt;span&amp;gt;''spatial resolution errors''&amp;lt;/span&amp;gt;, and &amp;lt;span&amp;gt;''occlusion errors.''&amp;lt;/span&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== Temporal aliasing ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;The Nyquist theorem states that given a sampling rate &amp;lt;span&amp;gt;''f, ''&amp;lt;/span&amp;gt;any frequency above what is known as the Nyquist frequency (which is &amp;lt;span&amp;gt;''f''&amp;lt;/span&amp;gt;/2) will not be reconstructed properly. What does this mean in terms of motion capture? This means that movement that is &amp;lt;span&amp;gt;''very fast''&amp;lt;/span&amp;gt; and &amp;lt;span&amp;gt;''nonlinear ''&amp;lt;/span&amp;gt;may suffer from aliasing. Two kinds of nonlinear motion are affected by temporal aliasing: &amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* &amp;lt;span&amp;gt;''Periodic ''&amp;lt;/span&amp;gt;motion. Examples of periodic motion include: an arm being swung around in a wide circle; jumping up and down; a finger tracing a sinusoidal pattern in the air; a repeated dabbing motion.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* &amp;lt;span&amp;gt;''Non-periodic, non-monotonic''&amp;lt;/span&amp;gt; motion. Examples include: a single, martial-arts style punch in the air, starting from the time the hand extends from the chest and finishing with the arm near the chest again; tracing random, curvy patterns in the air with your nose.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;We consider each of these two cases in turn.&amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;=== Temporal aliasing of periodic motion ===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Consider the movement of a single point on the human body through 3D space, measured with reference to a 3D coordinate system that is fixed to the room. Let &amp;lt;span&amp;gt;''x''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), &amp;lt;span&amp;gt;''y''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;), and &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt;&amp;lt;sub&amp;gt;''g''&amp;lt;/sub&amp;gt;(&amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;) represent the true (ground truth) values of the position of the limb&amp;lt;span&amp;gt;'', ''&amp;lt;/span&amp;gt;as measured within that coordinate system, at time &amp;lt;span&amp;gt;''t''&amp;lt;/span&amp;gt;. We can plot the position of that point along each of three axes. For the purposes of illustration, imagine the position of the tip of a finger as it steadily traces a sine wave along a plane perpendicular to the &amp;lt;span&amp;gt;''z''&amp;lt;/span&amp;gt; axis, moving from &amp;quot;left to right&amp;quot;, that is, from one end of the &amp;lt;span&amp;gt;''x ''&amp;lt;/span&amp;gt;axis to the other. 
If we assume that the coordinate position (0, 0, 0) is located near the finger of the mover, the motion can be plotted along the three reference axes, and might look something like this:&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:x_g_t.PNG|x(t)]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t.PNG|y(t)]]&amp;lt;br /&amp;gt;[[Image:z_g_t.PNG|z(t)]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The question is: at what point would this motion be erroneously reconstructed from Kinect sampling because of temporal aliasing? Since the x and z components of the motion will always be reconstructed correctly, we focus on the reconstruction of the y component. (We assume here that the Kinect can sample equally well along all three spatial dimensions.) Given that the Kinect sampling rate is 30 Hz, the Nyquist theorem predicts that the frequency of the finger's movement must not exceed 15 Hz (called the &amp;lt;span&amp;gt;''Nyquist frequency''&amp;lt;/span&amp;gt;). That's an awfully fast finger, and so we can safely use the Kinect to sample this motion. In fact, any other kind of periodic motion of the body (swinging an arm in a large circle, jumping up and down) will happen at a frequency far less than the Nyquist frequency. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; The image below shows how a periodic motion at a frequency less than 15 Hz can be reconstructed from Kinect sampling data. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;[[Image:y_g_t_sampling.PNG|sampled regular]][[Image:y_g_t_sampling_reconstructed.PNG|reconstructed]]&amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;=== Temporal aliasing of aperiodic, nonmonotonic motion ===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;However, the human body &amp;lt;span&amp;gt;''can ''&amp;lt;/span&amp;gt;do very fast, nonmonotonic, aperiodic movement, which is susceptible to temporal aliasing. The most common example is a punch by a highly skilled martial artist. (The motion capture technician at Emily Carr, Rick Overington, has reported this to be true at their own facilities.) &amp;lt;br /&amp;gt;[[Image:punch1.png|punch]]&amp;lt;br /&amp;gt;[[Image:punch2.png|punch 2]]&amp;lt;br /&amp;gt;[[Image:punch3.png|punch 3]]&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; This phenomenon can also be understood in the context of the Nyquist theorem. Any movement gesture or phrase can be seen as a finite signal in 3 dimensions that is decomposable into a Fourier series. In the case of this martial arts punch, one of the components of the series is a high-amplitude signal with a frequency greater than the Nyquist frequency. (An example of a movement that contains a low-amplitude, high-frequency signal might be very strong shivering.) This component will be aliased upon reconstruction from the sampled signal. And since this component is high-amplitude and thus critical to our perception of the movement, an aliased reconstruction of the movement will be perceived as significantly different from the original gesture. 
The movement will be &amp;quot;smoothed out&amp;quot;, appearing less jerky than it really is.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; In fact, the suitability of the Kinect for sampling must be considered carefully for any movement that contains very rapid changes in velocity (the first derivative of position as a function of time) or acceleration (the second derivative). For instance, the expressivity of the urban dance form of &amp;lt;span&amp;gt;''popping ''&amp;lt;/span&amp;gt;hinges precisely on very rapid and sophisticated changes in acceleration. ([http://www.youtube.com/playlist?list=PLB52F45219B7297B9&amp;amp;feature=view_all This is a playlist of popping videos from YouTube.]) &amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== Spatial resolution errors ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Khoshelham (2011) reports that the random error of Kinect depth measurements increases quadratically with distance from the sensor, reaching a maximum of 4 cm. Khoshelham concludes that beyond the optimal range of 1-3 meters, the quality of the data is degraded by noise and low spatial resolution. Keep this in mind when you plan your motion capture activities with the Kinect.&amp;lt;br /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== Occlusion errors ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;If you are using the Microsoft SDK to perform skeleton tracking, be aware that the SDK can sometimes infer joint positions when the joint is occluded (Fernandez, 2011). You can query the SDK on the quality of the skeleton data. Refer to [http://channel9.msdn.com/Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals#time=1m24s this video] to find out more about how to do this. The SDK allows you to apply filtering to smooth out &amp;quot;skeleton jitter&amp;quot; (Fernandez, 2011), but you will lose movement information through this smoothing. Whether the loss is significant depends on how you need to use the motion data.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;= References =&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;There are quality parameters that are associated with how reliable the &lt;/del&gt;depth data &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;is &lt;/del&gt;(&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;see 8&lt;/del&gt;:&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;11 in &lt;/del&gt;http://&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;www&lt;/del&gt;.&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;brekel&lt;/del&gt;.com/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;?p&lt;/del&gt;=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;731)&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Khoshelham, K. (2011). Accuracy analysis of Kinect &lt;/ins&gt;depth data&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. ''ISPRS Workshop Laser Scanning'' &lt;/ins&gt;(&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Vol. 38, p. 1). Retrieved December 16, 2011, from http&lt;/ins&gt;:&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;//www.isprs.org/proceedings/XXXVIII/5-W12/Papers/ls2011_submission_40.pdf&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt; Fernandez, D. (2011, June 16). Skeletal Tracking Fundamentals. ''Kinect for Windows SDK Quickstarts''. 
Retrieved December 16, 2011, from &lt;/ins&gt;http://&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;channel9&lt;/ins&gt;.&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;msdn&lt;/ins&gt;.com/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals#time&lt;/ins&gt;=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;1m24s&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Diegom</name></author>
		
	</entry>
</feed>