Saturday, July 30, 2011

Gallery?

I thought about creating a gallery page that showcases uses of Candescent NUI. I'd like to see what you do with it!

If you think it's a good idea and would like to show what you're implementing, you're welcome to send an e-mail to info@candescent.ch

What you should include:
  • Short description
  • Link to a video or screenshot(s)
  • Whether you would like your entry published under your name or anonymously
  • Optional: Link to your own project page

Sunday, July 24, 2011

Thoughts about Kinect interfaces

I've watched lots of Kinect 'hack' videos and asked myself in what way they make interaction with computers [insert positive adjective here]. Is it easier, faster, more comfortable, more practical than traditional input with mouse and keyboard (or touch)? In most cases, I find the answer is no. If you take away the coolness factor of controlling something without touching anything and look at it from a neutral perspective, the interfaces are slower, less accurate and often uncomfortable (stretching out your arms for hours? Having to hover over a button for x seconds to press it?).

Kinect is good for games in which you play a character that moves in a small area (dance, boxing, fitness). It's good for places where you should not have to touch anything (clean rooms, wearing gloves) and maybe for public showcases where you don't want to use a touch screen. But it won't replace mice, keyboards and touchscreens.

There are a lot of videos featuring Kinect interfaces for games. In most cases these are not natural user interfaces. Yes, you don't need a device to control your character, but it's not natural to use your arms to look or move around. In reality you use your hands to manipulate things, turn your head or body to look around, and use your legs to move.

I think this is going in the right direction:
SUVIX demo - VR environment using Kinect

Now imagine combining 3D glasses with skeleton tracking and a 2D treadmill (like this). Star Trek Holodeck, anyone? The only things still missing are force fields that let you feel what you interact with ;-)

Sunday, July 17, 2011

Candescent NUI Release Notes (8638)

There is a new release of Candescent NUI on CodePlex.

Changes
1. Replaced xn.Point3D with CCT.NUI.Core.Point
   This makes most of the code independent from OpenNI (see the sketch after this list).
2. Introduced an optional maximal cluster depth setting
   The default is 150mm. The hand should not take up more than 15cm of depth, so the rest (of the arm) is filtered out.
3. Fixed a bug that prevented the convex hull from drawing
4. Improved the finger detection algorithm (fewer false positives)
5. Increased speed
   • Clustering: 7ms -> 6ms (~15% improvement)
   • Hand and finger detection: 16ms -> 12ms (25% improvement)
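To illustrate change 1, here is a minimal sketch of what a framework-neutral point type can look like. The real CCT.NUI.Core.Point may well differ, and the converter helper is my own invention:

namespace CCT.NUI.Core
{
    // Framework-neutral 3D point, so code outside the data source layer
    // no longer needs to reference the OpenNI type xn.Point3D.
    // Sketch only; the actual implementation may look different.
    public struct Point
    {
        public float X;
        public float Y;
        public float Z;

        public Point(float x, float y, float z)
        {
            X = x;
            Y = y;
            Z = z;
        }
    }
}

// Hypothetical helper; a conversion like this stays at the OpenNI boundary.
public static class OpenNIPointConverter
{
    public static CCT.NUI.Core.Point ToCorePoint(xn.Point3D source)
    {
        return new CCT.NUI.Core.Point(source.X, source.Y, source.Z);
    }
}

With something like this in place, only the OpenNI data source layer touches xn.Point3D; everything downstream works with the core type.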
Get the source or binary version.

Tuesday, July 12, 2011

Candescent NUI Release Notes (8511)

Finger tip depth information
Finger tips now also have a z value indicating the distance from the Kinect in mm. I've already got some ideas about how to use this!
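As a simple example of what the z value makes possible, here is a hypothetical sketch of a "push" detector that fires when a finger tip moves quickly towards the camera. The class and the threshold are my assumptions, not part of the release:

// Fires when the finger tip's distance to the Kinect shrinks by more
// than a threshold between two updates. Sketch only; the threshold
// would need tuning against real data.
public class PushDetector
{
    private const float ThresholdMm = 40f; // assumed per-update movement
    private float? lastZ;

    public bool Update(float currentZMm)
    {
        // z is the distance from the Kinect in mm, so it decreases
        // when the finger moves towards the camera.
        bool pushed = lastZ.HasValue && (lastZ.Value - currentZMm) > ThresholdMm;
        lastZ = currentZMm;
        return pushed;
    }
}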

Configuration
I've added three configuration "templates": default, fast and accurate. I'll tweak these settings later; for now I've just picked values that seemed reasonable.
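As a sketch of what preset-style configuration can look like (the class, property and method names are made up for illustration; check the actual API on CodePlex):

// Hypothetical preset-style settings. The 150mm default comes from the
// earlier release notes; the step size knob is purely illustrative.
public class DetectionSettings
{
    public int MaxClusterDepth { get; set; }  // in mm
    public int ContourStepSize { get; set; }  // illustrative speed/accuracy knob

    public static DetectionSettings Fast()
    {
        return new DetectionSettings { MaxClusterDepth = 150, ContourStepSize = 4 };
    }

    public static DetectionSettings Default()
    {
        return new DetectionSettings { MaxClusterDepth = 150, ContourStepSize = 2 };
    }

    public static DetectionSettings Accurate()
    {
        return new DetectionSettings { MaxClusterDepth = 150, ContourStepSize = 1 };
    }
}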

I've run some quick performance tests:

Profile      Clustering    Hands
Fast         6.8ms         11.3ms
Default      7.2ms         16.2ms
Accurate     8.1ms         32.3ms
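For reference, timings like these can be gathered with System.Diagnostics.Stopwatch. This is a generic sketch, not the actual test harness behind the numbers above:

using System;
using System.Diagnostics;

// Measures the average duration of one pipeline stage over many frames.
public static class StageTimer
{
    public static double AverageMilliseconds(Action stage, int frames)
    {
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < frames; i++)
        {
            stage();
        }
        stopwatch.Stop();
        return stopwatch.Elapsed.TotalMilliseconds / frames;
    }
}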

... and I've fixed a dispose problem that led to performance issues in the sample app when the hand and finger detection button was clicked more than once.
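The fix boils down to the usual pattern: dispose the previous pipeline before starting a new one. A rough sketch of the idea (the names are illustrative, not the project's actual types):

using System;

// Dispose the previous detection pipeline before creating a new one, so
// buffers and event subscriptions from the old run don't pile up.
public class DetectionHost
{
    private IDisposable current;

    public void Restart(Func<IDisposable> createPipeline)
    {
        if (current != null)
        {
            current.Dispose();       // release the previous run
        }
        current = createPipeline();  // start a fresh pipeline
    }
}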

Get the release here; the source code is here.

Skeleton tracking combined with finger detection - problems

Some of my readers have proposed combining skeleton tracking with finger detection, but I haven't tried this yet. Here is an example of why: I've found this video (already two months old) in which Patricio Gonzalez demonstrates a combination of the two techniques.

The problem is that skeleton tracking only works when you are far enough away from the Kinect, while finger detection only works when you are close enough. Together these constraints define a range in which both algorithms work, but in my opinion that range is empty at the current resolution of 640x480. Once depth cameras provide resolutions of 1024x768 or higher, I expect we will quickly see this catch on.
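A back-of-the-envelope calculation illustrates the point. Assuming a horizontal field of view of roughly 57° (the commonly cited value for the Kinect depth camera) and a hand about 80mm wide:

using System;

// Rough estimate of how many depth pixels a hand covers at a given
// distance. The 57 degree field of view and 80mm hand width are assumptions.
public static class HandSize
{
    public static double PixelWidth(double distanceMm, int imageWidth = 640)
    {
        const double fovDegrees = 57.0;
        const double handWidthMm = 80.0;
        double degreesPerPixel = fovDegrees / imageWidth;
        double handDegrees = 2.0 * Math.Atan(handWidthMm / (2.0 * distanceMm)) * 180.0 / Math.PI;
        return handDegrees / degreesPerPixel;
    }
}

At around 800mm (where finger detection works) this gives roughly 64 pixels of hand width; at 2000mm (a comfortable skeleton tracking distance) only about 26 pixels, which is too little to separate individual fingers.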

Thursday, July 7, 2011

Hand Detection Performance

I've worked on the performance of the hand detection; here is a summary:

When I started the optimizations, detecting a hand took around 22-23ms (which is a lot when you only have about 33ms per frame). The distribution was like this:

Finding the contour
Used to take 12ms per frame (so this accounted for half of the time used). I've reduced this to 9ms and think there is still potential.

Center of the palm
Used to take 7ms per frame. Now it takes 3.5ms and is also more accurate.

Mapping the hands in the new frame to the hands in the old frame
3.5ms per frame; I haven't tried to optimize this yet.

Finding the convex hull
<1ms per frame; I haven't tried to optimize this yet.

Detecting the fingers
<1ms per frame; I haven't tried to optimize this yet.

With these improvements I've got the total time down to 16ms (around 30% less).


I've introduced TPL (the Task Parallel Library) to take advantage of multi-core CPUs. Example from PalmFinder.cs:

// Evaluate each palm-center candidate in parallel; the iterations are
// independent, so TPL can distribute them across the available cores.
Parallel.For(0, candidates.Count, (index) =>
{
    distances[index] = FindMaxDistance(contour, candidates[index]);
});

This distributes the iterations across multiple worker threads (the actual number depends on how many cores are available) to calculate the distances for the candidate points.

I've also changed some variables that were used in loops from class scope to method scope, which also increased speed.
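An illustrative before/after of that kind of change (the names are made up; field access in a hot loop goes through the object on the heap, while a local can stay in a register):

// Before (illustrative): the loop reads and writes a field on every
// iteration, which goes through 'this' and hinders register allocation.
private int sum;

public int SumField(int[] values)
{
    sum = 0;
    for (int i = 0; i < values.Length; i++)
    {
        sum += values[i]; // field access each iteration
    }
    return sum;
}

// After (illustrative): the local variable can live in a CPU register.
public int SumLocal(int[] values)
{
    int total = 0;
    for (int i = 0; i < values.Length; i++)
    {
        total += values[i]; // local access, cheaper per iteration
    }
    return total;
}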


This was measured on my Intel Core i7 860 (a quad-core CPU). Speed improvements on dual- or single-core CPUs might be smaller.

Edit: The new version is on CodePlex: 8454

Sunday, July 3, 2011

Image Manipulation on CodePlex

By popular request ;-) I've put the image manipulation code on CodePlex (change set 8391). It could use some more refactoring, but I think as a sample it's good enough.

It requires some practice by the user, especially scaling with two fingers (as you can see in the video).
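For context, here is a minimal sketch of how a two-finger scale factor can be computed; this is my own illustration, not the code from the change set:

using System;

// Two-finger scaling: the scale factor is the ratio of the current
// distance between the finger tips to the distance when the gesture
// started. Types and names are illustrative.
public class ScaleGesture
{
    private double startDistance;

    public void Begin(float x1, float y1, float x2, float y2)
    {
        startDistance = Distance(x1, y1, x2, y2);
    }

    public double GetScaleFactor(float x1, float y1, float x2, float y2)
    {
        // >1 when the fingers move apart, <1 when they move together
        return Distance(x1, y1, x2, y2) / startDistance;
    }

    private static double Distance(float x1, float y1, float x2, float y2)
    {
        double dx = x2 - x1;
        double dy = y2 - y1;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}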

The movement isn't very smooth because the center of the palm jumps around a bit; this is a trade-off against performance. I'll try to improve it.
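One common way to reduce this kind of jitter is exponential smoothing of the palm position. This is a generic filter sketch, not what Candescent NUI currently does:

// Exponential smoothing of a 2D position. Alpha close to 1 follows the
// raw data closely; smaller values smooth more but add lag.
public class PointSmoother
{
    private readonly float alpha;
    private float x;
    private float y;
    private bool initialized;

    public PointSmoother(float alpha)
    {
        this.alpha = alpha;
    }

    public void Smooth(float rawX, float rawY)
    {
        if (!initialized)
        {
            x = rawX;
            y = rawY;
            initialized = true;
        }
        else
        {
            x += alpha * (rawX - x);
            y += alpha * (rawY - y);
        }
    }

    public float X { get { return x; } }
    public float Y { get { return y; } }
}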

Edit: I've tried it with the SDK, but the depth limitation makes it almost impossible to work with this sample. I hope Microsoft will soon add an option so that values closer than 850mm can be used. I've also found a little problem: the standard depth threshold was used in change set 8391, so the sample would not work at all; I've fixed that in 8402.