I did this work a few weeks ago, and I think it's pretty cool.
It started out with the idea of constructing probability curves from just the time intervals between motion events, in order to predict whether a motion sensor will go off in the future. I'm hoping it has more applications, though.
Oh, yeah, and I did all this analysis and graphing and stuff using R.
Forward Analysis
So if one sensor goes off somewhere, what's the chance that another sensor will go off, and when?
This situation is easiest to reason about in a corridor: on average, only one answer should appear, namely however long it takes to walk from one sensor to the other. So I looked at the data from a sensor in the north corridor of the MediaLab, took all the events that came within 40 seconds after each of its motion events, and here's what I came up with:

These are the numbers of hits within 40 seconds after a certain motion sensor went off (keep in mind, this is aggregated over all of that sensor's motion events, not a single one). I ordered them to make it evident that a handful of sensors correlate much more strongly with this one, since they account for far more hits.
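In case you're wondering how that counting works, here's a rough sketch in R, assuming a data frame `events` with a `sensor` ID column and a `time` column in milliseconds (all names here are placeholders, not the real dataset):

```r
# Count, for every other sensor, the hits that fall within 40 seconds
# after each event of the target sensor.
window_ms <- 40 * 1000
target    <- "north_corridor"   # placeholder sensor ID

target_times <- events$time[events$sensor == target]
hits <- unlist(lapply(target_times, function(t) {
  in_window <- events$time > t & events$time <= t + window_ms
  as.character(events$sensor[in_window & events$sensor != target])
}))

# Tabulate and order the per-sensor hit counts, as in the graph above.
hit_counts <- sort(table(hits))
barplot(hit_counts, las = 2, ylab = "hits within 40 s")
```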
Now I chose a couple of sensors and plotted the density function of the time it took each one to go off after the motion sensor I was looking at. There is an obvious peak on each curve at the time (in milliseconds) it takes to walk from one sensor to the next.
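The lag extraction and density plot might look something like this, continuing the sketch above (`follower` is another placeholder ID):

```r
# Collect the lags (ms) from each target event to the follower's
# events inside the 40-second window, then plot their density.
follower <- "corridor_neighbor"
lags <- unlist(lapply(target_times, function(t) {
  ft <- events$time[events$sensor == follower]
  ft[ft > t & ft <= t + window_ms] - t
}))

plot(density(lags), main = follower,
     xlab = "lag after target event (ms)")
```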
Backward Analysis
There's another way to approach the data, though. Instead of looking 40 seconds after a motion sensor's event, we can look a few (I chose 10) seconds before a motion sensor goes off. This worked better for my initial idea of using the curves to predict whether a sensor was about to go off. When I plotted and ordered the number of hits 10 seconds before a motion sensor's events in the Tangible Media corridor, I got this graph (a sketch of the mirrored windowing follows it):

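The change to the counting code is small; in the same sketched setup as above (placeholder IDs again), it might look like this:

```r
# Same assumed layout, but now a 10-second window *before* each
# event of the sensor in question ("tangible_media" is a placeholder).
window_ms <- 10 * 1000
target    <- "tangible_media"

target_times <- events$time[events$sensor == target]
hits <- unlist(lapply(target_times, function(t) {
  in_window <- events$time >= t - window_ms & events$time < t
  as.character(events$sensor[in_window & events$sensor != target])
}))

hit_counts <- sort(table(hits))
barplot(hit_counts, las = 2, ylab = "hits within 10 s before")
```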
This is actually even more pronounced than the corresponding forward graph: a lot of the sensors seem to have no correlation at all (the leftmost data points). Now we can look at the outlying data points:

This is a map of the sensor locations, with the Tangible Media sensor in question in red, sensors whose hit counts fall more than two standard deviations above the mean in green, and those between one and two standard deviations in blue. This was one of the better results, but even here you can see a sensor in the cafe area that is obviously unrelated to the sensor in question; it probably shows up just because it goes off a lot.
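Picking out those standard-deviation bands is straightforward; a sketch, reusing the placeholder names from above:

```r
# Classify sensors by how far their hit counts sit above the mean,
# matching the map's coloring.
counts <- as.numeric(hit_counts)
names(counts) <- names(hit_counts)
m <- mean(counts)
s <- sd(counts)

level <- cut(counts,
             breaks = c(-Inf, m + s, m + 2 * s, Inf),
             labels = c("background", "blue: 1-2 sd", "green: > 2 sd"))
data.frame(sensor = names(counts), hits = counts, level = level)
```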
We can then plot the density functions as we did before, and similar curves arise. Notice that most curves peak at the same time as another curve; this can be explained by the two directions a person can walk from to reach the motion sensor. It should also be noted that these sensors were included because their hit counts were more than two standard deviations above the mean.

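Drawing those overlaid curves might look like this, continuing the same sketch:

```r
# Overlay the before-event lag densities for every sensor whose hit
# count is more than two standard deviations above the mean.
flagged <- names(counts)[counts > m + 2 * s]

lags_before <- function(sensor_id) {
  unlist(lapply(target_times, function(t) {
    st <- events$time[events$sensor == sensor_id]
    t - st[st >= t - window_ms & st < t]   # positive lag before the event
  }))
}

dens <- lapply(flagged, function(id) density(lags_before(id)))
plot(NULL, xlim = c(0, window_ms),
     ylim = c(0, max(sapply(dens, function(d) max(d$y)))),
     xlab = "lag before target event (ms)", ylab = "density")
for (d in dens) lines(d)
```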
This process is quite general and easy to repeat, and the results above are good ones. A counterexample is a motion sensor in the cafe area: once again, we have a few outlying sensor hits (not shown), but when we actually plot the density functions of the sensors more than two standard deviations above the mean hit count, we get this graph:

So the density of hits is spread fairly evenly over the 10-second window, which gives us almost no predictive capability for this sensor. Still, the graph below shows what we can infer from the data.

This graph, though, shows that the process is not completely wasted on such spaces: the sensors it correlates with sit in a rather kitchen-like area, perhaps a natural subsection of the whole cafe. It should be noted that the red coloring of the sensor in question masks the fact that it also correlates with itself (it's green underneath).