A major challenge in using multi-modal, distributed sensor systems for activity recognition is maintaining temporal synchronization between individually recorded data streams. A common approach is to have the user perform well-defined 'synchronization actions' that generate easily identifiable pattern events in all recorded data streams. These events are then used to manually align the streams. This paper proposes an automatic method for this synchronization.
We demonstrate that synchronization actions can be automatically identified and used to synchronize streams across widely differing sensors such as acceleration, sound, force, and a motion tracking system. We describe fundamental properties and bounds of our event-based synchronization approach. In particular, we show that the event timing relation is transitive for sensor groups with shared members. We analyzed our synchronization approach in three studies. For a large dataset of five users comprising 308 minutes of data streams in total, we achieved a synchronization error of 0.3 s for more than 80% of the stream.
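The core idea of event-based synchronization, including the transitivity property, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual algorithm: it assumes each stream reports timestamps of already-detected synchronization events, estimates the pairwise clock offset as the median of matched event-time differences, and composes offsets transitively for sensor pairs that only share an intermediate member.

```python
# Hypothetical sketch of event-based stream synchronization (illustrative,
# not the method described in the paper). Each stream reports timestamps
# (in seconds) of detected 'synchronization action' events.
from statistics import median

def estimate_offset(events_a, events_b):
    """Estimate the clock offset b - a from matched event timestamps.

    The median is robust against a few badly timed event detections.
    """
    return median(tb - ta for ta, tb in zip(events_a, events_b))

# Example data: stream B lags stream A by ~2.0 s, stream C lags B by ~0.5 s.
ev_a = [10.0, 25.1, 40.2]
ev_b = [12.0, 27.1, 42.3]
ev_c = [12.5, 27.6, 42.8]

off_ab = estimate_offset(ev_a, ev_b)  # A -> B offset
off_bc = estimate_offset(ev_b, ev_c)  # B -> C offset

# Transitivity of the timing relation: the A -> C offset can be composed
# from A -> B and B -> C, so sensor groups that share a member can be
# synchronized indirectly even without directly shared events.
off_ac = off_ab + off_bc
```

With these example timestamps, the composed offset `off_ac` agrees with the offset estimated directly between streams A and C, which is what the transitivity property guarantees.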
|Name||Lecture Notes in Computer Science|
|Conference||4th European Conference (EuroSSC 2009), Guildford, UK, September 16-18, 2009|