Publication - A multimodal dataset of spontaneous speech and movement production on object affordances
IDENTITY

A multimodal dataset of spontaneous speech and movement production on object affordances

Research area:

Type:
Journal article

 

Year: 2016
Authors: Αργυρώ Βατάκη; Κατερίνα Πάστρα
Journal: Scientific Data
Volume: 3
Article number: 150078
DOI: 10.1038/sdata.2015.78
Abstract:
In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants across three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.
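The abstract lists the kinds of annotations the corpus covers (object-feature-action data, Exploratory Acts, gestures, reasoning patterns, two camera views). As an illustrative sketch only, the minimal Python example below shows one possible way to represent a single annotated trial with these categories; all class names, fields, and values here are hypothetical assumptions and do not reflect the dataset's actual file formats or schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure for one annotated trial; the names and
# fields are illustrative assumptions, not the published dataset's schema.

@dataclass
class ExploratoryAct:
    label: str          # e.g. "contour following" (assumed label)
    start_s: float      # onset within the trial video, in seconds
    end_s: float        # offset within the trial video, in seconds

@dataclass
class TrialRecord:
    participant_id: str
    experiment: int                     # one of the three behavioural experiments
    object_name: str
    transcript: str                     # 'thinking aloud' speech transcript
    perceptual_features: List[str] = field(default_factory=list)
    exploratory_acts: List[ExploratoryAct] = field(default_factory=list)
    gestures: List[str] = field(default_factory=list)
    reasoning_notes: List[str] = field(default_factory=list)
    video_frontal: str = ""             # path to frontal-view recording
    video_profile: str = ""             # path to profile-view recording

# Usage example with made-up values
record = TrialRecord(
    participant_id="P001",
    experiment=1,
    object_name="mug",
    transcript="It feels ceramic... I could drink from it.",
    perceptual_features=["smooth", "cylindrical"],
    exploratory_acts=[ExploratoryAct("contour following", 2.4, 5.1)],
)
print(record.object_name, len(record.exploratory_acts))
```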