[Open Source Cinema] Embodied Cognition and Interaction

Assignment: Allow a user to move or arrange the elements with something besides a mouse and keyboard.


Updated April 25, 2018: Lisa Jamhoury got the code to work! Check it out here

This week I attempted to apply Lisa Jamhoury’s code for grabbing objects within a 3D environment using a Kinect to the sketch I had made. I used the code from her osc_control sketch here. This is where I’m currently stuck: even with help, I couldn’t get it to work. I reused the overall architecture I built for the earlier Story Elements assignment (a sketch of what the grab interaction might look like follows the list below):

  • I used the Google Street View Image API to get the panorama I am using as a background (a sketch of that request is below).
  • I suspect this is the source of my problems: I am not wrapping an image or video onto a SphereGeometry, nor am I building a 3D scene in the traditional Three.js sense (a minimal example of that traditional approach is also sketched below).
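For reference, here is a minimal sketch of how the background request might look with the Street View Image API. The key and location are placeholders. Note that this API returns a flat crop at a single heading rather than a full equirectangular panorama, which may be part of why my scene doesn’t behave like a standard Three.js setup:

```javascript
// Sketch of fetching a Street View frame as a flat background image.
// The key and location are placeholders, not the values from my sketch.
const params = new URLSearchParams({
  size: '640x640',
  location: '40.6939,-73.9865', // placeholder: somewhere in Brooklyn
  heading: '0',                 // direction the camera faces, in degrees
  pitch: '0',
  fov: '90',
  key: 'YOUR_API_KEY',          // placeholder
});
const streetViewUrl = `https://maps.googleapis.com/maps/api/streetview?${params}`;

// Use the returned image as a page background rather than a 3D surface.
document.body.style.backgroundImage = `url(${streetViewUrl})`;
```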
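And here is a minimal sketch of the “traditional” Three.js approach I’m not using: wrapping an equirectangular image onto the inside of a sphere so the camera sits within the scene. The image path, sphere radius, and camera values are placeholders, and it assumes a proper equirectangular panorama rather than a flat Street View crop:

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Build a sphere and flip it inside out so the texture faces the camera.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // invert so we view the texture from inside

const texture = new THREE.TextureLoader().load('panorama.jpg'); // placeholder image
const material = new THREE.MeshBasicMaterial({ map: texture });
scene.add(new THREE.Mesh(geometry, material));

camera.position.set(0, 0, 0.1); // camera sits at the sphere's center

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```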
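Finally, a hedged sketch of what the grab interaction itself might look like once the scene exists. I’m not reproducing the osc_control message format here (I couldn’t get it working), so the callback name, the normalization, the scale factor, and the hand-state flag are all assumptions layered on a generic setup where a hand position arrives from somewhere (Kinectron, or OSC over a WebSocket bridge):

```javascript
// Reuses the THREE import from the sphere sketch above.
// All names and thresholds below are hypothetical, not Lisa's actual API.
const GRAB_RADIUS = 0.5;    // hypothetical: how close the hand must be to grab
let grabbedObject = null;   // the mesh currently held, if any
const handWorld = new THREE.Vector3();

// nx, ny, nz: hand position normalized to [-1, 1] by whatever transport you use
// grabbables: an array of THREE.Mesh objects already added to the scene
function onHandMove(nx, ny, nz, handClosed, grabbables) {
  // Map the normalized hand position into world coordinates.
  handWorld.set(nx * 5, ny * 5, nz * 5); // scale factor is a guess for this scene

  if (handClosed && !grabbedObject) {
    // On a closed hand, grab the nearest object within reach.
    let nearest = null;
    let nearestDist = GRAB_RADIUS;
    for (const obj of grabbables) {
      const d = obj.position.distanceTo(handWorld);
      if (d < nearestDist) {
        nearest = obj;
        nearestDist = d;
      }
    }
    grabbedObject = nearest;
  } else if (!handClosed) {
    grabbedObject = null; // open hand releases
  }

  // While held, the object follows the hand.
  if (grabbedObject) grabbedObject.position.copy(handWorld);
}
```

With Kinectron, for example, handClosed could presumably be derived from the tracked body’s hand-state value (the Kinect SDK reports hands as open, closed, or lasso), but I haven’t verified that against Lisa’s working version.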

Stray thoughts from the reading, “The Character’s Body and the Viewer: Cinematic Empathy and Embodied Simulation in the Film Experience”:

  • Empathy has only existed as a concept since the early 20th century???
  • “Proprioceptive”: I did not realize there was a word for the sense people have of the position of their own bodies in space. This is a feeling dancers know very well.
  • You will find a picture of me in the Wikipedia entry for “kinesthetic strivings.”
  • Could facial mapping software track the unconscious expressions viewers mirror when watching a character’s face, and how could that be applied?
  • The concept of motor empathy reminds me of a character from the show Heroes: Monica Dawson’s power was that she could replicate the movements of anyone she observed. Here she is beating up some bad guys with moves she picked up from a TV show: