[5/7]
[1/1]
Jelle van Dijk material. The forerunner cite:vanDijk:2014cm also has nice cartoons. Finally, some influence of cite:Svanaes2020_CHI. First in cite:dijk-2018-desig-embod, and further elaborated in cite:dijk-2018-desig-embod (Embodied_Interaction_NL)
The practical work is based on the MATLAB MoCap Toolbox by Burger, B. & Toiviainen, P. If you don't have MATLAB installed, you can use MATLAB Online at https://matlab.mathworks.com. We'll cover the tools in the first (introduction) session and elaborate on their usage in the second session.
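As a quick sanity check before the first session, a minimal MoCap Toolbox sketch along these lines should run once the toolbox is on your MATLAB path (the file name walk.c3d is a placeholder; substitute any C3D or TSV recording you have):

#+begin_src matlab
% Minimal MoCap Toolbox warm-up (assumes the toolbox is on the MATLAB path;
% 'walk.c3d' is a placeholder -- substitute any C3D/TSV recording).
d = mcread('walk.c3d');      % read the recording into a MoCap data structure
disp(d.nMarkers)             % number of markers
disp(d.freq)                 % capture frame rate (Hz)
v = mctimeder(d);            % first time derivative: velocity
mcplottimeseries(d, 1);      % plot the trajectory of marker 1 over time
#+end_src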
SMC-specific: embodied music cognition and other references; see them all on Moodle [F21-37360]: Miniproject Instructions
CLOCK: [2021-02-09 Tue 01:12]–[2021-02-09 Tue 14:16] => 13:04
https://www.moodle.aau.dk/mod/folder/view.php?id=1207074 /GP43NC/smc8-courses-embodied-interaction/src/branch/master/Mocap_Resources
[5/7]
CLOSED: [2021-02-08 Mon 23:24] SCHEDULED: <2020-02-04 Tue 09:00-16:00>
CLOCK: [2020-02-04 Tue 09:00]–[2020-02-04 Tue 15:20] => 6:20 CLOCK: [2019-02-05 Tue 05:22]–[2019-02-05 Tue 05:53] => 0:31 CLOCK: [2018-02-08 Thu 16:15]–[2018-02-08 Thu 16:40] => 0:25 CLOCK: [2018-02-08 Thu 14:34]–[2018-02-08 Thu 16:15] => 1:41 CLOCK: [2018-02-07 Wed 14:23]–[2018-02-07 Wed 14:23] => 0:00 CLOCK: [2018-02-06 Tue 15:42]–[2018-02-06 Tue 16:07] => 0:25
After completing EI with your mini-project, you'll be able to
distinguish the three key directions in theories of embodiment, after cite:Svanaes-2020-CHI (in 2020 this was cite:Hornecker:2017we; in 2021 it is cite:Svanaes-2020-CHI)
Point-of-view / Tense | Past | Present | Future |
---|---|---|---|
1st - Me | Accessing memories of how it felt for me in the past. | Awareness of how it feels for me here and now. | Awareness of how it feels for me when I am enacting a possible future. |
2nd - You | Empathically observing recordings of someone else in the past. | Empathically observing someone else here and now. | Empathically observing someone else enacting a possible future. |
3rd - He/She | Analytically observing recordings of oneself or someone else in the past. | Analytically observing oneself or someone else here and now. | Analytically observing oneself or someone else enacting a possible future. |
Table 1. A 3x3 matrix of Point-of-View and Tense.
identify mover-observer-machine perspectives in EI cite:Loke:2013ic
understand the needs of movement as a design material and of developing bodily skills cite:Loke:2013ic, and
stream data from SmartSuit Pro to Unity/Unreal
[3/4]
CLOSED: [2018-02-08 Thu 18:02]
CLOSED: [2018-02-08 Thu 16:34]
See https://github.com/cerkut/EI18-Vitruvian-Processing
CLOSED: [2018-02-08 Thu 16:34]
download RAM Dance Toolkit v1.3.0 for oF v0.9.8 (released on 23 Oct 2017) https://github.com/YCAMInterlab/RAMDanceToolkit/releases
Below we look into the OSX version; Windows users should use the corresponding Windows release, and optionally the Motion Data OSC Server.
wget https://github.com/YCAMInterlab/RAMDanceToolkit/releases/download/v1.3.0/RAM-app_osx_v1_3_0.zip
wget https://github.com/YCAMInterlab/RAMDanceToolkit/releases/download/v1.0.0/RAM-OSCServer_mac-v1_0_0.zip
Unzip the files.
Unzip RAM-app_osx_v1_3_0 and launch RAM Dance Toolkit.app within the folder.
If the app opens and you see the debug menu above the checkerboard floor, all is fine; proceed. If not, see the note at https://github.com/YCAMInterlab/RAMDanceToolkit/releases
Load recorded movement data from data/Resources/MotionData by dragging & dropping a file onto the app. You can load up to five movement data files. Press TAB to switch between different UIs.
Try some of the Effects on Actors (e.g., Keppler)
Learn more at https://github.com/YCAMInterlab/RAMDanceToolkit/wiki/Overview
CLOSED: [2018-02-08 Thu 16:34]
The Motion Data OSC Server sends OSC messages to the RAMDanceToolkit, useful for testing.
This application will send OSC messages to the RAMDanceToolkit when you drag and drop XML files onto the server app screen, instead of the client window.
https://github.com/YCAMInterlab/RAMDanceToolkit/wiki/Structure-of-RAMDanceToolkit
<String> ActorName | <Int> #Nodes | Array of Nodes | <f> Message Timestamp |
CLOSED: [2018-02-08 Thu 17:
# wget https://api.rokoko.com/v1/download/software/smartsuit-studio/SmartsuitStudioInstaller_1_3_1b.exe
wget https://api.rokoko.com/v1/download/software/smartsuit-studio/SmartsuitStudio_OSX_1_3_1b_installer.dmg
https://cdn.rokoko.com/software/smartsuit-studio/RokokoStudio_osx_1_7_0b_installer.dmg http://help.rokoko.com
Launch the app, open SmartsuitDemo Project by pressing the arrow
Select one of the recordings in scene-1
Go to the leftmost icon / Advanced settings / Network settings
Enable Forward data
Set the Forward port to anything other than 14041 (14041 is used for communication with the actual suit); 14042 is good.
Create a new Unity project; when it launches, go to Window / Asset Store, search for Rokoko, and get the Smartsuit Plugin.
Import the Smartsuit Plugin (ALL)
Open the scene under Rokoko/Smartsuit/Examples/SmartsuitExampleActor
In the Hierarchy, go to SmartsuitReceiver and change the bold-face PortRangeStart and PortRangeEnd to the value you used in Smartsuit Studio (14043).
Press Play; your humanoid avatar should sync to the one in Smartsuit Studio (port 14043).
You can now add components to the SmartsuitActor, e.g. Particles, Lines, or other Visual Effects https://docs.unity3d.com/Manual/comp-Effects.html
Unicast/Broadcast is governed by an icon next to the suit on the right.
To demonstrate real-time (RT) usage of the SmartSuit:
00 - Primitives
01 - Particles
02 - FBX
Unity: http://help.rokoko.com/tutorials/unity-plugin/unity-plugin-tutorial
Especially this: http://help.rokoko.com/guides/how-to-forward-data-from-smartsuit-studio-to-unity-with-smartsuit-plugin-110b
CLOCK: [2018-02-07 Wed 15:36]–[2018-02-08 Thu 12:57] => 21:21
To represent the observer perspective in Unity/Unreal/Processing etc., put at least
a circle
a square
a triangle
(a pentagram) around photo(s) of yourself (see https://en.wikipedia.org/wiki/Vitruvian_Man)
Make it dynamic (with Rokoko Studio). For the curious, here is a dynamic, first-person representation of the felt movement: Oskar Schlemmer, Egocentric Space Delineation, https://goo.gl/images/9oE1ra
Asahi | I | Body Map | |
Jelle 1 | P1 | EI Tech Definition | 8 parts |
P2 | |||
P3 |
MATLAB
Learning outcomes: after this session, you will be able to
understand the bodily skills needed for technological development, decision making, steering, and path finding in games via AI
apply methods and techniques to real-world scenarios (games) and project concepts
analyze, compare, and assess the potential of different methods and techniques in order to make the proper design choices in games
Please check out the links in the MATERIAL before the lecture, and start thinking how to integrate these elements in your mini-project design.
https://sway.office.com/DwYt3g5WrXUL8Z91?ref=Link
CLOCK: [2021-02-27 Sat 06:42]–[2021-02-27 Sat 07:59] => 1:17
EI-4 External representations (23.2) on [[https://teams.microsoft.com/l/message/19:5c75652b76c34e63a5d3159a963fa95f@thread.tacv2/1614001568573?tenantId=f5dbba49-ce06-496f-ac3e-0cf14361d934&groupId=18e0bcb5-1194-4ba4-9035-19d80ae4e424&parentMessageId=1614001568573&teamName=2021-Embodied Interaction&channelName=General&createdTime=1614001568573][TEAMS]]
The breakdown of EI-4: we will meet on Teams, using this channel
08:45 - 09:00 The Movement Stream starts (not mandatory)
09:00 - 09:50 Activities based on Scott Klemmer's papers mentioned in the videos (two papers in the Files)
10:00 - 10:50 Recap External Representation / Computation videos (70 minutes)
[ ] 11:00 - 11:50 Mocap Formats (Meredith & Maddock, in the files); Mocap Toolbox work
CLOCK: [2021-02-26 Fri 14:37]–[2021-02-26 Fri 14:51] => 7:14
08:45 - 09:00 The Movement Stream (not mandatory)
[ ] 09:00 - 09:20 Brief feedback on descriptions of mini project ideas
09:20 - 09:50 Recap and discussion of Socially Situated Practices with examples from other activities. Please prepare by watching the Socially Situated Practices videos (80 minutes): https://www.moodle.aau.dk/mod/page/view.php?id=1175703
10:00 - 10:50 Different perspectives in interaction design and embodied music cognition. Activities based on Hornecker, Marshall & Hurtienne 2017 (in Files); also Svanæs?
11:00 - 11:50 Extracting features from Mocap Data; Mocap Toolbox work. Download the script (danceDataFeatures.m) and data in the zipped file in Files and on Moodle. There will be an assignment with simple data description and analysis.
https://www.moodle.aau.dk/course/view.php?id=25152#section-1
Intro/Wrap-up slides: https://drive.google.com/open?id=1SgEQYxzRvBRdpUyKLktIf27NZLwL2-43
a miniature version of the course with both theoretical and practical elements.
CLOCK: [2018-02-22 Thu 09:07]–[2018-02-22 Thu 09:44] => 0:37 CLOCK: [2017-03-02 Thu 09:09]–[2017-03-02 Thu 15:32] => 6:23 CLOCK: [2017-03-01 Wed 21:06]–[2017-03-01 Wed 21:12] => 0:06
How can the type of action we do affect our perception?
How can changing the perspective aid in the design process?
What type of tools can we use to describe and characterize the movements and interactions we design for?
After this session, you'll be able to
understand how our bodies affect perception and action
identify different perspectives used in design and describe how these affect the design process and outcomes.
compare and apply different approaches to describe and characterize movements:
physics-based descriptors (see the sketch after this list), such as
position
velocity
acceleration
jerk
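All of these can be obtained from position data by repeated time differentiation. A sketch using MoCap Toolbox functions (the input file name is a placeholder, and plotting options may differ slightly between toolbox versions):

#+begin_src matlab
% Kinematic descriptors from position data (MoCap Toolbox; placeholder file name).
d   = mcread('movement.tsv');   % position
v   = mctimeder(d, 1);          % velocity     (1st time derivative)
a   = mctimeder(d, 2);          % acceleration (2nd time derivative)
j   = mctimeder(d, 3);          % jerk         (3rd time derivative)
spd = mcnorm(v);                % speed: Euclidean norm of the velocity, per marker
mcplottimeseries(spd, 1);       % inspect the speed of marker 1 over time
#+end_src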
CLOSED: [2017-03-02 Thu 09:39]
CLOCK: [2017-03-01 Wed 21:12]–[2017-03-02 Thu 09:09] => 11:57
cite:Proffitt:2013tv
Laban Movement Analysis :: <<<LMA>>> is a theoretical and experiential system for the observation, description, prescription, performance, and interpretation of human movement.
Walk through a door opening. Observe how you relate the opening to your own body width and the need to rotate your body when the opening becomes too narrow.
Select one or two movements in the playlist https://www.youtube.com/channel/UCellwPbJvcurLMqBCrvfDVQ
And describe it using Laban Movement Analysis (Effort). A table briefly describing the different Effort characteristics can be found in this document.
CLOCK: [2020-02-18 Tue 01:01]–[2020-02-18 Tue 01:46] => 0:45
Submit a pdf with your (brief) answers to the following tasks:
Select two movement videos from the MOCAP FILES folder below to study in more detail.
Try to describe the selected movements in terms of Laban Effort (Space, Time, Weight, and Flow). Which computable movement descriptors (see the paper by Larboulette & Gibet) would be good for describing and separating the movement characteristics of the two videos?
In the same folder you will find ASCII files with simple movement descriptors (position, derivatives, hand distance). Use a program of your choice to load and plot the files over time (a loading sketch follows the file details below). Compare them, and try to match them with the movements shown in the available AVI files. Which of these movement videos are you looking at? What other movement features might be useful to compute and compare?
Details on data files:
SmoothedPos.tsv - smoothed position data of three markers 1, 2, 3 (x, y, z)
Velocity.tsv - velocity data, three markers (x, y, z)
Acceleration.tsv - acceleration data, three markers (x, y, z)
Handdist.dat - distance between hands (markers 2, 3), vector
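Plain MATLAB is enough to load and plot these files. A sketch, assuming the .tsv files are tab-delimited numbers with 9 columns (3 markers x (x, y, z)) and no header row; adjust the indexing if your files differ:

#+begin_src matlab
% Load and plot the provided descriptor files.
% Assumption: tab-delimited numeric data, 9 columns = 3 markers x (x,y,z), no header.
pos = readmatrix('SmoothedPos.tsv', 'FileType', 'text');
vel = readmatrix('Velocity.tsv',    'FileType', 'text');
hd  = load('Handdist.dat');          % vector: distance between markers 2 and 3

figure;
subplot(3,1,1); plot(pos(:,1:3));                  title('Marker 1 position (x, y, z)');
subplot(3,1,2); plot(sqrt(sum(vel(:,1:3).^2, 2))); title('Marker 1 speed');
subplot(3,1,3); plot(hd);                          title('Hand distance (markers 2-3)');
#+end_src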
CLOCK: [2018-03-22 Thu 09:00]–[2018-03-22 Thu 11:20] => 2:20
https://www.moodle.aau.dk/course/view.php?id=25152#section-6
How can different perspectives aid and affect the design process and outcomes?
How do our bodies affect perception and action?
What are the similarities between movement and any other material in designing interactive systems?
How can the developers/designers develop and use their bodily skills?
Select one of the visualizations of motion capture data for Kung Fu, dance, and Music Conducting.
Suppose that the visualizations/sounds/feedback were interactive in real time.
How will the programming of the visualization promote a certain quality of movement for the mover? That is, what types of movements would you expect users to do with this type of feedback/interaction?
Variation 2:
Variation 3.1: Expanding into emptiness
Kung-fu: - body: circular movements within own kinesphere flexible - sword
Suppose you want the quality of interaction to be drastically different (for instance, slow and smooth instead of fast and jerky). How could the parameters of the visualizations/sounds/feedback be changed to encourage this type of movement instead?
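One way to make this concrete is to compute a simple smoothness-related feature and map it to a feedback parameter, then invert the mapping to reward the opposite quality. A hypothetical MATLAB sketch (the feature, scaling, and parameter name are illustrative, not taken from any of the systems shown):

#+begin_src matlab
% Hypothetical mapping from a movement feature to a feedback parameter.
% High 'calm' when jerk is low (rewards slow & smooth movement); invert the
% mapping to encourage fast & jerky movement instead.
d    = mcread('movement.tsv');    % placeholder recording
jerk = mcnorm(mctimeder(d, 3));   % jerk magnitude, per marker
jm   = mean(jerk.data, 2);        % average over markers, per frame
jm   = jm / max(jm);              % normalise to 0..1 (illustrative scaling)
calm = 1 - jm;                    % high when the movement is smooth
plot(calm); ylabel('feedback parameter (e.g., brightness or size)');
#+end_src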
CLOCK: [2018-03-22 Thu 16:21]–[2018-03-22 Thu 16:46] => 0:25
Please fill in a brief reflection on how doing the movement exercises felt and what you learned through them.
The exercises were:
Body scan (sitting, feet to head) in class, paying attention to the position, direction, and contact of the body.
Rolling heel-to-toe, falling into step (balance, contact with the floor, and which foot we step with - left).
Walking with different movement qualities (honey/oil; being a stick person; being a glass person; being a rubber person) (one dimension: viscosity; another: quantity of movement; plus scale).
Leader-follower (one person with eyes closed following the lead of another).
Changing perspectives (standing on desk, sitting under it)
trajectories of the markers (inc. virtual) are often mapped to a virtual skeleton:
defined by a hierarchy of joints and angle rotations
ensures that the body limbs have fixed lengths (see the sketch below).
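To see why the joint-and-angle representation guarantees fixed limb lengths, here is a tiny, self-contained 2D forward-kinematics sketch in MATLAB (bone lengths and angles are made up; no particular file format is assumed):

#+begin_src matlab
% Tiny 2D forward kinematics: a 3-joint chain (e.g., shoulder-elbow-wrist).
% Joint positions are derived from fixed bone lengths plus joint angles,
% so limb lengths cannot drift, unlike raw per-marker trajectories.
boneLen = [0.30 0.25 0.18];            % bone lengths in metres (made-up values)
angles  = deg2rad([30 45 -20]);        % joint rotations relative to the parent bone
p       = zeros(numel(boneLen)+1, 2);  % joint positions; root at the origin
theta   = 0;
for k = 1:numel(boneLen)
    theta    = theta + angles(k);      % accumulate rotation along the chain
    p(k+1,:) = p(k,:) + boneLen(k) * [cos(theta) sin(theta)];
end
plot(p(:,1), p(:,2), '-o'); axis equal;
title('Joint chain reconstructed from angles and fixed lengths');
#+end_src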
Mocap data are provided in different formats, e.g.,
C3D (3D marker positions)
Acclaim,
Biovision (BVH), and
Vicon (both the skeleton and the motion data);
text (comma or space delimited); and
more general 3D asset formats such as COLLADA and FBX.
http://effect.motionbank.org/ https://github.com/motionbank/effect-data-player https://github.com/motionbank/effect-player-examples https://medium.com/motion-bank/choreographic-coding-effect-b0ab5501c069
https://github.com/SMC7-2019/ASDF-RNN
Another method frequently mentioned by experts in soma-based design is to slow down or disrupt a habitual movement
to be able to discern small changes,
to note how the movements relate to your emotional experiences,
to enjoy or feel pain, and
to be engaged cite:Bell:2005ut, cite:Wilde:2017chi, Schiphorst 2007;
CLOSED: [2020-03-24 Tue 17:44] DEADLINE: <2018-03-14 Wed>
CLOCK: [2018-03-03 Sat 10:05]–[2018-03-03 Sat 10:07] => 0:02
http://www.cs.man.ac.uk/~toby/bvh/ Last update: <2016-10-19 Wed> Last visit : <2018-03-03 Sat>
https://www.moodle.aau.dk/course/view.php?id=25152#section-4
We will look at examples of MoCap data and tools to analyze it.
Then we divide into groups to test out
the Motion Capture system in Multisensory Experience lab
Rokoko Smart Suit (you can download smart suit studio)
We will do some data collection and a practical exercise of getting in (and analysing) movement data.
TOOL | Formats | DOC |
---|---|---|
MOKKA | C3D, other BIOMEC | |
ofxMotionMachine | | |
SCHEDULED: <2020-01-06 Mon>
CLOCK: [2018-08-28 Tue 15:18]–[2018-08-28 Tue 15:43] => 0:25
Explode | Bogdan | |
Sun salutation & bird position | Anna | |
Circle kick | Niclas | |
Zombie | Mathias RT | |
Pulling a rope | Patrick | |
Jumping | Camilla | |
Ballerina spin | Aishah | |
Karate Movement | Mathias MC | |
Hand stand arm spin | Laurynas | |
Circles | Franc | See below |
Worm Movement | Andreas | |
??? | Mads |
Tracing circles in the air with my arms, slowly, inwards and outwards. Laban effort factors: indirect, light, sustained, bound. Trying to convey a feeling of calmness and balance.
The Rokoko Suit allowed me to explore the space and rethink its boundaries myself: first I had thought of a still position, but then I decided to start walking into the room.
On the other hand, I felt constrained and a bit clumsy from wearing the suit. This led me to perform a heavier movement than I thought.
CLOCK: [2018-03-03 Sat 10:07]–[2018-03-03 Sat 10:11] => 0:04
Think of a movement to do {{{perform?}}}
Think of the QUALITY (alternative feeling) you want the movement to have and express. (In the optical MoCap, you will have markers on head, hands, and perhaps legs.)
Write this down on the post-it.
Once you have done the movement, you will be asked to describe how it FELT doing the movement.
We will note your intentions and descriptions of the movements done, and analyse them.
For instance using: http://biomechanical-toolkit.github.io/docs/Mokka/index.html https://github.com/Biomechanical-ToolKit/Mokka
We are also looking at this repo, as described in [1], but it doesn't look production-ready: https://github.com/numediart/ofxMotionMachine
CLOSED: [2017-03-09 Thu 09:32]
CLOCK: [2017-03-07 Tue 10:00]–[2017-03-07 Tue 11:30] => 0:03 CLOCK: [2017-03-07 Tue 13:45]–[2017-03-07 Tue 16:30] => 0:03
Data: https://www.c3d.org
Installed Max 7, the sadam library, and CNMAT tools; Lobjects are also needed for http://www.uio.no/english/research/groups/fourms/downloads/software/mcrtanimate/index.html. Downloaded Java from https://support.apple.com/kb/DL1572?locale=en_US to make the Max 5 standalone work.
Installed mocca
CLOSED: [2018-03-03 Sat 09:56]
CLOCK: [2017-03-09 Thu 09:39]–[2017-03-09 Thu 21:05] => 11:26
mcread
CLOCK: [2018-04-04 Wed 12:27]–[2018-04-04 Wed 12:34] => 0:07
Wwizard THINQ link to CD attachment: https://goo.gl/RtDqpf
MDA: https://www.youtube.com/watch?v=NxiGduvDJ8s https://youtu.be/uepAJ-rqJKA
Hunicke at Voices of VR: https://youtu.be/muITY_vs-FQ
Design space
Dagstuhl body-centered
human-centered machine learning?
CLOSED: [2018-04-03 Tue 10:31]
The Bohemian Rhapsody Experience
http://venvi.org/venvihome.html (code soon)
Roberto Pugliese's PhD; see also the intro video at https://vimeo.com/147323218
CLOCK: [2018-04-03 Tue 10:46]–[2018-04-03 Tue 11:11] => 0:25
That video is too emotional. This CHI16 one is better for guidelines: https://youtu.be/opUK79BJvJI. Exploit RISK.
CLOCK: [2018-04-03 Tue 14:03]–[2018-04-03 Tue 14:28] => 0:25
The game was developed in the game engine Unity 5 with three scripts:
Player: creates randomly generated obstacles and agents.
EnemyAI: contains the behaviour of the agents.
Output: stores the in-game data.
CLOCK: [2018-04-06 Fri 08:28]–[2018-04-06 Fri 08:53] => 0:25
[1/1]
ATTACH VR
CLOSED: [2019-04-09 Tue 08:20] SCHEDULED: <2019-03-14 Thu 09:00-12:00> DEADLINE: <2019-03-10 Sun>
CLOCK: [2019-03-14 Thu 11:31]–[2019-03-14 Thu 11:44] => 0:13 CLOCK: [2018-03-22 Thu 05:55]–[2018-03-22 Thu 07:23] => 1:28 CLOCK: [2018-03-22 Thu 05:42]–[2018-03-22 Thu 05:46] => 0:04 CLOCK: [2018-03-21 Wed 08:19]–[2018-03-21 Wed 08:56] => 0:37 CLOCK: [2018-03-11 Sun 09:01]–[2018-03-11 Sun 09:45] => 0:44
15.3 AM Cumhur
Learning outcomes: after this session, you will be able to
understand the three illusions in VR
place, plausibility, embodiment
understand the bodily skills needed for technological development, decision making, steering, and path finding in VR
apply methods and techniques to real world scenarios (VR) and project concepts
analyze, compare, and assess the potential of different methods and techniques in order to make the proper design choices in VR
Please check out the links in the MATERIAL before the lecture, and start thinking how to integrate these elements in your mini-project design.
CLOSED: [2019-09-11 Wed 08:45]
CLOCK: [2019-03-05 Tue 20:25]–[2019-03-05 Tue 20:50] => 0:25
cite:Smith2018_IJPADM:
cite:Dixon2006_IJPADM
deLahunta, Scott. 2002. Virtual Reality and Performance.
Gillies:2019:TOCHI: Distinguishes three interaction strategies:
object-focused
direct-mapping (VR)
movement-focused
\cite{Spanlang:2014fe}: How to build an embodiment lab
Embodiment, under certain conditions, can induce body ownership and agency.
\cite{oulasvirta2019}: Oulasvirta, Antti. “It's Time to Rediscover HCI Models.” Interactions 26 (2019): 52–56. doi:10.1145/3330340.
CLOSED: [2018-05-09 Wed 15:32] SCHEDULED: <2018-03-28 Wed>
Read the VR-book Chapter 4: Immersion, Presence, and Reality Trade-offs.
Consider the applications of the design guidelines to your mini-project (Section 5.4), and bring your ideas to the class.
Watch the interaction technologies & design examples part of the SIGGRAPH 2017 VR interaction course https://youtu.be/RNypfiiyI8A?t=2h4m10s (only the 3rd speaker )
CLOCK: [2018-03-21 Wed 09:57]–[2018-03-21 Wed 10:22] => 0:25
VR-book Chapters 4,5, 28, 29.
https://www.gdcvault.com/play/1023649/Human-Centered-Design-of-Immersive
What is
interaction fidelity?
GoGo Technique?
Stefania Serafin, Niels Christian Nilsson, Cumhur Erkut, and R Nordahl. 2016. Virtual reality and the senses. Danish Sound Innovation Network. Retrieved from https://issuu.com/danishsound/docs/dtu_whitepaper_2017_singlepages
Uncanny valley:
Andrea Stevenson Won, Jeremy N Bailenson, Jimmy Lee, and Jaron Lanier. 2015. Homuncular Flexibility in Virtual Reality. Journal of Computer-Mediated Communication 20, 3: 241-259. http://doi.org/10.1111/jcc4.12107
K Kilteni, Ilias Bergstrom, and Mel Slater. 2013. Drumming in immersive virtual reality: The body shapes the way we play. IEEE VR http://doi.org/10.1109/VR.2013.6549442
CLOCK: [2018-03-22 Thu 07:23]–[2018-03-22 Thu 07:48] => 0:25
https://www.moodle.aau.dk/mod/assign/view.php?id=749778
Select an interaction pattern from Ch28 (in lecture material) relevant to your mini-project. Find more examples (videos, projects, code, etc), and the associated guidelines in Ch29, if any.
Describe the interaction pattern as a first-person experience, emphasizing the differences of felt experiences in real and virtual worlds, and if possible in Laban dimensions.
Move to a related pattern until you describe at least three patterns.
Submit your descriptions as pdf.
CLOSED: [2018-03-22 Thu 07:34]
CLOCK: [2018-03-21 Wed 08:56]–[2018-03-21 Wed 09:41] => 0:45 :END:
A good investigation of interaction patterns, with application in mind
(direct hand) manipulation patterns
subtle / simple
GM won't look behind: not too much precision or fine control
non-spatial control patterns: gestures, voice communication for storytelling
indirect control patterns: users (GM) won't be watching the effects while they are "performing"
First-person perspective is good. Laban dimensions are missing (but maybe they could be added when you specify gestures).
Interesting mini-project and references. You're interested in EI + Storytelling, so check this out: http://voicesofvr.com/629-embodied-storytelling-innovations-from-sundance-2018-with-shari-frilot/
SCHEDULED: <2018-02-19 Mon 09:00 - 12:00>
CLOCK: [2020-03-24 Tue 14:34]–[2020-03-24 Tue 17:38] => 3:04 CLOCK: [2018-02-19 Mon 08:27]–[2018-02-19 Mon 12:27] => 4:00 CLOCK: [2018-02-07 Wed 14:23]–[2018-02-07 Wed 14:48] => 0:25 CLOCK: [2017-02-23 Thu 08:32]–[2017-02-23 Thu 08:57] => 0:25
Debate Instructions: https://www.youtube.com/watch?v=EWfMV_jbiOU example: Berkeley vs Harvard: https://www.youtube.com/watch?v=JhzwSlK4uEc Dartmouth practice: https://www.youtube.com/watch?v=LMO27PAHjrY
CLOSED: [2018-03-28 Wed 20:01]
In this debate, two opposing groups are formed, based on their stance with respect to
Kristina Höök, et. al. Embracing First-Person Perspectives in Soma-Based Design
Informatics 2018, 5(1), 8; doi:10.3390/informatics5010008 (Main literature)
Antti Oulasvirta and Kasper Hornbæk. 2016. HCI Research as Problem-Solving. ACM Press, 4956–4967. http://doi.org/10.1145/2858036.2858283
The groups will present their own views on the chosen theme (philosophical background) and give counter-arguments to the opposing views (e.g., philosophy of science vs. somaesthetics). The students practice presenting justifications and arguments for their own opinions and evaluating other people's opinions. They start with the arguments given in the two papers, applied to their own (previous) designs that they want to improve or extend through Embodied Interaction. The goal is not to beat the opponent, but to further one's own understanding.
A chairperson is chosen for the debate, who ensures everyone has a chance to talk. Cumhur and Sofia will help the chairperson, who will also make sure that the arguments do not last too long. The chairperson lets the opposing sides present their arguments in turns. If the debate does not progress, the chairperson may also give the teachers a chance to present a stimulating argument to further the debate. If it succeeds, the debate will push the participants to analyse their own opinions.
Phenomenology from Svanæs
Embodied cognition from Kirsh
CLOCK: [2020-03-20 Fri 09:24]–[2020-03-20 Fri 09:24] => 0:00 CLOCK: [2018-02-17 Sat 17:49]–[2018-02-17 Sat 18:14] => 0:25 CLOCK: [2018-02-17 Sat 08:16]–[2018-02-17 Sat 08:27] => 0:11
/GP43NC/smc8-courses-embodied-interaction/src/branch/master/Miniprojects-2021.org
/GP43NC/smc8-courses-embodied-interaction/src/branch/master/Miniprojects-2020.org
CLOSED: [2018-02-02 Fri 16:54]
CLOCK: [2018-02-02 Fri 12:50]–[2018-02-02 Fri 13:54] => 0:04 CLOCK: [2018-02-02 Fri 16:50]–[2018-02-02 Fri 16:54] => 0:04
The HTC VIVE opens a stage with drums, a keyboard, and a rhythm sequencer. Instruments have I/O controls and spatial sound – the main menu has different options for rendering, including mono. The app runs on Windows and mirrors to the desktop. Opening the Unity binary with left-shift pressed brings up the input choices, but I could not map the HTC VIVE controls to keyboard/Unity controls. Hence, the app will start up without an HTC VIVE, but could not be interacted with.
We could try out VRTK with the source code, or look into forks if anybody did that before. We could also check how to add new instruments. And learn how to jam with it. The headphone cable to VIVE is too short. MedifaceUSB drivers of the WFS64 are not running on the VIVE computers.