Module 2 Activity Research

Weekly Activity Template

Sanren Zhou


Project 2


Module 2

In Project 2, we continue our exploration of physical computing. Building on the previous module, our focus shifts to designing and developing more advanced interactions that connect the physical and digital worlds. This involves experimenting with various sensors that capture physical data, enabling us to prototype and test diverse forms of interaction.

Workshop 1 Guerrilla Prototyping I

In Workshop 1, my teammate and I planned to make a phone stand with adjustable angles. We cut five evenly spaced grooves into the cardboard without cutting through the back, so that the supporting panel could slot in securely without piercing the base, keeping the stand stable. We cut out the base and support pieces, then created a rectangular slot in the base for inserting the adjustable support structure. We focused on the adjustable part first, but after finishing it we realized the phone couldn't stay on the board. To solve this, we glued two small cardboard strips together and attached them under the board to support the phone. We then inserted the support panel into the slot, forming a tilted angle that held the phone securely. Although it was a simple craft, we still tested each notch to verify that the stand could support the phone stably at every adjustment level and allow the user to operate the phone smoothly. These two images show the final product at different angles, demonstrating the stand's stability and functionality in various positions.

Next, if we continue improving the stand, we will focus on enhancing its stability, especially in the parts that need to be bent frequently. The challenge will be strengthening those areas without affecting usability. We might also explore different assembly methods, such as interlocking joints instead of glue, or experiment with alternative materials. Overall, through the Activity 1 workshop, we learned how to build a prototype from scratch, saw that even simple designs can go wrong, and recognized that everything has room for iteration. This was a great starting point for developing prototypes in Project 2.

Workshop 2 Guerrilla Prototyping II

First, in Workshop 2, I drew several sketches as the starting point for building our prototype. My initial idea was that when the user rests their head on the pillow, they would block the light sensor embedded inside it, which would then trigger TouchDesigner. A projector facing the ceiling would display calming visuals above the user. However, this approach had some clear drawbacks: the projector's cables all exit from the back, making it impossible for the device to lie flat and face the ceiling properly, and placing a projector right next to the user's head is poor UX.
Another idea for presenting the prototype was to build a small room model out of cardboard. Instead of showing how the user interacts with the device, this version would directly display the final projected visuals inside the miniature room. However, we eventually rejected this approach because it didn’t clearly demonstrate the interaction between the user and the device. It focused too much on the final outcome rather than the process, which made it less effective for storytelling.
The third iteration was to place the light sensor on a cushion so that when the user leans against it, the sensor is covered and triggers TouchDesigner to play the visuals. This approach solved the main issue from the first idea, where we didn’t know where to place the projector.
We built a prototype from a cardboard box covered with fabric to simulate a cushion, placing the Arduino in front of it to represent the final usage scenario. However, this prototype revealed several issues:

1. Comfort: if the sensor sits on the surface of the cushion, the user will definitely feel it and find leaning on it uncomfortable.
2. Trigger accuracy: we need a way to ensure that TouchDesigner activates only when the user actually leans on the cushion. If the room simply gets dark, the light sensor might fire as well, which is clearly not what we want (a rough sketch of one possible software-side check follows this list).
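We did not implement this, but one possible software-side check would be to react to how fast the reading falls rather than to its absolute level: a head covering the sensor darkens it almost instantly, while a room usually dims gradually. A rough Arduino-style sketch of the idea, where the pin and threshold are made-up values that would need tuning by testing:

    const int SENSOR_PIN = A0;       // photoresistor voltage divider (assumed wiring)
    const int DROP_THRESHOLD = 200;  // how sharp a drop counts as "leaning" (tune by testing)

    int previousReading = 0;

    void setup() {
      Serial.begin(9600);
      previousReading = analogRead(SENSOR_PIN);
    }

    void loop() {
      int reading = analogRead(SENSOR_PIN);
      // A sudden large drop suggests the sensor was covered by the user,
      // while gradual ambient dimming changes the value slowly.
      if (previousReading - reading > DROP_THRESHOLD) {
        Serial.println("TRIGGER");  // message for TouchDesigner to react to
      }
      previousReading = reading;
      delay(100);  // sample roughly ten times per second
    }
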
One of our teammates tested the prototype. As shown in the red circle, the device is not very large, so the comfort level was slightly better than we expected, though there is still a lot of room for improvement. Because of this, in Project 3 we are considering replacing the light sensor with a pressure sensor. A pressure sensor is thinner and can be placed inside the cushion to improve comfort, and it would prevent the system from being accidentally activated just because the room becomes dark.

Activity 1: My Research


Our initial idea was to use a sensor kit because it includes many different sensors, which we thought would give us a lot of creative possibilities. However, the biggest problem with the sensor kit is that there are very few tutorials available for it. As a result, we had to adapt Arduino Uno tutorials to the sensor kit by finding the corresponding pin locations on the board. The image shows our attempt to follow an Arduino Uno tutorial and recreate the same circuit on the sensor kit. But the circuit did not work, and the light sensor could not control the LED.
As a side note, we connected the LED not because it is part of our final design, but because it served as a clear indicator to help us see whether the circuit was actually working.
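For reference, the tutorial behavior we were trying to reproduce boils down to something like the sketch below. The pin numbers and threshold here are illustrative assumptions, not the exact wiring from the tutorial:

    const int LIGHT_PIN = A0;        // photoresistor divider output (assumed)
    const int LED_PIN = 13;          // indicator LED (assumed)
    const int DARK_THRESHOLD = 300;  // below this the sensor counts as "covered"

    void setup() {
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      int lightLevel = analogRead(LIGHT_PIN);
      // Turn the LED on when the sensor is covered (the reading drops in the dark)
      digitalWrite(LED_PIN, lightLevel < DARK_THRESHOLD ? HIGH : LOW);
      delay(50);
    }
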
After discovering that the LED could not be controlled by the light sensor as expected, we began a series of troubleshooting steps. The first thing we checked was the sensor kit itself. Following a tutorial from Shotoku Tech, we found the starter code made specifically for the Arduino Sensor Kit in the Arduino IDE. The image shows our test using the accelerometer from the kit. We tested every sensor on the kit, and all of them worked correctly, so we ruled out the sensor kit as the source of the problem.

Next, we checked whether the LED was functioning properly. We wrote a simple program that keeps the LED lit and removed the resistor from the circuit to eliminate any interference, testing only the LED itself (a minimal version of this test is sketched below). The LED worked correctly, so the issue clearly did not come from the LED. At this point I had already started to suspect that the problem was an inappropriate resistor value preventing the circuit from working properly. However, at the time we believed we only had one type of resistor available, so we thought we had no way to change the resistance. As a result, we continued troubleshooting other possibilities and tried to think of alternative ways to make the circuit work.
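The LED-only test needs nothing more than driving its pin high; a minimal version (pin number assumed) looks like this:

    const int LED_PIN = 13;  // assumed pin for the test LED

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      digitalWrite(LED_PIN, HIGH);  // if the LED lights up, the LED itself is fine
    }

    void loop() {
    }
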
We continued the troubleshooting process. We looked up more tutorials and wrote code specifically for the light sensor. However, without the LED, it was difficult to tell whether the sensor was actually responding. Later, I discovered the Serial Plotter feature in the Arduino IDE, which shows real-time data as a graph. As shown in the image, when a hand is placed over the sensor, the line on the graph changes. This confirmed that our light sensor was also functioning correctly.
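Verifying a sensor this way takes only a few lines, since the Serial Plotter graphs whatever numbers the sketch prints. Something like the following (analog pin assumed), with Tools > Serial Plotter open in the Arduino IDE:

    const int LIGHT_PIN = A0;  // assumed analog pin for the light sensor

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // Each printed value becomes one point on the Serial Plotter graph;
      // covering the sensor should make the line dip visibly.
      Serial.println(analogRead(LIGHT_PIN));
      delay(50);
    }
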
The image shows some of the code we used during the troubleshooting process. We also searched the library for additional test code that had already been written and verified by others, so we knew the issue was not caused by coding errors. Even so, we still could not get the circuit to function as expected at this stage.

Activity 2: My Research

In the second stage of testing, we stopped using the sensor kit and switched to the Arduino Uno, trying to match the tutorial setup as closely as possible. As shown in the Arduino Serial Monitor, the light sensor was functioning correctly, since the values changed whenever a hand was placed over it.
After checking every component and confirming they all worked, we still couldn't find why the circuit refused to work. But time was running out, and we couldn't keep going in circles. We decided to meet at school over the weekend to at least build a prototype that could communicate our design idea to users. In a desk drawer, I found several resistors of different specifications. With a "let's just try it" mindset, we replaced the unknown resistor we had been using with a 10kΩ resistor for the light sensor and a 220Ω resistor for the LED.
This time, we finally succeeded. Our initial suspicion turned out to be correct: the problem was the resistor value. Since we had no alternatives at the time, we ended up spending a lot of time on troubleshooting. As shown in the image, when a hand covers the light sensor, the LED lights up. This matches the core idea of our final design: using changes in sensor data to trigger TouchDesigner. With this breakthrough, we were finally able to move forward.
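In hindsight, the values make sense. The 10kΩ resistor forms a voltage divider with the photoresistor, so the analog pin sees roughly Vout = 5V × 10k / (R_LDR + 10k) with the fixed resistor on the ground side (swapping the two simply inverts the response); as the photoresistor's resistance swings with light, this keeps the output in a range the Arduino can actually distinguish. For the LED, assuming a typical forward voltage of around 2V, the 220Ω resistor limits the current to about (5 − 2) / 220 ≈ 14 mA, comfortably within a standard LED's rating. The exact forward voltage depends on the LED, so treat these numbers as ballpark figures.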
We began connecting Arduino to TouchDesigner and followed the method from the Project 1 activity. We successfully transferred the light-sensor data into TouchDesigner.
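On the Arduino side, the data transfer is just line-delimited serial output; assuming the connection uses TouchDesigner's Serial DAT, as the common tutorials do, it then reads one value per line. A minimal sender (pin and baud rate are assumptions; the baud rate must match the Serial DAT's setting, and the Arduino IDE's Serial Monitor must be closed so the port is free):

    const int LIGHT_PIN = A0;  // assumed analog pin for the light sensor

    void setup() {
      Serial.begin(9600);  // must match the baud rate set on the Serial DAT
    }

    void loop() {
      // println appends a newline, which lets the Serial DAT
      // split the stream into one reading per row.
      Serial.println(analogRead(LIGHT_PIN));
      delay(33);  // roughly 30 readings per second
    }
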
After completing the particle animation following the tutorial, we wanted to connect the light sensor to TouchDesigner. We followed another tutorial in an attempt to bring the serial data into the particle animation we had already built. However, in the final step, when TouchDesigner was supposed to read the incoming data, an error appeared. Because of this, the data-mapping part is still incomplete and will require further learning and experimentation in Project 3.

Additional Research or Workshops

In Project 2, I was also responsible for creating the visuals in TouchDesigner. I first followed a tutorial to build a particle system that changes shape and brightness based on mouse movement. The main components I used were Metaball and Force.

Metaball creates a soft, blob-like influence field that can attract or repel particles.
The Force node acts as the particle system’s force field, and by connecting inputs such as mouse, metaball, and math values, I could control whether the particles are attracted, pushed away, or rotated.
This allowed me to design dynamic visual behaviors that respond smoothly to user input.
The Mouse In node reads the mouse position and outputs coordinate values that determine where the force field influences the particles. The Math node then converts, scales, or normalizes the Mouse In output so it fits the coordinate range of the particle force field. This allows the mouse movement to directly affect how the particles behave.
I also added a Lag function to smooth the input, making the movements from the mouse or sensor gentler and preventing the particle force field from shaking.
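The Lag node is TouchDesigner's built-in smoothing, but the same idea could equally be applied before the data ever leaves the Arduino, for example with a simple exponential moving average. This is a sketch of the general technique, not something we wired into our project:

    const int LIGHT_PIN = A0;  // assumed analog pin for the light sensor
    const float ALPHA = 0.1;   // smoothing factor: smaller = smoother but slower to respond

    float smoothed = 0;

    void setup() {
      Serial.begin(9600);
      smoothed = analogRead(LIGHT_PIN);  // start from the current reading
    }

    void loop() {
      // Exponential moving average: blend each new reading into the running value
      smoothed = ALPHA * analogRead(LIGHT_PIN) + (1 - ALPHA) * smoothed;
      Serial.println(smoothed);
      delay(33);
    }
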
This is the final working interface, but the effect I created doesn’t fully match the tutorial. I probably set one of the parameters incorrectly at some point. Even so, I still learned many TouchDesigner functions and built a solid foundation.
I followed another tutorial to create a flowing, wave-like particle animation. This kind of abstract, gently moving visual is the direction we want for a relaxing experience.
In the final stage of setup, we needed to hide all the parameter panels and show only the background visuals. At first, I didn’t know how to do this. After checking other tutorials and asking ChatGPT, I learned that in order to present the work properly, we needed to add an OUT node to output the final image.

Project 2


Project 2 Prototype

Our prototype video demonstrates how the entire device is intended to function under ideal conditions. For more details, please watch the video.

Project 2 continues the direction of Project 1 by exploring how real-time environmental data can shape relaxing visual output. In this stage, we focused on using light-based interaction as the main way to activate and influence the media experience, staying aligned with our goal of creating a device that helps reduce stress.
Our prototype uses a light sensor connected to Arduino to detect changes in brightness. When the environment darkens or when the user leans onto the cushion-like structure, the sensor values shift and trigger TouchDesigner to display soft, calming visuals such as slow-moving particles and gentle gradients. The cardboard cushion houses the circuit and allows the system to activate simply through the user’s presence.
Through this process, we learned how environmental input—especially light changes—can drive responsive media behaviors. By experimenting with particle animations and interactive mapping, we built a foundation for creating soothing sensory experiences. Project 2 becomes an early step toward developing a more immersive and relaxing system in the next stage.
Prototype Demo Video: https://youtube.com/shorts/X39DAAcwqGI?si=SACyuVqKreyc-OmZ