Our main idea is to create an interactive therapeutic experience that lets users speak out and let go of the deepest fears in their lives. Users get the chance to physically eliminate those fears by waving their left hand to swipe them away; the movement is detected with PoseNet. The fears are portrayed as scribbles drawn from an external database, the Google Quick, Draw! dataset, a collection of 50 million drawings across 345 categories. Whenever the user clicks "draw", a related object pops up on the screen. It is not only a fun game but also an interactive art-therapy experience.
Peer Feedback For Our Moodboard:
- Why do you call it White Night?
- How will the audience interact with the project?
- How will phones be used to interact with the project?
- I like the slicing effect because people can slice their fears.
- How will the movement help to achieve the purpose of helping people to overcome fear?
- How will you integrate the Google Quick, Draw! dataset into your project?
Our Response To Feedback:
We call it White Night because the fears inside our minds appear not only in our dreams but also in the daytime, that is, in our normal daily lives. Borrowed from classic novels and films, the name implies a sense of mystery, imagination, and fantasy. The audience will interact with the piece by adjusting their position and changing their gestures to "slice off" their fears. We observe that slicing has become a common movement in many stress-relief games, so we believe it can also be a good way to conquer fears. Because the behavior is so familiar, users should be able to navigate the piece by themselves even without instructions. We plan to connect phones so that users can submit their inputs by logging onto our web page (or by scanning a QR code) from their own devices.
Development Progress:
- Imported the ml5 library and BodyPix
- Logged all the data and indices we need to the console for inspection
- Imported the Google Quick, Draw! API
- Worked out how to fetch finished Quick, Draw! sketches and convert them into pictures
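In the simplified Quick, Draw! dataset, each line of the ndjson file is one drawing whose `drawing` field holds strokes as paired x- and y-arrays. A small sketch of how such a record can be turned into per-stroke point lists (the `parseDrawing` helper and the tiny sample record are our own, shaped like the public dataset):

```javascript
// Turn one line of the simplified Quick, Draw! ndjson into
// per-stroke arrays of {x, y} points.
function parseDrawing(ndjsonLine) {
  const record = JSON.parse(ndjsonLine);
  // record.drawing is an array of strokes: [[x0, x1, ...], [y0, y1, ...]]
  return record.drawing.map(([xs, ys]) =>
    xs.map((x, i) => ({ x, y: ys[i] }))
  );
}

// A tiny hand-made record in the same shape as the real dataset:
const sample = '{"word":"cat","drawing":[[[0,50,100],[0,25,0]]]}';
const strokes = parseDrawing(sample);
console.log(strokes[0]); // → [{x:0,y:0}, {x:50,y:25}, {x:100,y:0}]
```

Once the strokes are in this shape, they can be drawn point by point or rendered into an off-screen buffer.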
- Used createGraphics() to create off-screen graphics buffers (like mini canvases) and image(img, x, y) to place them on the screen.
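A minimal, browser-only p5.js sketch of this pattern (the buffer size and the placeholder scribble are our own stand-ins):

```javascript
// Draw a scribble into an off-screen buffer once in setup(),
// then stamp it onto the main canvas with image() every frame.
let fearSketch; // off-screen graphics buffer

function setup() {
  createCanvas(640, 480);
  // createGraphics() returns a separate canvas we can draw into
  fearSketch = createGraphics(100, 100);
  fearSketch.noFill();
  fearSketch.stroke(0);
  fearSketch.ellipse(50, 50, 80, 80); // placeholder scribble
}

function draw() {
  background(255); // wipe each frame, flipbook style
  image(fearSketch, mouseX, mouseY); // place the buffer on screen
}
```

Because the buffer is drawn once and reused, the scribble does not have to be redrawn stroke by stroke every frame.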
- Figured out a criterion for the slicing movement. One way is to create two separate vectors from the points where the wrist enters the image and where it exits, then calculate the angle between them to see if it is greater than 90 degrees. I used collidePointLine() from the p5.collide2D library, and it works well when there is only one still image on the screen. But when I imported the array with many flying images, it became very hard to detect the slicing movement, so I switched to collidePointRect() to detect the collision; now, however, I can no longer reliably get an obtuse angle.
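The angle test can be isolated as a pure function, which makes it easy to check without PoseNet running. In this sketch the two vectors are built from the image center to the entry and exit points; that center-based construction, and all the names, are our assumptions, not the project's exact code:

```javascript
// Angle between two vectors via the dot product, in radians.
function angleBetween(ax, ay, bx, by) {
  const dot = ax * bx + ay * by;
  const magA = Math.hypot(ax, ay);
  const magB = Math.hypot(bx, by);
  return Math.acos(dot / (magA * magB));
}

// A "slice" counts when the entry→center and exit→center directions
// differ by more than 90 degrees, i.e. the wrist passed through.
function isSlice(center, entry, exit) {
  const v1x = entry.x - center.x, v1y = entry.y - center.y;
  const v2x = exit.x - center.x, v2y = exit.y - center.y;
  return angleBetween(v1x, v1y, v2x, v2y) > Math.PI / 2;
}

// Wrist passes straight through the image, so the angle is 180°:
console.log(isSlice({ x: 50, y: 50 }, { x: 0, y: 50 }, { x: 100, y: 50 })); // → true
```

A grazing touch where the wrist enters and leaves on the same side gives an acute angle and is rejected, which is the behavior the criterion is after.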
- Made the created graphics bounce around the screen, but the moving graphics leave traces on the canvas.
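The bounce itself can be written as a pure update step: advance by the velocity, then reverse direction at the canvas edges. A sketch of that step (all names are ours):

```javascript
// One frame of bouncing motion for a square sprite inside a w×h canvas.
function bounceStep(s, w, h) {
  let { x, y, vx, vy, size } = s;
  x += vx;
  y += vy;
  if (x < 0 || x + size > w) vx = -vx; // hit left/right edge
  if (y < 0 || y + size > h) vy = -vy; // hit top/bottom edge
  return { x, y, vx, vy, size };
}

let sprite = { x: 95, y: 10, vx: 10, vy: 0, size: 20 };
sprite = bounceStep(sprite, 100, 100);
console.log(sprite.vx); // → -10, velocity flipped at the right edge
```

As for the traces: in p5.js they appear when the canvas is never wiped, so calling background() at the top of draw() before redrawing every sprite removes them.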
- Explored how to clear the canvas (to remove the traces) while still letting each drawing animation finish before it disappears
- Created a drawNextStroke() function that animates the drawing of one stroke, then calls it with an increasing stroke index as the draw loop runs continuously.
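The same "reveal the sketch gradually" idea can be expressed as a pure function: given all the strokes and a frame counter, return only the points revealed so far, and let the draw loop render that slice each frame. The `revealedPoints` name and the one-point-per-frame pacing are our assumptions:

```javascript
// Return the prefix of the strokes that should be visible at `frame`,
// revealing one new point per frame across consecutive strokes.
function revealedPoints(strokes, frame) {
  const out = [];
  let budget = frame; // how many points we may still show
  for (const stroke of strokes) {
    if (budget <= 0) break;
    out.push(stroke.slice(0, budget));
    budget -= stroke.length;
  }
  return out;
}

const strokes = [[1, 2, 3], [4, 5]]; // stand-ins for {x, y} points
console.log(revealedPoints(strokes, 4)); // → [[1, 2, 3], [4]]
```

Because the function is driven only by the frame counter, the reveal keeps progressing even though the canvas is wiped at the start of every draw() call.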
- Tried to figure out the slicing problem and found that (1) the second point is recorded right after the first, so the angle may never reach 90 degrees; (2) lerp() somehow doesn't work. (Going to find other ways to track more dots between those two points if time permits.)
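What the interpolation is meant to provide, written out as plain functions: sample extra points between the two recorded wrist positions so the collision check sees a denser path. The helper names are ours; p5's lerp() performs the same per-axis interpolation:

```javascript
// Linear interpolation between a and b at parameter t in [0, 1].
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Sample `steps + 1` evenly spaced points from p1 to p2 inclusive.
function samplePath(p1, p2, steps) {
  const pts = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    pts.push({ x: lerp(p1.x, p2.x, t), y: lerp(p1.y, p2.y, t) });
  }
  return pts;
}

console.log(samplePath({ x: 0, y: 0 }, { x: 10, y: 20 }, 2));
// → [{x: 0, y: 0}, {x: 5, y: 10}, {x: 10, y: 20}]
```

Feeding each sampled point through the collision test approximates a continuous wrist path instead of just two isolated frames.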
- Understood what animation from draw() is: flipbook animation. Every frame the canvas is wiped clean and redrawn with slightly more than the previous frame. Completed the canvas clearing.
- Used arrays and a class to make a smoke effect whenever a fear is swiped away
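A sketch of that particle pattern: a class for one puff of smoke, an array holding the live particles, and a per-frame pass that updates everything and drops the dead ones. Only the update logic is shown here, since the drawing calls are p5-specific; all names and constants are our own:

```javascript
// One smoke particle: drifts upward and fades out.
class SmokeParticle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vy = -1;     // drift upward
    this.alpha = 255; // fade out over time
  }
  update() {
    this.y += this.vy;
    this.alpha -= 5;
  }
  isDead() {
    return this.alpha <= 0;
  }
}

let smoke = [];
// spawn a burst where a fear was swiped out
for (let i = 0; i < 3; i++) smoke.push(new SmokeParticle(100, 100));
// per-frame: update everything, then drop dead particles
smoke.forEach(p => p.update());
smoke = smoke.filter(p => !p.isDead());
console.log(smoke.length); // → 3, all still alive after one frame
```

Filtering the array each frame keeps memory bounded no matter how many fears get sliced.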
- Used variables to assign each sketch on the screen, and managed to splice the assigned image out of the array when a slicing move hits it.
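The removal step can be sketched as a find-then-splice on the array of on-screen fears. The objects and the id-based lookup below are stand-ins for the project's real sketch objects and collision test:

```javascript
// On-screen fears, each tagged so a slice can identify its target.
let sketches = [
  { id: "spider", x: 40, y: 60 },
  { id: "dark", x: 200, y: 120 },
];

// Remove the fear the slice hit, mutating the array in place.
function removeSliced(sketches, hitId) {
  const i = sketches.findIndex(s => s.id === hitId);
  if (i !== -1) sketches.splice(i, 1);
  return sketches;
}

removeSliced(sketches, "spider");
console.log(sketches.map(s => s.id)); // → ["dark"]
```

Using splice() (rather than delete) keeps the array dense, so the draw loop can keep iterating over it without gaps.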
- Uploaded the project to the ITP database and a Heroku app
- Added a voiceover and made a recorded demo.