My name is Yiying (Jade) Lin, a London-based multidisciplinary artist and creative technologist who works closely with the game medium. My creative practice is characterised by a mix of mediums and techniques, ranging from machine-learning-aided audio-visual installations to games as immersive interactive narratives, underpinned by a distinct theme of curiosity and play.
Walk-through video
Toxic Parents Simulator is a time-based VR simulation that offers players a strongly immersive and interactive experience. It situates them in a 'toxic household' where they must complete their morning tasks on time while dealing with toxic parents who grow progressively more abusive as time passes.
The creation of Toxic Parents Simulator was driven by my own experience growing up in such a family; however, it turns serious topics and personal trauma into a gamified experience with a tone of dark humour, inspired by the satirical backdrop of Job Simulator. The game calls for care and awareness around domestic violence, and could potentially be used in an awareness-raising educational setting for toxic parents themselves.
Developed for the Meta Quest platform using Unity (XR Interaction Toolkit), the game is designed with diegetic UIs and high-affordance, VR-specific interactions that replicate what one does in real life, such as eating food and opening drawers. Each task comes with its own unique challenges and interactions, and we aimed to deliver an intuitive and engaging playing experience.
Team Members:
Yiying Jade Lin - Lead game designer, developer, animator, script writer, project manager
Xinde Ren - AI voice-line generation; implemented part of the features for the dad avatar.
When first entering the game, the player sees a completely dark screen while the mother's voice plays, asking them to wake up and catch the school bus by 7 AM. This sets the context of the game: the player is a student with difficult parent(s), tasked with leaving the house for school within the hour. Once the sound finishes playing, a hint text is displayed asking the player to press A to wake up, which transitions to Scene 2, the main game.
Once the player enters Scene 2, they'll find themselves in bed with a phone ringing and vibrating (an animation) on the bedside table. The player should intuitively grab the phone to stop the ringing; while the phone is held, a world-space to-do-list UI with checkboxes floats above it. It's an analogue list the player can scroll and use to mark tasks as complete if they wish. The player can also press A to spawn the phone into their hands. This design reinforces the game objective and helps the player keep track of what they're doing.
Time also plays an important role in this game: two analogue clocks in the scene remind the player of the time. Mom's anger level goes up every 20 minutes, indicated by the number of fire sprites floating above her head; her animation and voice lines also change according to her mood, giving the player a sense of urgency.
Winning or losing is decided by whether the player completes the tasks on time. Either way, a pop-up window appears with educational debriefing text and an option for the player to restart. If the player attempts to leave the house before completing the tasks, Mom will stop them angrily.
The project development window was 1.5 months, which I divided into three stages.
Stage 1 covered the basic design of the game: I first consolidated a game design document, wrote down the key tasks and the technologies needed for each feature, noted the new skills I would need to learn, and started writing scripts for the avatars. I then did a basic VR project setup and created a repository on GitHub.
Stage 2 was the main development stage, where I implemented multiple game features such as mouth interactions, UI elements, avatar AI navigation, animation, anger levels, and task-handling logic. By the end of Stage 2, the game was a minimum viable product (MVP) with all the key features implemented.
Stage 3 was for the final polish: I added features such as the end-game UI, animated the mom avatar's face, baked lighting for better performance, and added an anti-cheat function to prevent the player from peeking through walls.
I added a low-pass filter to muffle Mom's knocking and chasing voice in the entry scene, making it sound like it comes through the door.
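A minimal sketch of this effect using Unity's built-in AudioLowPassFilter component; the cutoff values and method name here are my own illustrative choices:

```csharp
using UnityEngine;

// Muffles the mother's voice so it sounds like it comes through the bedroom door.
[RequireComponent(typeof(AudioSource), typeof(AudioLowPassFilter))]
public class MuffledVoice : MonoBehaviour
{
    [SerializeField] float muffledCutoff = 800f;   // Hz; low cutoff = "through the door"
    [SerializeField] float normalCutoff = 22000f;  // effectively unfiltered

    AudioLowPassFilter lowPass;

    void Awake() => lowPass = GetComponent<AudioLowPassFilter>();

    // Toggle the filter, e.g. when the bedroom door opens.
    public void SetMuffled(bool muffled) =>
        lowPass.cutoffFrequency = muffled ? muffledCutoff : normalCutoff;
}
```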
To achieve a smooth scene transition, I positioned a quad to cover the camera view, with a material shader and a script that controls the alpha value of the material, both when the scene switch is triggered and when the new scene is loaded.
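A sketch of what such a fade controller can look like, assuming the quad's material uses a transparent shader with a standard colour property and that the fader rig (camera plus quad) is kept alive across the load:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Fades a camera-covering quad to black, loads the next scene, then fades back in.
public class ScreenFader : MonoBehaviour
{
    [SerializeField] Renderer fadeQuad;     // the quad in front of the camera
    [SerializeField] float fadeDuration = 1f;

    void Awake() => DontDestroyOnLoad(gameObject); // survive the scene switch

    public void TransitionTo(string sceneName) => StartCoroutine(FadeAndLoad(sceneName));

    IEnumerator FadeAndLoad(string sceneName)
    {
        yield return Fade(0f, 1f);                         // fade to black
        yield return SceneManager.LoadSceneAsync(sceneName);
        yield return Fade(1f, 0f);                         // fade back in
    }

    IEnumerator Fade(float from, float to)
    {
        Material mat = fadeQuad.material;
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            Color c = mat.color;
            c.a = Mathf.Lerp(from, to, t / fadeDuration); // drive the material's alpha
            mat.color = c;
            yield return null;
        }
    }
}
```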
I tried to create an apartment/family look for the environment to fit the context of the game, using free assets from the Unity Asset Store. I added a custom skybox so that when the player looks through the window, they know they're in a neighbourhood.
When baking the lighting, I chose a warm colour to give the scene a homely, early-morning look.
When the player first enters the main scene, the phone's ringing sound plays on awake. To make the phone easier to find, I added a vibration effect by applying small random transforms to the phone via script. Both the sound and the animation stop when the phone is grabbed (Select Entered); at the same time, a scrollable world-space to-do-list UI is shown floating above the phone. All of this is controlled by a single script (see image).
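The sketch below approximates what that script does; the class and field names are illustrative, and it assumes XRI 2.x-style APIs:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Rings and jitters the phone until it is grabbed, then shows the to-do list.
[RequireComponent(typeof(XRGrabInteractable), typeof(AudioSource))]
public class AlarmPhone : MonoBehaviour
{
    [SerializeField] GameObject todoListUI;   // world-space canvas above the phone
    [SerializeField] float shakeAmount = 0.003f;

    XRGrabInteractable grab;
    AudioSource ringtone;
    Vector3 restPosition;
    bool ringing = true;

    void Awake()
    {
        grab = GetComponent<XRGrabInteractable>();
        ringtone = GetComponent<AudioSource>();
        restPosition = transform.localPosition;
        grab.selectEntered.AddListener(OnGrabbed);
        ringtone.Play();                      // ring on awake
    }

    void Update()
    {
        if (ringing)                          // fake vibration: tiny random offsets
            transform.localPosition = restPosition + Random.insideUnitSphere * shakeAmount;
    }

    void OnGrabbed(SelectEnterEventArgs args)
    {
        ringing = false;
        ringtone.Stop();
        todoListUI.SetActive(true);           // float the to-do list above the phone
    }
}
```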
Using a phoneSpawner script, the player can also press the A button to spawn the phone into their hand.
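A minimal sketch of such a spawner using the Input System, assuming an input action bound to the right controller's A (primary) button; the attach point is illustrative:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Teleports the phone into the player's hand when the A button is pressed.
public class PhoneSpawner : MonoBehaviour
{
    [SerializeField] InputActionProperty aButton;  // bound to the A button
    [SerializeField] Transform phone;
    [SerializeField] Transform handAttachPoint;

    void OnEnable()  => aButton.action.Enable();
    void OnDisable() => aButton.action.Disable();

    void Update()
    {
        if (aButton.action.WasPressedThisFrame())
            phone.SetPositionAndRotation(handAttachPoint.position, handAttachPoint.rotation);
    }
}
```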
There are two mouth events/interactions: one is eating and drinking, the other is brushing teeth.
This is achieved by attaching a collider under the XR camera to act as the mouth, with a Consumer script attached that detects which item has collided with the mouth, as well as whether the relevant task has been completed.
Each food item contains four different models representing how much of it has been eaten, and each has a Consumable script attached. When a collision with the mouth is detected, the model index increments, switching to a different model to show the food being eaten, and the script checks whether the current food item is fully finished. A munching sound also plays with every bite the player takes.
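A sketch of what the Consumable behaviour might look like, with the mouth's Consumer script calling TakeBite() from its trigger callback; names and details are illustrative:

```csharp
using UnityEngine;

// One bite per mouth collision: swap to the next "eaten" model and play a munch.
public class Consumable : MonoBehaviour
{
    [SerializeField] GameObject[] biteModels;  // 4 models, from untouched to finished
    [SerializeField] AudioSource munchSound;

    int biteIndex;

    // The Consumer script on the mouth checks this to mark the task complete.
    public bool IsFinished => biteIndex >= biteModels.Length - 1;

    public void TakeBite()
    {
        if (IsFinished) return;
        biteModels[biteIndex].SetActive(false); // hide the current model...
        biteIndex++;
        biteModels[biteIndex].SetActive(true);  // ...and show the next "eaten" stage
        munchSound.Play();
    }
}
```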
The toothbrush item is tagged "Toothbrush", so when it collides with the mouth, it plays a brushing sound and keeps the controller vibrating for as long as the toothbrush remains in contact with the mouth.
According to user research, players found the mouth interactions "the most satisfying".
Players can open and close drawers using a direct interactor. I set the drawer handle as an XR Grab Interactable and added a Configurable Joint to the drawer body, locking the drawer's motion on the X and Y axes and limiting its movement on the Z axis, so it opens and closes without exceeding its boundary.
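The same joint configuration done in code rather than the inspector looks roughly like this sketch; the travel distance is illustrative:

```csharp
using UnityEngine;

// Locks the drawer body so it can only slide a short distance along its local Z axis.
[RequireComponent(typeof(Rigidbody), typeof(ConfigurableJoint))]
public class DrawerJointSetup : MonoBehaviour
{
    [SerializeField] float slideDistance = 0.35f; // metres of travel

    void Awake()
    {
        var joint = GetComponent<ConfigurableJoint>();
        joint.xMotion = ConfigurableJointMotion.Locked;
        joint.yMotion = ConfigurableJointMotion.Locked;
        joint.zMotion = ConfigurableJointMotion.Limited;  // slide only along Z
        joint.angularXMotion = ConfigurableJointMotion.Locked;
        joint.angularYMotion = ConfigurableJointMotion.Locked;
        joint.angularZMotion = ConfigurableJointMotion.Locked;
        joint.linearLimit = new SoftJointLimit { limit = slideDistance };
    }
}
```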
Doors can be opened and closed by grabbing the handle, which triggers an opening or closing animation. This is achieved using an XR Simple Interactable with a script that controls the door's animator on Select Entered.
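A sketch of the handle script, assuming an Animator with a bool parameter (here called "Open") whose transitions play the open and close clips:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Toggles the door's open/close animation whenever the handle is grabbed.
[RequireComponent(typeof(XRSimpleInteractable))]
public class DoorHandle : MonoBehaviour
{
    [SerializeField] Animator doorAnimator;
    bool open;

    void Awake() =>
        GetComponent<XRSimpleInteractable>().selectEntered.AddListener(_ => Toggle());

    void Toggle()
    {
        open = !open;
        doorAnimator.SetBool("Open", open); // animator transitions play the clips
    }
}
```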
One of the main objectives of the game is finding the correct items in the room. When a correct item collides with the bag, it shrinks and disappears to symbolise being packed. Mom's chasing sound plays on collision with the bag to remind the player and create a sense of urgency, and she also says something when the task is completed. The script keeps track of whether all items have been correctly packed.
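A sketch of how the bag logic could look; the required tags and shrink timing are illustrative:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Detects required items dropped into the school bag, shrinks them away,
// and reports when everything has been packed.
public class SchoolBag : MonoBehaviour
{
    [SerializeField] List<string> requiredTags = new List<string> { "Book", "PencilCase" };
    [SerializeField] AudioSource momChasingVoice;

    readonly HashSet<string> packed = new HashSet<string>();

    public bool AllPacked => packed.Count == requiredTags.Count;

    void OnTriggerEnter(Collider other)
    {
        momChasingVoice.Play();  // urgency cue on every bag contact
        foreach (string requiredTag in requiredTags)
        {
            if (other.CompareTag(requiredTag) && packed.Add(requiredTag))
            {
                StartCoroutine(ShrinkAway(other.transform));
                return;
            }
        }
    }

    IEnumerator ShrinkAway(Transform item)
    {
        Vector3 start = item.localScale;
        for (float t = 0f; t < 0.5f; t += Time.deltaTime)
        {
            item.localScale = Vector3.Lerp(start, Vector3.zero, t / 0.5f);
            yield return null;
        }
        Destroy(item.gameObject);  // item is now "packed"
    }
}
```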
I scripted the majority of the mother's voice lines and marked them with three anger levels, so my teammate could generate AI voice lines with the corresponding tone.
When the player walks by (collides with the mother's outer collider), the mother avatar says a random chasing voice line. She also turns towards the player while speaking, adding a sense of realism.
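A sketch of that proximity behaviour, assuming the player rig is tagged "Player"; the turn speed is illustrative:

```csharp
using UnityEngine;

// On entering Mom's outer trigger, play a random chasing line and turn her
// toward the player while she speaks.
public class MomProximityVoice : MonoBehaviour
{
    [SerializeField] AudioSource voice;
    [SerializeField] AudioClip[] chasingLines;

    Transform player;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        player = other.transform;
        voice.PlayOneShot(chasingLines[Random.Range(0, chasingLines.Length)]);
    }

    void Update()
    {
        if (player == null || !voice.isPlaying) return;
        Vector3 toPlayer = player.position - transform.position;
        toPlayer.y = 0f;  // rotate on the yaw axis only
        transform.rotation = Quaternion.Slerp(
            transform.rotation, Quaternion.LookRotation(toPlayer), Time.deltaTime * 3f);
    }
}
```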
Every 20 minutes, the mother announces the time, reinforcing the player's sense of urgency.
Upon completion of any task, the mother also says something to remind the player that the task is done.
I set the mother's avatar as a NavMesh Agent and baked the scene with Unity's NavMesh, then implemented a script that makes her walk to a few set spots in the house at random. If the player enters her boundary while she's walking, she stops, says her line, and then continues walking to her spot.
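A sketch of the patrol logic using Unity's NavMeshAgent; the pause duration and names are illustrative:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AI;

// Sends the mother to random preset spots; pauses to speak when the player is near.
[RequireComponent(typeof(NavMeshAgent))]
public class MomPatrol : MonoBehaviour
{
    [SerializeField] Transform[] waypoints;   // preset spots around the flat
    NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        PickNextSpot();
    }

    void Update()
    {
        // Arrived at the current spot? Pick another at random.
        if (!agent.pathPending && agent.remainingDistance <= agent.stoppingDistance)
            PickNextSpot();
    }

    void PickNextSpot() =>
        agent.SetDestination(waypoints[Random.Range(0, waypoints.Length)].position);

    // Called by the outer trigger collider when the player walks by.
    public void PauseAndSpeak(float seconds) => StartCoroutine(Pause(seconds));

    IEnumerator Pause(float seconds)
    {
        agent.isStopped = true;               // stop, say the line...
        yield return new WaitForSeconds(seconds);
        agent.isStopped = false;              // ...then continue to the spot
    }
}
```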
One of the player's tasks is to ask their parents to sign an exam paper (tagged "exam"). When the exam paper collides with the mother, she plays a voice line refusing to sign it because of its low grade.
Using the same logic, I tagged some items as "weapon"; when a "weapon" collides with the mother, she gets angry and asks the player to be respectful.
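Both reactions boil down to the same tag comparison, roughly like this sketch (the clip fields are illustrative):

```csharp
using UnityEngine;

// Plays a different voice line depending on what touched the mother.
public class MomItemReactions : MonoBehaviour
{
    [SerializeField] AudioSource voice;
    [SerializeField] AudioClip refuseToSignLine;  // "exam" reaction
    [SerializeField] AudioClip beRespectfulLine;  // "weapon" reaction

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("exam"))
            voice.PlayOneShot(refuseToSignLine);
        else if (other.CompareTag("weapon"))
            voice.PlayOneShot(beRespectfulLine);
    }
}
```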
Animated 2D fire sprites, billboarded to always face the camera, appear above Mom's head: one is added every 20 (in-game) minutes and one is removed per task completion.
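The billboard part uses the common Unity trick of rotating the sprite so its visible face always points back at the camera; this sketch assumes the XR camera is tagged MainCamera:

```csharp
using UnityEngine;

// Keeps a 2D fire sprite facing the player's camera every frame.
public class Billboard : MonoBehaviour
{
    Transform cam;

    void Start() => cam = Camera.main.transform;

    void LateUpdate() =>
        // Point the object's +Z away from the camera, so the sprite's face shows.
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
}
```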
Using a state machine and conditions in Unity's Animator, I control the Mom avatar's body language and speech depending on her stage (anger level).
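A sketch of how the anger level can drive that state machine, assuming an integer Animator parameter (here called "AngerLevel") that the transition conditions compare against:

```csharp
using UnityEngine;

// Pushes Mom's current anger level into the Animator; transitions in the
// state machine then switch her body language and voice lines per level.
public class MomMoodAnimator : MonoBehaviour
{
    [SerializeField] Animator animator;

    public void SetAngerLevel(int level) => animator.SetInteger("AngerLevel", level);
}
```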
Since the animations I downloaded from Mixamo don't include facial animation, I created my own using Unity Face Capture and ARKit with my iPhone, stored the new animations, and applied them to the avatar's corresponding events.
First, as requested, I helped her set up her avatar by importing and configuring it and connecting the animator, so she could start working on her part.
Her initial implementation of the father's coughing wasn't in sync with the coughing animation; instead, it played a coughing sound every 60 seconds, which didn't align with the animation clip. Her chosen coughing audio clip also had a duration of 14 seconds. To fix this, I first trimmed the coughing audio to 3 seconds, then wrote a function to insert an event (play coughing sound) into the coughing animation clip.
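A sketch of that fix using Unity's runtime AnimationClip.AddEvent API; it assumes the avatar exposes a PlayCoughSound() method for the event to call, and the event time is illustrative:

```csharp
using UnityEngine;

// Inserts a "play cough sound" event into the coughing animation clip so the
// audio always lines up with the animation instead of a fixed 60-second timer.
public class CoughEventInjector : MonoBehaviour
{
    [SerializeField] AnimationClip coughClip;
    [SerializeField] float coughTime = 0.4f; // seconds into the clip where the cough lands

    void Awake()
    {
        var evt = new AnimationEvent
        {
            time = coughTime,
            functionName = "PlayCoughSound" // method on the same GameObject as the Animator
        };
        coughClip.AddEvent(evt); // runtime-only event; not saved into the asset
    }
}
```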
Her original script also fired the father's exam-paper voice line on any collision with the father. I fixed this by adding a tag to the exam paper and adding a tag-comparison condition for triggering the voice line.
I understand that the game deals with sensitive topics, so debriefing text is shown at the end regardless of the win/lose condition, urging possible victims to seek help and urging parents to rethink their behaviour.
The restart button will restart the current scene.
I added an anti-cheat feature so the player cannot look through walls. This is achieved with a quad positioned right in front of the camera and a script that fades its colour whenever the camera collides with the noPeek layer (basically any walls and drawers).
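A sketch of that fade, assuming the camera carries a small trigger collider (plus a kinematic Rigidbody) so it receives trigger callbacks; the fade speed is illustrative:

```csharp
using UnityEngine;

// Fades a quad in front of the camera to black while the player's head is
// inside anything on the "noPeek" layer (walls, drawers).
public class NoPeek : MonoBehaviour
{
    [SerializeField] Renderer fadeQuad;
    [SerializeField] float fadeSpeed = 6f;

    int noPeekLayer;
    int overlaps;          // how many noPeek colliders the head is currently inside

    void Awake() => noPeekLayer = LayerMask.NameToLayer("noPeek");

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.layer == noPeekLayer) overlaps++;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject.layer == noPeekLayer) overlaps--;
    }

    void Update()
    {
        // Ease the quad's alpha towards opaque (inside a wall) or transparent.
        Color c = fadeQuad.material.color;
        float target = overlaps > 0 ? 1f : 0f;
        c.a = Mathf.MoveTowards(c.a, target, fadeSpeed * Time.deltaTime);
        fadeQuad.material.color = c;
    }
}
```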
As my first VR project, this was a steep learning curve. My initial goal was to create a psychologically impactful simulator with complex avatar interactions that dynamically responded to the player, but that scope proved overly ambitious for the project time frame. I therefore shifted my emphasis from deep emotional immersion to a gamified task-completion experience, mitigating potential harm to players from the intense dialogue while still offering a glimpse into the stressful environment of a toxic household. The stylised anger indicator above the mother's head reinforces the game-like nature of the experience, offering a layer of detachment from real-world trauma. By focusing on more gamified VR mechanics, including not only simple interactions like grabbing and teleporting but also more challenging elements like avatars, eating, and physical interaction with drawers, I can confidently say that by the end of this project I understood, and was able to create, a good range of VR interactions.
The most significant challenge was navigating a team dynamic where contributions were unevenly distributed. This experience taught me valuable lessons in communication, patience, leadership, conflict resolution and the importance of taking initiative rather than relying on others. I actively sought solutions by creating clear task outlines, offering support and tutorials, and encouraging collaboration. In the future, I'll prioritise early communication with my supervisor to proactively address team dynamics and ensure everyone understands their role and responsibilities.
Moving forward, there is potential for further development of this project. I could expand it with diverse levels and tasks, or potentially adapt it into an educational tool for parents, offering them a glimpse into the child's perspective. I also plan to refine the ending by integrating it seamlessly into the environment and making the mother's responses more dynamic with her mood for a more immersive experience. This project, while imperfect, has been an invaluable learning experience, and I'm excited to apply these lessons to future VR projects, creating more impactful and engaging experiences in this field.