
12/8/2022
For p3_gold, we spent all of our time wrapping up loose ends and polishing our game for a nicer experience. We adjusted our UI padding and colors for an easier-to-read interface.
During our playtests, we found that we needed to give the user more guidance (especially since it wasn't intuitive which buttons to click), so we added more tooltips and instructions at the beginning covering the basics of the interface.
To help users learn more vocabulary, we also added more interactable objects around town for them to grab and add to their word bank.
One major difference from last week is that all of our audio files have been added and synced to every spoken line of dialogue or action. This includes:
- The background music for all scenes
- All voice lines (130+)
- Audio for 20 learned words, hooked up to the pronunciation button in the dictionary
- All newly created sound effects (for example, improved truck-crash and golem-interaction sound effects)
We overhauled all of our beginning cutscenes to make them more visually engaging and attention-grabbing. The starting screen is no longer a simple black screen but rather an open sky with a “learning” tree. For the truck cutscene, we built a city scene out of different building variations. When the user wakes up in the new world, they wake up in a simple wooden house with minimal furniture.
Major bugs that we fixed:
- Players getting stuck at the PhonoGolems
- Water shader rendering incorrectly in the right eye
- Not being able to teleport into static meshes
- Conversation getting stuck and not moving forward
12/5/2022
This week we spent our time building out our project by implementing real gameplay and finalizing all of the story details.
During our playtests, players often needed direction about which buttons they could press in the dialogue UI. Throughout the game, we also anticipate times when additional instruction is needed to prevent the player from getting stuck. Because of this, we have implemented a tooltip system that displays a widget in the top right of your VR view containing useful tips. This is showcased in the dialogue video, where the tooltip informs the player that they can press the translate button to translate the dialogue from Japanese to English. These tooltips can be triggered by dialogue, player location, and many other events.
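As a rough illustration (not our exact implementation; the class names, members, and trigger volume below are hypothetical), the tooltip flow boils down to showing a camera-attached widget for a few seconds:

```cpp
// Hypothetical sketch: a tooltip widget component attached to the VR camera,
// shown for a fixed duration and hidden again by a timer.
void AVRPawn::ShowTooltip(const FText& Message, float Duration)
{
    // TooltipWidget is assumed to be a UWidgetComponent parented to the
    // camera so it stays pinned to the top right of the player's view.
    TooltipWidget->SetVisibility(true);
    if (UTooltipWidget* Tip = Cast<UTooltipWidget>(TooltipWidget->GetUserWidgetObject()))
    {
        Tip->SetTipText(Message);   // SetTipText is an assumed helper on the UMG widget
    }

    // Hide the tooltip again after Duration seconds.
    GetWorldTimerManager().SetTimer(TooltipTimerHandle,
        [this]() { TooltipWidget->SetVisibility(false); },
        Duration, /*bLoop=*/false);
}

// Example trigger: an invisible volume in the level shows a tip when entered.
void ATooltipTriggerVolume::NotifyActorBeginOverlap(AActor* OtherActor)
{
    Super::NotifyActorBeginOverlap(OtherActor);
    if (AVRPawn* Player = Cast<AVRPawn>(OtherActor))
    {
        Player->ShowTooltip(
            FText::FromString(TEXT("Press the translate button to see the English text.")),
            6.0f);
    }
}
```

In the game, the same show/hide flow is also triggered from dialogue events, not just level volumes.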
Throughout our time using this speech recognition plugin, we’ve noticed its limitations and have found workarounds. During our playtests, background talking interfered with word recognition and string matching, and some words occasionally produced false positives for other words (e.g. いいえ (iie) becoming いや (iya)). As such, we have relaxed the string matching to simply check whether the player’s speech includes a specific word (a workaround for background noise), and we now accept multiple inputs for a dialogue option (e.g. accepting いや for いいえ because they sound similar enough).
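A minimal sketch of the relaxed matching, assuming each dialogue option keeps a small list of accepted phrases (the helper name and data layout here are illustrative):

```cpp
// Relaxed matching: an option is chosen if the recognized speech merely
// contains any of its accepted phrases, so extra background words and
// near-miss recognitions (e.g. いや for いいえ) still pick the right option.
int32 FindMatchingOption(const FString& RecognizedSpeech,
                         const TArray<TArray<FString>>& OptionAliases)
{
    for (int32 OptionIndex = 0; OptionIndex < OptionAliases.Num(); ++OptionIndex)
    {
        for (const FString& Phrase : OptionAliases[OptionIndex])
        {
            if (RecognizedSpeech.Contains(Phrase))
            {
                return OptionIndex;   // first option whose phrase appears in the speech
            }
        }
    }
    return INDEX_NONE;   // nothing matched; keep listening
}

// Example: option 0 accepts either いいえ or いや, option 1 accepts はい.
// TArray<TArray<FString>> Aliases = { { TEXT("いいえ"), TEXT("いや") }, { TEXT("はい") } };
```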
One requested feature was the ability to greet an NPC with “konnichiwa” and start the dialogue from there. This was implemented into the game and is showcased at the end of the dialogue demo video.
Since we are largely done with all the features we have planned, we moved on to building out the game and piecing our features together into a cohesive game. Below is a demo of the first few scenes of our game, which introduces the player to the new world and teaches them vowels and phrases in Japanese before their first quest.
We added PhonoGolems, rock creatures scattered around the world. Each golem has a different word on its face. When the player walks close enough to a golem, it moves towards the player. If it gets close enough, the player is frozen in place and unable to teleport around. The player must say the word on the golem’s face to destroy the golem and be set free. The golem has firefly-like particle effects to show the area in which it can freeze the player. When the golem freezes the player, a ring appears around them that they can see if they look down, signifying that they are unable to teleport. When the golem is destroyed, the word explodes into particles and the golem dissolves.
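A rough sketch of the freeze/release logic (the class names, the bCanTeleport flag, and the OnSpeechRecognized hook are illustrative, and the “close enough” check is modeled here as an overlap with a sphere sized to the firefly ring):

```cpp
// Sketch of the PhonoGolem freeze/release logic. APhonoGolem, AVRPawn and
// bCanTeleport are illustrative names; the golem's trigger sphere is assumed
// to match the radius of its firefly particle effect.
void APhonoGolem::NotifyActorBeginOverlap(AActor* OtherActor)
{
    Super::NotifyActorBeginOverlap(OtherActor);
    if (AVRPawn* Player = Cast<AVRPawn>(OtherActor))
    {
        Player->bCanTeleport = false;   // teleport locomotion disabled; ring indicator shown
        FrozenPlayer = Player;
    }
}

// Called by the speech recognition handler with the latest recognized text.
void APhonoGolem::OnSpeechRecognized(const FString& RecognizedText)
{
    if (FrozenPlayer && RecognizedText.Contains(FaceWord))
    {
        FrozenPlayer->bCanTeleport = true;   // release the player
        // ...spawn the word-explosion particles and start the dissolve material...
        Destroy();
    }
}
```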
We overhauled the UI so that the player can click buttons to translate the text into English. We also added buttons so that the player can hear the audio of the lines or words. We changed the font so that the Japanese also includes romaji. Another change we made to the UI was merging the learned words list and the found items list. We also added the alphabet menu, which lets the user click on each letter of the alphabet to hear what it sounds like.
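The audio buttons can be as simple as a click handler that plays the line or word’s sound asset in 2D; a minimal sketch (the widget class and member names are assumptions, not our exact code):

```cpp
#include "Kismet/GameplayStatics.h"

// Hypothetical handler for a word entry's pronunciation button; each entry is
// assumed to hold a USoundBase* referencing the recorded clip for that word.
void UWordEntryWidget::OnPronunciationClicked()
{
    if (PronunciationSound)
    {
        // 2D playback so the clip is heard clearly no matter where the player
        // is standing or facing in the world.
        UGameplayStatics::PlaySound2D(this, PronunciationSound);
    }
}
```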
We started adding the beginning cutscenes to our project. We first added a start menu scene, where the user presses a button to start the game. We also added a truck crash cutscene that simulates the player getting knocked unconscious: the player clicks on a truck, the truck rushes towards them, and the screen then changes to a black “unconscious screen” that explains what happened. At this point, the user can click the continue button, which transports them to our fantasy RPG world to begin their language learning journey.
We also made massive progress on polishing our open world environment by creating and adding more 3D modeled assets and materials. With all of this, the player can now enjoy a pleasant and relaxing scene while they learn their desired language.
3D model list: baby bear npc, father bear npc, bunny inn owner npc, frog shop owner npc, rich bunny npc, wanderer owl npc, medicine, veg_1, veg_2, town sign, flower_1, flower_2, watermelon, apple, banana, orange, bread, sack, lamp, fence, shop stand, inn, barrel, house_variation_1, house_variation_2, house_variation_3, rock_1, rock_2, grass, tree_1, tree_2, tree_3, tree_4, flower box, and barrel.
Check out our hand-created assets by clicking here!
In addition to the 3D model additions, we have also created a custom font that places the romanized pronunciation of each symbol above it. The goal is to let users read more complex dialogue early on as a way to familiarize themselves with the language as soon as possible, as well as to help them understand the semantics of Japanese as they start their learning journey. This font was showcased in our first demo above. Lastly, after listening to further feedback, we changed our website layout to focus on our product.
A few things we are missing are music for the game and voice acting assets, but we expect to have these before the final iteration of our game.
11/21/2022
This week, we decided to move away from the VR Escape Room project and fully focus on the VR Language RPG project. Our decision was based on our research into the core technical functionalities of both projects. If we were to pursue the VR Escape Room, we would likely need an extra week of development to investigate the networking behind the shared VR/multiplayer experience, as there is currently very limited documentation online. Given these constraints and our limited time, we decided to pursue the VR Language RPG project. We also believe that this combination of education, VR, and RPG mechanics would create a novel approach to language immersion and encourage more gamified educational content in VR in a fun and exciting way. If we are successful in marketing this to schools, teachers, and individuals and provide an exciting educational experience, future language immersion games with more expansive vocabulary or in other languages can be made for our audience, ensuring the studio’s future.
On the technical side, we created an inventory menu UI that the player can access by turning their wrist and looking towards it. It has five tabs: found items, alphabet list, learned words, quest, and past dialogue. Each tab can display up to five lines of text, and the user can click the arrow buttons to move between pages. The selected tab is colored red with white text; the non-selected tabs are dark grey with light grey text.
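A minimal sketch of the per-tab paging (names are illustrative): each tab keeps its full list of entries, and the widget only renders the five entries for the current page.

```cpp
// Return the (up to) five entries shown on a given page of a menu tab.
static constexpr int32 LinesPerPage = 5;

TArray<FText> GetPageEntries(const TArray<FText>& AllEntries, int32 PageIndex)
{
    TArray<FText> Page;
    const int32 Start = PageIndex * LinesPerPage;
    const int32 End = FMath::Min(Start + LinesPerPage, AllEntries.Num());
    for (int32 i = Start; i < End; ++i)
    {
        Page.Add(AllEntries[i]);
    }
    return Page;
}

// The arrow buttons just clamp the page index and re-render, e.g.:
// PageIndex = FMath::Clamp(PageIndex + Delta, 0, FMath::Max(0, (AllEntries.Num() - 1) / LinesPerPage));
```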
Here is also a demo of the Found Items tab of the menu UI. Being able to pick up and interact with objects provides a more visceral experience when learning vocabulary and will help the player remember the words. This is one of our advantages as a VR RPG over our competitors: RPG elements such as item collection give our players an experience they will remember better. In the next iterations, we plan on adding buttons to this UI to hear vocabulary word pronunciation on demand.
In addition to this, we have polished the dialogue system and UI. It is structured to handle a typical RPG’s dialogue needs, which usually take the form of a graph because of branching dialogue options. There are buttons in place, whose functionality will be implemented, for re-listening to dialogue said by the NPC, hearing your dialogue options so you can repeat them, and translating your dialogue options. We will also get voice acting for our NPCs in a future iteration.
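As a sketch of the underlying structure (plain structs here for brevity; the field names are illustrative), each node stores the NPC’s line plus its player options, and each option points at the node it leads to:

```cpp
// Illustrative dialogue graph structures. In the project these would live in
// editor-editable assets; the plain structs here just show the shape.
struct FDialogueOption
{
    FText Phrase;                       // Japanese line the player says aloud
    FText Translation;                  // shown when the translate button is pressed
    int32 NextNodeIndex = INDEX_NONE;   // node the conversation jumps to next
};

struct FDialogueNode
{
    FText NpcLine;                      // what the NPC says
    TArray<FDialogueOption> Options;    // empty array = conversation ends here
};

// A conversation is just a TArray<FDialogueNode>; branching happens by
// following NextNodeIndex from the option the player speaks.
```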
Here is a demo of the system where an NPC teaches the player some words:
We also worked on creating 3D modeled assets, animating them, and blocking out our landscape and preparing shaders. Currently, the player can put on the VR headset and teleport around a landscape we created, exploring the house, hilly scene, water, and village. We have also created several different animations in Mixamo with the Cat NPC model.
Check out our hand-created assets by clicking here!
In terms of our planned product, we have constructed a language lesson plan integrated with interactive dialogue and story. We believe that this is an advantage we have over our competitors: a story the users can invest in while using our application to learn a new language. In VILLA, we also plan to have the villagers speak entirely in the language that our users want to learn, with the player’s mascot translating everything that the player has not yet learned from our application. We believe that exposing users to more advanced terms without requiring them to learn them at an early stage is beneficial in the long run.
To verify that our design choices and the motivation behind them fit the demand in language learning, we contacted Mayumi Oka, a retired Japanese instructor who used to be the dean of the Japanese department at the University of Michigan. She is a renowned instructor in the Japanese teaching community, especially in reading and writing, as well as a textbook writer. We will be interviewing her on Monday night to get some insight from her and hopefully improve our application.
11/14/2022
This week our group explored two possible prototypes for our final project, Abiliterate and Teledetectus.
Abiliterate is a VR RPG where you learn a new language as a wizard in an assimilating world, while Teledetectus is an online collaborative escape room game where players solve puzzles together in real time across time zones, promoting leadership skills.
In Abiliterate, we worked on our core feature: choosing dialogue options with NPCs by saying the sentence aloud. We used a Speech Recognition plugin for Unreal and matched the output text against the dialogue options, as shown in our demo video. The results are promising, as it works well in English. One possible hardware limitation we realized is that a sensitive mic helps with getting more accurate output text.
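For reference, a minimal sketch of the matching used at this stage (the component and function names are assumptions, not the plugin’s API): the recognizer’s output text is compared directly against the displayed options.

```cpp
// Early, strict matching: the recognized text must equal one of the current
// dialogue options (case-insensitive) for that option to be selected.
void ADialogueNPC::OnTextRecognized(const FString& RecognizedText)
{
    for (int32 i = 0; i < CurrentOptions.Num(); ++i)
    {
        if (RecognizedText.Equals(CurrentOptions[i], ESearchCase::IgnoreCase))
        {
            SelectOption(i);   // advance the conversation down this branch
            return;
        }
    }
    // No match: ignore and wait for the next recognition result.
}
```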
In addition to this, we worked on another core feature: identifying the objects players collect throughout the game in the practice language. In our demo video, we print out the names of collectibles the player picks up. In the future, these will be placed in a “dictionary” with their translation and pronunciation.
The plugin had some problems with other languages such as Chinese and Japanese (our desired languages for players to learn), so we reached out to the plugin developers about this. The plugin developer responded saying that they have fixed the issue and are currently waiting for the Unreal Marketplace to approve the fix (hopefully soon, as the fix patch was submitted 15 days ago).
In Teledetectus, we also tried working on our core feature: a shared AR space through Niantic’s Lightship SDK. One of the key features of Lightship is being able to set up a SharedAR experience between people; however, we had some issues getting this to work on our end because of networking problems. Whenever a player tried to host a game, they would be timed out, so we decided to pivot. At this point, we realized this idea was not limited to AR but could also be done in VR, so we looked into SharedSpaces. SharedSpaces is an out-of-the-box solution for letting different VR players join one VR world. The nice part of SharedSpaces is that it would manage the networking layer we were struggling with in Lightship. However, we were also not able to set up SharedSpaces properly: because the codebase was not up to date, we could not set up Photon (networking software) properly with their project source code. So we ultimately pivoted to starting from scratch with a multiplayer VR game using Photon, essentially a customized version of SharedSpaces. The demo shows two players in one VR space and will be used as a starting point for our VR Escape Room.
Lastly, we worked on story concepts and sketching out environment concepts.