
Robin's Rainbow Island

Overview

  • Role: Programmer

  • Genre: 2D Platformer

  • Language: Swift

  • Development Tool: Xcode

  • Team Size: 3 People

  • Development Time: 1 month

High Concept

Robin's Rainbow Island is a 2D adventure mobile game. Our main character, Robin, is an adventurer who seeks treasure. This time he arrives on a strange yet shimmering, colorful island. Amazed and determined, Robin sets foot on a miraculous journey across Rainbow Island. In the game, you play as Robin, meeting a variety of interesting people whose families have lived simple lives on this island for generations. In pursuit of the buried treasure, you leave your footprints and memories across the island.

Responsibilities

  • Implemented on-screen control: movement, jump, interactions and inventory.

  • Implemented a level design tool for other teammates to create levels.

  • Designed the third level.

  • Designed and created in-game animations.

  • Implemented in-game physics and collisions.

  • Designed and implemented the quest system and item effects.

Core Mechanics

On-Screen Control

    Robin's Rainbow Island is the first game I developed with complete features and more than 10 minutes of gameplay, and the first challenge I met during development was how to implement on-screen controls. Most solutions I found online target professional engines rather than Xcode. Also, most games are developed in C++ instead of Swift, so references were hard to find.

    To solve this problem, I used a naive but simple approach to implement the on-screen buttons: render all buttons last so they always sit on top, then use finger-touch positions to determine which button is pressed and which function needs to be triggered. The inventory system works the same way: an index integer represents the currently selected slot, and whenever the player uses or picks up an item, the system assigns an empty slot to the item and renders it above that inventory slot. The images below show the on-screen buttons in the game and the code that implements the buttons and their on-press functions.

[Screenshots: the in-game on-screen buttons and the button implementation code]
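The idea above can be sketched roughly as follows, assuming SpriteKit (which Xcode's 2D game template and tile map editor target); node names and button images are illustrative, not the project's actual assets:

```swift
import SpriteKit

class GameScene: SKScene {
    // Hypothetical button sprites; the real game has more controls (move, inventory, ...).
    let jumpButton = SKSpriteNode(imageNamed: "jump_button")
    let interactButton = SKSpriteNode(imageNamed: "interact_button")

    override func didMove(to view: SKView) {
        // Render the buttons last / with the highest zPosition so they always sit on top.
        jumpButton.zPosition = 100
        interactButton.zPosition = 100
        addChild(jumpButton)
        addChild(interactButton)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            // Use the touch position to decide which button, if any, was pressed.
            let location = touch.location(in: self)
            if jumpButton.contains(location) {
                jump()
            } else if interactButton.contains(location) {
                interact()
            }
        }
    }

    func jump() { /* apply an upward impulse to the player */ }
    func interact() { /* start recording motion for pickup / throw / eat */ }
}
```

Because the buttons are ordinary nodes hit-tested by position, the same dispatch loop extends naturally to the inventory slots: each slot is a node, and the index of the touched slot becomes the selected-slot integer.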

Tiles and Map

    The second challenge I encountered after the on-screen controls was how to design and create levels. Since Xcode is designed for building iOS applications rather than mobile games, designing levels in Xcode seemed impossible at first. Fortunately, when we started working on this project, the newest version of Xcode had just added a tile map feature, which solved the problem.

    To use the tile map, I first needed to import the art assets, especially the sprites for each block. Xcode has no way to import all assets at once, so I had to import each block asset one by one and rename them correctly. Once all the assets were in the project, each team member could create their own level with them. The tile map works much like Unity's tilemap: we can paint each block and define different layers.

    Another problem when designing levels was that, since the game uses a single scene the whole time, we had to put all four levels into one map, and only one person could edit that map at a time. We did not find a technical solution to this. Instead, we scheduled tasks so that while one person was editing the map, the other team members had work to do that didn't touch the map scene. The image below shows our final map, which contains four different levels and five scenes.

[Screenshot: the final map containing four levels and five scenes]

    Since Xcode has no built-in feature that sets collisions for tiles and maps, we had to write a function that creates a collision shape for each block. My initial solution was to loop over every block and assign it a box collider. However, the character would stick to walls due to the tiny gaps between adjacent blocks, and we also wanted different friction settings for walls and the ground, so we eventually added surface and side collisions manually for every touchable and interactable block. The image below shows the first level of the game and all the collision boxes attached to it.

[Screenshot: the first level with its collision boxes]
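A sketch of the per-tile collision idea, assuming SpriteKit's SKTileMapNode and a hypothetical "solid" user-data flag on solid tiles. The final pass in the project was done by hand, but the key point survives here: each block gets separate edge bodies for its top surface and its sides, so the ground and the walls can carry different friction values and the character no longer catches on seams between boxes:

```swift
import SpriteKit

// Assumes the tile map sits at the scene's origin; friction values are illustrative.
func addCollisions(to map: SKTileMapNode, in scene: SKScene) {
    for col in 0..<map.numberOfColumns {
        for row in 0..<map.numberOfRows {
            guard let tile = map.tileDefinition(atColumn: col, row: row),
                  tile.userData?["solid"] as? Bool == true else { continue }

            let center = map.centerOfTile(atColumn: col, row: row)
            let halfW = map.tileSize.width / 2
            let halfH = map.tileSize.height / 2

            // Top surface: an edge with ground friction so the player can walk.
            let surface = SKNode()
            surface.physicsBody = SKPhysicsBody(
                edgeFrom: CGPoint(x: center.x - halfW, y: center.y + halfH),
                to: CGPoint(x: center.x + halfW, y: center.y + halfH))
            surface.physicsBody?.friction = 0.6

            // Left side wall: near-zero friction so the player slides off instead of sticking.
            let wall = SKNode()
            wall.physicsBody = SKPhysicsBody(
                edgeFrom: CGPoint(x: center.x - halfW, y: center.y - halfH),
                to: CGPoint(x: center.x - halfW, y: center.y + halfH))
            wall.physicsBody?.friction = 0.0

            scene.addChild(surface)
            scene.addChild(wall)
        }
    }
}
```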

Player Interactions and Motion Detection

    In the game, the player has three different interactions with game objects: pick up an item, throw an item, and eat fruit. Picking up grabs the nearest item within a certain range. Throwing launches the selected inventory item in the character's facing direction. Eating consumes the selected fruit and grants a speed boost. Players need these three features to solve puzzles and complete NPCs' quests.
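The "nearest item within range" rule behind the pickup interaction can be sketched like this; the item list and pickup radius are illustrative assumptions, not the project's actual names:

```swift
import SpriteKit

// Returns the closest item node within `radius` of the player, or nil if none is in range.
func nearestItem(to player: SKNode, among items: [SKNode],
                 within radius: CGFloat) -> SKNode? {
    var best: SKNode?
    var bestDistance = radius
    for item in items {
        let dx = item.position.x - player.position.x
        let dy = item.position.y - player.position.y
        let distance = (dx * dx + dy * dy).squareRoot()
        // Keep the closest item that is still inside the pickup range.
        if distance <= bestDistance {
            best = item
            bestDistance = distance
        }
    }
    return best
}
```

Throwing reuses the same kind of geometry in reverse: the selected item is launched with an impulse whose horizontal sign matches the character's facing direction.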

    To perform these interactions, the player holds the interaction button and performs the required motion; each interaction has a unique motion. While the button is held, the game polls the accelerometer and collects readings from the motion sensor. The game then passes this data to the Support Vector Machine model integrated into the game and calls the matching interaction function based on the model's prediction. The images below illustrate how the system collects the motion data and passes it to the SVM model: once the button is pressed, the sensor readings are saved into the motion data, converted into a multi-dimensional array, and passed to the SVM model, which returns a string with the action name.

[Screenshots: collecting the motion data and passing it to the SVM model]
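The capture-and-classify pipeline can be sketched as follows, assuming CoreMotion's CMMotionManager; the sampling rate, the buffer layout, and the action labels are illustrative assumptions:

```swift
import CoreMotion

let motionManager = CMMotionManager()
var motionData: [Double] = []

// Called when the interaction button is pressed down.
func startRecording() {
    motionData.removeAll()
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0  // 50 Hz, illustrative
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let m = motion else { return }
        // Append one sample: user acceleration on each axis.
        motionData.append(contentsOf: [m.userAcceleration.x,
                                       m.userAcceleration.y,
                                       m.userAcceleration.z])
    }
}

// Called when the button is released.
func stopRecordingAndPredict() {
    motionManager.stopDeviceMotionUpdates()
    // The flattened buffer goes to the SVM model, which returns the action name.
    let action = predictAction(from: motionData)
    perform(action)
}

func predictAction(from samples: [Double]) -> String { /* send to the model */ "pickup" }
func perform(_ action: String) { /* dispatch to pickup / throw / eat */ }
```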

Support Vector Machine

    To better predict the player's motion, we applied machine learning on the backend. The model, written in Python, receives the player's motion data and predicts the most likely interaction from the data and the trained parameters. The image below shows a code sample of how the SVM model predicts actions from motion data.

[Screenshot: the SVM model's prediction code]
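A minimal sketch of the prediction step using scikit-learn's SVC; the real model was trained on recorded accelerometer traces, so the synthetic data, feature layout, and label names here are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy training data: 20 flattened motion traces per action, 30 features each,
# drawn from three well-separated clusters standing in for the three motions.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 30)) for i in range(3)])
y = np.array(["pickup"] * 20 + ["throw"] * 20 + ["eat"] * 20)

model = SVC(kernel="rbf")  # the backend server also exposed other algorithms
model.fit(X, y)

# At runtime the game sends one flattened trace; the model returns the action name.
sample = rng.normal(loc=1, scale=0.3, size=(1, 30))
action = model.predict(sample)[0]
print(action)
```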

    When we started building the model, the first challenge was deciding which method and algorithm to use for our game. Even after we narrowed the candidates down to three or four methods, choosing the right algorithm for each one remained a problem. Hence, we built a backend server with two machine learning methods, Support Vector Machine and Random Forest, each with four adjustable algorithms. Then we implemented a small developer app that collects user motion data and trains the selected model. We spent two days training all the different models and eventually found the best one based on prediction accuracy.
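The model-selection step above can be sketched as a cross-validated accuracy comparison; the candidate set and the synthetic data are illustrative stand-ins for the project's recorded motion traces:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# 30 toy traces per action, 30 features each, from three separated clusters.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(30, 30)) for i in range(3)])
y = np.repeat(["pickup", "throw", "eat"], 30)

# A few candidate method/algorithm combinations (the real server exposed more).
candidates = {
    "svm_rbf": SVC(kernel="rbf"),
    "svm_linear": SVC(kernel="linear"),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Mean cross-validated accuracy picks the best-performing model.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```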

    The image below shows the application we used to train our models. The DSID at the top is the model ID, which lets us create a new model instance from the current settings. The options in the middle are the methods and algorithms we can choose when creating a model instance. The actions at the bottom are the three actions we want to predict in the game. Once we press the "Calibrate Once" button and choose an action, we hold the phone, collect motion data for that action, and pass it to the model. This approach helped us determine the best model and train the models quickly, since it let all of us input data at the same time.

[Screenshot: the developer app used to train the models]

Post Mortem

What I Learned

  • This game is my first completed game and my first game developed in a team. It was a valuable experience that gave me a general idea of how to develop a game as a team and how to solve problems I had never faced before.

  • Many problems I encountered in this project also appeared in my first game at SMU Guildhall, Yonder. For example, we had the same collision problem in Unity, where the character sometimes sticks to walls, so I was able to find a solution immediately.

  • The mobile game development experience was unique. I didn't have much mobile game development experience before, especially with an iOS game.

  • I was able to learn how to apply machine learning techniques in video games, including model implementation, model training, and model integration.

  • Even though using the accelerometer for core gameplay is sometimes frustrating, it gave me experience using a device's sensors to provide unique game mechanics and gameplay.
