
Game Design & Development

Sneaky Stacks Trailer 

Developed in UE4/5 using Blueprint and C++

User's Guide and Game Features for Sneaky Stacks

Introduction:

Welcome to the User's Guide for Sneaky Stacks! This guide is designed to help you understand the gameplay mechanics and features implemented in our game. Sneaky Stacks is a single-player, third-person stealth game with strong platformer overtones, set in a busy movie theater environment.

 

Gameplay Features: 

  1. Play as a whimsical stack of children disguised in a trench coat, attempting to sneak into a movie theater while gathering additional children to expand your stack along the way.

  2. Journey through the bustling and vibrant movie theater environment, skillfully evading obstacles such as strategically placed garbage cans and unsuspecting adult NPCs.

  3. Outwit the vigilant theater worker NPCs, who patrol the theater using UE5's Enemy AI systems, all while keeping your suspicion level to a minimum.

  4. Enhance your stack by strategically collecting children scattered throughout the theater, creating an increasingly precarious and challenging experience.

  5. Earn points for each child added to the stack, and master the art of maintaining balance and speed with our innovative player physics feature that introduces a realistic wobbly effect to the stack's movements.

  6. Preserve a low suspicion level by carefully avoiding collisions with objects and NPCs, and utilize popcorn power-ups to reduce any suspicion that may have been aroused.

  7. Seize soda can power-ups to gain temporary stability for your stack, effectively eliminating any wobbling, allowing for a smoother navigation experience.

  8. Achieve victory by successfully reaching the desired number of children in your stack, all without being detected by theater workers or running out of precious time.

Experience the thrill, challenge, and unique gameplay mechanics of Sneaky Stacks, as you put your dexterity and stealth skills to the test in a captivating and entertaining game world.

User's Guide

  1. Controls:

    • Use [WASD] keys or the left joystick (on a game controller) to move the stack of children.

    • Click on the screen to start the game.

    • Interact with children and power-ups by simply colliding with them. No additional button input is required for interaction.

  2. Suspicion Bar Management:

    • Avoid bumping into obstacles and adults to prevent raising suspicion.

    • Collect popcorn power-ups to lower the suspicion bar.

  3. Power-ups:

    • Soda Can: Temporarily stabilizes the stack, preventing wobbling for a short period.

    • Popcorn: Lowers the player's suspicion bar.

  4. Winning the Game:

    • Successfully collect the desired number of children without exceeding the suspicion bar limit or running out of time.

We hope you enjoy playing Sneaky Stacks and make the most of this unique and entertaining experience!

Star Catcher Trailer 

Developed in Unity with Steam VR plugins

User's Guide for Star Catcher

 

Introduction

Welcome to Star Catcher, a single-player VR game where you play as a child trying to catch falling stars during a magical meteor shower! This guide will help you navigate through the game and explain the main features you will encounter.

 

System Requirements

To play Star Catcher, you will need a PC with SteamVR and a compatible VR headset, such as the HTC Vive. Please refer to the game's system requirements for more detailed information.

 

Getting Started

Once you have installed and launched the game, you will find yourself in the main menu. Here, you can start a new game. When you're ready to begin, select "Start Game" which will take you to a quick tutorial screen. Once you are finished reading the tutorial, select “Play” by pressing the trigger button on your controller and immerse yourself in the enchanting world of Star Catcher.

Game Features

Visual and Sound Effects 

The game features various sound effects and music to enhance your experience, such as button interaction sounds, collection noises for items, and music for the win and lose screens. It also features particle effects for star trails and for star and meteor collisions.

 

Falling Stars

The main objective of Star Catcher is to fill your star jar with starlight by catching as many falling stars as possible before time runs out. A star's color indicates its rarity and point value, with pink stars being the most common and yellow stars the rarest. The rarer the star, the higher its point value and the faster it falls, making it harder to catch. Be quick, as stars have limited lifespans and will burn out if not caught in time.

 

Obstacles and Space Debris

While catching stars, you will need to protect the star jar in the middle of the forest from harmful space debris. This can be done by catching meteors before they collide with the jar. If the jar collides with a meteor, it will result in lost points.

 

Power-ups

Catch fireflies as they fly by to gain additional power-ups. Picking up a firefly will pause the meteor shower for 10 seconds, giving you a brief respite to catch your breath and focus on your strategy. An on-screen UI timer shows how long the pause will last.

 

Scoring System and Star Jar

Your score is determined by the number of stars you catch and the type of stars collected. Collected stars will fill the 3D star jar in the middle of the forest. Your progress is displayed on the in-game UI on your left hand (represented by a timer and a current score) as well as on the jar itself (represented as the number of star points currently collected and the threshold for the level). Make sure to keep an eye on the star jar, as you need to catch enough stars to fill it up and protect it from debris to avoid losing points.

 

Level Progression

To win the game, you must reach the minimum point requirement for each level. The difficulty will increase as you progress through the levels, requiring more points and introducing more obstacles.

Controls and Navigation

  • Use your VR headset to look around and locate falling stars, power-ups, and obstacles. Physically move within your play area to dodge obstacles or get closer to stars and power-ups. Use the VR controllers to interact with the game world:

    • Point and hold the trigger on hand controllers to catch falling stars when in the "catching" zone or to interact with buttons on UI screens.  

    • Lift the left controller to check the UI, which displays your current score and time remaining.

Tips and Strategies

  • Prioritize catching higher-value stars to maximize your score, but don't forget that lower-value stars are easier to catch.

  • Keep an eye out for power-ups, as they can significantly boost your performance.

  • Always be aware of your surroundings and be prepared to shield your jar from obstacles as they appear.

  • Manage your time wisely; try to catch as many stars as possible without compromising the safety of your star jar.

 

We hope you enjoy playing Star Catcher! Good luck, and may the stars be ever in your favor!

Quantum Shootout Trailer 

Developed in Unity with C#

User's Guide and Game Features for Quantum Shootout

Game Overview: Quantum Shootout is an engaging and straightforward first-person shooter game. The objective is simple: shoot all five red buttons scattered across the scene to turn them green. Once all buttons have been hit and changed to green, you'll emerge victorious. Test your accuracy and speed as you master the art of hitting buttons in this fun and challenging demo!

Instructions:

  1. Navigate through the scene using the standard first-person controls (W, A, S, and D for movement, and mouse for aiming).

  2. Use the left mouse button to shoot projectiles from your gun.

  3. Locate and shoot all five red buttons in the scene, turning them green.

  4. Win the game by successfully turning all buttons green. Good luck!

Advanced Rendering

Physically Based Rendering

Overview

Physically-Based Rendering (PBR) System: Implemented a point-light-based PBR shader for simulating realistic light-material interactions. Utilized Cook-Torrance and Lambertian BRDF models for glossy and diffuse reflections, respectively. Also ensured energy conservation in the system by constraining the sum of the BRDF components to be less than or equal to one.
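
As a concrete illustration of that split, here is a minimal single-channel sketch in C++. The GGX distribution and Smith geometry terms are common Cook-Torrance choices assumed for illustration; the project's actual shader code and parameterization may differ, and f here is the Fresnel term (e.g., Schlick's approximation, shown further below).

    #include <algorithm>
    #include <cmath>

    // Illustrative 3-component vector; the project's actual math types differ.
    struct Vec3 { float x, y, z; };
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // GGX normal distribution term D(h).
    static float distributionGGX(float NdotH, float roughness) {
        float a2 = roughness * roughness * roughness * roughness;
        float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
        return a2 / (3.14159265f * d * d);
    }

    // Smith-Schlick geometry (masking/shadowing) term G.
    static float geometrySmith(float NdotV, float NdotL, float roughness) {
        float k  = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f;
        float gv = NdotV / (NdotV * (1.0f - k) + k);
        float gl = NdotL / (NdotL * (1.0f - k) + k);
        return gv * gl;
    }

    // Cook-Torrance glossy lobe plus a Lambertian diffuse lobe, with the
    // diffuse weight scaled by (1 - f) so the two lobes sum to at most one.
    static float brdf(const Vec3& n, const Vec3& v, const Vec3& l, const Vec3& h,
                      float albedo, float roughness, float f /* Fresnel term */) {
        float NdotV = std::max(dot(n, v), 1e-4f);
        float NdotL = std::max(dot(n, l), 1e-4f);
        float NdotH = std::max(dot(n, h), 0.0f);
        float specular = distributionGGX(NdotH, roughness)
                       * geometrySmith(NdotV, NdotL, roughness)
                       * f / (4.0f * NdotV * NdotL);
        float diffuse  = (1.0f - f) * albedo / 3.14159265f; // energy conservation
        return diffuse + specular;
    }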

 

Advanced Lighting Techniques: Built shaders to handle ambient light and point light intensity falloff, ensuring more realistic lighting in the scenes. This system also handled gamma correction, color space remapping for working in HDR, and included a visibility test in the Light Transport Equation to account for obstructed light paths.
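
For reference, the two per-light corrections mentioned above usually reduce to a couple of lines each; the forms below (inverse-square falloff and a simple power-law gamma) are standard versions assumed for illustration, not the project's exact shader code.

    #include <cmath>

    // Point-light intensity falloff: inverse-square law.
    static float pointLightFalloff(float intensity, float distance) {
        return intensity / (distance * distance);
    }

    // Gamma correction when converting a linear-space result for display.
    static float linearToGamma(float c, float gamma = 2.2f) {
        return std::pow(c, 1.0f / gamma);
    }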

 

Enhanced Material Rendering and Effects: Developed a system to support normal mapping and displacement mapping for more detailed and realistic surface rendering. Implemented precomputation methods for diffuse and glossy irradiance, and incorporated the Fresnel term using the Schlick approximation. Also added an approximation for subsurface scattering, giving plastic materials a more realistic glow effect.
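
The Schlick approximation named above is compact enough to quote in full; this sketch assumes a scalar reflectance F0 at normal incidence, with cosTheta = dot(h, v).

    #include <cmath>

    // Schlick's approximation of the Fresnel reflectance.
    static float fresnelSchlick(float cosTheta, float F0) {
        return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
    }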

In conclusion, through careful implementation of these techniques and principles, I succeeded in creating a sophisticated physically based rendering system that can generate highly realistic images of 3D scenes.

Monte Carlo Path Tracer

Overview

Monte Carlo Path Tracing Implementation: Developed a robust path tracing system employing Monte Carlo integration techniques to simulate complex light behavior in 3D scenes. This method allows for the creation of highly realistic images by probabilistically simulating light paths and their interactions with different objects and surfaces. My implementation supports the rendering of complex lighting effects such as hard and soft shadows, depth of field, motion blur, caustics, and diffuse and specular inter-reflections.

Advanced Illumination Techniques: I incorporated advanced techniques for estimating direct illumination. The system supports multiple importance sampling, which strikes a balance between different sampling strategies to minimize variance and improve image quality. It allows the path tracer to generate accurate, low-noise images faster, as it more effectively captures light behavior such as specular reflections and transmissions. I also implemented various Bidirectional Scattering Distribution Functions (BSDFs) for handling different material types, enabling the representation of a wide range of real-world materials.
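
As an illustration of how MIS weighs two sampling strategies against each other, here is the standard power heuristic (beta = 2); this is a common textbook form assumed for illustration, not code copied from the project.

    // Weights a sample drawn from a strategy with pdf fPdf (nf samples)
    // against a competing strategy with pdf gPdf (ng samples).
    static float powerHeuristic(int nf, float fPdf, int ng, float gPdf) {
        float f = nf * fPdf;
        float g = ng * gPdf;
        return (f * f) / (f * f + g * g);
    }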

Support for Different Light Sources and Materials: The path tracer caters to a variety of light sources, such as point lights, spotlights, and area lights, each with its own sampling method. It also accommodates diverse material types, including specular (reflective and transmissive) and diffuse materials, as well as more complex microfacet materials.

Optimization and Fine-Tuning: In order to improve the efficiency of the system, I implemented a mechanism to prevent double sampling of direct light at each intersection point. This ensured that only the direct illumination computed by multiple importance sampling contributes to the light's final color, barring the scenarios where the ray comes directly from the camera or a specular surface. I also optimized the global illumination computation by maintaining a throughput variable, accounting for the compounding attenuation of light as it bounced through the scene.
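
The throughput bookkeeping can be sketched with scalars. The Bounce struct below is a hypothetical stand-in for what scene intersection and BSDF sampling would produce at each vertex of a path; it illustrates the recurrence, not the project's actual types.

    // Hypothetical per-bounce sample; in a real tracer these values come
    // from intersecting the scene and sampling the surface BSDF.
    struct Bounce { float directLightMIS, bsdf, cosTheta, pdf; };

    // Each bounce multiplies BSDF * cos(theta) / pdf into the running
    // throughput, so direct light found at deeper bounces is attenuated
    // by everything the path has already passed through.
    float tracePath(const Bounce* bounces, int maxDepth) {
        float radiance = 0.0f, throughput = 1.0f;
        for (int depth = 0; depth < maxDepth; ++depth) {
            const Bounce& b = bounces[depth];
            radiance += throughput * b.directLightMIS;
            if (b.pdf <= 0.0f) break;                  // absorbed / invalid sample
            throughput *= b.bsdf * b.cosTheta / b.pdf; // compounding attenuation
        }
        return radiance;
    }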

Interactive Computer Graphics

Projects

MINI MINECRAFT

  • Developed procedural generation of terrain and caves using 3D noise functions evaluated around the real-time player location (a simplified noise sketch follows this list)

  • Optimized procedural terrain loading through chunking techniques and interleaved VBO data

  • Generated branching rivers that procedurally carve themselves out of the surface terrain using 2D L-systems

  • Created OpenGL framebuffer for water and lava effect with post-processing shaders, added GLSL procedural skybox  

  • Link: https://youtu.be/0U8RksERmr4

Programming Language: C++, OpenGL
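
A simplified sketch of the kind of noise-based heightfield used for procedural terrain; the project also used 3D noise for caves, and the hash constants, octave count, and scaling below are illustrative rather than the project's tuned values.

    #include <cmath>

    // Deterministic hash of integer grid coordinates to [0, 1].
    static float hash2(int x, int z) {
        unsigned h = unsigned(x) * 374761393u + unsigned(z) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return float(h & 0xFFFFFF) / float(0xFFFFFF);
    }

    // 2D value noise: smoothstep fade, then bilinear blend of corner hashes.
    static float valueNoise(float x, float z) {
        int xi = int(std::floor(x)), zi = int(std::floor(z));
        float tx = x - xi, tz = z - zi;
        float u = tx * tx * (3.0f - 2.0f * tx), v = tz * tz * (3.0f - 2.0f * tz);
        float a = hash2(xi, zi),     b = hash2(xi + 1, zi);
        float c = hash2(xi, zi + 1), d = hash2(xi + 1, zi + 1);
        return (a + (b - a) * u) * (1.0f - v) + (c + (d - c) * u) * v;
    }

    // Fractal Brownian motion: sum octaves at doubling frequency.
    static float terrainHeight(float x, float z) {
        float height = 0.0f, amplitude = 1.0f, frequency = 0.01f;
        for (int octave = 0; octave < 5; ++octave) {
            height += amplitude * valueNoise(x * frequency, z * frequency);
            amplitude *= 0.5f;
            frequency *= 2.0f;
        }
        return height * 64.0f; // scale into block heights
    }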


MINI MAYA

  • Half-edge algorithms: Implemented mesh data structures and operations including edge splitting, triangulation, extrusion, and Catmull-Clark subdivision (the core connectivity is sketched after this list)

  • Loaders: Implemented OBJ file loader and JSON file loader

  • Skeletons and Skinning: Implemented distance-based automatic skinning, an interactive skeleton, and shader-based skin deformation driven by bind matrices, joint transformations, and per-vertex joint influence IDs and weights

  • Implemented a graphical user interface for mesh display and deformation, along with OpenGL mesh visualization

Programming Language: C++, OpenGL
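
For readers unfamiliar with half-edge meshes, this is the core connectivity the operations above rely on; the type and field names are illustrative, not Mini Maya's actual classes.

    // Each half-edge knows its twin, the next edge around its face (CCW),
    // the vertex it points to, and the face it borders.
    struct Vertex;  struct Face;
    struct HalfEdge {
        HalfEdge* twin = nullptr;
        HalfEdge* next = nullptr;
        Vertex*   vert = nullptr;
        Face*     face = nullptr;
    };
    struct Vertex { float pos[3]; HalfEdge* edge = nullptr; };
    struct Face   { HalfEdge* edge = nullptr; };

    // Example traversal: count the vertices of a face by following next.
    inline int countFaceVertices(const Face& f) {
        int n = 0;
        HalfEdge* e = f.edge;
        do { ++n; e = e->next; } while (e != f.edge);
        return n;
    }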

OPENGL SHADERS

  • Surface Shader Programs: Implemented Blinn-Phong Reflection Shader, Matcap Reflection Shader, Iridescent Shader, Vertex Deformation Shader

  • Post-Process Shader Programs: Implemented Greyscale and Vignette Shader, Gaussian Blur Shader, Sobel Filter Shader, Fake Bloom Shader, Noise-Based Post-Process Shader

Programming Language: C++, OpenGL

RASTERIZER

  • 2D Triangle Rasterizer: Implemented line segments, bounding boxes, triangle rendering, barycentric interpolation (sketched after this list), and Z-buffering

  • 3D Triangle Rasterizer: Implemented perspective camera class, interactive camera, perspective-correct interpolation, texture mapping, and Lambertian reflection

Programming Language: C++, OpenGL
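
The barycentric weights that drive both color interpolation and Z-buffering can be computed from signed triangle areas; this is the standard formulation, assumed for illustration rather than copied from the project.

    #include <array>

    // Signed area of triangle (a, b, c); 2D points given as {x, y}.
    static float signedArea(const float a[2], const float b[2], const float c[2]) {
        return 0.5f * ((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]));
    }

    // Weights of p with respect to triangle (a, b, c); they sum to 1, and
    // all three are non-negative exactly when p lies inside the triangle.
    static std::array<float, 3> barycentric(const float p[2], const float a[2],
                                            const float b[2], const float c[2]) {
        float total = signedArea(a, b, c);
        return { signedArea(p, b, c) / total,   // weight of a
                 signedArea(a, p, c) / total,   // weight of b
                 signedArea(a, b, p) / total }; // weight of c
    }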

Computer Animation

Animation Toolkit - Curve Editor (C++)

  • Spline Curve: Implemented linear splines, cubic Catmull-Rom splines with their equivalent Bézier forms (via Bernstein, De Casteljau, and matrix methods), Hermite splines, and cubic B-splines (the Catmull-Rom evaluation is sketched after this list)

  • Rotation: Implemented linear and cubic Euler-angle interpolation, conversions between orientation representations (Euler angles, rotation matrices, quaternions, and axis/angle), and linear and cubic quaternion interpolation

Programming Language: C++
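
For reference, here is the cubic Catmull-Rom segment evaluation in its standard 0.5-tension form, shown per scalar component (apply to x, y, z independently); p1 and p2 are the endpoints of the segment, p0 and p3 their neighbors, and t runs over [0, 1].

    static float catmullRom(float p0, float p1, float p2, float p3, float t) {
        float t2 = t * t, t3 = t2 * t;
        return 0.5f * ((2.0f * p1)
                     + (-p0 + p2) * t
                     + (2.0f*p0 - 5.0f*p1 + 4.0f*p2 - p3) * t2
                     + (-p0 + 3.0f*p1 - 3.0f*p2 + p3) * t3);
    }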

Animation Toolkit - FKIK (C++)

  • Implemented the ATransform class, which supports a character skeleton whose joints are arranged in a hierarchy, each child positioned and oriented relative to its parent; this hierarchy is the basis of both FK (Forward Kinematics) and IK (Inverse Kinematics)

  • FK (Forward Kinematics): Implemented FK, the process of computing the position and orientation of each joint in a skeleton relative to the world, given the local joint transformations (see the sketch after this list).

  • IK (Inverse Kinematics): Implemented IK, the process of determining the joint configurations of a character or mechanism that achieve a desired position and orientation of an end effector.

Programming Language: C++
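
A minimal sketch of the FK pass, assuming joints are stored so that every parent precedes its children; Mat4, Joint, and the function name are illustrative stand-ins, not the toolkit's actual ATransform API.

    #include <vector>

    // Bare-bones 4x4 matrix, row-major, defaulting to identity.
    struct Mat4 {
        float m[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    };
    static Mat4 operator*(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col) {
                float s = 0.0f;
                for (int k = 0; k < 4; ++k) s += a.m[row*4+k] * b.m[k*4+col];
                r.m[row*4+col] = s;
            }
        return r;
    }

    struct Joint { int parent; Mat4 local; Mat4 world; };

    // Walk the hierarchy from the root, composing each joint's local
    // transform onto its parent's already-computed world transform.
    void computeForwardKinematics(std::vector<Joint>& joints) {
        for (size_t i = 0; i < joints.size(); ++i) {
            if (joints[i].parent < 0)
                joints[i].world = joints[i].local; // root joint
            else
                joints[i].world = joints[joints[i].parent].world * joints[i].local;
        }
    }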

Animation Toolkit - Particle System (C++)

  • Basic C++ Particle System Simulation: Implemented a basic particle system class whose job is to emit particles. Each particle has a set of properties, including position, orientation, and color, all of which change over time

  • C++-Based Fireworks Simulation: Implemented a fireworks simulation on top of the particle system: when the user presses the space key, a rocket fires into the air with a randomized velocity and time to live. When the rocket's time to live expires, it explodes into concentric rings of sparks, and sparks that reach the ground bounce (this life cycle is sketched after this list)

  • Houdini-Based Fireworks Simulation: Created a fireworks simulation using the Houdini FX software

Programming Language: C++
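
The rocket/spark life cycle above maps onto a short update loop; the gravity value, spark count, ring speed, and bounce damping below are illustrative placeholders, not the project's tuned parameters.

    #include <cmath>
    #include <vector>

    struct Particle {
        float pos[3], vel[3];
        float timeToLive;  // seconds until the rocket explodes / spark dies
        bool  isRocket;
    };

    // Integrate one timestep; expired rockets append sparks to `spawned`.
    void update(std::vector<Particle>& particles, float dt,
                std::vector<Particle>& spawned) {
        const float gravity = -9.8f;
        for (Particle& p : particles) {
            p.vel[1] += gravity * dt;
            for (int i = 0; i < 3; ++i) p.pos[i] += p.vel[i] * dt;
            p.timeToLive -= dt;
            if (p.isRocket && p.timeToLive <= 0.0f) {
                // Rocket expired: emit a ring of sparks (8 here for brevity).
                for (int i = 0; i < 8; ++i) {
                    float angle = 6.2831853f * i / 8.0f;
                    Particle s{{p.pos[0], p.pos[1], p.pos[2]},
                               {std::cos(angle) * 3.0f, 0.0f, std::sin(angle) * 3.0f},
                               2.0f, false};
                    spawned.push_back(s);
                }
            }
            if (!p.isRocket && p.pos[1] < 0.0f && p.vel[1] < 0.0f)
                p.vel[1] = -p.vel[1] * 0.6f; // sparks bounce off the ground
        }
    }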

Animation Toolkit - Behavioral Animation (C++)

  • Implemented GUI of relevant behavior parameters, the agent body dynamics, and control laws as the basis for behavioral animation

  • Behaviors: Implemented six individual behaviors (Seek, Flee, Arrival, Departure, Wander, and Obstacle Avoidance) and five group behaviors (Separation, Cohesion, Alignment, Flocking, and Leader Following); the core flocking terms are sketched after this list

Programming Language: C++
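
A compact 2D sketch of the three classic flocking terms (separation, cohesion, alignment). The gains and neighbor radius are illustrative, and the real toolkit applies such forces through full agent body dynamics and control laws.

    #include <cmath>
    #include <vector>

    struct Agent { float px, py, vx, vy; };

    // Steering force on agents[i] from neighbors within `radius`.
    void flockingForce(const std::vector<Agent>& agents, size_t i,
                       float& fx, float& fy) {
        const float kSep = 1.5f, kCoh = 0.5f, kAli = 0.8f, radius = 5.0f;
        float cx = 0, cy = 0, ax = 0, ay = 0, sx = 0, sy = 0;
        int n = 0;
        for (size_t j = 0; j < agents.size(); ++j) {
            if (j == i) continue;
            float dx = agents[j].px - agents[i].px;
            float dy = agents[j].py - agents[i].py;
            float d = std::sqrt(dx * dx + dy * dy);
            if (d > radius || d <= 0.0f) continue;
            cx += agents[j].px; cy += agents[j].py;  // cohesion: average position
            ax += agents[j].vx; ay += agents[j].vy;  // alignment: average velocity
            sx -= dx / (d * d); sy -= dy / (d * d);  // separation: push away
            ++n;
        }
        fx = fy = 0.0f;
        if (n == 0) return;
        fx = kSep * sx + kCoh * (cx / n - agents[i].px) + kAli * (ax / n - agents[i].vx);
        fy = kSep * sy + kCoh * (cy / n - agents[i].py) + kAli * (ay / n - agents[i].vy);
    }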

Computer Vision

Machine Perception Project: Two-View Stereo Algorithm Implementation (Python)

Overview

This project required the creation of a two-view stereo algorithm capable of transforming multiple 2D viewpoints into a comprehensive 3D reconstruction of a scene. It involved extensive work with Python and various associated libraries. One of the key features of this project was the use of the K3D library, which facilitated the visualization of the 3D point clouds generated by the stereo algorithm. The assignment was broken down into several key tasks:

  1. Rectify Two Views: The first step was rectifying the two viewpoints: understanding the camera configuration and computing the right-to-left transformation and the rectification rotation matrices. The rectification process was derived from the provided lecture slides but required careful handling of the clockwise rotation of the images in our dataset.

  2. Compute Disparity Map: Next, I implemented a method to compute the disparity map by comparing patches using three different metrics: SSD (Sum of Squared Differences), SAD (Sum of Absolute Differences), and ZNCC (Zero-Mean Normalized Cross-Correlation). This required writing a kernel function for each metric and applying it to the image patches (the ZNCC kernel is sketched after this list).

  3. Compute Depth Map and Point Cloud: After obtaining the disparity map, I developed a function to compute the depth map and point cloud. This allowed each pixel to store the XYZ coordinates of the point cloud, creating a more accurate 3D reconstruction.

  4. Postprocessing: I implemented some post-processing steps to remove background noise, crop the depth map, remove outliers from the point cloud, and transform the point cloud from the camera frame to the world frame.

  5. Visualization: Using the K3D library, I visualized the reconstructed point cloud directly within the Jupyter notebook. The result was a detailed, interactive 3D reconstruction of the scene, generated from the initial 2D viewpoints.

  6. Multi-pair Aggregation: Finally, I used multiple view pairs for the two-view stereo and aggregated the reconstructed point cloud in the world frame.
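
Of the three metrics in step 2, ZNCC is the least obvious, so here is a sketch of the math (the project itself was written in Python with NumPy; this C++ version just shows the computation) over two equally sized, flattened grayscale patches.

    #include <cmath>
    #include <vector>

    // Zero-mean normalized cross-correlation: subtract each patch's mean,
    // then correlate and normalize by the standard deviations. Returns a
    // value near +1 for patches identical up to gain and offset.
    static double zncc(const std::vector<double>& left,
                       const std::vector<double>& right) {
        const size_t n = left.size();
        double meanL = 0, meanR = 0;
        for (size_t i = 0; i < n; ++i) { meanL += left[i]; meanR += right[i]; }
        meanL /= n; meanR /= n;
        double num = 0, varL = 0, varR = 0;
        for (size_t i = 0; i < n; ++i) {
            double dl = left[i] - meanL, dr = right[i] - meanR;
            num += dl * dr; varL += dl * dl; varR += dr * dr;
        }
        return num / (std::sqrt(varL * varR) + 1e-12); // higher is better
    }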

The culmination of this project was a fully reconstructed 3D point cloud of a temple, which was captured in various screenshots and can be viewed interactively. Overall, this project was a great opportunity to dive deep into machine perception and demonstrate my ability to manipulate and interpret complex datasets.

Machine Perception Project: Fitting 2D Images and 3D Scenes Using Multilayer Perceptron Networks (Python)

Overview

This project applied the principles of machine perception to fitting a 2D image and a 3D scene using Multilayer Perceptron (MLP) networks.

In the first part of the project, I focused on fitting a 2D image. This entailed:

  1. Implementing Positional Encoding to map continuous input coordinates into a higher-dimensional space, allowing the neural network to approximate high-frequency variations in color and texture more effectively (see the sketch after this list).

  2. Designing an MLP with three linear layers, using ReLU activation for the first two layers and a Sigmoid activation function for the last layer.

  3. Training the MLP to fit the given image, using the Adam optimizer and Mean Square Error as the loss function. The pixel coordinates were normalized and the output was transformed back to an image. The performance of the MLP was assessed by computing the Peak Signal-to-Noise Ratio (PSNR) between the original and reconstructed image.
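
The positional encoding of step 1 is easy to state precisely: each coordinate x maps to [sin(2^0·πx), cos(2^0·πx), ..., sin(2^(L-1)·πx), cos(2^(L-1)·πx)]. The project used Python; the C++ sketch below just shows the mapping, with the frequency count L as the tunable hyperparameter.

    #include <cmath>
    #include <vector>

    // Expand each coordinate into sin/cos pairs at doubling frequencies.
    std::vector<float> positionalEncoding(const std::vector<float>& coords,
                                          int numFrequencies) {
        std::vector<float> encoded;
        for (float x : coords) {
            for (int k = 0; k < numFrequencies; ++k) {
                float freq = std::pow(2.0f, float(k)) * 3.14159265f;
                encoded.push_back(std::sin(freq * x));
                encoded.push_back(std::cos(freq * x));
            }
        }
        return encoded;
    }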

In the second part of the project, I tackled fitting a 3D scene. The steps I undertook for this task were:

  1. Computing the images' rays based on the transformation between the camera and the world coordinates along with the intrinsic parameters of the camera.

  2. Sampling points along a ray, with points chosen evenly along the ray from the near to the far end.

  3. Designing a Neural Radiance Fields (NeRF) MLP that took as input the position and direction of the sample points along the ray, after applying positional encoding to both.

  4. Implementing a volumetric rendering formula to compute the color of each pixel. This involved numerically approximating a continuous integral for the ray color, based on the density and color of an adequate number of samples along a ray (see the compositing sketch after this list).

  5. Rendering an image by calculating all the rays, sampling a number of points from these rays, passing them through the neural network, and then applying the volumetric equation to produce the reconstructed image.

  6. Integrating all the aforementioned steps to train the NeRF model using the Adam optimizer and Mean Square Error as the loss function.
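
The volumetric rendering quadrature of step 4 reduces to an alpha-compositing loop: each sample's opacity is alpha = 1 - exp(-sigma·delta), weighted by the transmittance of everything in front of it. The project's Python code follows the same recurrence; the single-channel C++ sketch below assumes uniform sample spacing delta for brevity.

    #include <cmath>
    #include <vector>

    // Composite per-sample densities and colors along one ray.
    float renderRay(const std::vector<float>& sigma,
                    const std::vector<float>& color, float delta) {
        float result = 0.0f, transmittance = 1.0f;
        for (size_t i = 0; i < sigma.size(); ++i) {
            float alpha = 1.0f - std::exp(-sigma[i] * delta); // sample opacity
            result += transmittance * alpha * color[i];
            transmittance *= 1.0f - alpha; // light blocked by samples so far
        }
        return result;
    }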

Throughout the project, I experimented with positional encoding of different frequencies, which provided valuable insights into its effects on the image-fitting process. This project allowed me to delve deep into the applications of machine learning in computer vision and gave me a solid understanding of advanced concepts like NeRF and volumetric rendering. The final model achieved a PSNR over 24 after 3000 iterations, demonstrating effective learning and a decent approximation of the 3D scene from 2D views.


Face Recognition & Style Recognition (Python)

In this project, we ran our dataset through face-detection software, then used a deep neural network to generate 3D reconstructions of the detected faces. Finally, we compared the overall similarity of the reconstructions to see how well face reconstruction can function as a style recognizer across different styles of art from different historical periods.

Analysis of comparisons including natural faces

Because we intentionally built the natural-faces dataset from 188 faces of 21-year-old subjects spanning White, Asian, and Black individuals:

  • the closer the similarity between an artist’s dataset and the natural-faces dataset, the more realistic (photographic) that artist’s style is likely to be

  • the more tightly clustered an artist’s data points are, the more racially homogeneous that artist’s characters are


Experiment Implications

  • Botero has a very distinctive artistic style that clearly differentiates his work from almost every other historical style in our set, which makes sense since Botero’s characters are artistically coherent within primitivist art

  • Compared to Botero and Kahlo, most of the artists worked in periods when portraiture focused on white subjects, and the artists from the older datasets depict mainly white characters, which makes their data more racially homogeneous and therefore produces denser clusters

  • The exceptions are artists whose portrait style is coherently non-realistic, or whose choice of subjects’ races is more diverse than the conventions of their period

  • An interesting observation: most paintings we examined did not overlap with the small cluster of natural faces; this separation could potentially be used to characterize the styles of the artists we selected
