Thursday, October 29, 2015

Mobile Infinite Downhill Derby VR

I recently started working on an infinite, arcade-style racing game that uses nothing but head tilt to control the vehicle. Steering is controlled by rolling your head slightly left and right. This lets you look around freely without steering out of control, and it also means the Gear VR version doesn't require a controller to play. Check it out.
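For reference, a head-roll steering input like the one described can be read from the tracked camera's roll angle. This is a minimal sketch with made-up names and tuning values, not the project's actual code:

```csharp
using UnityEngine;

// Hypothetical sketch: map the head's roll (z Euler angle) to a steering axis.
// "headTransform" stands in for the tracked HMD camera; the class and field
// names here are assumptions, not the game's real ones.
public class HeadTiltSteering : MonoBehaviour
{
    public Transform headTransform;    // tracked HMD camera
    public float maxRollDegrees = 30f; // roll that maps to full steering lock

    public float GetSteering()
    {
        // eulerAngles.z is reported 0..360; remap to -180..180 so a
        // slight roll past upright becomes a small signed value.
        float roll = headTransform.eulerAngles.z;
        if (roll > 180f) roll -= 360f;

        // Rolling the head to the right gives a negative z in Unity, so
        // invert the sign and clamp to [-1, 1] for use as a steering axis.
        return Mathf.Clamp(-roll / maxRollDegrees, -1f, 1f);
    }
}
```

The nice property of using only roll is that yaw and pitch stay free for looking around, which is exactly the behavior described above.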

The video may be down and the old file is missing; here's a link to it on Facebook:

I modeled the car and road pieces myself. I'm currently working on a track-generation algorithm.
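One common approach to an infinite track, which may or may not match what the generation algorithm ends up being, is to recycle a small pool of road pieces rather than instantiating new ones. A minimal sketch, with placeholder names and lengths:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of an endless-track recycler: keep a short queue of
// pre-placed road pieces and, once the player is well past the oldest piece,
// move it to the front of the track instead of spawning a new object.
// This avoids runtime allocation, which matters on mobile.
public class TrackRecycler : MonoBehaviour
{
    public Transform player;
    public float pieceLength = 50f;                       // assumed piece size
    public List<Transform> pieces = new List<Transform>(); // ordered road pieces

    void Update()
    {
        Transform oldest = pieces[0];

        // Once the player has cleared the oldest piece (with some margin),
        // teleport it ahead of the newest piece and rotate the queue.
        if (player.position.z - oldest.position.z > pieceLength * 1.5f)
        {
            Transform newest = pieces[pieces.Count - 1];
            oldest.position = newest.position + Vector3.forward * pieceLength;
            pieces.RemoveAt(0);
            pieces.Add(oldest);
        }
    }
}
```

A real downhill track would also vary curvature and slope per piece, but the recycle-the-oldest pattern stays the same.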

Next on my list to script is:
  • Suspension that updates the axles correctly
  • Random traffic
  • Scoring system
As usual, staying under 50k triangles has been difficult. I'm experimenting with inexpensive new ways to add detail; luckily, my draw calls are still in check.

Thursday, May 7, 2015

New Fauxtography Video

Before milestone 4 I wanted to make another video, this time including the short tutorial stage. The graphics here are a bit prettier than in the Gear VR version, but the mobile version will still look good enough.

Once again, the challenge post page is here:

Thursday, March 19, 2015

Fauxtography Progress

The game has come a long way since my last post. In about a week I've finished most of the main menu, along with taking photos, saving them to disk, determining which subjects were captured, grading the quality of each photo, and serializing the relevant game info. I have also started constructing the first world, "Bidwell Park". Everything is being done with mobile hardware in mind, which is the biggest challenge for me. It's hard not to place the nice-looking reflective water, but in the end it's all about performance. I've uploaded a video of my progress so far. Please note that the light maps are not baked in this demo, so the lighting is very off.

I've started a repository for my scripts if you want to see them; I'll be uploading everything with a delay of a few days:

My favorite part of this project so far was choosing how to grade photos. I started by fiddling with ways to judge a photo based on where the subject sits in the frame, doing a bunch of vector math. I originally wanted to push the user toward good composition and the rule of thirds, but I ended up going with something simpler. Photos are awarded more points if the subject is centered (within 20 degrees or so of the center) and based on how close the subject is to the camera (judged by the magnitude of a vector shot toward the target). For now this is all I'll do to grade a photo, but I may change my mind in the future.
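The centered-plus-close rule described above boils down to one angle check and one distance check. A minimal sketch, where the point values and names are my placeholders rather than the game's actual numbers:

```csharp
using UnityEngine;

// Hypothetical sketch of the grading rule: bonus points when the subject is
// within ~20 degrees of the frame center, plus points that scale with how
// close the subject is to the camera. All scoring constants are guesses.
public static class SnapGrader
{
    public static int Grade(Camera cam, Transform subject)
    {
        Vector3 toSubject = subject.position - cam.transform.position;

        // Angle between where the camera is pointing and the subject.
        float angle = Vector3.Angle(cam.transform.forward, toSubject);
        int score = angle <= 20f ? 100 : 50; // "centered" bonus

        // The magnitude of the vector to the target judges closeness:
        // shorter vector, higher score, clamped so point-blank shots
        // don't blow up the total.
        float distance = toSubject.magnitude;
        score += Mathf.RoundToInt(Mathf.Clamp(500f / distance, 0f, 100f));

        return score;
    }
}
```

One nice side effect of using the angle rather than screen coordinates is that the same rule works regardless of the camera's field of view.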

A "SnapShot" taken from the game

Finding whether a "Target" was in the photo was also interesting. I originally just used Renderer.isVisible, but that doesn't account for a subject being hidden behind another object while still inside the camera's viewport. What I ended up doing was looking at each subject that was visible and then firing a raycast at it; if the ray hits, the subject counts as captured. The result is that if the target is obscured enough, the ray may not hit it, but I'm going to play this off as "you need a clear shot" to get points for capturing a subject. Once again, this is subject to change later.
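The two-pass test described above, isVisible as a cheap filter and a raycast as the occlusion check, might look roughly like this (names and structure are mine, not the project's):

```csharp
using UnityEngine;

// Hypothetical sketch of the capture test: Renderer.isVisible as a cheap
// first pass, then a raycast from the camera to confirm a clear shot.
public static class CaptureCheck
{
    public static bool WasCaptured(Camera cam, Renderer subject)
    {
        // Cheap pass: skip subjects no camera is rendering at all.
        // (Note: isVisible is true if *any* camera renders the object.)
        if (!subject.isVisible) return false;

        // Fire a ray at the subject's center; if something else is in the
        // way, the first hit isn't the subject and the shot doesn't count.
        Vector3 origin = cam.transform.position;
        Vector3 dir = subject.bounds.center - origin;

        RaycastHit hit;
        if (Physics.Raycast(origin, dir, out hit))
            return hit.transform == subject.transform;

        return false;
    }
}
```

Aiming the ray at the renderer's bounds center is one way to do it; a partially obscured animal whose center is blocked will fail the check, which matches the "you need a clear shot" framing.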

So far so good. The next things on my list to implement are the photo album, the ability to delete snaps from the game, animal AI, and finishing "Bidwell Park". Some people have requested the ability to zoom, but since I'm building this for mobile I want it to work with only one button; maybe I could use voice commands or something. I guess we'll see.

Monday, March 9, 2015

Photo VR - Saving the Snap

I've started working on my project for Oculus' "VR Jam", an event where developers are encouraged to create a VR game or experience for the Samsung Gear VR. I don't own a Gear VR, but I thought I'd go ahead and test with the DK2 until I find someone who does. My game is pretty simple: anyone who's played the N64 classic "Pokemon Snap" will find the concept familiar. You are carted around an environment and take pictures of the wildlife around you. Your "snapshots" are graded on things like composition, subject matter, and lighting. Since the game is a mobile experience, I'm designing it around a single button, so bait and the like probably won't be a feature, but we'll see.

I started with the "PictureTaker" class. This handles saving snapshots to file, as well as the game data used to judge the quality of the photo. I've run into one obstacle so far working with VR; it wasn't too much of an issue, but it was fun to solve. Since the screen renders two cameras, one for each eye, I can't just use Application.CaptureScreenshot, because the snap would show what you'd see if you viewed the game on a normal monitor, like this:

This is obviously not ideal, so my solution was to have a third camera in the scene that is disabled by default. When the fire button is pressed, that camera is re-activated and aligned between the two eye cameras; it renders into a render texture, and a frame from that is saved to file as a PNG. The result is this:
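The third-camera trick above can be sketched as follows. The resolution, path handling, and class names are placeholders, not the actual PictureTaker code:

```csharp
using System.IO;
using UnityEngine;

// Hypothetical sketch of the snapshot path: a normally disabled camera,
// centered between the eye cameras, renders one frame into a RenderTexture;
// the pixels are read back into a Texture2D and written out as a PNG.
public class SnapCamera : MonoBehaviour
{
    public Camera snapCam; // disabled by default, aligned between the eyes

    public void TakeSnap(string path)
    {
        var rt = new RenderTexture(1024, 1024, 24);
        snapCam.targetTexture = rt;
        snapCam.Render(); // render a single frame into the texture

        // Read the RenderTexture back so it can be encoded to PNG.
        RenderTexture.active = rt;
        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;

        snapCam.targetTexture = null;
        File.WriteAllBytes(path, tex.EncodeToPNG());
    }
}
```

Because the camera only renders on demand, the cost of the extra camera is paid once per snapshot rather than every frame, which matters on mobile.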

Much better... 

For saving the game data I made a subclass called "SnapShot" that contains the game data for each picture, like what level it was taken in and what it is a picture of. For now, to determine the subject, I'm just firing a single ray from the center of the frame and pushing that object's name onto the snapshot's list of subjects; the snap is then added to another list that will eventually be serialized and saved to file. The next trick is making sure the saved game data lines up with each saved PNG. Here's my class so far:
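As a rough picture of the data being described, a serializable record like the one below would work; one way to keep the game data lined up with each PNG is to share an id between the record and the file name. The fields and names here are my guesses, not the actual class:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a serializable snapshot record. The id could also
// appear in the PNG file name (e.g. "snap_3.png") so the image and its game
// data stay paired on disk.
[Serializable]
public class SnapShot
{
    public int id;                  // shared with the PNG file name
    public string levelName;        // what level the picture was taken in
    public List<string> subjects = new List<string>(); // captured target names
    public int score;
}

[Serializable]
public class SnapShotAlbum
{
    // The outer list that gets serialized and saved to file.
    public List<SnapShot> snaps = new List<SnapShot>();
}
```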