Thursday, March 19, 2015

Fauxtography Progress


The game has come a long way since my last post. In about a week I have done most of the main menu, finished taking photos, saving them to disk, finding out which subjects were captured, grading the quality of the photo, and serializing the relevant game info. I have also started constructing the first world, "Bidwell Park". Everything is being done with mobile hardware in mind, which is the biggest challenge for me. It's hard not to place the nice-looking reflective water, but in the end it's all about performance. I've uploaded a video of my progress so far. Please note that the light maps are not baked in this demo, so the lighting is very off.

I've started a repository for my scripts if you want to see them; I'll be uploading everything with a delay of a few days:

https://github.com/AnthonyGraceffa/Fauxtography-VR

My favorite part of this project so far was choosing how to grade photos. I first started fiddling with ways to judge a photo based on where the subject was in the frame by doing a bunch of vector math. I originally wanted to push the user toward good composition and the rule of thirds, but I ended up going with something simpler. Photos are awarded more points if the subject is centered (within 20 degrees or so of the center) and based on how close the subject is to the camera (judged by the magnitude of a vector shot towards the target). For now this is all I'll do to grade the photo, but I may change my mind in the future.
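A rough sketch of what that grading logic could look like. This is not my actual class; the names, point values, and thresholds here are all illustrative:

```csharp
using UnityEngine;

// Hypothetical sketch of the grading described above: more points if the
// subject is within ~20 degrees of the center of the frame, plus a bonus
// that scales with how close the subject is to the camera.
public class PhotoGrader : MonoBehaviour
{
    public float centerConeDegrees = 20f;   // subject counts as "centered" inside this cone
    public float maxScoreDistance = 50f;    // beyond this distance the proximity bonus is zero

    public int Grade(Camera snapCam, Transform subject)
    {
        Vector3 toSubject = subject.position - snapCam.transform.position;

        // Angle between where the camera is looking and the subject.
        float angle = Vector3.Angle(snapCam.transform.forward, toSubject);
        int score = angle <= centerConeDegrees ? 100 : 50;

        // Closer subjects (smaller vector magnitude) earn a larger bonus.
        float distance = toSubject.magnitude;
        score += Mathf.RoundToInt(Mathf.Clamp01(1f - distance / maxScoreDistance) * 50f);

        return score;
    }
}
```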


A "SnapShot" taken from the game


Finding out whether a "Target" was in the photo was also interesting. I originally just used Renderer.isVisible, but this does not take into account something being behind another object, even if it's in a camera's view port. What I ended up doing was looking at each subject that was visible and then firing a raycast at it. If the ray hits, the subject counts as captured. The result is that if the target is obscured enough the ray may not hit it, but I'm going to play this off as "you need a clear shot" to get points for capturing a subject. Once again, this is also subject to change later.
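The check described above could be sketched roughly like this. Again the names are illustrative, not my real code:

```csharp
using UnityEngine;

// Sketch of the capture check: only consider subjects the renderer says are
// visible, then require an unobstructed ray from the camera to confirm a
// "clear shot". Note that isVisible only means the bounds are inside some
// camera's frustum -- it ignores occlusion, which is why the ray is needed.
public class TargetCapture : MonoBehaviour
{
    public bool WasCaptured(Camera snapCam, Renderer subject)
    {
        if (!subject.isVisible)
            return false;

        // Fire a ray at the subject; if something else is hit first, the shot is blocked.
        Vector3 origin = snapCam.transform.position;
        Vector3 dir = subject.bounds.center - origin;

        RaycastHit hit;
        if (Physics.Raycast(origin, dir, out hit))
            return hit.collider.gameObject == subject.gameObject;

        return false;
    }
}
```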

So far so good. The next things on my list to implement are the photo album, the ability to delete snaps from the game, animal AI, and finishing "Bidwell Park". Some people have requested the ability to "zoom", but since I'm building this for mobile I want it to work with only one button. Maybe I could use voice commands or something; I guess we'll see.

Monday, March 9, 2015

Photo VR - Saving the Snap

I've started working on my project for Oculus' "VR Jam", an event where developers are encouraged to create a VR game or experience for the Samsung Gear VR. I don't own a Gear VR, but I thought I'd go ahead and test with the DK2 until I find someone who does. My game is pretty simple: anyone who's played the N64 classic "Pokemon Snap" will find the concept familiar. You are carted around an environment and take pictures of the wildlife around you. Your "snapshots" are graded on things like composition, subject matter, and lighting. Since the game is a mobile experience I am designing it around one button, so bait and such probably won't be a feature, but we'll see.

I started with the "PictureTaker" class. This handles saving snapshots to file, as well as the game data used to judge the quality of the photo. I've run into one obstacle so far working with VR, which wasn't too much of an issue but was fun to solve. Since the screen renders two cameras, one for each eye, I can't just use Application.CaptureScreenshot, because the snap would show what you'd see if you viewed the game on a normal monitor, or this:


This is obviously not ideal, so my solution was to have a third camera in the scene that is disabled by default. When the fire button is pressed, the camera is re-activated and aligned between the two eye cams; it renders to a render texture, and a frame from that is saved to file as a PNG. The result is this:
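A minimal sketch of that render-texture approach, under the assumption that the third camera is already positioned between the eye cams. All names and resolutions here are illustrative:

```csharp
using System.IO;
using UnityEngine;

// Sketch: render one frame with a normally-disabled snapshot camera into a
// RenderTexture, read the pixels back, and write them out as a PNG.
public class SnapCamera : MonoBehaviour
{
    public Camera snapCam;      // the third camera, disabled by default in the scene
    public int width = 1024;
    public int height = 768;

    public void TakeSnapshot(string path)
    {
        RenderTexture rt = new RenderTexture(width, height, 24);
        snapCam.targetTexture = rt;
        snapCam.Render();       // render a single frame with the snapshot camera

        // Read the pixels back from the render texture into a Texture2D.
        RenderTexture.active = rt;
        Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;
        snapCam.targetTexture = null;

        File.WriteAllBytes(path, tex.EncodeToPNG());
        Destroy(rt);
        Destroy(tex);
    }
}
```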

Much better... 

For saving the game data I made a subclass called "SnapShot" that contains the game data for each picture, like what level it was taken in and what it is a picture of. For now, to determine the subject, I'm just firing a single ray from the center of the view and pushing the hit object's name onto the snapshot's list of subjects; that snap is then added to another list that will eventually be serialized and saved to file. The next trick is to make sure the saved game data lines up with each saved PNG. Here's my class so far:
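The shape of that data might look something like the sketch below. One way to keep the data lined up with each PNG is to store the image filename on the snapshot itself; the field names here are guesses for illustration, not the actual class:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the "SnapShot" data described above.
[Serializable]
public class SnapShot
{
    public string levelName;                            // level the picture was taken in
    public List<string> subjects = new List<string>();  // names of the captured targets
    public string fileName;                             // e.g. "snap_003.png", ties the data to its image
}
```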

http://pastebin.com/WMVrNBbu




Saturday, March 7, 2015

Moving from Scaleform to Unity UI

We started a new project in Advanced Production last semester called "Quantum Keeper", a PC game with the objective of teaching middle school kids about the Revolutionary War. The game is a third-person adventure game with stealth gameplay and some puzzle solving. As with all recent Advanced Production projects, we decided to make the game in Unity. My professor had just purchased a Scaleform key and wanted us to build the UI using Scaleform. Knowing that learning Scaleform could be a valuable skill, I chose to develop the UI. It was definitely an adventure; I had to learn ActionScript and the basics of real UI development.

Because of Scaleform's learning curve and the fact that I would be passing on my work to a new programmer after I graduate in the spring, our programming lead thought it would be better if I remade the UI in the new Unity UI included in 4.6. Honestly, for our purposes Unity UI does just fine, and my workflow is a bit nicer now since I don't have to go back and forth between Flash, Unity, and Mono.

The menu hasn't been skinned yet, but most of the options functionality has been added. As of now, settings are saved to PlayerPrefs, but we may switch to JSON in the future.
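For anyone unfamiliar with PlayerPrefs, the approach boils down to something like this. The keys and settings below are just examples, not the project's actual names:

```csharp
using UnityEngine;

// Illustrative sketch of saving menu settings with PlayerPrefs.
public static class SettingsStore
{
    public static void Save(float musicVolume, bool invertY)
    {
        PlayerPrefs.SetFloat("musicVolume", musicVolume);
        PlayerPrefs.SetInt("invertY", invertY ? 1 : 0);  // PlayerPrefs has no bool type
        PlayerPrefs.Save();
    }

    public static float LoadMusicVolume()
    {
        return PlayerPrefs.GetFloat("musicVolume", 1f);  // default to full volume
    }
}
```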



Here is my Main Menu Manager script as of Feb 16.

Menu Manager:
http://pastebin.com/fn1nVEJW

I've just started the HUD this week, beginning with the Minimap and Compass.
When I started the minimap I tried simply moving an image around behind a mask and lining it up with the player's location in the world, but I had a lot of issues keeping it aligned once the map rotated. My solution was to use an orthographic camera pointed down at the player that can only see UI-layer elements. That camera renders into a render texture where the map goes; under the level I placed a screenshot of it, marked as UI, along with a marker that follows above the player. The result works well and the performance drop was tiny.
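The follow behavior for that minimap camera is simple enough to sketch. Assume the camera is orthographic, renders into the HUD's render texture, and has its culling mask limited to the UI layer; the names here are illustrative:

```csharp
using UnityEngine;

// Sketch: keep the minimap camera hovering directly above the player,
// looking straight down, and rotating with the player so "up" on the
// map matches the direction the player is facing.
public class MinimapCamera : MonoBehaviour
{
    public Transform player;
    public float height = 30f;

    void LateUpdate()
    {
        transform.position = player.position + Vector3.up * height;
        transform.rotation = Quaternion.Euler(90f, player.eulerAngles.y, 0f);
    }
}
```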



As of the time of the screenshot, I was updating the compass, which is a default progress bar, by shooting one vector straight away from the camera and another towards the objective, then taking the angle between them and making it positive or negative based on which side was closer to empty objects placed to the left and right of the player's vision. This caused some jumpiness when you got too close to the objective, so now I just fire a vector to the left and get the objective's angle relative to that. I then convert that value to the progress bar percentage, which results in a much smoother compass.
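The smoother version could be sketched like this: measure the angle between the camera's left vector and the direction to the objective, then map it onto the bar's 0 to 1 range. This is a simplified illustration, not my actual HUD code:

```csharp
using UnityEngine;

// Sketch: convert the angle between "camera left" and "towards the objective"
// into a progress-bar fill value. Since the angle runs 0..180 degrees, it
// maps cleanly onto 0..1 with no sign flip, which is what removes the jump.
public class Compass : MonoBehaviour
{
    public Transform objective;
    public Camera playerCam;

    public float BarPercent()
    {
        Vector3 left = -playerCam.transform.right;   // vector fired to the left
        Vector3 toObjective = objective.position - playerCam.transform.position;
        left.y = 0f;
        toObjective.y = 0f;                          // keep the compass horizontal

        float angle = Vector3.Angle(left, toObjective);  // 0..180 degrees
        return angle / 180f;                             // 0 = far left, 1 = far right
    }
}
```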

HUD manager as of Feb 18th:
http://pastebin.com/0siAPMcF

So far so good, I'll post updates to problems I find as I go.


Here are some of the Scaleform scripts I was working on at the time. They are not properly commented, since the Scaleform version of the UI was very early in development.

HUD Manager:
http://pastebin.com/HXXsiEsH

HUD Cam:
http://pastebin.com/R2HT6ELs