Saturday, March 30, 2013

Mobile Devices in Gaming



Given that this week’s lecture is on interaction devices (lecture found here), the topic for this week’s blog post is… you guessed it, interaction devices. Since my interest lies in mobile games, I will talk about interaction on mobile devices.

For those of us who used to play console games (or still do), one important aspect is button mapping. The ability to change button mappings in-game is an important feature for advanced players who, over many hours of play, have developed a layout they are comfortable with.

Let’s take Call of Duty for example. The franchise has well over 40 million monthly players across its titles (Jaradat, 2012). Many of these players are long-time fans of the series and veterans of first-person shooters. Although there is a default controller mapping, it is likely that, over time, these players have settled on their own button mapping that lets them perform better in-game. As a result, they will always change the settings to their liking.
In 2009, Activision released a version of Call of Duty for mobile devices. On devices such as Apple’s iPhone, the touch screen means that there are no physical buttons. Essentially, we have gone from this:

Source: http://avgjoegeek.net/5-tips-on-how-to-play-call-of-duty-mw3-like-a-boss/


To this:

Source: http://en.wikipedia.org/wiki/IPhone_3G


So how exactly does a game that requires a fairly complex button mapping go from a controller with many buttons and analog sticks to a device with no buttons at all, just a touch screen, a gyroscope, and an accelerometer?


Looking at the above sample gameplay video / review, one issue with the layout is immediately obvious: it is rather intrusive for the player. Although the lack of physical buttons means that virtual buttons can be placed anywhere on the screen, at its core the game still requires two analog sticks in order to move and look around. Although the reviewer was able to move around without much trouble, looking around was a big issue: the right hand is constantly in the way as the player tries to look around, which ruins the experience.
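
The two-virtual-stick problem comes down to how a touch position gets turned into a movement vector. Here is a minimal sketch of a virtual analog stick, written in Python for illustration; the radius and dead-zone values are made-up placeholders, not values from any real Call of Duty port:

```python
import math

def virtual_stick(touch_x, touch_y, center_x, center_y,
                  radius=60.0, dead_zone=0.15):
    """Map a touch position to a movement vector in [-1, 1] x [-1, 1]."""
    dx = (touch_x - center_x) / radius
    dy = (touch_y - center_y) / radius
    magnitude = math.hypot(dx, dy)
    if magnitude < dead_zone:      # ignore tiny jitters near the stick's centre
        return (0.0, 0.0)
    if magnitude > 1.0:            # clamp touches outside the stick's rim
        dx, dy = dx / magnitude, dy / magnitude
    return (dx, dy)
```

The dead zone matters on a touch screen precisely because there is no physical spring to re-centre the player's thumb.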

Don’t expect the hardware to change anytime soon though. The iPhone’s perceived affordance is touch: when one looks at the iPhone, the screen suggests interacting with it by touching it. This, of course, has spurred a new generation of games: simple games with intuitive controls that take advantage of such a screen. Angry Birds, for example, was designed for a touch screen; its simple control (use your finger to slingshot the bird) makes it easy for players to quickly pick up the game and play. A game such as Call of Duty, on the other hand, was not designed for touch devices. It was originally a PC / console game, and as a result its controls work well on PCs and gaming consoles.

Mobile devices are an interesting domain for video games. Controllers have always been used to play video games: arcade cabinets had joysticks and buttons; home gaming consoles had controllers with analog sticks, d-pads, and buttons; and, more recently, we have seen additional sensors incorporated into controllers, such as the accelerometers in the Wii Remote and the motion tracking of the Kinect. Mobile devices, by contrast, never really had these bells and whistles. Phones back then had a keypad for dialing and texting, and games were often limited to very simple ones such as Snake, matching cards, or ricochet. These games did not require complex interactions, and the player’s movement options were often limited. In Snake, for example, the player only needs to move up, down, left, or right in order to eat the food and grow longer.

Of course, the shift to touch screens, along with devices that feature accelerometers, gyroscopes, and many other sensors, has opened up a world of new possibilities for game developers. These sensors allow us to overcome the lack of analog sticks and buttons and encourage us to find creative ways to turn the devices themselves into controllers. This leads back to the importance of button mapping, a topic that was already important back when arcades were popular.

For mobile games, there is no standard for button mapping. Some games are meant to be played with the screen vertical, some horizontal, and different games require different control schemes that make sense for that specific game. There is a direct mapping between input design and gameplay here: we could take an accelerometer-based control system from a racing game and stick it into Angry Birds, but since that game does not ask us to drive birds around and hit targets, the controls would make no sense at all.
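
To make the racing-game example concrete, here is roughly what an accelerometer-based steering control might look like, sketched in Python for illustration; the tilt threshold and dead zone are placeholder values, not taken from any real game:

```python
def tilt_to_steering(accel_x, max_tilt=0.5, dead_zone=0.05):
    """Map the accelerometer's sideways reading (in g) to steering in [-1, 1]."""
    if abs(accel_x) < dead_zone:    # holding the phone roughly level = no steering
        return 0.0
    steering = accel_x / max_tilt   # full lock once tilt reaches max_tilt g
    return max(-1.0, min(1.0, steering))
```

The same code dropped into Angry Birds would produce input events just fine; it is the mapping to gameplay that would be meaningless.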

To conclude, I think mobile devices are an interesting domain to study, as the lack of buttons gives developers greater creative freedom in how they wish to do things. It also means we will likely see more creative interfaces and input designs that let us enjoy our games even more. Mobile games are taking it to the next level: by taking away the controller and its standard mappings, they push game developers to explore input design more deeply in order to come up with something that works well for their game.

References

Jaradat, F. (2012, February 12). Over 40 million monthly players across call of duty franchise, breakdown by title. Retrieved from http://mp1st.com/2012/02/12/over-40-million-monthly-players-across-call-of-duty-franchise-breakdown-by-title/  

Monday, March 25, 2013

Usability Heuristics in Troll Tower Defence


As I approach the end of the school year, it is once again crunch time for me, which means working on a million different projects all due roughly around the same time (just joking about the million; I’ve got 15 things on my to-do list at the time of writing, all of which are worth marks). But enough about that. This week’s guest lecturer gave us a sneak peek at the project he was working on and discussed the game’s UI design issues, so I will do the same with mine.

For those who remember, back at the beginning of the year I mentioned I was working on a poker game and did some testing to see how usable its interface was, based on what we are learning in HCI class. Unfortunately, with other school work creeping up, that project has been on hold for a while now and there are no new updates.

On the other hand, I have been working on another project. It started out as a side project that I would only work on in my spare time, but, seeing as I needed a few more portfolio pieces for my demo reel, I decided to use the game as a portfolio piece, which is why there has been a lot of progress on it (in terms of gameplay).

First, let me start by talking about the game. The game, titled “Troll Tower Defence”, is an iOS game that I am planning to release on the App Store once it is complete. I am sure most of you know what tower defence games are; for those that don’t: wave after wave of enemies approach along a path, and as the player, you must build towers at strategic points on the map and upgrade them in order to kill the monsters before they reach their goal. If too many enemies slip by, the player loses the game.

Here is a screenshot of the game in action:


It may not be the most exciting thing in the world, but I am a fan of tower defence games and figured, hey, why not make one where troll faces are invading?

Okay, I’ll admit that I haven’t done much research in this area yet, so this next part will cover the Nielsen usability heuristics that I have already violated (Nielsen, 1995):

Visibility of System Status: Maybe my game doesn’t violate this heuristic… yet. In the screenshot above, you can see that vital information is shown at the top of the screen: the player’s lives, their gold, and the current wave. I will need to find a different font and increase the font size, as I am sure most of you have noticed that the text is really small and hard to read.

User Control and Freedom: This is a biggie. In the game’s current state, players are able to build a tower and place it wherever they wish; however, there are no clearly marked “emergency exits”. Once a building has been placed, it is there until the end of the game. There is also no way to pause the game if something important comes up in the real world and the player needs to stop focusing on the game for a minute. The fix will be to support a remove option, so that players can sell a tower back for money, and to implement a pause state.
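
A rough sketch of those two fixes, in Python for illustration (the 75% refund rate is my own placeholder, not a final design decision; none of these names come from the actual game code):

```python
class TowerDefenceGame:
    """Minimal sketch of the two 'emergency exits': pause and sell-back."""

    REFUND_RATE = 0.75  # placeholder fraction of the purchase price returned

    def __init__(self, gold):
        self.gold = gold
        self.paused = False
        self.towers = []                      # list of (name, purchase_price)

    def toggle_pause(self):
        self.paused = not self.paused

    def build_tower(self, name, price):
        if self.paused or price > self.gold:  # no building while paused or broke
            return False
        self.gold -= price
        self.towers.append((name, price))
        return True

    def sell_tower(self, index):
        """Remove a placed tower and refund part of its price."""
        name, price = self.towers.pop(index)
        self.gold += int(price * self.REFUND_RATE)
```

A partial refund (rather than a full one) keeps the sell option from becoming a free undo that removes all strategic cost from a bad placement.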

Flexibility and Efficiency of Use: One of the main tasks, and a core component of the game, is building towers. As this is done very often, it would make sense to let the player tap the button for the tower they wish to purchase and then keep tapping the map to continuously purchase the same tower. Currently, the user must tap the tower button, then tap the map where they wish to place it, and this process is repeated for every new tower.
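
The proposed flow can be sketched as a tiny state machine where the tower selection persists after each placement instead of being cleared. Python for illustration; these names are hypothetical, not from the game code:

```python
class TowerPlacer:
    """Tap a tower button once, then place copies with every map tap."""

    def __init__(self):
        self.selected = None
        self.placed = []          # list of (tower_type, x, y)

    def tap_button(self, tower_type):
        self.selected = tower_type     # selection persists across placements

    def tap_map(self, x, y):
        if self.selected is None:
            return False
        self.placed.append((self.selected, x, y))
        return True                    # note: selection is NOT cleared here

    def deselect(self):
        self.selected = None           # e.g. tap the button again, or a cancel tap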

Help Users Recognize, Diagnose, and Recover from Errors: I do have a lot of debug statements that print out the game’s status in debug mode, but these are of no use to the player. At the moment, if the game encounters an error and fails, it simply fails and the user is given no notice. It would be beneficial to have a pop-up of some sort appear whenever there is an error and explain it to the player in plain English.

Help and Documentation: This is another one that I have violated, primarily because I haven’t gotten around to coding this section of the game yet. Looking at the current main menu screen, you can see that players have no option except to figure out how to play the game on their own. I’ve tested the game with a few people so far, and all of them have pointed out that the lack of instructions made it hard for them to figure out how to perform the tasks the game requires. Sometime in the next couple of weeks, I will be adding a help / instruction section so that players can read it and learn how to play.


I haven’t violated the following heuristics yet, but at the rate I’m going, I probably will very soon, as there is not much time left and a lot of work to do.

  • Match between system and real world
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Aesthetic and minimalist design



As a gameplay programmer, I think satisfying all 10 heuristics is a major challenge. Violating any one of them can easily lead to usability issues in the end product. Therefore, I think it is of utmost importance that all 10 heuristics are met; otherwise, you can have the most impressive game out there with all the bells and whistles, but at the end of the day players are going to say your game sucks because they’re not sure how to play it or what they are supposed to do.

References

Nielsen, J. (1995, January 01). 10 usability heuristics for user interface design. Retrieved from http://www.nngroup.com/articles/ten-usability-heuristics/

Monday, March 18, 2013

Direct Manipulation in Games

In this week's blog post, I'd like to take some time to talk about direct manipulation in video games. Direct manipulation is, in short, the continuous representation of objects of interest, combined with rapid, reversible, and incremental actions and immediate feedback on those objects (Nacke, 2013). The game that I will be using as an example throughout the post (and one that I recently started playing) is SimCity.

Source: http://upload.wikimedia.org/wikipedia/en/5/5f/SimCity_2013_Limited_Edition_cover.png
SimCity is a city planning and building game. Players zone areas of their city for purposes such as residential, commercial, and industrial. Once zoned, Sims begin moving into the city, working at factories and spending their hard-earned money at stores.

The game serves as a good example of direct manipulation in video games. If we look closely at the actions the players perform in the game, they are similar to actions that would occur in the real world. An obvious example could be the government zoning a residential area, home builders building new houses, and eventually, new people moving into the city.

In general, the game feels like an interactive, animated version of Google Maps. Users can zoom in and out of the map to get a better view, and zooming in reveals the fine details of the game, a further example of how it fits the definition of direct manipulation. Once an area has been zoned, users can watch new buildings being constructed in the city. Since it would be silly for players to wait months for construction as in the real world, game time progresses much faster, and the player sees the entire construction process. In the later stages of the game, the player may find that their layout no longer suits the city's population and may choose to reorganize the city. Players can do this by demolishing a building, removing the zoning permit (if desired), and then re-zoning the area.

Overall, the user interface is easy to learn, makes sense, and is easy to master over time. The ability to create a city and share it with other players and friends encourages users to master the interface and to keep exploring to see what new options become available as they progress through the game.

In last week's lecture, one of the topics covered was command line vs. display editors. The example in the lecture concerned word processors and operating systems. For many of us, the costs and benefits of each are easy to identify: command-line interfaces can be more powerful but require users to memorize different syntax for different systems; display editors offer a WYSIWYG interface and icons that make features easy to identify, but lack some of the more sophisticated features. To add to that and provide a more gaming-related example, we can look at the transition of computer games throughout history.

Those who were around in the 70s may remember a little text-based game called Zork.

Source: http://upload.wikimedia.org/wikipedia/en/0/01/Screenshot_of_Zork_running_on_Frotz_through_iTerm_2_on_Mac_OSX.png
Zork is a good example of what a command-line, text-based game was like: users read blocks of text and typed in the actions they wished their character to perform. From there, the 80s brought us the first SimCity:

Source: http://upload.wikimedia.org/wikipedia/commons/c/c8/Micropolis_-_empty_map.png
As we can see, there is more of a WYSIWYG interface, although there is still a lot of text in the game. Text gives the user important information about their current game state, the icons in the middle provide a quick way to develop the city, and the map gives users a current view of their city.

Fast forward to 2013: advances in the games and computer hardware industries have allowed developers to craft more realistic and exciting experiences, and to improve on old ones for players to enjoy once again. Below is a screenshot of the recently released SimCity:

Source: http://img.popherald.com/uploads/2013/03/sim-city-5-2013-traffic.jpeg
As you can see, there have been major improvements: users can zoom in, the game simulates individual Sims with AI, and there is minimal text alongside lots of icons.

This is not to say that we can now create perfect interfaces without any problems whatsoever. SimCity's user interface comes with its own set of problems. For starters, the game's heavy reliance on icons means that users must take time to learn what each icon means and where to find the buildings they wish to construct. This was definitely one of the problems I faced as a new player. There was an instance where I wanted to specialize my city as a gambling city and build my first casino; the problem was, which category does a casino fall under?

A second, closely related problem is the use of icons that may confuse new players. Looking at the image above, certain parts of the UI could easily puzzle a newcomer: for example, what does the shield icon on the bottom left represent? What do the green, blue, and yellow bars at the bottom of the screen represent?

A third problem with SimCity's UI, albeit a minor one, is the lack of keyboard shortcuts. Keyboard shortcuts are essential for expert players who know exactly what they want and where they want it, and who do not wish to waste time going through menus. For these users, speed matters because they have memorized all the shortcuts. In SimCity, speed does not give users any particular advantage, which is why this is only a minor problem for the game. In an RTS such as Starcraft, by contrast, a lack of keyboard shortcuts would be a significant problem for Blizzard. Many Starcraft players are veterans, and even new players quickly realize the importance of knowing the shortcuts, as every second in the game can make the difference between winning and losing.
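
At its simplest, a shortcut system is just a lookup table from keys to actions. A minimal Python sketch with hypothetical bindings for illustration (SimCity exposes no such map; these keys and action names are made up):

```python
# Hypothetical key bindings for illustration only.
SHORTCUTS = {
    "r": "zone_residential",
    "c": "zone_commercial",
    "i": "zone_industrial",
    "b": "bulldoze",
}

def handle_key(key):
    """Return the bound action, or None so unbound keys fall through to menus."""
    return SHORTCUTS.get(key.lower())
```

Returning `None` for unbound keys is the important design point: novices can ignore shortcuts entirely and keep using the menus, while experts bypass them.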

To finish off: we've come a long way in the gaming industry. From the text-based games of the 70s to the sophisticated and beautiful UIs found in today's games, each game has a UI specifically designed to work well for that type of game. It is therefore monumentally important that developers understand their target audience and design a UI that is meaningful, easy to learn, and works well for both beginners and experts in order to have a successful game.

References

Nacke, L. (2013). UOIT INFR 4350 Lecture 7: Direct manipulation [Web]. Retrieved from http://www.youtube.com/watch?feature=player_embedded&v=Oq7PRoALJV4

Tuesday, March 12, 2013

AR in Games - Continued

In the previous post, I discussed AR and its current applications in games, and provided several examples. In this post, I will expand upon AR in games by discussing my current work on the final project for HCI class.

One of the fastest and easiest ways to get up and running is ARToolKit, a simple toolkit that works with C++ and OpenGL. For the most part, the toolkit is easy to understand and work with, and within an hour I was set up and running. Below is an example of what the output looks like when everything runs fine:

Source: http://www.hitl.washington.edu/artoolkit/documentation/userstartup.htm
Currently, our plan is to create and demo a Tamagotchi-style game. For those that don't know, Tamagotchi is basically a virtual pet game: users interact with their pet by taking care of its needs, just like a real pet. For example, pets need to go to the washroom, be fed, play, and rest. I won't go into too much detail, but those who wish to read more can click here.

In our version (currently unnamed), users will rely on AR cards and watch their pets come to life as the camera picks up the markers. Models of the pets pop up and interact based on the type of AR card being used. For example, if a person places their pet AR card next to a food card, let's say the pizza card, they will watch their pet eat the pizza.

For those that read my last post, you will probably remember that the theme of the game is change. The game must help the player change their lifestyle, whether it be to eat healthier, to get a bit more exercise every day, or simply to socialize more instead of sitting in front of the computer all day. So how does an AR virtual pet game help users change? The virtual pet serves as a reminder to the player to make more health-conscious choices and gradually change their lifestyle for the better. In the game, the player is free to choose foods and activities for their pet. If the player feeds the pet too much junk food and lets it sit around with no exercise all day, the pet eventually becomes overweight, then unhealthy, and lives a much shorter life. On the other hand, by eating healthy foods such as salads and exercising regularly, the pet will be much happier and live longer.
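
That feedback loop can be sketched as a toy simulation. All the numbers here are made up for illustration; they are not the values we plan to ship, and the real game would track many more stats:

```python
class VirtualPet:
    """Toy model: junk food and inactivity shorten the pet's life,
    healthy food and exercise extend it."""

    def __init__(self):
        self.weight = 50       # arbitrary starting weight
        self.lifespan = 100    # remaining "life points"

    def feed(self, calories):
        if calories > 400:     # junk food: gain weight, lose lifespan
            self.weight += 5
            self.lifespan -= 3
        else:                  # healthy food: small lifespan bonus
            self.lifespan += 1

    def exercise(self):
        self.weight = max(30, self.weight - 2)   # can't waste away entirely
        self.lifespan += 2
```

The key property for the "change" theme is that every choice nudges the visible state, so the player gets immediate, cumulative feedback on their decisions.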

Currently, we plan to show the player how many calories are in each food after it is consumed, as well as animations of the pet getting overweight or in shape. Many of us don't bother to look at the nutritional facts of foods before we eat them, and the game works in a similar fashion: a player may think "ooh, my pet would like to eat bacon today", but after the pet eats the bacon, a number of nutritional facts appear on screen, making the player think "uh oh, maybe that wasn't the best choice for my pet". Hopefully, players will gradually start to remember these facts and give junk foods a second thought before they buy or consume them in the future.

With this idea, I'll admit we aren't the first to it, as several prototypes have been developed already. Zentium has developed an application called iKat for Windows Phone 7 in which a virtual pet appears on the user's phone (George, 2010).

The iKat demo shows the idea for the pet part of the game, but doesn't show the interactions behind it. The video below provides a better example of what the final game may look like:


On the downside, some questions have been raised about how useful and groundbreaking such a game would be. As mentioned before, we won't be the first to attempt such an app. On top of that, do people really want to carry a bunch of AR cards in their pockets as they go around town? Wouldn't it be easier to drop the AR portion and make a simple web game that users can log on to and play whenever they wish?

At the moment, there is some debate in the group as to whether we should instead work with Microsoft's Kinect. The Kinect is capable of doing AR as well, and the following video demonstrates what is possible:




After watching the demo, one possible idea for an HCI game is one that helps players become more environmentally friendly by putting waste in the appropriate bins. For example, garbage could fly towards the user, who must dodge the actual garbage while catching recyclable items such as cans and putting them into a recycling bin. Such a game could get players to start recognizing recyclable items and remind them to throw them into the recycling instead of the trash.

Such a training program could prove to be a good idea, especially since I see AR being useful in training applications and simulator-type games. For example, Microsoft's Kinect can be used with a live video stream and previous brain scans to more easily identify where certain parts of the brain are, as well as to help identify problem areas, which areas a disease could spread to, and so on (Ostrovsky, 2013).

Personally, I don't mind working on either project, whether it is the AR Tamagotchi game or some other Kinect AR game, as either way it is a new technology that I'd like to learn more about. For the record, however, my vote goes to a training or simulator-type AR game that uses the Kinect. The Kinect seems much more fun to work with, and it feels easier to do something new and exciting with it than to code everything from scratch with ARToolKit. Either way, I will continue to post updates on this project, along with videos and pictures of the demo once development begins.

References

George, P. (2010, March 11). Augmented reality app ikat gets demoed; tamagotchi fans rejoice. Retrieved from http://www.wpcentral.com/augmented-reality-app-ikat-gets-demoed-tamagotchi-fans-rejoice 

Ostrovsky, G. (2013, March 11). Microsoft kinect helps bring augmented reality to operating rooms. Retrieved from http://www.medgadget.com/2013/03/microsoft-kinect-helps-bring-augmented-reality-to-operating-rooms.html



Sunday, March 3, 2013

Augmented Reality in Games



Augmented reality… what is it? In short, augmented reality, or AR, “is a term used for a wide range of related technologies aimed at integrating virtual content and data with live, real-time media” (Mullen, 2011). Those of us who remember the movie Terminator will most likely remember terminator vision: the terminator would look at an object or person, and a display would instantly provide information about whatever was currently in the terminator’s vision. For those of you who haven’t seen Terminator or need a quick refresher:


Back in 1984, when the movie was released, I’m sure a lot of people questioned whether such a thing could exist, or assumed that, if it could, it was some really fancy future technology. Well, fast forward 30 years, and this technology is now available. The great thing about it: it’s free (for the most part). Libraries have been made available to programmers, allowing developers to come up with exciting applications and games for users. Of course, with the exponential growth in the number of smartphone users and rapid advances in technology, most of us already have phones capable of some basic AR.

Take, for example, Aurasma, an iPhone app demonstrated by Matt Mills during his TED talk. One of the cool things about Aurasma is that it uses image recognition, which is monumentally helpful in many areas of everyday life: advertisers can now make posters interactive, newspapers and pictures can come alive (remember seeing those in the Harry Potter movies?), and museums can provide guests with information through the app, just to name a few examples (Mills, 2012).

I’m sure by now most of you have figured out why I am doing this blog post, but I will ask the question for you anyway: why are you writing about AR in games? The answer is simple: because that is my next project! Over the next two months, my group and I will be playing around with AR technologies and turning them into a game prototype. The theme for the game is change. Our task is to come up with a game demo that helps players make changes in their lives, whether it is to exercise more, to recycle more, to walk more often, or even just to spend more time with friends and family rather than sitting in front of the computer all day.
With that being said, here is my current idea for this project: a game where AR is used to identify various pieces of garbage and help players be more environmentally friendly by reminding them to recycle more often. The inspiration for this idea comes from the game Sort ‘Em Up.


The proposed game will take advantage of the cameras readily available on all of today’s smartphones. Kids could walk around, and the game would pick up common household items such as pop cans, plastic jugs, cardboard containers, egg cartons, fruit scraps, batteries, newspaper, and glass. The game should correctly identify which items are recyclable, and once the player is in front of the garbage and recycling containers, it can animate the bins to make it obvious which container the item belongs in, and play an animation when the player puts the right item in the right bin.
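
Once the camera has identified an item, picking the right bin is essentially a lookup. A minimal Python sketch with an illustrative item table (the actual categories would depend on local recycling rules, and the real recognition step is the hard part):

```python
# Illustrative item -> bin table; a real game would use image recognition
# to identify the item first, then a lookup like this to pick the bin.
BINS = {
    "pop can": "recycling",
    "plastic jug": "recycling",
    "newspaper": "recycling",
    "glass bottle": "recycling",
    "fruit scraps": "compost",
    "battery": "hazardous waste",
}

def sort_item(item):
    """Return the bin an item belongs in, defaulting to plain garbage."""
    return BINS.get(item.lower(), "garbage")

def score_throw(item, chosen_bin):
    """Award a point when the player picks the right bin."""
    return 1 if sort_item(item) == chosen_bin else 0
```

Defaulting unknown items to "garbage" keeps the game honest when the recognizer sees something it cannot classify.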

Based on what I have read so far in Tony Mullen’s Prototyping Augmented Reality book, creating such an app will require using the phone’s GPS, accelerometers, gyroscopes, and other sensors in order to determine where the phone is located and which way it is facing, so that it can properly process the information on screen (Mullen, 2011). This is a lot more work than a marker system, where markers are used as reference points; however, a marker-based version of this game would not make sense, as we would have to tape markers all over the place just to get it to work! Although a markerless system is a lot more work and is still an active area of research, I believe it will pay off and could make a good educational game for kids.

To start wrapping things up: we’ve come a long way in the field of augmented reality. From an idea, to an expensive gadget that only companies with thousands of research dollars could afford, to something widely available to everyday people like you and me; this is just the beginning of AR, and I can’t wait to see what others do in the near future. One project I am particularly interested in is Google’s new AR game, Ingress. In Ingress, players play on one of two sides: the “Enlightened”, who try to activate portals around the world in order to control people’s minds, or the “Resistance”, who try to stop the other group from achieving world dominance (Newton, 2012). The game requires users to go around the city exploring, looking for portals. Once a portal is found, they must “hack” it in order to gain control. Here is the trailer for the game:
  


Personally, I haven’t played the game, but I will try it if I ever get my hands on an Android device. The game does raise several issues, though: are users okay with looking like tourists as they walk around the city with a phone constantly in front of their face? Will users get so hooked on the game that they end up walking farther and farther from the office on their way back from lunch? Could the game pose a safety issue if a user is too glued to their smartphone screen and not paying enough attention to the real world?

There is definitely room for improvement as we see more and more impressive uses of AR technology. I am still fairly new to it, but I will be doing more research over the next two months as I work on a game prototype for Dr. Nacke’s HCI class.

References
 

Mills, M. (Performer) (2012). Image recognition that triggers augmented reality [Web series episode]. In TED Talks. Retrieved from http://www.ted.com/talks/matt_mills_image_recognition_that_triggers_augmented_reality.html

Mullen, T. (2011). Prototyping augmented reality. (1st ed., p. 1). Indianapolis, Indiana: John Wiley & Sons, Inc.

Newton, C. (2012, November 16). Inside ingress, google's new augmented-reality game. Retrieved from http://news.cnet.com/8301-1023_3-57550819-93/inside-ingress-googles-new-augmented-reality-game/