Friday, November 17, 2006

Weeks 9 - 12

I've forgotten to post the last few weeks, so here's the update.

Week 12: 2006/11/13 – 2006/11/17

Respecting Characters

Sheldon, Lee. Chapter 3. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 37-59.

This chapter discusses the creation of a three-dimensional character, and respecting that character. The three dimensions are: physical, how the character looks; sociological, the character’s past, environment, and culture; and psychological, the character’s attitudes, opinions, and view of the world. Another key is making characters aware, but not to the extent that they tell the player about themselves; the character’s actions should reveal who the character is. Sheldon also points out the need for the character to grow and develop, and that the two are not the same thing. Growth is the change the character undergoes over the course of the narrative, while development is the gradual revealing of the character’s traits over the course of the narrative. Another thing Sheldon focuses on is not stereotyping a character into a certain role or persona.


Character Roles

Sheldon, Lee. Chapter 4. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 61-85.

Sheldon discusses the different roles characters, specifically NPCs, fill. NPCs are what drive the narrative in games; they also give the game-world life. NPCs can also provide commentary and gossip about other characters as well as goings-on in the world. Sheldon emphasizes giving NPCs multiple roles, or functions, within the game, which makes them deeper characters. He also discusses the different roles NPCs can fulfill, such as mentors, trainers, sidekicks, merchants, and quest givers. There is also the villain, or antagonist, role. The villain is the player-character’s opponent, and needs to be designed as an equal, so as to present some challenge to the player.


Character Traits

Sheldon, Lee. Chapter 5. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 87-112.

In this chapter, Sheldon talks about NPC mobility, emotion, and memory. Mobility is important for creating a more realistic world. Static NPCs, ones who never move, are uninteresting and are less capable of filling multiple roles in the game and narrative. Mobility doesn’t mean the NPC needs to be able to move throughout the entire game-world, though some may; some may be confined to a specific area but able to move around within it. Sheldon also discusses mobility in terms of the NPC’s response to the player-character: some may shy away or tremble in fear, while others will challenge the PC’s space. This type of mobility helps convey the NPC’s personality to the player.


Character Encounters

Sheldon, Lee. Chapter 6. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 113-148.

Sheldon talks a lot about dialogue and how dialogue functions in games. He focuses more on the exchange of dialogue than on its creation, but also touches on how the exchange can be perceived. Something else Sheldon touches on is character relationships. He presents a relationship system, the TGT Full System, which creates a base personality and facilitates dynamic relationships between characters. The base personality is comprised of six areas: like, respect, loyalty, trust, admiration, and love. A relationship modifier is applied to the base personality to calculate the relationship between two characters.
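To make the base-personality-plus-modifier idea concrete for myself, here's a rough sketch of how such a calculation might look in code. The six attribute names come from the chapter, but the scales, weights, and function names are my own guesses, not Sheldon's actual system.

```python
# Hypothetical sketch of a base-personality + relationship-modifier system.
# Attribute names are from the chapter summary; the 0-100 scale and the
# simple additive formula are illustrative, not the book's actual numbers.

BASE_ATTRIBUTES = ["like", "respect", "loyalty", "trust", "admiration", "love"]

def base_personality(values):
    """Return the six base attributes, each on a 0-100 scale (default 50)."""
    return {attr: values.get(attr, 50) for attr in BASE_ATTRIBUTES}

def relationship(npc_base, modifier):
    """Apply a per-attribute modifier to the base personality to get one
    character's current feelings toward another, clamped to 0-100."""
    return {attr: max(0, min(100, npc_base[attr] + modifier.get(attr, 0)))
            for attr in BASE_ATTRIBUTES}

# Example: an NPC who generally trusts people, but has been betrayed by B.
npc = base_personality({"trust": 70, "like": 60})
toward_b = relationship(npc, {"trust": -40, "respect": -10})
print(toward_b)
```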

Week 11: 2006/11/6 – 2006/11/10

Agitating for Dramatic Change

Littlejohn, Randy. “Agitating for Dramatic Change.” Gamasutra 29 Oct. 2003. 3 Nov. 2006.

This article discusses a potential new age of video games. Littlejohn wants to create interactive drama for the masses: like video games, but with more focus on story, character development, etc. These games would be immersive, familiar like TV and film, have empathetic characters, and have intuitive interfaces. He talks a lot about the characters in these games, synthespians. Synthespians are “autonomous agents with goals, biases, and abilities who carry out apparent 'intent'.” That's not to say they have any actual understanding or possess any actual cognitive ability, but to the player they are intelligent, goal-oriented, communicative, emotional NPCs. He also discusses some toolsets that could be created to enable game designers and others to create such characters without being proficient at programming or AI. He also goes on to discuss Facade, a game from Michael Mateas and Andrew Stern. Facade is an experiment in electronic narrative, or storytelling, and personally I think it's a step in the right direction for games.


AI for Games and Animation: A Cognitive Modeling Approach

Funge, John. “AI for Games and Animation: A Cognitive Modeling Approach.” Gamasutra 6 Dec. 1999. 3 Nov. 2006.

Funge discusses another approach to modeling synthetic characters. It's fairly straightforward, and similar to other models being developed by other researchers. The most interesting part is that he's using the situation calculus, a mathematical logic notation. It's an “AI formalism for describing changing worlds using sorted first-order logic.” Fluents are any world property that can change over time, and can be described as a function with a situation as its last argument. Precondition axioms define what state the world must be in for an action to be performed. Effect axioms define the value a fluent takes after an action has occurred. Funge uses this in his model to create predefined behaviors and goal-directed behaviors. He also discusses different methods for the character to determine its actions, such as HFSMs and situation trees.
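Here's a toy illustration I wrote for myself of those three ideas: fluents as functions whose last argument is a situation, precondition axioms that gate actions, and effect axioms that say what a fluent's value is after an action. The pickup/holding example is mine, not one of Funge's.

```python
# Toy sketch of situation-calculus fluents and axioms (my own example).

def do(action, situation):
    """Return the new situation that results from performing an action."""
    return situation + [action]

def holding(obj, situation):
    """Fluent: is the object held in this situation?"""
    state = False
    for act, target in situation:
        if target == obj:
            # effect axiom: pickup makes it true, drop makes it false
            state = (act == "pickup")
    return state

def poss_pickup(obj, situation):
    """Precondition axiom: an object can be picked up only if not already held."""
    return not holding(obj, situation)

s0 = []                                               # initial situation
s1 = do(("pickup", "key"), s0) if poss_pickup("key", s0) else s0
print(holding("key", s1))                             # True
```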


Steering Behaviors For Autonomous Characters

Reynolds, Craig W. “Steering Behaviors For Autonomous Characters.” Proc. of Game Developers Conference, 1999, San Francisco.

Reynolds discusses many different steering behaviors for autonomous characters. These include: seek, flee, pursuit, evasion, offset pursuit, arrival, obstacle avoidance, wander, path following, wall following, containment, flow field following, unaligned collision avoidance, separation, cohesion, alignment, flocking, and leader following. He gives a description of what each steering type is and, to some extent, how it works. For example, flow field following directs the motion of the character based on its position in the environment. A flow field is implemented using flow vectors that direct the character's movement. This is something that can be used in games to direct NPCs through the world. He also talks about combining different steering behaviors, since on their own they can seem rudimentary, to create more complex steering behaviors. One method he proposes is prioritizing the steering behaviors, such that when one fails the next one takes effect.
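To get a feel for it, here's a minimal sketch of two of those ideas: "seek" as a steering force (desired velocity minus current velocity) and a priority-based combination where the first behavior that produces a non-zero force wins. The 2D vector helpers, radii, and speeds are simplifications of my own, not Reynolds' exact formulation.

```python
# Minimal seek + prioritized steering sketch (simplified 2D, my own numbers).

import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def length(v): return math.hypot(v[0], v[1])
def scale(v, s): return (v[0] * s, v[1] * s)

def seek(position, velocity, target, max_speed=2.0):
    """Steer toward the target: desired velocity minus current velocity."""
    offset = sub(target, position)
    d = length(offset)
    if d == 0:
        return (0.0, 0.0)
    desired = scale(offset, max_speed / d)
    return sub(desired, velocity)

def avoid_obstacle(position, obstacle, radius=1.0):
    """Steer directly away from an obstacle if we are inside its radius."""
    offset = sub(position, obstacle)
    d = length(offset)
    if d >= radius or d == 0:
        return (0.0, 0.0)
    return scale(offset, (radius - d) / d)

def prioritized_steering(forces):
    """Return the force of the first behavior that wants to do anything."""
    for force in forces:
        if length(force) > 1e-6:
            return force
    return (0.0, 0.0)

pos, vel = (0.0, 0.0), (0.0, 0.0)
force = prioritized_steering([
    avoid_obstacle(pos, obstacle=(5.0, 5.0)),   # highest priority
    seek(pos, vel, target=(10.0, 0.0)),         # fallback
])
print(force)                                    # (2.0, 0.0)
```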


SimHuman: A Platform for Real-Time Virtual Agents with Planning Capabilities

Vosinakis, Spyros, and Themis Panayiotopoulos. “SimHuman: A Platform for Real-Time Virtual Agents with Planning Capabilities.” Proc. Intelligent Virtual Agents: Third International Workshop, 2001, Madrid.

This article rehashed a lot of material discussed in previous readings. Mildly interesting was the section on dynamic actions. Their dynamic actions are things like path planning and obstacle avoidance, which are done via ray casting. More complex dynamic actions, like moving an object, use inverse kinematics. The one thing they don't discuss is how the character knows which animation sequences to create.

Week 10: 2006/10/30 – 2006/11/3

Believable Groups of Synthetic Characters

Prada, Rui, and Ana Paiva. “Believable Groups of Synthetic Characters.” Proc. of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems, 2005, Netherlands.

This article discusses modeling the behavior of groups of synthetic characters. Prada and Paiva have developed a model, the Synthetic Group Dynamics Model (SGD Model), to give a synthetic character awareness of the group it belongs to and of the other members of that group. This allows the character to build relations with other members and build proper social models. They've broken the model down into four levels. The individual level provides each character with an identity, skills, and a personality consisting of extroversion and agreeableness. The group level defines the structure of the group and the agents' attitudes toward it; the structure comes from members' social influence and social attraction to other members. The interactions level categorizes possible interactions and their impact on the group; there are socio-emotional interactions and instrumental interactions. Finally, the context level defines the environment the characters exist in. Prada and Paiva have developed a game, “Perfect Circle: the Quest for the Rainbow Pearl”, where the player acts as a member of a group of Alchemists looking for the rainbow pearl. They conducted an experiment with three different conditions: one where the SGD Model wasn't used, one using the SGD Model with a neutral group, and one using the SGD Model with a negative group. They found that the conditions using the SGD Model elicited more trust and identification from the player.
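As a way of keeping the four levels straight, here's a rough data-structure sketch of how they might be organized. The fields mentioned in the paper summary (extroversion, agreeableness, social influence, attraction, socio-emotional vs. instrumental interactions) are kept; everything else is my own guess at what the model might hold.

```python
# Rough, hypothetical layout of the SGD Model's four levels as data classes.

from dataclasses import dataclass, field

@dataclass
class Individual:                 # individual level: identity, skills, personality
    name: str
    skills: list
    extroversion: float           # 0..1
    agreeableness: float          # 0..1

@dataclass
class GroupMembership:            # group level: standing within the group
    member: Individual
    social_influence: float
    attraction: dict = field(default_factory=dict)  # attraction toward other members

@dataclass
class Interaction:                # interactions level: what happened and its impact
    kind: str                     # "socio-emotional" or "instrumental"
    actor: str
    target: str
    positive: bool                # did it help or hurt the group?

@dataclass
class Context:                    # context level: the environment the characters share
    location: str
    objects: list
```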


Behavior Selection and Learning for Synthetic Character

Kim, Yong-Duk, Jong-Hwan Kim, and Yong-Jae Kim. “Behavior Selection and Learning for Synthetic Character.” Congress on Evolutionary Computation 1 (2004): 898-903.

The authors present a different approach to behavior selection and learning for synthetic characters. Instead of the more common tree structure, they have devised behavior selection that chooses behaviors both probabilistically and deterministically, while learning occurs by adjusting the weights between inputs and behaviors. For learning to occur, behaviors are categorized by similarity; when the character does something right it is rewarded, and when it does something wrong it is penalized, and this reward or punishment changes the weights between the input and the behavior.
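Here's a small sketch of the general idea as I understand it: behaviors are chosen with probability proportional to input-to-behavior weights, and a reward or penalty nudges the weight linking the current input to the chosen behavior. The input and behavior names, learning rate, and update rule are my own illustration, not the authors' actual equations.

```python
# Illustrative probabilistic behavior selection with simple weight learning.

import random

weights = {                       # weights[input][behavior]
    "enemy_near": {"flee": 0.6, "attack": 0.4},
    "food_near":  {"eat": 0.8, "wander": 0.2},
}

def select_behavior(current_input):
    """Pick a behavior with probability proportional to its weight."""
    options = weights[current_input]
    behaviors, w = zip(*options.items())
    return random.choices(behaviors, weights=w, k=1)[0]

def learn(current_input, behavior, reward, rate=0.1):
    """Reinforce (or penalize) the input-to-behavior connection."""
    weights[current_input][behavior] = max(
        0.01, weights[current_input][behavior] + rate * reward)

choice = select_behavior("enemy_near")
learn("enemy_near", choice, reward=+1 if choice == "flee" else -1)
print(choice, weights["enemy_near"])
```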


Action-Selection in Hamsterdam: Lessons from Ethology

Blumberg, Bruce. “Action-Selection in Hamsterdam: Lessons from Ethology.” Proc. of the 3rd International Conference on the Simulation of Adaptive Behavior, 1994.

Blumberg aims to devise an action-selection system that allows for control of the temporal aspects of behavior. This allows a balance to be struck between dithering among activities and pursuing one goal to the detriment of others. The system also needs a loose hierarchical structure, a mechanism for sharing information, and a flexible means of modeling motivations. Blumberg takes lessons from ethology to design the action-selection system. He also created Hamsterdam, a toolkit for developing artificial animals in a 3D environment. In this environment there are Hamsters and Predators. The Hamsters act much like real hamsters, and the Predators are “generic”. At Siggraph '93 he used Alive to present Hamsterdam and allow users to interact in the environment.


Using an Ethologically-Inspired Model to Learn Apparent Temporal Causality for Planning in Synthetic Creatures

Burke, Robert, and Bruce Blumberg. “Using an Ethologically-Inspired Model to Learn Apparent Temporal Causality for Planning in Synthetic Creatures.” Proc. of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems, 2002, Bologna.

Built on previous work of the Synthetic Characters group, Burke and Blumberg have created a new cognitive architecture inspired by Scalar Expectancy Theory (SET) and Rate Estimation Theory (RET) as proposed by Gallistel and Gibbon. Burke and Blumberg's action-selection process is interesting. Creatures have the option of exploiting their knowledge of the world, exploring the world, or reacting to observed stimuli, but these actions need to appear relevant, persistent, and coherent. At the end of each timestep the creature should have chosen its action, the object of its attention, and the target object. The three action options are mutually exclusive for the current timestep.
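A very loose sketch of that per-timestep choice, just to fix the idea: each tick the creature commits to exactly one of react / exploit / explore and comes away with an (action, object-of-attention, target) triple. The ordering and probabilities here are invented for illustration, not the paper's mechanism.

```python
# Hypothetical per-timestep selection among react / exploit / explore.

import random

def choose_timestep_action(known_good_actions, novel_stimuli, salient_stimuli):
    if salient_stimuli:                          # react to an observed stimulus
        stim = salient_stimuli[0]
        return ("react", stim, stim)
    if known_good_actions and random.random() < 0.8:
        action, target = known_good_actions[0]   # exploit what it already knows
        return (action, target, target)
    if novel_stimuli:                            # otherwise explore something new
        stim = random.choice(novel_stimuli)
        return ("investigate", stim, stim)
    return ("idle", None, None)

print(choose_timestep_action([("press_lever", "lever")], ["red_ball"], []))
```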

Week 9: 2006/10/23 – 2006/10/27

Teaching Bayesian Behaviors to Video Game Characters

Le Hy, Ronan, et al. “Teaching Bayesian Behaviors to Video Game Characters.” Robotics and Autonomous Systems 47 (2004): 177-185.

Le Hy et al. discuss their work developing synthetic characters using the Unreal Engine and Bayesian modeling. Their Bayesian model consists of a description and a question. A description contains the relevant variables, a decomposition, and parametric forms for the joint distribution. Questions are parsed into searched, known, and free variables. They ran two tests. In the first, they specified the probability distributions by hand. While this allows for tuning, it doesn't meet the goal of a synthetic character that is capable of learning individual players' play styles. Their second test inferred the probabilities from actions occurring within the game. They accomplish this by having the synthetic character identify the player's low-level actions and evaluate them to adjust its own behavior. They found that in doing this they could create a synthetic character that would perform just as well, if not a little better, than the existing AI in Unreal Tournament.
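The second experiment's idea, as I understand it, boils down to: observe the player's low-level actions in each game state, build conditional probabilities from the counts, and sample the bot's own actions from them. This sketch is my simplification; the state and action names are made up, and the real model factors the joint distribution far more carefully than a single lookup table.

```python
# Simplified sketch of learning P(action | state) from observed player actions.

import random
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))   # counts[state][action]

def observe(state, player_action):
    """Record what the player did in this state."""
    counts[state][player_action] += 1

def act(state):
    """Sample an action from the distribution learned from the player."""
    table = counts[state]
    if not table:
        return "default_behavior"
    actions, n = zip(*table.items())
    return random.choices(actions, weights=n, k=1)[0]

observe("enemy_visible", "strafe")
observe("enemy_visible", "strafe")
observe("enemy_visible", "retreat")
print(act("enemy_visible"))
```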


Bringing VR to the Desktop: Are You Game?

DeLeon, Victor, and Robert Berry, Jr. “Bringing VR to the Desktop: Are You Game?” IEEE Multimedia 7.2 (2000): 68-72.

DeLeon and Berry discuss their work creating Virtual Reality Notre Dame (VRND). Their goal was to make it accessible via the internet, allowing people to visit from computers all over the world, as opposed to their Virtual Florida Everglades museum exhibit. Using the Unreal engine they modeled Notre Dame and identified parts and structures within it of interest to visitors. They also created a virtual tour guide to interact with visitors and give them information about Notre Dame. Unfortunately, the URL they listed for VRND is inactive.


AI Characters and Directors for Interactive Computer Games

Magerko, Brian, et al. “AI Characters and Directors for Interactive Computer Games.” Proc. of 2004 Innovative Applications of Artificial Intelligence Conference, 2004, San Jose.

Magerko et al. discuss interactive computer games from the viewpoint of interactive storytelling. To accomplish this they developed a story director and AI characters as actors. The actors have physical drives and respond to many different physiological effects. They also sense their environment, and are designed to take direction from the story director so the plot stays on course while they are still allowed to pursue their own goals. The story director keeps track of plot points and the pre- and postconditions needed for those plot points to occur, and ensures that they occur at the right time. The story director also helps keep the synthetic characters on task in executing the plot while still allowing them to pursue their own goals. In their game, Haunt 2, you play a ghost whose end goal is to discover who killed you. But being a ghost, you cannot directly interact with the characters or objects; instead you need to find ways of getting the characters to intervene for you. One method is to temporarily possess one of them, as long as they're not too scared. While doing so you can influence their thoughts and actions, but you have to be careful so they don't become scared and expel you.
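Here's a minimal, hypothetical sketch of the director idea: each plot point has preconditions that must hold in the world state before it can fire, and postconditions applied once it does. Haunt 2's actual director is far richer (it also directs the actors to bring missing preconditions about); the plot-point names and states here are my own.

```python
# Toy story director: fire plot points whose preconditions hold.

plot_points = [
    {"name": "find_clue",
     "pre": {"player_in_library": True},
     "post": {"clue_found": True}},
    {"name": "confront_killer",
     "pre": {"clue_found": True},
     "post": {"mystery_solved": True}},
]

def step_director(world_state):
    """Fire the next plot point whose preconditions are satisfied."""
    for point in plot_points:
        done = all(world_state.get(k) == v for k, v in point["post"].items())
        if done:
            continue                     # already happened
        if all(world_state.get(k) == v for k, v in point["pre"].items()):
            world_state.update(point["post"])
            return point["name"]
    return None   # nothing ready yet: direct actors toward missing preconditions

state = {"player_in_library": True}
print(step_director(state))   # "find_clue"
print(step_director(state))   # "confront_killer"
```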


Interacting with Virtual Characters in Interactive Storytelling

Cavazza, Marc, Fred Charles, and Steven J. Mead. “Interacting with Virtual Characters in Interactive Storytelling.” Proc. of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1, 2002, Bologna.

This article also discusses interactive storytelling, though it is not as well explained as the previous article, or, in my opinion, as interactive for the user. They've developed their synthetic characters with Hierarchical Task Networks (HTNs) that represent each character's contribution to the plot. The character traverses its network to progress its part of the plot; when it comes to a solution node it cannot execute, it backs up and tries another solution node attached to the same goal. The user can interact by moving objects with narrative significance, or by giving natural-language instructions or suggestions to the synthetic characters. The user isn't actually part of the plot.
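That backtracking behavior is simple enough to sketch: a goal has several candidate solution nodes, and if the one being tried can't be executed the character falls back to the next. The task names and the tiny network below are invented for illustration, not taken from the paper.

```python
# Toy HTN-style goal with alternative solution nodes and backtracking.

network = {
    "get_key": [["ask_guard_for_key"], ["steal_key"], ["break_lock"]],
}

def executable(task, world):
    """Primitive check; stands in for testing a task's preconditions in-game."""
    return task not in world.get("blocked", set())

def solve(goal, world):
    """Try each solution node for the goal in order, backing up on failure."""
    for method in network.get(goal, []):
        if all(executable(task, world) for task in method):
            return method          # this decomposition can be executed
    return None                    # every alternative failed

world = {"blocked": {"ask_guard_for_key"}}
print(solve("get_key", world))     # ['steal_key']
```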