Friday, September 29, 2006

Just too White and Nerdy

So I just saw the video for Weird Al Yankovic's new song White and Nerdy and I am sad to say that too much of it applies to me, and the stuff that doesn't... well, I get all the "jokes". Anyway, you should totally check it out, it's hilarious. And be sure to share it with all of your friends, white and nerdy or not.

Week 5

Week 5: 2006/9/25 – 2006/9/29

New Challenges for Character-Based AI for Games

Isla, Damián, and Bruce Blumberg. “New Challenges for Character-Based AI for Games.” Proc. of the AAAI Spring Symposium on AI and Interactive Entertainment, March 2002. Palo Alto, CA.

In this article Isla and Blumberg suggest the next steps character-based AI needs to take in order to advance in games. First they discuss perception: what the character can and should be able to sense, and its ability to recognize patterns. They suggest separating the actual state of the virtual world from the character's perceptions of that state. Another key is the character's ability to anticipate events based on previously noted patterns, including anticipating the player's actions as well as other characters'. Isla and Blumberg point out that the ability of the character to predict incorrectly would add an additional level of enlightenment. They also note, like many others, that emotion modeling is important and should play a major role in the behavior and decisions the character makes. Finally they discuss learning and memory. They specifically suggest episodic memory, where specific examples of events are stored as opposed to cause-and-effect statistics; their reasoning is that this will increase learning speed.
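
To make the actual-state-versus-perceived-state separation and the episodic-memory idea a bit more concrete, here's a minimal Python sketch of my own; the class and field names are mine, not Isla and Blumberg's, and it's only meant as a toy illustration.

```python
# Toy sketch (my own names, not from the paper): beliefs kept separate from the
# actual world state, plus episodic memories used to anticipate outcomes.
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One remembered event: who did what, where, and what followed."""
    subject: str
    action: str
    location: str
    outcome: str

@dataclass
class Character:
    beliefs: dict = field(default_factory=dict)    # perceived world state
    episodes: list = field(default_factory=list)   # episodic memory

    def perceive(self, world_state: dict, visible: set):
        # Only copy facts the character can actually sense; everything else
        # stays stale or unknown in its beliefs.
        for key in visible:
            if key in world_state:
                self.beliefs[key] = world_state[key]

    def remember(self, episode: Episode):
        self.episodes.append(episode)

    def anticipate(self, subject: str, action: str):
        # Predict an outcome by recalling similar past episodes; a wrong guess
        # here is what would produce believable surprise.
        matches = [e.outcome for e in self.episodes
                   if e.subject == subject and e.action == action]
        return max(set(matches), key=matches.count) if matches else None

world = {"player_pos": (3, 4), "door_open": True, "key_pos": (9, 9)}
npc = Character()
npc.perceive(world, visible={"player_pos", "door_open"})   # can't see the key
npc.remember(Episode("player", "opens_door", "hall", "enters_room"))
print(npc.beliefs)                              # no 'key_pos' entry
print(npc.anticipate("player", "opens_door"))   # 'enters_room'
```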


A Layered Brain Architecture for Synthetic Creatures

Isla, Damián, et al. “A Layered Brain Architecture for Synthetic Creatures.” Proc. of the International Joint Conference on Artificial Intelligence, 2001. Seattle, Washington.

Isla et al. describe C4, a brain architecture designed after biological systems. The architecture is implemented as a collection of separate systems that “communicate through an internal blackboard”. It contains a world model that functions as an event distribution system: the world model distributes all events to all creatures in the world, and it is up to each creature's sensory system to filter out everything except the sensory information it can honestly sense. A perception system, which categorizes sensory input, and a working memory, which keeps a history of past sensory input, have also been implemented. With a working memory in place, prediction and surprise can also be implemented. Surprise has yet to be, but prediction has, and it acts not only as a way of anticipating future events but also as a means of maintaining a view of the present state. Isla et al. have also added an action system that decides on and selects the appropriate action, a navigation system to direct the creature to where the selected action needs to take place, and a motor system (animation) to get it there.
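
Here's a rough Python sketch of how I picture the world-model-as-event-distributor and sensory honesty working; none of these class or method names come from the actual C4 code, it's just my reading of the idea.

```python
# Toy sketch: the world model broadcasts every event to every creature, and
# each creature's sensory system filters out what it couldn't honestly sense.
class Event:
    def __init__(self, kind, position, data=None):
        self.kind, self.position, self.data = kind, position, data or {}

class Creature:
    def __init__(self, name, position, hearing_range=10.0):
        self.name, self.position, self.hearing_range = name, position, hearing_range
        self.blackboard = {"percepts": [], "working_memory": []}

    def sense(self, event):
        # Sensory honesty: only accept events within range of this creature.
        dx = event.position[0] - self.position[0]
        dy = event.position[1] - self.position[1]
        if (dx * dx + dy * dy) ** 0.5 <= self.hearing_range:
            self.blackboard["percepts"].append(event)
            self.blackboard["working_memory"].append(event.kind)

class WorldModel:
    def __init__(self):
        self.creatures = []

    def add(self, creature):
        self.creatures.append(creature)

    def post(self, event):
        # Distribute to everyone; filtering is each creature's own job.
        for creature in self.creatures:
            creature.sense(event)

world = WorldModel()
duncan = Creature("Duncan", position=(0, 0), hearing_range=5.0)
world.add(duncan)
world.post(Event("whistle", position=(3, 0)))    # heard
world.post(Event("whistle", position=(40, 0)))   # too far away, filtered out
print(duncan.blackboard["working_memory"])       # ['whistle']
```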

Isla et al. have already implemented two projects using C4. The first is sheep|dog, in which the player acts as a shepherd who interacts with Duncan, a virtual dog, using vocal commands to herd sheep. The other is Clicker, in which the player trains Duncan using the clicker training technique used to train real dogs. In Isla's article “New Challenges for Character-Based AI for Games” (2002) he describes many of the same systems that have been successfully implemented in C4. Most notable is the differentiation between the world state and the creature/character's perceived view of the world state (sensory honesty).


Motivation Driven Learning for Interactive Synthetic Characters

Yoon, Song-Yee, Bruce M. Blumberg, and Gerald E. Schneider. “Motivation Driven Learning for Interactive Synthetic Characters.” Proc. of the Fourth International Conference on Autonomous Agents, 2000. Barcelona, Spain.

The authors describe synthetic characters as “3D virtual creatures that are intelligent enough to do and express the right things in a particular situation or scenario.” They have the ability to adapt to their environment by adapting their behaviors and preferences. Yoon, Blumberg, and Schneider have implemented a creature kernel that is responsible for the actions of a synthetic character. The creature kernel comprises four systems: perception, motivation, behavior, and motor. For the character to function successfully in the virtual world, interconnected communication between the four systems is established. Their primary focus is the motivation-driven learning system, which they break down into three methods of learning. Organizational learning updates weights and connections (preference learning) in the networks contained within the creature kernel as well as the overall structure of the network (strategy learning). Concept learning pertains to the beliefs a character has about objects and the world; characters are “born” with some built-in concepts. Affective tag formation is the tendency to choose one action over another based on an emotional memory.
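
As a way of wrapping my head around the creature kernel, here's a toy Python sketch of four interconnected systems plus a very crude version of preference learning; all of the names and numbers are my own, not from the paper.

```python
# Toy creature kernel: perception, motivation, behavior, and motor systems
# wired together, with preference learning that nudges behavior weights.
class CreatureKernel:
    def __init__(self):
        self.motivations = {"hunger": 0.6, "play": 0.4}
        # behavior -> (motivation it serves, learned preference weight)
        self.behaviors = {"seek_food": ("hunger", 1.0), "chase_ball": ("play", 1.0)}

    def perceive(self, stimuli):
        # Perception system: keep only stimuli relevant to known behaviors.
        return [s for s in stimuli if s in ("food", "ball")]

    def select_behavior(self, percepts):
        # Behavior system: score each behavior by motivation * learned weight,
        # gated on whether its target is actually perceived.
        def score(name):
            motive, weight = self.behaviors[name]
            target = "food" if name == "seek_food" else "ball"
            return self.motivations[motive] * weight * (target in percepts)
        return max(self.behaviors, key=score)

    def act(self, behavior):
        # Motor system stand-in.
        print(f"performing {behavior}")

    def learn(self, behavior, reward):
        # Preference learning: strengthen behaviors that paid off.
        motive, weight = self.behaviors[behavior]
        self.behaviors[behavior] = (motive, weight + 0.1 * reward)

duncan = CreatureKernel()
percepts = duncan.perceive(["food", "ball", "rock"])
choice = duncan.select_behavior(percepts)
duncan.act(choice)          # performing seek_food (hunger dominates)
duncan.learn(choice, reward=1.0)
```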


Imitation as a First Step to Social Learning in Synthetic Characters: A Graph-based Approach

Buchsbaum, D., and B. Blumberg. “Imitation as a First Step to Social Learning in Synthetic Characters: A Graph-based Approach.” Proc. of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2005. Los Angeles, California.

Buchsbaum and Blumberg discuss their work on a system that allows animated characters to observe and imitate the movements of other characters. They explain what they've accomplished with the implementation of Max and Morris Mouse, two anthropomorphic characters. Max has the ability to imitate and deduce the reasoning behind Morris' actions. Max is pre-equipped with a set of poses, and movements that transition between poses, forming a posegraph. Max also has synthetic vision that takes as input the graphical rendering of the world from his perspective. The objects he perceives are uniquely color coded; these objects include Morris' body parts. Max uses Morris' root node as a point of reference for the observed movements, and applies them to himself by searching his posegraph for the best-matching poses. Max can also reason about Morris' motivations: he has a motivationally driven action system that gives him motivations for his own actions, and, much as he matches poses, he searches through his action tuples to find motivations for Morris' actions based on his own experiences.
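
The pose-matching step is essentially a nearest-neighbor search over Max's own posegraph. Here's a simplified Python sketch, assuming poses are just joint-angle vectors, which is a big simplification on my part; the real system works on full skeletal configurations relative to the root node, and all names below are mine.

```python
# Toy version of "find the closest pose in my own posegraph".
import math

def pose_distance(a, b):
    """Euclidean distance between two joint-angle vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_pose(observed_pose, posegraph):
    """Return the name of Max's own pose that best matches what he saw."""
    return min(posegraph, key=lambda name: pose_distance(observed_pose, posegraph[name]))

# Max's pre-authored poses (joint angles in degrees, purely illustrative).
max_posegraph = {
    "stand":  [0.0, 0.0, 0.0, 0.0],
    "crouch": [45.0, 40.0, 10.0, 5.0],
    "wave":   [0.0, 0.0, 90.0, 20.0],
}

# Pose extracted from Max's synthetic vision of Morris (relative to Morris' root).
morris_pose = [5.0, 2.0, 85.0, 25.0]
print(closest_pose(morris_pose, max_posegraph))   # 'wave'
```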

Sunday, September 24, 2006

Week 4

Week 4: 2006/9/18 - 2006/9/22

Anticipatory AI and Compelling Characters

Blumberg, Bruce. “Anticipatory AI and Compelling Characters.” Gamasutra 3 Feb. 2006. 15 Sept. 2006.

Blumberg discusses what gives synthetic characters a “sense of an inner life.” He defines this sense of inner life in terms of low-level motion (eye movements, etc.) and behavior, which give the observer insight into what the character is going to do next and how it feels about it. To achieve this he suggests anticipatory AI that supports anticipatory behaviors, and proposes three ways to do it. First is making the character's perceptions perceivable, meaning having it orient to the sensory inputs it's receiving, such as sounds or smells. Next is making expectations perceivable: an anticipatory action, the action itself, the character's expectation of how the action will play out, and the end of the action. These actions communicate to the observer what the character is going to do, its expectations about what it's about to do, and finally how the character feels about the outcome of the action. Lastly, Blumberg talks about making upcoming changes in motivational states perceivable: the actions the character performs to communicate to the observer that it is about to change its motivations and what the observer can expect. Where most other AI systems focus more on reaction, Blumberg proposes anticipation to make characters seem more sentient.
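
A tiny sketch of how those phases might be staged for a single action, going off my reading of the article; the phase wording and the reactions are my own invention, not Blumberg's terminology.

```python
# Toy staging of an anticipatory action: telegraph what's coming, do it, then
# react to whether the outcome matched the character's expectation.
def perform_anticipatory_action(action, expected_outcome, actual_outcome):
    yield f"orient toward target of '{action}'"          # make perception perceivable
    yield f"anticipatory cue: wind up for '{action}'"    # telegraph the upcoming action
    yield f"perform '{action}'"                          # the action itself
    if actual_outcome == expected_outcome:
        yield "reaction: satisfied (expectation met)"
    else:
        yield f"reaction: surprised (expected {expected_outcome}, got {actual_outcome})"

for step in perform_anticipatory_action("pounce", expected_outcome="catch", actual_outcome="miss"):
    print(step)
```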


The EMOTE Model for Effort and Shape

Chi, Diane, et al. “The EMOTE Model for Effort and Shape.” Proc. of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, New York, pp. 173-182.

This article primarily covers how EMOTE (Expressive MOTion Engine) works: the underlying mechanics, equations, and decision making the engine goes through to produce movement. What I find more interesting is what EMOTE actually does and how they chose to design it. It is based on the Effort and Shape components of Laban Movement Analysis (LMA), which also includes Body, Space and Relationship. Chi et al. devised a system that uses Effort and Shape parameters, which can be specified independently, to modify different parts of the body. Key poses are used as specifications for the movements of a gesture; they can be extracted from motion libraries, procedurally generated motions, or motions captured from live performers. This system generates more natural synthetic gestures that should correspond to the character's emotive state.
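
EMOTE's actual math is far richer than this, but here's a toy Python sketch of the general idea that the same key poses can read very differently depending on how the motion between them is shaped. The single "suddenness" parameter is my own stand-in, not an LMA or EMOTE term.

```python
# Toy interpolation between two key poses, with one parameter reshaping the
# timing curve: positive values front-load the motion (quick, abrupt),
# negative values ease it in (slow, sustained).
def interpolate_pose(pose_a, pose_b, t, suddenness=0.0):
    """Blend two joint-angle lists at time t in [0, 1]."""
    exponent = 1.0 / (1.0 + suddenness) if suddenness >= 0 else 1.0 - suddenness
    shaped_t = t ** exponent
    return [a + (b - a) * shaped_t for a, b in zip(pose_a, pose_b)]

reach_start = [0.0, 0.0, 0.0]
reach_end   = [90.0, 45.0, 10.0]
for t in (0.25, 0.5, 0.75):
    print("sustained:", interpolate_pose(reach_start, reach_end, t, suddenness=-1.0))
    print("sudden:   ", interpolate_pose(reach_start, reach_end, t, suddenness=2.0))
```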


Player Character Design Facilitating Emotional Depth in MMORPGs

Eladhari, Mirjam, and Craig Lindley. “Player Character Design Facilitating Emotional Depth in MMORPGs.” Proc. of the Digital Games Research Conference 2003, University of Utrecht, 4-6 November 2003.

This article discusses how to develop emotion in a player character (PC), the character a user plays in a video game. As an ongoing research project, Eladhari and Lindley have used the Purgatory Engine to create the Ouroboros project, a dramatic role-playing game in which they are experimenting with making the PC a semi-autonomous agent. The things I found most interesting about their implementation are the contextual gesture system and the mind system. Contextual gestures are based on the character's state of mind as well as other characters and the world. The mind system provides the character's personality, emotions, moods, and sentiments; these aspects act independently but influence each other in determining the character's goals and gestures.


Manipulation of Non-Verbal Interaction Style And Demographic Embodiment to Increase Anthropomorphic Computer Character Credibility

Cowell, Andrew J., and Kay M. Stanney. “Manipulation of Non-Verbal Interaction Style And Demographic Embodiment to Increase Anthropomorphic Computer Character Credibility.” International Journal of Human-Computer Studies 62(2) (2005): 281-306.

Cowell and Stanney discuss non-verbal behaviors in synthetic characters and how to make users judge the characters as credible. They recognize that the majority of non-verbal behavior research simply categorizes behaviors rather than studying how they interact with each other. Branching off of De Meuse's taxonomy of non-verbal cues (which distinguishes behavioral actions from those that are not, and how much control one has over a cue), Cowell and Stanney determine preferences for a character's personal appearance and demographics. They further break non-verbal behaviors down into two ranks: the first consists of facial expression, eye contact, and paralanguage; the second includes gestures and posture. While both ranks are important, they found that the first was more crucial to creating a comfortable, trustworthy character.

Thursday, September 21, 2006

Lit Review Proposal

Synthetic characters are in more places than one might think. On websites, they function primarily as a more comfortable means of site navigation for users by providing an illusion of human-to-human interaction. In video games, they act as non-player characters, characters whose actions are not directly controlled by the player, to give the game world an illusion of actuality. Regardless of their interactive capabilities, they are still superficial illusions. And in order to create more believable synthetic characters, thus deepening the illusion, we must impart our social norms and expectations onto them. By doing so, we provide ourselves with opportunities to better identify with these created characters.

The critical component in developing any believable synthetic character is personality, which defines his or her motivations, which in turn drive his or her actions. Currently, many models and frameworks provide psychologists, and others interested in human behavior, a means to analyze various personality types. For instance, in a value-based framework a synthetic character's sensory perceptions are evaluated against predefined emotions, motivations and prior perceptions; this evaluation results in the character performing an action. However, these models are rudimentary: they lack the ability for characters to remember previous sensory input, which undermines the believability of their actions.

But because these models and frameworks can ultimately produce more believable character interactions (with minor modifications), they can be applied in video games and virtual reality therapy. The creation of more immersive characters, with a broad range of emotionally structured motivations, engenders a more immersive virtual environment. Such a world enables users to project themselves into it, giving them something much closer to human-to-human interaction with these synthetic characters.


Because I have nothing better to post...

So, seeing that I'm a bit of an uber nerdy grad student and have nothing better to blog about, I've decided I'm going to start posting the annotations I'm doing for one of my classes here as well as on our class wiki. I figure this way I might, just might, get some feedback from someone about what I'm researching. The end goal of the class is to write a literature review on a topic pertaining to HCI and cognitive psychology. So the topic I've decided to look into is deriving and displaying artificial emotion in synthetic characters. This first entry is going to be three weeks worth, but from here on out it should be one week per entry. (Each week has four annotations.) Feedback and comments are more than welcome. Enjoy!


Week 1: 2006/8/28 - 2006/9/1

The Uncanny Valley

Mori, Masahiro. "The Uncanny Valley." Energy 7(4) (1970): 33-5. Trans. Karl F. MacDorman and Takashi Minato.

In this article Mori presents his hypothesis that as robots become more human in appearance they also become more familiar to humans, but only up to a certain point: when the likeness is almost human but not quite, familiarity drops off drastically and becomes negative. Mori calls this the uncanny valley. When movement is added to the robot, the distinction between positive and negative familiarity becomes even more pronounced. He illustrates this with a graph of two peaks with a valley in between: the peaks represent high familiarity and the valley represents the uncanny valley, strange and unfamiliar, as something gains a more human likeness. Mori suggests that robot designers not shoot for the second peak (the most human-like) but for the first: establish familiarity while avoiding the uncanny valley.


Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It?

MacDorman, Karl F. "Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It?" Cognitive Science Society (2005).

This article discusses how androids, specifically very human-looking androids, can elicit a subconscious fear of death. MacDorman explains an experiment he conducted, using terror management theory and the mortality salience hypothesis as theories for why uncanny androids might provoke fear of death. He showed two groups a set of pictures, one set including an image of an android resembling a human woman (experimental), the other an image of an Asian woman (control), and then asked them a series of worldview questions and word-completion problems. The results were inconclusive, largely because of the lack of a statistically significant difference between the two groups. MacDorman then conducted interviews over instant messaging with some of the participants after the questionnaire concluded. He claims the interviews show more conclusively that the picture of the android creates a subconscious fear of death, though it should be noted that the questions asked were leading.


Synthetic Social Interactions

Romano, Daniela M. "Synthetic Social Interactions." Proc. of Human-Animated Characters Interaction, 2005, Edinburgh. Edinburgh: Napier University, 2005.

Romano discusses the different aspects of what makes a virtual character intriguing to people. These synthetic social interactions are becoming more mainstream with the use of interactive avatars on websites that help direct users. The key things she points out that make these characters believable, and able to evoke an emotional response, are the ability to process and respond to text or speech input, and the ability to display a range of emotions and personality traits through face and body movements. To drive these behaviors she proposes different personality models such as the five-factor model (FFM) and the Ortony, Clore and Collins (OCC) model. She also indicates that for an avatar to be truly believable it needs to possess some sort of social cognition.


Virtual people: capturing human models to populate virtual worlds

Hilton, Adrian, et al. "Virtual people: capturing human models to populate virtual worlds." Proc. of Computer Animation, 1999, Geneva.

This article describes a technique for 3D human model reconstruction. It is designed to be a low-cost and automatic process for use in different types of virtual environments. To summarize, four photographs of a person are taken in front of a blue screen: front, back, left and right. These photos are then analyzed by an algorithm that determines how a predefined human model should be modified to represent the person in the photographs. Texture maps are also created from the photos and applied to the model, and the joint positions are moved to allow for more accurate kinematics. The current limitations of their work are also discussed briefly. For instance, they currently lack facial feature point labeling and precise kinematic structure reconstruction. In the future this might be a better way of creating human models without falling into the uncanny valley.


Week 2: 2006/9/4 - 2006/9/8

On making believable emotional agents believable

Ortony, Andrew. "On making believable emotional agents believable." In Trappl, R., and P. Petta (eds.), Emotions in Humans and Artifacts. Cambridge: MIT Press, 2003.

In 1988, Ortony, Clore and Collins developed a framework with 22 emotional categories, known as the OCC model, for developing believable emotional agents. In this article Ortony discusses possible adaptations to the model to make it simpler and more accurate. He proposes reducing the number of emotional categories to five positive and five negative categories, which creates consistency in the agent by evoking similar emotions in similar situations. He then discusses the intensity of the emotion, which is expressed in three types of emotion response tendencies: expressive, information-processing, and coping. Together these create a personality for what will hopefully be a believable agent. Something lacking is a history, or memory: the agent will react to a particular situation with the same intensity it did a few moments earlier, and will continue to react the same way no matter how often the situation recurs. I think this is something Ortony fails to address in his discussion of intensity.
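
Here's a toy sketch of the simplified scheme as I read it, mostly to show the limitation above: with no history, the same appraisal always produces the same response. The category names and appraisal features are placeholders of mine, not Ortony's exact reduced list.

```python
# Toy appraisal over a few paired positive/negative categories (placeholders).
POSITIVE = {"actual": "joy", "prospect": "hope", "other_agent": "liking"}
NEGATIVE = {"actual": "distress", "prospect": "fear", "other_agent": "disliking"}

def appraise(desirable: bool, kind: str, goal_relevance: float):
    """kind is 'actual', 'prospect', or 'other_agent'; returns (category, intensity)."""
    table = POSITIVE if desirable else NEGATIVE
    return table[kind], round(goal_relevance, 2)

# With no memory, the same undesirable prospect produces the same fear at the
# same intensity, no matter how many times it has just happened.
for _ in range(3):
    print(appraise(desirable=False, kind="prospect", goal_relevance=0.8))
```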


Integrating the OCC Model of Emotions in Embodied Characters

Bartneck, Christoph. "Integrating the OCC Model of Emotions in Embodied Characters." Proc. of Virtual Conversational Characters: Applications, Methods, and Research Challenges, 2002, Melbourne.

In this article Bartneck discusses the OCC model and its application to agents. He lists five phases the agent uses to process an event: classification, quantification, interaction, mapping and expression. In the classification phase the agent determines whether the event is good or bad; to do this, Bartneck says it needs knowledge of its world, including standards, goals and expectations. The quantification phase determines the intensity of the emotion displayed by the agent. Here is where Bartneck finds fault in the OCC model, recognizing that the agent needs a history function that keeps track of past events and allows for recalculation of emotion. The third phase, interaction, isn't described in the OCC model either; it determines how the current event and its emotional response will interact with the emotions the agent already has. During the mapping phase the emotional response is mapped to one of the OCC emotion categories. Finally, in the expression phase the determined response is expressed through the available means. Bartneck's conclusion is that the OCC model is a good starting place for developing a believable agent, but that history and emotion-interaction functions need to be integrated into the model for agents to be more effective.
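
To see how the five phases chain together, and why the history function matters, here's a little Python sketch of my own; the function bodies are gross simplifications, not Bartneck's implementation.

```python
# Toy five-phase pipeline with a history function: repeated events lose intensity.
history = {}

def classify(event, goals):
    """Classification: is this event good or bad for the agent's goals?"""
    return "good" if event in goals["desired"] else "bad"

def quantify(event, base_intensity):
    """Quantification: dampen intensity for events we've seen before."""
    seen = history.get(event, 0)
    history[event] = seen + 1
    return base_intensity / (1 + seen)

def interact(new_emotion, current_mood):
    """Interaction: blend the new response with the agent's current state."""
    return 0.7 * new_emotion + 0.3 * current_mood

def map_to_category(valence):
    """Mapping: pick an OCC-style category (crude stand-in)."""
    return "joy" if valence == "good" else "distress"

def express(category, intensity):
    """Expression: hand off to whatever output channel is available."""
    print(f"express {category} at intensity {intensity:.2f}")

def process_event(event, goals, current_mood=0.0):
    valence = classify(event, goals)
    intensity = quantify(event, base_intensity=1.0)
    intensity = interact(intensity, current_mood)
    express(map_to_category(valence), intensity)

goals = {"desired": {"receive_gift"}}
for _ in range(3):                      # same gift three times: intensity decays
    process_event("receive_gift", goals)
```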


The Five-Factor Model: Emergence of a Taxonomic Model for Personality Psychology

Popkins, Nathan C. "The Five-Factor Model: Emergence of a Taxonomic Model for Personality Psychology." 1998. Personality Research. 1 Sept. 2006

Popkins evaluates the five-factor model on compatibility, taxonomy, application, originality and universality to test its potential as a theory. He starts by describing the five-factor model (FFM) of personality; the five factors are extroversion-introversion, neuroticism, agreeableness, conscientiousness and openness. Popkins' conclusions and arguments are somewhat contradictory. The FFM is compatible with other factor models because it stems from Cattell's sixteen-factor model, much like the PEN model does. It is quantifiable and categorical but leaves something to be desired in many situations due to its broad overview. On one hand he cites McAdams stating that “Personality theories do more than specify traits,” which would mean FFM is not really a valid theory, but at the same time he says it can be used effectively in application, specifically in academia. Popkins then goes on to claim that FFM is an original model, even though he stated earlier that it stems from the sixteen-factor model; he justifies this by stating that these form a family of models that exists autonomously from other major psychological theories. For the model to be universal it should be applicable cross-culturally and in any situation, but as he previously stated, and states again, FFM doesn't anticipate behavior, though he notes it does hold up well cross-culturally. He finally concludes that FFM is more a taxonomy than a theory.


The Uncanny Valley: does it exist?

Brenton, Harry, et al. "The Uncanny Valley: does it exist?" Proc. of Human-Animated Characters Interaction, 2005, Edinburgh. Edinburgh: Napier University, 2005.

Brenton et al. discuss whether the Uncanny Valley as described by Mori actually exists. There has been little research into whether it is something quantifiable; their goal is to develop hypotheses that will provoke later research and help determine if the uncanny response is measurable. Brenton et al. present four such hypotheses. The first concerns measuring presence in the uncanny using a theory presented by Slater. The second is that perceptual cues and realism suggest a character is a person, so that in highly realistic characters a mismatch between behavior or motion and the graphical representation can be unsettling. This is especially noticeable in their third hypothesis, which pertains to perceptual cues in the eyes and face of a character. Their last hypothesis looks at how the uncanny can be perceived differently over a period of time and across different cultures. Together these hypotheses will help further investigate the Uncanny Valley and determine if the uncanny can be measured experimentally.


Week 3: 2006/9/11 - 2006/9/15

The Art and Science of Synthetic Character Design

Kline, Christopher, and Bruce Blumberg. "The Art and Science of Synthetic Character Design." Proc. of the AISB Symposium on AI and Creativity in Entertainment and Visual Art, 1999, Edinburgh.

This article discusses the creation of synthetic characters. The elements Kline and Blumberg see as key to making a believable character, and to meeting what we expect from one, are motivational drive, emotion, perception and action selection. To implement these subsystems they have devised a framework that encompasses all of them instead of focusing on each individually. The framework is value-based, meaning input is given a numeric value which can then be evaluated to derive the action the character should take. The sensor primitive is the character's input device: it takes the world and other sensors' output as input. The transducer primitive transforms the sensor data, which then moves to the accumulator primitive where it is combined and sent to a semantic group that applies a behavior. Kline and Blumberg also give examples of how this value-based framework can be applied to each of their subsystems. The interesting part of this framework is that it takes into account all the inputs/outputs the character is receiving, with the potential of prioritizing them as well as keeping a sort of history of what has already happened.
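
Here's how I picture the sensor → transducer → accumulator → behavior chain, as a toy Python sketch; the class names mirror the primitives described above, but the code and the numbers are entirely my own.

```python
# Toy value-based pipeline: sensors turn the world into numbers, transducers
# reshape them, accumulators sum them over time, and the behavior with the
# highest accumulated value wins.
class Sensor:
    def __init__(self, key):
        self.key = key
    def read(self, world):
        return float(world.get(self.key, 0.0))

class Transducer:
    def __init__(self, gain=1.0):
        self.gain = gain
    def process(self, value):
        return self.gain * value

class Accumulator:
    def __init__(self, decay=0.5):
        self.value, self.decay = 0.0, decay
    def update(self, value):
        # Keep a decaying history of past values, i.e. a crude memory.
        self.value = self.decay * self.value + value
        return self.value

# One sensor/transducer/accumulator chain per candidate behavior.
chains = {
    "flee": (Sensor("threat_level"), Transducer(gain=2.0), Accumulator()),
    "eat":  (Sensor("food_nearby"),  Transducer(gain=1.0), Accumulator()),
}

def choose_behavior(world):
    values = {}
    for behavior, (sensor, transducer, accumulator) in chains.items():
        values[behavior] = accumulator.update(transducer.process(sensor.read(world)))
    return max(values, key=values.get)

print(choose_behavior({"threat_level": 0.2, "food_nearby": 0.9}))  # eat
print(choose_behavior({"threat_level": 0.9, "food_nearby": 0.9}))  # flee
```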


Modeling Emotions and Other Motivations in Synthetic Agents

Velásquez, Juan D. “Modeling Emotions and Other Motivations in Synthetic Agents.” Proc. of the AAAI Conference, 1997, Providence.

Velásquez discusses the Cathexis model, a “... distributed model for the generation of emotions and their influence in the behavior of autonomous agents,” which takes into account emotions, moods, and temperaments. He groups emotions into specific emotion families and represents them with proto-specialists, comparable to Minsky's. Each is fed by four groups of sensors and contains activation and saturation thresholds, emotion duration and intensity, and an emotion decay function. Proto-specialists run in parallel, meaning more than one can be active at a time. Velásquez also differentiates between moods and emotions, moods being low levels of arousal while emotions consist of high levels. He briefly discusses the behavior system, a network of behaviors that decides what behavior is appropriate given the current emotional state. The Cathexis model incorporates aspects of other models to create a more encompassing and dynamic model for creating synthetic characters. Velásquez also recognizes the current lack of memory and learning, and of other influences of emotions on behavior.
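
A toy proto-specialist in that spirit, sketched in Python: sensor inputs raise an emotion's intensity, it only reads as an emotion above an activation threshold (below that it's more like a mood), it can't exceed a saturation level, and it decays every tick. All parameter values and names are invented by me.

```python
# Toy proto-specialist with activation/saturation thresholds and decay.
class ProtoSpecialist:
    def __init__(self, name, activation=0.3, saturation=1.0, decay_rate=0.1):
        self.name = name
        self.activation = activation
        self.saturation = saturation
        self.decay_rate = decay_rate
        self.intensity = 0.0

    def update(self, sensor_inputs):
        """sensor_inputs: contributions from the four sensor groups."""
        self.intensity = min(self.saturation, self.intensity + sum(sensor_inputs))
        self.intensity = max(0.0, self.intensity - self.decay_rate)   # decay per tick
        return self.intensity

    def is_active(self):
        return self.intensity >= self.activation   # below this it reads as a mood

emotions = [ProtoSpecialist("fear"), ProtoSpecialist("joy")]
stimuli = {"fear": [0.4, 0.2, 0.0, 0.0], "joy": [0.1, 0.0, 0.0, 0.0]}
for e in emotions:                    # proto-specialists updated side by side
    e.update(stimuli[e.name])
    state = "emotion" if e.is_active() else "mood-level arousal"
    print(f"{e.name}: {e.intensity:.2f} ({state})")
```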


Artificial Emotion: Simulating Mood and Personality

Wilson, Ian. "Artificial Emotion: Simulating Mood and Personality." Gamasutra 7 May 1999. 15 Sept. 2006

Emotion in characters is key to creating a believable environment. Wilson describes personality in three layers of emotion. The top layer consists of momentary emotions, behaviors briefly displayed in response to events. The middle layer consists of moods, which have a cumulative effect and are more prolonged. The final layer is personality, which is more stable and is expressed when no momentary emotion or mood supersedes it. Other important points Wilson brings up are how emotions serve social functions, making the characters we encounter engaging, and the importance of actions and gestures, meaning hand, body and facial movements. These movements add a lot to characters' emotions and personalities, making us want to find out more about them and why they are moving the way they are.
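
Here's a quick Python sketch of the layering as I understand it: momentary emotions mask moods, moods mask the personality baseline, and each fades at its own rate. The numbers and thresholds are mine, just to show the idea.

```python
# Toy three-layer affect model: emotion over mood over personality.
class LayeredAffect:
    def __init__(self, personality_valence=0.2):
        self.personality = personality_valence   # stable baseline
        self.mood = 0.0                          # slow, cumulative
        self.emotion = 0.0                       # brief, event-driven

    def react(self, event_valence):
        self.emotion = event_valence             # momentary reaction
        self.mood += 0.2 * event_valence         # events also nudge the mood

    def tick(self):
        self.emotion *= 0.5                      # emotions fade quickly
        self.mood *= 0.95                        # moods fade slowly

    def expressed(self):
        # The strongest transient layer wins; otherwise personality shows through.
        if abs(self.emotion) > 0.1:
            return ("emotion", self.emotion)
        if abs(self.mood) > 0.1:
            return ("mood", self.mood)
        return ("personality", self.personality)

npc = LayeredAffect()
npc.react(-0.8)                                  # something bad just happened
print(npc.expressed())                           # ('emotion', -0.8)
for _ in range(5):
    npc.tick()
print(npc.expressed())                           # ('mood', ...) once the emotion fades
for _ in range(40):
    npc.tick()
print(npc.expressed())                           # ('personality', 0.2) eventually
```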


Emotionally Expressive Agents

Seif El-Nasr, Mary, et al. "Emotionally Expressive Agents." Proc. of Computer Animation, 1999, Geneva. pp. 48-57.

Fuzzy Logic Adaptive Model of Emotions, or FLAME, produces smooth transitions between emotions by relating events, goals and emotions, while also incorporating learning algorithms. Seif El-Nasr et al. define three components of the FLAME model: emotion, decision-making, and learning. The emotion component applies goals and expectations to an event; using a model similar to the OCC model, the event is appraised and a behavior is selected. This behavior is passed to the decision-making component, where a decay function is applied to it and it is acted out. So far this is much like Velásquez's model, but here is where it begins to deviate and expand. Seif El-Nasr et al. implement a learning algorithm that logs observed sequences of events, or patterns, of length x. Each time a sequence is observed a count is incremented, so the probability of a new event occurring given previous events can be calculated. They also implement reinforcement learning, where after a series of experiences the agent will be able to associate an event with a goal and will perform the rewarding action.
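
The pattern-counting piece of the learning component is easy to sketch. Here's my own minimal version: log every observed sequence of length x, count repeats, and estimate the probability of the next event given the recent history. The class name and the toy events are mine, and this covers only the counting idea, not FLAME's fuzzy appraisal.

```python
# Toy sequence-frequency learner: P(next event | recent context) from counts.
from collections import Counter, defaultdict

class SequenceLearner:
    def __init__(self, x=2):
        self.x = x                                  # pattern length
        self.counts = defaultdict(Counter)          # context -> Counter of next events
        self.history = []

    def observe(self, event):
        if len(self.history) >= self.x - 1:
            context = tuple(self.history[-(self.x - 1):])
            self.counts[context][event] += 1
        self.history.append(event)

    def probability(self, next_event, context):
        counter = self.counts[tuple(context)]
        total = sum(counter.values())
        return counter[next_event] / total if total else 0.0

learner = SequenceLearner(x=2)
for event in ["raise_hand", "treat", "raise_hand", "treat", "raise_hand", "scold"]:
    learner.observe(event)

# Given the agent just saw "raise_hand", how likely is a treat to follow?
print(learner.probability("treat", context=["raise_hand"]))   # 0.666...
```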