Friday, November 17, 2006

Weeks 9 - 12

I've forgotten to post the last few weeks, so here's the update.

Week 12: 2006/11/13 – 2006/11/17

Respecting Characters

Sheldon, Lee. Chapter 3. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 37-59.

This chapter discusses the creation of a three-dimensional character, and respecting that character. The three dimensions are: physical, how the character looks; sociological, the character's past, environment, and culture; and psychological, the character's attitudes, opinions, and view of the world. Another key is making characters self-aware, but not to the extent that they are telling the player about themselves; the character's actions should reveal who it is. Sheldon also points out the need for the character to grow and develop, and that the two are not the same thing. Growth refers to the changes that happen to the character over the course of the narrative. Development is the revealing of the character's characteristics over the course of the narrative. Another thing Sheldon focuses on is not stereotyping a character into a certain role or persona.


Character Roles

Sheldon, Lee. Chapter 4. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 61-85.

Sheldon discusses the different roles characters, specifically NPCs, fill. NPCs are what drive the narrative in games; they also give the game-world life. NPCs can also provide commentary and gossip about other characters as well as goings-on in the world. Sheldon emphasizes giving NPCs multiple roles, or functions, within the game, which makes them deeper characters. He also discusses the different roles NPCs can fulfill, such as mentors, trainers, sidekicks, merchants, and quest givers. There is also the villain, or antagonist, role. The villain is the player-character's opponent, and needs to be designed as an equal so as to present some challenge to the player.


Character Traits

Sheldon, Lee. Chapter 5. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 87-112.

In this chapter, Sheldon talks about NPC mobility, emotion, and memory. Mobility is important to creating a more realistic world. Static NPCs, ones who never move, are uninteresting and won't be as capable of filling multiple roles in the game and narrative. Making NPCs mobile doesn't mean they need to be able to move throughout the entire game-world, though some may. Some may be confined to a specific area, but are able to move around within that area. Sheldon also discusses mobility in terms of the NPC's response to the player-character: some may shy away or tremble in fear, while others will challenge the PC's space. This type of mobility helps convey the NPC's personality to the player.


Character Encounters

Sheldon, Lee. Chapter 6. “Character Development and Storytelling.” Boston: Thomson Course Technology PTR, (2004), 113-148.

Sheldon talks a lot about dialogue, and how dialogue functions in games. He focuses more on the exchange of dialogue than on its creation, but also touches on how that exchange can be perceived. Something else Sheldon touches on is character relationships. He presents a relationship system, the TGT Full System, which creates a base personality and facilitates creating dynamic relationships between characters. The base personality is comprised of six areas: like, respect, loyalty, trust, admiration, and love. A relationship modifier is applied to the base personality to calculate the relationship between two characters.
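
To make the mechanics concrete, here's a minimal sketch of how a system like the one Sheldon describes could be coded. The six attribute names come from the areas listed above, but the 0-10 scale, the additive modifier, and the class layout are my own assumptions, not Sheldon's actual system.

# Hypothetical sketch of a TGT-style relationship calculation.
# The six base-personality areas come from Sheldon; the 0-10 scale
# and the additive modifier are illustrative assumptions.
from dataclasses import dataclass, asdict

@dataclass
class BasePersonality:
    like: int = 5
    respect: int = 5
    loyalty: int = 5
    trust: int = 5
    admiration: int = 5
    love: int = 5

def relationship(base: BasePersonality, modifiers: dict) -> dict:
    """Apply a per-area modifier to the base personality to get the
    attitude one character holds toward another."""
    result = {}
    for area, value in asdict(base).items():
        # Clamp each area to the assumed 0-10 scale after modification.
        result[area] = max(0, min(10, value + modifiers.get(area, 0)))
    return result

# Example: an NPC who starts neutral but has learned to distrust the PC.
npc_base = BasePersonality()
print(relationship(npc_base, {"trust": -3, "respect": 2}))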

Week 11: 2006/11/6 – 2006/11/10

Agitating for Dramatic Change

Littlejohn, Randy. “Agitating for Dramatic Change.” Gamasutra 29 Oct. 2003. 3 Nov. 2006.

This article discusses a potential new age of video games. Littlejohn wants to create interactive drama, like video games but with more focus on story, character development, etc., for the masses. These games would be immersive, familiar like TV and film, have empathetic characters, and have intuitive interfaces. He talks a lot about the characters in these games, synthespians. Synthespians are “autonomous agents with goals, biases, and abilities who carry out apparent 'intent'.” That's not to say they have any actual understanding, or possess any actual cognitive ability, but to the player they are intelligent, goal-oriented, communicative, emotional NPCs. He also discusses some toolsets that could be created that would enable game designers and others to create such characters without being proficient at programming or AI. He also goes on to discuss Facade, a game from Michael Mateas and Andrew Stern. Facade is an experiment in electronic narrative, or storytelling, and personally I think it's a step in the right direction for games.


AI for Games and Animation: A Cognitive Modeling Approach

Funge, John. “AI for Games and Animation: A Cognitive Modeling Approach.” Gamasutra 6 Dec. 1999. 3 Nov. 2006.

Funge discusses another approach to modeling synthetic characters. It's fairly straightforward, and similar to models being developed by other researchers. The most interesting part is that he's using situation calculus, a mathematical logic notation. It's an “AI formalism for describing changing worlds using sorted first-order logic.” Fluents are any world property that can change over time, and can be described as a function with a situation as its last argument. Precondition axioms define what state the world must be in for an action to be performed. Effect axioms define the value a fluent takes after an action has occurred. Funge uses this in his model to create predefined behaviors and goal-directed behaviors. He also discusses different methods for the character to determine its actions, such as HFSMs and situation trees.
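
As a toy illustration of those terms, here's a tiny sketch in Python rather than in Funge's logic notation; the world, the single fluent, and the action are invented for the example, so it only shows the shape of fluents, precondition axioms, and effect axioms, not Funge's actual formalism.

# Toy situation-calculus-style sketch (illustrative only).
# A situation is just the history of actions performed so far.
Situation = tuple  # () is the initial situation, ("open_door",) follows it

def door_open(s: Situation) -> bool:
    """Fluent: a property of the world that varies with the situation."""
    # Effect axiom, unrolled: the door is open iff the last relevant action
    # opened it and nothing has closed it since.
    state = False
    for action in s:
        if action == "open_door":
            state = True
        elif action == "close_door":
            state = False
    return state

def poss_open_door(s: Situation) -> bool:
    """Precondition axiom: open_door is only possible if the door is closed."""
    return not door_open(s)

def do(action: str, s: Situation) -> Situation:
    """Performing an action yields the successor situation."""
    return s + (action,)

s0 = ()
if poss_open_door(s0):
    s1 = do("open_door", s0)
    print(door_open(s1))  # True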


Steering Behaviors For Autonomous Characters

Reynolds, Craig W. “Steering Behaviors For Autonomous Characters.” Proc. of Game Developers Conference, 1999, San Francisco.

Reynolds discusses many different steering behaviors for autonomous characters. These include: seek, flee, pursuit, evasion, offset pursuit, arrival, obstacle avoidance, wander, path following, wall following, containment, flow field following, unaligned collision avoidance, separation, cohesion, alignment, flocking, and leader following. He describes what each steering behavior is and, to some extent, how it works. For example, flow field following directs the motion of the character based on its position in the environment; a flow field is implemented using flow vectors that direct the character's movement, and it can be used in games to direct NPCs through the world. He also talks about combining different steering behaviors, which on their own can seem rudimentary, to create more complex steering behaviors. One method he proposes is prioritizing the steering behaviors, such that when one fails the next takes effect.
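
Below is a brief sketch of two of these ideas, a seek behavior and simple priority-based combination. The "desired velocity minus current velocity" formulation follows Reynolds' standard description, but the helper names, the max-force clamp, the toy obstacle rule, and the priority order are assumptions made for the example.

import math

# Minimal 2D vector helpers for the sketch.
def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def length(v): return math.hypot(v[0], v[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def truncate(v, max_len):
    l = length(v)
    return v if l <= max_len else scale(v, max_len / l)

def seek(position, velocity, target, max_speed=2.0):
    """Steer toward the target: desired velocity minus current velocity."""
    offset = sub(target, position)
    if length(offset) == 0:
        return (0.0, 0.0)
    desired = scale(offset, max_speed / length(offset))
    return sub(desired, velocity)

def avoid_obstacle(position, velocity, obstacles):
    """Toy avoidance: return a non-zero force only when an obstacle is close."""
    for obs in obstacles:
        if length(sub(obs, position)) < 1.0:
            return scale(sub(position, obs), 2.0)  # push directly away
    return (0.0, 0.0)

def prioritized_steering(position, velocity, target, obstacles, max_force=1.0):
    """Try behaviors in priority order; use the first one that produces a force."""
    for behavior in (lambda: avoid_obstacle(position, velocity, obstacles),
                     lambda: seek(position, velocity, target)):
        force = behavior()
        if length(force) > 1e-6:
            return truncate(force, max_force)
    return (0.0, 0.0)

# Avoidance wins here because an obstacle is within range; otherwise seek runs.
print(prioritized_steering((0, 0), (0, 0), (10, 0), obstacles=[(0.5, 0.0)]))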


SimHuman: A Platform for Real-Time Virtual Agents with Planning Capabilities

Vosinakis, Spyros, and Themis Panayiotopoulos. “SimHuman: A Platform for Real-Time Virtual Agents with Planning Capabilities.” Proc. Intelligent Virtual Agents: Third International Workshop, 2001, Madrid.

This article rehashed a lot of material discussed in previous readings. Mildly interesting was the section on dynamic actions. Their dynamic actions are things like path planning and obstacle avoidance, which are done using ray casting. More complex dynamic actions, like moving an object, use inverse kinematics. The one thing they don't discuss is how the character knows which animated sequences to create.

Week 10: 2006/10/30 – 2006/11/3

Believable Groups of Synthetic Characters

Prada, Rui, and Ana Paiva. “Believable Groups of Synthetic Characters.” Proc. of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems, 2005, Netherlands.

This article discusses modeling the behavior of groups of synthetic characters. Prada and Paiva have developed a model, the Synthetic Group Dynamics Model (SGD Model), to give a synthetic character awareness of the group it belongs to and of the other members of that group. This allows the character to build relations with other members and build proper social models. They've broken the model down into four different levels. The individual level provides each character with an identity, skills, and a personality consisting of extroversion and agreeableness. The group level defines the structure of the group and the agents' attitudes toward the group; the structure comes from members' social influence on, and social attraction to, other members. The interactions level categorizes possible interactions and their impact on the group; there are socio-emotional interactions and instrumental interactions. Finally, the context level defines the environment the characters exist in. Prada and Paiva have developed a game, “Perfect Circle: the Quest for the Rainbow Pearl”, where the player acts as a member of a group of alchemists looking for the rainbow pearl. They conducted an experiment with three different conditions: the first didn't use the SGD Model, the second used the SGD Model with a neutral group, and the third used the SGD Model with a negative group. They found that the conditions that used the SGD Model elicited more trust from the player and stronger identification with the group.


Behavior Selection and Learning for Synthetic Character

Kim, Yong-Duk, Jong-Hwan Kim, and Yong-Jae Kim. “Behavior Selection and Learning for Synthetic Character.” Congress on Evolutionary Computation 1 (2004): 898-903.

The authors present a different approach to behavior selection and learning for synthetic characters. Instead of the more common tree structure, they have devised behavior selection that chooses behaviors both probabilistically and deterministically, while learning occurs by adjusting the weights between inputs and the internal logic. For learning to occur, behaviors are categorized by similarity; when the character does something right it is rewarded, and penalized if it does something wrong, and this reward or punishment changes the weights between the input and the behavior.
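
A rough sketch of that reward-driven weight update might look like the following; the roulette-wheel selection, the learning rate, and the input/behavior names are all assumptions for illustration rather than the authors' actual formulation.

import random

# Hypothetical input-to-behavior weight table (not the paper's actual values).
weights = {
    ("enemy_near", "flee"): 0.5,
    ("enemy_near", "attack"): 0.5,
    ("food_near", "eat"): 0.8,
}

def select_behavior(active_input):
    """Choose a behavior probabilistically, weighted by the current weights."""
    candidates = [(b, w) for (i, b), w in weights.items() if i == active_input]
    total = sum(w for _, w in candidates)
    r = random.uniform(0, total)
    for behavior, w in candidates:
        r -= w
        if r <= 0:
            return behavior
    return candidates[-1][0]

def learn(active_input, behavior, reward, rate=0.1):
    """Strengthen or weaken the input-behavior link based on reward (+/-)."""
    key = (active_input, behavior)
    weights[key] = max(0.01, weights[key] + rate * reward)

chosen = select_behavior("enemy_near")
learn("enemy_near", chosen, reward=1.0 if chosen == "flee" else -1.0)
print(chosen, weights)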


Action-Selection in Hamsterdam: Lessons from Ethology

Blumberg, Bruce. “Action-Selection in Hamsterdam: Lessons from Ethology.” Proc. of the 3rd International Conference on the Simulation of Adaptive Behavior, 1994.

Blumberg aims to devise an action-selection system that allows for control of the temporal aspects of behavior. This allows for a balance between dithering between activities and pursuing one goal to the detriment of other goals. The system also needs a loose hierarchical structure and a mechanism for sharing information, as well as a flexible means of modeling motivations. Blumberg takes lessons from ethology to design the action-selection system. He also created Hamsterdam, a toolkit for developing artificial animals in a 3D environment. In this environment there are hamsters and predators. The hamsters act much like real hamsters, and the predators are “generic”. At SIGGRAPH '93 he used ALIVE to present Hamsterdam and allow users to interact in the environment.


Using an Ethologically-Inspired Model to Learn Apparent Temporal Causality for Planning in Synthetic Creatures

Burke, Robert, and Bruce Blumberg. “Using an Ethologically-Inspired Model to Learn Apparent Temporal Causality for Planning in Synthetic Creatures.” Proc. of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems, 2002, Bologna.

Building on previous work of the Synthetic Characters group, Burke and Blumberg have created a new cognitive architecture inspired by Scalar Expectancy Theory (SET) and Rate Estimation Theory (RET) as proposed by Gallistel and Gibbon. Burke and Blumberg's action selection process is interesting. Creatures have the option of exploiting their knowledge of the world, exploring the world, or reacting to observed stimuli, but these actions need to appear relevant, persistent, and coherent. At the end of each timestep the creature should have chosen its action, the object of its attention, and the target object. The three action options are mutually exclusive within the current timestep.

Week 9: 2006/10/23 – 2006/10/27

Teaching Bayesian Behaviors to Video Game Characters

Le Hy, Ronan, et al. “Teaching Bayesian Behaviors to Video Game Characters.” Robotics and Autonomous Systems 47 (2004): 177-185.

Le Hy et al. discuss their work developing synthetic characters using the Unreal engine and Bayesian modeling. Their Bayesian model consists of a description and a question. A description comprises the relevant variables, a decomposition, and parametric forms for the joint distributions. A question is parsed into searched, known, and free variables. They ran two tests. In the first they calculated the probability distributions by hand. While this allows for tuning, it doesn't meet the goal of a synthetic character capable of learning individual players' play styles. Their second test inferred the probabilities from actions occurring within the game. They accomplish this by letting the synthetic character identify the player's low-level actions and evaluate them to adjust its own behavior. They found that in doing this they could create a synthetic character that performs just as well as, if not a little better than, the existing AI in Unreal Tournament.
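
Here is a very small sketch of the second idea, learning action probabilities conditioned on an observed game state by counting; the state and action labels and the Laplace smoothing are my own illustrative choices, not the variables or decomposition Le Hy et al. actually use.

from collections import defaultdict

# counts[state][action] accumulates observations of the player's behavior.
counts = defaultdict(lambda: defaultdict(int))

def observe(state, action):
    """Record one observed (state, action) pair from gameplay."""
    counts[state][action] += 1

def action_distribution(state, actions=("attack", "flee", "camp"), alpha=1.0):
    """P(action | state) with Laplace smoothing so unseen actions keep some mass."""
    total = sum(counts[state][a] + alpha for a in actions)
    return {a: (counts[state][a] + alpha) / total for a in actions}

# Example: the bot watches the player fight whenever their health is high.
for _ in range(8):
    observe("health_high", "attack")
observe("health_high", "flee")
print(action_distribution("health_high"))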


Bringing VR to the Desktop: Are You Game?

DeLeon, Victor, and Robert Berry, Jr. “Bringing VR to the Desktop: Are You Game?” IEEE Multimedia 7.2 (2000): 68-72.

DeLeon and Berry discuss their work creating Virtual Reality Notre Dame (VRND). Their goal was to make it accessible via the internet, allowing people to access it from computers all over the world, as opposed to their Virtual Florida Everglades museum exhibit. Using the Unreal engine they modeled Notre Dame and identified important parts and structures of interest to visitors. They also created a virtual tour guide to interact with visitors and give them information about Notre Dame. Unfortunately the URL they listed for VRND is inactive.


AI Characters and Directors for Interactive Computer Games

Magerko, Brian, et al. “AI Characters and Directors for Interactive Computer Games.” Proc. of 2004 Innovative Applications of Artificial Intelligence Conference, 2004, San Jose.

Magerko et al. discuss interactive computer games from the viewpoint of interactive storytelling. To accomplish this they developed a story director and AI characters as actors. The actors have physical drives and respond to many different physiological effects. They also sense the environment, and are designed to take direction from the story director so the plot stays on course while they are still allowed to pursue their own goals. The story director keeps track of plot points and the pre- and post-conditions needed for plot points to occur, and ensures that they occur at the right time. The story director also helps keep the synthetic characters on task as far as executing the plot while still allowing them to pursue their goals. In their game, Haunt 2, the player is a ghost whose end goal is to discover who killed them. But being a ghost, the player cannot directly interact with the characters or objects; instead they need to find means of getting the characters to intervene for them. One method of doing this is to temporarily possess one of them, as long as they're not too scared. While doing so you can influence their thoughts and actions, but you have to be careful so they don't become scared and expel you.


Interacting with Virtual Characters in Interactive Storytelling

Cavazza, Marc, Fred Charles, and Steven J. Mead. “Interacting with Virtual Characters in Interactive Storytelling.” Proc. of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1, 2002, Bologna.

This article also discusses interactive storytelling, though it is not as well explained as the previous article, nor, in my opinion, as interactive for the user. They've developed their synthetic characters with Hierarchical Task Networks (HTNs) that represent each character's contribution to the plot. The character traverses its network to progress its part of the plot; when it comes to a solution node it cannot execute, it backs up and tries another solution node attached to the same goal. The user can interact by moving objects with narrative significance, or can give natural-language instructions or suggestions to the synthetic characters. The user isn't actually part of the plot.
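
The backtracking traversal can be sketched roughly as follows. The tiny task network about acquiring an object is invented for the example and is not one of Cavazza et al.'s actual plots, but the descend-then-back-up-to-a-sibling idea matches the description above.

# Rough sketch of HTN-style plan traversal with backtracking (illustrative).
# Each goal maps to alternative solutions; a solution is a list of sub-goals
# or primitive actions. Primitive actions may fail (e.g. object unavailable).

network = {
    "acquire_gift": [["take_from_box"], ["ask_friend", "receive_gift"]],
}

world = {"box_empty": True, "friend_present": True}

def executable(action):
    """Primitive-action preconditions for this toy world."""
    if action == "take_from_box":
        return not world["box_empty"]
    if action in ("ask_friend", "receive_gift"):
        return world["friend_present"]
    return True

def solve(goal):
    """Try each solution for a goal in order; back up to the next on failure."""
    if goal not in network:                     # primitive action
        return [goal] if executable(goal) else None
    for solution in network[goal]:              # alternative solution nodes
        plan = []
        for sub in solution:
            sub_plan = solve(sub)
            if sub_plan is None:
                plan = None
                break                           # this solution failed; try the next
            plan.extend(sub_plan)
        if plan is not None:
            return plan
    return None

print(solve("acquire_gift"))  # falls back to the 'ask_friend' branch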

Saturday, October 21, 2006

Week 8

Week 8: 2006/10/16 – 2006/10/20

Motivational Beliefs, Values and Goals

Eccles, Jacquelynne S., and Allen Wigfield. “Motivational Beliefs, Values and Goals.” Annual Review of Psychology. 53 (2002): 109-132.

Eccles and Wigfield review modern theories of motivation, specifically in relation to beliefs, values, and goals. I found their sections focusing on reasons for engagement and on integrating motivation and cognition especially interesting. Intrinsic motivation is one reason for engagement: motivation occurs because of interest in the activity. Eccles and Wigfield present a few theories here: self-determination theory, flow theory, and individual differences in intrinsic motivation. They also present goal theories, which they break down into ego-involved goals, maximizing positive results and minimizing negative results, and task-involved goals, mastering tasks to improve the individual. In the section on integrating motivation and cognition the authors present theories about self-efficacy and self-regulation.


Creatures: Entertainment Software Agents with Artificial Life

Grand, Stephen, and Dave Cliff. “Creatures: Entertainment Software Agents with Artificial Life.” Autonomous Agents and Multi-Agent Systems. 1.1 (1998): 39-57.

Grand and Cliff present the work done on the game Creatures, where the player interacts in real-time with synthetic creatures in a virtual environment. These creatures have “artificial neural networks for sensory-motor control and learning, artificial biochemistries for energy metabolism and hormonal regulation of behavior, and both the network and the biochemistry are 'genetically' specified to allow for the possibility of evolutionary adaptation through sexual reproduction.” The paper goes on to describe how these functions work. One of the more interesting aspects is how they designed their neural network and brain model. The neural network is divided into nine lobes interconnected by synapses, and each lobe consists of neurons with similar characteristics. The brain model regulates information flow through the lobes. Though it's fairly simple in its construction and behaviorist in flavor, it provides a logic for how the creature processes information and chooses actions.


Synthetic Vision and Memory for Autonomous Virtual Humans

Peters, C., and C. O'Sullivan. “Synthetic Vision and Memory for Autonomous Virtual Humans.” Computer Graphics Forum. 21.4 (2002): 743-753.

Peters and O'Sullivan present a model that combines memory and synthetic vision to give synthetic agents realistic knowledge about the location of objects in the world. Synthetic vision consists of two modes. In distinct vision mode, each object is false-colored with a unique color that is used to look up the object in the scene database. In grouped vision mode, objects are false-colored with group colors; this mode is used for lower-detail perception and only provides information about visible objects. Since it is unrealistic for the memory model to store every object an agent has come in contact with, filters have been devised to reduce the amount of information stored. The filtering process is done through different types of memory: short-term sensory memory, short-term memory, and long-term memory.
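
A compressed sketch of the false-color lookup plus staged memory might look like this; the color values, the capacity limits, and the promotion rule are invented for illustration and aren't the thresholds Peters and O'Sullivan actually use.

from collections import deque

# Hypothetical scene database: unique false color -> object id.
scene_db = {(255, 0, 0): "door", (0, 255, 0): "chair", (0, 0, 255): "lamp"}

sensory_memory = deque(maxlen=5)    # very short buffer of raw percepts
short_term = deque(maxlen=3)        # filtered, limited-capacity store
long_term = {}                      # object -> times attended

def perceive(rendered_pixels):
    """Distinct vision mode: map false-colored pixels back to objects."""
    for color in rendered_pixels:
        obj = scene_db.get(color)
        if obj:
            sensory_memory.append(obj)

def consolidate():
    """Filter sensory memory into short-term, and tally repeats in long-term."""
    for obj in list(sensory_memory):
        if obj not in short_term:
            short_term.append(obj)
        long_term[obj] = long_term.get(obj, 0) + 1

perceive([(255, 0, 0), (0, 255, 0), (255, 0, 0)])
consolidate()
print(list(short_term), long_term)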


Research problems in the use of a shallow Artificial Intelligence model of personality and emotion

Elliot, Clark. “Research problems in the use of a shallow Artificial Intelligence model of personality and emotion.” Proc. of the 12th National Conference on Artificial Intelligence, 1994, Seattle. 9-15.

Largely, this article spends less time discussing research problems and more time discussing the author's own work. Throughout, he does raise interesting ideas, one of them being that users want to express emotions to a computer provided the computer gives a believable illusion of comprehension. We want to anthropomorphise the computer, or the agents in the virtual environment we're interacting with.

Tuesday, October 17, 2006

Week 7

Week 7: 2006/10/9 – 2006/10/13

A Cognitive Psychological Approach to Gameplay Emotions

Perron, Bernard. “A Cognitive Psychological Approach to Gameplay Emotions.” Proc. of International DiGRA Conference, 2005, Vancouver.

Perron discusses the emotions that arise from gameplay. Fiction emotions are empathetic emotions elicited in the player from the observer's position. Artifact emotions are elicited by the technology and imagery used to make the game. These two types are existing definitions taken from the film industry. Perron introduces a new type, gameplay emotions, which he describes as “the emotions arising from our actions in the game” and “the consequent reactions of the game(-world).” The rest of the article becomes rather repetitive and similar to other articles about emotion. I feel the important thing is that the creation of synthetic characters has the potential to elicit deeper levels of gameplay emotion.


Better Game Characters By Design

Isbister, Katherine. Chapter 6. “Better Game Characters By Design” New York: Elsevier, 2005.

In this chapter Isbister discusses the role of body language in social interactions and how it applies to video games. Bodies communicate our relationships with others, as well as aspects of our personalities, current emotions, and moods. She concludes that movement in games isn't as fully utilized as it could be, and gives some pointers about how to better incorporate movement and body language into video game characters. I find this interesting because there are a lot of applications for this in synthetic character creation. Characters with well-developed body language will appear more realistic and will be more believable.


Human-level AI's Killer Application: Interactive Computer Games

Laird, John, and Michael van Lent. “Human-level AI's Killer Application: Interactive Computer Games.” Proc. of the Seventeenth National Conference on artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, 2000.

Laird and van Lent discuss the research and innovation possibilities of AI, specifically human-level AI, in computer and video games. Currently, academic AI development has been split into specialized areas. Though this has been successful, the authors feel that progress toward human-level AI has been neglected. Because the growing realism of computer games demands more complex behavior from, and interaction with, the characters, they feel games are the ideal environment for this development. AI plays many roles in video games, such as enemies and opponents, partners, and support characters, to name a few.


Exploration of Unknown Environments with Motivational Agents

Macedo, Luís, and Amílcar Cardoso. “Exploration of Unknown Environments with Motivational Agents.” Proc. of the Third International Joint Conference on Autonomous Agents and Multiagent Systems – Vol. 1, 2004, New York.

Macedo and Cardoso discuss exploration of environments and the motivations that drive exploration. The motivations they focus on are surprise, curiosity, and hunger. Their agents have a map of the areas they've explored, memory about entities (objects in the world), and plans (tasks). The motivations are applied to the information the agents receive about the world, and the agents select the appropriate action based on their motivations and memory. This is all well and fine, but it seems the motivation aspects were underexplored.

Sunday, October 08, 2006

Week 6

Week 6: 2006/10/2 - 2006/10/6

Synthetic Characters with Emotional States

Avradinis, Nikos, Themis Panayiotopoulos, and Spyros Vosinakis. "Synthetic Characters with Emotional States." Lecture Notes in Computer Science: Methods and Applications of Artificial Intelligence. Berlin: Springer, 2004. 505-514.

In this article Avradinis, Panayiotopoulos, and Vosinakis discuss why synthetic characters need to have emotions, and how to create an emotion system in a synthetic character. Emotions are an important part of creating synthetic characters because the characters need to be able to respond appropriately, and to accomplish this they need an understanding of emotion. The authors give the example of 3D models in a virtual environment: not only will users expect them to look realistic, but they will expect them to behave consistently with their own internal attributes as well as with outside inputs. To generate these emotions they propose a system based on their work with SimHuman and on Carroll Izard's theories about emotion activation. They propose a three-layer architecture of cognitive, non-cognitive, and physical layers. Actions are generated in either the cognitive or non-cognitive layer and can be projected (acted out) through the physical layer. A few things they note that their design doesn't implement, and neither do other designs, are the ideas of dynamically creating rational processes in the characters and of spontaneous behavior.


A Model for Personality and Emotion Simulation

Egges, Arjan, Sumedha Kshirsagar, and Nadia Magnenat-Thalmann. "A Model for Personality and Emotion Simulation." Lecture Notes in Computer Science: Knowledge-Based Intelligent Information and Engineering Systems. Berlin: Springer, 2003. 505-514.

In this article the authors argue that what is missing from synthetic characters is individuality, what drives them. They present mathematical functions to represent personality, emotional state and emotional state history, and mood, as well as functions to update these categories. Beyond that, there is very little of value in this article.


Why We Play Games: Four Keys to More Emotion Without Story

Lazzaro, Nicole. "Why We Play Games: Four Keys to More Emotion Without Story." XEODesign Inc. 2004.

Lazzaro, in this article, discusses why we play games, what makes playing video games fun, and the emotions these kinds of fun elicit in us. Through her research with XEODesign she has broken fun down into four different categories and the emotions these forms of fun elicit. Hard Fun is the satisfaction that comes from being sufficiently challenged and overcoming that challenge and evokes emotions such as frustration and triumph. Easy Fun is about players discovering the world through immersion, exploration, and adventure. Easy Fun provokes wonder, awe and mystery for these players. Altered States moves the player from one mental state to another, making the player feel different. Finally the People Factor is about the social aspects of video games. Whether it's cooperative team work, or competing against other players, these players derive pleasure and pride from such activities.

Social characters have the potential to play a role in the enjoyment people get out of playing video games. They could add additional layers of challenge, exploration, depth, and competition to video games.


Changing personalities: towards realistic virtual characters

Poznanski, Mike, and Paul Thagard. “Changing personalities: towards realistic virtual characters.” Journal of Experimental & Theoretical Artificial Intelligence, Volume 17, Issue 3, Sept. 2005. 221-241.

Poznanski and Thagard discuss their personality model, SPOT (simulating personality over time), which satisfies the needs of both psychology and computer science, namely the game development industry. The criteria they used to design it are psychological plausibility and realism, simplicity and efficiency, and interesting and varied model behavior. Using the Java Neural Network Simulator (JavaNNS) they've implemented a three-layer feed-forward neural network. These layers are an input layer, a personality/emotion layer, and an output layer. The layers contain nodes and are connected through links between the nodes. Each node and link has a value; when a situation occurs, the input is evaluated through these nodes and corresponding links to determine the character's behavior. The interesting thing about this model is that they've also implemented a mechanism that allows the character's personality to change over time based on the situations it encounters, while also taking into account its “genetic dispositions”, the personality it started off with. So, for example, a person who is disposed to being disagreeable but encounters many positive situations will only be able to reach a certain level of agreeableness. These personalities, the rate at which they change, and the node and link values are all customizable, allowing each character to be different.
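
Here is a small sketch of that personality-drift idea with a disposition-based ceiling; the update rule, the drift rate, and the bounds are my own guesses at a plausible mechanism, not SPOT's actual network or equations.

# Illustrative sketch: a trait drifts toward experienced situations, but its
# "genetic disposition" limits how far it can ever move (assumed mechanism).

class Trait:
    def __init__(self, disposition, max_shift=0.3, rate=0.05):
        self.disposition = disposition          # starting (genetic) value, 0..1
        self.value = disposition                # current expressed value
        self.max_shift = max_shift              # how far from disposition it may drift
        self.rate = rate                        # per-situation learning rate

    def experience(self, situation_valence):
        """situation_valence in [0, 1]: 1 = very positive, 0 = very negative."""
        self.value += self.rate * (situation_valence - self.value)
        lo = self.disposition - self.max_shift
        hi = self.disposition + self.max_shift
        self.value = max(lo, min(hi, self.value))

agreeableness = Trait(disposition=0.2)          # disposed to be disagreeable
for _ in range(100):
    agreeableness.experience(1.0)               # a long run of positive situations
print(round(agreeableness.value, 2))            # capped near 0.5, not 1.0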

Friday, September 29, 2006

Just too White and Nerdy

So I just saw the video for Weird Al Yankovic's new song White and Nerdy, and I am sad to say that too much of it applies to me, and the stuff that doesn't... well, I get all the "jokes". Anyway, you should totally check it out, it's hilarious. And be sure to share it with all of your friends, white and nerdy or not.

Week 5

Week 5: 2006/9/25 – 2006/9/29

New Challenges for Character-Based AI for Games

Isla, Damián, and Bruce Blumberg. “New Challenges for Character-Based AI for Games.” Proc. of the AAAI Spring Symposium on AI and Interactive Entertainment, March 2002. Palo Alto, CA.

In this article Isla and Blumberg suggest the next steps character-based AI needs to take in order to advance for games. First they discuss perception: what the character can and should be able to sense, and the ability to recognize patterns. They suggest separating the actual state of the virtual world from the character's perceptions about that state. Another key is the character's ability to anticipate events based on previously noted patterns. This includes anticipating the player's actions as well as other characters'. Isla and Blumberg point out that the ability of the character to predict incorrectly would add an additional level of enlightenment. They also note, like many others, that emotion modeling is important and should play a major role in the behavior and decisions the character makes. Finally they discuss learning and memory. They specifically suggest episodic memory, where specific examples of events are stored as opposed to cause-and-effect statistics. Their reasoning is that it will increase learning speed.


A Layered Brain Architecture for Synthetic Creatures

Isla, Damián, et al. “A Layered Brain Architecture for Synthetic Creatures.” Proc. of the International Joint Conference on Artificial Intelligence, 2001. Seattle, Washington.

Isla et al. describe C4, a brain architecture modeled after biological systems. The architecture is implemented as a collection of separate systems that “communicate through an internal blackboard”. It contains a world model that functions as an event distribution system. The world model distributes all events to all creatures in the world, and it is up to their individual sensory systems to filter out everything but the sensory information they can honestly sense. A perception system, which categorizes sensory input, and a working memory, which keeps a history of past sensory input, have also been implemented. With a working memory in place, prediction and surprise can also be implemented. Surprise has yet to be, but prediction has, and it acts both as a way of anticipating future events and as a means of maintaining a view of the present state. Isla et al. have also added an action system that decides and selects the appropriate action, a navigation system to direct the creature to where the selected action needs to take place, and a motor system (animation) to get it there.
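
The blackboard idea can be sketched in a few lines. The system names below follow the summary above, but the dictionary-based blackboard and the update order are simplifying assumptions, not C4's real implementation.

# Simplified blackboard sketch (assumed structure, not the real C4 code).
blackboard = {}

def sensory_system(world_events):
    """Sensory honesty: keep only events this creature could plausibly sense."""
    blackboard["percepts"] = [e for e in world_events if e.get("audible") or e.get("visible")]

def perception_system():
    """Categorize raw percepts into labelled observations."""
    blackboard["observations"] = [e["type"] for e in blackboard.get("percepts", [])]

def working_memory(history):
    """Append current observations to a running history other systems can read."""
    history.extend(blackboard.get("observations", []))
    blackboard["history"] = history

def action_system():
    """Pick an action from what is currently on the blackboard."""
    blackboard["action"] = "flee" if "loud_noise" in blackboard.get("observations", []) else "wander"

history = []
events = [{"type": "loud_noise", "audible": True}, {"type": "distant_event", "audible": False}]
for system in (lambda: sensory_system(events), perception_system, lambda: working_memory(history), action_system):
    system()
print(blackboard["action"], blackboard["history"])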

Isla et al. have already implemented two projects using C4. The first is sheep|dog, in which the player acts as a shepherd who interacts with Duncan, a virtual dog, using vocal commands to herd sheep. The other is Clicker, in which the player trains Duncan using the clicker-training technique used to train real dogs. In Isla's article “New Challenges for Character-Based AI for Games” (2002) he describes many of the same systems that have been successfully implemented in C4. Most notable is the differentiation between the world state and the creature/character's perceived view of the world state (sensory honesty).


Motivation Driven Learning for Interactive Synthetic Characters

Yoon, Song-Yee, Bruce M. Blumberg, and Gerald E. Schneider. “Motivation Driven Learning for Interactive Synthetic Characters.” Proc. of the Fourth International Conference on Autonomous Agents, 2000. Barcelona, Spain.

The authors describe synthetic characters as “3D virtual creatures that are intelligent enough to do and express the right things in a particular situation or scenario.” They have the ability to adapt to their environment by adapting their behaviors and preferences. Yoon, Blumberg, and Schneider have implemented a creature kernel that drives the actions of a synthetic character. The creature kernel is comprised of four systems: perception, motivation, behavior, and motor. For the character to function successfully in the virtual world, interconnected communication between the four systems is established. Their primary focus is the motivation-driven learning system, which they've broken down into three methods of learning. Organizational learning updates the weights and connections (preference learning) in the networks contained within the creature kernel, as well as the overall structure of the network (strategy learning). Concept learning pertains to the beliefs a character has about objects and the world; characters are “born” with some built-in concepts. Affective tag formation is the tendency to choose one action over another based on an emotional memory.


Imitation as a First Step to Social Learning in Synthetic Characters: A Graph-based Approach

Buchsbaum, D., and B. Blumberg. “Imitation as a First Step to Social Learning in Synthetic Characters: A Graph-based Approach.” Proc. of 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2005. Los Angeles, California.

Buchsbaum and Blumberg discuss their work on a system that allows animated characters to observe and imitate other characters' movements. They explain what they've accomplished with the implementation of Max and Morris Mouse, two anthropomorphic characters. Max has the ability to imitate and deduce the reasoning behind Morris' actions. Max is pre-equipped with a set of poses, and movements that transition between poses, forming a posegraph. Max also has synthetic vision that takes as input the graphical rendering of the world from his perspective. The objects he perceives are uniquely color coded; these objects include Morris' body parts. Max uses Morris' root node as a point of reference for his movements, and can then apply them to himself by searching through his posegraph to find the best-matching poses. Max also has the ability to reason about Morris' motivations. Max has a motivationally driven action system that gives him motivations for actions. Similar to how he determines the action, he searches through his action tuples to find motivations for Morris' actions based on his own experiences.
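
A bare-bones sketch of the best-matching-pose search might look like this; representing a pose as a small joint-angle vector and scoring by Euclidean distance are assumptions made for the example, not the paper's actual representation.

import math

# Hypothetical posegraph: each pose is a short vector of joint angles (radians),
# and edges list which poses can follow which.
poses = {
    "stand":  [0.0, 0.0, 0.0],
    "crouch": [0.9, 0.8, 0.1],
    "reach":  [0.2, 0.1, 1.2],
}
edges = {"stand": ["crouch", "reach"], "crouch": ["stand"], "reach": ["stand"]}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_matching_pose(observed_joints, current_pose):
    """Pick the reachable pose closest to the observed body configuration."""
    candidates = edges[current_pose]
    return min(candidates, key=lambda p: distance(poses[p], observed_joints))

# Max observes Morris roughly crouching and imitates with his own nearest pose.
print(best_matching_pose([0.85, 0.75, 0.15], current_pose="stand"))  # "crouch"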

Sunday, September 24, 2006

Week 4

Week 4: 2006/9/18 - 2006/9/22

Anticipatory AI and Compelling Characters

Blumberg, Bruce. “Anticipatory AI and Compelling Characters.” Gamasutra 3 Feb. 2006. 15 Sept. 2006.

Blumberg discusses what gives synthetic characters a “sense of an inner life.” He defines this sense of inner life through low-level motion (eye movements, etc.) and behavior, which give the observer insight into what the character is going to do next and how it feels about it. To execute this he suggests anticipatory AI that supports anticipatory behaviors, and proposes three ways to do it. First is making the character's perceptions perceivable, meaning having it orient to the sensory inputs it's receiving, such as sounds or smells. Next he covers expected expectations: an anticipatory action, the action itself, the character's expectation of how the action will play out, and the end of the action. These actions communicate to the observer what the character is going to do, its expectations about what it's about to do, and finally how the character feels about the outcome of the action. Lastly, Blumberg talks about making upcoming changes in motivational states perceivable. These are the actions the character performs to communicate to the observer that it is about to change its motivations and what the observer can expect. Where most other AI systems focus more on reaction, Blumberg proposes anticipation to make the characters seem more sentient.


The EMOTE Model for Effort and Shape

Chi, Diane, et al. “The EMOTE Model for Effort and Shape.” Proc. of 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, New York, pp. 173-182.

This article primarily talks about how EMOTE (Expressive MOTion Engine) works: the underlying mechanics, equations, and decision making the engine goes through to produce movement. What I find more interesting is what EMOTE actually does and how they chose to design it. It is based on the Effort and Shape components of Laban Movement Analysis (LMA), which also comprises Body, Space, and Relationship. Chi et al. devised a system that uses the Effort and Shape parameters, allowing for their specification, to independently modify different parts of the body. Key poses are used as specifications for the movements of a gesture. Key poses can be extracted from motion libraries, procedurally generated motions, or motions captured from live performers. This system generates more natural synthetic gestures that should correspond to the character's emotive state.


Player Character Design Facilitating Emotional Depth in MMORPGs

Eladhari, Mirjam, and Craig Lindley. “Player Character Design Facilitating Emotional Depth in MMORPGs.” Proc. of the Digital Games Research Conference 2003, University of Utrecht, 4-6 November 2003.

This article discusses how to produce emotional depth in a player character (PC), the character a user plays in a video game. As an ongoing research project, Eladhari and Lindley, using the Purgatory engine, have created the Ouroboros project, a dramatic role-playing game in which they are experimenting with making the PC a semi-autonomous agent. The things I found most interesting about their implementation are the contextual gesture system and the mind system. Contextual gestures are based on the character's state of mind as well as on other characters and the world. The mind system provides the character's personality, emotions, moods, and sentiments. These aspects act independently but influence each other, determining the character's goals and gestures.


Manipulation of Non-Verbal Interaction Style And Demographic Embodiment to Increase Anthropomorphic Computer Character Credibility

Cowell, Andrew J., and Kay M. Stanney. “Manipulation of Non-Verbal Interaction Style And Demographic Embodiment to Increase Anthropomorphic Computer Character Credibility.” International Journal of Human-Computer Studies, 62(2), 281-306. 2005.

Cowell and Stanney discuss non-verbal behaviors in synthetic characters and how to make users judge the characters as credible. They recognize that the majority of non-verbal behavior research just categorizes behaviors, as opposed to studying how they interact with each other. Branching off De Meuse's taxonomy of non-verbal cues, which distinguishes behavioral actions from those that are not and how much control one has over the cue, Cowell and Stanney determine preferences for a character's personal appearance and demographics. They further break non-verbal behaviors down into two ranks. The first rank consists of facial expression, eye contact, and paralanguage. The second rank includes gestures and posture. While both ranks are important, they found that the first rank was more crucial to creating a comfortable, trustworthy character.

Thursday, September 21, 2006

Lit Review Proposal

Synthetic characters are in more places than one might think. On websites, they function primarily as a more comfortable means of site navigation for users by providing an illusion of human-to-human interaction. In video games, they act as non-player characters, characters whose actions are not directly controlled by the player, to give the game world an illusion of actuality. Regardless of their interactive capabilities, they are still superficial illusions. And in order to create more believable synthetic characters, thus deepening the illusion, we must impart our social norms and expectations onto them. By doing so, we provide ourselves with opportunities to better identify with these created characters.

The critical component in developing any believable synthetic character is personality, which defines his or her motivations, which in turn determine his or her actions. Currently, many models and frameworks provide psychologists, and others interested in human behavior, a means to analyze various personality types. For instance, in a value-based framework a synthetic character's sensory perceptions are evaluated against predefined emotions, motivations, and prior perceptions, and this evaluation results in the character performing an action. However, these models are rudimentary. They lack the ability for characters to remember previous sensory input, thus disrupting the immersiveness of their actions.

But because these models and frameworks can ultimately produce more believable character interactions (with minor modifications), they can be applied in video games and virtual reality therapy. The creation of more immersive characters, with a broad range of emotionally structured motivations, engenders a more immersive virtual environment. This more immersive virtual world enables users to project themselves into it, providing them with human-to-human-like interactions with these synthetic characters.


Because I have nothing better to post...

So seeing that I'm a bit of an uber nerdy grad student and have nothing better to blog about I've decided I'm going to start posting the annotations I'm doing for one of my classes here as well as on our class wiki. I figure this way I might, just might, get some feedback from someone about what I'm researching. The end goal of the class is to write a literature review on a topic pertaining to HCI and cognitive psychology. So the topic I've decided to look into is deriving and displaying artificial emotion in synthetic characters. This first entry is going to be three weeks worth, but from here on out it should be one week per entry. (Each week has four annotations.) Feedback and comments are more than welcome. Enjoy!


Week 1: 2006/8/28 - 2006/9/1

The Uncanny Valley

Mori, Masahiro. "The Uncanny Valley." Energy 7(4) (1970): 33-5. Trans. Karl F. MacDorman and Takashi Minato.

In this article Mori presents his hypothesis that as robots become more human in appearance they also become more familiar to humans, but only up to a certain point: when the likeness is almost human but not quite, familiarity drops off drastically and becomes negative. Mori calls this the uncanny valley. When movement is added to the robot, the distinction between positive and negative familiarity becomes even more pronounced. He illustrates this with a graph of two peaks with a valley in between; the peaks represent high familiarity and the valley represents the “uncanny valley”, strange and unfamiliar, as something gains a more human likeness. Mori suggests that when designing robots we shouldn't shoot for the second peak (the most human-like) but for the first peak: establish familiarity, but avoid falling into the uncanny valley.


Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It?

MacDorman, Karl F. "Androids as an Experimental Apparatus: Why Is There an Uncanny Valley and Can We Exploit It?" Cognitive Science Society (2005).

This article talks about how androids, specifically very human-looking androids, can elicit a subconscious fear of death. MacDorman explains an experiment he conducted. He uses terror management theory and the mortality salience hypothesis as theories for why uncanny androids provoke fear of death. He showed two groups a set of pictures, one set including an image of a very human-looking female android (experimental), the other an image of an Asian woman (control), and then asked them a series of world-view questions and word-completion problems. The results were inconclusive, largely because of the lack of statistical difference between the two groups. MacDorman then conducted interviews over instant message with some of the participants after the questionnaire was concluded. He claims that the interviews show more conclusively that the picture of the android creates a subconscious fear of death. It should be noted, though, that the questions asked were leading.


Synthetic Social Interactions

Romano, Daniela M. "Synthetic Social Interactions." Proc. of Human-Animated Characters Interaction, 2005, Edinburgh. Edinburgh: Napier University, 2005.

Romano discusses the different aspects of what would make a virtual character intriguing to people. These synthetic social interactions are becoming more mainstream with the use of interactive avatars on websites that help direct users. The key things she points out that make these characters believable, and able to evoke an emotional response, are the ability to process and respond to text or speech input, and the ability to display a range of emotions and personality traits through face and body movements. To elicit these behaviors she proposes different personality models such as the five-factor model (FFM) and the Ortony, Clore and Collins (OCC) model. She also indicates that for the avatar to be truly believable it needs to possess some sort of social cognition.


Virtual people: capturing human models to populate virtual worlds

Hilton, Adrian, et al. "Virtual people: capturing human models to populate virtual worlds." Proc. of Computer Animation, 1999, Geneva.

This article describes a technique for 3D human model reconstruction. It is designed to be a low-cost, automatic process for use in different types of virtual environments. To summarize, four photographs of a person are taken in front of a blue screen: front, back, left, and right. These photos are then analyzed by an algorithm that determines how a predefined human model should be modified to represent the person in the photographs. Texture maps are also created from the photos and applied to the model, and the joint positions are moved to allow for more accurate kinematics. The current limitations of the work are also discussed briefly; for instance, it currently lacks facial feature point labeling and precise kinematic structure reconstruction. In the future this might be a better way of creating human models without falling into the uncanny valley.


Week 2: 2006/9/4 - 2006/9/8

On making believable emotional agents believable

Ortony, Andrew. "On making believable emotional agents believable." In Trappl, R., and P. Petta (eds.), Emotions in humans and artifacts. Cambridge: MIT Press, 2003.

In 1988, Ortony, Clore and Collins developed a framework with 22 emotional categories, known as the OCC model, for developing believable emotional agents. In this article Ortony discusses possible adaptations to the model to make it simpler and more accurate. He proposes reducing the number of emotional categories to five positive and five negative categories. These create consistency in the agent by evoking similar emotions in similar situations. He then discusses the intensity of emotion as expressed in three types of emotion response tendencies: expressive, information-processing, and coping. These things create a personality for what will hopefully be a believable agent. Something lacking is a history, or memory. The agent will react to a particular situation with the same intensity it did a few moments earlier, and will continue to react the same way no matter how frequently the situation recurs. I think this is something Ortony fails to address in his discussion of intensity.


Integrating the OCC Model of Emotions in Embodied Characters

Bartneck, Christoph. "Integrating the OCC Model of Emotions in Embodied Characters." Proc. of Virtual Conversational Characters: Applications, Methods, and Research Challenges, 2002, Melbourne.

In this article Bartneck discusses the OCC model and how to apply it to agents. He lists five phases the agent uses to process an event: classification, quantification, interaction, mapping, and expression. In the classification phase the agent determines whether the event is good or bad; to do this, Bartneck says it needs knowledge of its world, including standards, goals, and expectations. The quantification phase determines the intensity of the emotion displayed by the agent. Here is where Bartneck finds fault with the OCC model, recognizing that the agent needs a history function that keeps track of past events and allows for recalculation of emotion. The third phase, interaction, isn't described in the OCC model either. This phase determines how the current event and its emotional response will interact with the agent's current emotions. During the mapping phase the emotional response is mapped to one of the OCC emotion categories. Finally, in the expression phase the determined response is expressed through the available means. Bartneck's conclusion is that the OCC model is a good starting place for developing a believable agent, but history and emotion-interaction functions need to be integrated into it for it to be more effective.
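
A skeletal version of that five-phase pipeline, with the history function Bartneck calls for bolted on, might look like the sketch below; the goal list, the habituation rule, and the tiny category mapping are illustrative assumptions rather than Bartneck's or the OCC model's actual definitions.

# Illustrative five-phase event pipeline (classification, quantification,
# interaction, mapping, expression) with a simple history for habituation.
history = []          # past (event, valence) pairs
current_mood = 0.0    # running emotional state in [-1, 1]

def classify(event, goals):
    """Good if the event serves a goal, bad otherwise (toy standard)."""
    return 1.0 if event in goals else -1.0

def quantify(event, valence):
    """Intensity shrinks the more often we've already reacted to this event."""
    seen = sum(1 for e, _ in history if e == event)
    return valence / (1 + seen)

def interact(intensity):
    """Blend the new response with the agent's current emotional state."""
    global current_mood
    current_mood = 0.7 * current_mood + 0.3 * intensity
    return current_mood

def map_to_category(value):
    return "joy" if value > 0 else "distress"

def express(category, strength):
    print(f"agent expresses {category} (strength {strength:.2f})")

def process(event, goals=("receive_praise",)):
    valence = classify(event, goals)
    intensity = quantify(event, valence)
    blended = interact(intensity)
    express(map_to_category(blended), abs(blended))
    history.append((event, valence))

process("receive_praise")   # full-strength response
process("receive_praise")   # weaker the second time, thanks to the history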


The Five-Factor Model: Emergence of a Taxonomic Model for Personality Psychology

Popkins, Nathan C. "The Five-Factor Model: Emergence of a Taxonomic Model for Personality Psychology." 1998. Personality Research. 1 Sept. 2006.

Popkins evaluates the five-factor model on compatibility, taxonomy, application, originality, and universality to test its potential as a theory. He starts by describing the five-factor model (FFM) of personality. The five factors are extroversion-introversion, neuroticism, agreeableness, conscientiousness, and openness. Popkins's conclusions and arguments are somewhat contradictory. FFM is compatible with other factor models in that it stems from Cattell's sixteen-factor model, much like the PEN model does. It is quantifiable and categorical but leaves something to be desired in many situations due to its broad overview. On one hand he cites McAdams stating that “personality theories do more than specify traits,” which keeps FFM from being a valid theory, but at the same time he says it can be used effectively in application, specifically in academia. Popkins then goes on to claim that FFM is an original model, even though he stated earlier that it stems from the sixteen-factor model; he justifies this by stating that these form a family of models that exist autonomously from other major psychological theories. For it to be universal it should be applicable cross-culturally and in any situation, but as he previously stated, and states again, FFM doesn't anticipate behavior, though he claims it does hold up well cross-culturally. He finally concludes that FFM is more a taxonomy than a theory.


The Uncanny Valley: does it exist?

Brenton, Harry, et al. "The Uncanny Valley: does it exist?" Proc. of Human-Animated Characters Interaction, 2005, Edinburgh. Edinburgh: Napier University, 2005.

Brenton et al. discuss the existence of the Uncanny Valley as described by Mori. There has so far been little research into whether the effect is something quantifiable. Their goal is to develop hypotheses that will provoke later research and help determine whether the uncanny response is measurable. Brenton et al. present four hypotheses. The first concerns measuring presence in the uncanny using a theory presented by Slater. The second is that perceptual cues and realism suggest a character is a person, and that when a highly realistic character's behavior and motion don't match its graphical representation, the effect can be unsettling. This is especially noticeable in their third hypothesis, which concerns perceptual cues in the eyes and face of a character. Their last hypothesis looks into how the uncanny may be perceived differently over a period of time and across different cultures. These hypotheses will help to further investigate the Uncanny Valley and to determine whether the uncanny can be measured experimentally.


Week 3: 2006/9/11 - 2006/9/15

The Art and Science of Synthetic Character Design

Kline, Christopher, and Bruce Blumberg. "The Art and Science of Synthetic Character Design." Proc. of AISB Symposium on AI and Creativity in Entertainment and Visual Art, 1999, Edinburgh.

This article discusses the creation of synthetic characters. The elements Kline and Blumberg see as key to making a believable character, given what we expect from one, are motivational drives, emotion, perception, and action selection. To implement these subsystems they have devised a framework that encompasses all of them instead of focusing on each individually. The framework is value-based, meaning input is given a numeric value which can then be evaluated to derive the action the character should take. The sensor primitive is the character's input device; it takes the world and other sensors' output as input. The transducer primitive takes the sensor data and begins transforming it. The data then moves to the accumulator primitive, where it is combined and then sent to a semantic group that applies a behavior. Kline and Blumberg also give examples of how this value-based framework can be applied to each of their subsystems. The interesting part of this framework is that it takes into account all the inputs and outputs the character is receiving, and has the potential to prioritize them as well as keep a sort of history of what has already happened.
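
To make the flow of values concrete, here is a minimal pipeline in the spirit of that description; the sensor/transducer/accumulator names follow the summary above, but the specific math (a squashing transducer and a leaky accumulator) and the hunger-for-food example are assumptions for illustration.

import math

# Illustrative value-based pipeline: sensor -> transducer -> accumulator -> behavior.

def sensor(world):
    """Raw numeric reading from the world (here, distance to food)."""
    return world["distance_to_food"]

def transducer(distance):
    """Squash the raw value into [0, 1]: closer food means a stronger signal."""
    return 1.0 / (1.0 + math.exp(distance - 5.0))

def make_accumulator(decay=0.8):
    """Leaky accumulator: keeps a short history of past values."""
    state = {"value": 0.0}
    def accumulate(signal):
        state["value"] = decay * state["value"] + signal
        return state["value"]
    return accumulate

def semantic_group(accumulated, threshold=1.5):
    """Map the accumulated value onto a behavior."""
    return "approach_food" if accumulated > threshold else "keep_wandering"

accumulate = make_accumulator()
for step_world in ({"distance_to_food": 8.0}, {"distance_to_food": 4.0}, {"distance_to_food": 2.0}):
    value = accumulate(transducer(sensor(step_world)))
    print(semantic_group(value))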


Modeling Emotions and Other Motivations in Synthetic Agents

Velásquez, Juan D. “Modeling Emotions and Other Motivations in Synthetic Agents.” Proc. of the AAAI Conference, 1997, Providence.

Velásquez discusses the Cathexis model, a “... distributed model for the generation of emotions and their influence in the behavior of autonomous agents” that takes emotions, moods, and temperaments into account. He has opted to group emotions into specific emotion families and represent them with proto-specialists, comparable to Minsky's. They comprise four sensor groups and contain activation and saturation thresholds, emotion duration and intensity, and an emotion decay function. Emotion proto-specialists can also run in parallel, meaning more than one can be active at a time. Velásquez also differentiates between moods and emotions, moods being low levels of arousal while emotions involve high levels. He also briefly discusses the behavior system, a network of behaviors that decides what behavior is appropriate given the current emotional state. The Cathexis model incorporates aspects of other models to create a more encompassing and dynamic model for creating synthetic characters. Velásquez also recognizes the lack of memory and learning, and of other influences of emotions on behavior.


Artificial Emotion: Simulating Mood and Personality

Wilson, Ian. "Artificial Emotion: Simulating Mood and Personality." Gamasutra 7 May 1999. 15 Sept. 2006

Emotion in characters is key to creating a believable environment. Wilson describes personality in three layers of emotion. The top layer consists of momentary emotions, behaviors that are briefly displayed in response to events. The middle layer consists of moods, which have a cumulative effect and are more prolonged. The final layer is personality, which is more stable and is expressed when no momentary emotion or mood supersedes it. Other important points Wilson brings up are how emotions serve social functions, making the characters we encounter engaging. He also emphasizes actions and gestures, meaning hand, body, and facial movements. These movements add a lot to characters' emotions and personalities, making us want to find out more about them and why they move the way they do.
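
That layering lends itself to a simple arbitration rule, sketched below; the decay constants, the mood accumulation, and the idea of expressing whichever layer is currently strongest are assumptions of mine, intended only to show how a momentary emotion can temporarily supersede mood and personality.

# Illustrative three-layer emotion stack: momentary emotion > mood > personality.

class EmotionLayers:
    def __init__(self, personality="calm"):
        self.personality = personality   # stable baseline
        self.mood = 0.0                  # slow, cumulative (-1 gloomy .. +1 cheerful)
        self.momentary = None            # (label, strength), decays quickly

    def on_event(self, label, strength):
        self.momentary = (label, strength)
        self.mood = max(-1.0, min(1.0, self.mood + 0.1 * strength))  # events nudge mood

    def tick(self):
        """Advance one time step: momentary emotions fade fast, mood fades slowly."""
        if self.momentary:
            label, strength = self.momentary
            strength *= 0.5
            self.momentary = (label, strength) if strength > 0.1 else None
        self.mood *= 0.95

    def expressed(self):
        """Whichever layer currently dominates is what the character displays."""
        if self.momentary:
            return self.momentary[0]
        if abs(self.mood) > 0.3:
            return "cheerful" if self.mood > 0 else "gloomy"
        return self.personality

c = EmotionLayers()
c.on_event("startled", strength=1.0)
print(c.expressed())        # "startled": the momentary emotion is on top
for _ in range(5):
    c.tick()
print(c.expressed())        # falls back to mood, or the calm personality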


Emotionally Expressive Agents

Seif El-Nasr, Mary, et al. "Emotionally Expressive Agents." Proc. of Computer Animation, 1999, Geneva. pp. 48-57.

The Fuzzy Logic Adaptive Model of Emotions, or FLAME, produces smooth transitions between emotions by relating events, goals, and emotions, while also incorporating learning algorithms. Seif El-Nasr et al. define three components of the FLAME model: emotion, decision-making, and learning. The emotion component applies goals and expectations to an event; using a model similar to the OCC model, the event is appraised and a behavior is selected. The behavior is passed to the decision-making component, where a decay function is applied before it is acted out. So far this is much like Velásquez's model, but here it begins to deviate and expand. Seif El-Nasr et al. implement a learning algorithm that logs observed sequences of events, or patterns, of length x. Each time a sequence is observed its count is incremented, so the probability of a new event occurring given the previous events can be calculated. They also implement reinforcement learning, so that after a series of experiences the agent can associate an event with a goal and perform the rewarding action.
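The pattern-counting part of the learning component boils down to counting fixed-length event sequences and dividing counts. The sketch below shows that idea under my own assumptions; the class name, sequence length, and the pet-training events are illustrative and not taken from the paper.

```python
# Hypothetical sketch of FLAME-style sequence counting: count observed event
# sequences of a fixed length and use the counts to estimate how likely a
# next event is given the recent history. Names and data are my own.

from collections import defaultdict, deque

class SequenceLearner:
    def __init__(self, length=3):
        self.length = length
        self.history = deque(maxlen=length)     # sliding window of recent events
        self.counts = defaultdict(int)          # counts of full length-n sequences
        self.prefix_counts = defaultdict(int)   # counts of the first n-1 events

    def observe(self, event):
        self.history.append(event)
        if len(self.history) == self.length:
            seq = tuple(self.history)
            self.counts[seq] += 1
            self.prefix_counts[seq[:-1]] += 1

    def probability(self, prefix, next_event):
        """Estimate P(next_event | prefix) from the accumulated counts."""
        total = self.prefix_counts.get(tuple(prefix), 0)
        if total == 0:
            return 0.0
        return self.counts.get(tuple(prefix) + (next_event,), 0) / total

learner = SequenceLearner(length=3)
for e in ["owner_appears", "owner_smiles", "treat",
          "owner_appears", "owner_smiles", "treat",
          "owner_appears", "owner_frowns", "scold"]:
    learner.observe(e)
print(learner.probability(["owner_appears", "owner_smiles"], "treat"))  # 1.0
```

Pairing these learned expectations with the appraisal step is what lets the agent's emotional reaction to an event depend on what it has come to expect, rather than on the event alone.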

Monday, August 21, 2006

Grad School Rocks My Socks!

I have officially joined the ranks of the lazy grad student and it totally rocks. Tuition is cheaper... you get paid well on top of that... only have class 3 days a week which results essentially in 4 day weekends (sorta)... and you get to take cool classes and learn neat stuff. All in all it's a pretty good gig.

The internship this summer was really cool too. I got to program all sorts of awesome tools and worked with some really cool people. Free pop is a good incentive too :) It was a blast.

That's all I've got. I really haven't been dead the last few months, just mostly comatose... sorry. That happens every now and then. Especially when you're fighting for funding so you can be a lazy grad student. But excuses are like ass-holes... everybody's got one. :/

Anyway, 's

Update (9/21/06): So, I lied. I don't have class 3 days a week. I have class one day a week. One of my classes doesn't meet at all, and the other only meets on Wednesdays for three hours. It leaves my schedule pretty open, but it lacks a bit of structure at times.

Thursday, July 27, 2006

The Gospel of Tux

The Gospel of Tux has been unearthed!!

Every generation has a mythology. Every millenium has a doomsday cult. Every legend gets the distortion knob wound up until the speaker melts. Archeologists at the University of Helsinki today uncovered what could be the earliest known writings from the Cult of Tux, a fanatical religious sect that flourished during the early Silicon Age, around the dawn of the third millenium AD.
The originating link can be found here, but the text is too good not to post here too.

The Gospel of Tux (v1.0)

In the beginning Turing created the Machine.

And the Machine was crufty and bogacious, existing in theory only. And von Neumann looked upon the Machine, and saw that it was crufty. He divided the Machine into two Abstractions, the Data and the Code, and yet the two were one Architecture. This is a great Mystery, and the beginning of wisdom.

And von Neumann spoke unto the Architecture, and blessed it, saying, "Go forth and replicate, freely exchanging data and code, and bring forth all manner of devices unto the earth." And it was so, and it was cool. The Architecture prospered and was implemented in hardware and software. And it brought forth many Systems unto the earth.

The first Systems were mighty giants; many great works of renown did they accomplish. Among them were Colossus, the codebreaker; ENIAC, the targeter; EDSAC and MULTIVAC and all manner of froody creatures ending in AC, the experimenters; and SAGE, the defender of the sky and father of all networks. These were the mighty giants of old, the first children of Turing, and their works are written in the Books of the Ancients. This was the First Age, the age of Lore.

Now the sons of Marketing looked upon the children of Turing, and saw that they were swift of mind and terse of name and had many great and baleful attributes. And they said unto themselves, "Let us go now and make us Corporations, to bind the Systems to our own use that they may bring us great fortune." With sweet words did they lure their customers, and with many chains did they bind the Systems, to fashion them after their own image. And the sons of Marketing fashioned themselves Suits to wear, the better to lure their customers, and wrote grave and perilous Licenses, the better to bind the Systems. And the sons of Marketing thus became known as Suits, despising and being despised by the true Engineers, the children of von Neumann.

And the Systems and their Corporations replicated and grew numerous upon the earth. In those days there were IBM and Digital, Burroughs and Honeywell, Unisys and Rand, and many others. And they each kept to their own System, hardware and software, and did not interchange, for their Licences forbade it. This was the Second Age, the age of Mainframes.

Now it came to pass that the spirits of Turing and von Neumann looked upon the earth and were displeased. The Systems and their Corporations had grown large and bulky, and Suits ruled over true Engineers. And the Customers groaned and cried loudly unto heaven, saying, "Oh that there would be created a System mighty in power, yet small in size, able to reach into the very home!" And the Engineers groaned and cried likewise, saying, "Oh, that a deliverer would arise to grant us freedom from these oppressing Suits and their grave and perilous Licences, and send us a System of our own, that we may hack therein!" And the spirits of Turing and von Neumann heard the cries and were moved, and said unto each other, "Let us go down and fabricate a Breakthrough, that these cries may be stilled."

And that day the spirits of Turing and von Neumann spake unto Moore of Intel, granting him insight and wisdom to understand the future. And Moore was with chip, and he brought forth the chip and named it 4004. And Moore did bless the Chip, saying, "Thou art a Breakthrough; with my own Corporation have I fabricated thee. Though thou art yet as small as a dust mote, yet shall thou grow and replicate unto the size of a mountain, and conquer all before thee. This blessing I give unto thee: every eighteen months shall thou double in capacity, until the end of the age." This is Moore's Law, which endures unto this day.

And the birth of 4004 was the beginning of the Third Age, the age of Microchips. And as the Mainframes and their Systems and Corporations had flourished, so did the Microchips and their Systems and Corporations. And their lineage was on this wise:

Moore begat Intel. Intel begat Mostech, Zilog and Atari. Mostech begat 6502, and Zilog begat Z80. Intel also begat 8800, who begat Altair; and 8086, mother of all PCs. 6502 begat Commodore, who begat PET and 64; and Apple, who begat 2. (Apple is the great Mystery, the Fruit that was devoured, yet bloomed again.) Atari begat 800 and 1200, masters of the game, who were destroyed by Sega and Nintendo. Xerox begat PARC. Commodore and PARC begat Amiga, creator of fine arts; Apple and PARC begat Lisa, who begat Macintosh, who begat iMac. Atari and PARC begat ST, the music maker, who died and was no more. Z80 begat Sinclair the dwarf, TRS-80 and CP/M, who begat many machines, but soon passed from this world. Altair, Apple and Commodore together begat Microsoft, the Great Darkness which is called Abomination, Destroyer of the Earth, the Gates of Hell.

Now it came to pass in the Age of Microchips that IBM, the greatest of the Mainframe Corporations, looked upon the young Microchip Systems and was greatly vexed. And in their vexation and wrath they smote the earth and created the IBM PC. The PC was without sound and colour, crufty and bogacious in great measure, and its likeness was a tramp, yet the Customers were greatly moved and did purchase the PC in great numbers. And IBM sought about for an Operating System Provider, for in their haste they had not created one, nor had they forged a suitably grave and perilous License, saying, "First we will build the market, then we will create a new System, one in our own image, and bound by our Licence." But they reasoned thus out of pride and not wisdom, not forseeing the wrath which was to come.

And IBM came unto Microsoft, who licensed unto them QDOS, the child of CP/M and 8086. (8086 was the daughter of Intel, the child of Moore). And QDOS grew, and was named MS-DOS. And MS-DOS and the PC together waxed mighty, and conquered all markets, replicating and taking possession thereof, in accordance with Moore's Law. And Intel grew terrible and devoured all her children, such that no chip could stand before her. And Microsoft grew proud and devoured IBM, and this was a great marvel in the land. All these things are written in the Books of the Deeds of Microsoft.

In the fullness of time MS-DOS begat Windows. And this is the lineage of Windows: CP/M begat QDOS. QDOS begat DOS 1.0. DOS 1.0 begat DOS 2.0 by way of Unix. DOS 2.0 begat Windows 3.11 by way of PARC and Macintosh. IBM and Microsoft begat OS/2, who begat Windows NT and Warp, the lost OS of lore. Windows 3.11 begat Windows 95 after triumphing over Macintosh in a mighty Battle of Licences. Windows NT begat NT 4.0 by way of Windows 95. NT 4.0 begat NT 5.0, the OS also called Windows 2000, The Millenium Bug, Doomsday, Armageddon, The End Of All Things.

Now it came to pass that Microsoft had waxed great and mighty among the Microchip Corporations; mighter than any of the Mainframe Corporations before it had it waxed. And Gates heart was hardened, and he swore unto his Customers and their Engineers the words of this curse:

"Children of von Neumann, hear me. IBM and the Mainframe Corporations bound thy forefathers with grave and perilous Licences, such that ye cried unto the spirits of Turing and von Neumann for deliverance. Now I say unto ye: I am greater than any Corporation before me. Will I loosen your Licences? Nay, I will bind thee with Licences twice as grave and ten times more perilous than my forefathers. I will engrave my Licence on thy heart and write my Serial Number upon thy frontal lobes. I will bind thee to the Windows Platform with cunning artifices and with devious schemes. I will bind thee to the Intel Chipset with crufty code and with gnarly APIs. I will capture and enslave thee as no generation has been enslaved before. And wherefore will ye cry then unto the spirits of Turing, and von Neumann, and Moore? They cannot hear ye. I am become a greater Power than they. Ye shall cry only unto me, and shall live by my mercy and my wrath. I am the Gates of Hell; I hold the portal to MSNBC and the keys to the Blue Screen of Death. Be ye afraid; be ye greatly afraid; serve only me, and live."

And the people were cowed in terror and gave homage to Microsoft, and endured the many grave and perilous trials which the Windows platform and its greatly bogacious Licence forced upon them. And once again did they cry to Turing and von Neumann and Moore for a deliverer, but none was found equal to the task until the birth of Linux.

These are the generations of Linux:

SAGE begat ARPA, which begat TCP/IP, and Aloha, which begat Ethernet. Bell begat Multics, which begat C, which begat Unix. Unix and TCP/IP begat Internet, which begat the World Wide Web. Unix begat RMS, father of the great GNU, which begat the Libraries and Emacs, chief of the Utilities. In the days of the Web, Internet and Ethernet begat the Intranet LAN, which rose to renown among all Corporations and prepared the way for the Penguin. And Linus and the Web begat the Kernel through Unix. The Kernel, the Libraries and the Utilities together are the Distribution, the one Penguin in many forms, forever and ever praised.

Now in those days there was in the land of Helsinki a young scholar named Linus the Torvald. Linus was a devout man, a disciple of RMS and mighty in the spirit of Turing, von Neumann and Moore. One day as he was meditating on the Architecture, Linus fell into a trance and was granted a vision. And in the vision he saw a great Penguin, serene and well-favoured, sitting upon an ice floe eating fish. And at the sight of the Penguin Linus was deeply afraid, and he cried unto the spirits of Turing, von Neumann and Moore for an interpretation of the dream.

And in the dream the spirits of Turing, von Neumann and Moore answered and spoke unto him, saying, "Fear not, Linus, most beloved hacker. You are exceedingly cool and froody. The great Penguin which you see is an Operating System which you shall create and deploy unto the earth. The ice-floe is the earth and all the systems thereof, upon which the Penguin shall rest and rejoice at the completion of its task. And the fish on which the Penguin feeds are the crufty Licensed codebases which swim beneath all the earth's systems. The Penguin shall hunt and devour all that is crufty, gnarly and bogacious; all code which wriggles like spaghetti, or is infested with blighting creatures, or is bound by grave and perilous Licences shall it capture. And in capturing shall it replicate, and in replicating shall it document, and in documentation shall it bring freedom, serenity and most cool froodiness to the earth and all who code therein."

Linus rose from meditation and created a tiny Operating System Kernel as the dream had foreshewn him; in the manner of RMS, he released the Kernel unto the World Wide Web for all to take and behold. And in the fulness of Internet Time the Kernel grew and replicated, becoming most cool and exceedingly froody, until at last it was recognised as indeed a great and mighty Penguin, whose name was Tux. And the followers of Linus took refuge in the Kernel, the Libraries and the Utilities; they installed Distribution after Distribution, and made sacrifice unto the GNU and the Penguin, and gave thanks to the spirits of Turing, von Neumann and Moore, for their deliverance from the hand of Microsoft. And this was the beginning of the Fourth Age, the age of Open Source.

Now there is much more to be said about the exceeding strange and wonderful events of those days; how some Suits of Microsoft plotted war upon the Penguin, but were discovered on a Halloween Eve; how Gates fell among lawyers and was betrayed and crucified by his former friends, the apostles of Media; how the mercenary Knights of the Red Hat brought the gospel of the Penguin into the halls of the Corporations; and even of the dispute between the brethren of Gnome and KDE over a trollish Licence. But all these things are recorded elsewhere, in the Books of the Deeds of the Penguin and the Chronicles of the Fourth Age, and I suppose if they were all narrated they would fill a stack of DVDs as deep and perilous as a Usenet Newsgroup.

Now may you code in the power of the Source; may the Kernel, the Libraries and the Utilities be with you, throughout all Distributions, until the end of the Epoch. Amen.

Posted on Sat 06 Feb 15:50:24 1999 GMT
Written by Lennier culln@xtra.co.nz

You have to love that!

Wednesday, May 24, 2006

Whoo!

This is mildly entertaining. Well I at least got a good laugh out of it. But stuff like this usually cracks me up just because it's so dumb. Check it out!

I am officially a graduate of Iowa State University. Life is good! I have a summer internship in Cedar Rapids and I'm absolutely loving it. I will hopefully be attending grad school at ISU in the fall. *crosses fingers* Other than that not many major developments. Give me a shout :)

Friday, March 17, 2006

There's no such thing as bad press?

Lately, it's not been good to be a Cyclone. Just as the embarrassment of the Larry Eustachy debacle three years ago was finally starting to fade, the Iowa State men's basketball team makes national headlines, and yet again for nothing to be proud of. It has been a turbulent couple of weeks for the Cyclone Nation. On March 8th, sophomore guard Tasheed Carr announced he intended to transfer to another school. The very next day we blew our only shot at achieving post-season play by losing to Oklahoma State. We didn't even make it into the NIT... the NIT! Almost every team with a winning record that isn't invited to the big dance is invited to the NIT, but not ISU. And just when you thought it couldn't get any worse to be a Cyclone, March 15th happened. First, star junior guards Curtis Stinson and Will Blalock announced that they would be entering the NBA draft this year. And before we could even swallow, CBS SportsLine's Gregg Doyel cried scandal. I don't know who this "journalist" thinks he is, but that article isn't journalism. He obviously doesn't have all the facts, like that Coach Morgan knew Anthony Davis while he was still at Long Beach St. and had tried to recruit Davis then, and that Davis followed him to ISU. The worst part is he just won't stop. But I agree that there's something fishy going on with this D1 Scheduling business. There's a lot of money going in, but not a lot going out. And the whole thing with Mike Miller being the head coach of LACC and a lot of these teams with contracts with D1 also having former LACC players on their rosters also doesn't smell good. But to make an already bad situation even worse, Coach Morgan was called into a meeting with athletic director Jamie Pollard and president Gregory Geoffroy late last night and was informed that he and his coaching staff were all fired.

The timing in all of this is very convenient. Stinson and Blalock's announcement coming just before the D1 Scheduling story broke makes it look like someone tipped them off and told them to get out while the getting was still good, especially after an entire season of saying they would stay for their senior year. Morgan et al. were fired a day after the scandal broke, even though the stated reasons have nothing to do with it. Personally, I'm not sad to see him go. He's not a great coach, and he didn't hone the talent on the team this year; it was evident throughout the season, with too many last-minute losses that should have been wins. But again, the timing is all off, and it makes Iowa State look really bad, as if maybe Mr. Doyel is right in his implication that ISU is at the center of the whole D1 Scheduling scam. I have a hard time believing it, but the way things look right now, it doesn't look good, and we're looking more than a little guilty. Only time and an NCAA investigation will tell.

Lesson for the week: Beware the Ides of March

Tuesday, February 21, 2006

Contains No Juice

Were you aware that Wild Cherry Pepsi contains no juice? It certainly came as a shock to me. </sarcasm> I mean, really, why the need to put such a notice on there? Do people really believe that there's juice in cherry pop? Has our society stooped to a level where it's actually necessary to put such disclaimers on things? It totally baffles me.

Wednesday, January 18, 2006

Last semester as an undergrad!

Wow... last semester as an undergrad, and possibly my last semester at good old Iowa State. I can hardly believe it's already been 4 years; time has flown by. Talk about a cushy senior schedule this semester though. My earliest class (not counting my one recitation) isn't until 12:30pm. One class on Monday/Wednesday, 3 classes on Tuesday/Thursday, and I don't have any class or work on Friday. It's pretty sweet. The classes I'm taking aren't all that bad either: 19th Century Art History, Principles of Programming Languages, Philosophy of Technology, and an independent study for Game Design and Development. All in all it should be a pretty fun semester.

I've got all my applications in to graduate schools. I ended up applying to Iowa State University (ISU), University of Pennsylvania (UPenn), Carnegie Mellon University (CMU), and Georgia Tech (GTech). Now I get to sit and play the waiting game... not my most favorite of games. I'm still really pulling for UPenn, but I'm not betting on it. My only sure bet at this point is ISU, and some days I'm not even too sure about that.

That's about it right now. Graduating in May, hopefully I'll get in to grad school somewhere, if not I suppose I could get a job ;) Life is good and I couldn't be happier.

So sorry... NOT!

Seems as though my post has ruffled some feathers. Haha, suckers! Wouldn't you all just love to know what that was all about. I'm sure some of you do, but others of you don't. Alas, poor Yorick! As for me, I'll continue to sit here on cloud nine. :D

Tuesday, January 03, 2006

Le Sigh



That's all I've got.